The Demented Ravings of Frank W. Zammetti
Visit www.zammetti.com for all things me

27 Aug 2016

On the feasibility of traveling to Proxima Centauri B

So, as everyone has heard by now, scientists this week officially announced the discovery of a (maybe) Earth-like planet orbiting our nearest neighboring star, Proxima, which has been dubbed Proxima Centauri B (hereafter referred to as PCB). Aside from us Babylon 5 fans being excited as hell (Proxima colony?! The Centauri?!), everyone else was too, because this planet looks to be in the Goldilocks zone of its host star, appears to be rocky and isn't unfathomably bigger than Earth. In other words, so far (and it's early, so things could change) it appears to have a lot of the characteristics we expect of a planet that could support life.

But, the bigger reason for the excitement is simply because of its relative closeness to us. “WE COULD ACTUALLY GO THERE!” exclaimed the Internet in unison.

But could we? Could we really get there?

If you wanna TL;DR this thing, here it is: no, not with today's technology, at least not in a time frame that would do us much good. But, that's a pretty nebulous statement, isn't it? Exactly how long WOULD it take? Exactly how long COULD it take if we really pushed our technology in the near future? What IS a "useful" time frame anyway?

Well, I’m no scientist of course, but I do play one on the Internet so, thanks to the magic of Wolfram Alpha, lemme see if I can throw down some numbers and come up with something a little more concrete.

First, some basic facts that we’ll need, and note that I’m going to round everything here just to make the numbers a little cleaner, but they’re all very close to actual, certainly close enough for our purposes here:

• PCB lies 4.3 light-years from us, which is 25 trillion miles
• The speed of light (C) is of course 186,000 miles per second, which is about 670 million MPH (apologies to all my non-American friends, but I gotta calculate with what I know, and I don’t know kilometers – I was educated in the American educational system after all!)
• 1% of C is therefore 1,860 miles per second, or 6.7 million MPH (why this matters is something I’ll come back to shortly)

With those basic facts we can now do some pretty simple math (which I'll probably screw up somehow nonetheless, but that's what comments are for!). First, let's use that 1% figure I mentioned. Traveling at 6.7 million MPH, how long does it take to reach PCB? Well, that's just the distance to PCB, 25 trillion miles, divided by 1% of C (6.7 million MPH), which yields about 3.7 million hours. Ok, so, how many days is that? Just divide by 24 obviously and we get 155,000 days. And how many years is that? Divide by 365 (screw you, leap years!) and we find it's about 425 years.

Ok, that’s a long time for a little trip around the galactic neighborhood. Never underestimate how mind-bogglingly big space is!

Even at 1% C that's not really feasible for us to do, even if technologically we could achieve it. But, that depends on what "feasible" means, doesn't it? Well, beyond the technology, I would suggest that "feasible" refers to being able to get there, one way, in a single human lifetime. That's really what most of us want after all, right? We want someone to go there and be able to report back what they find. We could of course send automated probes, and logically that's what we'll wind up doing at some point (whether to this planet or some other yet to be discovered). But a probe isn't as good as a person to most people. It certainly doesn't capture the imagination quite like humans going would. Our rovers on Mars continue to capture my imagination for sure, but I'm big-time itchin' for someone to set foot on the red planet and use the human capacity for poetry to describe it to me (and I'd go in a heartbeat myself and cut out the middle man entirely!)

That's the baseline assumption here: we want to be able to get there in a single human lifetime. In order to make this easier, let's be a bit cold about it and say that we don't care about coming back, nor about our intrepid explorers being able to survive there for very long, and "lifetime" we'll define as a long human life. So, given that, what kind of speed do we need to achieve? Well, now you know why I did the 1% of light speed calculation: it makes it easy to extrapolate out! We just divide 425 by the percentage of C we're going to travel at to see how many years it would take at each level. Doing so yields the following table (there's a quick script after it if you want to check my math):

• 2% of C = 212 years
• 3% of C = 142 years
• 4% of C = 106 years
• 5% of C = 85 years
• 6% of C = 71 years
• 7% of C = 61 years
• 8% of C = 53 years
• 9% of C = 47 years
• 10% of C = 43 years
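
If you want to double-check that arithmetic (or plug in your own distances and speeds), here's a quick JavaScript sketch using the same rounded constants from above, so expect answers within a year or so of the table:

// Back-of-the-envelope travel times to PCB, using the rounded figures from above.
// Paste into a browser console or run with Node.
var MILES_TO_PCB = 25e12;       // ~4.3 light-years, in miles
var C_MPH = 670e6;              // speed of light, in miles per hour
var HOURS_PER_YEAR = 24 * 365;  // ignoring leap years, as above

function yearsToPCB(percentOfC) {
  var speedMPH = C_MPH * (percentOfC / 100);
  return MILES_TO_PCB / speedMPH / HOURS_PER_YEAR;
}

for (var pct = 1; pct <= 10; pct++) {
  console.log(pct + "% of C: ~" + Math.round(yearsToPCB(pct)) + " years");
}
// Prints ~426 years at 1% (the 425 above comes from rounding at each step) and ~43 at 10%.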

Given that the current average human lifespan across the entire planet is around 71 years, we probably don't want to send anyone that will be older than that upon arrival, to have the best shot of, you know, everyone actually being ALIVE by the end! At 8% C that means we can't send anyone older than 18 (53+18=71). That's probably too young though; we probably want people in their 20's realistically, so as to have enough time for training (not to mention someone theoretically old enough to make the choice to go and understand the ramifications). That means no older than 24 at 9% C, and we can go up to 28 at 10%. So the sweet spot is probably right in there.

So, to put it plainly: we probably need to go 9% or 10% C and send no one older than 28 to have any chance of this working. Let's call it the 10/28 rule.

But, how realistic is 10% of C for us? Well, NASA's Juno probe is recognized as the fastest man-made object, attaining a top speed of around 165,000 MPH. That works out to about 0.025% of C. So, it's roughly 400 times too slow for our 10% target (and still about 40 times too slow even for a measly 1%).

D’oh 🙁

And that doesn't even account for acceleration and slowing down, things which are inherently more important for us squishy meatbags than for our hardened technological creations.

This all tells us that we're probably not doing this with current technology, not if we want to send people. In fact, it doesn't seem even remotely possible if we just wanted to send a probe. Even though a probe allows us to accelerate and decelerate much faster, making the overall average speed of the trip better, the bottom line is we still can't touch these speeds whether there are people on board or not. I don't honestly know how much faster than Juno we could pull off today, but I'd be willing to bet even 1% C would be a (likely impossibly) tall order. Therefore, we have to look to theoretical propulsion if we want to make this trip in any fashion. That's always tricky of course because being theoretical, quite obviously, means these systems aren't real yet, and might never be. We don't know FOR SURE they'll work or what kinds of problems we might encounter even if they do.

However, the thing that's encouraging is that there are quite a few theoretical propulsion systems on the drawing boards that aren't what would be considered by most to be fantastical. We're not talking warp drives or wormholes or anything like that. No, we're talking things like nuclear propulsion, plasma drives, solar sails, fusion: things that we either know work and just need to be scaled up (which makes it a pure engineering challenge) or things that we have every reason to believe are feasible given some time to develop (fusion). If we said, for example, that we have 100 years starting from today to create the technology needed to get to PCB, that wouldn't seem impossibly far-fetched. Highly optimistic, yes, and we almost certainly wouldn't be talking about sending people, but it wouldn't sound utterly ridiculous. And you'll note that 100 years isn't too much longer than a typical human lifetime - it's right around 1.5 human lifetimes in fact. That, in theory, puts this within the reach of the unborn children of someone alive today. That's not so bad!

So, in summary: the bad news is that we're not going to PCB, whether with people or just technology, with what we have now. We're simply too slow at the moment. The good news however is that we do have some ideas currently being developed that might have a shot at doing it in a generation or two AND getting there in a semi-reasonable time frame AND which aren't pure science fiction. There are a ton of ifs, ands, and buts that you have to qualify that optimism with, but it's probably not an outright impossibility in the foreseeable future. That's pretty exciting!

Speaking as someone who would sacrifice arbitrary body parts to be able to see this planet in his lifetime whether with his own eyes or through technology, I’m depressed. My children have an outside shot... maybe... and my grandchildren a better shot still, but that doesn’t help me 🙁

I guess I'll just go buy a PS4 and play No Man's Sky. Seems that's as close as any of us alive today is ever going to get.

29 Jul 2016

Building web UIs: an unconventional viewpoint (now with surprising supporting evidence?)

For a long time now I've been growing more and more convinced that the approach to developing complex web-based SPA UIs is going in the wrong direction. There is now an incredibly heavy dependency on CSS and markup, and it's made modern web development a lot more complicated than it's ever been before (and that's not even talking about all the tooling typically involved). I personally find understanding a UI holistically to be more difficult when it's mixing complex CSS and markup (and then usually some JS too). I know some people love that and think it's the Best Thing Ever(tm), but I respectfully disagree. I think the development of responsive design has been a big driver of this, though not exclusively. You can of course do responsive without CSS, but I suspect it's largely been a case of "this is the first way someone figured out how to do it and now everyone thinks it's the right answer" without really considering alternatives very much. Just my opinion of course.

What's interesting to me is that I've spent a few years developing highly complex UIs, SPAs of course, that have taken a different approach: a code-centric approach. It's more like... dare I say it... Java Swing development than web development in a sense. Rather than creating a basic layout in HTML and then styling it with CSS and applying JS for interactivity, I've instead done most of my work by writing JS that generates the UI for me. CSS becomes more like its original intent: just very basic look-and-feel styling. Things like animations and such aren't done in CSS (at least not at the level of me developing the UI). A lot of this work has been done using ExtJS, but not exclusively, so obviously there's a lot of markup and CSS and JS going on at a level below my work, but that's irrelevant to my main point, which is that it's building a UI through code rather than markup and styling directly.

Of course, I've done my fair share of more "typical" UI development at the same time, so I've had a great chance to compare and contrast the approaches. I've always felt that the code-centric approach just made more sense. It always seems to me to be easier to understand what's going on (assuming the library used is expressive enough, as I find ExtJS to be). Things like responsive design are pretty easy to achieve with this approach too, because now you've got JS making the breakpoint decisions and building the elements and all of that, and that just fits my brain better than the HTML/CSS approach does.
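
To make the "JS making the breakpoint decisions" bit concrete, here's a minimal, framework-free sketch of the idea; the 700px breakpoint and the two layout builders are invented purely for illustration:

// Minimal sketch of "JS owns the responsive breakpoint decision" -- the breakpoint value and
// the two layout functions below are made-up placeholders, not from any real project.
var compactQuery = window.matchMedia("(max-width: 700px)");

function buildCompactLayout() {
  console.log("building the stacked, single-column layout");
}

function buildFullLayout() {
  console.log("building the multi-column layout with side nav");
}

function applyLayout(isCompact) {
  if (isCompact) {
    buildCompactLayout();
  } else {
    buildFullLayout();
  }
}

applyLayout(compactQuery.matches);        // decide once at startup...
compactQuery.addListener(function (e) {   // ...and again whenever the viewport crosses the breakpoint
  applyLayout(e.matches);
});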

Of course, a code-centric approach isn't without its problems. For one, it obviously requires that JS be available. However, I frankly no longer see this as a disadvantage. I think JS has become one of the three key pillars of the web (along with HTML and CSS of course) and it's no less natural to expect JS to be available than to expect HTML and CSS to be available. If a user chooses to turn JS off (as I sometimes do) then woe be unto them if things don't work. I'm no longer interested in graceful degradation, not as far as JS goes anyway (and with a code-centric approach, any other GD concern largely goes away as well).

Another pitfall is that the code-centric approach can also be more verbose and overall requires more "stuff" to be written. This, again, is something I don't necessarily see as a problem though. There's a drive towards terseness in programming generally today that I find to be counterproductive. Verbosity isn't the measure that matters really, it's expressiveness. Sometimes short can be very expressive for sure, but oftentimes I find short is the result of laziness more than anything else (programmers REALLY seem to hate typing these days!) and the result is code that's harder to read and comprehend in my opinion. There are of course limits: I'm not a fan of Objective-C and its long names, but one could hardly accuse it of not being expressive.

All of this is highly debatable and I'd be the first to admit it, but it's my blog so let's go with it 🙂 Anyway, I'm not here trying to convince you that I know the One True Right Way(tm) to develop modern webapps or that you're doing it wrong. Philosophical musings are fun, but it's not my purpose here. No, this article is about one specific problem of the code-centric approach that I didn't mention yet.

Or, at least, something I EXPECTED would be a problem: performance.

Let's jump in and look at a bit of code, shall we?

<html>
  <head>
  <meta content="text/html;charset=utf-8" http-equiv="Content-Type"/>
  <meta content="utf-8" http-equiv="encoding"/>
  <script>

    var startTime = new Date().getTime();

    function calcTime() {
      var endTime = new Date().getTime();
      console.log("browser", (endTime - startTime));
    }

  </script>
  </head>
  <body onLoad="calcTime();">
  <form>
    Username: <input type="text"/>
    <br />
    Password: <input type="text"/>
    <br />
    <input type="button" value="Logon"/>
  </form></body>
</html>

It's not getting much simpler than that! One would expect that it runs quite fast, and indeed it does: I sat there reloading it off the file system a bunch of times in Firefox (47.0.1) and on my machine (Surface Pro 3, i5, 256GB) it averages around 100ms to display the elapsed time in the console. It fluctuates a fair bit: I saw as low as about 70ms and as high as about 200ms. I'm frankly not certain what the discrepancies are all about, but ultimately it doesn't much matter because they were outliers anyway and we're looking at rough averages here.
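
(As an aside: the Date-based stopwatch is about as quick and dirty as timing gets. If you want to reproduce this with slightly less dirty numbers, the browser's Performance API will do the measuring for you; something along these lines, which, to be clear, is not what produced the figures in this post:)

// A slightly less dirty stopwatch: ask the Performance API how long it took to get from the
// start of navigation to the load event, rather than diffing Date objects ourselves.
window.addEventListener("load", function () {
  // performance.now() is high-resolution milliseconds elapsed since navigation started.
  console.log("load fired ~" + performance.now().toFixed(1) + "ms after navigation start");
});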

Now, let's look at a code-centric approach version of that same example:

<html>
  <head>
  <meta content="text/html;charset=utf-8" http-equiv="Content-Type"/>
  <meta content="utf-8" http-equiv="encoding"/>
  <script>

    var startTime = new Date().getTime();

    function buildUI() {

      var f = document.createElement("form");
      f.appendChild(document.createTextNode("Username: "));
      var i1 = document.createElement("input");
      i1.setAttribute("type", "text");
      f.appendChild(i1);
      f.appendChild(document.createElement("br"));
      f.appendChild(document.createTextNode("Password: "));
      var i2 = document.createElement("input");
      i2.setAttribute("type", "text");
      f.appendChild(i2);
      f.appendChild(document.createElement("br"));
      var b = document.createElement("input");
      b.setAttribute("type", "button");
      b.setAttribute("value", "Logon");
      f.appendChild(b);
      document.body.appendChild(f);

    }

    function calcTime() {
      var endTime = new Date().getTime();
      console.log("js", (endTime - startTime));
    }

  </script>
  </head>
  <body onLoad="buildUI();calcTime();">
  </body>
</html>

Note the paradigm employed here: there is no content on the page initially; it's all built up in code onLoad (no sense doing it sooner since there's no DOM to build on after all!). You can think of the loaded page as a blank canvas that gets drawn on by the code, rather than as the completed painting the browser automatically displays as a result of parsing the document. Obviously, the onLoad event is going to fire here sooner than in the plain HTML version because there's nothing to render.

But, what's your expectation of how this version compares to the plain HTML version? That code in buildUI() has to do its work, building up the DOM, and then the browser has to render it when it's appended, right? Does your gut say that this will take longer to complete because surely the browser can render static HTML in the body loaded as part of the document faster than it can process JS, build the DOM and render it under the control of JS?

That certainly was my expectation.

BUT THAT DOES NOT APPEAR TO BE THE CASE!

Doing the same sort of "just sit there hitting reload a bunch of times" dirty timing, I got an average of around 60ms (and again, there were a few outliers in either direction, around the same number of them as the plain HTML version, so my guess is those are simple IO delays and in any case we can ignore them in favor of the rough average again).

I mean, it's not just me, right? That's kinda crazy, no?! And, that's not even the most efficient way one could write that code, so there's probably a chance to increase that performance. But what's the point? It's already faster than the plain HTML version!
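
(For the curious, one variation I have in mind is simply handing the parser the whole form as a single markup string instead of a pile of createElement calls, something like the sketch below. Whether it actually beats the version above is exactly the kind of thing I'd have to measure before claiming anything.)

// One possible "more efficient" variation: let the browser's HTML parser build the whole form
// from one string in a single shot. Untimed -- a direction to explore, not a result.
// Wire it up the same way as before: <body onLoad="buildUIWithInnerHTML();calcTime();">
function buildUIWithInnerHTML() {
  document.body.innerHTML =
    '<form>' +
      'Username: <input type="text"/><br/>' +
      'Password: <input type="text"/><br/>' +
      '<input type="button" value="Logon"/>' +
    '</form>';
}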

Now, to be fair, I can't necessarily extrapolate this out to a much more complex UI and assume the same would hold true. That's frankly a test I still need to do, but it's also a much harder test to set up properly. But, just seeing this simple test is frankly eye-opening to me. I always assumed that my favored code-centric approach was sacrificing speed in the name of other benefits, but that may not actually be true.

So, is it just me? Is this common knowledge and it's something I just never realized? Or, is it pretty surprising to you like it is to me? Or, did I just screw up the analysis somehow? That's always a possibility of course 🙂 But, I think I got it right, and I'm surprised (err, by the results, not that I think I got it right! LOL)

13 Oct 2015

Did I just figure out how aliens will communicate with us?

So, you've heard about the weird "stuff" that's been found around star KIC 8462852, right? If not, here's a link:

http://www.theatlantic.com/science/archive/2015/10/the-most-interesting-star-in-our-galaxy/410023/

In short, there's something orbiting it... a lot of somethings, more accurately... that we've never seen before. Now, before you post your Giorgio Tsoukalos "I'm not saying it's aliens - BUT IT'S ALIENS!" memes all over the place, there are a number of rather mundane explanations... though here, "mundane" only has meaning relative to the notion that it could be aliens. Planet collisions and things of that nature are a distinct possibility (and probably most likely).

But, one scientist, Jason Wright, an astronomer from Penn State University, made some interesting comments that the light pattern we see resulting from these whatever-they-ares kinda looks like the type of thing we might expect an alien civilization to produce as a result of some mega-big constructions (think Dyson Spheres, Ringworlds and that sort of "OMG THAT CAN'T REALLY EXIST!" big-ass thing).

Now, if you ask me right now to place a bet I'll give you whatever odds you want that this IS NOT aliens. It's probably still going to be something very cool because it's not like anything we've seen before, but almost certainly it's natural.

But, that got me to thinking... or more precisely, the comments of someone on Reddit discussing this did... he asked about radio waves. He thought that particles (dust and such) distributed through interstellar space would reduce radio waves to the point that they were entirely undetectable before long. Seems logical, right? Ok, so, if that's a given, an advanced alien species that could build unbelievably massive things in space wouldn't be using radio to communicate. What WOULD they use? LASERs? Maybe, but they're too precise: they have to be aimed at the receiver pretty accurately, and if you don't know where it is you waste a lot of time and energy beaming out in all directions.

In fact, this is the basic problem with most things that (a) will still be detectable at a great distance and (b) which you intend to not destroy any life on the receiving end (I mean, maybe gamma ray bursts are an alien civilization's calling card... but if so, they're VERY impolite since they can destroy all life on planets if they're too close).

Then, it kind of dawned on me how I'd pull it off: "reverse" Morse code!

Let me explain... imagine you could build massive structures around a star... imagine you could get them to align very precisely and orbit the star very precisely... now imagine that another civilization could detect the drop in light coming from the star, and could even do so over time, yielding a pattern to the drops in light.

You could encode a message doing that in much the same way you type out a message in Morse code, but it would be a "negative image", so to speak: the DROPS in light are where the message is encoded, NOT the light itself (in Morse code terms: the message would be encoded in the silences between dots and dashes, not the dots and dashes themselves).
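
Just to make the "negative image" idea concrete, here's a toy sketch. The letters and the one-slice/three-slice timings are completely made up; the only point is that the information lives in the dips, not in the light:

// Toy "reverse Morse" encoder: a "1" means the starlight is partially blocked for that time
// slice, a "0" means it shines normally. The letters, timings and the whole scheme are pure whimsy.
var MORSE = { S: "...", O: "---" };

function encodeAsDips(text) {
  return text.split("").map(function (letter) {
    // dot -> one blocked slice, dash -> three blocked slices, one clear slice between symbols
    return MORSE[letter].split("").map(function (symbol) {
      return symbol === "." ? "1" : "111";
    }).join("0");
  }).join("000"); // three clear slices between letters
}

console.log(encodeAsDips("SOS"));
// -> "101010001110111011100010101" -- the message is carried entirely by when the light dims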

I mean, think about it: even to an advanced civilization, a star very likely puts out many orders of magnitude more energy than they can themselves produce. Given that, why try to produce energy in ANY form, encode information into it and beam it out into space when you ALREADY HAVE SOMETHING DOING IT, and doing it much better than you can? All you need to do is figure out how to encode information into the star's emissions. That's where Reverse Morse code comes into play.

Now, I don't really know if this is an original idea or not. I do know that I can't recall ever having seen it anywhere before, and I also know that Googling "reverse Morse code" does not yield any result that describes what I am here in any context. I'm actually sitting here wondering if I just came up with something really brilliant or if it's an old idea I've just never heard of. I'm certainly betting on the later, especially given all the phenomenally smart sci-fi authors out there that think about this stuff all the time.

But, on the off chance this is an original idea, I wanted to make sure I got it written down somewhere, because original or not, it really DOES seem like the way advanced aliens would try to communicate with us, especially if we assume that the speed of light is the same impediment for them that it is for us and that no "work-arounds" exist or are exploitable. Certainly no technology we know of or can realistically imagine building would allow for communication across the types of distances we're talking about. Using the power of a star would cut out one of the biggest problems you have with communicating across great distances.

The question I have and can't answer is exactly how precisely we can measure transit events right now. Do we have the technology to discern a pattern like I'm describing, whether we can decode it or not? I don't THINK we do yet, I think we're just at the point of "this much light drops out in total so we know something's there" and that's it. But then again, I'm not sure I understand how we could know there's "stuff" around this particular star then. Don't we need to be resolving at a somewhat finer level than "total light lost" to be able to say that?

I'm no astronomer, astrophysicist or anything like that. I'm just a guy that thinks about science stuff A LOT. So it's far more likely this is just a neat idea for a sci-fi short story 🙂 But, I've asked a few real scientists to have a look and give me their take on the idea... I'll let you know what I hear!

--------------------

EDIT: The following was added on 10/15, after the original post:

So, I tweeted Phil Plait, the famed "Bad Astronomer" himself and got a reply pretty quickly (thank you again Phil VERY much for taking the time to read my blathering!)... I'll quote his reply here:

"Fun idea. But radio waves travel for a LONG way, and are a lot easier. :)"

So there you have it: probably not what I said. Unsurprisingly. But, the fact that he didn't pooh-pooh the idea outright makes me feel pretty good anyway 🙂 I'm certainly not about to argue with a trained, professional astronomer... though I do feel the need to point out that I've read a number of sources that say radio waves don't propagate quite as well or as far as Phil seems to indicate... still, I don't doubt he's right: radio is simply a lot easier than building some unimaginably huge structure around a star, regardless of how advanced a civilization we're talking about. It's probably not a trivial exercise for ANYONE, and so you still have to imagine it's pretty unlikely to be the case. Fair enough.

Interestingly, Phil sent another tweet my way a little after that first one:

"Well now, I spoke too soon. 🙂 http://iopscience.iop.org/article/10.1086/430437/pdf"

That's a paper entitled "TRANSIT LIGHT-CURVE SIGNATURES OF ARTIFICIAL OBJECTS". So, you know, real scientists are out there thinking about the same kind of thing I was, which is cool! It also means my idea isn't original, but I already said it almost certainly wasn't (although that paper doesn't discuss it as a communication mechanism, I'm just going to assume that came up between the authors, or someone else somewhere came up with it too).

The deal with this particular star, though, is still something really interesting to say the least, because the luminosity drop observed is something like 22%. That's A LOT! A Jupiter-sized planet transiting a star generally results in a drop of only around 1%. Obviously, that's a huge difference. That makes this even more interesting if you ask me (and you're here, so I'll just go ahead and assume you did). The most likely explanation still seems to be something like a cometary collision, but even for that it's kind of hard to imagine a debris cloud that large.

One thing that I see some people say when they see that 22% figure is "OMG, an object that occludes 22% of the light from this star! That's amazing!" Well, sure, it WOULD be, granted, but that's also not necessary. It doesn't have to be one single coherent object: a debris cloud that occludes 22% of the light IN AGGREGATE does the trick just as well and is a lot easier to contemplate, hence the cometary collision idea being what I'm seeing most astronomers say is the most likely scenario. Even still, it's hard to imagine a debris cloud that blocks twenty-odd times more of the star's face than Jupiter would... something tells me that's not the whole story.
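
For the back-of-the-envelope crowd: the depth of a transit dip is roughly the fraction of the star's disk that gets covered, which for a single opaque object works out to about (radius of object / radius of star) squared. Plugging in rough numbers (and using the Sun as a stand-in for the actual star, which is a simplification) shows why 22% raises eyebrows:

// Rough transit-depth arithmetic: for one opaque object, dip ~ (R_object / R_star)^2.
// The Sun stands in for the actual star here, which is a simplification.
var R_SUN_KM = 696000;
var R_JUPITER_KM = 69911;

var jupiterDip = Math.pow(R_JUPITER_KM / R_SUN_KM, 2);
console.log("Jupiter-sized dip: " + (jupiterDip * 100).toFixed(1) + "%");  // ~1.0%

// How big would a single opaque object need to be to block 22% of the light?
var radiusNeeded = Math.sqrt(0.22) * R_SUN_KM;
console.log("Radius for a 22% dip: ~" + Math.round(radiusNeeded) + " km (~" +
  (radiusNeeded / R_JUPITER_KM).toFixed(1) + " Jupiter radii)");
// ~326,000 km, nearly half the stellar radius -- which is why one giant coherent object is so
// hard to buy and an aggregate debris cloud is the saner guess.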

So, really, at the end of the day, what we have here is something REALLY weird. Something we haven't seen before. Something that has at least some characteristics of what we think we might see from an artificial structure. But, we've got a couple of far more likely explanations that are all-natural, and all-natural is good in food so it's probably good in astronomy too. However, with all of that said, astronomers are now working pretty quickly to get additional observations off the ground, if you will (I believe I've read a January timeframe is the target). There's a lot of excitement over this, understandably so. It's either artificial and therefore something game-changing, or it's natural and almost as game-changing... or it's got a totally mundane explanation that we've so far missed of course, that's always a possibility. But right now, it really does look like a big deal whether it's aliens or not.

As for it being a communication mechanism like I proposed? Eh, I suppose it still COULD be, but it was always a pretty unlikely idea. But, since we're already talking about an unlikely idea, let me put on my sci-fi writer hat and make it even MORE unlikely, but still not ENTIRELY implausible...

Imagine you are an advanced alien civilization. You decide that building a giant screen in front of a star to send what amounts to a form of Morse code is a good idea because that energy will propagate through space better than any energy source you can produce yourself. Now, we have to assume such a civilization has the same fundamental limitations as us, which means no faster-than-light travel, which means they're just as confined to a single star system (or a small handful) as we are. That also means that two-way communication isn't possible for the same fundamental reasons.

Or is it?

You've heard of quantum entanglement, right? The idea that two photons, separated by an arbitrary distance, can instantaneously reflect changes in quantum state. This is interesting because what appears to happen is essentially an exchange of quantum "information" at superluminal speeds. This is a real phenomenon, not a theory. It's something scientists can experimentally observe in real life. Einstein called it "spooky action at a distance", and that's an apt description!

Now, before we get too excited, scientists have already ruled out the possibility of using this as a form of communication. That would necessitate violating causality, something we don't believe can happen. It DOES have some potential uses in communication anyway: there are some ideas about using it to ensure a message sent isn't tampered with in transit, but that's a very different, and much simpler, thing.

But... since we're playing with ideas here... what if we're wrong? What if it IS possible to use quantum entanglement to communicate? Let's assume that for a moment. What might our advanced civilization do with that knowledge if they had it? Well, I'll tell you what *I* would do in their shoes: you know that starlight I'm already playing with to send Morse code-like messages? Maybe I can quantumly entangle those photons on the way out from the source star. If I could, I might also encode in the message itself information on how to build a transceiver.

If you recognize that idea it's because it essentially is the story from Carl Sagan's Contact. We're not talking about building a wormhole transit machine here of course, because that's much harder (I presume). But a radio that can use quantumly entangled photons? Maybe not so hard (if, once again, physics allows it at all). The medium of the message, the starlight, would essentially include the information necessary to allow two-way communications.

I think the idea has a certain logic to it... I beam my simple reverse-Morse code message out everywhere in all directions using my local star as the power source... the breaks in the starlight provide the message, including the plans for how to build a device to allow two-way communication using the quantumly-entangled starlight that the receiver already has access to. All of a sudden, assuming a civilization is advanced enough to receive the message, decode it and follow the instructions to build the device, and is willing to do all of that, the originator can have two-way real-time communication with them!

It's a nice idea, though it relies on breaking what we think are some fundamental laws of quantum mechanics... which just so happens to be our most successful, well-tested and verified theory to date, and which is responsible for most of our modern world. However, even if that theory isn't as immutable as it appears now, there's still the not at all insignificant problem of time: the original signal that tells us how to communicate with the advanced civilization will still take thousands of years or more to reach us. There's simply no way to know if there's anyone on the other end to talk to anymore. I haven't conjured up a way around that problem yet, even a ridiculous one. That seems like the truly fundamental stumbling block to this whole communication idea.

Still, one can dream - and, until we know for sure what's actually there around KIC 8462852, that's exactly what I'm prepared to keep doing 🙂

4 Sep 2015

The problem with Software as a Service subscriptions

We are, without question, in the Software as a Service (SaaS) era. Perhaps just at the start of it, but we’re in it for sure. There’s a new economy emerging and it’s one that has considerable pluses and minuses for all parties involved. This is the era of software subscriptions and it’s an ever-expanding era.

To be sure, SaaS doesn’t AUTOMATICALLY mean subscriptions. You can certainly offer SaaS without the need to subscribe to it. But that’s not what we’re here to talk about today. We’re discussing the subscriptions aspect of SaaS specifically, so when I say SaaS going forward understand that I’m talking about subscriptions.

I don't know who actually started it, but everyone is doing it nowadays: Adobe, Microsoft, JetBrains, IDM, the list goes on and on. As a customer you are now frequently asked to not actually purchase a piece of software outright but instead to purchase the RIGHT to use it for some period of time. Some will argue that software has ALWAYS been that way: EULAs have for a long time basically been grants to use software, not a transfer of ownership. However, that's largely a matter of semantics. The fact is that when you purchase a boxed piece of software (or download it) and you aren't paying a subscription, then you do, for all practical purposes, "own" that software regardless of what the EULA says.

It's quite easy to understand why subscriptions are a Very Good Thing™ from the vendor's point of view: this is guaranteed income! It's a roughly static revenue stream (well, you HOPE it's NOT static and always grows, but it's static in that once someone signs up you know how much income you're getting from them and for what period of time). It also has the benefit that they only have to support a single version of an application. That reduces their effort and cost, thereby increasing their income even further. They can also shut you off at any time should the need arise for any reason, and it makes software piracy much easier to stop (theoretically impossible, even). In other words: it makes tremendous sense for a vendor in many ways.

But does it make as much sense for consumers?

Well, there are some clear benefits of course. First, you always have the latest and (in theory at least) greatest version of an application. We all love hitting that update button on our phones and seeing new features in apps we use, and the same is generally true for the software we use on our PCs, Macs and other desktops too. It's also less effort: updates are either pushed to us transparently or we're alerted to an update and encouraged to accept it. It's all very convenient and kinda fun to get the New Hotness every so often!

We also know what our expense is up-front. There’s no more deciding whether to purchase a new version, whose price may have increased, because you’ve already essentially paid for it with the subscription and you knew up-front how much the cost was. And, because you’re paying at some regular interval, the cost is spread out and well-defined each time. It’s almost like paying for your software on credit in a sense and that ability to “budget” the cost is attractive to a lot of people, more so than a large lump sum payment is (hey, I’m generally in that boat for sure!)

But, there’s also significant down sides.

Perhaps the biggest is the potential for breaking changes. If the vendor pushes an update that breaks something, especially if it's something you use all the time, that's a bad day for you. And, when those updates aren't by choice and you can't roll back, you're in an even worse situation. This is of course the big complaint against Microsoft's policy with Windows 10, and it's starting to look like they may acquiesce just a little on this point. Also, as is the case with Windows 10 updates, the vendor is under no obligation to even communicate to you what the changes are! You just have to trust that they (a) are delivering things properly and (b) are delivering things you actually WANT.

It's not just about breakages either: a vendor can arbitrarily make changes - just because! Remember when Microsoft introduced the Ribbon to Office? Many people positively HATED it. So, what was the alternative? Simple: don't buy the newer version that has it and continue to use the version you've already paid for.

If that had happened during this SaaS era though, you'd have had no choice. It would have been Ribbons for everyone, like it or lump it!

And from the vendor's side, a more or less steady base of subscribers means you no longer have the benefit of tracking sales over time to determine whether a change was well-received or not. Sure, there are other ways to elicit feedback from customers, but why would you bother? You already KNOW what's best for them or you wouldn't have put the change in there to begin with! The only feedback that really matters in such a case is lost sales, and that doesn't happen with subscriptions.

Another problem is around connectivity, though I personally consider this to be a smaller issue. SaaS of course requires some sort of connectivity to work in nearly all cases. Sometimes it's not bad at all: the software may ping back to the Mother Ship every few days, weeks or even months to confirm it's still okay to run. That's probably not too big a deal. But if the software runs in the cloud, then of course you need a constant connection. In either case though, the issue is that there's a potential that your software will just not work one day. If you have a report to write for work and all of a sudden Word won't run because it can't validate your subscription due to a network outage down the road, well, that's just too bad for you. As I said though, I view this issue as somewhat of a minor one because vendors usually have some allowances for this sort of thing that will avoid such nasty scenarios most of the time. Still, the fact that they're even possible is something to think about.

There's also a problem of rising costs. While it's certainly true that a vendor can raise the price of their products at any time regardless of how they're delivered, there's a fundamental difference with a subscription in that it's almost like an addiction: you're accustomed to having the latest version, and your work maybe even DEPENDS on having the latest version (think of things like file format changes), so you almost have no choice but to just pay the new higher price. Remember too: if you decide not to pay, you can't just freeze on a current version and continue using it. No, you're cut off entirely. You either pay or you do without. If it was straight-up purchased software then your choice is simple: don't buy the new version at the higher price and continue to use what you've ALREADY purchased outright. No such choice (usually) exists with SaaS though.

I’ve touched upon it a couple of times already but now let’s talk directly about a more fundamental problem lurking in this model: for nearly all SaaS subscriptions, if you no longer pay, you no longer have software to use. Period! This is vastly different than purchasing software directly. On the shelf in my office I have a disc with Microsoft Office 2000 on it. Sure, it’s a very old version now, but you know what? It still works! I could load that up and use it right now without any problem because I purchased it outright. I may not WANT to use such an old version, but I at least have that option.

Not so with SaaS. You either pay to use the current version, which is usually the only version you CAN use, or you don’t get access to anything at all (there may be a few subscriptions that DO in fact allow you to use older versions, perhaps even after your subscription has lapsed, but they would certainly be the exception that proves the rule).

But, the problems I've described thus far are the pretty obvious ones that everyone realizes. I'm not breaking any new ground here. There's another, somewhat more insidious and less obvious, problem with SaaS though, and it may just be the biggest: the vendor no longer has any real reason to innovate. There's far less incentive for them to improve the product at all.

Think about it: when you have a subscription, you're in a sense paying for what you already have. Sure, Photoshop may get some new features, but it's the same essential product. What might Adobe do? Well, they may think: "Gee, people are paying for Photoshop regardless of what we actually do with it... that's guaranteed income... why don't we re-allocate our Photoshop development resources to a new product that we can later get people hooked on with a subscription? That's increased revenue!" You really only have the good will of the vendor to count on for improvements to the product, because there's no longer a financial incentive to make them, and there may in fact be MORE incentive NOT to.

All a vendor really has to do is put out a small handful of bug fixes every so often and that’s enough for them to be able to claim that they’re upholding their end of the subscription bargain.

When you purchase a software subscription, you’re basically paying for something sight unseen. You don’t know what’s going to come down the pipe later, but you’re sure as hell paying for it either way. There’s no real obligation on the part of the vendor to even deliver anything more. Oh, you might get a new killer feature that becomes a truly must-have sometimes, but you don’t KNOW that when you click the “Pay Now” button. That’s the gamble you make with SaaS.

And yes, before anyone brings it up: you can usually cancel a subscription any time you want. Except, that misses two truths. One is that in at least some cases you really can’t… if you’re paying monthly then you can maybe argue it’s true, but if you paid yearly? Do you know many SaaS vendors who will refund you the portion of the fee you paid for the nine months you’re now cancelling? That’s true even on a monthly basis, but the damage is considerably less, so people tend not to notice as much. I’m sure there are some cool vendors out there that WILL do that, but again, they’re exceptions. Second, there’s that whole “addicted” aspect of this that is no small matter. Do you really want to go cold turkey and not have access to the software anymore? CAN you do that? If your daily work depends on Office 365 then cancelling, no matter how cool the vendor is about it, isn’t going to fly because you won’t have access to ANY version of the software anymore. That makes cancelling pretty much a non-starter.

So, what's the bottom line here? Well, if vendors do the right thing then it's probably that all the positives outweigh the negatives. If they are quality-oriented then they won't deliver broken updates. If they care about their customers and their products then they'll deliver new features that people actually want. If they listen to their customers and elicit feedback regularly then they'll be able to accomplish those goals. As customers, we'll have the benefit of the latest and greatest at a known and essentially fixed cost. That's all good stuff.

If they're bad vendors though? Ugh, we may be in for very hard times.

People need software to work whether professionally or just as a hobby. There’s not really an option of NOT getting into subscriptions at some point. Eventually, that old version of Office 2010 simply won’t run anymore, and I won’t want to be bothered getting an old Windows license to run in a virtual machine. And when I go to buy a new version that will work? Well, SaaS will be my only choice. Subscriptions are only increasing and it probably won’t be long before they are the norm in the software world. Open-source will still be a choice of course, that seems unlikely to ever go full-blown SaaS, and maybe at some point we’ll see a spike in its usage beyond the techie crowd (and beyond a few more mainstream apps). Frankly though, if that hasn’t really happened yet I doubt SaaS is going to change that.

The more likely scenario is people will just get used to the SaaS model. They’ll deal with the headaches because they’ll be sold on the benefits, whether they ACTUALLY outweigh the negatives or not. It all comes down to marketing, doesn’t it?

But, there’s no question in my mind that we’re starting down a path of leaving behind a model that worked for many years. Maybe this is just changing times and we’ll all have to adapt, or maybe it’s a mistake. It’s certainly not a mistake for the vendors, that much is clear. Do customers lose out as a result? There’s certainly cause for concern I’d say.

Now, if you’ll excuse me, I have to go renew my Office 365 subscription 🙂

18 Jan 2015

Building a homemade Dropbox/Time Machine thingy for vanilla file systems

So, I've had a dream for some time... but before I get to that, let me give you some necessary background...

I've got a server at home, and on that server are various directories (as servers tend to have!). One of them is a directory full of assorted documents. These documents are edited by myself and other members of my house. Documents are created, deleted, sub directories created and deleted, things renamed, all the typical file system activities.

Now, my server is well-protected in terms of backups: the server has a nice RAID array in it for starters, with real-time monitoring and reporting rolled into it. I've got Mozy running, so I have off-site backups covered. Plus, Mozy can do backups to an external hard drive at the same time, so I do that as well for an added layer of security. On top of all of that, every two weeks I image the entire data array to a different external hard drive. If that wasn't enough: every month I burn the most critical stuff to a couple of Blu-Ray discs, and once or twice a year I ship a copy off to my mom in another state.

In other words:

  • I'm incredibly anal about data retention (yes, I've been burned before)
  • I'm really not very likely to lose anything of value off that server

But, here's the thing: all that backup stuff doesn't do one significant thing for me and that's versioning. That's where that dream I mentioned comes into play. You see, I've also got a Subversion server running. What I'd really love to have happen is for at least certain directories on the server be checked into Subversion. That's easy to do of course, but the problem arises when you try and tell your wife or kids how to work with version control. When you aren't used to that thought process it can be pretty foreign. Besides, even if you're used to it, do you really want to have to worry about checking in changes every time you edit a file? Because even I don't really want to.

So... wouldn't it be great if there was a way I could have the server monitor the directory for changes, and when any are seen, automatically check the changes into Subversion?

Yes, it would be great... no, check that: it IS great, because I BUILT IT!

The title of this post is courtesy of a Twitter buddy of mine, Nick Carter (@thynctank). When I was explaining what I was doing, that was how he described it back to me and when I thought about it, it really was a very accurate description!

Ok, fine, preliminaries done, let's get to brass tacks! How did I pull this off?

First, let me say that this is a Windows server. If you're running a *nix variant then you're on your own. For Windows though, check out the app Directory Monitor from DevEnterprise.NET at https://www.deventerprise.net. It's the first part of the equation, namely (and unsurprisingly, given its name) the directory monitoring. This app is fantastic... it's highly configurable and Just Works™. It's exactly the ticket.

The next part of the equation is having a Subversion client installed. I loves me some TortoiseSVN (http://tortoisesvn.net). If you're not familiar with it, it's a GUI client for Subversion that integrates with Windows Explorer itself, so you interact with it via context menu entries on files and directories. Fortunately, it also ships with a command line client, and that's what we need here. In theory you can use whatever client you want, so long as it's in your path.

The final part of the equation is a VB script that handles the Subversion interactions. That's the bit I wrote and it's kind of the engine of the whole thing. Here's that script:

' **********************************************************************************************************************
' Automated Subversion check-in script by Frank W. Zammetti.
'
' This script is intended to be used with the Directory Monitor application from DevEnterprise.NET
' (https://www.deventerprise.net).  The goal is to real-time monitor a directory for changes and when they are detected,
' execute this script to automatically check all changes into Subversion.
'
' Notes:
'		1. It is assumed that the directory being monitored is a proper working copy of a directory already under
'			 SVN control.
'		2. Directory Monitor must be started with admin privs or the file writes in this script may fail.
'		3. You should be able to monitor multiple directories and handle changes with this same script, however, since the
'			 file writes use the directory of the script itself as the target for file writes, each directory you wish to
'			 monitor should have its own copy of this script and it should be in a different directory from all others.
'		4. This script assumes that it is passed a command line argument that is the fully-qualified path to the directory
'			 being monitored.  Without that, this will all fail horribly.
'		5. There's no error checking throughout this script... partly because there's not a heck of a lot that seems to be
'			 possible in the first place, and partly because what may be possible wouldn't seem to buy us much anyway.
'
' For more information on using this script, see my blog post here:
' http://www.zammetti.com/blog/2015/01/18/building-a-homemade-dropboxtime-machine-thingy-for-vanilla-file-systems
' **********************************************************************************************************************


' Create objects we'll need throughout.
Set oShell = WScript.CreateObject("WScript.Shell")
Set oFSO = CreateObject("Scripting.FileSystemObject")

' Determine the path of this script.  This is needed to properly target our file writes.
sScriptPath = Wscript.ScriptFullName
Set oScriptFile = oFSO.GetFile(Wscript.ScriptFullName)
sScriptPath = oFSO.GetParentFolderName(oScriptFile)

' If script is already running, abort.  This shouldn't really be necessary because Directory Monitor should be
' configured to not allow concurrent executions, but it's a little added safety net.  Note that we check the same
' fully-qualified path we write below, so the check works regardless of the current working directory.
If oFSO.FileExists(sScriptPath & "\running.txt") Then
	WScript.Quit
End If

' Write our "running.txt" file to avoid this script running again until this execution is done.
Set oRunningFile = oFSO.CreateTextFile(sScriptPath & "\running.txt")
oRunningFile.Write "running"
oRunningFile.Close

' Start creating our results file.
sResults = "----------------------------------------------------------------------" & VbCrLf
sResults = sResults & "Script starting " & DateInfo & Now & VbCrLf

' Grab the command line argument so we know what directory to work with.
sTargetDir = Wscript.Arguments(0)
sResults = sResults & "Target directory = " & sTargetDir & VbCrLf & VbCrLf

' Get the list of changes in the target directory.
sCmd = "svn status " & Chr(34) & sTargetDir & Chr(34)
sResults = sResults & sCmd & VbCrLf
Set oExecStatus = oShell.Exec(sCmd)

' Only continue once the command completes.
Do While oExecStatus.Status = 0
	WScript.Sleep 100
Loop

' Iterate over the changes and handle each appropriately.
Do

	' Get the next change.
	sLine = oExecStatus.StdOut.ReadLine()
	sResults = sResults & sLine & VbCrLf

	' Determine the SVN command to execute.  Note that only adds and deletes require
	' us to do anything; the SVN ci command takes care of modifications.
	sOp = ""
	If Left(sLine, 1) = "?" Then
		sOp = "add"
	ElseIf Left(sLine, 1) = "!" Then
		sOp = "delete"
	End If

	' Execute the command, if there is one to execute.
	If sOp <> "" Then
		sCmd = "svn " & sOp & " " & Chr(34) & Trim(Mid(sLine, 2)) & Chr(34)
		sResults = sResults & sCmd & VbCrLf
		Set oExecOp = oShell.Exec(sCmd)
		' Only continue once the command completes.
		Do While oExecOp.Status = 0
			WScript.Sleep 100
		Loop
	End If

Loop While Not oExecStatus.Stdout.atEndOfStream

' Fire an SVN ci command to commit all changes.
sCmd = "svn ci " & "-m " & Chr(34) & "Auto-Commit" & Chr(34) & " " & Chr(34) & sTargetDir & Chr(34)
sResults = sResults & VbCrLf & sCmd & VbCrLf
Set oExecCommit = oShell.Exec(sCmd)
' Only continue once the command completes.
Do While oExecCommit.Status = 0
	WScript.Sleep 100
Loop
' Capture the output of the command in our results.
Do
	sResults = sResults & oExecCommit.StdOut.ReadLine() & VbCrLf
Loop While Not oExecCommit.Stdout.atEndOfStream

' Epilogue in the results file.
sResults = sResults & vbCrLf & "Script finished " & Now & VbCrLf & VbCrLf

' Write results to file.
iForAppending = 8
Set oResultsFile = oFSO.OpenTextFile(sScriptPath & "\results.txt", iForAppending, True)
oResultsFile.Write sResults
oResultsFile.Close

' Delete our "running.txt" file so this script can run next time it needs to.
oFSO.DeleteFile(sScriptPath & "\running.txt")

Copy this down, save it to a directory (NOT the one you'll be monitoring!), name it commit.vbs, and set it aside for now.

Putting all the pieces of the puzzle together is pretty easy. Let's say we're going to monitor a directory C:\docs. The first step is configuring Directory Monitor. Here's the main Directory Monitor window:

[Screenshot: the main Directory Monitor window]

As you can see, I've already done the configuration and have even had some events triggered during my testing. I'll walk you through the setup screens that got me to that point. If you're adding the directory for the first time you'd be presented with these same screens. Here's the first one:

[Screenshot: the directory settings screen, with the events to monitor checked and the .svn directory excluded]

This is actually the most important of the bunch (well, one of two key tabs anyway). You'll need to check off the events you see checked here to ensure you capture everything that happens in the directory, but no more. You'll also likely want to monitor sub-directories, but that's up to you. The next critical thing is to ensure you exclude any changes that happen in the .svn directory. Failing to do that will result in an endless loop of script executions once we add in the execution of the script (ask me how I know!).

The Text Log tab can be left alone, but for the sake of completeness, here's what it looks like on my machine:

[Screenshot: the Text Log tab]

The Execute tab is the other key tab, and here it is:

[Screenshot: the Execute tab]

Browse for the commit.vbs script file and select it in the Execute box.  Directory Monitor should modify it to look like what you see here.  Now, in the Parameters box, add the %dirpath% placeholder.  That will be passed to the script so it knows what directory to work on.

Like the Text Log tab, the Sounds, Emailed and Database tabs don't matter in my use case, so I've left them with their default settings.  But, just to ensure you don't miss anything, here they are as I see them on my machine:

[Screenshot: the Sounds tab]

[Screenshot: the Emailed tab]

[Screenshot: the Database tab]

At this point, hit Save and you're basically done!  Directory Monitor will now start monitoring the directory and will fire the script when any changes occur.  You should see a couple of command prompt windows flash across the screen whenever it does. I'm working on a way to avoid that, but it only winds up mattering if you've got a lot of activity in the directory and need to work on the server. In that case, just pause monitoring in Directory Monitor and you should be fine. During this processing, the results.txt file will be written so you can see what happened.  As an example, here's a couple of runs recorded during my testing:

----------------------------------------------------------------------
Script starting 1/18/2015 3:20:48 PM
Target directory = C:\docs

svn status "C:\docs"
M C:\docs\_Our Movies.txt

svn ci -m "Auto-Commit" "C:\docs"
Sending docs\_Our Movies.txt
Transmitting file data .
Committed revision 5777.

Script finished 1/18/2015 3:20:49 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:20:59 PM
Target directory = C:\docs

svn status "C:\docs"
? C:\docs\test - Copy (2).txt
svn add "C:\docs\test - Copy (2).txt"
add - A C:\docs\test - Copy (2).txt

svn ci -m "Auto-Commit" "C:\docs"
Adding docs\test - Copy (2).txt
Transmitting file data .
Committed revision 5778.

Script finished 1/18/2015 3:21:00 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:21:00 PM
Target directory = C:\docs

svn status "C:\docs"

svn ci -m "Auto-Commit" "C:\docs"

Script finished 1/18/2015 3:21:01 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:21:42 PM
Target directory = C:\docs

svn status "C:\docs"
? C:\docs\test - Copy (3).txt
svn add "C:\docs\test - Copy (3).txt"

svn ci -m "Auto-Commit" "C:\docs"
Adding docs\test - Copy (3).txt
Transmitting file data .
Committed revision 5779.

Script finished 1/18/2015 3:21:43 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:21:43 PM
Target directory = C:\docs

svn status "C:\docs"

svn ci -m "Auto-Commit" "C:\docs"

Script finished 1/18/2015 3:21:44 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:22:05 PM
Target directory = C:\docs

svn status "C:\docs"
! C:\docs\test - Copy (2).txt
svn delete "C:\docs\test - Copy (2).txt"
! C:\docs\test - Copy (3).txt
svn delete "C:\docs\test - Copy (3).txt"
! C:\docs\test - Copy.txt
svn delete "C:\docs\test - Copy.txt"

svn ci -m "Auto-Commit" "C:\docs"
Deleting docs\test - Copy (2).txt
Deleting docs\test - Copy (3).txt
Deleting docs\test - Copy.txt

Committed revision 5780.

Script finished 1/18/2015 3:22:06 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:22:06 PM
Target directory = C:\docs

svn status "C:\docs"

svn ci -m "Auto-Commit" "C:\docs"

Script finished 1/18/2015 3:22:07 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:22:07 PM
Target directory = C:\docs

svn status "C:\docs"

svn ci -m "Auto-Commit" "C:\docs"

Script finished 1/18/2015 3:22:07 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:22:47 PM
Target directory = C:\docs

svn status "C:\docs"
? C:\docs\work
svn add "C:\docs\work"

svn ci -m "Auto-Commit" "C:\docs"
Adding docs\work

Committed revision 5781.

Script finished 1/18/2015 3:22:48 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:22:54 PM
Target directory = C:\docs

svn status "C:\docs"
? C:\docs\work\test.txt
svn add "C:\docs\work\test.txt"

svn ci -m "Auto-Commit" "C:\docs"
Adding docs\work\test.txt
Transmitting file data .
Committed revision 5782.

Script finished 1/18/2015 3:22:55 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:22:56 PM
Target directory = C:\docs

svn status "C:\docs"

svn ci -m "Auto-Commit" "C:\docs"

Script finished 1/18/2015 3:22:56 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:23:15 PM
Target directory = C:\docs

svn status "C:\docs"
? C:\docs\aaa
svn add "C:\docs\aaa"

svn ci -m "Auto-Commit" "C:\docs"
Adding docs\aaa

Committed revision 5783.

Script finished 1/18/2015 3:23:16 PM

----------------------------------------------------------------------
Script starting 1/18/2015 3:23:21 PM
Target directory = C:\docs

svn status "C:\docs"
! C:\docs\aaa
svn delete "C:\docs\aaa"

svn ci -m "Auto-Commit" "C:\docs"
Deleting docs\aaa

Committed revision 5784.

Script finished 1/18/2015 3:23:22 PM

As you can see, this file will just continue to grow unbounded (another thing I intend to remedy later) so if you're going to have a lot of activity in the directory and are worried about space you may want to comment out the lines in the script that write it (at the cost of a loss of logging obviously).

Well, that's basically a wrap! I can't claim any of this is rocket science or anything, at the end of the day it's pretty simple, but it's quite a useful script and achieves my stated dream 🙂 If you find it useful too then by all means, have at it! And if you make any enhancements I'd love to hear about them... this thing is running on my server now, so I have a vested interest in it working correctly and running well.