The Demented Ravings of Frank W. Zammetti


Microsoft, you’ve got to do better by a customer when DEATH (maybe) is on the line!

I've been a definite fan of the Microsoft Surface line ever since I first saw it. It's always had this feeling of being from the future: thin, light, and yet containing all the power of a decent desktop. My wife was the first in my house to get one, a Surface Pro 2 (on sale for $599 at the time). She liked it, but it had some frustrating Wi-Fi issues that I could never get corrected, so she moved on after about a year with it. That didn't sour my opinion of the line, though.

I then got a Surface Pro 3 shortly after its initial release back in June of 2014, and it’s been my faithful companion ever since. The thing became my DESKTOP replacement, ably doing all I needed it to do while connected to a big honkin’ monitor, keyboard, mouse and other peripherals. It’s great to have such a mobile device that can be the ONE device to rule them all – no more worrying about syncing apps and data between my desktop and my separate laptop, it’s all the same machine! Put simply, it has served me very well and I've been a happy Microsoft customer.

Until recently.

You see, over the last few months, something weird has been happening. The screen, along the bottom edge, has been warping, bowing outward. The screen began to separate from the chassis. Then, some scorch marks appeared on the lower corners. I had been attributing this to heat all this time – after all, the thing was constantly running, sitting in a dock on my desk, it’s not surprising that heat might be an issue. A Surface arguably was never meant for that configuration, though it worked perfectly well that way (and I don’t recall Microsoft ever saying it WASN’T a valid configuration to use it in).

Here, have a look:

This is probably the "money shot", as they say:

And here are those "scorch marks" in the corners I mentioned:

Interestingly, my SP3 still FUNCTIONS fine despite the physical damage! What I mean is that the screen still works as if nothing were physically wrong with it. It hasn’t begun crashing or anything unusual (although in recent weeks it started randomly turning off every now and again, though I’m 99% sure this IS in fact due to it being a hot summer here, and more importantly, it’s never once done it while I was using it).

Like that watch slogan used to say: takes a lickin’ and keeps on tickin’ (screen deformity aside). I'm surely not HAPPY with what's been happening to it, but at least it hasn't died outright, so that's something.

In just the past few days, the screen has started separating from the chassis along the top as well, which is new, and one corner has lifted off entirely (though having it in the dock acts as something of a "clamp" and forces the adhesive to do its job, at least temporarily). It has clearly gotten worse quickly, despite still working fine.

This past week, Consumer Reports posted an article entitled “Microsoft Surface Laptops and Tablets Not Recommended by Consumer Reports”. Being a resident of the Surface subs on Reddit, I was not at all surprised to see it engender a lot of discussion there. On one of the posts, I decided to state the following:

I've had my Surface Pro 3 since shortly after it launched. It's the i5/256Gb model. It's been my primary DESKTOP machine since then: it sits in the dock most of the day plugged into a monitor, keyboard, mouse, and other peripherals.

Over the last six months, it's become a mess: the screen has literally warped and now bows out, separating from the chassis along the bottom (I can now get my thumb into the gap without forcing it). In the two bottom corners, there are burn marks where the LCD obviously had too much heat behind it. So, all of that is really, really bad.

But, here's the thing: it still basically works perfectly! Yes, if the ambient temperature gets above around 90 then it starts doing some random shutoffs, but that's fairly rare (and it's never shut off on me in the middle of using it interestingly). The screen, despite the deformity and burn marks, still FUNCTIONS fine too. When I run various tests on the whole thing, including some hellacious burn-in runs, it never stumbles at all.

So yeah, nasty physical damage, but still functions perfectly. I'm not sure every other laptop/tablet would be able to keep going like this thing has.

And, let's not forget that my usage pattern for it isn't really what it's intended for. It wasn't really intended to be a desktop replacement, yet that's what I've used it as, and it has performed admirably. So, while I could argue that the type of damage I'm seeing due to heat shouldn't ever have happened for the money I paid for it, it's also amazing that it still works great despite all of that. It's also fair to point out that until the damage started becoming apparent, I used to leave it on 24x7. So, again, driving it harder than perhaps was reasonable.

This, as you might expect, got some comments. One comment, in particular, was most interesting to me, courtesy of a helpful Redditor:

Take it to a Microsoft Store. You're dealing with battery swelling. Microsoft will replace a device that has a battery swelling issue regardless of warranty period. As long as your device doesn't show signs of physical drop damage, they'll replace it. That's a safety hazard with the swelling battery.

Revelation! I had never considered that it might be a battery issue! My mind has always been stuck on “heat issue.” But, after I read that, I took the thing out of the dock, shined a flashlight in there, and what do you know, it did indeed look like a swelling battery! Like, OBVIOUSLY so! I wish I had thought of that possibility myself, but I didn’t! (let that be a lesson kids: never allow yourself to become so convinced one answer is right that you stop considering any other possibilities!)

Well, fortunately, I have a Microsoft Store 20 minutes from me, in the biggest mall in the country no less, so I figured it was worth a shot. Especially since the device is otherwise in pristine condition (never been dropped even once) and this is a safety issue, as that Redditor said, I figured the worst that could happen was they'd say no.

Well, unfortunately, that’s exactly what they said.

They immediately acknowledged that yes, the battery is indeed swelling, that's what's going on. No debate. They also told me that while it's not a common thing, it's also not unprecedented (they said they'd never seen one as bad as mine though). But, all that said, the manager (who I was told has discretion on this) would NOT swap it.

I mean, to a certain extent, I get it: it's out of warranty, even out of the extended warranty (MS Complete Care that I paid extra for), and as Chris Rock said: “When you leave a restaurant, they don’t owe you a steak!” (NSFW, as you’d guess with Rock, but funny as hell). It’s not like they owe me a brand new $2000 device.

Or do they?

I mean, at the same time, they made a big deal about how it's dangerous, and yes, of course, it is! Just ask Samsung! The store wanted to rightly keep it and dispose of it, even started paperwork for that. Unfortunately, I had to say no way, I'm not walking out of the store without a Surface, preferably a new one, but the potential bomb one if necessary (this was before the manager gave his answer). Remember, this is my PRIMARY computer at this point, and I need it to make a living, being without a working PC isn’t really an option.

But nope, they weren't willing to do anything. They didn't even offer to give me a new one for a significant discount, which I might have been okay with. In fact, in my mind, I was willing to go up to $500 for a newer Surface. It really shouldn’t have cost me a dime, but I was willing to bend a bit on that because I would, after all, be getting a newer, better device. That didn't strike me as entirely unreasonable, but I wasn't even offered that option.

So, while I kinda/sorta understand their perspective, I'm still not at all happy, and I don’t fundamentally think it’s right. I mean, in the end, is it going to cost them less to give me an SP4, or to get sued by me when it blows up in my face? I know which I would do if I were the manager. And sure, they could claim I was informed of the danger and used it anyway, and they’d be right and probably win in court. But again, just ask Samsung about the damage bad publicity can do. For the cost of a new Surface, which is NOTHING to Microsoft, isn’t it worth avoiding that possibility entirely? Seems like it to me.

Ultimately, they’ve lost a customer here, and a customer who had sung the praises of the Surface line since 2014 (just check out my Twitter history, I’ve been a big supporter over that time). Sure, it was my choice to leave with a messed-up and even dangerous device, but was it really a choice? My other option at that point was to walk away empty-handed or spend a ton of money on a new device that I don’t otherwise want (because the SP3 still is doing the job for me). I’m effectively out money either way, right?

Instead, my money is going to Dell now: I just put in an order for an XPS 15. When it's all set up, I'll bring the Surface to the MS store for proper disposal. To some extent you might say it was a spiteful purchase: I’m not exactly rich and the Dell isn’t cheap. But I’m entirely unwilling to give my money to Microsoft again at this point (and not for nothing but the XPS 15 is a KILLER machine anyway).

But, when we're talking a safety concern like this (their words, not mine remember) then they should have gone the extra mile to help me out in my opinion (and not JUST my opinion). But they didn't want to do that, so now I'll go a different direction, and it’s not a Microsoft direction any longer.

I’m sad about all of this because as I said, I’ve been a fan and I still think the Surface line is excellent overall, even despite the Consumer Reports story and this experience. In fact, I’m using it right now to type this up, and I will continue to do so until the new machine gets here - though I’ll no longer leave it on or even in the dock when I’m not actually using it – and I reminded myself where the nearest fire extinguisher is!

Am I wrong for thinking they should have done SOMETHING for me though? I don’t think I am. Again, this is a safety issue. Even if they don’t care about me personally, shouldn’t they care about the bad publicity an exploding battery could cause, especially given the knowledge of Samsung’s recent woes? I would think so. I was willing to bend a bit and spend SOME money, but they didn’t even offer that opportunity (and frankly, I didn’t bring it up – I wasn’t there to spend money after all). In point of fact, the manager didn’t even do me the courtesy of coming out to directly speak to me, which would have been warranted I think.

And yes, maybe you could chalk this all up to some bad store staff, but the truth is that aside from not getting what I considered the right answer, they were all very friendly, polite and I think ATTEMPTED to help me as best they could. I certainly have no complaints about how I was treated aside from not being truly helped in the end. Also, I’ve tweeted now to, I believe, three different Microsoft Twitter accounts pictures and comments about this issue over the last few months, and the response was always “go to a store.” Well, I DID go to a store, and got NO help. So, I’m not so sure I’m willing to chalk it up to simply one “bad” store (a store I otherwise have always had good experience with by the way).

So, sorry Microsoft, but you’ve lost a customer here. I’ll confess that part of me hopes someone from Microsoft sees this and contacts me to do right by me. I’d be more than happy to post a follow-up about that if it happens (a good redemption story is, err, good!) But, at this point, I’m not counting on it. In the absence of that, you’ve not only lost a customer, but you’ve lost a vocal FAN.

And that may be the biggest shame of it all.


The night of my visit to the store, based on further advice from a Redditor, I submitted feedback via the Microsoft feedback system. I gave my phone number and said I was reachable between 4pm and 8pm. Well, the very next day at 4:01pm, I received a call from someone at the store (I honestly didn’t catch if she was the manager or not, though I suspect she was). She asked me to describe my experience, and I did. She apologized profusely, and said I definitely should have been offered SOMETHING. She said that she could offer me a replacement Surface Pro 4 for $599. Here’s the key though: she said it would be the SAME SPECS as my SP3.

Now, to be honest, I MIGHT have accepted that if they had offered it to me immediately the night I visited the store, but I’m glad I didn’t, because upon reflection, that’s not a good deal either. Think about it: doesn’t that effectively mean I wound up paying $599 extra for my device? Sure, you could argue that an SP4 is inherently better than an SP3 even for the same specs, but the fact is I would NOT have bought an SP4 for the price I paid for the SP3 PLUS $599. The SP3 was already arguably overpriced and I never could have justified the purchase for $599 more. Now, if they said “okay, you can have a top-of-the-line Surface Pro 2017 for $599”, that’s a different story. Even possibly the best SP4 might have been reasonable.

I want to be fair here, because for one thing, they obviously did call me immediately after seeing my feedback, and that’s got to count for something. And, they did ultimately offer me SOMETHING, which also should count for something. But exactly how much it all counts for is debatable when the final answer wasn’t acceptable.

Besides, remember, as I’ve said before: we’re talking about a potential SAFETY issue here. It’s literally a DANGEROUS device now, and is so not through any fault of mine but because of a manufacturing defect. How can ANY company justify not doing a straight swap, even if it means they lose some money by providing a newer, better device?

You know, I’m usually a person who is extremely non-confrontational, and I’m also a person who doesn’t believe in getting something for nothing. It’s that mentality that kept me from trying to get this addressed for so long: I honestly felt like I was trying to pull something somehow. After being given a different perspective by helpful people on Reddit though, my mindset changed completely. Once I understood that this is a safety concern, things feel quite a bit different, and it leads me to feel like Microsoft did not do right by me, even after they at least made SOME attempt to do so upon seeing my feedback. I still have a hesitation about all of this because I don’t want to get anyone in trouble at the store. That’s not my intention or desire. But, given how I now view this whole situation, I’m not sure that’s my primary concern any longer.

If nothing else, I hope this post informs anyone who might experience the same or similar issue about how they might want to perceive things. When you’re talking about a potentially exploding battery then you’re talking about a fire hazard and so you’re talking about a safety concern. You might want to be more forceful than I was in getting Microsoft to do right by you (or indeed any other manufacturer), and even if you don’t then at least don’t make the mistake I did and not recognize the problem for what it is. If that’s all this post accomplishes then that’s good enough for me.


On the feasibility of traveling to Proxima Centauri B

So, as everyone has heard by now, scientists this week officially announced the discovery of a (maybe) Earth-like planet orbiting our nearest neighboring star, Proxima Centauri, which has been dubbed Proxima Centauri B (hereafter referred to as PCB). Aside from us Babylon 5 fans being excited as hell (Proxima colony?! The Centauri?!), everyone else was too, because this planet looks to be in the Goldilocks zone of its host star, appears to be rocky, and isn’t unfathomably bigger than Earth. In other words, so far (and it’s early, so things could change) it appears to have a lot of the characteristics we expect of a planet that could support life.

But, the bigger reason for the excitement is simply because of its relative closeness to us. “WE COULD ACTUALLY GO THERE!” exclaimed the Internet in unison.

But could we? Could we really get there?

If you wanna TL;DR this thing, here it is: no, not with today’s technology, at least not in a time frame that would do us much good. But, that’s a pretty nebulous statement, isn’t it? Exactly how long WOULD it take? Exactly how long COULD it take it if we really pushed our technology in the near future? What IS a “useful” time frame anyway?

Well, I’m no scientist of course, but I do play one on the Internet so, thanks to the magic of Wolfram Alpha, lemme see if I can throw down some numbers and come up with something a little more concrete.

First, some basic facts that we’ll need, and note that I’m going to round everything here just to make the numbers a little cleaner, but they’re all very close to actual, certainly close enough for our purposes here:

• PCB lies 4.3 light-years from us, which is 25 trillion miles
• The speed of light (C) is of course 186,000 miles per second, which is about 670 million MPH (apologies to all my non-American friends, but I gotta calculate with what I know, and I don’t know kilometers – I was educated in the American educational system after all!)
• 1% of C is therefore 1,860 miles per second, or 6.7 million MPH (why this matters is something I’ll come back to shortly)

With those basic facts we can now do some pretty simple math (which I’ll probably screw up somehow nonetheless, but that’s what comments are for!). First, let’s use that 1% figure I mentioned. Traveling at 6.7 million MPH, how long does it take to reach PCB? Well, that’s just the distance to PCB, 25 trillion miles, divided by 1% of C (6.7 million MPH), which yields 3.7 million, and that’s in hours. Ok, so, how many days is that? Just divide by 24, obviously, and we get 155,000 days. And how many years is that? Divide by 365 (screw you, leap years!) and we find it’s 425 years.
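That arithmetic is easy to sanity-check in a few lines of JS, using the same rounded figures as above:

```javascript
// Sanity check of the travel-time math above, same rounded figures.
var MILES_TO_PCB = 25e12;       // ~4.3 light-years in miles
var C_MPH = 670e6;              // speed of light in MPH (rounded)
var onePercentC = C_MPH / 100;  // 6.7 million MPH

var hours = MILES_TO_PCB / onePercentC;
var days = hours / 24;
var years = days / 365;

console.log(hours.toExponential(1)); // ~3.7 million hours
console.log(Math.round(days));       // ~155,000 days
console.log(Math.round(years));      // ~425 years
```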

Ok, that’s a long time for a little trip around the galactic neighborhood. Never underestimate how mind-bogglingly big space is!

Even at 1% C that’s not really feasible for us to do, even if technologically we could achieve it. But, that depends on what “feasible” means, doesn’t it? Well, beyond the technology, I would suggest that “feasible” refers to being able to get there, one way, in a single human lifetime. That’s really what most of us want after all, right? We want someone to go there and be able to report back what they find. We could of course send automated probes, and logically that’s what we’ll wind up doing at some point (whether this planet or some other yet to be discovered). But, a probe isn’t as good as a person to most people. It certainly doesn’t capture the imagination quite like humans going does. Our rovers on Mars continue to capture my imagination for sure, but I’m big-time itchin’ for someone to step foot on the red planet and use the human capacity for poetry to describe it to me (and I’d go in a heartbeat myself and cut out the middle man entirely!)

That’s the baseline assumption here: we want to be able to get there in a single human lifetime. In order to make this easier, let’s be a bit cold about it and say that we don’t care about coming back, nor of our intrepid explorers being able to survive there for very long, and “lifetime” we’ll define as a long human life. So, given that, what kind of speed do we need to be able to accomplish? Well, now you know why I did the 1% of light speed calculation: it makes it easy to extrapolate out! We just divide 425 by the percent of C we’re going to travel to see how many years it would take at each level. And, doing so yields the following table:

• 2% = 212
• 3% = 142
• 4% = 106
• 5% = 85
• 6% = 71
• 7% = 61
• 8% = 53
• 9% = 47
• 10% = 43
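The whole table above is just the 425-year figure divided by the speed multiplier, which a trivial loop reproduces (values match the list to within rounding):

```javascript
// Years to reach PCB at each fraction of light speed, extrapolated from
// the ~425-year figure at 1% of C (travel time scales inversely with speed).
var yearsAtOnePercent = 425;
for (var pct = 2; pct <= 10; pct++) {
  console.log(pct + "% of C: " + Math.round(yearsAtOnePercent / pct) + " years");
}
```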

Given that the current average human lifespan across the entire planet is around 71 years, we probably don’t want to send anyone that will be older than that upon arrival to have the best shot of, you know, everyone actually being ALIVE by the end! That means that we can’t send anyone older than 18 (53+18=71). That’s probably too young though, we probably want people in their 20’s realistically so as to have enough time for training (not to mention someone theoretically old enough to make the choice to go and understand the ramifications). That means no older than 24 at 9% C and we can go up to 28 at 10%. So the sweet spot is probably right in there.
So, to put it plainly: we probably need to go 9% or 10% of C and send no one older than 28 to have any chance of this working. Let’s call it the 10/28 rule.

But, how realistic is 10% of C for us? Well, NASA’s Juno probe is recognized as the fastest man-made object, attaining a top speed of 165,000 MPH. That works out to about 0.025% of C, which is roughly 40 times slower than even 1% of C, and about 400 times slower than our 10% target.

D’oh 🙁

And that doesn’t even account for acceleration and slowing down, things which are inherently more important for us squishy meatbags than our hardened technological creations.

This all tells us that we’re probably not doing this with current technology, not if we want to send people. In fact, it doesn’t seem even remotely possible if we wanted to send just a probe. Even though a probe allows us to accelerate and decelerate much faster, making the overall average speed of the trip better, the bottom line is we still can’t touch these speeds whether there are people on board or not. I don’t honestly know how much faster than Juno we could pull off today, but I’d be willing to bet even 1% of C would be a (likely impossibly) tall order. Therefore, we have to look to theoretical propulsion systems if we want to make this trip in any fashion. That’s always tricky, of course, because being theoretical quite obviously means they aren’t real yet, and might never be. We don’t know FOR SURE they’ll work or what kinds of problems we might encounter even if they do.

However, the encouraging thing is that there are quite a few theoretical propulsion systems on the drawing board that most wouldn’t consider fantastical. We’re not talking warp drives or wormholes or anything like that. No, we’re talking things like nuclear propulsion, plasma drives, solar sails, and fusion: things that we either know work and just need to be scaled up (which makes it a pure engineering challenge) or things that we have every reason to believe are feasible given some time to develop (fusion). If we said, for example, that we have 100 years starting from today to create the technology needed to get to PCB, that wouldn’t seem impossibly far-fetched. Highly optimistic, yes, and we almost certainly wouldn’t be talking about sending people, but it wouldn’t sound utterly ridiculous. And you'll note that 100 years isn't too much longer than a typical human lifetime - it's right around 1.5 human lifetimes in fact. That, in theory, puts this within the reach of the unborn children of someone alive today. That's not so bad!

So, in summary: the bad news is that we’re not going to PCB, whether people or just technology, with what we have now. We’re simply too slow at the moment. The good news however is that we do have some ideas currently being developed that might have a shot at doing it in a generation or two AND getting there in a semi-reasonable time frame AND which aren’t pure science fiction. There’s a ton of if’s and’s and but’s that you have to qualify that optimism with, but it’s probably not an outright impossibility in the foreseeable future. That’s pretty exciting!

Speaking as someone who would sacrifice arbitrary body parts to be able to see this planet in his lifetime whether with his own eyes or through technology, I’m depressed. My children have an outside shot... maybe... and my grandchildren a better shot still, but that doesn’t help me 🙁

I guess I’ll just go buy a PS4 and play No Man’s Sky. Seems that’s as close as any of us alive today is ever going to get.


Building web UIs: an unconventional viewpoint (now with surprising supporting evidence?)

For a long time I've been growing more and more convinced that the approach to developing complex web-based SPA UIs is going in the wrong direction. There is now an incredibly heavy dependency on CSS and markup, and it's made modern web development a lot more complicated than it's ever been before (and that's not even talking about all the tooling typically involved). I personally find understanding a UI holistically to be more difficult when it mixes complex CSS and markup (and then usually some JS too). I know some people love that and think it's the Best Thing Ever(tm), but I respectfully disagree. I think the development of responsive design has been a big driver of this, though not exclusively. You can of course do responsive without CSS, but I suspect it's largely been a case of "this is the first way someone figured out how to do it and now everyone thinks it's the right answer" without really considering alternatives very much. Just my opinion of course.

What's interesting to me is that I've spent a few years developing highly complex UIs, SPAs of course, that have taken a different approach: a code-centric approach. It's more like... dare I say it... Java Swing development than web development, in a sense. Rather than creating a basic layout in HTML and then styling it with CSS and applying JS for interactivity, I've instead done most of my work by writing JS that generates the UI for me. CSS becomes more like its original intent: just very basic look-and-feel styling. Things like animations and such aren't done in CSS (at least not at the level of me developing the UI). A lot of this work has been done using ExtJS, but not exclusively, so obviously there's a lot of markup and CSS and JS going on at a level below my work, but that's irrelevant to my main point, which is that it's building a UI through code rather than markup and styling directly.

Of course, I've done my fair share of more "typical" UI development at the same time so I've had a great chance to compare and contrast the approaches. I've always felt that the code-centric approach just made more sense. It always seems to me to be easier to understand what's going on (assuming the library used is expressive enough, as I find ExtJS to be). Things like responsive design are pretty easy to achieve with this approach too because now you've got JS making the breakpoint decisions and elements and all of that and that too just works better with my mind than does the HTML/CSS approach.

Of course, a code-centric approach isn't without its problems. For one, it obviously requires that JS be available. However, I frankly no longer see this as a disadvantage. I think JS has become one of the three key pillars of the web (along with HTML and CSS of course) and it's no less natural to expect JS to be available than to expect HTML and CSS to be available. If a user chooses to turn JS off (as I sometimes do) then woe be unto them if things don't work. I'm no longer interested in graceful degradation, not as far as JS goes anyway (and with a code-centric approach, any other GD concern largely goes away as well).

Another pitfall is that the code-centric approach can be more verbose and overall requires more "stuff" be written. This, again, is something I don't necessarily see as a problem. There's a drive towards terseness in programming today that I find counterproductive. Verbosity isn't the measure that matters; expressiveness is. Sometimes short can be very expressive, sure, but often I find short is the result of laziness more than anything else (programmers REALLY seem to hate typing these days!) and the result is code that's harder to read and comprehend, in my opinion. There are of course limits: I'm not a fan of Objective-C and its long names, but one could hardly accuse it of not being expressive.

All of this is highly debatable and I'd be the first to admit it, but it's my blog so let's go with it 🙂 Anyway, I'm not here trying to convince you that I know the One True Right Way(tm) to develop modern webapps or that you're doing it wrong. Philosophical musings are fun, but it's not my purpose here. No, this article is about one specific problem of the code-centric approach that I didn't mention yet.

Or, at least, something I EXPECTED would be a problem: performance.

Let's jump in and look at a bit of code, shall we?

  <!DOCTYPE html>
  <html>
    <head>
      <meta content="text/html;charset=utf-8" http-equiv="Content-Type"/>
      <meta content="utf-8" http-equiv="encoding"/>
      <script>
        var startTime = new Date().getTime();

        function calcTime() {
          var endTime = new Date().getTime();
          console.log("browser", (endTime - startTime));
        }
      </script>
    </head>
    <body onLoad="calcTime();">
      Username: <input type="text"/>
      <br/>
      Password: <input type="text"/>
      <br/>
      <input type="button" value="Logon"/>
    </body>
  </html>

It's not getting much simpler than that! One would expect that runs quite fast, and indeed it does: I sat there reloading it off the file system a bunch of times in Firefox (47.0.1) and on my machine (Surface Pro 3 i5 256Gb) it averages around 100ms to display the elapsed time in the console. It fluctuates a fair bit: I saw as low as about 70ms and as high as about 200ms. I'm frankly not certain what the discrepancies are all about, but ultimately it doesn't much matter because they were outliers anyway and we're looking at rough averages here.
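As an aside on methodology: Date().getTime() has only millisecond resolution and can jump if the system clock adjusts mid-measurement. Assuming a reasonably modern browser, performance.now() is a steadier timer for this kind of test; here's the same pattern sketched with it:

```javascript
// Same timing pattern as the example, but using performance.now(), which
// is monotonic and has sub-millisecond resolution.
var startTime = performance.now();

function calcTime(label) {
  var elapsed = performance.now() - startTime;
  console.log(label, elapsed.toFixed(2) + "ms");
}
```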

Now, let's look at a code-centric approach version of that same example:

  <!DOCTYPE html>
  <html>
    <head>
      <meta content="text/html;charset=utf-8" http-equiv="Content-Type"/>
      <meta content="utf-8" http-equiv="encoding"/>
      <script>
        var startTime = new Date().getTime();

        function buildUI() {
          var f = document.createElement("form");
          f.appendChild(document.createTextNode("Username: "));
          var i1 = document.createElement("input");
          i1.setAttribute("type", "text");
          f.appendChild(i1);
          f.appendChild(document.createElement("br"));
          f.appendChild(document.createTextNode("Password: "));
          var i2 = document.createElement("input");
          i2.setAttribute("type", "text");
          f.appendChild(i2);
          f.appendChild(document.createElement("br"));
          var b = document.createElement("input");
          b.setAttribute("type", "button");
          b.setAttribute("value", "Logon");
          f.appendChild(b);
          document.body.appendChild(f);
        }

        function calcTime() {
          var endTime = new Date().getTime();
          console.log("js", (endTime - startTime));
        }
      </script>
    </head>
    <body onLoad="buildUI();calcTime();">
    </body>
  </html>

Note the paradigm employed here: there is no content on the page initially, it's all built up in code onLoad (no sense doing it sooner since there's no DOM to build up after all!). The loaded page you can think of as a blank canvas that gets drawn on by the code rather than the completed painting that the browser automatically displays as a result of parsing the document. Obviously, the onLoad event is going to fire here sooner than the plain HTML version because there's nothing to render.

But, what's your expectation of how this version compares to the plain HTML version? That code in buildUI() has to do its work, building up the DOM, and then the browser has to render it when it's appended, right? Does your gut say that this will take longer to complete because surely the browser can render static HTML in the body loaded as part of the document faster than it can process JS, build the DOM and render it under the control of JS?

That certainly was my expectation.


It turns out I was wrong.

Doing the same sort of "just sit there hitting reload a bunch of times" dirty timing, I got an average of around 60ms (and again, there were a few outliers in either direction, around the same number of them as the plain HTML version, so my guess is those are simple IO delays and in any case we can ignore them in favor of the rough average again).

I mean, it's not just me, right? That's kinda crazy, no?! And, that's not even the most efficient way one could write that code, so there's probably a chance to increase that performance. But what's the point? It's already faster than the plain HTML version!
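As one example of what "more efficient" might look like: instead of a createElement() call per node, the markup could be built as a single string and handed to the browser in one innerHTML assignment, letting the native parser do the element creation in a single pass. This is just a sketch of the idea; buildMarkup() is a hypothetical stand-in, not code from the actual test:

```javascript
// Hypothetical variant of buildUI(): assemble the markup as one string so the
// browser's native parser creates all the elements at once, rather than one
// createElement()/appendChild() call at a time.
function buildMarkup() {
  return "<form>" +
    "Username: <input type=\"text\">" +
    "Password: <input type=\"text\">" +
    "<input type=\"button\" value=\"Logon\">" +
    "</form>";
}

// In the browser you'd then do a single assignment:
// document.body.innerHTML = buildMarkup();
```

Whether that actually beats the createElement() approach would itself need measuring, of course, which is exactly the lesson of this little exercise.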

Now, to be fair, I can't necessarily extrapolate this out to a much more complex UI and assume the same would hold true. That's frankly a test I still need to do, but it's also a much harder test to set up properly. But, just seeing this simple test is frankly eye-opening to me. I always assumed that my favored code-centric approach was sacrificing speed in the name of other benefits, but that may not actually be true.

So, is it just me? Is this common knowledge and it's something I just never realized? Or, is it pretty surprising to you like it is to me? Or, did I just screw up the analysis somehow? That's always a possibility of course 🙂 But, I think I got it right, and I'm surprised (err, by the results, not that I think I got it right! LOL)
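As an aside, if I ever redo this analysis, the "hit reload a bunch of times" harness could be tightened up without much effort. Here's a rough sketch (invented for illustration, not the code from the test above) of averaging repeated runs; in a browser, performance.now() would give sub-millisecond resolution where Date.now() only gives whole milliseconds:

```javascript
// Hypothetical timing helper: run a function several times and average the
// elapsed wall-clock time. Date.now() has millisecond resolution; in a
// browser, performance.now() would be a finer-grained choice.
function averageRuntime(fn, runs) {
  var total = 0;
  for (var i = 0; i < runs; i++) {
    var start = Date.now();
    fn();
    total += Date.now() - start;
  }
  return total / runs;
}

// Example: time some throwaway busywork over 5 runs.
var avg = averageRuntime(function () {
  var s = 0;
  for (var i = 0; i < 100000; i++) s += i;
}, 5);
console.log("average ms:", avg);
```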


Did I just figure out how aliens will communicate with us?

So, you've heard about the weird "stuff" that's been found around star KIC 8462852, right? If not, here's a link:

In short, there's something orbiting it... a lot of somethings, more accurately... that we've never seen before. Now, before you post your Giorgio Tsoukalos "I'm not saying it's aliens - BUT IT'S ALIENS!" memes all over the place, there are a number of rather mundane explanations... though here, "mundane" only has meaning relative to the notion that it could be aliens. Planet collisions and things of that nature are a distinct possibility (and probably most likely).

But, one scientist, Jason Wright, an astronomer from Penn State University, made some interesting comments that the light pattern we see resulting from these whatever-they-ares kinda looks like the type of thing we might expect an alien civilization to produce as a result of some mega-big constructions (think Dyson Spheres, Ringworlds and that sort of "OMG THAT CAN'T REALLY EXIST!" big-ass thing).

Now, if you ask me right now to place a bet I'll give you whatever odds you want that this IS NOT aliens. It's probably still going to be something very cool because it's not like anything we've seen before, but almost certainly it's natural.

But, that got me to thinking... or more precisely, the comments of someone on Reddit discussing this did... he asked about radio waves. He thought that particles (dust and such) distributed through interstellar space would reduce radio waves to a point that they were entirely undetectable before long. Seems logical, right? Ok, so, if that's a given, an advanced alien species that could build unbelievably massive things in space wouldn't be using radio to communicate. What WOULD they use? LASERs? Maybe, but they're too precise: they have to be aimed at the receiver pretty accurately, and if you don't know where it is you waste a lot of time and energy beaming out in all directions.

In fact, this is the basic problem with most things that (a) will still be detectable at a great distance and (b) which you intend to not destroy any life on the receiving end (I mean, maybe gamma ray bursts are an alien civilization's calling card... but if so, they're VERY impolite since they can destroy all life on planets if they're too close).

Then, it kind of dawned on me how I'd pull it off: "reverse" Morse code!

Let me explain... imagine you could build massive structures around a star... imagine you could get them to align very precisely and orbit the star very precisely... now imagine that another civilization could detect the drop in light coming from the star and could even do so over time, yielding a pattern to the drops in light.

You could encode a message doing that in much the same way you type out a message in Morse code, but it would be a "negative image", so to speak: the DROPS in light are where the message is encoded, NOT the light itself (in Morse code terms: the message would be encoded in the silences between dots and dashes, not the dots and dashes themselves).

I mean, think about it: even to an advanced civilization it's very likely that a star is still many orders of magnitude more energy than they can themselves produce. Given that, why try to produce energy in ANY form, encode information into it and beam it out into space when you ALREADY HAVE SOMETHING DOING IT, and doing it much better than you can? All you need to do is figure out how to encode information into the star's emissions. That's where Reverse Morse code comes into play.
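Just to make the idea concrete, here's a toy sketch of what "reverse Morse" encoding might look like. Everything here is invented for illustration (the symbol timings especially are arbitrary); the only real data is the standard Morse table. Full brightness (1) is the "silence", and the message lives entirely in the drops (0):

```javascript
// Toy "reverse Morse" encoder: the message is carried by DROPS in starlight,
// not by the light itself. 1 = full brightness (the gaps), 0 = occluded.
// A dot is one time-unit of dimming, a dash is three (arbitrary choices).
var MORSE = { E: ".", T: "-", I: "..", S: "...", H: "....", O: "---" };

function encodeLightCurve(message) {
  var curve = [];
  message.toUpperCase().split("").forEach(function (ch, idx) {
    if (idx > 0) curve.push(1, 1, 1);      // between letters: three units of steady light
    (MORSE[ch] || "").split("").forEach(function (sym, i) {
      if (i > 0) curve.push(1);            // between dots/dashes: one unit of steady light
      var len = (sym === ".") ? 1 : 3;     // dot = short drop, dash = long drop
      for (var j = 0; j < len; j++) curve.push(0);
    });
  });
  return curve;
}

console.log(encodeLightCurve("hi").join(""));
// → "0101010" (H) + "111" (letter gap) + "010" (I) = "0101010111010"
```

The receiving civilization would see the star's brightness flicker in that pattern as the structures transit, and the information is entirely in the shape of the dimming.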

Now, I don't really know if this is an original idea or not. I do know that I can't recall ever having seen it anywhere before, and I also know that Googling "reverse Morse code" does not yield any result that describes what I'm describing here in any context. I'm actually sitting here wondering if I just came up with something really brilliant or if it's an old idea I've just never heard of. I'm certainly betting on the latter, especially given all the phenomenally smart sci-fi authors out there that think about this stuff all the time.

But, on the off chance this is an original idea, I wanted to make sure I got it written down somewhere because original or not, it really DOES seem like the way advanced aliens would try to communicate with us, especially if we assume that the speed of light is the same impediment that it is for us and that no "work-arounds" exist or are exploitable. Certainly no technology we know of or can realistically imagine building would allow for communication across the types of distances we're talking about. Using the power of a star would cut out one of the biggest problems you have with communicating across great distances.

The question I have and can't answer is exactly how precisely we can measure transit events right now. Do we have the technology to discern a pattern like I'm describing, whether we can decode it or not? I don't THINK we do yet, I think we're just at the point of "this much light drops out in total so we know something's there" and that's it. But then again, I'm not sure I understand how we could know there's "stuff" around this particular star then. Don't we need to be resolving at a somewhat finer level than "total light lost" to be able to say that?

I'm no astronomer, astrophysicist or anything like that. I'm just a guy that thinks about science stuff A LOT. So it's far more likely this is just a neat idea for a sci-fi short story 🙂 But, I've asked a few real scientists to have a look and give me their take on the idea... I'll let you know what I hear!


EDIT: The following was added on 10/15, after the original post:

So, I tweeted Phil Plait, the famed "Bad Astronomer" himself and got a reply pretty quickly (thank you again Phil VERY much for taking the time to read my blathering!)... I'll quote his reply here:

"Fun idea. But radio waves travel for a LONG way, and are a lot easier. :)"

So there you have it: probably not what I said. Unsurprisingly. But, the fact that he didn't pooh-pooh the idea outright makes me feel pretty good anyway 🙂 I'm certainly not about to argue with a trained, professional astronomer... though I do feel the need to point out that I've read a number of sources that say radio waves don't propagate quite as well or as far as Phil seems to indicate... still, I don't doubt he's right: radio is simply a lot easier than building some unimaginably huge structure around a star regardless of how advanced a civilization we're talking about. It's probably not a trivial exercise for ANYONE and so you still have to imagine it's pretty unlikely to be the case. Fair enough.

Interestingly, Phil sent another tweet my way a little after that first one:

"Well now, I spoke too soon. 🙂"

That's a paper entitled "TRANSIT LIGHT-CURVE SIGNATURES OF ARTIFICIAL OBJECTS". So, you know, real scientists are out there thinking about what I had, which is cool! It also means my idea isn't original, but I already said it almost certainly wasn't (although that paper doesn't discuss it as a communication mechanism, I'm just going to assume that came up between the authors, or someone else somewhere came up with it too).

The deal with this particular star though is still something really interesting to say the least because the luminosity drop observed is something like 22%. That's A LOT! A Jupiter-sized planet transiting a star generally results in a drop of around only 1%. Obviously, that's a huge difference. That makes this even more interesting if you ask me (and you're here, so I'll just go ahead and assume you did). The most likely explanation still seems to be things like cometary collision, but even for that it's kind of hard to imagine a debris cloud that large.
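A quick back-of-the-envelope check on those two figures: transit depth is roughly the fraction of the stellar disk that gets covered, i.e. (r_object / r_star)². The radii below are the approximate published values for Jupiter and the Sun (KIC 8462852 is actually somewhat larger than the Sun, so treat this as illustrative only):

```javascript
// Transit depth is roughly the fraction of the stellar disk covered:
// depth ≈ (r_object / r_star)^2. Radii in km, both approximate.
var R_JUPITER = 69911;
var R_SUN = 696000;

function transitDepth(rObject, rStar) {
  var ratio = rObject / rStar;
  return ratio * ratio;
}

console.log(transitDepth(R_JUPITER, R_SUN));  // ≈ 0.0101, i.e. the ~1% figure

// Working backward: an occluder (single object or aggregate debris) producing
// a 22% dip needs an effective radius of r = r_star * sqrt(depth)...
console.log(R_SUN * Math.sqrt(0.22));         // ≈ 326,000 km, about 4.7 Jupiter radii
```

Note it's the covered AREA that matters, so a spread-out debris cloud blocking that much light in aggregate works just as well as one absurdly huge object.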

One thing that I see some people say when they see that 22% figure is "OMG, an object that occludes 22% of the light from this star! That's amazing!" Well, sure, it WOULD be, granted, but that's also not necessary. It doesn't have to be one single coherent object- a debris cloud that occludes 22% of the light IN AGGREGATE does the trick just as well and is a lot easier to contemplate, hence the cometary collision idea being what I'm seeing most astronomers say is the most likely scenario. Even still, it's hard to imagine a debris cloud larger than Jupiter by 22%... something tells me that's not the whole story.

So, really, at the end of the day, what we have here is something REALLY weird. Something we haven't seen before. Something that has at least some characteristics of what we think we might see from an artificial structure. But, we've got a couple of far more likely explanations that are all-natural, and all-natural is good in food so it's probably good in astronomy too. However, with all of that said, astronomers are now working pretty quickly to get additional observations off the ground, if you will (I believe I've read a January timeframe is the target). There's a lot of excitement over this, understandably so. It's either artificial and therefore something game-changing, or it's natural and almost as game-changing... or it's got a totally mundane explanation that we've so far missed of course, that's always a possibility. But right now, it really does look like a big deal whether it's aliens or not.

As for it being a communication mechanism like I proposed? Eh, I suppose it still COULD be, but it was always a pretty unlikely idea. But, since we're already talking about an unlikely idea, let me put on my sci-fi writer hat and make it even MORE unlikely, but still not ENTIRELY implausible...

Imagine you are an advanced alien civilization. You decide that building a giant screen in front of a star to send what amounts to a form of Morse code is a good idea because that energy will propagate through space better than any energy source you can produce yourself. Now, we have to assume such a civilization has the same fundamental limitations as us, which means no faster-than-light travel, which means they're just as confined to a single star system (or a small handful) as we are. That also means that two-way communication isn't possible for the same fundamental reasons.

Or is it?

You've heard of quantum entanglement, right? The idea that two photons, separated by an arbitrary distance, can instantaneously reflect changes in quantum state. This is interesting because what appears to happen is essentially an exchange of quantum "information" at superluminal speeds. This is a real phenomenon, not a theory. It's something scientists can experimentally observe in real life. Einstein called it "spooky action at a distance", and that's an apt description!

Now, before we get too excited, scientists have already ruled out the possibility of using this as a form of communication. That would necessitate violating causality, something we don't believe can happen. It DOES have some potential uses in communication anyway: there's some ideas about using it to ensure a message sent isn't tampered with in transit, but that's a very different, and much simpler, thing.

But... since we're playing with ideas here... what if we're wrong? What if it IS possible to use quantum entanglement to communicate? Let's assume that for a moment. What might our advanced civilization do with that knowledge if they had it? Well, I'll tell you what *I* would do in their shoes: you know that starlight I'm already playing with to send Morse code-like messages? Maybe I can quantumly entangle the photons on the way out from the source star. If I could, I might also encode in the message itself information on how to build a transceiver.

If you recognize that idea it's because it essentially is the story from Carl Sagan's Contact. We're not talking about building a wormhole transit machine here of course, because that's much harder (I presume). But a radio that can use quantumly entangled photons? Maybe not so hard (if, once again, physics allows it at all). The medium of the message, the starlight, would essentially include the information necessary to allow two-way communications.

I think the idea has a certain logic to it... I beam my simple reverse-Morse code message out everywhere in all directions using my local star as the power source... the breaks in the starlight provide the message, including the plans on how to build a device to allow two-way communication using the quantumly-entangled starlight that the receiver already has access to. All of a sudden, assuming a civilization is advanced enough to receive the message, decode it and follow the instructions to build the device, and is willing to do all of that, then the originator can have two-way real-time communication with them!

It's a nice idea, though it relies on breaking what we think are some fundamental laws of quantum mechanics... which just so happens to be our most successful, well-tested and verified theory to date and which is responsible for most of our modern world. However, even if that theory isn't as immutable as it appears now, there's still the not at all insignificant problem of time: the original signal that tells us how to communicate with the advanced civilization will still take thousands of years or more to reach us. There's simply no way to know if there's anyone on the other end to talk to anymore. I haven't conjured up a way around that problem yet, even a ridiculous one. That seems like the truly fundamental stumbling block to this whole communication idea.

Still, one can dream - and, until we know for sure what's actually there around KIC 8462852, that's exactly what I'm prepared to keep doing 🙂


The problem with Software as a Service subscriptions

We are, without question, in the Software as a Service (SaaS) era. Perhaps just at the start of it, but we’re in it for sure. There’s a new economy emerging and it’s one that has considerable pluses and minuses for all parties involved. This is the era of software subscriptions and it’s an ever-expanding era.

To be sure, SaaS doesn’t AUTOMATICALLY mean subscriptions. You can certainly offer SaaS without the need to subscribe to it. But that’s not what we’re here to talk about today. We’re discussing the subscriptions aspect of SaaS specifically, so when I say SaaS going forward understand that I’m talking about subscriptions.

I don’t know who actually started it, but everyone is doing it nowadays: Adobe, Microsoft, Jetbrains, IDM, the list goes on and on. As a customer you are now frequently asked to not actually purchase a piece of software outright but instead to purchase the RIGHT to use it for some period of time. Some will argue that software has ALWAYS been that way: EULAs for a long time have basically been grants to use software, not a transfer of ownership. However, that’s largely a matter of semantics. The fact is that when you purchase a boxed piece of software (or download it) and you aren’t paying a subscription then you do, for all practical purposes, “own” that software regardless of what the EULA says.

It’s quite easy to understand why subscriptions are a Very Good Thing™ from the vendor’s point of view: this is guaranteed income! It’s a roughly static revenue stream (well, you HOPE it’s NOT static and always grows, but it’s static in that once someone signs up you know how much income you’re getting from them and for what period of time). It also has the benefit that they can support a single version of an application. That reduces their effort and cost, thereby increasing their income even further. They can also shut you off at any time if the need should arise for any reason, and it makes software piracy much easier to stop (theoretically, impossible). In other words: it makes tremendous sense for a vendor in many ways.

But does it make as much sense for consumers?

Well, there’s some clear benefits of course. First, you always have the latest and (in theory at least) greatest version of an application. We all love hitting that update button on our phones and seeing new features in apps we use and the same is generally true for the software we use on our PCs, MACs and other desktops too. It’s also less effort: updates are either pushed to us transparently or we’re alerted to an update and encouraged to accept it. It’s all very convenient and kinda fun to get the New Hotness every so often!

We also know what our expense is up-front. There’s no more deciding whether to purchase a new version, whose price may have increased, because you’ve already essentially paid for it with the subscription and you knew up-front how much the cost was. And, because you’re paying at some regular interval, the cost is spread out and well-defined each time. It’s almost like paying for your software on credit in a sense and that ability to “budget” the cost is attractive to a lot of people, more so than a large lump sum payment is (hey, I’m generally in that boat for sure!)
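To put a hypothetical number on that "paying on credit" feeling (all prices here are made up for illustration, not any real vendor's):

```javascript
// Hypothetical pricing, purely for illustration: compare a subscription to a
// one-time purchase and find the month where the subscriber has paid more.
function breakEvenMonth(outrightPrice, monthlyFee) {
  return Math.ceil(outrightPrice / monthlyFee);
}

var outright = 250;   // one-time license price
var monthly = 10;     // subscription fee per month

console.log(breakEvenMonth(outright, monthly));  // 25: past month 25, the
// subscriber has paid more in total - but never more than $10 at a time,
// which is exactly the budgeting appeal described above.
```

Of course, the subscription side of the ledger also buys continuous updates, which is part of what makes the comparison fuzzy in practice.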

But, there’s also significant down sides.

Perhaps the biggest is the potential for breaking changes. If the vendor pushes an update that breaks something, especially if it’s something you use all the time, that’s a bad day for you. And, when those updates aren’t by choice and you can’t roll back, you’re in an even worse situation. This is of course the big complaint against Microsoft’s policy with Windows 10, and it’s starting to look like they may acquiesce just a little on this point. Also, as is the case with Windows 10 updates, the vendor is under no obligation to even communicate to you what the changes are! You just have to trust that they (a) are delivering things properly and (b) are delivering things you actually WANT.

It’s not just about breakages either: a vendor can arbitrarily make changes - just because! Remember when Microsoft introduced the Ribbon to Office? Many people positively HATED it. So, what was the alternative? Simple: don’t buy the newer version that has it and continue to use the version you’ve already paid for. If you have a more or less steady base of subscribers then you don’t have the benefit of tracking sales over time to determine if the changes you made were well-received or not. Sure, there’s other ways to elicit feedback from customers, but why would you bother? You already KNOW what’s best for them or you wouldn’t have put the change in there to begin with! The only feedback that really matters in such a case is lost sales, and that doesn’t happen with subscriptions.

If that had happened during this SaaS era though, you’d have had no choice. It would have been Ribbons for everyone, like it or lump it!

Another problem is around connectivity, though I personally consider this to be a smaller issue. SaaS of course requires some sort of connectivity to work in nearly all cases. Sometimes it’s not bad at all: the software may ping back to the Mother Ship every few days, weeks or even months to confirm it’s still okay to run. That’s probably not too big a deal. But if the software runs in the cloud of course then you need a constant connection. In either case though, the issue is that there’s a potential that your software will just not work one day. If you have a report to write for work and all of a sudden Word won’t run because it can’t validate your subscription due to a network outage down the road, well, that’s just too bad for you. As I said though, I view this issue as somewhat of a minor one because vendors usually have some allowances for this sort of thing that will avoid such nasty scenarios most of the time. Still, the fact that they’re even possible is something to think about.

There’s also a problem of rising costs. While it’s certainly true that a vendor can raise the price of their products at any time regardless of how they’re delivered, there’s a fundamental difference with a subscription in that it’s almost like an addiction: you’re accustomed to having the latest version, your work maybe DEPENDS on having the latest version (think of things like file format changes) so you almost have no choice but to just pay the new higher price. Remember too: if you decide not to pay, you can’t just freeze on a current version and continue using it. No, you’re cut off entirely. You either pay or you do without. If it was straight-up purchased software then your choice is simple: don’t buy the new version at the higher price and continue to use what you’ve ALREADY purchased outright. No such choice (usually) exists with a SaaS though.

I’ve touched upon it a couple of times already but now let’s talk directly about a more fundamental problem lurking in this model: for nearly all SaaS subscriptions, if you no longer pay, you no longer have software to use. Period! This is vastly different than purchasing software directly. On the shelf in my office I have a disc with Microsoft Office 2000 on it. Sure, it’s a very old version now, but you know what? It still works! I could load that up and use it right now without any problem because I purchased it outright. I may not WANT to use such an old version, but I at least have that option.

Not so with SaaS. You either pay to use the current version, which is usually the only version you CAN use, or you don’t get access to anything at all (there may be a few subscriptions that DO in fact allow you to use older versions, perhaps even after your subscription has lapsed, but they would certainly be the exception that proves the rule).

But, the problems I’ve described thus far are the pretty obvious ones that everyone realizes. I’m not breaking any new ground here. But, there’s another somewhat more insidious and less obvious problem with SaaS and it may just be the biggest: the vendor no longer has any real reason to innovate. There’s far less incentive for them to improve the product at all.

Think about it: when you have a subscription, you’re in a sense paying for what you already have. Sure, Photoshop may get some new features, but it’s the same essential product. What might Adobe do? Well, they may think: “Gee, people are paying for Photoshop regardless of what we actually do with it… that’s guaranteed income… why don’t we re-allocate our Photoshop development resources to a new product that we can later get people hooked with a subscription on? That’s increased revenue!” You really only have the good will of the vendor to improve the product because there’s no longer a financial incentive to do so and there may in fact be MORE incentive to NOT do so.

All a vendor really has to do is put out a small handful of bug fixes every so often and that’s enough for them to be able to claim that they’re upholding their end of the subscription bargain.

When you purchase a software subscription, you’re basically paying for something sight unseen. You don’t know what’s going to come down the pipe later, but you’re sure as hell paying for it either way. There’s no real obligation on the part of the vendor to even deliver anything more. Oh, you might get a new killer feature that becomes a truly must-have sometimes, but you don’t KNOW that when you click the “Pay Now” button. That’s the gamble you make with SaaS.

And yes, before anyone brings it up: you can usually cancel a subscription any time you want. Except, that misses two truths. One is that in at least some cases you really can’t… if you’re paying monthly then you can maybe argue it’s true, but if you paid yearly? Do you know many SaaS vendors who will refund you the portion of the fee you paid for the nine months you’re now cancelling? That’s true even on a monthly basis, but the damage is considerably less, so people tend not to notice as much. I’m sure there are some cool vendors out there that WILL do that, but again, they’re exceptions. Second, there’s that whole “addicted” aspect of this that is no small matter. Do you really want to go cold turkey and not have access to the software anymore? CAN you do that? If your daily work depends on Office 365 then cancelling, no matter how cool the vendor is about it, isn’t going to fly because you won’t have access to ANY version of the software anymore. That makes cancelling pretty much a non-starter.

So, what’s the bottom line here? Well, if vendors do the right thing then it’s probably that all the positives outweigh the negatives. If they are quality-oriented then they won’t deliver broken updates. If they care about the customers and their products then they’ll deliver new features that people actually want. If they listen to their customers and elicit feedback regularly then they’ll be able to accomplish those goals. As customers, we’ll have the benefit of the latest and greatest at a known and essentially fixed cost. That’s all good stuff.

If they’re bad vendors though? Ugh, we may be in for very hard times.

People need software to work whether professionally or just as a hobby. There’s not really an option of NOT getting into subscriptions at some point. Eventually, that old version of Office 2010 simply won’t run anymore, and I won’t want to be bothered getting an old Windows license to run in a virtual machine. And when I go to buy a new version that will work? Well, SaaS will be my only choice. Subscriptions are only increasing and it probably won’t be long before they are the norm in the software world. Open-source will still be a choice of course, that seems unlikely to ever go full-blown SaaS, and maybe at some point we’ll see a spike in its usage beyond the techie crowd (and beyond a few more mainstream apps). Frankly though, if that hasn’t really happened yet I doubt SaaS is going to change that.

The more likely scenario is people will just get used to the SaaS model. They’ll deal with the headaches because they’ll be sold on the benefits, whether they ACTUALLY outweigh the negatives or not. It all comes down to marketing, doesn’t it?

But, there’s no question in my mind that we’re starting down a path of leaving behind a model that worked for many years. Maybe this is just changing times and we’ll all have to adapt, or maybe it’s a mistake. It’s certainly not a mistake for the vendors, that much is clear. Do customers lose out as a result? There’s certainly cause for concern I’d say.

Now, if you’ll excuse me, I have to go renew my Office 365 subscription 🙂