Wednesday, September 30, 2015

Making the Most of Your Device’s Battery

There is a lot of misinformation out there about the best way to care for the battery in your cell phone, laptop, tablet, or other electronic device.  Most people have never been given proper instructions on how to best care for their batteries, and they end up wearing them out prematurely.  By taking care of your battery, you can keep your device performing optimally for years.
Technology has changed quite a bit over the years, that’s for certain.  So have the batteries that power our devices, and the chargers that keep them running.  Unfortunately, much of society hasn’t been taught how to care for them to get the most out of them.  So let’s set the record straight.

Myth: I should let the battery on my device drain all the way down before charging it again.

Fact: This was true in the days we used NiCd rechargeable batteries in our devices.  Very few devices still use NiCds; they are heavy and hold relatively little energy.  Today, we use Lithium Ion batteries, and draining a Li-ion battery shortens its life dramatically.  In fact, in some cases when a Li-ion battery is drained all of the way it won’t accept a charge at all.  Bad things happen to Li-ion batteries when they are allowed to get too low.

For example, if a Li-ion battery is allowed to fully discharge, it will only accept a few hundred charges before it dies.  If a battery is only allowed to dip to 90% charge each time it is used, it will be good for many thousands of charge cycles.  A properly cared-for battery can last for many, many years.  A battery improperly cared for can become useless in under a year.
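
To make those numbers concrete, here is a tiny illustrative sketch in Python.  The cycle counts are ballpark figures consistent with the claims above, not lab data:

```python
# Approximate charge cycles a Li-ion battery survives at different
# depths of discharge (DoD).  Illustrative figures only.
approx_cycles = {
    100: 400,   # draining fully each time: a few hundred cycles
    50: 1500,   # moderate drains
    10: 5000,   # shallow 10% dips: thousands of cycles
}

# Shallow cycling also delivers more total energy over the battery's
# life: multiply cycles by the fraction of capacity used per cycle.
for dod, cycles in sorted(approx_cycles.items()):
    full_equivalents = cycles * dod / 100
    print(f"{dod}% DoD: ~{cycles} cycles (~{full_equivalents:.0f} full charges of energy)")
```

Notice that the shallow-discharge battery not only lasts more cycles, it actually delivers more total energy over its lifetime.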

Myth: It is bad to leave my device plugged in all of the time.

Fact: For devices with really primitive charging circuitry, this is actually true.  These devices would overcharge a battery, and damage it. 

But those days are behind us.  Any modern cell phone, laptop, or tablet has intelligent charging circuitry that shuts off the charger when the battery is full, so there is no need to unplug manually once the battery is charged.

You may even see evidence of the intelligent charger.  If your device’s battery charge actually drops while plugged in, this is the intelligent circuitry doing its job, turning on and off to prevent unnecessary wear and tear.  Most devices hide this on/off cycle from you, though, so even devices that stay at 100% when plugged in are still managing your battery properly.

Myth: It doesn’t matter when I plug my device in, the battery is going to wear out in a couple years anyway.

Fact: Batteries do have a limited number of charge cycles that they can handle, and after each cycle they hold just a little bit less energy than before.  But the loss in total capacity can be minimized by making sure that batteries aren’t drained any more than they need to be.  The way you handle charging your device can extend or shorten its life significantly.  Deep discharges wear out a battery faster than letting the charge drop just a few percent before plugging it back in.  To maximize the life of your battery, just plug in whenever you can.

Myth: It isn’t good for a battery to only let it discharge a little bit before plugging it back in.

Fact: The Lithium Ion batteries that power our devices actually last longer when they aren’t allowed to discharge much.  They “like” to be constantly topped off.  The old NiCd batteries we used years ago worked best when discharged fully before charging, but the Lithium Ion batteries we use today wear out faster when allowed to discharge.  So plug in to keep your devices topped off whenever you can.

Myth: Lithium Ion batteries are dangerous, and can explode, especially if overcharged.

Fact: Lithium Ion batteries are potentially dangerous.  If allowed to overheat they can catch fire – violently – and even explode.  Fortunately, reputable manufacturers place multiple failsafes into modern batteries to prevent this from happening.  The number of cases of batteries overheating or exploding has dropped dramatically in recent years.

But because batteries have to be designed and built properly to prevent overheating, fires, and explosions, you should avoid purchasing no-name aftermarket batteries.  You just can’t be sure if they’re built with the same level of protection as batteries from the original device manufacturer.  It just doesn’t pay to buy batteries from brands you don’t know you can trust.

Myth: All Lithium Ion batteries are the same, so it doesn’t matter if I buy a cheap no-name replacement.

Fact: Batteries are most definitely not all created equal.  Aftermarket batteries often hold less of a charge than the originals (even when labeled as if they held more), and very often aren’t built with the same level of protections against fire and explosion.  They also tend to wear out faster.  It generally isn’t worth it to buy batteries from anyone other than the original device manufacturer, or at least a trusted brand. 

Myth: The battery in my device can’t be replaced.  The cover can’t be removed.

Fact: We have certainly seen a trend in recent years for device manufacturers to take away the ability for owners to swap out a battery by removing access covers.  But in most cases, batteries can still be replaced by a qualified service center.  Don’t be tempted to throw away an old phone just because it doesn’t hold a charge very well.  Replace the battery and keep using the device, or donate it to someone else who can enjoy it.  (Reusing is better than recycling, and far better than discarding.)

Myth: It’s okay to use an aftermarket charger.

Fact: It depends on what type of charger you’re talking about.  If you’re talking about a charger that you plug into a phone or tablet, the charger you use may not matter much in terms of the life of your battery.  But if you’re talking about a charger that you insert a loose battery directly into, it can make all of the difference in the world.  Cheap battery chargers often don’t have the intelligence they need to maintain a battery properly.  Stick to chargers from the original manufacturer, or at least a well-known and well-respected brand.

Myth: Using a charger with a higher milliamp rating than the original will damage a device/battery.

Fact: The milliamp rating on a charger is simply the maximum amount of current that it can potentially put out.  It doesn’t mean that it will force more current into a device than it can handle.  If a device is designed to draw 500mA, and you plug it into a 1000mA charger, the device will still draw just 500mA.  It is generally just fine to use a charger with a higher milliamp rating, so long as the voltage is correct.
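
The reasoning can be modeled in one line.  This is just a sketch of the idea using the example figures above (the function name is mine, for illustration only):

```python
def actual_draw_ma(device_design_draw_ma, charger_rating_ma):
    """The device's charging circuitry decides how much current to pull;
    the charger's milliamp rating is only a ceiling, never a push."""
    return min(device_design_draw_ma, charger_rating_ma)

# A 500mA device on a 1000mA charger still draws only 500mA.
print(actual_draw_ma(500, 1000))  # 500
```

A higher-rated charger simply has headroom the device never uses, which is why it's safe (voltage being equal).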

Myth: I should never allow my battery to drain fully.

Fact: Okay, well, yes, you should never drain the battery all the way until your device shuts itself off.  That is bad.  But it is a good idea to drain your battery down to 10% or so a couple of times per year.  Not because doing so is actually good for the battery, but because it is good for the device it is powering.  It is quite difficult for devices to figure out the charge level of Lithium Ion batteries (it involves a lot of guesswork), and putting a device through a discharge / recharge cycle gives it a chance to re-learn how your battery is behaving.  You’ll be rewarded with a more accurate gauge of the amount of battery life you have left.

Myth: It isn’t worth it to do anything to improve the battery life of my device.

Fact: Because draining a Li-ion battery is bad for it, you can extend the life of your device’s battery by taking a few steps to reduce the amount of battery charge being used.  Things like changing the amount of time a device sits idle before automatically going to sleep, reducing the brightness of your screen, using Wi-Fi instead of a cellular connection, or closing apps you aren’t using can make a huge difference, and can extend the life of your battery dramatically.

Myth: It is okay to throw away a used battery in the trash.

Fact: Nope. Lithium Ion batteries should always be recycled.  It is easy to do; most electronics and office supply stores will recycle old batteries for you at no charge (pun intended).

Myth: Batteries perform differently based on temperature.

Fact: This one is actually true.  A warm battery doesn't output as much energy as one at room temperature.  Likewise, a cold battery doesn't output as much as one at room temperature.  Batteries operate best at the same temperatures that we humans do.

Similarly, batteries charge best at room temperature as well.  A cold battery won't charge as fast as one at room temperature.  And trying to charge a hot battery isn't a good idea.  So if your device is too warm or too cold, give it some time to return to room temperature before plugging it in.

Batteries which become too warm are also damaged by the heat.  A battery that overheats because the device is in the sun, or is hot because the electronics inside have gotten warm, can easily be permanently damaged.

Myth: It's okay to use a battery which has swelled up.

Fact: A battery which has been overcharged or overheated can sometimes swell up and become larger than it is intended to be.  These are potentially dangerous to use.  The act of swelling up can damage some of the protection circuitry inside.  Once a battery has swelled it should be properly recycled and replaced.  There is no way to repair a swelled-up battery.

Myth: You have oversimplified how to care for a battery here.

Fact: Okay, yes, I have oversimplified a bit.  I'm aware that my advice isn't 100% accurate, and that modern electronics push batteries harder than perhaps they should.  But I feel my advice is still good, because actual battery best practices are too complicated for anyone to ever follow exactly.  We aren't NASA producing devices that have to survive in space for a decade.  Nobody would be happy with the battery life of their devices if they followed actual best practices, nobody would take the time to monitor their devices closely enough to maintain them perfectly, and any damage done by following my advice instead of the ideal is, for all practical purposes, insignificant.  Device owners can benefit significantly from the advice here compared to how they are likely handling their devices now.  So I've opted to simplify the rules to make them easier to follow.  Please forgive me for not over-complicating the matter.

Tuesday, August 11, 2015

Why I Don’t Buy Digital Movies

With the availability of iTunes and other digital video services, I hear a lot of people talk about how they don’t buy DVDs any longer.  I hear things like “I don’t want to take up space with all of those cases” or “my kids destroy DVDs” – which make sense, but at the same time I can’t bring myself to give up my physical media.

For me, though, digital video distribution (DVD?) plays a supporting role rather than the primary role in building my video collection.  I don’t purchase movies digitally – I buy the discs.  Almost always Blu-ray discs, actually, since normally when I watch movies they’re being projected on a 100” screen, and DVD can fall apart at that size.  Streaming services fall apart at that size to some degree as well, but this isn’t the reason I choose not to invest in digital.  It’s more basic than that.

The main reason is that I don’t trust that these services are going to be around in ten years.  And I don’t want my investment to be lost.

History already tells us that we can’t rely on these services, no matter who is backing them.  Several big players have already tried and failed, including Wal-Mart and Target.  And when they fail, you lose what you’ve bought.

I know what you’re thinking… that Apple’s iTunes isn’t going to go away.  Maybe not.  At least not now.  But can you actually believe that Apple, if they’re still around in 20 years, is still going to be supporting a service that old?  They don’t support any services more than a few years old now.  There’s just no way that they’ll actually still make your movies available to you that far in the future.  Technology changes too fast.  Twenty years in the technology world is an eternity.  Very few tech companies make it that long. 

Owning the discs ensures that I’ll be able to watch them 10, 15, or more years in the future.  Even if (when) manufacturers stop making Blu-ray players, the players I own today will still play those discs.  Yes, we’ll see improvements in picture quality with new tech like 4K and HDR, but Blu-ray is pretty good – it’s virtually the same level of quality currently projected in your local theater – and many movies have actually been shot in HD-like resolution, so in those cases a higher quality version usually doesn’t even exist.  And unless you’re sitting really close to a very large screen, newer technologies won’t provide any additional discernible picture detail. (Though HDR, if it catches on, has the potential to improve things considerably.)

The other big reason I still buy discs is convenience.  I don’t want to be without a way to watch a movie if my Internet goes down, I’m travelling somewhere without Internet access, or my connection isn’t fast enough to stream a movie reliably.  Maybe in 5-10 years our Internet access will be more reliable and high speed will be more ubiquitous, but I just can’t count on it.  And will the streaming service you’ve invested in still be around at that time?  There’s no way to know.

That said, it isn’t like I don’t use digital video services, because I do.  They’re just my backup.  Most movies I buy come with a code to unlock digital versions.  And if they don’t, I’ve really found Vudu’s Disc-to-Digital program to be very handy.  (Tip: If you use the service, do the conversions at home on your own computer, and convert more than 10 discs at a time for a 50% discount.) I can’t convert all of my movies to digital, but I can certainly convert enough of them that I’m generally not left wanting when I want to stream a movie. I’ve got 241 on Vudu right now, so I’ve got plenty to choose from.

In any case, I know that everyone’s situation is different.  But I would encourage you to think about the future when making your video purchases.  Would you care if your selected service shut down in 5 years?  Would it bother you if you lost your investment because they’ve gone belly-up, or choose not to support it any longer?  It’s something to consider.

Tuesday, October 14, 2014

Why Web Sites Don’t Need to Store Your Password

It seems counterintuitive, but web sites that require logins don’t actually need to store your password.  And they actually shouldn’t – it is a very bad idea to do so.   We see too many leaks of account databases for it to ever be safe to store passwords in any form, even if encrypted.

So how can a site validate a login if it doesn’t store the password?  The answer is something really cool called a hash function.  I know your eyes just glazed over, but bear with me, the concept is actually simple.

A hash function is a way of processing data that is one-way… you can put data in, and always get the same result coming out, but there is no way to reverse the process to get back the original data.  I won’t get into the specifics of how hashes actually work, but I can describe a very simple hash that will illustrate the principle.

Say, for the sake of simplicity, we are creating a web site that uses a 4-digit PIN as a password to log in.  We know that storing the PIN itself is dangerous because it could be leaked out or viewed by site administrators, so instead we add up the four digits and store that sum.  So if my PIN is 2468, we store 20 (2+4+6+8) in the database.  When we go back to the site to log in, the site can add up the four digits we enter for the PIN, compare that result against the sum in the database, and validate that we know the correct PIN.  A hacker who gets his hands on the database only knows that the sum of the digits is 20… he can’t possibly know that the original PIN was 2468.  He’d have to guess the original PIN by trying different combinations.
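
The whole scheme can be written out in a few lines of Python (the function name here is mine, just for illustration):

```python
def toy_hash(pin):
    """Our toy one-way function: sum the digits of the PIN."""
    return sum(int(digit) for digit in pin)

# At sign-up, the site stores only the hash of the PIN.
stored_hash = toy_hash("2468")
print(stored_hash)  # 20 -- the database never contains "2468" itself

# At login, the site hashes whatever was typed and compares.
print(toy_hash("2468") == stored_hash)  # True: login succeeds
print(toy_hash("1234") == stored_hash)  # False: 1+2+3+4 = 10, login fails
```

Note that the comparison only ever involves hash values; the original PIN is needed at the keyboard, but never in the database.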

Of course this is overly simplified.  This demonstration hash function wouldn’t work in the real world because it is too easy to find other combinations that would let hackers in.  This situation is called a collision… 8642, 5555, 8282, 1991, and 6446 all produce the same hash value of 20.  Real hash functions used for account login verification are much, much more complicated, and aren’t normally subject to problems with collisions.  But you get the idea.  Instead of storing the actual password, we store a value that is calculated from the password.  We can validate that someone knows the password without actually storing that password.
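
In fact, you can count exactly how badly the toy hash collides by brute force, which is precisely what makes it unusable in practice:

```python
# Every 4-digit PIN whose digits sum to 20 would pass the login check.
colliding = [f"{n:04d}" for n in range(10000)
             if sum(int(d) for d in f"{n:04d}") == 20]

print(len(colliding))        # 633 different PINs all "work"
print("2468" in colliding)   # True -- including the real one
print("5555" in colliding)   # True -- along with hundreds of impostors
```

Out of 10,000 possible PINs, hundreds unlock the same account.  A real cryptographic hash makes finding even one collision computationally infeasible.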

This has other advantages as well.  For example, using a hash function there is no limit to the length of the password, because hash result values are always the same length regardless of the amount of data going in.  Someone could enter 6 letters, or 200 random symbols, and either one can be hashed down to a value of a standard length that can be stored in the database. 
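
You can see this with a real hash function.  SHA-256, available in Python's standard hashlib module, always produces a 256-bit digest no matter how much data goes in:

```python
import hashlib

short_input = hashlib.sha256(b"secret").hexdigest()
long_input = hashlib.sha256(b"x" * 200).hexdigest()  # a 200-character password

print(len(short_input))  # 64 hex characters
print(len(long_input))   # 64 -- same storage size in the database either way
```

So the database column holding the hash can be a fixed size, regardless of what users choose as passwords.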

Because of this, you can sometimes spot web sites that don’t use hashes to securely store passwords: they enforce a maximum length for passwords.  This isn’t always the case, but it can be one indicator that the site’s security has been poorly designed.  If you are signing up for an account on a web site and they have a low limit on the length of the password, like 12 characters, look for other signs of poor security or privacy practices.  And definitely don’t reuse a password from another site.  Or just steer clear.

The downside to using hashes is that if you forget your password, the site has no way of sending it to you… because they actually don’t know it.  That is why sites generate a brand-new, random password and send it to you via email when you forget yours.  They honestly have no idea what your password was, so the only solution is to create a new one for you to use temporarily until you set your own.

The whole process is considerably more complicated than I’ve described here – or at least it should be.  Just using a hash isn’t sufficient, either, because we’ve got affordable computers these days that can calculate billions of hashes per second and are therefore capable of brute-forcing short passwords very quickly.  (A 6-letter password, for example, would be cracked hundreds of times over in just one second using a simple hash).  But for a site to use a hash on passwords is one step in the right direction.
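
The arithmetic is sobering, and the standard fix, a deliberately slow, salted hash, is also in Python's standard library.  The hash rate below is an assumed figure for illustration, not a measurement:

```python
import hashlib
import os

# Brute-forcing a 6-letter, lowercase-only password with a fast hash:
combinations = 26 ** 6              # 308,915,776 possibilities
assumed_rate = 10_000_000_000       # assume 10 billion simple hashes/sec
print(combinations / assumed_rate)  # a small fraction of a second to try them all

# The countermeasure: a salted, deliberately slow hash such as PBKDF2,
# which forces hundreds of thousands of hash iterations per guess.
salt = os.urandom(16)
slow = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)
print(len(slow))  # still a fixed-size 32-byte value to store (plus the salt)
```

The per-user salt also means two users with the same password get different stored values, defeating precomputed lookup tables.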

Saturday, October 11, 2014

Canon vs. Nikon vs. Sony

We’re all familiar with the expression “the grass is always greener on the other side of the hill.” This applies in many areas of life.  And, of course, that means photography. 

I’m primarily a Canon shooter.  I use a Canon 6D as my primary camera, with several other bodies for backup or other shooting situations.  I’ve currently got 5 working Canon DSLRs, as well as three film bodies, and I’ve amassed quite a large collection of lenses, flashes, and other gear as well.  And I’ve been very happy with all of it.  But sometimes you start to doubt your choices when you start reading articles online about how Nikon’s and Sony’s cameras are capable of producing images with more detail, greater dynamic range of bright vs. dark, and a wider range of colors.  Did I choose the wrong brand?  Am I making a mistake by sticking with what I’ve got?  Or should I sell it all and switch?

So I’ve spent a bit of time reading up on the advantages and disadvantages of the different brands.  I even bought a Nikon camera and a couple of lenses so I could see what they offer.  I’ll save my conclusion for the end, so bear with me for a bit.

I’m making all comparisons between similar models… so, for example when I make a statement about a feature, I’m referring to competing models between brands… I won’t compare features on high-end models of one brand to low-end models of another brand.  I’m trying to be as objective and honest as I can be.


If I were to go by specifications alone, both Nikon and Sony produce camera bodies that capture more detail in terms of resolution, dynamic range, and breadth of colors.  The numbers are pretty clear on that.  As far as Nikon goes, they’ve stuck with the more traditional SLR design, with an optical viewfinder and a reflex mirror that moves out of the way of the sensor when shooting an image, whereas Sony is producing basically all mirrorless designs, relying on electronic viewfinders.  I won’t get much into the reflex vs. mirrorless debate here, but I do prefer the optical viewfinder because of its significantly higher resolution and lack of delay.  Someday mirrorless designs may make up for those issues, but as someone who usually shoots with manual focus, the highest resolution viewfinder is essentially a must-have for me.

In terms of autofocus ability, each brand has standout models.  I don’t really believe that any brand has an inherent advantage over another.  Having used both Canon and Nikon bodies, I prefer the way that the Canon models work.  Especially in low-light situations.


As of today, Sony probably has the best looking video when comparing models with similar feature sets.  Canon is the other standout here, with its pretty amazing Dual Pixel autofocus on the 70D.  Both Nikon and Sony produce video with more detail.  Nikon still seems to have more trouble with the “Jello” effect than the other two brands, though they have gotten better.  Certain Canon models have more moiré issues than the others, so that needs to be considered as well.


Here’s the make-or-break for me… whatever brand I go with has to have good quality lenses, and a wide variety of them, at affordable prices.  I’ve found that sticking with OEM lenses usually gets you the best results when compatibility, affordability, and autofocus are taken into consideration. 
So here’s the bottom line… Sony’s selection of lenses pales in comparison to both Canon’s and Nikon’s.  The difference is huge.  There are less than a dozen lenses for the Sony “A” series, which is really the only line I’d potentially be interested in.  So, for me, Sony is out.  They have some amazing lenses, but being limited to just a few (especially considering their cost) isn’t viable for me.  For people without sophisticated lens needs, and with significant budgets, Sony could be a great choice.  But I use a really wide variety of lenses, especially primes, and I really don’t think I’d be able to give that up.

So I’m back to the traditional Canon vs. Nikon debate.  What I’ve found, though, when researching this (primarily on DxOMark, though many YouTube review videos were taken into consideration) is that unless you’re willing to spend a lot of money on Nikon lenses, Nikon’s image quality really suffers relative to equivalent Canon lenses.  Nikon produces just a handful of lenses that autofocus on the less expensive bodies under $1000 and are rated to give more than about 10 megapixels of resolution, whereas Canon has a lot to choose from.  Comparing Canon to Nikon lenses, in almost every case the Canons do better in terms of sharpness, which for me is the most important thing.  I don’t want to spend time taking images only to come home and find out that they are soft.  This is especially true with prime lenses, where Canon has a huge advantage.  Canon’s lenses often resolve nearly twice as much detail as the Nikon equivalents.

Take the Nikon AF 50mm f/1.8D vs. the Canon EF 50mm f/1.8.  The Nikon gets an 8 MP score for its sharpness, whereas the Canon gets 14 MP.  And the Canon is cheaper.  And it autofocuses on all bodies, not just the higher-end models like the Nikon (Nikon “AF” lenses do not autofocus on the D3xxx or D5xxx series of cameras – you have to step up to “AF-S” lenses or a more expensive body for that).  The difference in performance between these two lenses isn’t at all atypical when comparing equivalent models.

To be fair, Nikon also offers a 50mm AF-S f/1.8G lens, which does autofocus on all bodies and gets a 15 MP score, but it is more than twice as expensive as Canon’s ($220 vs. $100).  And it is one of only a few primes in Nikon’s lineup under $1000 that get a score over 10 MP.  Every one of Canon’s prime lenses scores 14 MP or higher.  Performance with the kit lenses included with camera bodies is similar… Canon’s are all better.  For all of the love that Nikon gets from its owners, I was shocked at the difference.  And choices on the Nikon side become much more scarce if having autofocus on a lower-end body is a requirement.  I think there are only two AF-S Nikon primes under $1000 able to resolve 14 MP of detail or better.  Canon has over a dozen.

One could argue that you don’t have to go with OEM lenses.  And that is true.  My own experience with third-party lenses, though, has been disappointing.  Not necessarily in terms of image quality (though they do often lag behind), but of build quality.  Every third-party lens I’ve ever bought has broken on me.  Every single one.  But I’ve never had anything go wrong with any of my OEM lenses.


So what does it boil down to for me?  I’m sticking with Canon.  Having cameras with the best available sensors would be awesome, but if the options for the glass to put in front of them aren’t as good, I’m afraid I just couldn’t make the switch.  It would be nice if you could put Canon glass in front of a Nikon, but without complicated adapters, which inherently reduce image quality, that just isn’t possible.  If I were insanely rich and could afford boutique lenses, the story would probably be different.  But I’m very much on a budget, so I’ve got to stick with more affordable choices for now.  And for today, that still means Canon.

So it boils down to this: Nikon’s choices for someone who likes to shoot prime lenses with the highest quality image are weak compared to Canon.  And Sony doesn’t even show up for that contest.  Those are the deciding factors for me.

I know that there are going to be a lot of people upset with my conclusion.  And they’ll even use DxOMark’s data to try to make their point.  Keep in mind that I’m making my decision based solely on achieving the best quality image while keeping lenses affordable.  If budget goes out the window, then the decision very likely could be different.

Sunday, February 16, 2014

Best Kept Secret in Technology

Every once a while a technology product comes along which is just an absolute bargain.  And very often those bargains are unknown to the general public.

The one that I want to tell you about today is the Nokia Lumia 520 (or 521) smartphone.  I’m sure you’re thinking, “but I already have a smartphone!”  I’m suggesting this not as a replacement for your current smartphone, but rather as something that is neat to own in addition to it.  And it would be a great thing to own for anyone who doesn’t already have a smartphone of their own.

Most of the time when you buy a cell phone you have to buy it with a contract, or pay through the nose for it up front.  Most smartphones, if you buy them outright, will cost $500 or more, and if you don’t pay that out-of-pocket it is figured into your monthly bill one way or another.  The Lumia 520 and 521 are inexpensive (both are easily less than $150) and don’t require you to sign a contract or even activate the phone.  But why would you ever want a phone without service?

Well, consider all of the things that people like to do with their phones… browse the web, check for email, listen to music, watch videos, play games, get driving directions.  Imagine being able to do all of that without a monthly payment.  Zero.  None.  No contracts, no monthly payments, ever, unless you want to.  That’s what’s great about these two models of phone.

A few scenarios…

Much of the time when you want to listen to music, it is music you already own – you don’t need an active Internet connection to stream it.  Maybe you have an iPod Touch that you listen to music on.  But those start at $229.  The Lumia 520/521 play all of your music just like the iPod Touch does – and in my opinion do a better job of it.  And they cost a lot less.  With an iPod, if you run out of storage you have to buy an entirely new device.  With the Lumia 520/521, if you run out of storage you can buy a Micro SD card (up to 64 GB) and pop it in.  The Lumia 520 plus 64 GB of storage is less than half the cost of the cheapest iPod Touch.  And it has an FM radio too, which the iPod Touch does not.
Music + Videos Hub
Now say you want directions from A to B.  Yes, I know that smartphones already do that.  But to do that they nearly always require Internet access and a data plan.  Because the Lumia 520/521 runs Windows Phone 8, you can pre-download maps (state-by-state or country-by-country) at home over WiFi before you leave, and store them on the device for use even when you don’t have Internet access.  You get door-to-door directions, like a dedicated GPS unit, for a lot less than a dedicated GPS unit.  And unlike the budget GPS units, it even knows how to pronounce street names so directions are specific – “turn right on Juniper Avenue” instead of “in 300 yards, turn right.”  If you do activate the device as a phone or tether it over WiFi to a smartphone or tablet, you even get up-to-the-minute traffic information, so it can route you around problems.  And I actually believe that Nokia Drive is the best navigation software out there for any smartphone.  It’s fast, accurate, and touch-friendly so it works great in the car, and best of all, it’s totally free.  And since it doesn’t require a data connection, it works in the middle of nowhere when your cell phone won’t.  (Nokia, incidentally, owns Navteq, which easily has the best map data anywhere – easily besting Apple [cough] and Google – and this is where the map data for Windows Phones comes from.)

Watching movies is easy too.  Since you can pop a Micro SD card in, you can store a lot of video for the kiddies to watch in the car.  It isn’t the biggest or best screen, but it’s more than adequate.  And at 800x480 pixels, a lot higher resolution than you’d get from an Android device in the same price range.  Most of those are 320x240 – or maybe VGA if you’re really lucky.

Say you’ve got a kid that is bugging you about wanting an iPod Touch or iPhone to play games on, but you’re not excited about the cost.  These two Nokia phones do an excellent job of playing games.  It’s true that you won’t get the same selection of games you get on an iPod, but you also aren’t shelling out a ton of money for something that is probably going to get lost, broken, or stolen and have to be replaced over and over.  If one of these phones gets lost or broken, it isn’t that big a deal because they’re so inexpensive.

Games Hub
And of course whenever you’re in range of WiFi you get all of the benefits of a smartphone that you’ve come to expect.  It will check your email (the best email client on a smartphone, I think), it will browse the web (not the best browser, but certainly more than serviceable).  And play games.

So why a Windows Phone?  Well, because in this price range nothing else comes close.  Apple doesn’t make an i-device for less than $200, and anything in that price range running Android is just, well, a downright ugly experience.  The 520/521 might be the slowest Windows Phones out there, but they aren’t slow.  They feel very fast.  They’re certainly a lot faster than anything running Android at three times the price, and faster than any Apple device more than a year old.  And they don’t feel cheap like many similarly priced devices do.  They feel well built, so they should hold up to the abuse that you or your kids throw at them.

The only difference between the two is that one is sold by AT&T and the other is sold by T-Mobile.  You don’t have to have an account with either carrier to buy one – just order it from Amazon or pick it up at Wal-Mart.  As of this writing, the Lumia 520 is only $59.99 at Amazon, and the 521 is $119.99.  Again, you don’t have to sign up with the carrier if you don’t want to.

These two phones are absolutely the best deal on technology out there today.  You get the functionality of a good smartphone at a tiny fraction of what it would cost you to get it otherwise.  Nothing else even comes close right now.

The one thing to note is that these phones are locked to either AT&T or T-Mobile.  Which means you can’t just pop in a SIM card from the other carrier and have it work.  If you want to use one as a phone, only AT&T SIMs will work in the 520, and only T-Mobile SIMs will work in the 521.  So if you want to have one as a backup phone, buy the one that is tied to your carrier.  But, again, you don’t have to be (or become) an AT&T or T-Mobile customer.
They also only come with 8 GB of storage.  So you probably will want to consider getting a MicroSD card for additional storage.
Is this the perfect device?  Certainly not.  But for the price, nothing else even comes remotely close.
Bonus tip: If you do happen to be a T-Mobile customer, go to their web site or one of their stores and sign up for a free tablet account, even if you don’t have or plan to buy a tablet.  You get 200 MB of 4G data every month at no cost (and if you go over that data allotment they just slow you down – there are never any overage charges).  You can then use that SIM card in the Lumia 521 and use it to access the Internet on the phone without paying for a phone line – you won’t have to pay a dime in service charges, ever.  You won’t be able to make phone calls (unless you use an app like Skype over the 4G connection), but you can do everything else you'd be able to do on a smartphone, and it won’t cost you anything to do so.

Thursday, August 22, 2013

Software Development: Old School or New School?

Since I started writing software when I was 5, I’ve been doing it a long time.  I’ve seen a lot of changes in the technology – from the BASIC language all the way to assembly, desktop to server, fat client to thin client, you name it.  But the trend I’ve seen over the last 10-15 years is troubling.

There is absolutely no question that the Internet has changed things radically.  Much of that change is good.  There is, however, an aspect of the Internet and the way that software is written that is disturbing.  Many of the time-tested, well-thought-out, efficient ways of coding are disappearing and are being replaced by junky, ill-conceived, incredibly inefficient substitutes.  People that are learning to code now are mostly coding for the web, and it is very upsetting how little they understand of the actual science behind computing, mostly because the software development tools in use today don’t even support the best, time-tested concepts.

As part of my job I do software development in both Delphi (a modern-day variant of Pascal, very similar to Microsoft’s C#) and PHP.  Delphi is extremely efficient, and has adopted most of the best ideas that have ever come along in computing.  PHP is at the opposite end of the spectrum – extremely inefficient, lacking support for many of the most basic tools that real high-level languages offer.  If you start to talk about JavaScript (the programming language that powers web browsers) the situation is even more dire – it is far more basic than even PHP.  Yet nearly all of the hype you hear in development is around HTML 5, JavaScript, Java, and PHP.  All of which are, frankly, very immature, and are evolving at a glacial pace.

One of the technologies that is falling by the wayside is object-oriented programming.  It allows developers to create virtual objects that you can copy, manipulate, and act upon extremely easily and efficiently.  Java is object oriented, but it has other problems of its own (efficiency and security being the main two) that are causing it to fall out of favor quite rapidly.  PHP has some support for objects, but frankly it’s pretty terrible.  HTML and JavaScript don’t even attempt to support it at all.  People that are learning to program now don’t seem to have any kind of understanding of how much easier their lives would be if they had access to object-oriented development tools.  And the situation is actually getting worse, not better.

Another concept that is lost on the web is code compilation.  Pretty much ever since the dawn of computing, developers have taken code and run it through a compiler to produce the set of instructions that are native to their computer, so that the code doesn’t have to be translated at the time the software is run.  Consider how much more efficient you are at speaking your own language than you would be at trying to converse in Korean by using a dictionary, having never heard or seen a word of Korean before.  Compiling does the translation ahead of time (just once) so that software runs as quickly as possible.  Yet again, web technologies don’t do compilation – they do the “translation” at the time the code is executed, making things incredibly slow in comparison.  In addition, since the translation is done at run time, you have to distribute the actual source code (the original code you’ve written) in order to run your software… so anybody who wants to could take your code and modify and redistribute it… or in cases where you’ve got content you want to protect, like music or a movie, everybody can see exactly how it is protected so that protection can be removed.  Java has the ability to do a sort of rudimentary compilation just before code is executed, but it is still far from true native code, and it still slows you down considerably.

It’s almost like about 15 years ago people said, “We don’t care about all of the research and learning that has occurred over the last 50 years.  We’re going to come up with a new way of doing things, no matter how good some of your ideas may be.”

As someone who works in both worlds it is incredibly frustrating.  Especially when I have to interact with people who have only ever spent time in the newer web technologies, because they don’t even have a remote concept of what they are missing out on.

There are a ton of other great technologies that seem to be falling by the wayside.  True code debugging (the ability to see what is happening internally inside of software as it is running, making testing much, much easier) is extremely rare.  RAD (Rapid Application Development), once considered the epitome of efficient design and coding, is almost unheard of today.  True integration with databases is pretty much gone too, and in its place are incredibly difficult-to-program, very bloated communication methods that make coding difficult, especially if it is to be done securely.  Forgive me if fname.value=’Frank’ is easier (and conceptually much more sound) than “UPDATE users SET fname=’Frank’ WHERE userid=56”, but this is exactly the sort of difference I’m talking about.  For the most part web developers aren’t even remotely aware that the tools we had for doing things were much better than the best of what they have access to today.  It’s really quite sad.

I’m not saying for a minute that these newer tools don’t have a place.  They do.  But very little, if anything, is being done to improve the tools and incorporate the lessons that 70 years of computing science have taught us.  There’s almost a wall there where anyone who works in the newer tools will automatically dismiss ideas from the old school just because they are old school, not because there is any real reason to do so.

So I have to admit that I don’t really enjoy having to work with HTML and JavaScript and PHP.  They all seem incredibly antiquated to me.  Almost like I’m stepping back in time 30 years.  In many cases it is much harder to do things in the “modern” tools than it was in the contemporary tools of the early 1980s.  Things that I’ve taken for granted in what I would call a “real” development environment just don’t even exist when working with their “modern” counterparts.

Would you enjoy having your Ferrari swapped out for a Model T?  And somehow I’m expected to like it.

The result of all of the backwards ways of doing things with “modern” tools is that it takes forever to get anything done.  I can easily write “equivalent” code in Delphi five times faster than it can be done in PHP, even though at this point I probably know PHP as well as anyone could.  And, on average, it takes about half as many lines of code in Delphi as it does in PHP to accomplish the same thing.  And yet the Delphi code literally executes more than a hundred times faster, and provides a better user experience.  Yet somehow people are critical of my decision to continue to use such a tool.  Only because they don’t understand it, and in most cases refuse to even try.

Much of the stagnation in web technologies is due to the bickering and in-fighting that happens between companies that build tools for the web.  HTML 5 is, in reality, very poorly suited for what we are asking it to do today.  And everybody involved wants their own ideas for improving it to become the standard, but nobody else is willing to adopt those ideas because they aren’t their own and they can’t profit from them.  In the 1990s and early 2000s, for example, Microsoft tried to extend HTML with new features in Internet Explorer and got shot down by everyone else, because they weren’t “following the standard.”  Well, yeah, they weren’t… because there wasn’t a way of doing the things they wanted to in the standard.  Yet when people do actually get together to try to improve the standard, nobody can agree on anything so nothing gets done.  We’ve been talking about HTML 5 for nearly ten years, and it is still so poorly supported across different browsers that you almost can’t use it.

Trying to create interactive web pages is an absolute disaster – programmers have to take care of every low-level event (click button, move mouse, release), and those events differ from browser to browser.  Want to play music or show video on a web page?  Nobody can even agree on how to do that, so you have to produce three separate versions of every file, then figure out which version to use when you view the page.  HTML wasn’t ever even designed to handle any multimedia other than graphics, which is why so many web pages use Adobe Flash, despite the fact that everybody hates it.  Want to do things like drag-and-drop?  Good luck.  It’s really hard to do, and usually has to be coded multiple different ways to work in all popular browsers.  But in my ‘old school’ Delphi, drag-and-drop doesn’t even require writing a single line of code.  Just set an object property saying ‘yes, you can be dragged’ and ‘you can accept dragged objects.’

Adding database interactivity to a web page is an exercise in patience and frustration.  There still isn’t an official way for a web page to pull data from (or insert data into) a database.  It’s still a very tedious and time-consuming thing to do.  Don’t even get me started on how nobody does it securely, because that’s even harder to do.  But we’ve had databases for 50 years, so basic interactions like this should be a cakewalk.  In Delphi, all I have to do to retrieve record 56 from the users table of the database is users.FindKey([56]).  The same thing in PHP is a minimum of 4 lines of code – much more if you do proper error checking.  And in JavaScript?  Well, don’t plan on working on anything else that afternoon.
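To make that contrast concrete, here is a rough sketch in Python (a stand-in I'm using purely for illustration – the idea is the same in any language) with its built-in sqlite3 module.  The users table, its contents, and the Table wrapper class are all hypothetical, invented for this example:

```python
import sqlite3

# Hypothetical setup: an in-memory database with a users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userid INTEGER PRIMARY KEY, fname TEXT)")
conn.execute("INSERT INTO users VALUES (56, 'Frank')")

# The SQL way: build a query, execute it, check the result, unpack the row.
cur = conn.execute("SELECT userid, fname FROM users WHERE userid = ?", (56,))
row = cur.fetchone()
if row is None:
    raise LookupError("user 56 not found")

# A thin wrapper in the spirit of the dataset-style one-liner users.FindKey([56]):
class Table:
    def __init__(self, conn, name, key):
        self.conn, self.name, self.key = conn, name, key

    def find_key(self, value):
        cur = self.conn.execute(
            f"SELECT * FROM {self.name} WHERE {self.key} = ?", (value,))
        found = cur.fetchone()
        if found is None:
            raise LookupError(f"{self.key}={value} not found in {self.name}")
        return found

users = Table(conn, "users", "userid")
print(users.find_key(56))  # one call hides the query/execute/check dance
```

The wrapper does exactly the same work as the raw SQL above it; the point is simply that dataset-style tools bake that boilerplate in, so application code stays at the one-line level.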

It goes on and on.  Want a web page to interact with a joystick?  Not happening.  Or generate output for a printer with full control over how it looks?  Again, not really possible.  How about photo editing?  Not very plausible in HTML.  How about a page that uploads a picture to your cell phone over USB?  Nope, HTML doesn’t allow it.  And it will likely be at least a decade before such things are actually possible and usable.

All of the above problems had already been pretty much solved by traditional development tools long ago. 

And somehow many of the companies that have produced the strongest tools and environments for software development in the past are abandoning the more mature technologies.  Microsoft is trying to force everybody to write Windows 8 apps, despite the fact that this environment, too, is missing some of the best things from their traditional desktop environment.  Apple invests very little in desktop technologies.  And Linux stagnated years ago.

It’s really pretty sad.  If people were smart they’d take the best ideas from wherever they come from instead of trying to reinvent the wheel over and over.  And as it stands today, the technologies that power the web – HTML, JavaScript, etc. – are more of a wooden, square wheel than most developers realize.  The traditional ways of doing things don’t have to be left behind – they could easily handle the same tasks that the newer technologies are doing, and in most cases do a far better job of it.  Or, some of the concepts from traditional development could be added to the newer tools.  But, for some reason, never the twain shall meet.  It’s frustrating having to choose between high functionality, quick development, and high performance on the one hand, and working on the Internet on the other.  It would be really nice to be able to have both.

Saturday, May 25, 2013

Cameras–Is it time to upgrade?

One funny thing that happens to me a lot is that many people I know outside of work seem to think that I do audio, video, and/or photography for a living.  My job is in software development, but that is apparently less glamorous than multimedia to the general public, so for some reason I'm known better to people in my personal life for the things that I like to do with media rather than creating software.

So one of the questions I often get asked is “which camera should I buy?” Or the same question phrased differently, “should I get a new camera?”

For some reason nearly everyone interested in photography gets stuck on camera technical specifications. For example, the first question people ask me about one of my cameras is “how many megapixels is it?” when in reality that number doesn't really mean much of anything these days, as I'll discuss later.
So in an attempt to pacify everyone, here are some general guidelines on what cameras to look at, and whether you should upgrade your existing camera to something newer or more expensive.

Digital SLR

First, advice for people who already have a digital SLR camera and are thinking about upgrading…

You probably don't need to upgrade if…

  • Your camera has a resolution of 6-8 megapixels or better, and you do nearly all of your shooting outdoors during the daytime.
  • Your camera model was released during or after 2009.

You may want to consider upgrading if…

  • Your sensor resolution is less than ten megapixels, you do a lot of cropping on images, and you create large prints.
  • You shoot at night or indoors a lot, and for whatever reason don't want to use a flash or a large aperture (f-stop less than 2.0) lens.
  • The limitations of your equipment are preventing you from getting the shots you want.


While most digital SLR cameras released in the last 10 years or so are capable of really good pictures during the daytime, many models released before 2009 struggled to perform well in low-light situations.  Then in 2009 something magical happened, where all of a sudden cameras from all manufacturers were being released with better clarity and low-light sensitivity with much higher usable ISO settings.  If you shoot in low-light (such as indoors or at night) having a 2009-model or newer camera can make a big difference.
If you shoot primarily in daylight, or with a flash, or a large aperture lens, you probably don't need to upgrade.  Even early model cameras going back to 2004-2005 still do really well in these situations, and you wouldn't gain much by moving to a newer camera.

If you really have an itch to buy new camera equipment, lenses are always a much better investment than electronics.  A good quality lens will make a bigger difference in picture quality on an older body than a cheap lens on a newer, more expensive body.  And lenses hold their value really well – oftentimes you can resell a good lens for the same price you originally paid, or take just a minimal loss on it.  The value of anything electronic, especially digital camera bodies, plummets very quickly.

What should I get?

Even the most inexpensive digital SLRs take amazing pictures these days, and most models released since about 2010-2011 shoot pretty spectacular video as well (as long as you are willing to focus manually).  Unless you have a very specific need for a higher-end model, the cheaper (and usually lighter and smaller) bodies make a lot of sense.  I own several SLRs, and when I want to take a camera with me that isn't too big or bulky, I take my 2010-model Canon T2i because it is small, lightweight, and takes fantastic pictures.  I only use my bigger and bulkier SLRs when I need fast control over exposure settings.  The bigger, more expensive models really don’t take better pictures than my much cheaper T2i.  They're just faster to navigate and provide professional-level control.  (As for lenses for my T2i, my 10-22mm wide-angle goes with me for indoor shots, the 50mm or 85mm for portraits, and the kit 18-55mm, 28-135mm, or 24-105mm for outdoor shots depending on how appropriate a big lens is for the situation.)

I’m primarily a Canon guy, so I really like the Canon T3i, the T5i (adds a touch screen), and the 60D (no touch screen, but adds more buttons for more control; typically sold body-only).  All are well under $1000, and are excellent.  Full-frame bodies like the 6D or 5DmkIII are of course amazing, and they give better low-light sensitivity, a wider field of view, and much more control, but at much greater cost – $2000 or more, without a lens.  Unless you're shooting professionally it’s hard to justify the price.  The SL1 is also nice because of its tiny size (and it is tiny for an SLR), but it is otherwise essentially the same as the T5i without the flip-out screen, at considerably greater expense.

Canon also makes a lower-end model called the T3, which takes good pictures but is difficult to recommend, because you can get a lot more camera with a used T2i (sometimes for less), or a T3i for not much more money.  The LCD screen on the T3 is quite poor, and doesn't flip out like the T3i's (for easier shooting above or below eye level).  The T2i/T3i is also faster, has a lot more resolution, higher quality video, and much better low-light sensitivity, among other enhancements that to me make it a better buy.  But if the T3 is what you can afford, you're still going to get great pictures.

Nikon also makes great cameras, but I don't follow their lineup closely enough to make specific recommendations.  The one thing to watch out for on Nikon cameras is that the less expensive bodies (< ~$700) don't have the mechanism to autofocus on “AF” series Nikon lenses, and those lenses happen to be the less expensive ones.  So plan on spending considerably more on lenses with Nikon than Canon if you buy a cheap body.  If you get a D90 or more expensive model, the AF lenses will autofocus and the less expensive lenses are fine.

I’d be a little careful about buying other DSLR brands, as the lenses made for those cameras have inconsistent quality and you have to be really careful about what you buy.  If you invest in Canon or Nikon equipment you can be assured that you're always getting something at least very good, if not excellent.  Neither brand makes bad stuff.

If you're just starting out and want to buy your first digital SLR, get the T3i or T5i.  Anything more complicated will be overwhelming because of its complexity, and won't give you better pictures.  The kit lenses included in the box have really good image quality these days, and will be sufficient for new photographers.  Once you begin to understand photography a little better you can step up to a better lens for more control over what you shoot, and you won't have to upgrade your camera.

With that said, everyone with an interest in photography and a digital SLR camera should own a 50mm prime lens: Canon’s 50mm f/1.8, or Nikon’s 50mm f/1.8 in manual-focus or autofocus versions (the autofocus version will only autofocus on the more expensive Nikon camera bodies, not on base models).  They have excellent image quality and are very inexpensive.  They give you the ability to shoot pictures with a soft, out-of-focus background that you can't get otherwise without spending a lot of money, and as such they make spectacular portrait lenses.  They also allow you to shoot indoors without a flash in moderate lighting.

In the end, though, if you already have a digital SLR and it doesn't have any glaringly horrible problems, you're fine sticking with it rather than upgrading.  Spend the money on a new lens instead.

Point and Shoot

The quality of point-and-shoot cameras is all over the map.  So it is pretty hard to make specific recommendations. 

For the most part you get what you pay for.  If your camera cost you $150 or less and you're thinking about upgrading, I'd just go ahead and do it.  A P&S camera that sells for $250 is always going to be a significant upgrade over anything ever sold for less than $150, and is probably worth the money.

Point-and-shoot cameras have also improved significantly over the years.  A P&S camera from more than 5 years ago is really going to pale in comparison to something newer. 

So as a general guideline, I’d say that if your camera is more than 3 years old, or cost you less than $150, yeah, you should upgrade if you're considering it. 

What should I get?

Camera manufacturers release new models of their point-and-shoot lines quite often – it isn't unusual for a model to be discontinued and replaced after just 6 months.  Specific models are something I don't even try to keep up on, so I won't make specific recommendations – they'd be out of date rather quickly anyway.

So instead I'll give you one piece of buying advice… ignore the numbers.  Ignore the resolution (megapixels), ISO sensitivity, etc. entirely.  Despite what the difference in numbers might tell you, the performance of nearly all cameras in this category is about the same, given similar lenses. 

The one biggest factor to look at is the size of the lens.  Specifically, the glass in the lens.  The bigger the lens, the more light it collects, which improves image quality.  A small difference in lens size can make a big difference in picture quality.  So I'd recommend buying the camera with the biggest glass within your budget.
The other thing to look at is the optical zoom capability.  Many times manufacturers will try to hide this and give you a digital zoom number.  Digital zoom is useless.  Only look at the optical zoom.  Buy whatever suits your needs.

The other thing I'll mention is Optical Image Stabilization technology.  This compensates for the shake that is inherent in cameras that are being held by hand.  It is especially important in point and shoot cameras because they are tiny (and therefore harder to hold steady) and don't handle low-light as well as SLRs, so they require longer exposures which increases the likelihood of motion blur.  IS technology is very highly recommended unless you shoot on a tripod or only take close pictures in daylight.

As for brands, Canon is the clear winner in this category.  They consistently produce the best images, and are generally quite easy to use, relatively speaking.

Smartphone cameras have gotten much better in the last few years, but they still pale in comparison to point-and-shoot models.  Not only does a P&S produce much better quality pictures, it also has a real zoom capability.  The only smartphone cameras I've found that do what I would even consider a passable job are the Nokia Lumia 1020, 920, 928, and 925, or the HTC One.  Not even the iPhone 5 or any of the Samsung Galaxy S series are any good unless you're shooting in the noonday sun.

Other Camera Types

There are a few other types of cameras out there, such as mirrorless, and rangefinder, but getting into a discussion about those is well beyond the scope of this blog post.  I'd be happy to answer questions if you're considering one of these other types.

A Final Word about Megapixels

The more megapixels the better, right?  At least that’s what camera manufacturers and salespeople would like you to believe.  But that isn't necessarily the case, especially on small cameras like point and shoot and smartphones.

The trouble with increasing the number of pixels is that in order to add more pixels, the pixels themselves have to become smaller.  And smaller pixels mean that less light is captured, which in turn creates noisier (less clear) images and less ability to handle low-light situations like you would find indoors or at night.
Generally speaking, as long as a camera has 6-8 megapixels of resolution, it is sufficient.  In fact, the higher you go above that, the more processing has to be done and the blurrier your images become as the extra noise is removed, especially when shot under conditions other than sunlight in the middle of the day.  An 8-megapixel point and shoot is generally going to be preferable to one with a 13-megapixel sensor, especially on small sensors like those in a cell phone.
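As a rough illustration of the trade-off, here is a small Python calculation.  The sensor size (about 6.17 x 4.55 mm, typical of a 1/2.3-inch point-and-shoot chip) is my assumption for the example, not a figure from this post:

```python
# Assumed 1/2.3" point-and-shoot sensor dimensions, in micrometers.
SENSOR_AREA_UM2 = (6.17 * 1000) * (4.55 * 1000)

def pixel_area_um2(megapixels):
    """Area available to each pixel, ignoring the gaps between pixels."""
    return SENSOR_AREA_UM2 / (megapixels * 1e6)

# Going from 8 MP to 13 MP on the same sensor shrinks each pixel's
# light-collecting area to 8/13 of what it was (about 38% less light).
ratio = pixel_area_um2(13) / pixel_area_um2(8)
print(round(ratio, 3))  # -> 0.615
```

The sensor area cancels out of the ratio, which is the point: on the same size chip, more pixels always means less light per pixel, regardless of the exact sensor dimensions.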

Higher resolution pictures also take up more disk space.  Double the number of pixels, double the size of the file.

Always remember that the highest resolution “normal” computer monitors are about 2 megapixels at best.  And 3 megapixels is enough for printing an 8x10.  You only need more than 3 if you are quite exuberant in your cropping of images (to simulate zoom after-the-fact, for example) or if you are printing at 11x14 or larger.  Any extra resolution is wasted, and takes up extra disk space.  So, with all other things (*cough* lenses *cough*) being equal, choose a camera with a resolution closest to the 6-8 MP range.  Even photography magazines, which are notoriously picky, only require about 5 MP for print.
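The arithmetic behind those numbers is easy to check for yourself.  This Python sketch assumes a print needs roughly 200 dots per inch to look sharp, and that an uncompressed 24-bit image uses 3 bytes per pixel – both my working assumptions for the example, not figures stated above:

```python
def megapixels_for_print(width_in, height_in, dpi=200):
    """Megapixels needed to print width_in x height_in at the given dpi."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

def uncompressed_mb(megapixels):
    """Approximate uncompressed 24-bit file size, in megabytes."""
    return megapixels * 3  # 3 bytes per pixel, 1 MP = 1e6 pixels

print(megapixels_for_print(8, 10))    # 8x10 print  -> 3.2 MP
print(megapixels_for_print(11, 14))   # 11x14 print -> 6.16 MP
# Doubling the pixel count doubles the (uncompressed) file size:
print(uncompressed_mb(8) / uncompressed_mb(4))  # -> 2.0
```

At 200 dpi an 8x10 works out to about 3.2 MP and an 11x14 to about 6.2 MP, which lines up with the 3 MP and 6-8 MP figures above.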


Chances are that if you already own a digital SLR, it's fine.  But if you own a point-and-shoot that isn't brand new or didn't cost more than $250, you could benefit from an upgrade.

SLR cameras are more of a long-term investment while point-and-shoot cameras are meant to be more-or-less disposable.  And the lens on a camera makes more difference in picture quality than the camera itself.  And aside from the top-of-the-line models, for the most part you get what you pay for.  Keep those things in mind while shopping and it will be hard to go wrong.
