Adding a Polygon Search to CFMongoDB

Written by Sean Ryan. Posted in App Dev

CFMongoDB rocks my socks…when I need to use ColdFusion.

I noticed that CFMongoDB doesn’t natively support a polygon search 🙁 It’s OK though, because I wrote one, and as far as our team’s testing is concerned, it works.

I added a method to DBCollection.cfc directly under the find method called findWithin_Poly. It follows.


function findWithin_Poly(string loc, array poly){
  // convert the CF array of [x,y] pairs into a java.util.ArrayList
  var polyConverted = CreateObject("java","java.util.ArrayList").init(ArrayLen(poly));
  var i = 1;
  for ( i = 1 ; i lte ArrayLen(poly) ; i++ ){
    polyConverted.add(poly[i]);
  }
  // build { loc : { $within : { $polygon : [...] } } } per the Mongo Java API
  var polyObject = variables.mongoConfig.getMongoFactory()
              .getObject("com.mongodb.BasicDBObject").init("$polygon", polyConverted);
  var within = variables.mongoConfig.getMongoFactory()
              .getObject("com.mongodb.BasicDBObject").init("$within", polyObject);
  var query = variables.mongoConfig.getMongoFactory()
              .getObject("com.mongodb.BasicDBObject").init(loc, within);
  var search_results = collection.find(query);
  return createObject("component", "SearchResult").init( search_results, structNew(), mongoUtil );
}

As you can see, the process is very straightforward: I build up a BasicDBObject according to the Mongo Java API and call a slightly different find method on the Mongo collection.
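Concretely, with loc = "loc" and the triangle from the example further down, the query this method assembles is equivalent to this Mongo shell query (a sketch of the query shape, not output from the code):

db.people.find({ "loc" : { "$within" : { "$polygon" : [ [3,3], [8,3], [6,7] ] } } })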

Parameters

The loc parameter is a string that represents the key of the document that holds the coordinates of the point. I used loc because that’s what I saw all over the documentation, but it’s clear that the key does not need to be called loc.

The poly parameter is an array of arrays. Each element of the poly array is itself an array of exactly two elements: an x and a y. Together, these points describe the boundary of the polygon.

Format

The format of the poly parameter is important – as is the data in the collection. CF will automatically make your array elements strings, but they need to be doubles – Java doubles.

To do this, cast them with javacast(). See the example below.


poly = ArrayNew(1);
polyA = ArrayNew(1);
polyA[1] = javacast("double",3);
polyA[2] = javacast("double",3);
polyB = ArrayNew(1);
polyB[1] = javacast("double",8);
polyB[2] = javacast("double",3);
polyC = ArrayNew(1);
polyC[1] = javacast("double",6);
polyC[2] = javacast("double",7);
ArrayAppend(poly,polyA);
ArrayAppend(poly,polyB);
ArrayAppend(poly,polyC);

You would then use this array to call the findWithin_Poly function.

hits = people.findWithin_Poly("loc",poly);

Now, I haven’t done this yet, but I would recommend doing it: add a utility method that casts all of the elements to doubles. Simple, and it reduces clutter.
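A minimal sketch of what that helper could look like (toDoublePoly is a hypothetical name; it assumes each element of the incoming array is an [x, y] pair):

function toDoublePoly(array poly){
  var converted = ArrayNew(1);
  var pair = ArrayNew(1);
  var i = 1;
  for ( i = 1 ; i lte ArrayLen(poly) ; i++ ){
    pair = ArrayNew(1);
    pair[1] = javacast("double", poly[i][1]); // x
    pair[2] = javacast("double", poly[i][2]); // y
    ArrayAppend(converted, pair);
  }
  return converted;
}

With that in place, the call becomes: hits = people.findWithin_Poly("loc", toDoublePoly(poly));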


2 Important (Overlooked) Features of ColdFusion 9

Written by jbriccetti. Posted in App Dev

Recently I was chatting with someone on our dev team and I realized that not everyone is as heads-down, buried in ColdFusion every day as I am. New features come and go (do they go?) and we latch on to what we can, when we need it. But depending on what you were doing back when CF9 released, you may have missed these. Yes, these are old features already (4 years), but really nice features just seem to slip through without being harnessed. Here are two big-deal items I see overlooked that you really should use:

Local Variables in Functions

We all know about using the var keyword to declare local variables (if you don’t already know, look it up). In CF8, we had to do all these var declarations at the top of the function. Pain in the ass. Someone kick the compiler guy/gal.

Leave it to our CF community to come up with a brilliant solution that is one of the more elegant “standards” I’ve seen evolve in the CF world (along with init() pseudo-constructors). All ya gotta do is add this code at the top of every function:


var local = {}; // structNew()? get hip, use the squirrelly brackets!

Then, in your function code, when you need a local variable, just hang it off that struct: local.myVar.

What sucked? Well, you have to use the local. prefix on all your local variables, which is particularly lame with LCVs (loop control variables) like i, j, k, etc. Sometimes you’d see folks declare these as separate var’d locals just so they could reference them using #i# or the like.
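To picture it, the pre-CF9 pattern looked something like this (a sketch; countToTen is a made-up function):

function countToTen(){
	var local = {}; // the community "standard": one var, everything hangs off it
	for ( local.i = 1; local.i lte 10; local.i = local.i + 1 ){
		writeOutput(local.i); // every reference drags the local. prefix along
	}
}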

Overall, a nice solution. But that was pre-2009 (ahem, 4 YEARS AGO) – so what’s the gig with CF9?

First of all, in CF9 we can declare a var anywhere in the function. ’Bout time. The compiler guy/gal got the message. But that’s only half the story.

Them Adobe folks also made an implicit local scope in all functions, called… (wait for it, wait for it… drumroll) local. Yeah, they scarfed the “standard” and rolled it in. Nice job, Adobe (I don’t say those words too often these days, so I had to throw it in). Well, anyway, what are the perks to this new scope? Two things:

  1. You don’t have to declare the local structure at the top of the function. It’s already there. (But if you do, it’s backwards compatible, so no worries.)
  2. You don’t have to reference the local variables using the local. prefix. Once they are initially set in the local scope, you just reference them directly and they are found in the scope chain.

Thus:


for( local.i = 1; i < 10; i++ ){
	writeOutput(i); // works like a charm
}

Implicit Getters and Setters

Here’s one I didn’t figure out until I had to teach a class in ColdFusion 9 – ColdFusion allows you to add implicit getters and setters with one easy attribute on the <cfcomponent> tag: accessors="true".

Of course Ben Nadel crawled behind the dashboard to check the wiring on this – pretty cool experiment he did, too. At the end of the day it’s not a huge deal, but here’s why I like it: it forces me to do things “the right way” and it makes it easy. That’s pretty much what CF is all about.

Mike Pacella, one of our CF/Java ninjas, got me in the habit of using getters() and setters() a lot – it seems a bit uptight sometimes and quite frankly, you don’t have to do it for everything – but if you’re talking about instance properties on an object, well, yeah, you probably should. But it’s a hassle to write getters, and it feels like a waste of time when those methods just get and set. I write enough code; I don’t need to do more. Shut up, I know I can use snippets or generators or whatever. I do. It’s still a pain in the ass.

So here’s the layup – you add accessors="true" to the <cfcomponent> tag and all your properties have getters and setters available by default. AH BUT WAIT – how the hell does the compiler know what the run-time instance properties will be? This is ColdFusion; we don’t declare those things to our compiler. This is the other thing I love about this feature – we actually use <cfproperty>, and now it’s a tag that’s actually used by the compiler instead of just being a documentation scrap. I’m a bit of a documentation nut, so I like the idea that if I create this tag and thereby write documentation for my doc engine parser, I’m also writing code that is doing something – in this case, defining properties for the implicit getters and setters. Putting this all together, here’s a little CFC code you might see:


<cfcomponent accessors="true">
	<cfproperty name="Globals" type="struct" hint="the global variables, by reference." />

	<cffunction name="init" access="public" returntype="obj">
		<cfargument name="globals" required="no" default="#{}#" hint="reference to global variables" />
		<cfscript>
			// setGlobals() exists only because accessors="true" generated it
			setGlobals(globals);
		</cfscript>
		<cfreturn this>
	</cffunction>
</cfcomponent>
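The implicit methods are just there at runtime. A quick usage sketch (assuming the component above is saved as obj.cfc, which its init returntype suggests):

<cfscript>
	o = createObject("component", "obj").init();
	o.setGlobals({ appName = "demo" });             // setter generated from the cfproperty
	writeOutput( structKeyList( o.getGlobals() ) ); // getter generated the same way
</cfscript>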

So that’s it. If you wanna mess around with it, here’s some code.

Unsetting Response Headers in an Apache Reverse Proxy Configuration When Serving PDF to Internet Explorer

Written by Sean Ryan. Posted in App Dev

Yuck. IE.

Sometimes, we just need to suck it up and support IE even if it goes against everything we believe in. It’s well known that IE has a problem downloading PDFs over HTTPS when certain cache control headers are sent in the response.

See Microsoft’s own support site for this one: http://support.microsoft.com/kb/812935

The workaround for a developer is to make those headers disappear. There are several ways you can do this.

  1. In your application, don’t set them.
  2. In your Web server, unset them.

Usually, the better approach is to configure your Web server to unset the headers site-wide, since the origin of the PDF behind your server is inconsequential. Removing them at the server level makes serving PDFs work for all browsers.

Unset Headers in Standard Apache Configuration

If you are using Apache in a standard way to front your application or site, you can identify PDF requests using the Files directive.

<Files ~ \.pdf$>
   Header unset Cache-Control
   Header unset Pragma
</Files>

You should place this in the <VirtualHost *:443> section since this problem relates only to serving over HTTPS.

Unset Headers in Reverse Proxy Configuration

If you are using Apache as a reverse proxy and your PDFs are not on the filesystem relative to the site root, then you need to match the PDFs differently. In this case, you need to use a Location directive since the Files directive is used to match unproxied files.

<Location ~ \.pdf$>
   Header unset Cache-Control
   Header unset Pragma
</Location>

Again, this should be placed in the <VirtualHost *:443> section.
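A quick way to confirm the headers are actually gone after an Apache reload (assuming curl is available; example.com/docs/report.pdf stands in for one of your own PDF URLs):

curl -sI https://example.com/docs/report.pdf | grep -i -E 'cache-control|pragma'

If the configuration took, that command should print nothing.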

Convert MySQL Database Character Encoding to UTF8

Written by Sean Ryan. Posted in App Dev

Create a Database

To create a database that will default to UTF8 without modifying the server defaults, run the following command:

CREATE DATABASE dbName DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

Broken down, it means:

CREATE DATABASE dbName – create a database named dbName (best name, ever)

DEFAULT CHARACTER SET utf8 – this database will use utf8 as its default character encoding. All tables created will be configured to use utf8 for character columns.

DEFAULT COLLATE utf8_general_ci – when comparing characters, ignore case (ci = case insensitive)
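For instance, because character columns inherit that case-insensitive collation, a lookup like this one (people is a hypothetical table in dbName) matches 'Ryan', 'RYAN', and 'ryan' alike:

SELECT * FROM people WHERE last_name = 'ryan';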

Convert an Existing Database

If you already have a database in some other character set, such as latin1 (ISO 8859-1), you can easily convert it to UTF8 by walking through a few simple steps.

1. Export your database

$> mysqldump -u root -p \
       --complete-insert \
       --extended-insert \
       --default-character-set=utf8 \
       --single-transaction \
       --add-drop-table \
       --skip-set-charset \
       dbName > dump.sql

This will include column names in your insert statements, use efficient extended inserts that don’t repeat column names, export in UTF-8, skip emitting the existing charset, and include drop table statements prior to create table statements.

2. Edit the Dump to Use UTF-8

cat dump.sql | \
    sed 's/DEFAULT CHARACTER SET latin1/DEFAULT CHARACTER SET utf8/g' | \
    sed 's/DEFAULT CHARSET=latin1/DEFAULT CHARSET=utf8/g' \
    > dump-utf8.sql

This will just swap out latin1 for utf8. You can accomplish this however you want to. I’m a fan of sed and awk and all things command line, but the goal is clear: make latin1 stuff utf8 stuff. Just be careful to qualify your replacements. You don’t want to accidentally change data.

3. Drop and Recreate Your Database

From the mysql prompt, run the following. (A plain mysqldump of a single database doesn’t emit a CREATE DATABASE statement, so recreate the empty database – with its new utf8 defaults – before importing.)

> drop database dbName;
> create database dbName default character set utf8 default collate utf8_general_ci;

4. Import UTF8 Version

mysql -u root -p dbName < dump-utf8.sql


That’s it. You could easily make a little shell script to do this, and when you need to convert a database from one encoding to another, just run it. Perhaps the command would look something like:

swap-encoding dbName fromEncoding toEncoding
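A rough sketch of such a script under the same assumptions as the steps above (swap-encoding is a made-up name; a real version would want error handling and a confirmation prompt before the drop):

#!/bin/sh
# usage: swap-encoding dbName fromEncoding toEncoding
DB=$1; FROM=$2; TO=$3

# 1. export, 2. rewrite the charset declarations
mysqldump -u root -p --complete-insert --extended-insert \
    --default-character-set=$TO --single-transaction \
    --add-drop-table --skip-set-charset $DB > dump.sql
sed -e "s/DEFAULT CHARACTER SET $FROM/DEFAULT CHARACTER SET $TO/g" \
    -e "s/DEFAULT CHARSET=$FROM/DEFAULT CHARSET=$TO/g" dump.sql > dump-converted.sql

# 3. drop and recreate, 4. import (each -p will prompt for the password)
mysql -u root -p -e "DROP DATABASE $DB; CREATE DATABASE $DB DEFAULT CHARACTER SET $TO;"
mysql -u root -p $DB < dump-converted.sql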

Google Discontinues Free Version of Google Apps for Business

Written by Perry Woodin. Posted in Software, Strategy

On Thursday (12/06/12), Google announced changes to their Google Apps for Business offering. They will no longer offer a free version of Google Apps for Business. As of 12/06/12, any company that wants to use Google Apps will need to sign up for the paid version which costs $50 per user, per year.

If you are an existing Google Apps customer taking advantage of the free version, nothing changes. You are essentially grandfathered in and will not be charged the $50 per user, per year fee. Google is offering existing users an upgrade path to Google Apps for Business at the rate of $5 per user, per month.

I should also note that Google Apps for Education is still available as a free service for schools and universities.

Troy Web is an official Google Apps Reseller http://goo.gl/VRk1y. If you need assistance with setting up Google Apps for Business, let us know. We can help with everything from simple account setup, to full-scale migration from your existing service provider.

How to Swap Out a Time Warner Modem With Your Own Faster One

Written by Sean Ryan. Posted in App Dev

If you live in an area serviced by Time Warner Cable and you subscribe to high-speed Internet access, you’ve recently been mailed a notice with the infamous announcement of their plan to begin charging a “leasing” fee of $3.95/month for your cable modem. The mailing includes a way to avoid the fee by buying your own cable modem, but with a caveat that doing so may leave you in the dust when they “update” their network and your modem no longer supports their new network technology.

What they fail to mention is that, over time, their equipment fails to give you the best experience and fastest speeds they have to offer, and that you need to figure this out yourself and swap out your own equipment. When was the last time you ran some speed tests and researched Time Warner’s current network offerings [in your area] so you could decide if it was time to swap out your equipment?

There are several points to consider when making this decision, so I’ll help lay it out for you.

Cost

The cost of doing nothing is known outright: $3.95/month X 12 months = $47.40/year. Assuming they never ever increase that cost, you’re looking at about $50/year. I could shovel my driveway or I could have it plowed for about $50, so you really need to ask yourself what it is that you value more. That being said, once I buy the modem, I never have to buy it again. Once I shovel my driveway, I’ll need to shovel it again 🙂 So let’s look at this a little differently. I’ve had Time Warner for 13 years now. If I had been paying [an avoidable fee] all along, I would have donated $616.20 to Time Warner Cable by now.

Time Warner supplies you with a list of approved modems (http://www.timewarnercable.com/en/residential-home/support/topics/internet/buy-your-modem.html) and the prices range from around $50 to a little over $100. However, I found the SB6121 for $53 online even though it retails at $109. A little sweat equity goes a long way sometimes.

How “Tech-Savvy” Are You?

The process of doing all this is remarkably simple. I have to hand it to Time Warner on this one. I went through the process and the hardest part of all was picking out the modem – and even that was pretty simple after I understood what I needed.

Once you’ve selected your modem (see below), all you have to do is plug it in and move the Ethernet cable from your old one to the new one. Then, call Time Warner (1-866-321-2225) and say you need to activate a new modem. Trust me, they’re expecting your call. They know exactly what you need. They’ll ask one important question: “What’s the MAC address of your modem?” The MAC address, or Media Access Control number, is the way Time Warner authorizes your device to be on their network. To get this number, turn your modem over. It’s plastered right next to the serial number, model number, and all the other FCC stuff. If you run into any problems, Time Warner can and will help you through it.

Bandwidth

Unless you subscribed to Time Warner high-speed Internet in the last few months, swapping your modem will almost certainly increase your bandwidth and consequently your download speed – by a lot. Time Warner is in the process of upgrading their network to use the DOCSIS 3.0 protocol, and without going into any detail, DOCSIS 3.0 is faster than the DOCSIS 2.0 that most Time Warner Cable modems handle. This boils down to you getting a faster download speed simply by using a modem that can handle it, without paying more. Yes, you could bring your current modem back and get one that has the DOCSIS 3.0 firmware on it, but you’d still be paying $3.95/month for it.

There are a lot of variables that determine network speed, such as which day and what time of that day, but a DOCSIS 2.0 modem (what most Time Warner modems are) without turbo should give you about 10-15 Mb/second download and very close to 1 Mb/second upload. With turbo, you’re looking at around 2 Mb/second upload.

When I swapped out their 2.0 modem with my own 3.0 modem, I saw my download speed jump from 15 Mb/second to 32 Mb/second! This test has been repeated dozens of times with similar results. Never has my download speed dropped below 29 Mb/second.

Other Considerations

I don’t want to get political, but it’s worth a mention that the modem-leasing fee has the potential to be discontinued down the road. They’re already being sued because of it. “Send customers confusing notice of the fee in a junk mail postcard they’ll throw in the garbage, sock them with a $500 million dollar a year rate hike, then announce on your website that customer satisfaction is your No. 1 priority. That’s some way to deliver satisfaction.” That was said by a lawyer initiating a class action lawsuit on behalf of New York and New Jersey clients. (http://www.pcmag.com/article2/0,2817,2412196,00.asp)

For me, I see this as a way to take a little more control over my Internet access, increase my own speed, and avoid a little monthly fee. It’s not much but then again, buying a modem and calling Time Warner really doesn’t take all that much time.

What Model?

If you haven’t done so, check out the list of modems they allow and what features they offer. I don’t need wireless access from my modem, but you might. DOCSIS 3.0 was important to me, and it should be to you too. Multiple ports were not important to me because I have a switch that goes to two routers. Spend ~$60 now or spend $600 over the next 13 years. The selection process really isn’t all that complicated so long as you’re realistic about your own needs.


Scheduling EC2 Backups with Skeddly (automated EBS snapshots)

Written by Perry Woodin. Posted in Deployment

A recent topic of conversation in the office has been backups. Three of us have experienced catastrophic hardware failures on our local development machines (i.e. our laptops). Thankfully, we are all obsessive about backups so we all got back up and running in no time. If you aren’t backing up your local system, then you need to read Sean’s excellent post on Selecting Cloud Backup Software.

But what about your servers? And more specifically, what if you’re running an Amazon EC2 instance?

Like all things AWS, Amazon has many options for creating backups. If you set up your EC2 instance to use Elastic Block Storage (EBS), you can simply create a snapshot of your volume from the AWS Console. These EBS snapshots are incremental backups that persist on Amazon’s S3. Incremental means that only the blocks that have changed since your last snapshot are saved. This is all really slick, but manually creating snapshots from the AWS Console isn’t a good solution if your goal is to have daily, hourly, or whatever snapshots.

You could create your own service using command line tools. For example:

ec2-create-snapshot vol-id --description "Daily Backup"
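If you go the DIY route, you’d typically drop that command into cron. A sketch (substitute your own volume ID; the EC2 API tools must be installed and configured on the box):

# crontab entry: snapshot the volume every night at 2am
0 2 * * * ec2-create-snapshot vol-id --description "Daily Backup"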

Or, you could use Amazon’s API. For example:

https://ec2.amazonaws.com/
?Action=CreateSnapshot
&VolumeId=volume-id
&AUTHPARAMS

The above options are really useful, but it takes a bit of time and fiddling to get everything right. The easiest solution I have found for scheduling automated snapshots is a service called Skeddly (http://www.skeddly.com/). Skeddly can do more than automate snapshots, but that’s what we’re going to look at in this post.

Using Skeddly

Sign Up and Create a Skeddly-Specific AWS User

As of writing this post, you can get a 30-Day Trial of Skeddly, so go there now and sign up.

Before you sign up, I would suggest creating an access key specifically for Skeddly. You do this by creating a user from the AWS Console under Identity and Access Management (IAM). I created a user called skeddly with the following policy:

{
  "Statement": [
    {
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:CreateTags",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

If you need assistance creating a policy, you can use the AWS Policy Generator located at http://awspolicygen.s3.amazonaws.com/policygen.html

Add an Instance

After logging into Skeddly, select the Managed Instances tab and Add Instance. This is the easiest way to create your automated snapshots. You will need to know the following before you can fill out the form:

Instance ID: Get this from your AWS Console under EC2. You will see an instance column that displays your instance ids. Your instance id should look something like i-1a2b3c4d.

Elastic IP: If you want Skeddly to Stop/Start your instance to ensure the snapshot is complete you will need to supply the Elastic IP address associated with your instance. This is optional, but recommended.

Access Key: This is the IAM user I suggested creating above.

After supplying the necessary instance information you can jump down to Backup Instance and Delete Backups.

Create your schedule.

Note there are macros you can use to name your snapshot. I use something like

$(VOLUMENAME)$(DATE)

I keep a week’s worth of snapshots. And because my instance has three EBS volumes, I set the Minimum to Keep at 21. That’s 3 volumes x 7 days = 21.

Pricing

I think the pricing is incredibly reasonable. It costs $0.15 to create or delete a snapshot of each volume. That means I’m only spending $0.90/day to create three snapshots and delete three snapshots.

Scratching the Surface

Skeddly can do so much more. Once you get started, you may find yourself scheduling all sorts of tasks. Need to back up your RDS… Skeddly can do that. Create an AMI… let Skeddly handle it. Make a nice dinner… Skeddly can’t do that, but with all the time you’re saving, why not put on your chef’s hat and prepare some grub.

Selecting Cloud Backup Software

Written by Sean Ryan. Posted in App Dev

The need to back up data is nothing new, but how we back up data does evolve, and today we should all be at least exploring the cloud for a number of reasons; chief among them is that the cloud is offsite, and maybe least among them is that all the cool kids are doing it. But they are.

Recommendation

If you just want to know my recommendation for which cloud, it’s Amazon (http://aws.amazon.com/s3/). If you just want to know my recommendation for some client-side software, it’s CloudBerry (http://www.cloudberrylab.com/) for Windows and Arq (http://www.haystacksoftware.com/) for Mac users – both by a long shot. Read on to find out why.

Selecting The Cloud

I looked at several clouds for use beyond just backups, and some for only backups. I kept coming back to the Amazon cloud because of its maturity, its massive economies of scale, its security model, the APIs, and its insane durability of eleven 9s, as they call it. The eleven 9s is interesting. It means that if you put a file up on the Amazon cloud, there is a 0.000000001% chance that file will be lost forever. Or, a 99.999999999% chance it will not be. They even offer a lower durability level of 99.99% for about 25% of the cost if you want to “roll the dice.” Considering that until now I’ve backed up all my data on an external 1T drive, I’d be hard-pressed to estimate my own durability level at 90%. For one, the drive is roughly three feet from my computers (not offsite); for another, it’s all on one drive (hope it doesn’t fail); and it’s not exactly the most physically secured device I have.

Let’s face it: if you can get over your trust issues with letting someone else watch over your [unreadable] data, then cloud storage hardly needs an argument. This article is about selecting how to PUT and GET your backup data, not how to choose a cloud, so let’s move on…

Interacting with the Cloud

Depending on your nerd level, there may be several ways to interact with the cloud when using it for a backup store. With Amazon, you can build your own URLs and use the RESTful services they provide – nerd level: awesome. Or, like a mortal, you can and should use a backup utility like Arq or CloudBerry that does the heavy lifting for you. There are many client utilities to choose from, so I took a look at eleven of them and settled on two that stood far above the rest in almost every category of concern to me.

How To Select a Utility

There are plenty of aspects to look at when selecting a utility for your own backups, but in my case, I narrowed it down to the ones that meant the most to me, sorted them in my head, and knew I would have to give up some features no matter what I found.

What’s the Cost?

Cost is obviously important. Almost all utilities have an upfront cost and most have transfer and storage fees. Some have monthly fees, but they can usually be discounted if you sign up for 6 months, 1 year, or more at a time. Both Arq for Mac and CloudBerry for Windows have a one-time cost of $29 for the software, and you pay your cloud provider transfer and storage fees. $29 is pretty low when compared to the other nine popular solutions on my list.

Which Cloud?

Does the software support multiple clouds? This may not matter to you. Since I knew I wanted to use Amazon, I was only concerned with one cloud. However, not all utilities can use Amazon. For example, Carbonite uses their own cloud and there are no other options. Arq uses Amazon, and CloudBerry supports many clouds.

Does it Deduplicate?

This is a very important feature and I refused to use a utility that was not capable of de-duplication. Simply, it means that if two files are found to have the exact same parts Рsuch as an Excel chart embedded into multiple Microsoft Office Documents Рthen the duplicated part of those files is only backed up once. During a restore, the documents that shared the same content are restored properly as expected. When paying for transfer and storage, this reduction in backup sizes becomes a tremendous cost saver as your backup data grows. Both Arq and CloudBerry support this.

Does it Encrypt?

Even if the cloud provider encrypts your backup data, you may still want to encrypt the data yourself, prior to transfer. For example, Dropbox stores and encrypts your files on their own cloud, but only after the data gets to the cloud. If your data is intercepted on its way to the cloud, your information is exposed, even when using SSL. Moreover, Dropbox can technically decrypt your data – though they say they never do. If you encrypt your data prior to transmission, only you can decrypt it. In fact, if Dropbox decrypts your data (because remember, they always encrypt whatever you send to the cloud), all they’ll see is your encrypted version. This is exactly how 1Password syncing over Dropbox works. Arq and CloudBerry both offer encryption. CloudBerry actually offers a very wide range of options on this front.

How Much Space Do I Get, and Do I Have Control Over It?

Unlimited space is what I needed, but you may not need so much. This may depend on the tier you select with a provider. Carbonite, for example, has an unlimited option for personal use but limited and very expensive options for businesses. All tiers with Carbonite come with monthly or yearly fees that may be worth paying since there are no transfer or storage fees. If you end up with unlimited storage, you may want to impose your own logical limits on the size of your storage to keep other costs down, such as long-term storage. Arq and CloudBerry both offer access to clouds with unlimited storage, and both offer a way to keep your size within a designated budget. Very useful.

Does it Support Versioning?

Versioning is pretty cool. It’s a way to keep multiple copies of your files in case you need to restore a file as it was before you overwrote changes and backed them up. Let’s say I back up some Apache configuration files and then make changes to them over the next several hours, and by the end of the day I realize I’ve messed up my http.conf file so badly I need to start over – but I never created an http.conf.orig file like a good little tinkerer. You can use your backup utility to find the original version even though you’ve backed up seven different versions over the last seven hours (assuming you have your utility set to run once/hour). You’ll want to set a maximum number of versions if possible; otherwise, your backups will grow very large in very little time. I keep mine for 30 days. Both Arq and CloudBerry support versioning from the cloud provider.

Can it be set to Run Automatically?

If you have an awesome backup strategy but forget to run it, what’s the point? I’m not a system administrator; I’m a software developer with no time to manage a system. I rely heavily on automated tasks and calendar reminders. It was important to me to select a backup utility that could be set to run in the most flexible ways for me. I have my utility set to run incremental backups once each hour, on the hour. This recently proved invaluable to me – and indirectly, to my company – when I spilled an entire cup of coffee into my laptop. I was mad, but I lost zero bits of data because minutes before I spilled my coffee, the automated backup ran – and completed. Even if it had not run, the most I could ever be out is one hour’s work. Eating one billable hour isn’t as bad as eating 80 hours (10 days, the default for Time Machine). Both Arq and CloudBerry offer this functionality.

Can I Control its Network Usage?

You need to get your backup data to the cloud somehow, and although some cloud providers offer a way for you to physically mail your hard drive to them, that isn’t a solution here 🙂 You’ll be transferring your files over the Internet, and if you want to do this while you’re using the network, it would be very nice to have a way to control the network resources that your utility consumes. This is analogous to lowering a thread’s priority in a multi-threaded application. If you plan to run your backups once per day at 2am while you’re sleeping, then this may not matter to you. Both Arq and CloudBerry offer network throttling.

Conclusion

For me, the choices were clear. For Windows users, CloudBerry is a phenomenal solution. Not only does it offer everything I needed, it also offered everything I wanted. Like car insurance, I’ll be re-evaluating providers again down the road, but next time I’ll be using CloudBerry’s many features as a benchmark when comparing other utilities. My Mac has been running Arq for several weeks now without a problem, and I’ve made several individual file restores without any complications.

Cloud backups are the way to go, but be sure to choose an awesome utility to make your life easier.


Make Eclipse Editor Work with Retina Displays

Written by Sean Ryan. Posted in Software

I recently had to buy a new computer – a MacBook – because I spilled a full cup of coffee all over the keyboard of my 17″ MacBook. I was a little upset. Apple stopped making 17″ laptops, so to make up for the lack of screen real estate, I decided to go all out with a shiny new retina display. I admit, I’m more impressed than I thought I would be, but the technology comes at a cost.

Not all applications look nice with the display. With retina, applications need to display different icons and fonts that are made specifically for a much higher pixel density. Any web developer already deals with this when targeting iPhones or iPads, but since very few non-mobile devices have such displays, not all desktop applications have the necessary files yet.

Eclipse is such a popular IDE that I was devastated to see it doesn’t support retina yet. I use the IDE all day long, so I scoured the Internet tubes for a solution and found one that is good enough. It only updates the fonts for the editor part and not any of the graphics – obviously – but that’s good enough for now. Here’s what you need to do…

  1. Right click Eclipse.app and choose “Show package contents”. (STS.app, if using the SpringSource Tool Suite)
  2. Edit Contents/Info.plist: just above the closing </dict> </plist>, add <key>NSHighResolutionCapable</key> <true/> (see the snippet after this list)
  3. Make a copy of the app (Eclipse copy.app)
  4. Open the copy
  5. Close the copy
  6. Rename the original (Eclipse-NON-RETINA.app)
  7. Rename the copy (Eclipse.app)
  8. Open
  9. Enjoy
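After step 2, the tail of Contents/Info.plist should look something like this (the keys above it will vary by Eclipse version):

	...
	<key>NSHighResolutionCapable</key>
	<true/>
</dict>
</plist>

The copy/rename shuffle in steps 3-7 is just there to get OS X to re-read the modified Info.plist; it caches app bundle metadata, so editing the file in place often isn’t noticed.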

Singleton Business Objects in CFWheels

Written by jbriccetti. Posted in App Dev

Like many application development frameworks, CFWheels has lots of plumbing within it that you don’t have to use, but hey, it makes a ton of sense. Singleton object management is a good use case.

Now let’s be clear: I could add something like ColdSpring to offload my object factory pattern to a separate framework – that often makes sense, but depending on the client and how encapsulated you want to keep things, sometimes a good clean way to do it on your own makes the most sense.

So we want to make instances of a component and keep these instances around beyond a request. In CFWheels, what’s the best way?

As it turns out, the answer is mind-bogglingly simple – but we have to consider two things:

  1. Where do we create the object instances?
  2. How will we be able to reference them?

In straight-up ColdFusion (no framework), we have Application events like onRequestStart, onSessionStart, onApplicationStart, and onServerStart that answer question #1. As for how to reference them, well, that’s easy too – we can just use the “scope” of choice for the persistence – request, session, application, or server – and you’re done. Of course, you may need a trigger to “refresh” the cached instances (now you’re adding some plumbing). Here’s the “old school” way of handling this (or at least the non-framework way):

<cffunction name="onApplicationStart" returntype="boolean" output="false">
	<cfset buildCache() />
	<cfreturn true />
</cffunction>

<cffunction name="onRequestStart" returntype="boolean" output="false">
	<!--- hit any page with ?reload to rebuild the cached singletons --->
	<cfif structKeyExists(url,"reload")>
		<cfset onApplicationStart() />
	</cfif>
	<cfreturn true />
</cffunction>

<cffunction name="buildcache" hint="I'll build the app cache of singletons">
	<cflock scope="application" type="exclusive" timeout="10">
		<!--- scope it to application so it persists beyond this request --->
		<cfset application.util = createObject("component","lib.util.widget").init() />
	</cflock>
</cffunction>

But with CFWheels, we really should leverage the framework and not worry about micro-managing these events (although we certainly can if we want to get that granular) – so check it out. Answer #1: config/settings.cfm. Answer #2: use the CFWheels set() function:

<cfscript>
	set(util = createObject("component","lib.util.widget").init());
</cfscript>

The beauty of this solution is that settings.cfm only fires when the application loads (or reloads) – essentially (but not identical to) the onApplicationStart event. Second, to make our variable visible anywhere in the application, we use the set() function, which lets us access the variable later with the get() function:

<cfscript>
	util = get("util");
</cfscript>

Please note: this solution is specifically for application singletons – behind the scenes, CFWheels is using the application scope for these settings. In fact, the documentation on config/settings.cfm explicitly states this file is for configuration, which is indeed a good use case for application-level persistence, but not the only one. Singleton (business) objects are also perfectly viable candidates for this same sort of persistence.