Evernote is one of those applications that have subtly changed my life forever. I've become much better at capturing ideas and thoughts since grabbing it a couple of months ago. It's just so much easier to create a new item than a whole text file you have to name and find a place for. It's fast, it's searchable, and I have a lot more to say about it, but I'll wait for future blog posts.

One of Evernote's stellar features is the ability to upload photographs; Evernote runs them through OCR and includes the images in its search. Through this, I found out that there are hidden words everywhere. When doing a search for "IRA", I found the hidden acronym on my face, in my hair, and on my sweater. Oh. My. God. What else is written on my face that I don't know about?


There's a nice little Firefox plugin called Stealthier, which allows for secretive browsing of the internet. While it's probably most widely used for untraceable porn siestas, it also lets you quickly switch off your cookies, cache, and sessions, making you instantly anonymous on any site. In particular, I use it to view my Drupal sites from an anonymous perspective without logging out or opening another browser (one that likely doesn't have Firebug). Then I can just turn Stealthier off and I'm back. Kind of like the spy in Team Fortress 2.

In the winter, it's actually kind of a boon to have a keyboard that doubles as a radiant heater, especially if you have poor circulation in your fingers (/me points to myself). However, as a short spring dissolves into oppressive Idaho heat, one might grow a little wary of using the onboard keyboard for fear of dripping sweat shorting out some of the circuitry concealed below.

Here's the solution: a nice free app called smcFanControl. It allows you to raise the speed of your MacBook fans up to a healthy limit. As an illustration, I just flipped it on, and in the last minute the temperature shown on the tasteful menu bar display has dropped 10 degrees.

I appreciate the default MacBook settings, which can be really nice when recording music. In contrast, my Dell laptop's fan runs all the time, and it's a challenge operating a software-based recording app from across the room. But I also like having a choice.

As a bonus for those of you who are putting off flipping on the AC for as long as possible, also consider using a Lapinater Plus with the Mousitizer extension. It keeps an inch and a half of insulation between my legs and the bottom of the Mac, which is considerably warmer than the top. For couch-based telecommuters, it also adds a new range of positions to choose from (diagrams to come).

During a recent push to a production server, I recorded a Selenium test to illustrate one of its uses: reliably porting Drupal configuration changes from one server to another. In this example, I use a Selenium test suite to accomplish several tasks, including:

  • Import a content type
  • Import a view
  • Install modules
  • Create a multi-paned panel page
  • Fill out several configuration forms
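For anyone who hasn't seen one, a recorded Selenium IDE test is just an HTML table of commands. A hypothetical sketch of the "install modules" step might look something like this (the Drupal admin path and locator IDs here are illustrative guesses, not copied from my actual suite):

```html
<!-- hypothetical Selenese: enable the Views module and save the form -->
<table>
  <tr><td>open</td><td>/admin/build/modules</td><td></td></tr>
  <tr><td>check</td><td>id=edit-status-views</td><td></td></tr>
  <tr><td>clickAndWait</td><td>id=edit-submit</td><td></td></tr>
  <tr><td>assertTextPresent</td><td>The configuration options have been saved.</td><td></td></tr>
</table>
```

Running a suite of these against a fresh server replays the whole configuration checklist in order.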

This series of tests saved me a lot of time and reduced the probability of errors to virtually nothing. In the past, I would keep a list of configuration changes to port and apply them manually, first from the development server to staging, and then from staging to production. It's time-consuming and error-prone that way.

This video just shows the tests running, with me narrating (rather poorly). I just wanted to show what's possible, to whet the appetite of folks who haven't used Selenium before. I hope to add more tutorial-style videos later.

This video has been removed, but I will post a clearer one at some point. Thanks!

The other day I decided to finally figure out how to do a little benchmarking on some of the code we've been working on. Following my co-worker Jason's lead, I started fumbling around for a way to use kCacheGrind, which lets you visualize where the bottlenecks are in your code, showing information like how much time was spent in each function, in milliseconds or percentages. Very cool, but it took a while to get the setup just right. So, this is a cheat sheet to capture my research.

First of all, here are the aspects of my setup that might differ from yours:

  • Mac OS X
  • I'm using VMWare Fusion for hosting a Linux virtual machine
  • xDebug was already installed and enabled on my local machine

In a nutshell, here's how it works:

  • Using a Firefox plugin and the PHP xDebug extension, you turn on the xDebug profiler and load a page
  • xDebug creates 'cachegrind' files, which contain information about each PHP function that ran during that page load
  • You open the cachegrind files with kCacheGrind to visualize the data
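As a concrete (if simplified) illustration of what those files contain, here's a small Python sketch that sums the self-cost recorded for each function in a cachegrind file. It assumes the plain, uncompressed format xDebug writes ('fn=' lines naming functions, followed by 'line-number cost' pairs); real files contain more record types than this handles.

```python
# Minimal sketch of reading a cachegrind/callgrind file: "fn=" lines
# name a function, and the numeric lines that follow are
# "line-number cost" pairs giving that function's self-cost.
from collections import defaultdict

def function_costs(text):
    """Sum the self-cost recorded for each function in a callgrind dump."""
    costs = defaultdict(int)
    current = None
    for line in text.splitlines():
        if line.startswith("fn="):
            current = line[3:]
        elif current and line and line[0].isdigit():
            parts = line.split()
            if len(parts) >= 2:
                costs[current] += int(parts[1])
        elif line.startswith("cfn=") or not line:
            # costs after "cfn=" belong to called functions, and a blank
            # line ends the current block, so stop attributing here
            current = None
    return dict(costs)

sample = """events: Time
fl=index.php
fn=main
1 250

fl=template.php
fn=phptemplate_comment_wrapper
10 120
11 30
"""
print(function_costs(sample))
# {'main': 250, 'phptemplate_comment_wrapper': 150}
```

Opening the same file in kCacheGrind gives you this per-function breakdown, plus call counts and a visual call tree.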

You could do all of this in Linux, but I end up creating the files on my Mac and then dragging them over to an Ubuntu virtual machine.

And here are the steps:

1. Download the Firefox xDebug plugin

2. Set up xDebug profiler configuration in php.ini.

A cachegrind file can be created for each process, and Apache typically uses several processes for each page load. After some experimentation, I realized I wanted all of these merged together in a single file, which is what the settings below will do. Technically, this is 'append' mode: if you refresh the page, it will keep appending information to the same file. So, after I load a page, I'll change the filename slightly (adding a description at the end, for example) so the next page load creates a new file. Below are the php.ini settings I use for xDebug (set up for remote debugging) and the xDebug profiler. You'll need to change a few of the values to reflect your own setup:

; xDebug settings (set up for remote debugging)
xdebug.remote_enable=1
xdebug.remote_port=9000 ; choose a port

; profiler settings
xdebug.profiler_enable_trigger=1 ; lets the Firefox plugin switch the profiler on
xdebug.profiler_append=1 ; append repeated page loads to the same file
xdebug.profiler_output_dir=/tmp ; change to wherever you want the files saved
xdebug.profiler_output_name=cachegrind.out.%s

3. Make sure cachegrind files are getting saved

Turn on the profiler in Firefox by clicking the little (p) button in the lower-right corner of the window. Then load a page that's hosted on your local machine (I haven't checked how remote testing works). A file should be generated in the directory you specified for xdebug.profiler_output_dir.

4. Get an Ubuntu virtual machine

If you don't have an Ubuntu machine but do have VMware Fusion, you can download a torrent for a full Ubuntu setup. If you haven't used torrents before, Transmission is a good client for the Mac, and Azureus is the only one I have experience with on Windows.

5. Install kCacheGrind

Use the Add Applications menu item in Ubuntu and search for kCacheGrind. The install might take a while because there are a lot of dependencies.

6. Test out kCacheGrind

Once installed, open up a cachegrind file in kCacheGrind. You should see a lot of pretty colors and be a little confused. Perfect, that means it's working.

I haven't spent much time figuring out the different levels of information in kCacheGrind, but even the basic information is useful. It's easy to see which functions eat up the most processing power.

A couple other thoughts once you get to this point:

  • Each page load might be quirky for one reason or another, so it can help to average across several loads. Since we've set up the profiler in append mode, just refresh the page a set number of times. Remember how many, so that when you benchmark changes you can make an accurate comparison.
  • In addition to seeing the amount of time spent in each function, you can see how many times it was called, and by what. I noticed right away that one function was being run several times when it only needed to run once; I had assumed that template.php was only included once. This is a good way to test some of your assumptions.
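To make the averaging idea concrete, here's a toy Python sketch of the arithmetic, with made-up numbers: dividing each file's aggregate cost by the number of appended page loads keeps the before/after comparison fair even when the load counts differ.

```python
# Toy illustration (hypothetical numbers): normalize aggregate costs
# from append-mode cachegrind files by the number of page loads.
def mean_cost(total_cost_ms, page_loads):
    """Average per-page-load cost for an appended profiling run."""
    return total_cost_ms / page_loads

before = mean_cost(4200, 5)  # 5 loads recorded before a change
after = mean_cost(2400, 4)   # 4 loads recorded after the change
print(before, after)  # 840.0 600.0
```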


Below are some links I found particularly useful:

  • Info for setting up php.ini
  • Mac callgrind reader (for use in a pinch; it was hard to find useful information in it, though)
  • Info about what the xDebug configuration lines actually mean
  • Variables that can be used for naming the cachegrind output files

- Credit to my other co-worker Caleb for "prompting me to get off my ass and write a blog post"

Just wanted to jot down a few thoughts about this while it's on my mind.

I've worked with clients in the past who are content with a product that is okay, but not fantastic, because lower cost = lower customer expectations = little or no backlash for bugs. I've tried this myself, designing several throw-away apps in an effort to generate a little buzz. What happened was kind of unexpected: some people actually used the apps, and some people really liked them.

I realize these folks are transitional. They've finally figured out how to articulate what they need, and they're leapfrogging from application to application until they find one that works for them. As soon as they find one that works, they'll dial way back on the effort, though they might spend a little time looking for something better when they run into bugs or walled gardens. I followed the same process this morning to find a little countdown timer app. I found something that works, and now I have better things to do for a while before I go looking for something better.

There's actually a market there. The equation: spend less time and money on a product and charge less, and your users will expect less, and will likely move on to something better eventually without bugging you much in the process.

Is this ethical? You're knowingly putting a sub-par product out there and expecting people to use it little, if at all. Doesn't this just add noise to the cacophony of options out there for just about anything you want to do? Add a huge marketing push to the equation, and you've got a lame duck you're practically *pushing* on unsuspecting users. Phrased that way, it seems morally ambiguous at best. And once people have their band-aid, they'll be less likely to find the better solution (and pay for it, supporting the producers of a fine product).

But, I can also see an argument from the other side. By providing a solution (any solution) to a problem, you're fulfilling your part of the bargain. It is, after all, a solution. Maybe people only use it in transition, but perhaps it's better to have at least some solution in the interim period. Maybe there's a role there, and an important one.

The jury's out on this one for me. I'd likely favor either side in different situations.

After some searching and trying a few apps that didn't work, I found Menubar Countdown. It sits in the Mac menu bar and counts down the time you have left. The programmer also wrote a little blurb about why he created the software: in particular, to practice the Pomodoro Technique (25-minute bursts of activity). I'll have to read more about that, but in the meantime, thanks, Kris!

Normally, the prospect of spending the last few minutes of my evening writing a blog entry would seem a bit masochistic. The typical process:

  1. Visit my online blog
  2. Maybe sign in (after navigating to the login screen)
  3. Find the menu item I need to add a blog entry
  4. Open up some external wysiwyg editor (probably Dreamweaver) to compose, so I can auto-indent HTML and save as I go
  5. Compose, copy and paste over to the add entry screen
  6. Submit, discover I used the wrong input type (should have been Full HTML, dangit!), change and resubmit
  7. Invariably decide to reword something, edit and resubmit

Besides the actual writing, that process is costly in terms of brain power and time. It kind of makes me sick to think about at such a late hour.

So, I'm probably the last Mac-using blog-writer in the world to hear about Ecto, but on the off chance that I'm not, I'm using it to post this to my blog, which - according to the Cult of Done Manifesto, item #12 - pretty much accomplishes my goal of telling everyone I know about it by just getting it on the web (thanks, Manifesto, for justifying my lazy ways!).

Ecto transforms the heavy process above to the following:

  1. CMD+Tab to Ecto (because it's always running on my Mac)
  2. Press CTRL+N to start a new entry
  3. Write it and click the Publish button
  4. Invariably change some wording and click the Publish button again
  5. Optionally do a little dance at how friggin' easy this is

Maybe - just maybe - it takes a minute of time outside of actual writing, and a pretty brainless minute at that. Just click a couple pretty buttons. Heck I can do that in my sleep! (maybe I am doing it in my sleep...)

Getting Ecto set up with Drupal takes a little work initially, but once it's set up, it's absolutely beautiful. Tutorials will likely come later, but for now I want to entice the reader with several more sexy bullet points:

  • I use it to log ideas for blog posts. I just create a new entry with a couple word abstract, and then some late evening like this I'll flesh it out and post it.
  • You can filter out entries by word or tag.
  • It integrates with Drupal TAXONOMY! How bad ass is that?
  • It registers your input filters, and you can set a default (a beautiful method for getting around Drupal's single default input filter)
  • You can switch between wysiwyg and html
  • Add your own buttons to wrap particular code around
  • You can even edit other content types, not just blog entries (ideal for quick grammar corrections)
  • Spell check as you go (unlike Dreamweaver)
  • Image integration! Flickr integration! Twitter integration!
  • Schedule blog posts without extra Drupal modules
  • Manage multiple blogs in one place!

There are a few gotchas and quirks, but they're not bad considering the boons. This will be my sixth post from Ecto, so I may bump into a few more as I go, but so far, this is absolutely fantastic.

Ecto rocks my world!


A few gotchas so far:

  • The input format selector is hidden in HTML view, in the lower right-hand corner. I attribute divine guidance to finding it
  • There's no 'clear formatting' button. Almost makes me miss TinyMCE
  • Oooh, icky <font style..> cruft when you try to stack formatting options


Just discovered a novel use for Ecto and Drupal. In Drupal, if you decide to change the length of your teasers, you have to re-submit every node to get the new length. With Ecto, you just select all your posts, click the 'publish' button, and *poof*, done.

Today I'm reviewing several screen-sharing apps to conduct some remote usability testing. These particular tests have some interesting constraints:

  • My subjects are parent-volunteered kids, ages 8-16, all in the same family
  • I'm on a mac, they're on PCs
  • Tests should be about 45 minutes long
  • If possible, I should be able to take over the screen to type in URLs

I tested on the following hardware:

  • Presenter: Windows XP with 1920 x 1200 resolution
  • Client: MacBook Pro running Leopard


Yuuguu

Pretty good, but not quite good enough for usability testing.

  • Free
  • Marginal refresh rate (I foresee missing some important clicks and mouse wandering)
  • Requires registration of host and client for optimal use
  • Very easy transfer of control to attendee
  • Nicely integrated chat notifications (for passing URLs, for example)


TeamViewer

Pretty much pure awesomeness after looking at all the other apps.

  • Not so free (free for evaluation, $39/mo or around $650 for full license)
  • Awesomely spectacular refresh rate
  • Transfer of screen control is more difficult than with Yuuguu, but not bad
  • Presenter can run a small executable without having to install anything
  • No account is needed on either side, however the presenter has to type in a 9-digit code plus 4-digit password to connect.


Very simple interface, but it didn't quite cut the mustard. The Mac version has a permanently disabled 'Preferences' menu item, indicating it's probably in beta whether they say so or not.

  • Free
  • Uber-simple interface
  • Extremely crappy refreshing, probably totally unusable for a usability test
  • Registration needed by host, but not client


Not too bad, except for that refresh rate. Darn you, TeamViewer, you've spoiled me!

  • Free
  • Surprisingly works for a mac
  • No installation required
  • No accounts required, just pass a (long) code (and don't forget to type the dash)
  • Fairly dismal refresh rate
  • Rather obscure interface (the screen sharing button is a black box in a gray box in another gray box)

Bosco's Screen Share

I *maybe* should have been wary of the playful doggy-centric identity, but I gave it a go anyway. Apparently you have to configure your router and firewall to get it to work. I'm not going to start a usability test with a router configuration. No sir.


Pretty cool stuff, especially the 3D conference room motif </tongueincheek>.

  • Free for 1-to-1 meetings
  • Web-based; you just need to pass a code. You might need to install some Java, though, which could really hang things up in a usability test.
  • Absolutely the worst refresh rate of all the apps
  • Setting permissions for different attendees is pretty nifty
  • Lots of features, like voice and webcam sharing
  • Did I mention the horrible refresh rate?


I guess we'll be using TeamViewer tomorrow on an evaluation basis. The other apps just don't compare in terms of refresh rate.

Sources:

  • Good leads on several apps
  • Good list, not entirely accurate
