Author

Dennis Rongo


There are tons of mysteries in life, and my entire life I have wondered: what is our purpose?

Lately, I’ve been examining my own life and trying to see things in a different way. I’ve been fortunate enough to live comfortably, with food and shelter. The more I think about what I have, the more grateful I am for it. I’m also starting to realize that life doesn’t just revolve around my needs.

I started thinking about how I can affect people either positively or negatively. It reminded me of a book I read a while back called The Five People You Meet in Heaven. It’s a life-changing book, and if you haven’t read it, it’s definitely worth the time. The book is about a tragedy and how Eddie (the main character) goes to the afterlife and meets five people who he met or who were somehow connected to him in his lifetime. Each individual that he affected in his life shares a story with him, and with each story, he realizes what the purpose of his life was.

Regardless of whether the afterlife is real, I think there are some key takeaways and lessons in the book. Most of us go through life fulfilling our personal needs: establishing a career, having a family, being successful. Those are all great, and that’s not to say that having a set of achievements is bad either. But at the end of it all, once you have achieved your goals in life, is that it? Is that what life is about? Is that all that matters?

Ultimately, the things that I do in every waking moment, how I affect others through my actions, serving not only my own needs but others’ needs, sharing with others and making an impact in other people’s lives, are what give me the greatest gratification. The more I try to understand and educate myself about life (spiritually), the closer I feel I’m getting to finding my purpose in life.

My take is that each of us has been blessed with talents, and it is our responsibility to find what those are and share them with others. I’m blessed with an ability to solve problems and to easily understand technology and programming concepts. My goal from here on out is to "give" as much as I "receive". Giving back, even just by mentoring young people, teaching and inspiring others, or sharing insights, gives me satisfaction. Being able to help others get past a psychological barrier, or simply give them direction in their career, is enough (in my opinion) to provide value.

This is a review of Getting Started with Twitter Flight by Tom Hamshere, published by Packt Publishing. I came across Twitter Flight last year in my newsfeed and heard of it at one of the conferences I attended. The framework seemed promising, but I was already sold on the idea of Backbone.js back then. This is my first real attempt to dive deeper into what Flight is about. I like that the book is only 130 pages, which makes for a good introduction. Each chapter is also brief, roughly five pages each.

Requirements

The book assumes that you have a decent knowledge of JavaScript, jQuery, and maybe even Require.js, since it’s used in the code examples throughout. Since Flight is component- or module-based, Require.js is a natural fit, but it is by no means required.

Flight, according to its website, is:

Flight is a lightweight, component-based JavaScript framework that maps behavior to DOM nodes. Twitter uses it for their web applications.

The author uses Require.js in all of his code samples, but the only hard requirements are ES5-shim and jQuery.

Show me some code

Let’s say you have this piece of markup:

<form id="form">
    <input type="text" id="name" name="name" />
    <input type="submit" id="save" name="save" value=Save />
</form>

Let’s create a component and set some default attributes and events. I elected not to use Require.js for the sake of simplicity. In the component below, this.defaultAttrs allows the component to store values; in this case, I’m storing an element selector (nameElement) and some random text (someText).

var aSimpleComponent = flight.component(function () {
    this.defaultAttrs({
        someText: 'Hello',
        nameElement: '#name'
    });

    this.onSubmit = function (e) {
        e.preventDefault();
        alert(this.attr.someText + ' ' + this.select('nameElement').val());
    };

    // create a hook after the component has been initialized
    this.after('initialize', function () {
        this.on('submit', this.onSubmit);
    });
});

We can then use the component by attaching it to an existing DOM element.

// Attaching the component to the DOM
aSimpleComponent.attachTo('#form');
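
For completeness, since the author’s samples use Require.js, here is a rough sketch of what a component looks like wrapped as an AMD module. The module path is my assumption based on a typical Flight/Require.js layout, not code from the book.

// hypothetical AMD wrapper around a Flight component
define(['flight/lib/component'], function (defineComponent) {

    function helloWorld() {
        this.after('initialize', function () {
            // $node is the jQuery-wrapped element the component is attached to
            this.$node.text('Hello, world');
        });
    }

    return defineComponent(helloWorld);
});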

Chapters 1-4

The author starts out with an introduction to Flight and how it differs from the likes of Angular.js, Backbone.js, and Ember.js. Having no experience with Flight, I thought this was a perfect book to slowly introduce me to the framework. The book eases into the concepts, and it wasn’t until halfway through that I was able to see the "big picture" of building a simple app. These earlier chapters sell the idea of why you should use Flight and the reasoning behind why Twitter built it.

It then goes on to installation, using Bower and Yeoman to scaffold a new Flight application in chapter 4. Installing from the command line is a quick way to pull down the dependencies and get started, but it isn’t necessary at all.

Chapters 5-7

These chapters begin by discussing what components are (the basic premise behind Flight). They then define the two types of components: UI and Data. The two are similar except for their main conceptual responsibilities. A Data component processes data and performs data requests, while a UI component is attached to the DOM and provides the interface, handling user interactions and events.
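
As a rough sketch of that split (the event names, endpoint, and component here are my own assumptions following Flight’s data-/ui- naming convention, not examples from the book), a Data component might look like this:

// hypothetical Data component: fetches tasks and broadcasts them
var taskData = flight.component(function () {
    this.fetchTasks = function () {
        $.getJSON('/api/tasks').done(function (tasks) {
            // broadcast the data for any UI component to consume
            this.trigger(document, 'dataTasksLoaded', { tasks: tasks });
        }.bind(this));
    };

    this.after('initialize', function () {
        this.on(document, 'uiTasksRequested', this.fetchTasks);
    });
});

// Data components are conventionally attached to document
taskData.attachTo(document);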

Chapters 8-10

I particularly liked chapter 8 on event naming, where the author illustrates conventions that anyone who works with JavaScript can benefit from. Chapter 9 covers mixins, which allow multiple components to share functionality. Mixins are basically functions that other components can utilize and share, so you don’t have to keep rewriting the same piece of functionality over and over. This allows for code reuse.
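
For a feel of what that looks like, here is a minimal sketch of a mixin; the mixin itself is my own example rather than one from the book. Flight composes mixins by passing extra functions alongside the component definition:

// hypothetical logging mixin shared across components
function withLogging() {
    this.log = function (msg) {
        console.log('[component] ' + msg);
    };
}

function myComponent() {
    this.after('initialize', function () {
        this.log('initialized');
    });
}

// compose the component with the mixin
var loggedComponent = flight.component(myComponent, withLogging);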

Chapter 10 introduces templating; in the examples, Hogan.js is used to incorporate templates into components. A component can be template-based or can be attached to an existing UI element.

Hogan.js is another open source project created by Twitter; it is a reimplementation of Mustache that allows templates to be precompiled on the server side. The advantage is that you skip the compilation step when rendering, which is an expensive process. Hogan, like other templating libraries, is pretty simple to use and usually involves compiling a template into a function into which you can then inject data.
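
For illustration, the compile-then-render workflow looks roughly like this (a minimal sketch, not an example from the book):

// compile a Mustache template into a reusable template object
var template = Hogan.compile('Hello, {{name}}!');

// inject data at render time
var output = template.render({ name: 'Flight' });
console.log(output); // "Hello, Flight!"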

Chapters 11-13

In a nutshell, these chapters cover performance, testing, and general architecture when building your Flight application. There are ideas here that are beneficial in general, not just for building Flight apps.

The main ideas are to keep individual components testable and to keep components from instantiating other components so they stay decoupled. As with any framework, it is always a good idea to think about the individual components upfront, including the architecture and how the different pieces tie together.

Code inconsistencies

I found some mismatches and inconsistencies between the code and the author’s instructions. One example is in chapter 10, in the section on generating template objects from the DOM. I’m not sure if the author changed the code later on, but there are a few instances of this kind of error:

To achieve this, the table row needs to be hidden by default, so it doesn’t show on first load.

<ul class="js-task-list">
    <li class="js-task-item hide"></li>
</ul>

For reference, I have included the table of contents below.

Table of Contents

  • Chapter 01: What is Flight?
  • Chapter 02: Advantages of Flight
  • Chapter 03: Flight in the Wild
  • Chapter 04: Building a Flight Application
  • Chapter 05: Components
  • Chapter 06: UI Components
  • Chapter 07: Data Components
  • Chapter 08: Event Naming
  • Chapter 09: Mixins
  • Chapter 10: Templating and Event Delegation
  • Chapter 11: Web Applications Performance
  • Chapter 12: Testing
  • Chapter 13: Complexities of Flight Architecture
  • Appendix: Flight API Reference

Conclusion

The book includes a Flight API reference in the appendix and provides boilerplate code to easily get started building components, mixins, advice, and event listeners.

I would recommend this to anyone who is interested in Flight and has decent JavaScript experience. The book is short and keeps the material on point within each chapter. It’s light reading, so anyone can finish it in a day or two, but to fully understand the concepts, the best way is to try it out and type out the code. Also, check out the components other users have created that you can use in your own Flight application.

I think that once in a while, it’s great to keep an open mind and see how other frameworks operate, and perhaps learn a thing or two about their architecture. Flight is not as popular as its heavier framework counterparts, but I can see how it can be useful to any project that adheres to a component-based architecture.

Flight is a promising framework and a good alternative to Backbone.js with a different, component-based approach. It’s hard to compare the two, since Flight is very lightweight and tries to keep things simple; it doesn’t have Models, Routes, or other things that Backbone.js has. That’s a topic for another post, so I’ll end here.

I thought about switching my blog from Octopress to Ghost. I like having control over the content that I’m publishing, since posts are literally just markdown files (*.md) with no database backend. What I dislike about Octopress is the publishing aspect, where I have to issue a rake deploy every time I publish content. I was actually able to consolidate the multiple steps into one small script, but I still have to run that one step, which is unacceptable.

Deployment should be painless and automated, period. The setup should automatically detect any file changes (new or updated posts) and issue a generate command to convert the markdown files into HTML. Since the files are created locally (synced to Dropbox), I needed a way to automate this. My solution is to let the Windows Task Scheduler run it every 24 hours. I wrote a console app, and the code looks something like the snippet below.

The dir setting points to the root of the Octopress blog that I want published. Having this in the app.config gives me the flexibility to switch directories if my Dropbox is in a different location. The /C flag tells the command prompt to terminate after execution, and push.sh is a shell script I wrote a while back that generates the HTML files, commits to local Git, and deploys to the Heroku server all in one shot.

// Requires System.Configuration and System.Diagnostics.
static void Main()
{
    var startInfo = new ProcessStartInfo
    {
        WorkingDirectory = ConfigurationManager.AppSettings["dir"],
        FileName = "cmd.exe",
        Arguments = @"/C push.sh",
        CreateNoWindow = false
    };

    // Launch push.sh via cmd.exe, then exit once it has started.
    var process = Process.Start(startInfo);
    process.CloseMainWindow();
    process.Close();

    Environment.Exit(0);
}

If you’re curious, the push.sh file looks like:

#!/bin/sh
# push.sh : publish and commit with a single command
rake generate && rake deploy
git add .
git commit -am "$(date)" && git push heroku master

The script above is a bash script (*.sh), so if you’re running it in a Windows environment, I found that installing Git and using Git Bash is the easiest way to run it.

Installing Ghost on Azure is simple and can be done by creating a new website from the gallery. As of this writing, Azure installs version 0.3.3, while the latest Ghost release is 0.4. You can go down that route, or take the manual route, which I will cover in this post. I tried setting it up from the Azure gallery, but it was too easy for my taste, and besides, I like having my files backed up in Dropbox.

The goal of this post is to keep the instructions as minimal and simple as possible so they can be referenced over and over again.

Installation

Install the Azure command-line tools for Node.js via NPM. I found this easier than doing a Windows installation through the Web Platform Installer.

npm install azure-cli -g

The next step is to download the publish settings file:

azure account download

Once the publishsettings file has been downloaded to your local directory, import the settings with the following command; don’t forget the quotes around the file name. After a successful import, delete the settings file.

azure account import "<file>.publishsettings"

Create a new Azure website from the command line with the following; it will ask for the name of the site, location, username, and password.

azure site create --git

It will have you go to Azure and log in to your account. If you go to your newly created website, click on 'Deployments', where the Git deployment has been created. The instructions for adding the Git remote will be there.

A change that you will have to make in config.js (the production configuration) is to set the port to process.env.PORT.

server: {
    host: '127.0.0.1',
    port: process.env.PORT
}

Add the URL where your Git repository resides:

git remote add azure your_azure_git_end_point.git

Finally, since Azure looks for a server.js file, you need to copy index.js:

copy index.js server.js

After all the local file changes, commit your changes locally:

git add .
git commit -m "First commit"

And push to Azure. This will ask for the Git repository password that you set up either on the command line or on the Azure website.

git push azure master

Browse to the URL that you created on the Azure server and make sure it’s running properly. You will need to create a new login by visiting the admin page at /ghost.

Update: March 17, 2015

Going through these motions once again when I updated Ghost to the newest version, I encountered an issue when pushing my Git repository to Azure: fatal: The remote end hung up unexpectedly.

The reason for this is the huge number of files being pushed up to the cloud. After doing some research, increasing Git’s HTTP post buffer took care of the issue.

git config http.postBuffer 524288000

I spent numerous hours messing with the command line and figuring out the deployment process for Octopress. I recently migrated all my blog posts from Menace Studio, which was using the Orchard Project CMS. I really liked that CMS, which is built on ASP.NET MVC with a SQL Compact back end. My main issue with Orchard, as well as others such as WordPress, is that they all try to be an all-in-one solution (blogs, ecommerce, personal sites, etc.), which makes them really bulky.

The main reason I switched to Octopress is that the site is entirely statically generated content. The posts and pages are HTML generated from Markdown, which makes the content easier to manage and back up. I use Dropbox to store the source files, which include the generated HTML to be deployed.

Deploying to either Heroku or GitHub Pages is pretty straightforward, but using a custom domain can be a little challenging. This link, for instance, helped me set up a custom domain for GitHub fairly quickly. I had a few stumbling blocks along the way, and I wanted to write this post to bring together all the information needed to deploy Octopress with a custom domain as quickly as possible.

Deployment instructions for GitHub Pages and Heroku can be found in the official Octopress documentation.

One thing I would warn people about: never set up your local Git repository to deploy to both Heroku and GitHub. Maintenance becomes a pain, since you have to make sure you consistently push changes to both remote locations. Otherwise, things get out of sync, and the only way to fix it is to delete the .git directory. If you follow the instructions above correctly, things should go smoothly.

Custom domain using GitHub pages

Go to your domain name registrar (mine is GoDaddy) and add the settings below for the DNS.

  • Add an A (Host) record pointing to 204.232.175.78.
  • Add a CNAME (Alias) record for www pointing to <username>.github.io.

To initialize a Git repository and set up deployment to GitHub Pages (as the default), run:

rake setup_github_pages
rake generate
rake deploy
git add .
git commit -m "deploying"
git push origin master

Go to your <Octopress project>/source/ directory, add a CNAME file, and put in your custom domain name without the http or www (e.g. menacestudio.com). When I made the changes above, it took more than 5 hours for them to take effect, so you have to be really patient (and yes, it does suck to wait that long).

Custom domain using Heroku

The process for deploying to Heroku is pretty similar, but the DNS resolution seems to take effect very quickly; mine took literally a minute. I took the GitHub Pages route in the beginning but decided to switch to Heroku, since the custom domain setup is easier. Pushing files to the Heroku server does seem to take ten times longer, since it tries to resolve dependencies.

Go to your domain name registrar and add the settings below for the DNS.

To create a new Heroku app and set it as the default remote repository, run the commands below. As a side note, for the heroku command to work, you will have to install the Heroku Toolbelt, which consists of tools that are helpful when deploying to the Heroku server.

heroku create
git config branch.master.remote heroku

Pushing to Heroku is accomplished by performing the steps below.

rake generate
git add .
git commit -m "deploying"
git push heroku master

In addition to the Octopress documentation on Heroku custom domains, here’s the documentation by Heroku that goes into detail on custom domains. A useful command to check the status of your DNS changes is nslookup <your_domain_name>, which tells you which IP(s) your domain is pointing to.

I was trying to install Octopress to see what the buzz is about. I’m a fan of static site generators such as Scriptogr.am, which I use for this blog, and I wanted to explore other options such as Jekyll, which Octopress is based on. This blog post is meant to document my experience with the installation and the things I encountered along the way, including how to fix them. I’ve also been working a lot with the command prompt lately, so I wanted a bit of a challenge to see if I could do it. Granted, it takes a little work to get Octopress running, so it requires a lot of patience, research, and following the documentation to the letter. I’m also running all of this on Windows.

Octopress installation

While reading the setup documentation and stepping through the installation process, I realized that I was using Ruby 1.8.7 (or at least the system only recognized that version). Even though I had 1.9.3 installed via the Ruby Installer for Windows, I made a copy of the Ruby installation folder and moved it to c:\ruby193 to make referencing things easier.

Since the system was only recognizing 1.8.7, I had to install Pik so I could switch the Ruby version (check it with ruby -v) to 1.9.3. Pik is basically a tool to manage multiple versions of Ruby on Windows. I used Ruby Gems (gem install pik -i c:\ruby193) to install Pik in that location, then had Pik register the path to Ruby 1.9.3 using pik add c:\ruby193\bin. To switch to 1.9.3, run pik use 193 (193 is the option shown when listing the Ruby versions installed on your system with pik list). As a reference, I used the Node command prompt for performing these operations.

One last important change: I added the \bin path to the Windows environment variables. The Ruby installer had also installed a copy of Ruby under Program Files, so I uninstalled it (I already have a copy in c:\ruby193) to fix a read-only error when installing Ruby gems on the command line.

After taking care of the Ruby versioning and gem installation, I switched to Git Bash for executing the Octopress commands and setting it up for GitHub. In the middle of executing the Octopress installation commands, after doing a bundle install, it complained that the "fast-stemmer" native extension requires installed build tools. It gives instructions on how to include the build tools and where to download the DevKit. After installing the DevKit, I was able to run bundle install again to finish installing the libraries and their dependencies (sass, compass, haml, jekyll, fast-stemmer, etc.).

Hosting and deployment

I chose GitHub Pages to host the blog since it’s free and widely popular within the web developer community.

When doing a rake deploy, GitHub complained: "Git push failed – non-fast-forward updates were rejected." The issue was that the local files were out of sync with the remote repository. Issuing git pull origin master did not fix the issue, so I had to delete the repository from GitHub and create a new one from scratch without the readme.md file. After several attempts at pushing changes to GitHub and running rake generate and rake deploy on the Octopress project, everything is going well.

Creating post and pages

If you’re familiar with Ruby, deployment and post creation are done using Rake, Ruby’s build tool. A page, for instance /about, is created by invoking rake new_page[about/index.markdown], and a post by doing rake new_post["title"]. Pages live in project/source, and posts live in _posts within the same directory. Doing a rake generate compiles the markdown files into HTML.

Some useful rake commands are:

rake generate (to compile pages and posts to HTML)
rake watch (regenerates/compiles SASS to CSS)
rake preview (to run the blog locally on http://localhost:4000)

Conclusion

I’m glad that I only have to do this once. Whew. My new Octopress blog can be viewed at menacestudio.github.io, although I’m probably going to move it to my own custom domain sometime in the near future.

I’ve been interested in Node.js for quite some time now, but I hadn’t really realized its potential or found a use case. The extent of my experience was basically creating a quick Express.js app and seeing it run in the browser; no formal development whatsoever. I had come across Sails.js in the past, but that was when Node was still in its early stages, or at least before I was interested in the platform. Sails.js sits on top of Express.js, using the same MVC pattern as Rails.

I watched the introduction video for Sails.js again today and was amazed at how powerful it is. The ease of creating a model, the JSON API endpoints you get by default, and the Socket.IO integration are the main selling points for me. At the moment, there aren’t a whole lot of examples on the web, so I’m going to share a few things that I’ve learned about the framework.

MVC

According to the Sails.js documentation, the implementation is very similar to Rails, and if you understand how MVC works, it should be easy to decipher the code in their documentation. The way queries are made is functional in nature, which I like, but the best part of all this is that it’s all in JavaScript.

After installing via NPM, I began researching what the deployment strategy could be. Since I’m still new to Git and Heroku, I picked up a few commands that are integral to the Sails.js and Heroku deployment workflow (and to setting up Git in any case).

Git commands

Set global GitHub config.

$ git config --global github.user your_user_name

Create new Git repo and commit.

$ cd project_name && git init && git add . && git commit

Get the remote projects.

$ git remote -v

Push latest commit to Heroku.

$ git push heroku master

Heroku commands

Create a project (within the project directory).

$ heroku create or $ heroku create app_name

Authenticate with Heroku server.

$ heroku login

Add your public key to Heroku.

$ heroku keys:add .ssh/id_rsa.pub

Add key automatically.

$ heroku keys:add

Get all projects in Heroku.

$ heroku list

Open the Heroku app in the browser.

$ heroku open

Logs.

$ heroku logs

Heroku issues on deployment

The two main issues I encountered while deploying are highlighted below. The application immediately threw a generic error, and I had to use heroku logs to inspect the events.

H14 No web processes running. This error shows when the web dynos are set to 0. Fixing it is a matter of scaling to 1, according to the Error Codes page.

$ heroku ps:scale web=1

Immediately after issuing that command, I was faced with another issue: it complained about a non-existent web type.

No such type as web. This issue took me a while to figure out; after doing some research, I resolved it by adding a Procfile to the root of my Sails.js application with the following line.

web: node app.js

I also changed /config/application.js so the port comes from Heroku (1337 being Sails’ default):

port: process.env.PORT || 1337,

After making those changes and issuing git push heroku master, I had to issue heroku ps:scale web=1 once again after the changes had been formally deployed to Heroku.

Pushing to GitHub and Heroku repo simultaneously

Another helpful strategy: instead of issuing separate git push commands to Heroku and GitHub, push to both at once. The .git/config can be modified to do this.

[remote "heroku"]
    url = git@heroku.com:<heroku_repo>.git
    fetch = +refs/heads/*:refs/remotes/heroku/*

[remote "github"]
    url = git@github.com:<github_username>/<github_project>.git
    fetch = +refs/heads/*:refs/remotes/github/*

[remote "all"]
    url = git@github.com:<github_username>/<github_project>.git
    url = git@heroku.com:<heroku_repo>.git
    fetch = +refs/heads/*:refs/remotes/origin/*
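
With the "all" remote defined, a single command pushes to both repositories:

$ git push all master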

Conclusion

After the SSH keys have been added to Heroku and GitHub (see the GitHub and Heroku documentation on generating a new key), everything works great. I also realized that you only need a single key for both accounts.

Many hours of source control setup later, the learning of Sails.js/Node, and the fun, begins.

[Git][git] is an [open source version control system][git-github] that has become popular in the open source community. Large companies such as Google, Facebook, and Microsoft have adopted it (well, anything they release as open source is most likely hosted at [GitHub][gh]). I have experience in both Subversion and [TFS version control][tfs], but Git is slightly different in how things are accomplished. Since the cool kids tend to use a command line interpreter (Bash, Command Prompt, Terminal, etc.) these days, it’s hard not to be intrigued by this movement. There’s something about the black background and white font that fascinates me as well.

After playing around with [Git][git] for a few days, I’ve put together a list that breaks down the basic source control
operations as well as the underlying commands. This is more like a quick reference that I can always go back to.

This requires [Git][git] to be installed locally (as a global). Installing the [GitHub Windows][gh-windows] or Mac version will pretty much install everything that you need, including a GUI. The Git team also created a handy [Git reference][gitref] for guidance when starting out or finding out how a command works. There’s also a nice [Code School tutorial][try-git] for trying out [Git][git] commands and quickly getting up to speed with the concepts.

The danger with Git is executing a command without knowing the consequences of the action, especially if you’re working with production code. I’m so used to the [Team Foundation Server UI][tfs] that it can be hard not having a visual of the changes. At the same time, the thought of using a command line brings back the old programmer side of me (although not that old either) ;-).

The next section summarizes the more important aspects of Git and describes the commands in a simple, grouped manner. The notes for each command are my own interpretation from learning Git as a source control system.

#### Initialization
Go outside the current directory and initialize

git init [project folder name]

#### Staging
Add. You need to browse to the working directory and perform add for new/modified file(s) for staging.

git add . // Adds all files and subdirectories in the current directory to staging.
git add * // Alternatively, adds all files but not subdirectories.
git add '*.js' // Adds all .js files, including those in subdirectories.
git add filea.js index.html // Or stage individual files.

Reset, or "unstage". This reverts the staging area to its state before files were staged.

git reset HEAD -- [file] // Unstage the file, keeping changes in the working directory.
git reset --soft HEAD~1 // Undo the last commit, keeping the changes staged.
git reset --hard HEAD~1 // Undo the last commit, unstage the files, and discard all changes in the working directory.

Remove.

git rm [file] // Removes the file from staging and deletes it.
git rm --cached [file] // Removes the file from staging but keeps it on disk.

To undo changes to a file and bring back the version from the last commit:

git checkout -- [file]

#### Getting Status and Log
To view Git status. Any modified file in the current directory will show in the status, along with whether the file(s) have been staged or not.

git status -s // short output
git log // chronological order of commits to local repository

#### Commits
Commits only go to your local repository. To commit your staged snapshot:

git commit -m "Commit changes"

#### Pushing to Remote Repositories

git remote add origin git@github.com:[username]/[git_repo_name].git

The command below pushes the files to 'origin' (GitHub); the default local branch is 'master'.
The -u flag remembers the push parameters, setting up the local branch to track the remote branch.

git push -u origin master

Any changes made on GitHub will be pulled down locally.

git pull origin master

#### Stash
Saves the current working state for later and restores the last commit.

git stash
git stash list // Get all stashes
git stash apply // Applies latest stash OR add [stash_reference]
git stash drop [stash_reference] // Removes a stash

#### Branching

git branch [branch_name]
git branch // To view current branches.
git checkout [branch_name] // To switch branch.

#### Notes
- Double quotes and single quotes (apostrophes) work similarly.

[gitref]: http://gitref.org/index.html
[git]: http://git-scm.com/
[git-github]: https://github.com/git/git
[try-git]: http://try.github.io/levels/1/challenges/1
[tfs]: http://en.wikipedia.org/wiki/Team_Foundation_Server
[gh]: https://github.com/
[gh-windows]: http://windows.github.com/

Today, I was looking for a way to automatically save to [Evernote][1] any emails on my [Gmail][3] account that are labeled "Articles". I have filters that target the email addresses I’ve specified and automatically label them with "Articles". This makes it convenient, since everything is in one place when I have time to read. I’ve been using [Evernote][1] lately and happen to like how it takes a snapshot of articles that I want saved so I can read them later (even if the original source goes away).

There are [Zapier][4] and [IFTTT][5], which I use for automating my Instagram and other online services. But I soon realized that the emails being forwarded are in plain text, which makes the articles unreadable and useless.

So the hunt continued for a service that would automate the archiving of articles from [Gmail][3] to [Evernote][1]. While searching, I encountered this [article][2], in which the author uses [Google Drive][7] to create [Google Apps scripts][6] that interact with [Gmail][3]. I’ve seen several people use the [script API][6] before but never had an opportunity to play with it until now. I decided to give it a shot; it seemed like a perfect opportunity to play with something new ;-). It basically uses JavaScript to call the [Gmail classes][6], and the [Google script][8] interface allows you to debug and build projects so they can be automated on a schedule.

The script below was taken from this [article][2]; modifying a few parameters should make it work out of the box. "Misc/Articles" is a nested label that I use to tag incoming emails from specific sources identified by a filter I created. The "special_evernote_email" is a special email address that [Evernote][1] assigns to you so you can send an email as a note to their service; you can find it on your [Evernote settings][1] page. The last two strings are the hashtags I want to use to identify those items in [Evernote][1].

The forwardEmails function is the entry point of the script; it calls the forwardThreads function, passing in the custom parameters. The only issue I had was with the label: the label provided has to exist, otherwise labels will be null and calling labels.getThreads() terminates the execution of the script.

#### Steps
1. Go to your [Google Drive][7] and choose Create > Script (if Script is not available, you have to choose "connect more apps").
2. This opens the [Google Script][8] page, where you’ll create a new "Blank Project".
3. Paste the code below, pick "forwardEmails" as the function to execute, then hit Run to test that it works properly.
4. Click on the clock icon to create a trigger that schedules the function to run at specific times (hourly, etc.).

#### The script

```javascript
function forwardEmails() {
  forwardThreads("Misc/Articles", "special_evernote_email@m.evernote.com", "@Articles/Subscriptions #archived #articles");
}

function forwardThreads(label, addr, subjSuffix) {
  var maxSubjLength = 250;
  var applylabel = GmailApp.getUserLabelByName("EN_Archive");

  // Send individual and threaded emails.
  var msgs, msg, i, j, subject, options, labels, page;
  labels = GmailApp.getUserLabelByName(label);
  var threads = labels.getThreads();

  for (i = 0; i < threads.length; i++) {
    msgs = threads[i].getMessages();
    for (j = 0; j < msgs.length; j++) {
      msg = msgs[j];
      subject = msg.getSubject();
      if (subject.length + subjSuffix.length > maxSubjLength) {
        subject = subject.substring(0, maxSubjLength - subjSuffix.length);
      }

      options = { htmlBody: msg.getBody(), attachments: msg.getAttachments() };

      GmailApp.sendEmail(addr, subject + " " + subjSuffix, msg.getBody(), options);
    }
  }

  while (!page || page.length == 100) {
    // Get the threads anywhere from 0 to 100.
    page = labels.getThreads(0, 100);

    // Pause to keep request rates to Gmail down (Google Apps errors).
    Utilities.sleep(1000);

    // Apply the new label; move the thread out of the other label.
    applylabel.addToThreads(page);
    labels.removeFromThreads(page);
  }
}
```

[1]: https://evernote.com/
[2]: http://www.gavinadams.org/blog/2012/08/20/archiving-gmail-to-evernote/
[3]: http://gmail.com
[4]: https://zapier.com/
[5]: https://ifttt.com
[6]: https://developers.google.com/apps-script/reference/gmail/
[7]: https://drive.google.com
[8]: https://script.google.com

I recently ran into an issue where I have a fairly nested directive, and the directive itself contains an input that requires a decorator-type directive, such as one for validation. As for the title of this blog post, I figure that with each Angular 1.x release (and 2 in the near future), it’s probably best to tag these posts accordingly, since each version introduces new syntax that might not work if you’re still on an earlier version.

The problem

I’ll be using TypeScript for my example. For simplicity, let’s say that you have a directive that validates an input. The directive is called validateInputInner, and we would like to use it in another directive.

return <ng.IDirective>{
    restrict: 'A',
    require: 'ngModel',
    link: link
};

function link($scope, $element, $attrs, ctrl) {
    var validateInput = (inputValue) => {

        // some validation logic goes here and sets isValid...
        var isValid = true; // placeholder
        ctrl.$setValidity('validateInputInner', isValid);

        return inputValue;
    };

    ctrl.$parsers.unshift(validateInput);
    ctrl.$formatters.push(validateInput);

    // Observe attribute changes
    $attrs.$observe('validateInputInner', (comparisonModel) => {
        return validateInput(ctrl.$viewValue);
    });
}

For normal usage, the directive can simply be used as:

<input type="text" validate-input-inner="{{vm.someModel}}" />

It gets a little more complex when you embed the same directive within another directive, such as:

<test-component data-ng-model="vm.someModel" validate-input-inner="vm.secondaryModel"></test-component>

The second directive, called testComponent, will use the previous directive as part of the component’s validation. The code for testComponent is below; please note the placement of the validate-input-inner.

Solution

We would like to use the testComponent directive as a wrapper component that exposes a scope property that feeds into the validation directive. We could also have reused the existing model, but for this example, we’ll assume that validate-input-inner needs to validate another model in addition to the main one.

testComponent.$inject = ['$compile'];
function testComponent(
    $compile: ng.ICompileService): ng.IDirective {
    return <ng.IDirective>{
        restrict: 'E',
        replace: true,
        require: 'ngModel',
        scope: {
            model: '=ngModel',
            validateInputInner: '=?',
        },
        link: link,
        template: `<div>
            <input type="text" class="form-control" 
            data-ng-model="model" validate-input-inner="{{validateInputInner}}"  /></div>`
    };
    // more code, including the link function shown below...
}

You might assume that this will work as is, since testComponent uses the same approach as the directive did on its own, and the scope property gets funneled down to the directive.

Surprisingly enough, it doesn’t. The validate-input-inner directive works by itself but becomes unaware of changes when it sits inside a template-based directive. It could perhaps have been rewritten to use $watch as opposed to $observe, but given the scenario we’re in, one way I found to make it work is to use a $watch inside the testComponent itself.

We will need to inject $compile, which is the Angular way to dynamically compile a string into a usable DOM element. In our case, the input element is nested within a div, which is why we need a reference to it using jqLite.

function link($scope, $element, $attrs) {
    var $input = $element.children('input');

    $scope.$watch('validateInputInner',(val) => {
        $input.attr('validate-input-inner', val);
        $compile($input)($scope);
    });
}

I added $scope.$watch and called $compile to refresh the validateInputInner state. Please keep in mind that this operation is DOM-intensive and is not always the best solution; it’s one way out of (I’m sure) hundreds of ways of solving this. I’ll explore more ways to solve this and expand on the scenario in the future.

In the meantime, if you have suggestions or other ways to solve this, please feel free to comment or contact me.

Getting started with Ionic actually isn’t a straightforward process. While the technology being used is basic HTML, CSS, and JavaScript, the underlying process still requires the Android and/or iOS tooling. I will focus on the Android piece, but most of the information translates easily to iOS.

This post assumes that Node.js and NPM are installed on your system. I’m also using Windows 8, so your file paths may vary.

To get started with development on Ionic, there are a few things you will need, and they fall into three main parts. The first part is Apache Cordova itself, a framework that allows you to use HTML, CSS, and JavaScript to build the application. The advantage is that it gives you a standardized API to build on multiple mobile platforms (Android, iOS, etc.) using a single code base.

The second part is the installation, or at least having the proper environment and/or SDK that Cordova can communicate with. This part takes the longest and is where the headaches come in if you’re not familiar with the process. The third part is the tooling, which uses a combination of the command line (to serve, deploy, emulate, etc.) and the actual IDE you write the code with.

This blog post is not meant as an exhaustive guide, as there’s a ton of information involved. I will focus on the three main parts of the installation and try to be as focused and straightforward as possible.

1. Ionic framework setup

To install Ionic and Cordova globally using NPM:

npm install ionic cordova -g

2. Environment setup

If you are developing for Android like I am, you will have to install a few things. First and foremost, you will need the Java SE Development Kit. The next step is to install the Android SDK; I chose to use Android Studio, which includes the tools plus the SDK. Next, install WinAnt, a Windows installer for Apache Ant. Apache Ant is a Java command-line tool that will be used to build the *.apk file for deployment to an actual Android device.

After the installation, the Android platform-tools directory will need to be added to the system path for easy access. This can be set in Control Panel > System > Advanced system settings > Environment Variables > Path.

C:\Users\<User>\AppData\Local\Android\sdk\platform-tools

You will also need to add an Android package to build against. This can be done through the Android SDK Manager, by selecting and downloading the versions you need.

Alternative: Ionic Box

I won’t go into the details of using Ionic Box, but it’s an alternative. Ionic Box is a lightweight, ready-made environment that avoids the hassle of configuring Java and the Android SDK altogether in a Windows environment. It requires VirtualBox and Vagrant to simulate an environment for building with Ionic and Cordova. Vagrant is a tool for quickly creating virtual machine environments, and it uses VirtualBox for the VM itself.

After you have downloaded Ionic Box from GitHub, you can use the command prompt to get into that directory, then run vagrant up to download and set up the environment. This will install an Ubuntu VM and configure it within VirtualBox itself. (Note that this might take some time at first, as it downloads a few dependencies to run the environment.)

3. Tooling and Ionic commands

To create a new app (template options are blank, tabs (the default), and sidemenu):

ionic start <app name> <optional template name>

To configure the platform for Android (use ios if you’re building for iOS):

ionic platform add android

To change the URL where the environment is served:

ionic address

Basic Ionic commands available

To test and make sure that everything has been installed properly, as far as communicating with the emulator is concerned (adb = Android Debug Bridge), open the command prompt and type:

adb

To build. This step is required prior to emulating or running on an actual device; it creates the *.apk files.

ionic build android 

Testing

The command for spinning up an Ionic server instance

ionic serve    

In addition, if you want to launch a side-by-side iOS and Android browser emulation

ionic serve --lab

There’s a project called Ripple Emulator that allows you to emulate different devices via Chrome. You can install it via NPM, then run it:

npm install -g ripple-emulator
ripple emulate --path platforms/android/assets/www

To emulate in the Android environment and launch the app.

ionic emulate android

To run on an actual device (it will fall back to emulation mode if a device is not detected):

ionic run android

Tools

If you’re using Visual Studio as your IDE, there’s Visual Studio Tools for Apache Cordova, which has built-in support for debugging, emulating, creating new mobile projects, etc. I also discovered Telerik AppBuilder last week, which I personally haven’t tried yet; I will have to do a trial and see if it helps me quickly build an app.

Lastly, if all you care about is building the app and you’re OK with debugging in the browser, all you need is an IDE like Sublime or WebStorm.

On my next post, I will focus on the actual development in Ionic. I hope that you find this helpful and informative. Feel free to contact me for any questions.

I’ve installed web applications on various IIS versions on different Windows platforms, and I always find some tasks to be annoying. Here are some common issues and how to get them resolved. This post is specific to IIS on Windows 8. I will keep it as up to date as possible as I encounter or think of more.

Problem 1

  • By default, if you’re using an app pool with ApplicationPoolIdentity as the process model identity, you will get an error if your connection string is set to Integrated Security=true, since authentication is then tied to that built-in identity.

    Solutions

    a.) Set it to false and configure/grant an actual user to connect to your local SQL database instance (see the sample connection string after this list). This can be configured in SQL Server Management Studio (SSMS).

    b.) Set the process model identity in the Application Pool, instead of built-in account to a custom account using your Windows credentials.
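
For illustration, a connection string using SQL authentication instead of integrated security might look something like the following (the server, database, and credentials here are hypothetical):

<add name="Default"
     connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=false;User ID=appUser;Password=secret"
     providerName="System.Data.SqlClient" />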

Problem 2

  • If you’re getting a 401 (unauthorized access) on static resources (CSS, JS, etc.), this means that the default IIS account doesn’t have permission to read these files.

    Solutions

    a.) In IIS Manager, select the website and go to Authentication > Anonymous Authentication; make sure it’s enabled (at a minimum) and set to a user that has permission. By default, it uses IUSR for anonymous access.

    b.) You can also go to the website directory itself and add IUSR to the list of accounts permitted to read it: right-click the folder and choose Properties > Security > Edit. With this approach, you can keep anonymous authentication set to the application pool identity, since the permission is granted to the built-in user account.

Side notes

  • In Windows 8, the command aspnet_regiis -i doesn’t work anymore, so if you don’t have ASP.NET 4.x installed, adding it can be accomplished by going to Programs and Features > Turn Windows features on or off, then looking for ASP.NET 4.x. Feel free to refer to this article for more information.

Feel free to comment or offer some insight if you find this post valuable or have encountered issues outside of what I highlighted in this post.

If you’re using StructureMap 3.1.x as your IoC container for .NET, you might have encountered the message "StructureMap.ObjectFactory is obsolete: 'ObjectFactory will be removed in a future 4.0 release of StructureMap. Favor the usage of Container class for future work'".

The old way of configuring the StructureMap dependency resolver was to do something like:

public IContainer Container
{
    get
    {
        return (IContainer)HttpContext.Current.Items["_Container"];
    }
    set
    {
        HttpContext.Current.Items["_Container"] = value;
    }
}

DependencyResolver.SetResolver(new SMDependencyResolver(() => Container ?? ObjectFactory.Container));

ObjectFactory.Configure(cfg =>
{
    cfg.AddRegistry(new StandardRegistry());
    cfg.AddRegistry(new ControllerRegistry());
    cfg.AddRegistry(new ActionFilterRegistry(() => Container ?? ObjectFactory.Container));
    cfg.AddRegistry(new MvcRegistry());
    cfg.AddRegistry(new TaskRegistry());
    cfg.AddRegistry(new ModelMetadataRegistry());
});

ObjectFactory then allows the registration of dependencies, either in your global.asax or in some static class instance within your solution.

To get rid of this message, one possible solution I found online was to pass an instance of IContainer into the controller if you’re using ASP.NET MVC or Web API.

public class MyController
{
    public MyController(IContainer container)
    {    
    }
}

Perhaps a better (or cleaner) approach is to define a new ObjectFactory that returns a static IContainer instance.

public static class ObjectFactory
{
  private static readonly Lazy<Container> _containerBuilder =
    new Lazy<Container>(defaultContainer, LazyThreadSafetyMode.ExecutionAndPublication);

  public static IContainer Container
  {
    get { return _containerBuilder.Value; }
  }

  private static Container defaultContainer()
  {
    return new Container(x =>
    {
        x.AddRegistry(new StandardRegistry());
        x.AddRegistry(new ControllerRegistry());
        x.AddRegistry(new ActionFilterRegistry(
          () => Container ?? Infrastructure.ObjectFactory.Container));
        x.AddRegistry(new MvcRegistry());
        x.AddRegistry(new TaskRegistry());
        x.AddRegistry(new ModelMetadataRegistry());
    });
  }
}

In the global.asax, you can get the ObjectFactory.Container instance using:

var container = Infrastructure.ObjectFactory.Container;

The container variable can then be used to extend and configure additional dependencies and settings.
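
For instance, a minimal sketch of registering one more dependency at startup (the interface and implementation names here are hypothetical):

container.Configure(x => x.For<IMessageService>().Use<EmailMessageService>());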

Weeks ago, I read a [blog post][nathan] that I found off of [Hacker News][hn], which apparently was published many years ago. It was something to the effect that everyone should blog even if you don’t have any readers. It makes a great argument that writing down your ideas makes them concrete (it adds personal value), as well as making you a better writer, as opposed to just keeping them in your head.

The concept is similar to telling or teaching someone what you just learned and recalling every bit of detail to make the other person understand. It forces your mind to think and makes a good challenge in finding out how much you really know about the subject. I find that this is also true when it comes to presentations or writing a book (not that I’ve written one); having to present those ideas to an audience requires you to research and dig deeper to fill in the gaps. The transfer of knowledge is valuable to the receiver (readers or listeners) as well as to the person telling the story.

I subscribe to tons of blogs by knowledgeable people in the web development industry, and read tons of articles on a daily basis. I also buy digital books from sources such as [LeanPub][leanpub] from time to time, just to read up on new concepts and ideas. I love the fact that websites like [LeanPub][leanpub] allow any person to publish and share ideas with the world in a much simpler process. There’s no publisher that gets in the way of the publishing process, and subscribers get updates instantaneously.

Anyone with a passion for a subject should blog about it, because it’s good for you and everyone! In terms of knowledge about the subject, it’s been said that you don’t really own what you know until you can repeat it and say it out loud. Having to write about what you know will make you realize how much you know and don’t know, plain and simple.

[nathan]: http://nathanmarz.com/blog/you-should-blog-even-if-you-have-no-readers.html
[hn]: https://news.ycombinator.com/
[leanpub]: https://leanpub.com/
[tumblr]: http://www.tumblr.com/
[wp]: http://wordpress.org/