
Installing Ghost on Azure is simple and can be done by creating a new website from the gallery. As of this writing, Azure installs version 0.3.3, while the latest Ghost release is 0.4. You can go down that route or take the manual route, which I cover in this post. I also tried setting it up from the Azure gallery, but it was too easy for my taste, and besides, I like having my files backed up in Dropbox.

The goal of this post is to keep the instructions as minimal and simple as possible so they can be referenced over and over again.

Installation

Install the Azure command-line tools via NPM. I found this easier than doing a Windows installation through the Web Platform Installer.

npm install azure-cli -g

The next step is to download the publish settings file:

azure account download

Once the .publishsettings file has been downloaded to your local directory, import the settings with the command below. Don't forget the quotes around the file name. After a successful import, delete the settings file.

azure account import "<file>.publishsettings"

To create a new Azure website via the command line, run the following. It will ask for the name of the site, location, username and password.

azure site create --git

It will make you go to Azure and log in to your account. If you go to your newly created website, click on 'Deployments', where the Git deployment has been created. The instructions for pushing to the Git repository will be there.

One change that you will have to make in config.js (the production configuration) is to set the port to process.env.PORT.

server: {
    host: '127.0.0.1',
    port: process.env.PORT
}
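Locally, process.env.PORT is usually unset, so a fallback helps when running the site on your own machine. A minimal sketch of that fallback logic (using 2368, Ghost's conventional default port, is an assumption here rather than something from the steps above):

```javascript
// Resolve the port the way the config above does, with a local fallback.
// 2368 is assumed as Ghost's conventional default port.
function resolvePort(env) {
  return env.PORT || 2368;
}

console.log(resolvePort({ PORT: '8080' })); // Azure-assigned port wins: 8080
console.log(resolvePort({}));               // local fallback: 2368
```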

Add the remote pointing to your Azure Git repository (the endpoint is listed on the Deployments page):

git remote add azure your_azure_git_end_point.git

Finally, since Azure looks for a server.js file, you need to copy index.js:

copy index.js server.js

After all the local file changes, commit your changes locally:

git add .
git commit -m "First commit"

And push to Azure. This will ask for the Git repository password that you set up either on the command line or on the Azure website.

git push azure master

Browse to the URL that you created on the Azure server and make sure that it's running properly. You will need to create a new login by visiting the admin login page, /ghost.

Update: March 17, 2015

Going through this process once again when I updated Ghost to the newest version, I encountered an issue when pushing my Git repository to Azure: "fatal: The remote end hung up unexpectedly."

The reason for this is the huge number of files being pushed up to the cloud. After doing some research, I found that increasing Git's HTTP post buffer takes care of the issue.

git config http.postBuffer 524288000

I spent numerous hours messing with the command line and figuring out the deployment process for Octopress. I recently migrated all my blog posts from Menace Studio, which was running the Orchard Project CMS. I really liked that CMS, which is built around ASP.NET MVC with a SQL Compact back end. My main issue with Orchard, as well as others such as WordPress, is that they all try to be an all-in-one solution (blogs, ecommerce, personal sites, etc.), which makes them really bulky.

The main reason I switched to Octopress is that the site is entirely statically generated content. The posts and pages are basically HTML generated from Markdown, which makes the content easier to manage and back up. I use Dropbox to store the source files, which include the generated HTML to be deployed.

Deploying to either Heroku or GitHub Pages is pretty straightforward, but using a custom domain can be a little challenging. One article in particular helped me set up a custom domain for GitHub fairly quickly. I had a few stumbling blocks along the way, and I wanted to write this post to bring together all the information needed to deploy Octopress with a custom domain as quickly as possible.

Deployment instructions for GitHub Pages and Heroku can be found in the Octopress documentation.

One thing that I would warn people about: never set up your local Git repository to deploy to both Heroku and GitHub. Maintenance becomes a pain, since you have to make sure that you're consistently pushing changes to both remote locations. Otherwise, things get out of sync, and the only way to fix it is to delete the .git directory. If you follow the instructions above correctly, things should go smoothly.

Custom domain using GitHub pages

Go to your domain name registrar (mine is GoDaddy) and add the settings below for the DNS.

  • Add A (Host) and point to 204.232.175.78.
  • Add a CNAME (Alias) for www and point to <username>.github.io.

To initiate a Git repository and set up deployment to GitHub Pages (as the default):

rake setup_github_pages
rake generate
rake deploy
git add .
git commit -m "deploying"
git push origin master

Go to your <Octopress project>/source/ directory, add a CNAME file, and put in your custom domain name without the http or www (e.g. menacestudio.com). When I made the changes above, it took more than 5 hours for them to take effect, so you have to be really patient (and yes, it does suck to wait that long).

Custom domain using Heroku

The process for deploying to Heroku is pretty similar, but the DNS resolution seems to take effect very quickly; mine took literally a minute. I took the GitHub Pages route in the beginning but decided to switch to Heroku, since the custom domain setup is easier. Pushing files to the Heroku server seems to take 10 times longer, though, since it tries to resolve dependencies.

Go to your domain name registrar and add the settings below for the DNS.

To create a new Heroku app and set it as the default remote repository, run the commands below. As a side note, for the heroku command to work, you will have to install the Heroku Toolbelt, which consists of tools that are helpful when deploying to the Heroku server.

heroku create
git config branch.master.remote heroku

Pushing to Heroku is accomplished by performing the steps below.

rake generate
git add .
git commit -m "deploying"
git push heroku master

In addition to the Octopress documentation on Heroku custom domains, Heroku's own documentation goes into detail about custom domains. A useful command for checking the status of your DNS changes is nslookup <your_domain_name>, which tells you which IP(s) your domain points to.

I was trying out Octopress to see what the buzz is about. I'm a fan of static site generators such as Scriptogr.am, which I use for this blog, and I wanted to explore other options such as Jekyll, which Octopress is based on. This blog post is meant to document my experience with the installation and the things I encountered along the way, including how to fix them. I have also been working with the command prompt a lot lately, so I wanted a bit of a challenge to see if I could do it. Granted, it takes some work to get Octopress running, so it requires a lot of patience, research and following the documentation to a tee. I'm also running all of this on Windows.

Octopress installation

While reading the setup documentation and stepping through the installation process, I realized that I was using Ruby 1.8.7 (or at least the system only recognized that version). Even though I had 1.9.3 installed via the Ruby Installer for Windows, I made a copy of the Ruby installation folder and moved it to c:\ruby193 to make referencing things easier.

Since the system was only recognizing 1.8.7, I had to install Pik so I could switch the Ruby version (check it with ruby -v) to 1.9.3. Pik is basically a tool to manage multiple versions of Ruby on Windows. I used RubyGems (gem install pik -i c:\ruby193) to install Pik in that location, then had Pik register the path to Ruby 1.9.3 with pik add c:\ruby193\bin. To switch to 1.9.3, run pik use 193 (193 is the option shown when listing the Ruby versions installed on your system with pik list). As a reference, I used the Node command prompt for performing these operations.

One last important change: I added the \bin path to the Windows environment variables. The Ruby installer had also installed a copy of Ruby in Program Files, so I uninstalled it (I already had a copy at c:\ruby193) to fix a read-only error when installing gems on the command line.

After taking care of the Ruby versioning and gem installation, I switched to Git Bash for executing the Octopress commands and setting it up for GitHub. In the middle of the Octopress installation, after doing a bundle install, it complained that the "fast-stemmer" native extension requires installed build tools. It gives instructions on how to include the build tools and where to download the DevKit. After installing the DevKit, I was able to run bundle install again to finish installing the libraries and their dependencies (sass, compass, haml, jekyll, fast-stemmer, etc.).

Hosting and deployment

I chose GitHub Pages to host the blog since it's free and widely popular within the web developer community.

When doing a rake deploy, GitHub complained: "Git push failed - non-fast-forward updates were rejected." The issue was that the local files were out of sync with the remote repository. Issuing git pull origin master did not fix it, so I had to delete the repository from GitHub and create a new one from scratch without the readme.md file. After several attempts at pushing changes to GitHub and doing rake generate and rake deploy on the Octopress project, everything went well.

Creating post and pages

If you're familiar with Ruby, deployment and content creation are driven by Rake, Ruby's build tool. Creating a page, for instance /about, is done by invoking rake new_page[about/index.markdown], and a post with rake new_post["title"]. Pages live in <project>/source and posts in _posts within the same directory. Doing a rake generate compiles the Markdown files into HTML.

Some useful rake commands are:

rake generate (to compile pages and posts to HTML)
rake watch (regenerates/compiles SASS to CSS)
rake preview (to run the blog locally on http://localhost:4000)

Conclusion

I’m glad that I only have to do this once. Whew. My new Octopress blog can be viewed at menacestudio.github.io although
I’m probably going to move it to my own custom domain sometime in the near future.

I've been interested in Node.js for quite some time now, but I hadn't really realized its potential or found a use case. The extent of my experience was basically creating a quick Express.js app and seeing it run in the browser; no formal development whatsoever. I had come across Sails.js in the past, but that was when Node was just in its early stages, or at least before I was interested in the platform. Sails.js sits on top of Express.js and uses the same MVC pattern as Rails.

I watched the introduction video for Sails.js again today and was amazed at how powerful it is. The ease of creating a model and a JSON API endpoint by default, along with built-in Socket.IO support, are the main selling points for me. At the moment, there are not a whole lot of examples on the web, so I'm going to share a few things that I've learned about the framework.

MVC

According to the Sails.js documentation, the implementation is very similar to Rails, and if you understand how MVC works, it should be easy to decipher the code in their documentation. The way queries are made is functional in nature, which I like, but the best part of all this is that it's all in JavaScript.
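To illustrate what "functional in nature" means, here's a plain-JavaScript sketch of chained, functional-style querying. This is not Sails' actual query API, just the general shape of the idea:

```javascript
// Compose small functions to filter and project over a collection;
// the general shape of functional-style querying (not Sails' real API).
function where(rows, predicate) {
  return rows.filter(predicate);
}

function pluck(rows, field) {
  return rows.map(function (row) { return row[field]; });
}

var users = [
  { name: 'ada', active: true },
  { name: 'bob', active: false }
];

console.log(pluck(where(users, function (u) { return u.active; }), 'name')); // [ 'ada' ]
```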

After installing it via NPM, I began researching what the deployment strategy could be. Since I'm still new to Git and Heroku, I picked up a few commands that are integral to the Sails.js and Heroku deployment workflow (and to setting up Git in general).

Git commands

Set global GitHub config.

$ git config --global github.user your_user_name

Create new Git repo and commit.

$ cd project_name && git init && git add . && git commit

Get the remote projects.

$ git remote -v

Push latest commit to Heroku.

$ git push heroku master

Heroku commands

Create a project (within the project directory).

$ heroku create
$ heroku create app_name

Authenticate with Heroku server.

$ heroku login

Add your public key to Heroku.

$ heroku keys:add .ssh/id_rsa.pub

Add key automatically.

$ heroku keys:add

Get all projects in Heroku.

$ heroku list

Open the Heroku app in the browser.

$ heroku open

View the application logs.

$ heroku logs

Heroku issues on deployment

The two main issues I encountered while deploying are highlighted below. The application immediately threw a generic error, and I had to use heroku logs to inspect the events.

H14 - No web processes running. This error shows up when the web dynos are set to 0. Fixing it is a matter of scaling to 1, according to the Error Codes page.

$ heroku ps:scale web=1

Immediately after issuing that command, I was faced with another issue: a complaint about a non-existent web process type.

No such type as web. This issue took me a while to figure out. After doing some research, I resolved it by adding a Procfile to the root of my Sails.js application with the following line.

web: node app.js

I also changed /config/application.js to use the port Heroku assigns, falling back to 1337 (the Sails default).

port: process.env.PORT || 1337,

After making those changes and issuing git push heroku master, I had to run heroku ps:scale web=1 once again after the changes had been formally deployed to Heroku.

Pushing to GitHub and Heroku repo simultaneously

Another helpful strategy, instead of issuing separate pushes to Heroku and GitHub, is to push to both at once. The .git/config file can be modified to do this.

[remote "heroku"]
    url = git@heroku.com:<heroku_repo>.git
    fetch = +refs/heads/*:refs/remotes/heroku/*

[remote "github"]
    url = git@github.com:<github_username>/<github_project>.git
    fetch = +refs/heads/*:refs/remotes/github/*

[remote "all"]
    url = git@github.com:<github_username>/<github_project>.git
    url = git@heroku.com:<heroku_repo>.git
    fetch = +refs/heads/*:refs/remotes/origin/*

Conclusion

After the SSH keys have been added to Heroku and GitHub (see the GitHub and Heroku documentation on generating a new key), everything works great. I also realized that you only need a single key for both accounts.

Many hours of source control setup later, the fun of learning Sails.js and Node begins.

[Git][git] is an [open source version control system][git-github] that has become popular in the open source community. Large companies such as Google, Facebook and Microsoft have adopted it (and anything they release as open source is most likely hosted on [GitHub][gh]). I have experience with both Subversion and [TFS version control][tfs], but Git is slightly different in how things are accomplished. Since the cool kids tend to use a command line interpreter (Bash, Command Prompt, Terminal, etc.) these days, it's hard not to be intrigued by this movement. There's something about the black background and white font that fascinates me as well.

After playing around with [Git][git] for a few days, I’ve put together a list that breaks down the basic source control
operations as well as the underlying commands. This is more like a quick reference that I can always go back to.

This requires [Git][git] to be installed locally and available globally. Installing the [GitHub Windows][gh-windows] or Mac client will pretty much install everything you need, including a GUI. The Git team has also created a handy [Git reference][gitref] for guidance when starting out or finding out how a command works. There's also a nice [Code School tutorial][try-git] for trying out [Git][git] commands and quickly getting up to speed with the concepts.

The danger with Git is executing a command without knowing the consequences of the action when you're working with production code. I'm so used to the [Team Foundation Server UI][tfs] that it can be hard not having a visual of the changes. At the same time, the thought of using a command line brings back the old programmer side of me (although not that old either) ;-).

The next section summarizes the more important aspects of Git and describes the commands in a simple, grouped manner. The notes for each command are my interpretation while learning Git as a source control system.

#### Initialization
Go outside the current directory and initialize

git init [project folder name]

#### Staging
Add. You need to browse to the working directory and perform add for new/modified file(s) for staging.

git add .                   // Adds all files and subdirectories in the current directory to staging.
git add *                   // Alternatively adds all files but not subdirectories.
git add '*.js'              // Adds all .js files, including those in subdirectories.
git add filea.js index.html // Or stage individual files.

Reset or "unstage". This reverts the staging area to its state before the files were staged.

git reset HEAD -- [file] // Unstage the file, keeping changes in the working directory.
git reset --soft HEAD~1  // Undo the last commit, keeping the changes staged.
git reset --hard         // Unstage files and discard all changes in the working directory.

Remove.

git rm [file]          // Removes the file from staging and deletes it from disk.
git rm --cached [file] // Removes the file from staging but keeps it on disk.

To discard changes to a file and bring back the version from the last commit:

git checkout -- [file]

#### Getting Status and Log
To view Git status. Any modified file within the current directory will show up in the status, along with whether the file(s) have been staged or not.

git status -s // short output
git log // chronological order of commits to local repository

#### Commits
Commits are only made to your local repository. To commit your staged snapshot:

git commit -m "Commit changes"

#### Pushing to Remote Repositories

git remote add origin git@github.com:[username]/[git_repo_name].git

The command below pushes the files to 'origin' (GitHub); the default local branch is 'master'. The -u flag tells Git to remember these parameters and to track the remote branch, so future pushes and pulls can omit them.

git push -u origin master

Any changes made on GitHub will be pulled down locally.

git pull origin master

#### Stash
Saves the current state for later and brings back last commit.

git stash
git stash list // Get all stashes
git stash apply // Applies latest stash OR add [stash_reference]
git stash drop [stash_reference] // Removes a stash

#### Branching

git branch [branch_name]
git branch // To view current branches.
git checkout [branch_name] // To switch branch.

#### Notes
- Double quotes and single quotes work similarly in commands.

[gitref]: http://gitref.org/index.html
[git]: http://git-scm.com/
[git-github]: https://github.com/git/git
[try-git]: http://try.github.io/levels/1/challenges/1
[tfs]: http://en.wikipedia.org/wiki/Team_Foundation_Server
[gh]: https://github.com/
[gh-windows]: http://windows.github.com/

Today, I was looking for a way to automatically save any emails labeled "Articles" in my [Gmail][3] account to [Evernote][1]. I have filters that specifically target the email addresses I've specified and automatically label them with "Articles". This makes it convenient for me, since everything is in one place when I have time to read. I've been using [Evernote][1] lately and happen to like how it takes a snapshot of articles that I want saved so I can read them later (even if the original source goes away).

There are [Zapier][4] and [IFTTT][5], which I use for automating my Instagram and other online services. I soon realized, though, that the emails being forwarded are in plain text, which makes the articles unreadable and useless.

So the hunt continued for a service that would automate the archiving of articles from [Gmail][3] to [Evernote][1]. While searching, I encountered an [article][2] in which the author uses [Google Drive][7] to create [Google Apps scripts][6] that interact with [Gmail][3]. I had seen several people use the [script][6] before but never had an opportunity to play with it until now. I decided to give it a shot; it seemed like a perfect opportunity to play with something new ;-). It basically uses JavaScript to call [Gmail classes][6], and the [Google script][8] interface allows you to debug and build projects so they can be automated on a schedule.

The script below was taken from this [article][2]; modifying a few parameters should make it work out of the box. "Misc/Articles" is a nested label that I use to tag incoming emails from specific sources, identified by a filter I created. The "special_evernote_email" is a special email address that [Evernote][1] assigns to you so you can send an email as a note to their service; you can find it on your [Evernote settings][1] page. The last two strings are the hashtags that I want to use to identify those items in [Evernote][1].

The forwardEmails function is the entry point of the script; it calls the forwardThreads function, passing in the custom parameters. The only issue I had was with the label: the label provided has to exist, otherwise labels.getThreads() is called on a null, which terminates the execution of the script.
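That failure mode can be guarded against. Below is a small sketch using a hypothetical lookup function standing in for GmailApp.getUserLabelByName, so it can run outside Apps Script:

```javascript
// Guard against a missing label before calling getThreads(); without
// the check, a null label crashes the script as described above.
function getThreadsSafely(lookupLabel, labelName) {
  var label = lookupLabel(labelName);
  if (!label) {
    return []; // Label does not exist: skip instead of crashing.
  }
  return label.getThreads();
}

// Hypothetical stand-in for GmailApp.getUserLabelByName:
var fakeLabels = { 'Misc/Articles': { getThreads: function () { return ['thread1']; } } };
function lookup(name) { return fakeLabels[name] || null; }

console.log(getThreadsSafely(lookup, 'Misc/Articles').length); // 1
console.log(getThreadsSafely(lookup, 'Nope').length);          // 0
```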

#### Steps
1. Go to your [Google Drive][7] and Create>Script (if Script is not available, you have to choose “connect more apps”).
2. This will open up the [Google Script][8] page which then you’ll create a new “Blank Project”.
3. Paste the code below and pick “forwardEmails” as the function to execute, then hit run to test that it works properly.
4. Click on the clock icon to create a trigger to schedule the function to run at specific times (hourly, etc).

#### The script

```javascript
function forwardEmails() {
  forwardThreads("Misc/Articles", "special_evernote_email@m.evernote.com", "@Articles/Subscriptions #archived #articles");
}

function forwardThreads(label, addr, subjSuffix) {
  var maxSubjLength = 250;
  var applylabel = GmailApp.getUserLabelByName("EN_Archive");

  // Send individual and threaded emails.
  var msgs, msg, i, j, subject, options, labels, page;
  labels = GmailApp.getUserLabelByName(label);
  var threads = labels.getThreads();

  for (i = 0; i < threads.length; i++) {
    msgs = threads[i].getMessages();
    for (j = 0; j < msgs.length; j++) {
      msg = msgs[j];
      subject = msg.getSubject();
      if (subject.length + subjSuffix.length > maxSubjLength) {
        subject = subject.substring(0, maxSubjLength - subjSuffix.length);
      }

      options = { htmlBody: msg.getBody(), attachments: msg.getAttachments() };

      GmailApp.sendEmail(addr, subject + " " + subjSuffix, msg.getBody(), options);
    }
  }

  while (!page || page.length == 100) {
    /* Get the threads anywhere from 0 to 100. */
    page = labels.getThreads(0, 100);

    // Pause to keep request rates to Gmail down (Google Apps errors).
    Utilities.sleep(1000);

    // Apply the new label; move the thread out of the other label.
    applylabel.addToThreads(page);
    labels.removeFromThreads(page);
  }
}
```

[1]: https://evernote.com/
[2]: http://www.gavinadams.org/blog/2012/08/20/archiving-gmail-to-evernote/
[3]: http://gmail.com
[4]: https://zapier.com/
[5]: https://ifttt.com
[6]: https://developers.google.com/apps-script/reference/gmail/
[7]: https://drive.google.com
[8]: https://script.google.com


I recently ran into an issue where I have a fairly nested directive, and the directive itself has an input that requires a decorator-type directive, such as one for validation. As for the title of this blog post: with each Angular 1.x release (and 2 in the near future), it's probably best to tag these posts accordingly, since each version introduces new syntax that might not work if you're on an earlier version.

The problem

I'll be using TypeScript for my examples. For simplicity, let's say that you have a directive that validates an input. The directive is called validateInputInner, and we would like to use it in another directive.

return <ng.IDirective>{
    restrict: 'A',
    require: 'ngModel',
    link: link
};

function link($scope, $element, $attrs, ctrl) {
    var validateInput = (inputValue)=> {

        // some validation logic goes here and sets isValid...
        ctrl.$setValidity('validateInputInner', isValid);

        return inputValue;
    };

    ctrl.$parsers.unshift(validateInput);
    ctrl.$formatters.push(validateInput);

    // Observe attribute changes ($attrs is the injected attributes object)
    $attrs.$observe('validateInputInner', (comparisonModel)=> {
        return validateInput(ctrl.$viewValue);
    });
}
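Conceptually, $parsers.unshift puts the validator at the front of a pipeline of functions that each transform the view value. Here's a plain-JavaScript sketch of that pipeline idea (not Angular's actual internals; the 5-character rule is an arbitrary example):

```javascript
// Each parser transforms the value in turn; unshift puts the
// validator at the front of the pipeline, as the directive above does.
function runPipeline(fns, value) {
  return fns.reduce(function (acc, fn) { return fn(acc); }, value);
}

var validity = {};
function validateInput(inputValue) {
  // Arbitrary example rule: valid when 5 characters or fewer.
  validity.validateInputInner = inputValue.length <= 5;
  return inputValue;
}

var parsers = [];
parsers.unshift(validateInput);

runPipeline(parsers, 'abc');
console.log(validity.validateInputInner); // true
runPipeline(parsers, 'toolongvalue');
console.log(validity.validateInputInner); // false
```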

In normal usage, the directive can simply be used as:

<input type="text" validate-input-inner="{{vm.someModel}}" />

It gets a little more complex when you embed the same directive within another directive, such as:

<test-component data-ng-model="vm.someModel" validate-input-inner="vm.secondaryModel"></test-component>

The second directive, called testComponent, will use the previous directive as part of the component's validation. The code for testComponent is below. Please note the placement of the validate-input-inner attribute.

Solution

We would like to use the testComponent directive as a wrapper component that exposes a scope property that feeds into the validation directive. We could have reused the existing model, but for this example we'll assume that validate-input-inner needs to validate another model in addition to the main one.

testComponent.$inject = ['$compile'];
function testComponent(
    $compile: ng.ICompileService): ng.IDirective {
    return <ng.IDirective>{
        restrict: 'E',
        replace: true,
        require: 'ngModel',
        scope: {
            model: '=ngModel',
            validateInputInner: '=?',
        },
        link: link,
        template: `<div>
            <input type="text" class="form-control" 
            data-ng-model="model" validate-input-inner="{{validateInputInner}}"  /></div>`
    };
    // more code...

You might assume that this will work as-is, since testComponent uses the same approach as the directive did on its own, and the scope property gets funneled down to the directive.

Surprisingly enough, it doesn't. The validate-input-inner directive works by itself but becomes unaware of changes when used inside a template-based directive. It could perhaps have been rewritten to use $watch instead of $observe, but given the scenario we're in, one way I found to make it work is to use a $watch inside testComponent itself.

We will need to inject $compile, which is the Angular way to dynamically compile a string into a usable DOM element. In our case, the input element is nested within a div, which is why we need a reference to it using jqLite.

function link($scope, $element, $attrs) {
    var $input = $element.children('input');

    $scope.$watch('validateInputInner',(val) => {
        $input.attr('validate-input-inner', val);
        $compile($input)($scope);
    });
}

I added a $scope.$watch and called $compile to refresh validateInputInner's state. Please keep in mind that this operation is DOM intensive and is not always the best solution; it's one way out of (I'm sure) hundreds of ways of solving this. I'll be exploring more ways to solve this and expand on this scenario in the future.

In the meantime, if you have suggestions or other ways to solve this, please feel free to comment or contact me.

Getting started with Ionic actually isn't a straightforward process. While the technology used is basic HTML, CSS and JavaScript, the underlying process still requires the Android and/or iOS tooling. I will focus on the Android piece, but most of the information translates easily to iOS.

This post assumes that Node.js and NPM are installed on your system. I'm also using Windows 8, so your file paths may vary.

To get started with development on Ionic, there are a few things you will need, composed of three main parts. The first is Apache Cordova itself, a framework that allows you to use HTML, CSS and JavaScript to build the application. The advantage is that it gives you a standardized API for building on multiple mobile platforms (Android, iOS, etc.) from a single code base.

The second part is the environment setup, or at least having the proper environment and/or SDK that Cordova can communicate with. This part takes the longest and is where the headaches are if you're not familiar with the process. The third part is the tooling, which uses a combination of the command line (to serve, deploy, emulate, etc.) and the actual IDE you write the code in.

This blog post is not meant as an exhaustive guide, as there's a ton of information involved. I will focus on the three main parts of the installation and try to be as focused and straightforward as possible.

1. Ionic framework setup

To install Ionic and Cordova globally using NPM.

npm install ionic cordova -g

2. Environment setup

If you are developing for Android like I am, you will have to install a few things. First and foremost, you will need the Java SE Development Kit. The next step is to install the Android SDK; I chose Android Studio, which includes the tools plus the SDK. Next, install WinAnt, a Windows installer for Apache Ant. Apache Ant is a Java command line tool that will be used to build the *.apk file for deployment to an actual Android device.

After the installation, the Android platform-tools directory will need to be added to the system path for easy access. This can be set in Control Panel > System > Advanced system settings > Environment Variables > Path.

C:\Users\<User>\AppData\Local\Android\sdk\platform-tools

You will also need to add an Android package (platform version) to build with. This can be done through the Android SDK Manager, by selecting and downloading the versions you need.

Alternative: Ionic Box

I won't go into the details of using Ionic Box, but it's an alternative. Ionic Box is a lightweight, ready-made environment that avoids the hassle of configuring Java and the Android SDK altogether on Windows. It requires VirtualBox and Vagrant to provide an environment for building with Ionic and Cordova: Vagrant is a tool for creating quick virtual machine environments, and it uses VirtualBox for the VM itself.

After you have downloaded the Ionic Box files from GitHub, use the command prompt to get into that directory, then run Vagrant's standard vagrant up command to download and set up the environment. This will install an Ubuntu VM and configure it within VirtualBox itself. (Note that this might take some time at first, as it downloads a few dependencies to run the environment.)

3. Tooling and Ionic commands

To create a new app. Template options are blank, tabs (the default) and sidemenu.

ionic start <app name> <optional template name>

To configure the platform for Android (ios if you’re building for iOS)

ionic platform add android

To change the address where the environment is served

ionic address

Basic Ionic commands available

To test that everything has been installed properly as far as communicating with the emulator is concerned (adb = Android Debug Bridge), open the command prompt and type

adb

To build. This step is required prior to emulating or running on an actual device, and it creates the *.apk files.

ionic build android 

Testing

The command for spinning up an Ionic server instance

ionic serve    

In addition, if you want to launch a side-by-side iOS and Android browser emulation

ionic serve --lab

There's a project called Ripple Emulator, which allows you to emulate different devices via Chrome. You can install it via NPM, then run it.

npm install -g ripple-emulator
ripple emulate --path platforms/android/assets/www

To emulate in the Android environment and launch the app.

ionic emulate android

To run on an actual device (it will fall back to emulation mode if a device is not detected).

ionic run android
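Putting the commands above together, a typical first run might look like the following (myApp is a placeholder name):

```shell
ionic start myApp tabs      # scaffold a new app from the tabs template
cd myApp
ionic platform add android  # target the Android platform
ionic build android         # build and produce the .apk
ionic emulate android       # launch the app in the Android emulator
```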

Tools

If you’re using Visual Studio as your IDE, there’s Visual Studio Tools for Apache Cordova, which has built-in support for debugging, emulating, creating new mobile projects, etc. I also discovered Telerik AppBuilder last week, which I personally haven’t tried yet. I will have to do a trial and see if I find it beneficial for quickly building an app.

Lastly, if all you care about is building the app and you’re OK with debugging in the browser, all you need is an editor like Sublime or WebStorm.

In my next post, I will focus on the actual development in Ionic. I hope that you find this helpful and informative. Feel free to contact me with any questions.

I’ve installed web applications on various IIS versions on different Windows platforms and I always find some tasks to be annoying. Here are some common issues and how to resolve them. This post is specific to IIS 8 and Windows 8. I will keep this post as up to date as possible as I encounter or think of new issues.

Problem 1

  • By default, if you’re using an App Pool that is set to ApplicationPoolIdentity as the process model identity, you will get an error if your connection string is set to Integrated Security=true. This is because authentication is tied to the Windows identity of the worker process, and the built-in ApplicationPoolIdentity account has no SQL Server login by default.

    Solutions

    a.) Set it to false and configure/grant an actual user to connect to your local SQL Database instance. This can be configured in SQL Server Management Studio (SSMS).

    b.) Set the process model identity in the Application Pool, instead of built-in account to a custom account using your Windows credentials.
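As a sketch of solution a.), the connection string switches from integrated security to an explicit SQL login (the server, database, user, and password below are placeholders):

```xml
<connectionStrings>
  <!-- Instead of Integrated Security=true, use a SQL login that was
       granted access to the database in SSMS -->
  <add name="DefaultConnection"
       connectionString="Server=.\SQLEXPRESS;Database=MyDb;User Id=webAppUser;Password=yourPassword;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```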

Problem 2

  • If you’re getting a 401 (unauthorized access) for static resources (CSS, JS, etc.), it means that the account IIS runs under doesn’t have permission to read these files.

    Solutions

    a.) You can go to IIS Manager, select the website, and go to Authentication > Anonymous Authentication. Make sure that it’s enabled (at a minimum, it needs to be enabled) AND set to a user that has permission. By default, IIS uses IUSR for permitting anonymous access.

    b.) You can also go to the website directory itself on disk and add IUSR to the list of accounts that are permitted to read it. Right-click the folder and go to Properties > Security > Edit. With this approach, you can keep anonymous authentication set to the Application pool identity, since read permission is granted to the built-in IUSR account.

Side notes

  • In Windows 8, the command aspnet_regiis -i doesn’t work anymore, so if you don’t have ASP.NET 4.x installed, you can add it by going to Programs and Features > Turn Windows features on or off and looking for ASP.NET 4.x. Feel free to refer to this article for more information.

Feel free to comment or offer some insight if you find this post valuable or have encountered issues outside of what I highlighted in this post.

If you’re using StructureMap 3.1.x for your IoC container for .NET, you might have encountered the message, "StructureMap.ObjectFactory is obsolete: 'ObjectFactory will be removed in a future 4.0 release of StructureMap. Favor the usage of Container class for future work'"

The old way of configuring the StructureMap dependency resolver was to do something like:

public IContainer Container
{
    get
    {
        return (IContainer)HttpContext.Current.Items["_Container"];
    }
    set
    {
        HttpContext.Current.Items["_Container"] = value;
    }
}

DependencyResolver.SetResolver(new SMDependencyResolver(() => Container ?? ObjectFactory.Container));

ObjectFactory.Configure(cfg =>
{
    cfg.AddRegistry(new StandardRegistry());
    cfg.AddRegistry(new ControllerRegistry());
    cfg.AddRegistry(new ActionFilterRegistry(() => Container ?? ObjectFactory.Container));
    cfg.AddRegistry(new MvcRegistry());
    cfg.AddRegistry(new TaskRegistry());
    cfg.AddRegistry(new ModelMetadataRegistry());
});

The ObjectFactory then allows you to register dependencies either in your global.asax or in some static class instance within your solution.

In order to get rid of this message, one possible solution that I’ve found online is to pass an instance of IContainer into the controller if you’re using ASP.NET MVC or Web API.

public class MyController
{
    public MyController(IContainer container)
    {    
    }
}

Perhaps a better (or cleaner) approach is to re-define a new ObjectFactory that returns a static IContainer instance.

public static class ObjectFactory
{
  private static readonly Lazy<Container> _containerBuilder =
    new Lazy<Container>(defaultContainer, LazyThreadSafetyMode.ExecutionAndPublication);

  public static IContainer Container
  {
    get { return _containerBuilder.Value; }
  }

  private static Container defaultContainer()
  {
    // Note: the lambda parameter is x, so every registry is added via x.
    return new Container(x =>
    {
        x.AddRegistry(new StandardRegistry());
        x.AddRegistry(new ControllerRegistry());
        x.AddRegistry(new ActionFilterRegistry(
          () => Container ?? Infrastructure.ObjectFactory.Container));
        x.AddRegistry(new MvcRegistry());
        x.AddRegistry(new TaskRegistry());
        x.AddRegistry(new ModelMetadataRegistry());
    });
  }
}

In the global.asax, you can get the ObjectFactory.Container instance using:

var container = Infrastructure.ObjectFactory.Container;

The container variable can then be used to extend and configure additional dependencies and settings.

I recently read [an article by Nathan Marz][nathan] that I found off of [Hacker News][hn], which apparently was published many years ago.
It was something to the effect that everyone should blog even if you don’t have any readers. It makes a great argument that writing
down your ideas makes them concrete (it adds personal value) as well as making you a better writer, as opposed to just keeping them in your head.

The concept is similar to telling or teaching someone what you just learned and recalling every bit of detail to make the other person
understand. It forces your mind to think and makes a good challenge in finding out how much you really know about the subject.
I find that this is also true when it comes to presentations or writing a book (not that I’ve written one): having to present those ideas
to an audience requires you to research and dig deeper to fill in the gaps.
The transfer of knowledge is valuable to the receiver (readers or listeners) as well as to the person telling the story.

I subscribe to tons of blogs by knowledgeable people in the web development industry, and read tons of articles on a daily basis. I also
buy digital books from sources such as [LeanPub][leanpub] from time to time just to read up on new concepts and ideas. I love the fact that
websites like [LeanPub][leanpub] allow any person to publish and share ideas with the world through a much simpler process. There’s no
publisher that gets in the way of the publishing process, and subscribers get updates instantaneously.

Anyone with a passion towards a subject should blog about it because it’s good for you and everyone!
In terms of knowledge about the subject, it’s been said that you don’t really own what you know until you can repeat it and say it out loud.
Having to write about what you know will make you realize how much you know and don’t know, plain and simple.

[nathan]: http://nathanmarz.com/blog/you-should-blog-even-if-you-have-no-readers.html
[hn]: https://news.ycombinator.com/
[leanpub]: https://leanpub.com/

I did a trial of [WebStorm 6][webstorm] a week and a half ago and immediately fell in love with it after playing with it for a couple of days.
I purchased a license right away during the trial period. The main feature that sold me on the IDE was its components
and workflow for modern JavaScript libraries such as [jQuery][jQuery], [Modernizr][modernizr], etc. For instance, you can have several
JavaScript libraries reusable across multiple projects: by setting a library to “global”, it becomes available for
auto completion in all projects (meaning every project automatically inherits it). Before this, I used [Notepad++][nplus] and
[Web Matrix 2][webmatrix] as alternatives to [Visual Studio][vs] for fast web development debugging and code writing. I never cared much
about code completion (it slows me down sometimes) but it helps for accessing third-party libraries. [WebStorm][webstorm] is a very light
yet still powerful IDE, and it also supports multiple version control systems such as Git and Mercurial, among many others.

Having used [Visual Studio][vs] for a few years now, I find that [WebStorm 6][webstorm] has a little bit of [Resharper][resharper] flavor to it.
In addition, it’s packed with features that target modern web development workflows, such as testing and support for
languages that compile down to CSS or JavaScript such as [LESS][less], [SASS][sass], [CoffeeScript][cs], [TypeScript][ts], etc. Similar to [Visual Studio][vs], there are
predefined project templates when creating new projects, such as one that generates HTML5 Boilerplate starter code as a starting point.
With this in mind, I was inspired to look at [Yeoman][yo] once again and make it part of my web development workflow.

[Yeoman][yo] is basically a scaffolding tool that works with [Grunt][grunt] and [Bower][bower] to complete a modern web development
workflow (i.e., it brings in configuration to make your application work nicely with [Grunt][grunt] and [Bower][bower]). [Bower][bower] is
a dependency management tool similar to [Node Package Manager, or NPM][npm], which automatically pulls in a library as well as its dependencies.

[Grunt][grunt], on the other hand, is a task runner: it runs tests, performs builds (generating a deployment folder which contains a
production-ready version of your application), and automates a whole lot of other tasks. [Grunt][grunt] and [Bower][bower] are powerful
tools in their own right and do a lot more than I mentioned above, but [Yeoman][yo] sorta brings them together. Having played with these
three tools for a few days, I’ve come up with steps/commands to accomplish certain tasks. I’ll provide the list of commands that I have
put together, meant as a sticky note so I can reference them over and over again easily.

As a side note, the commands used are for Windows and the command line (either the default command prompt or the Node.js command prompt
works well). If you don’t have [NodeJS][node] installed on your system already, you will need to install it from the website. It
includes [NPM][npm], which is an indispensable tool and is very popular in the open source community.

#### To install the tools:
Install Yeoman.

\> npm install yo -g // The -g flag installs the package globally

Install Bower.

\> npm install bower -g

Installing Grunt according to their documentation requires a few steps. When working with a new project, you can execute the following
commands.

\> npm uninstall grunt -g // Only if you have Grunt previously installed

\> npm install grunt-cli -g

For existing projects that already have a package.json and Gruntfile.js (task configurations, plugins, etc.):

\> npm install grunt --save-dev
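For reference, a minimal Gruntfile.js might look like the sketch below; the uglify task and the file paths are illustrative assumptions and require the grunt-contrib-uglify plugin to be installed:

```javascript
module.exports = function (grunt) {
  // Task configuration; each top-level key matches a plugin's task name.
  grunt.initConfig({
    uglify: {
      build: {
        src: 'src/app.js',       // input file (placeholder path)
        dest: 'dist/app.min.js'  // minified output (placeholder path)
      }
    }
  });

  // Load the plugin that provides the "uglify" task.
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Default task, executed when running plain `grunt`.
  grunt.registerTask('default', ['uglify']);
};
```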

#### Yeoman commands:
To scaffold a new generic app.

\> yo webapp

To list all the Yeoman generators.

\> npm search yeoman-generator

Once you find a generator that you want to install, you can install it via NPM by

\> npm install -g generator-angular // Angular JS for example.

\> npm install -g generator-backbone generator-backbone-amd // Backbone JS

You can then scaffold a new project using the generator. (You have to cd into the root of your project directory first.)

\> yo angular

#### Bower commands:
Bower is similar to NPM in that it manages libraries and dependencies as packages that can be installed globally or on a per-project
basis.
To install a library such as Angular:

\> bower install angular // Gets the latest version

To install a specific version of a library.

\> bower install angular\#1.1 // This will install a specific version

To install from a GitHub repo.

\> bower install git://github.com/components/jquery.git // Installs from GitHub

To update a package.

\> bower update angular

To list all installed packages.

\> bower list

To search the Bower repository for specific packages.

\> bower search [name]

#### Grunt commands:
Grunt can do a lot of things, such as starting up a web server for testing, running tests, executing tasks, etc.

To perform a project build that compiles the project for deployment (JS minification, CSS minification, etc):

\> grunt --force // The force parameter ignores warnings and will continue the build.

To perform a test

\> grunt test

To run a server instance

\> grunt server

The commands above cover the most common usage of [Yeoman][yo], [Bower][bower] and [Grunt][grunt]. There’s a lot
more that wasn’t covered, but for starters, this should be a good baseline when trying to make these tools
part of your web development workflow.

[node]: http://nodejs.org/
[npm]: https://npmjs.org/
[yo]: http://yeoman.io/
[grunt]: http://gruntjs.com/
[bower]: https://github.com/bower/bower
[modernizr]: http://modernizr.com
[jQuery]: http://www.jquery.com
[webstorm]: http://www.jetbrains.com/webstorm/
[vs]: http://www.microsoft.com/visualstudio/
[resharper]: http://www.jetbrains.com/resharper/
[sass]: http://sass-lang.com/
[less]: http://lesscss.org/
[cs]: http://coffeescript.org/
[ts]: http://www.typescriptlang.org/
[webmatrix]: http://www.microsoft.com/web/webmatrix/
[nplus]: http://notepad-plus-plus.org/

Most of the code samples that you’ll find on bootstrapping Angular JS do it declaratively, such as `<html ng-app="">`.

You can name your app or leave it blank if you only have one app (I find that in most cases, one app is enough, although you can have multiple controllers if needed).
What if you want to get away from the declarative approach and do all the app bootstrapping in JavaScript? I find that the declarative approach works but is not flexible,
especially in large-scale applications where you have a “layout” template and you know ahead of time that you’re not going to use Angular JS on every single page.

If you want more flexibility when bootstrapping your Angular JS app, here’s a code sample to accomplish that. Take note though that there are tons of other ways
to bootstrap an Angular JS app aside from what I have here.

```javascript
window.app = {};

/** Bootstrap on document ready and define the document along with optional
    modules as I have below. */
angular.element(document).ready(function () {
    // Attach to namespace, bootstrap the document, and inject modules.
    app.ang = angular.bootstrap(document, ['ngResource', 'ngSanitize']);

    // OR simply, this works similarly.
    angular.bootstrap(document, []);
});

/** Define Angular controller */
app.myController = function ($scope, $resource, $timeout) {
    // ...
};
```

Hope this helps some lost soul.

The jQuery `$.when()` function (part of the deferred object API) works great when using the AJAX functions, since they use deferred objects by default.
What if you want to do something similar when performing asynchronous/synchronous operations by means of regular functions, such as:

```javascript
$.when(firstFunction(), secondFunction()).then(function() {
    console.log('Execute when all functions have completed');
});
```

Given the scenario above, what if one function takes a little bit longer than the other, such as:

```javascript
/** Create a long operation */
var loadQueue = function() {
    var dfrQueue = new $.Deferred();
    var i = 0;
    var loop = window.setInterval(function() {
        ++i;
        console.log('queue 1 - running: ' + i);
        if (i >= 10) {
            // pass optional param to success callback
            dfrQueue.resolve('queue 1');
            clearInterval(loop);
        }
    }, 1000);

    console.log('initialize test for queue 1');
    return dfrQueue.promise();
};
```
“`

```javascript
/** Create a second long operation */
var loadQueue2 = function() {
    var dfrQueue = new $.Deferred();
    var i = 0;
    var loop = window.setInterval(function() {
        ++i;
        console.log('queue 2 - running: ' + i);
        if (i >= 5) {
            // pass optional param to success callback
            dfrQueue.resolve('queue 2');
            clearInterval(loop);
        }
    }, 1000);

    console.log('initialize test for queue 2');
    return dfrQueue.promise();
};
```
“`

To keep things simple, the `loadQueue()` and `loadQueue2()` functions both perform something asynchronously behind the scenes (it could be anything that
takes a long time to process). The `setInterval()` function is used to simulate processes that take 10 and 5 seconds respectively, with `loadQueue` being
the last one to finish. In the end, both functions can be called, and a callback function fires once both have succeeded.

```javascript
/** Using $.when() for 2 asynchronous or long running operations,
    but additional functions can be added as long as they all return
    a promise object. */
$.when(loadQueue(), loadQueue2())
    .then(function(arg) {
        // All operations have completed; log the value passed by the
        // first deferred given to $.when() (here, 'queue 1').
        console.log('all processes succeeded: ' + arg);
    });
```

This is based on the assumption that the callback is called once, after all functions have successfully completed.
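As a side note, the same fan-in pattern can be expressed with native JavaScript promises, with no jQuery required. This is a rough equivalent of the example above, not the jQuery API itself:

```javascript
// Rough native-Promise equivalent of the $.when() example above.
// Each function resolves with its name after a simulated delay.
var loadQueueNative = function (name, delayMs) {
    return new Promise(function (resolve) {
        setTimeout(function () {
            resolve(name); // pass an optional value to the success callback
        }, delayMs);
    });
};

// Promise.all resolves once every promise has resolved; the callback
// receives the results in the order the promises were passed in.
Promise.all([loadQueueNative('queue 1', 100), loadQueueNative('queue 2', 50)])
    .then(function (results) {
        console.log('all processes succeeded: ' + results.join(', '));
    });
```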

The complete code can be found here. [Gist][1]

[1]: https://gist.github.com/menacestudio/5137516