
In my previous post, I wrote about the Sails.JS ORM, covering model definitions, joins, attributes and such. In this post, I will go over some simple and more advanced queries that use the previous examples as a basis. For a more comprehensive list of queries, check out the Waterline documentation on GitHub for details.

Sails.js uses Waterline out of the box as its ORM (object-relational mapper), which is essentially an object-based way of querying the database. We created the models with a set of attributes, types, and so on, so that each one can be used as a 1:1 object representation of a table/document in our database. The attributes specify the name of the column/field, its length, and what type of data it can hold.

For our example, we will be using MongoDB as the database (each database implementation has its own set of types and querying mechanisms), but the ORM allows us to not care about the underlying technology. Just take note that there might be some slight variance in how certain things are handled.

Basic queries

By default, querying is case-insensitive unless you use the query method. The main keys used to perform queries are where, skip, limit, and sort.
For example, you can use

User
    .find({ where: { name: 'Dennis' }, sort: 'name desc' });

Or use the short-hand

User
    .find({ name: 'Dennis'})
    .sort('name desc');

From the previous example, you may have noticed that you can perform method chaining, which applies additional functions to previous result sets. If you wish to omit the where key, the alternative is to use the method-chaining approach; it's just a matter of preference.

Sort, limit, skip

In addition to where, additional query options can be used to further filter down the result set and paginate.
For instance:

User
    .find({ limit: 10, skip: 5 })
    .sort('name desc');

The query limits the returned records to 10, skips 5 records, and sorts them by name. As an alternative, you can also write the same query using the "helper" method approach, which looks as follows.

User
    .find({ name: 'Dennis' })
    .paginate({ page: 1, limit: 10 })

It queries the "User" model for records with the name Dennis and returns only the 10 records found on the 1st page.

Advanced queries

In this next section, we’ll go over some of the more advanced features when working against the database records.

Create, update, delete

Creating a new record is as easy as passing in the object and calling exec() to perform the insert. The exec takes in a callback that receives an error as the first parameter (if any) and the new record as the second parameter. Always check whether the error is non-null and return something meaningful to the user, or log it if necessary.

Create
User
    .create({ name: 'Dennis', age: 34 })
    .exec(function(err, user){
        if (err) return res.negotiate(err); // or do something with it
    });

You can also use findOrCreate, which checks whether a record exists and performs a create if one wasn't found.

User
    .findOrCreate({ name: 'Dennis' })
    .exec(function(err, user){
        // user will be either the existing or new record
    });
Update

Updating is a multi-step process that involves looking up the document by its index and performing an update on the returned data. The example below uses findOneById(), a useful Waterline helper to quickly find a single record by id.

User
    .findOneById(1234)
    .exec(function(err, user){
        user.name = 'Dennis';
        user.age = 34;

        user.save(function(err, savedUser){
        });
    });
Delete

Deleting or "destroying" a record is similar to find in that you supply the object criteria of the record(s) that you want deleted. In addition, you also need to use exec to perform the operation. The exec accepts a callback whose single parameter is an error indicating whether the operation failed.

User
    .destroy({ name: 'Dennis' })
    .exec(function(err){
        if (err) return res.negotiate(err);
    });
Populate

Another useful function that comes with Waterline is the populate method which basically includes additional objects that are associated with the queried document.

For example, let's say that you have the models User.js and UserFollower.js with the following definitions.

// User.js
module.exports = {
    attributes: {
        name: {
            type: 'string'
        },
        followers: {
            collection: 'UserFollower',
            via: 'user'
        }
    }
}
// UserFollower.js
module.exports = {
    attributes: {
        // Reference to the user object
        user: {
            model: 'User'
        }
    }
}

The definition above for UserFollower essentially stores the ObjectId, a property that references the id of the user. What this means is that when you query the User document, you have the option of including any of the other documents that are referenced, including UserFollower.

If you need to include additional objects as part of the query, you can use populate() to include the object as part of the result set.

User
    .findOne({ id: 1234 })
    .populate('followers')
    .exec(function (err, user) {
        // user.followers contains the associated UserFollower records
    });

This concludes this post, which covers 70-80% of the standard operations and queries that you will encounter in the wild. There's a ton of other Waterline functionality, so be sure to browse through the documentation and experiment with some of the ORM queries that I have mentioned. If you have any suggestions on which topic you'd like to see next (or perhaps a continuation of this write-up), please feel free to comment or email me. Thanks for reading.

I have been working with Sails.JS for a little while now and would like to share a bit of information on how to use Models and Waterline, the ORM (Object Relational Mapper), when working with your database. An ORM is basically an API to access your database and perform queries against it. Waterline has built-in adapters that allow you to use MS SQL Server, MongoDB and Redis, among a host of others. It simplifies the process of working with a database since you're using the same API regardless of which database you're using (see Waterline on GitHub to check whether there's an adapter available for your database).

You can use Waterline in any flavor of Node.JS, but in this post I'll be discussing Waterline in the context of Sails.JS, although the implementation and concepts are pretty much the same. The adapter should not be relevant in most cases, but certain databases have functionality that others don't, so this is something to be aware of. A simple example would be a relational database such as SQL Server that allows you to add precision or size to a column property (i.e. nvarchar(128)).

{
    name: { 
        type: 'string',
        size: 128
    }
}

To keep things simple, I'm also going to write from the NoSQL perspective for the rest of the post, although, as mentioned, the examples will handle most cases and scenarios.

Models

A model essentially describes a table/collection in your database and defines the properties or attributes it can contain; a model usually maps to a single database table or collection. By defining models, you add constraints on which properties/fields can be saved.

Each model consists of one or more attributes, which can be of type string, int, etc. (see more below). A model is synonymous with a collection, and each collection can contain many records.

Globals

All models, assuming they're stored (by convention) where they need to be, are accessible globally by default. A model such as User that lives in /api/models/User.js can be accessed anywhere simply by calling User.findById(1). This Sails.JS feature can be turned off in /config/globals.js.
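For reference, here is a sketch of what that config can look like (flag names as I recall them from Sails 0.10.x; they may vary by version). Setting models to false stops the models from being exposed as globals.

// config/globals.js (sketch)
module.exports.globals = {
    _: true,        // expose lodash globally
    async: true,    // expose async globally
    sails: true,    // expose the sails app instance globally
    models: true    // set to false to stop exposing models such as User globally
};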

Attributes

There's an extensive list of all the attribute types that you can use, but I'd like to go over some basics. The most common types that you'll probably be using are string, datetime, email, boolean, integer, array and object (I'll discuss this below when I go over creating relationships).

Let’s say that we have a user registration system where we need to store some basic user information. You can define a User model such as:

module.exports = {
    attributes: {
        name: {
            type: 'string',
            required: true
        },
        lastLoggedIn: {
            type: 'datetime',
            defaultsTo: new Date()
        },
        favoriteNumbers: {
            type: 'array'
        },
        email: {
            type: 'email',
            unique: true
        }
    }
}

I have included a variety of options so you can see what is possible and give you a sense of what that can look like.

The name property is set to a type of "string", which allows all character types, and is marked as required. The lastLoggedIn property is a "datetime" and defaults to the current date if not provided. The favoriteNumbers property is set to an array, which basically allows you to pass in an array. Most of the property types are very basic and determined by your application's requirements, including defaults, etc.

The email property in this case is of type "email" and is set to unique (meaning an error will be thrown if you insert the same email twice). Being an "email" type validates that the string passed in is in the form of an email address, but under the covers it's still a "string" type.

If you're working against a NoSQL database backend, setting the property as unique isn't enough (the only thing that is unique is the id), so it has to be configured in /config/bootstrap.js to ensure that the database indeed does not allow duplicate emails (see code below).

// Bootstrap.JS
module.exports.bootstrap = function(cb) {
  User.native(function(err, collection) {
    collection.ensureIndex('email', {
      unique: true
    }, function(err, result) {
      if (err) {
        sails.log.error(err);
      }
    });
  });

  // Trigger the callback to finish up the bootstrap process
  cb();

  _.extend(sails.hooks.http.app.locals, sails.config.http.locals);
};

Next up, I’d like to go over the database relationships as this is essential in more complex applications.

1-to-Many Relationship

NoSQL databases in general are document based, so you can essentially store all data that pertains to a collection in a single document (nested data). There are also scenarios where keeping your data in separate documents might be a good idea, depending on your application's needs. Storing data in different documents can be a good idea if you do a lot of "writes", but if your application is mostly "reads" then it makes more sense to go with a single-document approach.

Since we're discussing relationships, we will work with an example that consists of multiple models/collections. I'll go over an example of a 1-to-many relationship and then extend it into the many-to-many relationship type.

For example:
A model User has UserTypes associated with it.

You can build out your models such as:

// User.JS
module.exports = {
    identity: 'User',
    attributes: {
        name: {
            type: 'string'
        },
        userTypes: {
            collection: 'UserTypes',
            via: 'user'
        }
    }
}
// UserTypes.js
module.exports = {
    identity: 'UserTypes',
    attributes: {
        user: {
            model: 'User'
        },
        userType: {
            type: 'string'
        }
    }
}

If you look at the defined models, User has an identity, which is an optional property that allows you to set the name of the model to something other than the default. You can set "User" to "foo" if you'd like to better describe the model. The userTypes attribute is specified as a collection of the UserTypes model and points to UserTypes via the user property. This defines a 1-to-many relationship between the User and UserTypes models.

On the other hand, the UserTypes model simply has a user property that maps back to the User model. A quick sketch of how this relationship can be queried is shown below, and the post on ORM queries covers how we can leverage it in more depth.
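For illustration (this example is mine, using the models above; the name value is just a placeholder):

// Find a user and include the related UserTypes records
User
    .findOne({ name: 'Dennis' })
    .populate('userTypes')
    .exec(function (err, user) {
        // user.userTypes is an array of the associated UserTypes records
    });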

Many-to-Many Relationship

Using the same models as above, we can define a many-to-many relationship using the concept of "followers" as an example. Twitter has this concept of allowing a user to follow other users and vice versa, hence the many-to-many relationship. Let's look at how that model relationship looks when building the models.

// User.JS
module.exports = {
    identity: 'User',
    attributes: {
        name: {
            type: 'string'
        },
        userTypes: {
            collection: 'UserTypes',
            via: 'user'
        },

        // Reference to user followers
        userFollowers: {
            collection: 'UserFollower',
            via: 'user'
        },

        // Reference to users that the user is following
        userFollowing: {
            collection: 'UserFollower',
            via: 'follower'
        }
    }
}
// UserFollower.JS
module.exports = {
    identity: 'UserFollower',
    attributes: {
        user: {
            model: 'User'
        },

        follower: {
            model: 'User'
        }
    }
}

In the example above, I have extended the User model with 2 new properties, namely userFollowers and userFollowing. Both property definitions point to the same collection, UserFollower, and use the via property to reference the corresponding UserFollower attributes.

On the other hand, UserFollower has 2 properties as well, each specifying User as its model and pointing back to the User model. A sketch of querying these associations follows.
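This is not from the original example, just an illustration of how populate might be used against these models to load both sides of the relationship for a single user:

// Load a user along with who follows them and whom they follow
User
    .findOne({ id: 1234 })
    .populate('userFollowers')
    .populate('userFollowing')
    .exec(function (err, user) {
        // user.userFollowers and user.userFollowing are arrays of UserFollower records
    });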

In the next post (part 2), we will explore the ORM piece and re-use the examples in this post to perform basic and more advanced queries. Please let me know if you find this post useful. Feel free to comment with feedback and share it with others on social media. Thanks for reading.

Thank you and keep shipping!

When I first got started with Node.JS about two years ago, I came across Sails.JS and instantly fell in love with it. Two years later, it is still my "go-to" web framework when building a Node.JS application. If you're not familiar with Sails.JS, it provides a real-time MVC framework on top of Node.JS. It's basically the ASP.NET MVC for .NET or Rails for Ruby. While it has some pretty nice features out of the box (i.e. real-time capabilities using websockets, asset pipeline, CLI tools, etc.), the main selling points for me were the built-in conventions and solid, well-thought-out architecture. For example, /api/Controllers/UserController.js corresponds to the endpoint /api/user, which has views in /views/User/index.ejs, and so forth. I'll be discussing some of these nifty features in future posts.

Authorization options in Node.JS

Sails.JS also comes with built-in policies for controlling API permissions which is the authorization piece that goes hand-in-hand with the authentication. When it comes to authentication in Node.JS, Passport is probably the most common middleware option and offers tons of "strategies" for authentication via Facebook, Twitter, etc.

Coming into the Sails.JS world, there are other options as well, including sails-generate-auth, which is an abstraction layer/middleware that utilizes Passport for authentication. There's also sails-auth, which is Passport based as well. There are plenty of options out there, so it's worth considering the pros and cons of each one.

MachinePacks

Another great option, which I've discovered recently and will cover in this post, is using what's called a "machine pack". It's built around the concept of a machine: each machine has a well-defined purpose and is easy to implement (as is evident from its well-written documentation). Machines also adhere to a standardized interface, which makes them ideal for easy consumption. A set of machines is what constitutes a "machine pack", which basically combines multiple machines to perform common tasks. A common task can be something like "authenticate with Facebook" or "send mail with MailGun" and so forth.

The machine pack that I'd like to share in this post is called machinepack-facebook, a machine pack for authenticating your web application with Facebook. If we explore the machinepack-facebook bundle itself, it includes a few machines stored within the /machines directory: get-access-token.js, get-login-url.js, get-longterm-access-token.js and get-user-by-access-token.js.

The concepts in this post are very similar to authentication with Twitter. I've looked everywhere on the Internet for an implementation specific to machine packs and couldn't find one, so hopefully this will shed some light on it.

Implementing the machinepack-Facebook

First and foremost, you will need to create an account at Facebook Developers and create an app to get a client/app key and secret. Once you have those 2 pieces of information, you can create an API by using sails generate api Auth (shown below), which generates the /api/Auth endpoint, i.e. a Model and a Controller.
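The generator is run from the root of the Sails project:

sails generate api Auth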

Since we're going to use the key and secret multiple times within our controller, I created /config/appsettings.js, a custom config file that is exported and available throughout the application. It keeps the code clean and configurable. All the declarations below sit at the top of the controller, before module.exports = { }.
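A sketch of what that config file might contain (the key names match those referenced in the controller below; the values are placeholders):

// config/appsettings.js (sketch)
module.exports.appsettings = {
    BASE_URL: 'http://localhost:1337',
    FACEBOOK_CLIENTID: 'your-facebook-app-id',
    FACEBOOK_SECRET: 'your-facebook-app-secret'
};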

The first line simply requires machinepack-facebook, which needs to be installed via npm install machinepack-facebook --save. The rest are convenience variables to eliminate code repetition.

var Facebook = require('machinepack-facebook');

var callbackUrl = sails.config.appsettings.BASE_URL + '/auth/fbcallback',
    fbClientId = sails.config.appsettings.FACEBOOK_CLIENTID,
    fbSecret = sails.config.appsettings.FACEBOOK_SECRET;

The next chunk of code is the API function that serves as the entry point to the authentication process. The second line builds the login URL with the callback, along with the permissions being requested by your application. The "permissions" below correspond to a set of permission items that you'd like to access, expressed as an array.

I omitted the error callback/handling for simplicity and to keep the concepts clear and to the point. I also suggest refactoring the entire Facebook authentication out into a separate service to keep the controller actions clean.

facebook: function (req, res) {
  Facebook.getLoginUrl({
    appId: fbClientId,
    callbackUrl: callbackUrl,
    permissions: [ 'public_profile' ]
  }).exec({
    error: function (err){ },

    success: function (result){
      return res.redirect(result);
    }
  });
},

In the success callback above, the result is the URL formed by the Facebook.getLoginUrl() function. The function below is the callback routine: this goes out to Facebook's Graph API, which asks the user whether to allow your application to authenticate to Facebook on their behalf. In addition, it also asks for permission(s) such as accessing their profile information, etc.

The getAccessToken() function requires a "code", which is sent back from the previous call to Facebook's auth service. The "code" verifies that the user has permitted your app's access request. getAccessToken's callback then returns a token, which can be used to get the user's information.

fbcallback: function(req, res){
  var code = req.params.all()['code'];

  Facebook.getAccessToken({
    appId: fbClientId,
    appSecret: fbSecret,
    code: code,
    callbackUrl: callbackUrl
  }).exec({
    error: function (err){ },
    success: function (result){
      var token = result.token;

      // Get information about the Facebook user with the specified access token.
      Facebook.getUserByAccessToken({
        accessToken: token
      }).exec({
        error: function (err){ },

        success: function (result){
          // Result will include the user's profile information for consumption.
        }

      });
    }
  });
},

In the end, you can use the id or the email sent back by Facebook to look up whether the user already exists in your database; if not, you will need to create a user and then log the user in automatically.
Using Waterline (the ORM in Sails.JS), you can do a query such as the following (it finds a user by either a facebookId or an email, both of which are valid identifiers):

User.findOne({
  or: [
    { facebookId: result.id },
    { email: result.email }
  ]
}).exec(function(err, user){
  if (user){
    // Update the last logged in date/time stamp and log the user in
  } else {
    // Create a new user and log the user in    
  }
});

As for the routes, we will need to manually modify the /config/routes.js to tell it to point to the correct API endpoint and handle the type of request properly (GET, POST, PUT, DELETE).

'POST /auth/facebook': 'AuthController.facebook',
'GET /auth/fbcallback': 'AuthController.fbcallback'

I hope that this helps and shows how easy it is to use a machine pack with Sails.JS. Please let me know if you have any questions in the comments below.

I recently ran into an issue where I have a fairly nested directive, and the directive itself contains an input that requires a decorator-type directive, such as one for validation. As for the title of the blog post, I figure that with each Angular 1.x release (and 2 in the near future) it's probably best to tag these posts accordingly, since each version introduces new syntax and the examples might not work if you're still on an earlier version.

The problem

I'll be using TypeScript for my example. For simplicity, let's say that you have a directive that validates an input. The directive is called "validateInputInner" and we would like to use it in another directive.

return <ng.IDirective>{
    restrict: 'A',
    require: 'ngModel',
    link: link
};

function link($scope, $element, $attrs, ctrl) {
    var validateInput = (inputValue)=> {

        // some validation logic goes here...
        ctrl.$setValidity('validateInputInner', isValid);

        return inputValue;
    };

    ctrl.$parsers.unshift(validateInput);
    ctrl.$formatters.push(validateInput);

    // Observe attribute change
    $attrs.$observe('validateInputInner', (comparisonModel)=> {
        return validateInput(ctrl.$viewValue);
    });
}

Under normal usage, the directive can simply be used as:

<input type="text" validate-input-inner="{{vm.someModel}}" />

It gets a little more complex when you embed the same directive within another directive, such as:

<test-component data-ng-model="vm.someModel" validate-input-inner="vm.secondaryModel"></test-component>

The second directive, called testComponent, will be using the previous directive as part of the component's validation. The code for testComponent is below; please note the placement of validate-input-inner.

Solution

We would like to use the testComponent directive as a wrapper component that exposes a scope property that feeds into the validation directive. We could also have re-used the existing model, but for this example we'll assume that validate-input-inner needs to validate another model in addition to it.

testComponent.$inject = ['$compile'];
function testComponent(
    $compile: ng.ICompileService): ng.IDirective {
    return <ng.IDirective>{
        restrict: 'E',
        replace: true,
        require: 'ngModel',
        scope: {
            model: '=ngModel',
            validateInputInner: '=?',
        },
        link: link,
        template: `<div>
            <input type="text" class="form-control" 
            data-ng-model="model" validate-input-inner="{{validateInputInner}}"  /></div>`
    };
    // more code: the link function is defined below
}

You might assume that this will work "as is", since testComponent uses the same approach as the directive did by itself, and the scope property gets funneled down to the directive.

Surprisingly enough, it doesn't. The validate-input-inner directive works by itself, but it becomes unaware of changes when used inside a template-based directive. The validate-input-inner directive could perhaps have been re-written to use $watch as opposed to $observe, but given the scenario that we're in, one way that I found to make it work is to use a $watch inside the testComponent itself.

We will need to inject $compile, which is the Angular way to dynamically compile a string into a usable DOM element. In our case, the input element is nested within a div, which is why we need a reference to it using jqLite.

function link($scope, $element, $attrs) {
    var $input = $element.children('input');

    $scope.$watch('validateInputInner',(val) => {
        $input.attr('validate-input-inner', val);
        $compile($input)($scope);
    });
}

I added $scope.$watch and call $compile to refresh validateInputInner's state. Please keep in mind that this operation is DOM intensive and is not always the best solution. This is one way out of (I'm sure) hundreds of ways of solving this. I'll be exploring more ways to solve this and expand on this scenario in the future.

In the meantime, if you have suggestions or other ways to solve this, please feel free to comment or contact me.

Getting started with Ionic actually isn't a straightforward process. While the technology being used is basic HTML, CSS and JavaScript, the underlying process still requires the Android and/or iOS tooling. I will focus on the Android piece, but most of the information easily translates to iOS.

This post assumes that Node.JS and NPM are installed on your system. I'm also using Windows 8, so your file paths may vary.

To get started with development on Ionic, there are a few things that you will need, composed of 3 main parts. The first part is Apache Cordova itself, which is a framework that allows you to use HTML, CSS and JavaScript to build the application. The advantage is that it gives you a standardized API to build on multiple mobile platforms (Android, iOS, etc.) using a single code base.

The second part is the installation of the proper environment and/or SDK that Cordova can communicate with. This part takes the longest and is where the headaches come in if you're not familiar with the process. The third part is the tooling, which uses a combination of the command line (to serve, deploy, emulate, etc.) as well as the actual IDE to write the code with.

This blog post is not meant as an exhaustive guide, as there's tons of information involved. I will focus on the 3 main parts of the installation and will try to be as focused and straightforward as possible.

1. Ionic framework setup

To install Ionic and Cordova globally using NPM:

npm install ionic cordova -g

2. Environment setup

If you are developing for Android like I am, you will have to install a few things. First and foremost, you will need the Java SE Development Kit. The next step is to install the Android SDK; I chose to use Android Studio, which includes the tools plus the SDK. Next is to install WinAnt, which is a Windows installer for Apache Ant. Apache Ant is a Java command line tool that will be used to build the *.apk file for deployment to an actual Android device.

After the installation, the Android platform-tools directory will need to be added to the system path for easy access. This can be set in Control Panel > System > Advanced system settings > Environment Variables > Path.

C:\Users\<User>\AppData\Local\Android\sdk\platform-tools

You will also need to add the Android package (API level) to build with. This can be done through the Android SDK Manager by selecting and downloading the desired versions.

Alternative: Ionic Box

I won't go into the details of using Ionic Box, but it's an alternative. Ionic Box is a lightweight, ready-made environment that avoids the hassle of configuring Java and the Android SDK altogether in a Windows environment. It requires VirtualBox and Vagrant to provide an environment for building with Ionic and Cordova: Vagrant is a tool to quickly create virtual machine environments, and it uses VirtualBox for the VM itself.

After you have downloaded Ionic Box from GitHub, you can use the command prompt to get into that directory, then type the following command to run, download and set up the environment. This will install an Ubuntu VM and configure it within VirtualBox itself. (Note that this might take some time at first, as it will download a few dependencies to run the environment.)
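The command in question is presumably the standard Vagrant bring-up, run from inside the Ionic Box directory:

vagrant up

Once the box is running, vagrant ssh should drop you into the VM where the build tooling lives.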

3. Tooling and Ionic commands

To create a new app (template options are blank, tabs (the default) and sidemenu):

ionic start <app name> <optional template name>

To configure the platform for Android (use ios if you're building for iOS):

ionic platform add android

To change the address/URL that the environment is served from:

ionic address

Basic Ionic commands available

To test and make sure that everything has been installed properly, as far as communicating with the emulator is concerned (adb = Android Debug Bridge), open the command prompt and type:

adb

To build. This step is required prior to emulating or running on an actual device, and it creates the *.apk files.

ionic build android 

Testing

The command for spinning up an Ionic server instance

ionic serve    

In addition, if you want to launch a side-by-side iOS and Android browser emulation

ionic serve --lab

There's a project called Ripple Emulator which allows you to emulate different devices via Chrome. You can install it via NPM, then run it:

npm install -g ripple-emulator
ripple emulate --path platforms/android/assets/www

To emulate in the Android environment and launch the app.

ionic emulate android

To run on an actual device (it will fallback to running on emulation mode if a device is not detected).

ionic run android

Tools

If you're using Visual Studio as your IDE, there are the Visual Studio Tools for Apache Cordova, which have some built-in tooling for debugging, emulating, creating a new mobile project, etc. I also discovered Telerik AppBuilder last week, which I personally haven't tried yet. I will have to do a trial and see if I find it beneficial for quickly building an app.

Lastly, if all you care about is building the app and you're OK with debugging in the browser, all you need is an IDE like Sublime or WebStorm.

In my next post, I will focus on the actual development in Ionic. I hope that you find this helpful and informative. Feel free to contact me with any questions.

I've installed web applications on various IIS versions on different Windows platforms, and I always find some task to be annoying. Here are some common issues and how to get them resolved. This post is specific to IIS on Windows 8. I will keep this post as up to date as possible as I encounter or think of new issues.

Problem 1

  • By default, if you're using an App Pool that is set to ApplicationPoolIdentity as the process model identity, you will get an error if your connection string is set to Integrated Security=true. This means that the authentication is tied to the local credentials.

    Solutions

    a.) Set it to false and configure/grant an actual user to connect to your local SQL Database instance. This can be configured in SQL Server Management Studio (SSMS).

    b.) Set the process model identity in the Application Pool, instead of built-in account to a custom account using your Windows credentials.

Problem 2

  • If you’re getting a 401 (unauthorized access) to static resources (CSS, JS, etc.), this means that the default account for IIS doesn’t have the permission to read these files.

    Solutions

    a.) You can go to IIS Manager, select the website, and go to Authentication > Anonymous Authentication and make sure that it's enabled (at a minimum, it needs to be enabled) AND set to a user that has permission. By default, it uses IUSR to permit anonymous access.

    b.) You can also go to the website directory itself on disk and add IUSR to the list of accounts that are permitted to read it: right-click the folder, then Properties > Security > Edit. By going with this approach, you can keep the anonymous authentication set to the Application pool identity, since the permission is given to the built-in user account.

Side notes

  • In Windows 8, the command aspnet_regiis -i doesn't work anymore, so if you don't have ASP.NET 4.x installed, adding it can be accomplished by going to Programs and Features > Turn Windows features on or off and looking for ASP.NET 4.x. Feel free to refer to this article for more information.

Feel free to comment or offer some insight if you find this post valuable or have encountered issues outside of what I highlighted in this post.

If you're using StructureMap 3.1.x as your IoC container for .NET, you might have encountered the message, "StructureMap.ObjectFactory is obsolete: 'ObjectFactory will be removed in a future 4.0 release of StructureMap. Favor the usage of Container class for future work'".

The old way of configuring the StructureMap dependency resolver was to do something like:

public IContainer Container
{
    get
    {
        return (IContainer)HttpContext.Current.Items["_Container"];
    }
    set
    {
        HttpContext.Current.Items["_Container"] = value;
    }
}

DependencyResolver.SetResolver(new SMDependencyResolver(() => Container ?? ObjectFactory.Container));

ObjectFactory.Configure(cfg =>
{
    cfg.AddRegistry(new StandardRegistry());
    cfg.AddRegistry(new ControllerRegistry());
    cfg.AddRegistry(new ActionFilterRegistry(() => Container ?? ObjectFactory.Container));
    cfg.AddRegistry(new MvcRegistry());
    cfg.AddRegistry(new TaskRegistry());
    cfg.AddRegistry(new ModelMetadataRegistry());
});

ObjectFactory then allows the registration of dependencies, either in your global.asax or in some static class instance within your solution.

To get rid of this message, one possible solution that I found online was to pass an instance of IContainer into the controller if you're using ASP.NET MVC or Web API.

public class MyController
{
    public MyController(IContainer container)
    {    
    }
}

Perhaps a better (or cleaner) approach is to re-define a new ObjectFactory that returns a static IContainer instance.

public static class ObjectFactory
{
  private static readonly Lazy<Container> _containerBuilder =
  new Lazy<Container>(defaultContainer, LazyThreadSafetyMode.ExecutionAndPublication);

  public static IContainer Container
  {
    get { return _containerBuilder.Value; }
  }

  private static Container defaultContainer()
  {
    return new Container(x =>
    {
        x.AddRegistry(new StandardRegistry());
        x.AddRegistry(new ControllerRegistry());
        x.AddRegistry(new ActionFilterRegistry(
          () => Container ?? Infrastructure.ObjectFactory.Container));
        x.AddRegistry(new MvcRegistry());
        x.AddRegistry(new TaskRegistry());
        x.AddRegistry(new ModelMetadataRegistry());
    });
  }
}

In the global.asax, you can get the ObjectFactory.Container instance using:

var container = Infrastructure.ObjectFactory.Container;

The container variable can then be used to extend and configure additional dependencies and settings.
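As a sketch (IAppService and AppService here are placeholder types, not from the original registries), extra registrations can be layered on at startup:

var container = Infrastructure.ObjectFactory.Container;

// Add a one-off registration on top of what the registries configured
container.Configure(cfg =>
{
    cfg.For<IAppService>().Use<AppService>();
});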

I recently purchased a MacBook Air so I can work on projects while on the road. With the recent updates to ASP.NET making it available on all platforms (Windows, OS X, Linux), I wanted to bring the same development experience to OS X (currently on Yosemite).

This post will provide a quick start on being able to write ASP.NET vNext apps on OSX.

KVM

I started by bringing in KVM (the K Version Manager), which is used to build and run ASP.NET, provided at this link. KVM is used to manage and get the runtime (KRE, the K Runtime Environment) for .NET.

The way to bring this in is to use Homebrew, which is a package manager for OS X, much like Chocolatey is for Windows.
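From memory of the vNext-era instructions (the tap and formula names may well have changed since), the install looked roughly like this:

brew tap aspnet/k
brew install kvm
# load kvm into the current shell (the formula's caveats show the exact path)
source $(brew --prefix kvm)/kvm.sh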

Installing OmniSharp

The next step is to install Sublime Text 3 if you don't have it yet. Once you have the text editor, the final step is to install OmniSharp, an open-source cross-platform project that brings IntelliSense and code completion to Sublime, Vim, Atom, etc. This gives you the ability to be productive much like in the Visual Studio environment when developing in ASP.NET/C#.

OmniSharp spans multiple projects, each specific to a text editor. One of them is Kulture, the project for Sublime. I personally had an issue finding the Package Control plugin (cmd+shift+p) to install Kulture in Sublime, as highlighted here.

If Package Control doesn't come up, launch the Sublime console by typing ctrl+` (back tick), then paste this in (credit to this site).

import urllib.request,os,hashlib; h = '7183a2d3e96f11eeadd761d777e62404' + 'e330c659d4bb41d3bdf022e94cab3cd0'; pf = 'Package Control.sublime-package'; ipp = sublime.installed_packages_path(); urllib.request.install_opener( urllib.request.build_opener( urllib.request.ProxyHandler()) ); by = urllib.request.urlopen( 'http://packagecontrol.io/' + pf.replace(' ', '%20')).read(); dh = hashlib.sha256(by).hexdigest(); print('Error validating download (got %s instead of %s), please try manual install' % (dh, h)) if dh != h else open(os.path.join( ipp, pf), 'wb' ).write(by) 

Creating an ASP.NET app

Lastly, we will bring in Yeoman using NPM (node package manager) which is a command line tool to quickly generate an application.

npm install -g yo

The next command brings in the Yeoman generator used to create the actual ASP.NET app.

npm install -g generator-aspnet

After installing Yeoman, we can then create the app by:

yo aspnet

This basically initiates the app creation process and asks basic questions such as the name of the app and the type of web application that you're building (i.e. MVC, Web Application, etc.).

Quick Sublime commands

To bring up the Sublime command palette, use cmd+shift+p, then select Run K Commands (or press F5) to access the k and kpm commands, which give quick access to running the server (k kestrel), restoring packages (kpm restore), etc.

After generating the new project, the first thing to do is restore packages, as mentioned above. After that finishes, build the project (cmd+b) to make sure there are no errors.

Finally, run the app using the k kestrel (OS X specific) command in the command palette, then browse to localhost:5004 to view the ASP.NET application.
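Outside of Sublime, the same steps can be run from a terminal in the project directory (these are the beta-era kpm/k commands, so treat them as a sketch since the tooling was later renamed):

kpm restore    # restore the project's packages
k kestrel      # start the Kestrel web server, then browse to http://localhost:5004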

This is a review of Getting Started with Twitter Flight by Tom Hamshere, published by Packt Publishing. I came across Twitter Flight last year in my newsfeed and heard of it at one of the conferences I attended. The framework seemed promising, but I was already sold on the idea of Backbone.js back then. This is my first actual attempt to dive deeper into what Flight is about. I like that the book is only 130 pages, which makes for a good introduction. Each chapter is also brief, at roughly 5 pages each.

Requirements

The book assumes that you have decent knowledge of JavaScript, jQuery and maybe even Require.js, since it's used in the code examples throughout. Since the idea behind Flight is being component or module based, Require.js is a perfect fit, but it is by no means required.

Flight, according to its website, is:

Flight is a lightweight, component-based JavaScript framework that maps behavior to DOM nodes. Twitter uses it for their web applications.

The author uses Require.js in all of his code samples, but the only requirements are ES5-shim and jQuery.

Show me some code

Let's say you have this piece of markup.

<form id="form">
    <input type="text" id="name" name="name" />
    <input type="submit" id="save" name="save" value=Save />
</form>

Below we create a component and set some default attributes and events (I elected not to use Require.js for the sake of simplicity). In the component, this.defaultAttrs allows the component to store values; in this case, I'm storing the id of an element and some random text, someText.

var aSimpleComponent = flight.component(function () {
    this.defaultAttrs({
        someText: 'Hello',
        nameElement: '#name'
    });

    this.onSubmit = function (e) {
        e.preventDefault();
        alert(this.attr.someText + ' ' + this.select('nameElement').val());
    };

    // create a hook after the component has been initialized
    this.after('initialize', function () {
        this.on('submit', this.onSubmit);
    });
});

We can then use the component and attach to an existing DOM element.

// Attaching the component to the DOM
aSimpleComponent.attachTo('#form');

Chapters 1-4

The author starts out with an introduction to Flight and how it differs from the likes of Angular.js, Backbone.js, and Ember.js. Having no experience with Flight, I thought that this was a perfect book to slowly introduce me to the framework. The book slowly dives into the concepts, and it wasn't until halfway through that I was able to see the "big picture" of building a simple app. These earlier chapters sell the idea of why you should use Flight and the reasoning behind why Twitter built it.

It then goes on to installation, using Bower and Yeoman to scaffold a new Flight application in chapter 4. Installing via the command line is a quick way to pull down the dependencies and get started, but isn't necessary at all.

Chapters 5-7

These chapters begin by discussing what components are (the basic premise behind Flight). They then define the 2 types of components: UI and Data. Both are similar except for their main conceptual responsibilities: a Data component handles data processing and data requests, while a UI component is attached to the DOM and provides the interface, handles user interactions, and hosts event handlers.

Chapters 8-10

I particularly liked chapter 8 on event naming, where the author illustrates some fundamental conventions that anyone who works with JavaScript can benefit from. Chapter 9 covers mixins, which allow multiple components to share functionality. Mixins are basically functions that other components can utilize and share so you don't have to keep rewriting the same piece of functionality over and over; this allows for code re-use.
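As a rough illustration (my own sketch, not code from the book): a mixin is just a function that adds methods to this, and it is mixed in by passing it as an extra argument to flight.component.

// A hypothetical logging mixin
function withLogging() {
    this.logEvent = function (msg) {
        console.log('[component] ' + msg);
    };
}

// Any component can mix it in and share the same behavior
var loggingComponent = flight.component(function () {
    this.after('initialize', function () {
        this.logEvent('initialized');
    });
}, withLogging);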

Chapter 10 introduces templating; in the examples, Hogan.js is used to incorporate templates into the components. A component can be template based or attached to an existing UI element.

Hogan.js is another open-source project created by Twitter; it is a reimplementation of Mustache that allows templates to be precompiled on the server side. The advantage of this is skipping the compilation step at render time, which is an expensive process. Hogan, like other templating libraries, is pretty simple to use and usually involves compiling a template into a function which you can then inject data into.
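A minimal sketch of that compile-then-render flow (the template string is just an example):

// Compile the template once, then render it with data
var compiled = Hogan.compile('Hello, {{name}}!');
var html = compiled.render({ name: 'Flight' });   // "Hello, Flight!"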

Chapters 11-13

In a nutshell, these chapters cover performance, testing and general architecture when building your Flight application. There are things here that are beneficial in general, not just when building Flight apps.

The main ideas here are to make individual components testable and to avoid components instantiating other components, keeping them decoupled. As with any framework, it is always a good idea to think about the individual components up front, including the architecture and how the different pieces tie together.

Code inconsistencies

I found some mismatches or inconsistencies between the code and the author's instructions. One example is in chapter 10, in the section on generating template objects from the DOM. I'm not sure if the author changed the code later on, but there are a few instances of this error.

To achieve this, the table row needs to be hidden by default, so it doesn’t show on first load.

<ul class="js-task-list">
    <li class="js-task-item hide"></li>
</ul>

For reference, I have included the table of contents below.

Table of Contents

  • Chapter 01: What is Flight?
  • Chapter 02: Advantages of Flight
  • Chapter 03: Flight in the Wild
  • Chapter 04: Building a Flight Application
  • Chapter 05: Components
  • Chapter 06: UI Components
  • Chapter 07: Data Components
  • Chapter 08: Event Naming
  • Chapter 09: Mixins
  • Chapter 10: Templating and Event Delegation
  • Chapter 11: Web Applications Performance
  • Chapter 12: Testing
  • Chapter 13: Complexities of Flight Architecture
  • Appendix: Flight API Reference

Conclusion

The book includes a Flight API Reference in the appendix and provides boilerplate code to easily get started building components, mixins, advice, and event listeners.

I would recommend this to anyone who is interested in Flight and has decent JavaScript experience. The book is short and keeps the material on point within each chapter. It's light reading, so anyone can finish it in a day or two, but in order to understand the concepts fully, the best way is to try it out and type out the code. Also, check out the components that other users have created that you can use in your Flight application.

I think once in a while it's great to keep an open mind and see how other frameworks operate, and perhaps learn a thing or two about their architecture. Flight is not as popular as its more heavyweight framework counterparts, but I can see how it can be useful to any project that adheres to a component-based architecture approach.

Flight is a promising framework and a good alternative to Backbone.js that takes a different, component-based approach. It's hard to compare the two since Flight is very lightweight, tries to keep things simple, and doesn't have Models or Routes or other things that Backbone.js has. That would be for another post, so I'll end it here.

I thought about switching my blog from Octopress to Ghost. I like having control over the content that I'm publishing since the posts are literally just markdown files (*.md) and there is no database backend. What I dislike about Octopress is the publishing aspect, where I have to issue a rake deploy every time I publish content. I was actually able to minimize the multiple steps into one small script, but I still have to perform that one step manually, which is unacceptable.

Deployment should be painless and automated, period. The directory should automatically detect/pick up any file changes (a new or updated post) and issue a generate command to convert the markdown files into HTML. Since the files are created locally (synced to Dropbox), I need a way to automate this. My solution is to let the Windows Task Scheduler manage and run it every 24 hours. I wrote a console app, and the code looks something like the following.

The dir setting points to the root of the Octopress blog that I want published; having it in the app.config gives me the flexibility to switch directories if my Dropbox is in a different location. The /C tells the command prompt to terminate after execution, and push.sh is a shell script I wrote a while back that generates the HTML files, commits to local Git and deploys to the Heroku server all in one shot.

using System;
using System.Configuration;
using System.Diagnostics;

class Publisher
{
    static void Main()
    {
        // Run push.sh from the Octopress blog root configured in app.config
        var process = new ProcessStartInfo
        {
            WorkingDirectory = ConfigurationManager.AppSettings["dir"],
            FileName = "cmd.exe",
            Arguments = @"/C push.sh",
            CreateNoWindow = false
        };

        var proceed = Process.Start(process);
        proceed.CloseMainWindow();
        proceed.Close();

        Environment.Exit(0);
    }
}

If you’re curious, the push.sh file looks like:

#!/bin/sh
# push.sh : publish and commit with a single command
rake generate && rake deploy
git add .
git commit -am "$(date)" && git push heroku master

The script above is a bash script (*.sh) so if you’re running it in a Windows environment, I found that installing Git and using Git Bash is the easiest way to run bash scripts.