NodeMCU ESP8266 to WordPress REST API

Posting JSON from a NodeMCU over ESP8266 Wi-Fi to an HTTPS REST API, with the data stored in MariaDB (the example API is written as a WordPress plugin).

Hardware

ESP8266 NodeMCU Mini Development Board (Amazon @ 5 for $14.99)

ESP8266 NodeMCU Development Board (Amazon @ 3 for $16.39)

DHT 3-PIN Sensor

Wiring

Sensor Wiring: Using 3 wires, I used Red = 3.3V, Brown = Ground, Orange = D4 (GPIO2).

In the image above, you can see the connector closest to the “S” is Orange for Data, the middle is Red for Power, and the last wire, next to the “-”, is Brown for Ground.

Development Environment

Arduino IDE v2.3.2

Preferences => Additional boards manager URLs => Enter URL: https://arduino.esp8266.com/stable/package_esp8266com_index.json

Tools => Board => Boards Manager => Search “esp8266” and install esp8266 by ESP8266 Community v3.1.2

Tools => Board => esp8266 => NodeMCU 1.0 (ESP-12E Module)

Add Library: Arduino_JSON v0.2.0

Add Library: DHT sensor library v1.4.6

The ESP8266 NodeMCU requires the Silicon Labs CP210x USB-to-UART drivers, which can be downloaded from the Silicon Labs website.

Once everything is set up, make sure your MCU shows up in the Arduino IDE by going to Tools => Port and choosing the COM port associated with your device (it will be different for everybody). I didn’t need to change any other settings to communicate with the MCU via the Arduino IDE.

Arduino Program

/*
    HTTP REST API POST/GET over TLS (HTTPS)
*/

#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>
#include <WiFiClientSecure.h>
#include <Arduino_JSON.h>
#include "certs.h"
#include "DHT.h"

// DHT-11's Setup
#define DHTPIN 2
#define DHTTYPE DHT11
DHT dht(DHTPIN, DHTTYPE);

// ESP8266 WIFI Setup
const char* ssid = "WIFI-SSID";
const char* password = "WIFI-Password";
// Use the full REST endpoint, e.g. https://your-site/wp-json/wp-arduino-api/v1/arduino/
const char* serverName = "https://www.ENTER-RESTAPI.com/";

// Back-date lastTime so the first post happens a minute early
unsigned long lastTime = millis() - 60000;
// Timer set to 5 minutes (1000 ms * 60 * 5)
unsigned long timerDelay = (1000 * 60 * 5);

// Root CA for the server, from the generated certs.h (see certs.py below)
X509List cert(cert_ISRG_Root_X1);

void setup() {
  dht.begin();

  Serial.begin(115200);

  WiFi.begin(ssid, password);
  Serial.println("Connecting");
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.print("Connected to WiFi network with IP Address: ");
  Serial.println(WiFi.localIP());

  // Set time via NTP, as required for x.509 validation
  configTime(3 * 3600, 0, "pool.ntp.org", "time.nist.gov");

  Serial.print("Waiting for NTP time sync: ");
  time_t now = time(nullptr);
  while (now < 8 * 3600 * 2) {
    delay(500);
    Serial.print(".");
    now = time(nullptr);
  }
  Serial.println("");
  struct tm timeinfo;
  gmtime_r(&now, &timeinfo);
  Serial.print("Current time: ");
  Serial.print(asctime(&timeinfo));

  Serial.print("Program posts to REST API Every ");
  Serial.print(timerDelay/1000);
  Serial.println(" seconds.");
}

void loop() {
  // Send an HTTP POST request depending on timerDelay
  if ((millis() - lastTime) > timerDelay) {

    // ========================================================
    // READ DHT Sensor Values Humidity & Temp in C
    // ========================================================

    // Read humidity and temperature in Celsius (the library defaults)
    float h = dht.readHumidity();
    float t = dht.readTemperature();

    // Failed reads return NaN; skip this cycle and retry on the next pass
    if (isnan(h) || isnan(t)) {
      Serial.println("Failed to read from DHT sensor!");
      return;
    }

    // Keep one decimal place of humidity
    float humidNow = truncf(h * 10) / 10;
    // Convert temperature to Fahrenheit
    float tempNow = (t * 1.8) + 32;

    // Print Humidity & Temperature in Fahrenheit to Serial Monitor
    Serial.print("Humidity: ");
    Serial.print(humidNow);
    Serial.print("% Temperature: ");
    Serial.print(tempNow);
    Serial.println("°F");

    //Check WiFi connection status
    if (WiFi.status() == WL_CONNECTED) {

      // ========================================================
      // GET Request for DB Values
      // ========================================================
      // String sensorReadings = httpGETRequest(serverName);
      // JSONVar myArray = JSON.parse(sensorReadings);

      // if (JSON.typeof(myArray) == "undefined") {
      //   Serial.println("Parsing input failed!");
      //   return;
      // }

      // Serial.print("JSON object = ");
      // Serial.println(sensorReadings);

      // for (int i = 0; i < myArray.length(); i++) {
      //   JSONVar value = myArray[i];
      //   Serial.print(i);
      //   Serial.print(" = ");
      //   Serial.println(value["sensorValue"]);
      // }

      // ========================================================
      // POST Sensor Value to the DB via REST API
      // ========================================================
      JSONVar myObject;
      myObject["sendorValue"] = WiFi.localIP().toString();
      myObject["sensorType"] = "DHT";
      myObject["sensorValue"] = std::round(tempNow * 100.0) / 100.0;

      String jsonString = JSON.stringify(myObject);
      String sensorWriting = httpPOSTRequest(serverName, jsonString);
      Serial.println("==============================================================================");

    } else {
      Serial.println("WiFi Disconnected");
    }
    lastTime = millis();
  }
}

// POST JSON Data to REST API
String httpPOSTRequest(const char* serverName, String httpRequestData) {
  WiFiClientSecure client;
  HTTPClient http;

  client.setTrustAnchors(&cert);

  // Enable if you're having certificate issues
  //client.setInsecure();

  Serial.println("Secure POST Request to: " + String(serverName));
  Serial.println("Payload: " + httpRequestData);

  http.begin(client, serverName);
  http.addHeader("Authorization", "KEY-TO-ACCESS");
  http.addHeader("Content-Type", "application/json");

  int httpResponseCode = http.POST(httpRequestData);

  String payload = "{}";

  if (httpResponseCode > 0) {
    Serial.print("HTTP Response code: ");
    Serial.println(httpResponseCode);
    payload = http.getString();
  } else {
    Serial.print("Error code: ");
    Serial.println(httpResponseCode);
  }

  Serial.println();

  http.end();

  return payload;
}

// GET Request to REST API
String httpGETRequest(const char* serverName) {
  WiFiClientSecure client;
  HTTPClient http;

  client.setTrustAnchors(&cert);

  // Enable if you're having certificate issues
  //client.setInsecure();

  Serial.println("Secure GET Request to: " + String(serverName));

  http.begin(client, serverName);
  http.addHeader("Authorization", "KEY-TO-ACCESS");

  int httpResponseCode = http.GET();

  String payload = "{}";

  if (httpResponseCode > 0) {
    Serial.print("HTTP Response code: ");
    Serial.println(httpResponseCode);
    payload = http.getString();
  } else {
    Serial.print("Error code: ");
    Serial.println(httpResponseCode);
  }

  Serial.println();

  http.end();

  return payload;
}

REST Endpoint (Custom WordPress Plugin)

<?php
/**
 * Plugin Name: WP Arduino API Gateway
 * Version: 0.0.2
 * File Name: \wp-content\plugins\wp-arduino\wp-arduino.php
 * Description: REST Plugin to listen for NodeMCU GET/POST Requests
 * Text Domain: wp-arduino
 */

// Exit if accessed directly.
if (!defined('ABSPATH'))
    exit;

register_activation_hook(__FILE__, 'wp_arduino_setup_table');
function wp_arduino_setup_table()
{
    global $wpdb;
    $table_name = $wpdb->prefix . 'arduino';

    $sql = "CREATE TABLE $table_name (
        id mediumint(9) NOT NULL AUTO_INCREMENT,
        sensorType varchar (100) NOT NULL,
        sendorValue varchar (100) NOT NULL,
        createdOn datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (id)
    )";

    require_once ABSPATH . 'wp-admin/includes/upgrade.php';
    dbDelta($sql);

}

add_action('rest_api_init', 'wp_arduino_register_routes');
function wp_arduino_register_routes()
{
    /* GET */
    register_rest_route(
        'wp-arduino-api/v1',
        '/arduino/',
        array(
            'methods' => 'GET',
            'callback' => 'wp_arduino_get',
            'permission_callback' => 'application_password_auth_validation'
        )
    );
    /* POST */
    register_rest_route(
        'wp-arduino-api/v1',
        '/arduino/',
        array(
            'methods' => 'POST',
            'callback' => 'wp_arduino_post',
            'permission_callback' => 'application_password_auth_validation'
        )
    );
}

/* Security Auth Check */
function application_password_auth_validation()
{
    // Left wide open for this demo; the shared key is checked inside each handler instead.
    //return current_user_can( 'contributor' );
    return true;
}
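
If you would rather reject unauthorized requests before the handlers run, the shared-key check can live in the permission callback itself, since WordPress hands it the WP_REST_Request. A minimal sketch (wp_arduino_key_check is my name, not part of the original plugin):

function wp_arduino_key_check( WP_REST_Request $request )
{
    // Compare the Authorization header against the same shared key the handlers expect.
    return $request->get_header( 'authorization' ) === 'KEY-TO-ACCESS';
}

Wire it up by setting 'permission_callback' => 'wp_arduino_key_check' in both register_rest_route() calls above.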

/* GET */
function wp_arduino_get()
{
    //Get HTTP request headers 
    $auth = apache_request_headers();

    //Get only Authorization header
    $valid = $auth['Authorization'];

    // Validate
    if ($valid == 'KEY-TO-ACCESS') {
        global $wpdb;
        $table_name = $wpdb->prefix . 'arduino';

        $response = $wpdb->get_results("SELECT * FROM $table_name ORDER BY id DESC LIMIT 20;");
    } else {
        $response = 'Please use a valid authentication';
    }

    return $response;
}

/* POST */
function wp_arduino_post($request)
{
    //Get HTTP request headers 
    $auth = apache_request_headers();

    //Get only Authorization header
    $valid = $auth['Authorization'];

    // Validate
    if ($valid == 'KEY-TO-ACCESS') {
        global $wpdb;
        $table_name = $wpdb->prefix . 'arduino';

        $rows = $wpdb->insert(
            $table_name,
            array(
                'sendorValue' => $request['sendorValue'],
                'sensorType' => $request['sensorType'],
                'sensorValue' => $request['sensorValue']
            )
        );

        $response = $rows;
    } else {
        $response = 'Please use a valid authentication';
    }

    return $response;
}

Once you save the file to the location in the file header on your WordPress instance (you can copy it via FTP, or upload it via the WordPress Plugin screen), you’ll be ready to “Activate” your plugin. Activation creates a table in your WordPress database called wp_arduino and starts listening for requests to save data from your NodeMCU. You can sanity-check the endpoint from any machine with a request like the one below.
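
This uses the placeholder domain from the sketch, and the Authorization value must match the key the plugin checks for:

curl -X POST "https://www.ENTER-RESTAPI.com/wp-json/wp-arduino-api/v1/arduino/" \
  -H "Authorization: KEY-TO-ACCESS" \
  -H "Content-Type: application/json" \
  -d '{"senderValue":"192.168.1.50","sensorType":"DHT","sensorValue":"72.5"}'

This full /wp-json/ URL is also what serverName in the Arduino program should point at (with your own domain).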

You will also need to generate a certificate chain for your website; I used the Python script below to do it. The code is a modified version of the cert.py that ships with the esp8266 core, which doesn’t use UTC time and throws errors when you try to generate your certificate file. Save the code below into a file named “certs.py” in your Arduino project folder and run it with the syntax shown after the listing to generate the “certs.h” file needed for HTTP over TLS (HTTPS).

#!/usr/bin/env python3

# Script to download/update certificates and public keys
# and generate compilable source files for c++/Arduino.
# released to public domain

import urllib.request
import re
import ssl
import sys
import socket
import argparse
import datetime

from datetime import datetime, timezone
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import pkcs7
from cryptography.hazmat.primitives.serialization import Encoding
from cryptography.hazmat.primitives.serialization import PublicFormat

def printData(data, showPub = True):
    try:
        xcert = x509.load_der_x509_certificate(data)
    except:
        try:
            xcert = x509.load_pem_x509_certificate(data)
        except:
            try:
                xcert = pkcs7.load_der_pkcs7_certificates(data)
            except:
                xcert = pkcs7.load_pem_pkcs7_certificates(data)
            if len(xcert) > 1:
                print('// Warning: TODO: pkcs7 has {} entries'.format(len(xcert)))
            xcert = xcert[0]

    cn = ''
    for dn in xcert.subject.rfc4514_string().split(','):
        keyval = dn.split('=')
        if keyval[0] == 'CN':
            cn += keyval[1]
    name = re.sub('[^a-zA-Z0-9_]', '_', cn)
    print('// CN: {} => name: {}'.format(cn, name))

    print('// not valid before:', xcert.not_valid_before_utc)
    print('// not valid after: ', xcert.not_valid_after_utc)

    if showPub:

        fingerprint = xcert.fingerprint(hashes.SHA1()).hex(':')
        print('const char fingerprint_{} [] PROGMEM = "{}";'.format(name, fingerprint))

        pem = xcert.public_key().public_bytes(Encoding.PEM, PublicFormat.SubjectPublicKeyInfo).decode('utf-8')
        print('const char pubkey_{} [] PROGMEM = R"PUBKEY('.format(name))
        print(pem + ')PUBKEY";')
    
    else:

        cert = xcert.public_bytes(Encoding.PEM).decode('utf-8')
        print('const char cert_{} [] PROGMEM = R"CERT('.format(name))
        print(cert + ')CERT";')

    cas = []
    for ext in xcert.extensions:
        if ext.oid == x509.ObjectIdentifier("1.3.6.1.5.5.7.1.1"):
            for desc in ext.value:
                if desc.access_method == x509.oid.AuthorityInformationAccessOID.CA_ISSUERS:
                    cas.append(desc.access_location.value)
    for ca in cas:
        with urllib.request.urlopen(ca) as crt:
            print()
            print('// ' + ca)
            printData(crt.read(), False)
        print()

def get_certificate(hostname, port, name):
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as ssock:
            print('////////////////////////////////////////////////////////////')
            print('// certificate chain for {}:{}'.format(hostname, port))
            print()
            if name:
                print('const char* {}_host = "{}";'.format(name, hostname));
                print('const uint16_t {}_port = {};'.format(name, port));
                print()
            printData(ssock.getpeercert(binary_form=True))
            print('// end of certificate chain for {}:{}'.format(hostname, port))
            print('////////////////////////////////////////////////////////////')
            print()
            return 0

def main():
    parser = argparse.ArgumentParser(description='download certificate chain and public keys under a C++/Arduino compilable form')
    parser.add_argument('-s', '--server', action='store', required=True, help='TLS server dns name')
    parser.add_argument('-p', '--port', action='store', required=False, help='TLS server port')
    parser.add_argument('-n', '--name', action='store', required=False, help='variable name')
    args = parser.parse_args()
    server = args.server
    port = 443
    try:
        split = server.split(':')
        server = split[0]
        port = int(split[1])
    except:
        pass
    try:
        port = int(args.port)
    except:
        pass

    print()
    print('// this file is autogenerated - any modification will be overwritten')
    print('// unused symbols will not be linked in the final binary')
    print('// generated on {}'.format(datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")))
    print('// by {}'.format(sys.argv))
    print()
    print('#pragma once')
    print()
    return get_certificate(server, port, args.name)

if __name__ == '__main__':
    sys.exit(main())
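
To generate certs.h, run the script against your site and redirect its output into the header file; the script needs the cryptography package (pip install cryptography). The domain below is the same placeholder used throughout:

python3 certs.py -s www.ENTER-RESTAPI.com > certs.h

The sketch’s X509List then picks up the root CA constant from the generated file (cert_ISRG_Root_X1 in my case, since the site uses a Let’s Encrypt certificate).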

When the sketch is uploaded and running, the serial monitor in the Arduino IDE should show the Wi-Fi connection, the NTP time sync, and then the payload and HTTP response code for each POST cycle.

I created a simple Admin Menu Item (Arduino Results) that displays the last 20 entries uploaded into the DB. Since dates are stored as UTC, you need to convert them to your local timezone (the query below hardcodes a -07:00 offset; adjust it for yours).

<?php
// /wp-content/plugins/ct-wp-admin-form/ct-wp-admin-form.php
/**
 * Plugin Name:     WP Arduino API Gateway Admin
 * Version:         1.0.0
 * Text Domain:     wp-arduino
 */

function wp_admin()
{
  wp_register_style('wp_admin', plugins_url('wp-arduino.css', __FILE__));
  wp_enqueue_style('wp_admin');
  wp_register_script('wp_admin', plugins_url('wp-arduino.js', __FILE__));
  wp_enqueue_script('wp_admin');
}

add_action('admin_init', 'wp_admin');

// this function generates the results table
function wp_arduino_table()
{

  global $wpdb;
  $table_name = $wpdb->prefix . 'arduino';
  $results = $wpdb->get_results("SELECT sendorValue, sensorType, sensorValue, DATE_FORMAT(CONVERT_TZ(createdOn,'+00:00','-07:00'),'%Y-%m-%d %l:%i%p') as createdOn FROM $table_name ORDER BY id DESC LIMIT 20;");

  $content = '<div class="tbl-header">';
  $content .= '  <table cellpadding="0">';
  $content .= '    <thead></tr><th>Sendor</th><th>Sensor</th><th>Value</th><th>Recorded</th></tr></thead>';
  $content .= '  </table>';
  $content .= '</div>';
  $content .= '<div class="tbl-content">';
  $content .= '  <table cellpadding="0" cellspacing="0" border="0">';
  $content .= '    <tbody>';
  foreach ($results as $row) 
  {
    $content .= '      <tr>';
    $content .= '        <td>' . $row->sendorValue . '</td>';
    $content .= '        <td>' . $row->sensorType . '</td>';
    $content .= '        <td>' . $row->sensorValue . '</td>';
    $content .= '        <td>' . $row->createdOn . '</td>';
    $content .= '      </tr>';
  }
  $content .= '    </tbody>';
  $content .= '  </table>';
  $content .= '</div>';

  // return the table
  return $content;
}

//Add admin page to the menu
add_action('admin_menu', 'add_admin_page');
function add_admin_page()
{
  // add top level menu page
  add_menu_page(
    'Arduino Results', //Page Title
    'Arduino Results', //Menu Title
    'manage_options', //Capability
    'wp-arduino', //Page slug
    'admin_page_html' //Callback to print html
  );
}

//Admin page html callback
//Print out html for admin page
function admin_page_html()
{
  // check user capabilities
  if (!current_user_can('manage_options')) {
    return;
  }
  ?>

  <!-- Our admin page content should all be inside .wrap -->
  <div class="wrap">
    <!-- Print the page title -->
    <h1>
      <?php echo esc_html(get_admin_page_title()); ?>
    </h1>
    <?php echo wp_arduino_table(); ?>
  </div>

  <?php
}


Storage Processor Failover with a EMC Celerra NX4 / Clarion AX4

Well, yesterday was the first time in many years that I had to spend an entire day in the office on a weekend due to a major hardware outage. This is never a good thing, and it really *feels* bad when your team fails to keep core operations running smoothly. As an IT Manager, I strive for 0% unscheduled downtime and 95% up-time overall (yep, that 5% might sound high, but due to the volume of updates, backups, patches, hardware migrations, performance improvement tasks, etc., things need to go down for scheduled work).

** I think Benjamin Franklin’s quote about an apple a day can be adapted to “An update a day keeps another hacker away!”

As for my data unavailable (DU) outage, the core symptom was our SQL Server 2008 host, connected via fiber HBA to the SAN, losing all of its LUNs. When my DBA saw this, she was on the phone with me in minutes, yelling that all of her tables had disappeared from every database in the system. She said the system went down at 1:45 AM, all of our backups failed, and all of our databases were empty except for the system tables. I immediately hopped online and realized this symptom was caused by SQL Server showing unmounted databases (why not say offline, or unavailable? I’ll leave that question for Microsoft to explain), but it definitely left her in a panic, seeing all the data missing from every database on the server. Since I’m familiar with the issue (I think I’ve caused or fixed a few thousand issues over my past 10 years of managing SQL Server), I packed the car and headed into the office.

Thinking we had DR built into our very expensive EMC SAN, I was going crazy trying to figure out why we were not online. I put in a severity 1 request with EMC, telling them that we had DU and to get somebody on the phone ASAP. I grabbed an SP collect and sent it off to EMC, and tech support person 1 of 7 found a bad fiber port on storage processor A (SP A – SFP FE 0) and said this was my issue. Since I knew we had 2 storage processors, I asked him about moving the resource to B, but he said he only knew the Celerra part of our system and declined to help, saying I would need to work with the Clarion tech, who would be dispatched to replace SFP FE0 in 4 hours and be on-site with parts within 24 hours. I asked multiple times if he could directly transfer me to somebody on the Clarion team to get my system online and data available, but he kept saying NO. I finally gave up on the call so I could log some NASTY messages in the call ticket to get somebody moving ASAP (this works very well with EMC: if you can’t get the online person to solve your problem, post all your issues, comments, and concerns in the ticketing system).

Into the server room I went, one of the few times I wished the room was a bit warmer than 67 degrees. I manually moved the host fiber cable connected to the HBA over to storage processor B and saw my LUNs show up on the host in seconds, but they were all showing as unrecognized and prompting for initialization. Knowing something was wrong, I spent about an hour looking in Navisphere, Unisphere, and the CLI (both GUIs were new to me, because 38 days prior EMC had done a major update to our system). After getting halfway there, I decided to be brave and re-seat storage processor A. This restored data, but it only lasted ~45 minutes before going offline again. At this point, the Clarion tech called and said he would be on site in 1 hour, at 2 PM, with parts. Upon his arrival with a new SFP FE0 port, I asked him to first help me get online with storage processor B, so we could get data available while working on storage processor A. He also didn’t know the new interface, and we proceeded to go through 5 Clarion tech support people, 2 shift changes, and 4 more hours before ending up with somebody who knew what to do. As I had joked to the on-site tech a few times that it was probably 1 button or switch to set, the online tech showed us the single magic button that fixed the problem! ARGH, 15 hours of downtime and 7 tech support people from EMC kept my data unavailable. The issue with the fail-over to storage processor B was permissions: the virtual disk was assigned only to storage processor A, so we moved it from A to B and I was finally online in seconds. Since this guy seemed to KNOW the system pretty well, I asked why automatic DR fail-over did not work and why manually assigning a virtual disk to a node was needed. He realized the question meant there was something else wrong, since he too was confused about why this whole permission change was needed in the first place. Another hour of research from the online tech, and he determined I was missing EMC PowerPath software on all my hosts connected to the SAN via iSCSI and fiber.

At this point, totally confused, I got a 30-minute lesson on PowerPath, how it works, and where it was supposed to be installed. Finally, after hashing out the whole situation to determine the root cause with the original designer, the past employee who led the project for me, my current team, Celerra support, and the on-site Clarion support, it was determined that the automatic fail-over features of the SAN were never set up correctly, and we had never had an outage to trigger this scenario. This was a *shock* to everybody involved, but it explains a few mysterious events over the past 6 months (cluster resources becoming unavailable, complete SAN failure during an upgrade, high IO on individual storage processors, etc.) that EMC support could not explain.

So, the end result and lesson learned: always install PowerPath on all non-clustered hosts that connect to your EMC SAN if you want automatic fail-over. If you do have this setup and a storage processor fails, follow these steps:

1. Connect to Navisphere
2. Click “Virtual Disks”
3. Choose your virtual to edit and click the name.
4. Click “Virtual Disk Access Details”

Navisphere-Express

5. Click “Change Ports for Access”
** On this screen, it will show where the virtual disk is currently assigned (e.g. SP A or SP B).

That’s it. Hopefully this saves somebody 15 hours of downtime, confused tech support people, and/or a misconfigured DR setup!

Save IMAP Attachments with AE.Net.Mail in C#

One of the people on my team ran into a problem yesterday where a customer could only send data via email. The customer does not support formal EDI or have IT staffers who can support building or using something more automated. After talking about how the processing needed to be performed 7 days a week, we knew we had to build something automated. I went to Google and StackOverflow to search for a free IMAP solution (I so wish IMAP/POP/SFTP were supported in the core framework) and found AE.Net.Mail. The product doesn’t have any documentation, but after looking at 2 posts about issues other people were having with the client, I was able to figure out everything I needed to process my emails and save the attachments out to disk. The project is even supported via NuGet, which is a big plus. All of the code below is based on AE.Net.Mail v1.6.0.0 with .NET 4.0.

Here is the code I built to process email attachments from 2 different customers:

// Requires: using System.IO; and using AE.Net.Mail;
// Connect to Exchange 2007 IMAP server (locally), without SSL.
using (ImapClient ic = new ImapClient("<server address>", "<user account>", "<user password>", ImapClient.AuthMethods.Login, 143, false))
{
	// Open a mailbox, case-insensitive
	ic.SelectMailbox("INBOX");

	// Get messages based on the flag "Undeleted()".
	Lazy<MailMessage>[] messages = ic.SearchMessages(SearchCondition.Undeleted(), false);

	// Process each message
	foreach (Lazy<MailMessage> message in messages)
	{
		MailMessage m = message.Value;

		string sender = m.From.Address;
		string fileName = string.Empty;

		// Rules by Sender
		switch (sender)
		{
			case "zachary@customerA.com":
				fileName = ExtractString(m.Subject, "(", ")");

				foreach (Attachment attachment in m.Attachments)
				{
					attachment.Save(@"C:\Demo\" + fileName + Path.GetExtension(attachment.Filename));
				}
				break;

			case "hunter@customerB.com":

				foreach (Attachment attachment in m.Attachments)
				{
					string filePrefix = attachment.Filename.Substring(0, attachment.Filename.IndexOf('_'));

					switch (filePrefix)
					{
						case "DAILY":
							fileName = "CustomerB_Inventory";
							break;
						case "PULL":
							fileName = "CustomerB_Pulls";
							break;
						case "RECEIPT":
							fileName = "CustomerB_Receipts";
							break;
					}

					attachment.Save(@"C:\Demo\" + fileName + Path.GetExtension(attachment.Filename));
				}
				break;
		}

		// Delete Message (so it drops out of criteria and won't be double processed)
		ic.DeleteMessage(m);
	}
}


/// <summary>
/// Helper method to pull out a string between a start and stop string.
/// Example:
///    string story = "The boy and the cat ran to the store.";
///    ExtractString(story, "cat", "to"); // returns " ran " (note the surrounding spaces)
/// </summary>
/// <param name="stringToParse">String to search</param>
/// <param name="startTag">Starting String Pattern</param>
/// <param name="endTag">Ending String Pattern</param>
/// <returns>String found between the Starting and Ending Pattern.</returns>
static string ExtractString(string stringToParse, string startTag, string endTag)
{
	int startIndex = stringToParse.IndexOf(startTag) + startTag.Length;
	int endIndex = stringToParse.IndexOf(endTag, startIndex);
	return stringToParse.Substring(startIndex, endIndex - startIndex);
}

Most of the processing is done in the switch statement, since I’m going to apply different parsing rules by customer. This is far from an elegant and maintainable long-term solution, but it gets the problem solved quickly. Here is the rule summary by customer.

case “zachary@customerA.com”:

Every day I’m going to get 3 emails from this customer; the subject for each email will be:

“Daily Reports for Customer A (CustomerA_Inventory)”
“Daily Reports for Customer A (CustomerA_Receipts)”
“Daily Reports for Customer A (CustomerA_Pulls)”

All three emails have a CSV attachment called CustomerA.csv. The file in each email contains different data, and I need all 3 written to disk for processing in my EDI solution. My solution parses the subject using the helper method and renames each attachment with the name in the parentheses. The result of processing the emails with the code above is 3 files written to disk with names that correctly identify their data content.

C:\Demo\CustomerA_Inventory.CSV
C:\Demo\CustomerA_Receipts.CSV
C:\Demo\CustomerA_Pulls.CSV
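
For example, the first subject line above runs through the helper like this:

// m.Subject = "Daily Reports for Customer A (CustomerA_Inventory)"
string fileName = ExtractString(m.Subject, "(", ")"); // "CustomerA_Inventory"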

case “hunter@customerB.com”:

This customer is similar to the first, but I’m only going to get 1 email with 3 attachments that each have a date appended to the file name. I need to save each file to disk without the date stamp, so my EDI processor can be set up to pull in the same “file name” every day. My solution parses the attachment file name and renames the file based on the prefix of the attachment name. The results of processing the email are below.

Source Email:

Attachment 1: DAILY_INVENTORY_06_05_2012.XLS
Attachment 2: DAILY_RECEIPT_06_05_2012.XLS
Attachment 3: DAILY_PULL_06_05_2012.XLS

Results of Processing

C:\Demo\CustomerB_Inventory.XLS
C:\Demo\CustomerB_Pulls.XLS
C:\Demo\CustomerB_Receipts.XLS

Note: A while back I looked to see if there was a turn-key app/solution that could pull in emails for processing (basically run as a service on a server and provide a nice GUI to allow non-technical people to set up rules for email processing), but I couldn’t find anything that worked with our setup. This would ideally be a better solution, since it would allow users to add new rules for new emails at a moment’s notice, versus the hard-coded logic used above.

ASP.NET 4.0 WCF RESTful Services – Part 2

In the first part, we learned how to set up an ASP.NET 4.0 WCF (file-extension-less) service and the basics of using jQuery’s $.getJSON() method to call a service implementing the GET method. Now we are going to review all the HTTP methods commonly found in a RESTful API (GET, POST, PUT, and DELETE). The code below targets the project created in Part 1, which I’ll update in my next post. Each HTTP method/verb describes a step in a CRUD transaction, so I tried to keep things simple by naming accordingly. Everything below was written in Visual Studio 2010 with .NET 4.0.

RESTful HTTP Methods ( GET / POST / PUT / DELETE )

Before we explore the HTTP methods, remember the following rules while building your WCF Services.

The UriTemplate can only use parameters of type string
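
For instance, a numeric key still has to arrive as a string and be converted inside the method. A minimal sketch (GetProductById is my name, not part of the project):

[WebGet(UriTemplate = "Products/{id}")]
public Product GetProductById(string id)
{
    // UriTemplate path parameters must be declared as strings; convert inside the method.
    int productId = int.Parse(id);
    // ... look the product up by productId and return it ...
}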

Uri Templates are very flexible, and you can build your own standard syntax for your API.
Sample Uri Templates for .svc-less function calls

[WebGet(UriTemplate = "Products/{name}")]
public Product GetProduct(string name)
// URL = "/InventoryService/Products/ProductXYZ"
[WebGet(UriTemplate = "Products/Product({name})")]
public Product GetProduct(string name)
// URL = "/InventoryService/Products/Product(ProductXYZ)"
[WebGet(UriTemplate = "Products/API.Service?Product={name}")]
public Product GetProduct(string name)
// URL = "/InventoryService/Products/API.Service?Product=ProductXYZ"

When passing multiple objects to a service, you must wrap the objects.
** Difference sending 1 object vs 2 objects **

//  Sending 1 Object (No Wrapping).
[WebInvoke(UriTemplate = "Products/API.Service?Product={productName}", Method = "POST")]
public string APIGet(string productName, Product product1)

// Sending 2 Objects (Wrapping)
// Added "BodyStyle = WebMessageBodyStyle.WrappedRequest".
[WebInvoke(UriTemplate = "Products/API.Service?Product={productName}", Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest)]
public string APIGet(string productName, Product product1, Product product2)

jQuery Ajax Call (Sending 1 object vs 2 objects)

// Common Objects to use in examples below
var Product1 = {};
Product1.Name = 'Name1';
Product1.Description = 'Desc1';
Product1.Price = 1.0;
Product1.Quantity = 1;
Product1.IsDiscontinued = false;
var Product2 = {};
Product2.Name = 'Name2';
Product2.Description = 'Desc2';
Product2.Price = 2.0;
Product2.Quantity = 2;
Product2.IsDiscontinued = false;
//  Sending 1 Object (No Wrapping).
$.ajax({
    type: 'POST',
    url: '/InventoryService/Products/API.Service?Product=ProductXYZ',
    contentType: 'application/json; charset=utf-8',
    dataType: 'json',
    data: JSON.stringify(Product1),  // Here we are sending the 1 object, no wrapper.
    success: function (response)
    {
        // Do Something
    }
});
// Sending 2 Objects (Wrapping)
// Here we are creating a new jsonRequest object that wraps our 2 objects that we want to send to the server.
jsonRequest = { 'product1': Product1, 'product2': Product2 };
$.ajax({
    type: 'POST',
    url: '/InventoryService/Products/API.Service?Product=ProductXYZ',
    contentType: 'application/json; charset=utf-8',
    dataType: 'json',
    data: JSON.stringify(jsonRequest),  // Here we are sending a object that wraps our 2 product objects.
    success: function (response)
    {
        // Do Something
    }
})

GET / POST / PUT / DELETE WCF Services and their jQuery Ajax Calls

GET Method

Service

[WebGet(UriTemplate = "Products/{name}")]
[OperationContract]
public Product GetProduct(string name)
{
	// Get a Product
}

jQuery Call

var name = 'ProductXYZ';

$.ajax({
	type: 'GET',
	url: '/InventoryService/Products/' + name,
	contentType: 'application/json; charset=utf-8',
	dataType: 'json',
	data: '',
	success: function (response)
	{
		// Do Something
	}
});

// Alternative to $.ajax for a GET request is using $.getJSON.
$.getJSON('/InventoryService/Products/' + name, '', function (result) { /* Do Something */ });

This is probably the simplest service call to issue, but the amazing jQuery team made the process even easier with the getJSON() call that wraps up an ajax() GET request. Notice the .svc-less call above: there’s no ugly “.svc” telling your consumers “Hey, I’m using WCF!”

POST Method

Service

[WebGet(UriTemplate = "Products/{name}", Method = "POST")]
[OperationContract]
public Product GetProduct(string name, Product product)
{
	// Insert New Product
}

jQuery Call

var name = 'ProductXYZ';
var Product = {};
Product.Name = 'ProductXYZ';
Product.Description = 'Product XYZ Description';

$.ajax({
	type: 'POST',
	url: '/InventoryService/Products/' + name,
	contentType: 'application/json; charset=utf-8',
	dataType: 'json',
	data: JSON.stringify(Product),
	success: function (response)
	{
		// Do Something
	}
});

This has always been my go-to method for sending data to a service, but according to a few RESTful people this method is only supposed to be used for inserts. I’m not really sure how “firm” this rule is, so I decided to do all inserts and updates with the POST method, since I almost always handle both in the same function call in my DAL. In addition to posting my object to the service, I also include the primary key (“PK”) in the service path so I can use the IIS logs to show who touched a specific record.

PUT Method

Service

[WebInvoke(UriTemplate = "Products/{name}", Method = "PUT")]
[OperationContract]
public Product GetProduct(string name)
{
	// Update Product
}

jQuery Call

var name = 'ProductXYZ';
var Product = {};
Product.Name = 'ProductXYZ';
Product.Description = 'Product XYZ Description';

$.ajax({
	type: 'PUT',
	url: '/InventoryService/Products/' + name,
	contentType: 'application/json; charset=utf-8',
	dataType: 'json',
	data: JSON.stringify(Product),
	success: function (response)
	{
		// Do Something
	}
});

I don’t use this much, but when I do, the calls look exactly the same as a POST except for the change in request type.

DELETE Method

Service

[WebGet(UriTemplate = "Products/{name}", Method = "DELETE")]
[OperationContract]
public bool GetProduct(string name)
{
	// Delete Product
}

jQuery Call

var name = 'ProductXYZ';

$.ajax({
	type: 'DELETE',
	url: '/InventoryService/Products/' + name,
	contentType: 'application/json; charset=utf-8',
	dataType: 'json',
	data: '',
	success: function (response)
	{
		// Do Something
	}
});

Since I always like “confirmation” when something has been done, I always return a boolean from my delete services. This lets me confirm something was actually deleted. If you expect various failure states, you might consider a string return type instead, to provide a detailed “error” message about why the request failed (e.g. “Successfully Deleted”, “You don’t have permission.”, “Product is associated with other orders.”).
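
As a sketch of that string-returning variant (the two helper calls are hypothetical placeholders for your own permission and constraint checks):

[WebInvoke(UriTemplate = "Products/{name}", Method = "DELETE")]
[OperationContract]
public string DeleteProductWithStatus(string name)
{
	// Hypothetical helpers; replace with real permission/constraint logic.
	if (!UserCanDelete())
		return "You don't have permission.";
	if (ProductHasOrders(name))
		return "Product is associated with other orders.";

	return DeleteProduct(name) ? "Successfully Deleted" : "Delete failed.";
}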

ASP.NET 4.0 WCF RESTful Services – Part 1

Details, details, details… I just spent a few days moving over to .svc-less WCF services with an ASP.NET 4.0 web application, and boy was it fun… The overall setup is pretty easy, once you’ve painfully gone through the process “a few” times. Since this is something I’ll now be doing by default on all new projects, I thought it would make a great write-up tutorial. During my discovery and learning phase, I found a bunch of helpful blog posts, but nothing was 100% complete and lots of those “important” bits were missing. Since most of my projects consume JSON, I plan to do a 4-part series on setting up a Web Application to support WCF services that will be called via jQuery.

Enough with the background, let’s start by creating a new “ASP.NET Web Application”.

What project to choose in VS 2010

  1. Remove everything inside the Scripts folder.
  2. Right Click on Project, and choose “Manage NuGet Packages…”

    Hopefully you’re familiar with NuGet; it’s a package manager for Visual Studio, and you can install it by going to Tools -> Extension Manager… -> Online Gallery -> Download “NuGet Package Manager”. This is the most popular extension, and it allows you to quickly add bits to your projects (e.g. install jQuery and be notified when a new version is available).

  3. Use search to install the following NuGet Packages.
    • jQuery
    • JSON-js json2

    Required NuGet Packages

  4. Edit your “site.master” to include your new scripts:
    	   <script src="Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    	   <script src="Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    	   <script src="Scripts/jquery.validate.js" type="text/javascript"></script>
    	
  5. Also in the “site.master”, edit your asp:Menu to include 4 pages called Part1 – Part4:
                    <asp:Menu ID="NavigationMenu" runat="server" CssClass="menu" EnableViewState="false" IncludeStyleBlock="false" Orientation="Horizontal">
                        <Items>
                            <asp:MenuItem NavigateUrl="~/Default.aspx" Text="Home"/>
                            <asp:MenuItem NavigateUrl="~/Part1.aspx" Text="Part 1"/>
                            <asp:MenuItem NavigateUrl="~/Part2.aspx" Text="Part 2" />
                            <asp:MenuItem NavigateUrl="~/Part3.aspx" Text="Part 3" />
                            <asp:MenuItem NavigateUrl="~/Part3.aspx" Text="Part 4" />
                            <asp:MenuItem NavigateUrl="~/About.aspx" Text="About" />
                        </Items>
                    </asp:Menu>
    	

    ** We are only going to use Part1.aspx right now, but I plan on 4 posts on this topic.

  6. Add 4 new “Web Form using Master Page” items to the project called Part1.aspx, Part2.aspx, Part3.aspx, and Part4.aspx.
    ** These will match the names used in the navigation menu in the site.master.
  7. Add a new folder to the root of the project called Service.
  8. Add a new “AJAX-enabled WCF Service” to the Service folder called “InventoryService.svc”.
    • Note, this will add the following references to your project.
      	     System.Runtime.Serialization
      	     System.ServiceModel
      	     System.ServiceModel.Web
      	
    • It will also add the following lines to your Web.config
      	  <system.webServer>
      		 <modules runAllManagedModulesForAllRequests="true"/>
      	  </system.webServer>
      
      	  <system.serviceModel>
      		<behaviors>
      		  <endpointBehaviors>
      			<behavior name="RESTfulWCF.Service.Service1AspNetAjaxBehavior">
      			  <enableWebScript />
      			</behavior>
      		  </endpointBehaviors>
      		</behaviors>
      		<serviceHostingEnvironment aspNetCompatibilityEnabled="true"
      		  multipleSiteBindingsEnabled="true" />
      		<services>
      		  <service name="RESTfulWCF.Service.Service1">
      			<endpoint address="" behaviorConfiguration="RESTfulWCF.Service.Service1AspNetAjaxBehavior"
      			  binding="webHttpBinding" contract="RESTfulWCF.Service.Service1" />
      		  </service>
      		</services>
      	  </system.serviceModel>
      	
    • Change the “system.serviceModel” section of your web.config to the following.
      	    <system.serviceModel>
      	        <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
      	        <standardEndpoints>
      	            <webHttpEndpoint>
      	                <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>
      	            </webHttpEndpoint>
      	        </standardEndpoints>
      	    </system.serviceModel>
      	

      Web.Config Key Points

      	// This allows your services to run under your sites app pool, giving access
      	// to your HttpContext.
      	aspNetCompatibilityEnabled="true"
      	
      	// See below, this allows you to type <service>/help to get information on your service	
      	helpEnabled="true"
      	
      	// This is AWESOME, this tag allows your service to respond in any format you 
      	// specify in your request (e.g. XML, JSON, etc...).
      	automaticFormatSelectionEnabled="true"
      	

      ** One last item to note: every time you add an additional AJAX service, it will edit your web.config and put back the bad configuration. I strongly suggest you make a backup of your web.config, in case you run into problems in the future!!!

    • Manually add a project reference to System.ServiceModel.Activation.
  9. At this point, your project in solution explorer should look like this:

    Items Included in Solution Explorer

  10. Now, open InventoryService.svc and make the following changes:
    • Erase everything and add the following C# code:
      using System.ServiceModel;
      using System.ServiceModel.Activation;
      using System.ServiceModel.Web;
      
      namespace RESTfulWCF.Service
      {
          [ServiceContract(Namespace = "")]
          [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
          public class InventoryService
          {
              [WebGet(UriTemplate = "Part1")]
              [OperationContract]
              public string Part1GetRequest()
              {
                  return "I did work";
              }
          }
      }
      	

      In the code above, we are mapping the function “Part1GetRequest” to the name “Part1”. This will allow us to call the service with the following syntax “/InventoryService/Part1” using a GET request.

  11. Add a route to call the service without referencing a “.svc” file.
    Open Global.asax and replace your Application_Start with the following.

            void Application_Start(object sender, EventArgs e)
            {
                // Code that runs on application startup
                // Requires: using System.Web.Routing; and using System.ServiceModel.Activation;
                RouteTable.Routes.Add(new ServiceRoute("InventoryService", new WebServiceHostFactory(), typeof(Service.InventoryService)));
            }
    	
  12. Now we are ready to create a jQuery test call; open “Part1.aspx” in Source View:

    Erase everything and add the following HTML code

    <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Part1.aspx.cs" Inherits="RESTfulWCF.Part1" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server">
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <script type="text/javascript">
            $(document).ready(function ()
            {
                $('#doWork').click(function ()
                {
                    $.getJSON('/InventoryService/Part1', '', function (result) { alert(result); });
                });
            });
        </script>
        <h2>
            ASP.NET 4.0 WCF RESTful Demo - Part 1
        </h2>
        <p>
            <input type="button" id="doWork" value="Issue GET Request" />
        </p>
    </asp:Content>
    	

    We are able to use the $.getJSON() jQuery function because we applied the “WebGet” attribute to the WCF function “Part1GetRequest()”.

  13. Launch the application and navigate to “Part 1”. Click on the “Issue GET Request” button and you should see the following results:

    jQuery Demo Results

  14. To get a list of all the functions your service offers, pass the “/help” parameter to your service.

    List of all WCF Service Accessible by Passing /help to the Service

As noted, here is the project I built above following all the steps one-by-one.
    >> ASP.NET 4.0 RESTful Web Application Project

Making an Editable Table (CRUD) with jQuery

I’ve been working on a form that needs to support an unknown number of rows, which the user will either enter one by one or bulk upload. Adding a few rows to a table is pretty easy in HTML, but as the rows increase the page starts to slow down, and the UI can become overwhelming with HTML input controls. I tested a few concepts to determine what would best fit my form: I started with rows of input controls, then tried the jqGrid plug-in. I really liked jqGrid, but as I tried to get everything working I found myself having to “work around” lots of issues (read forum, apply fix, something else breaks, repeat), so I gave up after getting the solution 90% working in jqGrid, because the code was already a lot more complex than what I wanted. In the end, I decided to build my own table editor that supports CRUD with jQuery. In addition to following the KISS rule, I also had a list of goals I wanted to include.

Solution Goals

  1. Allow users to add 1 to 2,000 rows
  2. Keep the page quick when working with 500+ rows
  3. Make all edits in memory
  4. Create an Undo/Cancel button to undo an edit
  5. Capture dynamic HTML TABLE row contents for use in a server side postback
  6. (Not in DEMO) Enable validation for rows in Edit Mode
  7. (Not in DEMO) Enable default values for manually added rows

All of the goals above made it into the final solution, and 95% of the work is done client-side in ~300 lines of jQuery & JavaScript code. I chose jQuery templates for the rows, since they offer a simple model for merging data and HTML, along with some advanced features to add logic to how elements are rendered (e.g. if my Cross object has a Status set, it will display an alert icon on the row and notify the user something is wrong; see the sketch below). Since most of these other features were case-specific, I left them out of the demo to focus on doing basic CRUD in HTML and on how I got the dynamic rows back to ASP.NET.
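
As a sketch of that conditional rendering (not part of the demo; it assumes the Status/StatusNote fields from the sample data and uses the jQuery tmpl {{if}} syntax):

<script id="statusCellTemplate" type="text/x-jquery-tmpl">
    <td>
        {{if Status}}
            <span class="ui-icon ui-icon-alert" title="${StatusNote}">Alert</span>
        {{/if}}
    </td>
</script>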

Final Product

My solution was designed to leverage ASP.NET, but all of the code below is 100% HTML. You can take this code, apply it to any table, and choose to leverage any server technology you want. Part of step 5 is ASP.NET specific, but it shows a neat trick for getting the HTML table rows back to the server so you can access them in a traditional ASP.NET postback event.

Step 1: Prerequisites (Accessible via CDN)

  1. jQuery
  2. jQuery tmpl plug-in
  3. JSON.org library
<!-- jQuery on GOOGLE CDN -->
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<!-- JSON.org on CDNJS CDN -->
<script type="text/javascript" src="http://ajax.cdnjs.com/ajax/libs/json2/20110223/json2.js"></script>
<!-- jQuery tmpl() plug-in on ASPNET CDN -->
<script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js"></script>

Step 2: HTML Layout

<body>
    <h1>
        CRUD My Table</h1>
    <!-- Table where we will perform CRUD operations, data loaded via jQuery tmpl() -->
    <table id="CRUDthisTable" class="mediumTable">
        <thead>
            <tr class="rowHeader">
                <th></th>
                <th>Change Type </th>
                <th>Update Type </th>
                <th>Customer Part </th>
                <th>ROHM Part </th>
                <th>Rank Start </th>
                <th>Rank End </th>
                <th>Price </th>
                <th>UOM </th>
                <th>Apply Date </th>
                <th>Remarks </th>
            </tr>
        </thead>
        <tbody>
        </tbody>
    </table>
    <!-- jQuery tmpl() Templates -->
    <!-- Edit Rows -->
    <script id="editRowTemplate" type="text/x-jquery-tmpl">
        <tr class="editRow">
            <td>
                <span id="cancel" name="cancel" tooltip="Cancel" class="ui-icon ui-icon-close CancelRow">Cancel</span>
                <span id="save" name="save" tooltip="Save" class="ui-icon ui-icon-disk SaveRow">Save</span>
            </td>
            <td>
                <select id="field1" name="field1" class="changeType">
                    <option></option>
                    <option>All</option>
                    <option>Part</option>
                    <option>Price</option>
                </select></td>
            <td>
                <select id="field2" name="field2" class="updateType">
                    <option></option>
                    <option>Add</option>
                    <option>Update</option>
                    <option>Delete</option>
                </select></td>
            <td>
                <input type="text" id="field3" name="field3" class="customerPart required part" value="${CustomerPart}" /></td>
            <td>
                <input type="text" id="field4" name="field4" class="rohmPart validROHMpart part" value="${ROHMPart}" /></td>
            <td>
                <input type="text" id="field5" name="field5" class="rankStart rank" value="${RankStart}" /></td>
            <td>
                <input type="text" id="field6" name="field6" class="rankEnd rank" value="${RankEnd}" /></td>
            <td>
                <input type="text" id="field7" name="field7" class="price required number" value="${Price}" /></td>
            <td>
                <select  id="field8" name="field8" class="uomType required">
                    <option></option>
                    <option>1</option>
                    <option>1000</option>
                </select></td>
            <td>
                <input type="text" id="field9" name="field9" class="applyDate required date" value="${ApplyDate}" /></td>
            <td>
                <input type="text" id="field10" name="field10"class="remarks" value="${Remarks}" /></td>
        </tr>
    </script>
    <!-- View Rows -->
    <script id="viewRowTemplate" type="text/x-jquery-tmpl">
        <tr>
            <td style="width:50px;">
                <span id="edit" name="edit" title="Edit" class="ui-icon ui-icon-pencil EditRow">Edit</span>
                <span id="delete" name="delete" title="Delete" class="ui-icon ui-icon-trash DeleteRow">Delete</span>
            </td>
            <td style="width:120px;">${ChangeType}</td>
            <td style="width:120px;">${UpdateType}</td>
            <td>${CustomerPart}</td>
            <td>${ROHMPart}</td>
            <td style="width:45px;">${RankStart}</td>
            <td style="width:45px;">${RankEnd}</td>
            <td>${Price}</td>
            <td style="width:64px;">${UOM}</td>
            <!-- **** TIP: Here we use a function to format the date mm/dd/yyyy -->
            <td style="width:80px;">${FormatDate(ApplyDate)}</td>
            <td>${Remarks}</td>
        </tr>
    </script>
</body>

Step 3: Example Loading Data ( NO CRUD Functionality )

        // Helper Function to Format Date in View Row
        function FormatDate(date)
        {
            return date.getMonth() + 1 + "/" + date.getDate() + "/" + date.getFullYear();
        }

        // After the DOM has loaded, take the sample data and inject it into the table using the View Row template.
        $(document).ready(function ()
        {
            // Sample Data - Could be returned via AJAX or could be manual rows added to the TABLE
            var crosses = [
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "1SS355TE-17", "ROHMPart": "1SS355TE-17", "RankStart": "", "RankEnd": "", "Price": 0.0151, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RB160M-60TR", "ROHMPart": "RB160M-60TR", "RankStart": "", "RankEnd": "", "Price": 0.0605, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RR264M-400TR", "ROHMPart": "RR264M-400TR", "RankStart": "", "RankEnd": "", "Price": 0.031, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "1SR154-400TE25", "ROHMPart": "1SR154-400TE25", "RankStart": "", "RankEnd": "", "Price": 0.0309, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null },
            { "ChangeType": "All", "UpdateType": "Add", "CustomerPart": "RF071M2STR", "ROHMPart": "RF071M2STR", "RankStart": "", "RankEnd": "", "Price": 0.0638, "UOM": 1, "ApplyDate": new Date(1335337200000), "Remarks": "", "Status": null, "StatusNote": null}];

            if (crosses) {
                $("#viewRowTemplate").tmpl(crosses).appendTo("#CRUDthisTable");
            }
        });

CRUD Table – Data Loaded no CRUD Functions Activated

Demo of 100% CRUD Table in HTML ( no persistance or CRUD features activated )

** As you can see, I’m not using image links like the final product, since I was aiming for simplicity. If you want to use images, I suggest the jQuery UI icons I used in the final product; they can easily be added to a span with two class values (e.g. class=”ui-icon ui-icon-close”).

Step 4: Enable CRUD

        // Global Parameters
        var rowNum = 1;
        var rowRemovedNum;
        var rowRemovedContents;

        // Read a row in Edit Mode into a Cross Object
        function GetEditRowObject()
        {
            var row = $('#CRUDthisTable tbody tr.editRow');

            var cross = {};

            cross.ChangeType = row.find('.changeType').val();
            cross.UpdateType = row.find('.updateType').val();
            cross.CustomerPart = row.find('.customerPart').val();
            cross.ROHMPart = row.find('.rohmPart').val();
            cross.RankStart = row.find('.rankStart').val();
            cross.RankEnd = row.find('.rankEnd').val();
            cross.Price = row.find('.price').val();
            cross.UOM = row.find('.uomType').val();
            var dateString = row.find('.applyDate').val();
            cross.ApplyDate = new Date(dateString);
            cross.Remarks = row.find('.remarks').val();

            return cross;
        }

        // Read a row in View Mode into a Cross Object
        function GetViewRowObject(rowNum)
        {
            var row = $('#CRUDthisTable tbody tr').eq(rowNum);

            var cross = {};

            cross.ChangeType = row.find('td:eq(1)').text();
            cross.UpdateType = row.find('td:eq(2)').text();
            cross.CustomerPart = row.find('td:eq(3)').text();
            cross.ROHMPart = row.find('td:eq(4)').text();
            cross.RankStart = row.find('td:eq(5)').text();
            cross.RankEnd = row.find('td:eq(6)').text();
            cross.Price = row.find('td:eq(7)').text();
            cross.UOM = row.find('td:eq(8)').text();
            cross.ApplyDate = row.find('td:eq(9)').text();
            cross.Remarks = row.find('td:eq(10)').text();

            return cross;
        }

        // Read all rows into Cross Object Array
        function GetAllViewRowsAsCrossObjects()
        {
            var crossTableRows = [];

            $('#CRUDthisTable tbody tr').each(function (index, value)
            {
                var row = GetViewRowObject(index);
                crossTableRows.push(row);
            });

            return crossTableRows;
        }

        // Check if any rows are in Edit Mode
        function IsExistingRowInEditMode()
        {
            var rowsInEditMode = $('#CRUDthisTable tbody tr.editRow').length;

            if (rowsInEditMode &gt; 0) {
                alert('You have a row in Edit mode, please save or cancel the row changes before you continue.');
                return true;
            }

            return false;
        }

        // After the DOM has loaded, bind the CRUD events
        $(document).ready(function () {

    // Events
    $('.AddRow').click(function()
    {
        if (IsExistingRowInEditMode())
            return;

        rowRemovedNum = 0;

        var data = { data: 1 };
        var output = $("#editRowTemplate").tmpl(data).html()

        $('#CRUDthisTable tbody').prepend('&lt;tr class="editRow"&gt;' + output + '&lt;/tr&gt;');
        var $rowEdit = $('#CRUDthisTable tbody tr.editRow');

        // Defaults //
        var changeTypeDefault = $('#ChangeTypeDefualt').val();
        var updateTypeDefault = $('#UpdateTypeDefault').val();
        var uomDefault = $('#UOMDefault').val();
        var applyDateDefault = $('#ApplyDateDefault').val();

        var changeType = $rowEdit.find('.changeType');
        $(changeType).val(changeTypeDefault);
        var updateType = $rowEdit.find('.updateType');
        $(updateType).val(updateTypeDefault);
        var uomType = $rowEdit.find('.uomType');
        $(uomType).val(uomDefault);
        var applyDate = $rowEdit.find('.applyDate');
        $(applyDate).val(applyDateDefault);

        $('#CRUDthisTable tbody tr:first')[0].scrollIntoView();
    });

    $('.EditRow').live('click', function(e)
    {
        if (IsExistingRowInEditMode())
            return;

        var row = $(this).parent().parent().parent().children().index($(this).parent().parent());

        var data = GetViewRowObject(row);

        var output = $("#editRowTemplate").tmpl(data).html();

        rowRemovedNum = row;
        rowRemovedContents = $('#CRUDthisTable tbody tr').eq(row).html();

        $('#CRUDthisTable tbody tr').eq(row).after('<tr class="editRow">' + output + '</tr>');

        var changeTypeDefault = $('#ChangeTypeDefualt').val();
        var updateTypeDefault = $('#UpdateTypeDefault').val();
        var uomDefault = $('#UOMDefault').val();
        var applyDateDefault = $('#ApplyDateDefault').val();

        var $editRow = $('#CRUDthisTable tbody tr.editRow');

        var changeType = $editRow.find('.changeType');
        $(changeType).val(data.ChangeType);

        var updateType = $editRow.find('.updateType');
        $(updateType).val(data.UpdateType);

        var uomType = $editRow.find('.uomType');
        $(uomType).val(data.UOM);

        $('#CRUDthisTable tbody tr').eq(row).remove();
    });

    $('.SaveRow').live('click', function(e)
    {
        var isValid = ValidateNestedControls("#CRUDthisTable");

        // Good place to add validation, don't allow save until the row has valid data!
        // if (!isValid)
        //     return;

        var savedData = GetEditRowObject();

        var row = $(this).parent().parent().parent().children().index($(this).parent().parent());

        var output = $("#viewRowTemplate").tmpl(savedData).html();

        var tableRows = $('#CRUDthisTable tbody tr').length;

        if (tableRows == 0 || row == 0) {
            $('#CRUDthisTable tbody').prepend('<tr>' + output + '</tr>');
        }
        else {
            $('#CRUDthisTable tbody tr').eq(row).before('<tr>' + output + '</tr>');
        }

        $('#CRUDthisTable tbody tr').eq(row + 1).remove();
    });

    $('.CancelRow').live('click', function(e)
    {
        var row = $(this).parent().parent().parent().children().index($(this).parent().parent());

        $('#CRUDthisTable tbody tr').eq(row).remove();

        var tableRows = $('#CRUDthisTable tbody tr').length;

        if (rowRemovedContents) {
            if (tableRows == 0 || row == 0) {
                $('#CRUDthisTable tbody').prepend('<tr>' + rowRemovedContents + '</tr>');
            }
            else {
                $('#CRUDthisTable tbody tr').eq(row).before('<tr>' + rowRemovedContents + '</tr>');
            }
        }

        rowRemovedContents = null;
    });

    $('.DeleteRow').live('click', function(e)
    {
        e.preventDefault();
        $(this).parent().parent().remove();
    });
        });

Table showing the results of clicking “Add New Row” in the Table Footer

Step 5: Ajax POST Table Contents to the Server (before button event)

There are a ton of ways to do this, but my goal was to allow users to edit the table, and when they were done with all their edits they could hit “Save” and everything would be written to the DB. Since ASP.NET doesn’t give you access to dynamic table rows, I bound an AJAX post event to the “Save” button that sends the table contents to the server, stores them in cache, and then uses the cache in the traditional postback “Save” event.

        // After the DOM has loaded, bind the ASP.NET save button
        $(document).ready(function () {
            $('#<%= btnSave.ClientID %>').click(function (e) {
                return PostTable();
            });
        });

        // Post all rows to the server and put into Cache
        function PostTable()
        {
            // Normally I'll get the ID from the QueryString, but it could also be grabbed from a hidden element in the form.
            var crossId = 1;
            var jsonRequest = { crosses: GetAllViewRowsAsCrossObjects(), crossId: crossId };

            $.ajax({
                type: 'POST',
                url: 'Demo.aspx/CacheTable',
                data: JSON.stringify(jsonRequest),
                contentType: 'application/json; charset=utf-8',
                dataType: 'json',
                success: function (data, text)
                {
                    return true;
                },
                error: function (request, status, error)
                {
                    return false;
                }
            });
        }

Important Note: If you want to access a page method via jQuery $.ajax(), you must make the function static and pass the case-sensitive parameters with the expected data type(s) in the ajax call.

    public partial class Demo: System.Web.UI.Page
    {
        // Evaluated per request so each user gets their own cache key
        private static string CacheKey
        {
            get { return "CacheTable_" + HttpContext.Current.User.Identity.Name; }
        }

        [WebMethod]
        public static void CacheTable(List&lt;Cross&gt; crosses, int crossId)
        {
            if (crosses != null &amp;&amp; crosses.Count &gt; 0)
            {
                HttpContext.Current.Cache.Remove(CacheKey);
                HttpContext.Current.Cache.Insert(CacheKey, crosses, null, DateTime.Now.AddSeconds(3600), Cache.NoSlidingExpiration);
            }
        }
    }

    // Custom Data Transfer Object (DTO)
    public class Cross
    {
        public string ChangeType { get; set; }
        public string UpdateType { get; set; }
        public string CustomerPart { get; set; }
        public string ROHMPart { get; set; }
        public string RankStart { get; set; }
        public string RankEnd { get; set; }
        public double Price { get; set; }
        public int UOM { get; set; }
        public DateTime ApplyDate { get; set; }
        public string Remarks { get; set; }
        public string Status { get; set; }
        public string StatusNote { get; set; }
    }

Working Demo of using jQuery to allow CRUD edits to a HTML TABLE.

ASP.NET Note **If you run into issues on the amount of rows you can postback to the server in ASP.NET via AJAX & JSON, you’ll need to edit your “maxJsonLength” in your web.config.

    <system.web.extensions>
        <scripting>
            <webServices>
                <jsonSerialization maxJsonLength="2097152"/>
            </webServices>
        </scripting>
    </system.web.extensions>

Password Generator and Callsign Spelling with C#

If you need help building a random password, you should check out pctools.com’s random password generator. It’s great: it gives you a lot of options, and it can generate a list of 50 passwords with their callsign spellings in seconds. I’ve found the callsign spelling very helpful for remembering and recognizing all the characters in a new password. I liked this solution so much, I decided to port the concept over to C# with a set of helpers. It can be used anywhere you want to generate a password; I’m currently using it in an ASP.NET LOB app to suggest and show better password options.

Generating Random Passwords

using System;
using System.Collections.Generic;
using System.Linq;

public static class StringHelper
{
    // Shared Static Random Generator
    private static readonly Random CommonRandom = new Random();

    public static string GenerateRandomPassword(int passwordLength, bool canRepeatCharacters = false)
    {
        char[] chars = "$%#@!*abcdefghijklmnopqrstuvwxyz1234567890?;:ABCDEFGHIJKLMNOPQRSTUVWXYZ^&".ToCharArray();

        string randomPassword = string.Empty;

        for (int l = 0; l < passwordLength; l++)
        {
            int x = CommonRandom.Next(0, chars.Length); // 0 to chars.Length - 1, so the character at index 0 can be picked too

            if (canRepeatCharacters || !randomPassword.ToCharArray().Any(ch => ch == chars[x]))
                randomPassword += chars[x].ToString();
            else
                l--;
        }

        return randomPassword;
    }

    public static List<string> GenerateRandomPasswords(int quantity, int passwordLength = 8)
    {
        List<string> passwords = new List<string>();

        for (int i = 0; i < quantity; i++)
        {
            passwords.Add(GenerateRandomPassword(passwordLength));
        }

        return passwords;
    }
}

There are a few options on the generator, like not allowing repeated characters and configuring the password length. In addition to the main method, there is a helper that returns a list of multiple passwords, to give your users options.

Important note: Random() is not very random when you’re making quick consecutive calls. If you want to call Random() in a loop, move your instance of the Random class outside the loop to prevent duplicate seeds, which would result in duplicate passwords.
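
Here is a quick usage sketch of the two generator methods above (the output is illustrative; your random passwords will differ):

using System;

class PasswordDemo
{
    static void Main()
    {
        // Single 12-character password, repeated characters allowed
        string password = StringHelper.GenerateRandomPassword(12, true);
        Console.WriteLine(password);

        // Five 8-character suggestions to offer your users
        foreach (string suggestion in StringHelper.GenerateRandomPasswords(5))
        {
            Console.WriteLine(suggestion);
        }
    }
}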

public static string GetCallsignSpelling(string password)
{
	if (string.IsNullOrEmpty(password))
		return string.Empty;

	Dictionary<char, string> callsigns = new Dictionary<char, string>()
		{
			{'$',"Dollar Symbol"},
			{'%',"Percent Symbol"},
			{'#',"Number Symbol"},
			{'@',"At Symbol"},
			{'!',"Exclamation Symbol"},
			{'*',"Asterisk Symbol"},
			{'a',"alpha"},
			{'b',"bravo"},
			{'c',"charlie"},
			{'d',"delta"},
			{'e',"echo"},
			{'f',"foxtrot"},
			{'g',"golf"},
			{'h',"hotel"},
			{'i',"india"},
			{'j',"juliet"},
			{'k',"kilo"},
			{'l',"lima"},
			{'m',"mike"},
			{'n',"november"},
			{'o',"oscar"},
			{'p',"papa"},
			{'q',"quebec"},
			{'r',"romeo"},
			{'s',"sierra"},
			{'t',"tango"},
			{'u',"uniform"},
			{'v',"victor"},
			{'w',"whiskey"},
			{'x',"xray"},
			{'y',"yankee"},
			{'z',"zulu"},
			{'1',"One"},
			{'2',"Two"},
			{'3',"Three"},
			{'4',"Four"},
			{'5',"Five"},
			{'6',"Six"},
			{'7',"Seven"},
			{'8',"Eight"},
			{'9',"Nine"},
			{'0',"Zero"},
			{'?',"Question Symbol"},
			{';',"SemiColon Symbol"},
			{':',"Colon Symbol"},
			{'A',"ALPHA"},
			{'B',"BRAVO"},
			{'C',"CHARLIE"},
			{'D',"DELTA"},
			{'E',"ECHO"},
			{'F',"FOXTROT"},
			{'G',"GOLF"},
			{'H',"HOTEL"},
			{'I',"INDIA"},
			{'J',"JULIET"},
			{'K',"KILO"},
			{'L',"LIMA"},
			{'M',"MIKE"},
			{'N',"NOVEMBER"},
			{'O',"OSCAR"},
			{'P',"PAPA"},
			{'Q',"QUEBEC"},
			{'R',"ROMEO"},
			{'S',"SIERRA"},
			{'T',"TANGO"},
			{'U',"UNIFORM"},
			{'V',"VICTOR"},
			{'W',"WHISKEY"},
			{'X',"XRAY"},
			{'Y',"YANKEE"},
			{'Z',"ZULU"},
			{'^',"Caret Symbol"},
			{'&',"Ampersand Symbol"}
		};

	char[] wordCharacters = password.ToCharArray();

	string callsignSpelling =
		wordCharacters.Aggregate(string.Empty,
									 (current, passwordCharacter) =>
									 current + (callsigns[passwordCharacter] + " - ")).TrimEnd(' ', '-');

	return callsignSpelling;
}

The spelling is done using a key/value dictionary, iterating over each character of the password one-by-one.

The result of using these two helpers together looks something like the sketch below (assuming GetCallsignSpelling is added to the same StringHelper class; the sample output in the comments is illustrative):
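
string password = StringHelper.GenerateRandomPassword(8);
string spelling = StringHelper.GetCallsignSpelling(password);

Console.WriteLine(password);  // e.g. b7R!kQ2z
Console.WriteLine(spelling);  // e.g. bravo - Seven - ROMEO - Exclamation Symbol - kilo - QUEBEC - Two - zulu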

JSON to HTML Form Using jQuery dForm Plug-in

In a previous post, I showed how to go from a JSON array of objects/values to an HTML table. This is great when you want to display a bunch of data in columns and rows, but what happens if you want to interact with the data? No problem, there is the jQuery dForm plug-in for that. In order to generate the form, you’ll need to redesign your server side / inline objects to provide the required rendering data for dForm. It’s pretty straightforward, and there are a ton of options with the plug-in, see here.

Demo Screenshot

In the image above, we use an inline object that feeds the dForm “buildForm()” method. I’m using a static value here, but you could just as easily set up dForm to load the data remotely using ajax.
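
As a rough sketch, the definition object looks something like this. I’m writing the option names (“elements”, “caption”, etc…) from memory of the 0.1.x docs, so treat them as assumptions and double check against the plug-in documentation for your version:

// Inline form definition fed to dForm's buildForm().
// Option/field names here are assumptions based on the 0.1.x docs.
var formDefinition = {
    "action" : "Demo.aspx",
    "method" : "post",
    "elements" : [
        { "type" : "text", "name" : "partNumber", "caption" : "Part Number" },
        { "type" : "select", "name" : "status", "caption" : "Status",
            "options" : { "1" : "Active", "0" : "Inactive" } },
        { "type" : "submit", "value" : "Save" }
    ]
};

$(document).ready(function () {
    // Build the form inside an empty <form id="myForm"></form> element
    $("#myForm").buildForm(formDefinition);
});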

Here is the jQuery dForm 0.1.3 Plug-in Demo used in the screenshot above, which includes everything you need to get up and running fast. If you have problems making dForm work, make sure you have included ALL the dForm libraries. The dForm.js file has dependencies on the other libraries; if you don’t load them, you’ll end up with a blank form.

Reusable ASP.NET MessageBox UserControl

All systems need a way to share status messages with a user; without feedback, they don’t know what has happened or what will happen. There are so many different ways to send an alert (modal window, JavaScript alert, inline HTML, etc…), but regardless of what you use, it always helps to be consistent. I built this control based on a few different ideas/examples a long time ago, and I seem to find more uses for it all the time. You can call it from the server or from the client using JavaScript, making it the perfect single “notification” solution.

Here is an example of what it looks like.

ASP.NET MessageBox User Control Screenshot

** The text after each heading is the code that was used to trigger the message box.

The control is pretty straightforward: on the server, it works by “re-injecting” the rendered control when you call the Show function. Since the control does not use viewstate, every time you post back to the server and call Show again, the last message disappears and the new message is displayed. If you want the message to disappear automatically after a few seconds, you can set the timeout in milliseconds. On the client (via JavaScript), you can create a simple function that provides access to throwing alerts without a postback.

Client Side Example

<script type="text/javascript">
	function ShowSuccess(message) {
		var $alert = $('#MBWrapper1');

		$alert.removeClass().addClass('MessageBoxInterface');
		$alert.children('p').remove();
		$alert.append('<p>' + message + '</p>').addClass('successMsg').show().delay(8000).slideUp(300);
	}
</script>

Server Side Example

public partial class _Default : System.Web.UI.Page
{
	protected void Success_Click(object sender, EventArgs e)
	{
		MessageBox1.ShowSuccess("Success, page processed.", 5000);
	}
}

ASP.NET MessageBox User Control – Full Working Demo

Download and try the demo, if you like what you see… Here is what you need to include to make the control work in your project.

  • MessageBox User Control (ASCX & CS)
  • jQuery
  • IMessageBox.css stylesheet
  • All the graphics from the Images folder

** Note: If you move the images into a different directory, you’ll need to update the CSS to use the correct path to the images.

ASP.NET Lists Elements and Client Side Updates

A common issue I see people run into with Web Forms is that client side changes to LIST elements (ListBox, DropDownList, etc…) are not passed back during POSTBACK. This is due to how the viewstate is assigned/processed, and there is no simple “hack” to bypass it. As for other non-list elements that typically implement a Text property, this isn’t an issue. Here are 3 common ways to get around the limitation and see the changes to your LIST elements that were made on the client using jQuery/JavaScript.

1. Create a hidden field (that has a text property)

<select size="5" id="users">
    <option value="Kelly">Kelly</option>
    <option value="Kylee">Kylee</option>
    <option value="Nathan">Nathan</option>
</select>

<input type="hidden" id="userList" runat="server" value="Kelly,Kylee,Nathan"/>

Here is how you would add/parse the client side change.

var $users = $('#users');
var $userList = $('#userList');

$users.append($('<option></option>').val('New Entry Value').html('New Entry Text'));

var newUserList = '';
var $options = $('#users').children('option');

$.each($options, function(index, value) {
	newUserList = newUserList + $options[index].value + ',';
}); 			

$userList.val(newUserList.substr(0, newUserList.length-1));

** In this example, you maintain a hidden input element that matches the items in the select list. When you change the select list in JavaScript, you change the value of the hidden element. During post back, you read and parse this list to make your updates, as sketched below.
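
A minimal sketch of the postback side (the Save_Click handler name is illustrative; userList is the runat="server" hidden input from the markup above):

protected void Save_Click(object sender, EventArgs e)
{
	// Read the comma separated list maintained by the client side script
	string[] users = userList.Value.Split(',');

	foreach (string user in users)
	{
		// Persist each entry here (insert/update your data store)
	}
}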

2. Wrap your lists in an update panel

<asp:UpdatePanel ID="UpdateRoles" runat="server">
    <ContentTemplate>
        <table>
            <tr>
                <td style="height: 180px; vertical-align: top;">
                    Active Roles<br />
                    <asp:ListBox ID="lstAdd" runat="server" Width="300px" OnSelectedIndexChanged="lstAdd_SelectedIndexChanged" Rows="10"></asp:ListBox>
                </td>
                <td style="height: 180px">
                    <asp:Button ID="AddRole" runat="server" Text="->" OnClick="AddRole_Click1" />
                    <br />
                    <asp:Button ID="RemoveRole" runat="server" Text="<-" OnClick="RemoveRole_Click" />
                </td>
                <td style="height: 180px; vertical-align: top;">
                    InActive Roles<br />
                    <asp:ListBox ID="lstRemove" runat="server" Width="300px" OnSelectedIndexChanged="lstRemove_SelectedIndexChanged" Rows="10"></asp:ListBox>
                </td>
            </tr>
        </table>
    </ContentTemplate>
</asp:UpdatePanel>

Here are the events that are fired when you click the Add / Remove buttons to move items between the lists.

protected void AddRole_Click1(object sender, EventArgs e)
{
	if (lstAdd.SelectedIndex > -1)
	{
		lstRemove.Items.Add(lstAdd.SelectedItem);
		lstAdd.Items.RemoveAt(lstAdd.SelectedIndex);
	}
}

protected void RemoveRole_Click(object sender, EventArgs e)
{
	if (lstRemove.SelectedIndex > -1)
	{
		lstAdd.Items.Add(lstRemove.SelectedItem);
		lstRemove.Items.RemoveAt(lstRemove.SelectedIndex);
	}
}


** You could fire the events using any type of event; in the sample, I highlight an item in one of the two lists and click Add/Remove to switch the items between the lists.

3. Post back using Ajax

This is currently my preferred approach, since it rides on top of jQuery Ajax and follows my normal design approach to building widgets (ajax load / add / edit / delete). Using this approach you can do real-time updates as users edit the list, or you can post back your entire form, including all your list changes, without a post back. There are so many options when you start using ajax calls to pass data back and forth between the client and server.

My example below uses a WCF service; you can also use ASMX to do this in earlier versions of .NET. This solution has a lot more steps, but it really encompasses a whole “form submit” vs. just propagating your LIST changes to the server.

To start, I build some simple objects to pass between the client & server. The objects go both ways and are defined as a DataContract in C#.

[DataContract]
public class Member
{
	[DataMember]
	public int MemberId;

	[DataMember]
	public string CustomerCode;

	[DataMember]
	public string DelToCode;
}

[DataContract]
public class Rule
{
	[DataMember]
	public int RuleId;

	[DataMember]
	public string Name;

	[DataMember]
	public bool Enabled;

	[DataMember]
	public List&lt;Member&gt; Members;
}

Once we have an object to pass, we define a method that will use it.

[OperationContract]
[WebInvoke(
	RequestFormat = WebMessageFormat.Json,
	ResponseFormat = WebMessageFormat.Json,
	BodyStyle = WebMessageBodyStyle.WrappedRequest)
]
bool UpdateRule(Rule rule);

public bool UpdateRule(Rule rule)
{
	// Do Update Using Object
	return true;
}

Next we build a form that will be used with our server; everything on the form can be built during page load, or it can be populated via Ajax.

<fieldset class="withborder" id="manageRule">
<legend>Rule Update</legend>
<ol>
	<li>
		<label class="left">
			Rule ID<em>*</em></label>
		<input type="text" class="medium disabled" id="ruleId">
	</li>
	<li>
		<label class="left">
			Name<em>*</em></label>
		<input type="text" class="medium" maxlength="5" id="ruleName">
	</li>
	<li>
		<label class="left">
			Enabled<em>*</em></label>
		<input type="checkbox" class="checkBox" id="ruleEnabled" checked="checked">
	</li>
	<li>
		<label class="left">
			Assigned<em>*</em></label>
		<div>
			<div class="dropZone available">
				<ul class="connectedSortable ui-sortable" id="available" style="">
					<li id="MemberId_1" class="ui-state-highlight">Red</li>
					<li id="MemberId_2" class="ui-state-highlight">Green</li>
				</ul>
			</div>
			<div class="dropZone included">
				<ul class="connectedSortable ui-sortable" id="included" style="">
					<li id="MemberId_3" class="ui-state-highlight">Blue</li>
					<li id="MemberId_4" class="ui-state-highlight">Yellow</li>
					<li id="MemberId_5" class="ui-state-highlight">Orange</li>
				</ul>
			</div>
		</div>
	</li>
	<li style="text-align: right;">
		<input type="button" value="Update Rule" text="Add" id="updateRule">
	</li>
</ol>
</fieldset>

Finally, we put it all together with a JavaScript function that uses jQuery and JSON.org’s library to build and transfer data between the client and server.

function UpdateRule() {
	var rule = {}; // must be initialized as an object before assigning properties

	rule.RuleId = parseInt($('#ruleId').val());
	rule.Name = $('#ruleName').val();
	rule.Enabled = $('#ruleEnabled').is(':checked');

	var $includedMembers = $('#included > li');

	rule.Members = [];

	// This is where you parse / add the values of your list (listbox, li, etc...)
	if ($includedMembers.length > 0) {
		$.each($includedMembers, function (index, value) {
			var id = $includedMembers[index].id.replace('MemberId_', '');
			var name = $includedMembers[index].firstChild.data.split(' - ');
			var member = { 'CustomerCode': name[0], 'DelToCode': name[1], 'MemberId': id };
			rule.Members.push(member);
		});
	}

	$.ajax({
		url: '/Service/ExampleWCFService.svc/UpdateRule',
		contentType: 'application/json; charset=utf-8',
		dataType: 'json',
		type: 'POST',
		data: JSON.stringify({ 'rule': rule }),
		success: function (rule) {
			// Do Something On Success
		}
	});
}

** Don’t forget, you could have your service store this data in the cache/session/etc… if you don’t want to store it in real-time. In my sample, I send the whole form to be processed, not just the list. This gets rid of the entire post back process.

Each approach has its own unique requirements; the only one I tend not to use anymore is #2, since it requires the addition of the ASP.NET Ajax Library. Having an additional library and added content in a project for this one feature isn’t worth it to me. I know the library has gotten more compact, but in general it’s quite heavy (size-wise), and since I already “always” include jQuery, there really isn’t a need for this baggage.

JSON to CSV

I created a helper a few months back that used DATA URIs to download JSON to CSV, but due to IE’s implementation of DATA URIs (or lack thereof), it does not work in IE (all versions). Here is the same helper that will just convert the data, which you can use any way you want (example: in a popup, to display in a modal window, etc…).

<html>
<head>
    <title>Demo - Convert JSON to CSV</title>
    <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
    <script type="text/javascript" src="https://github.com/douglascrockford/JSON-js/raw/master/json2.js"></script>

    <script type="text/javascript">
		// JSON to CSV Converter
        function ConvertToCSV(objArray) {
            var array = typeof objArray != 'object' ? JSON.parse(objArray) : objArray;
            var str = '';

            for (var i = 0; i < array.length; i++) {
                var line = '';
                for (var index in array[i]) {
                    if (line != '') line += ','

                    line += array[i][index];
                }

                str += line + '\r\n';
            }

            return str;
        }

		// Example
        $(document).ready(function () {

			// Create Object
            var items = [
				  { name: "Item 1", color: "Green", size: "X-Large" },
				  { name: "Item 2", color: "Green", size: "X-Large" },
				  { name: "Item 3", color: "Green", size: "X-Large" }];

			// Convert Object to JSON
			var jsonObject = JSON.stringify(items);

			// Display JSON
            $('#json').text(jsonObject);

			// Convert JSON to CSV & Display CSV
            $('#csv').text(ConvertToCSV(jsonObject));
        });
    </script>
</head>
<body>
    <h1>
        JSON</h1>
    <pre id="json"></pre>
    <h1>
        CSV</h1>
    <pre id="csv"></pre>
</body>
</html>

Here is the output using the script above.

JSON

Created with the “JSON.stringify()” function from json.org.

[{"name":"Item 1","color":"Green","size":"X-Large"},{"name":"Item 2","color":"Green","size":"X-Large"},{"name":"Item 3","color":"Green","size":"X-Large"}]

CSV

Created with the “ConvertToCSV()” function from above.

Item 1,Green,X-Large
Item 2,Green,X-Large
Item 3,Green,X-Large

Compare SubSonic Objects (Columns) for Differences

A while back I needed a way to quickly compare a set of SubSonic v2.2 objects (data properties) for differences. Since the two source rows of data would be created at different times and have different primary keys, I needed a way to selectively compare columns/properties. My solution was a function called “ColumnsAreEqual” that I added to “RecordBase.cs”, which compares each property value of two SubSonic objects while allowing you to set a list of columns/properties to exclude.

/// <summary>
/// Compare values of two SubSonic objects for differences.  You can also pass an exclusion list
/// of columns/properties to exclude,  like a primary key or CreatedOn date.
/// </summary>
/// <param name="comparableObject">Object to Compare</param>
/// <param name="ignoreColumnList">Comma separated list of ColumnNames to ignore</param>
/// <returns></returns>
public bool ColumnsAreEqual(T comparableObject, string ignoreColumnList = null)
{
	if (comparableObject == null)
		return false;

	// If ignore list is set, build array of column names to skip
	string[] ignoreColumnNames = new string[] {};
	if (!string.IsNullOrEmpty(ignoreColumnList))
	{
		// Split works for a single column name too (no comma yields a one-element array)
		ignoreColumnNames = ignoreColumnList.Split(',');
	}

	foreach (TableSchema.TableColumnSetting setting in columnSettings)
	{
		// If there are columns to ignore, check the current ColumnName against ignore list and skip.
		if (ignoreColumnNames.Length > 0)
		{
			bool ignored = false;

			foreach (string ignoreColumnName in ignoreColumnNames)
			{
				if (ignoreColumnName.Trim().ToLower() == setting.ColumnName.ToLower())
				{
					ignored = true;
					break;
				}
			}

			// If this is a ignore column, skip.
			if (ignored)
			{
				continue;
			}
		}

		var before = setting.CurrentValue;
		var after = comparableObject.GetColumnValue<object>(setting.ColumnName);

		// If both are null, then they are equal
		if (before == null && after == null)
			continue;

		// If one is null then they are not equal
		if (before == null || after == null)
			return false;

		// Compare two non-null objects
		if (!before.Equals(after))
			return false;
	}

	return true;
}

// Example Usage
bool areEqual = before.Packaging.ColumnsAreEqual(after.Packaging,"PackagingId,CreatedOn,ModifiedOn");

Optional Column Updates with your Stored Procedures

Recently, I needed a way to quickly update some bulk data based on a part number… I started by building 2 stored procedures that each updated different columns, but a few minutes later I realized I was going to need at least 2 more permutations of my update procedure, since there were more column combinations I needed to update. I didn’t want to maintain a bunch of SPs for different types of updates; I just wanted a handy update SP that would update a column/field when I passed data in, or ignore it if left blank/null.

I came up with a T-SQL optional update parameter solution based on the “COALESCE” function along with typed null values. Normally, you pass in a bunch of values and it returns the first non-null value, but there is a hidden gem: you can pass in typed null values and it will return null. Knowing this, I created the code below.

-- CREATE DEMO TABLE
CREATE TABLE [dbo].[Product](
	[PartNumber] [nvarchar](20) NULL,
	[Description] [nchar](20) NULL,
	[Comments] [nchar](20) NULL
) ON [PRIMARY];

GO

-- DEMO TABLE DATA
PRINT '';
PRINT 'INSERT SAMPLE DATA';
PRINT '---------------------------------------------------------- '
INSERT INTO [PRODUCT] VALUES (N'PART1', N'PART 1 Description', N'PART 1 Comment');

GO

-- CREATE DEMO PROCEDURE
CREATE PROCEDURE [dbo].[spProduct_Update]
	@PartNumber AS NVARCHAR(20),
	@Description AS NCHAR(20),
	@Comments AS NCHAR(20)
AS
BEGIN
	DECLARE @BEFORE AS NVARCHAR(200);
	DECLARE @AFTER AS NVARCHAR(200);

	SELECT @BEFORE = 'BEFORE: ' + [PartNumber] + '  |  ' + [Description] + '  |  ' + [Comments] FROM [Product] WHERE [PartNumber] = @PartNumber;

	UPDATE [Product]
		SET [Description] = COALESCE(@Description,[Description]),
			[Comments] = COALESCE(@Comments,[Comments])
	WHERE [PartNumber] = @PartNumber;

	SELECT @AFTER = ' AFTER: ' + [PartNumber] + '  |  ' + [Description] + '  |  ' + [Comments] FROM [Product] WHERE [PartNumber] = @PartNumber;

	PRINT @BEFORE;
	PRINT @AFTER;

END

GO

-- Fails : Both values are non-typed null values
PRINT '';
PRINT '';
PRINT 'FAIL = COALESCE(null,null)';  --2 non-typed null value

GO

DECLARE @MyField AS NVARCHAR(50);
SET @MyField = COALESCE(null, null);
PRINT @MyField;

GO

-- Pass : The second value is a typed null value
PRINT '';
PRINT '';
PRINT 'PASS = COALESCE(null,<typed null parameter>)';  --1 non-typed null value, 1 typed null value

GO

DECLARE @MyField AS NVARCHAR(50);
DECLARE @MyTypedParameter AS NVARCHAR(50);
SET @MyField = COALESCE(null, @MyTypedParameter);
PRINT @MyField;

GO

-- Using the COALESCE with a typed parameter to create an optional "column" update.
PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'NO UPDATES';
EXEC spProduct_Update 'PART1', null, null;

PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'UPDATE DESCRIPTION ONLY';
EXEC spProduct_Update 'PART1', 'PART 1 *** UPDATE DESCRIPTION ***', null;

PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'UPDATE COMMENTS ONLY';
EXEC spProduct_Update 'PART1', null, 'PART 1 *** UPDATE COMMENT ***';

PRINT '';
PRINT '---------------------------------------------------------- '
PRINT 'UPDATE DESCRIPTION & COMMENTS';
EXEC spProduct_Update 'PART1', '*** UPDATE BOTH ***', '*** UPDATE BOTH ***';

-- DELETE DEMO PROCEDURE
DROP PROCEDURE [dbo].[spProduct_Update];

GO

-- DELETE DEMO TABLE
DROP TABLE [dbo].[Product];

GO

In a nutshell, we can only use COALESCE with typed nullable values… this means COALESCE(null, null, null) will fail because null is not a defined type, but COALESCE(@nullvalue, @nullvalue, @nullvalue) will work since we had to declare a type for @nullvalue (e.g. DECLARE @nullvalue AS nvarchar(20)).

INSERT SAMPLE DATA
----------------------------------------------------------
(1 row(s) affected)

FAIL = COALESCE(null,null)
Msg 4127, Level 16, State 1, Line 3
At least one of the arguments to COALESCE must be a typed NULL.

PASS = COALESCE(null,<typed null parameter>)

----------------------------------------------------------
NO UPDATES

(1 row(s) affected)
BEFORE: PART1  |  PART 1 Description    |  PART 1 Comment
 AFTER: PART1  |  PART 1 Description    |  PART 1 Comment      

----------------------------------------------------------
UPDATE DESCRIPTION ONLY

(1 row(s) affected)
BEFORE: PART1  |  PART 1 Description    |  PART 1 Comment
 AFTER: PART1  |  PART 1 *** UPDATE DE  |  PART 1 Comment      

----------------------------------------------------------
UPDATE COMMENTS ONLY

(1 row(s) affected)
BEFORE: PART1  |  PART 1 *** UPDATE DE  |  PART 1 Comment
 AFTER: PART1  |  PART 1 *** UPDATE DE  |  PART 1 *** UPDATE CO

----------------------------------------------------------
UPDATE DESCRIPTION & COMMENTS

(1 row(s) affected)
BEFORE: PART1  |  PART 1 *** UPDATE DE  |  PART 1 *** UPDATE CO
 AFTER: PART1  |  *** UPDATE BOTH ***   |  *** UPDATE BOTH ***

Fishbowl Inventory C# .NET Integration

We’re using a program at work called Fishbowl Inventory for managing our sample inventory. The program is a full blown ISRP (Inventory, Shipping, Receiving, and Packing) solution that does everything from sales order entry to warehouse logistics. It’s been pretty rock-solid (though I do regular full backups/restores and database re-indexes). As of today, we are still using version 4.7 (very old) because the product road map changed after version 5.0 (I’m not sure who drove their road map, but I feel they went the wrong way on a bunch of changes, and every change was followed by a huge bug list). In a nutshell, Fishbowl is a Java application that uses a Firebird database to store its data. Upon the release of version 4.x, Fishbowl released a new “integration” feature that allowed developers to write applications that could communicate with Fishbowl via XML. This was a big “improvement”, since previously I had written a few applications directly against the DB to generate labels (another missing feature) with a Dymo thermal printer, and I ran into lots of problems because Fishbowl does not like DB SCHEMA changes (even very simple benign changes gave me and support many issues).

Fast forward 4 years and here I am: I need to get some data out of Fishbowl, but I find the ODBC drivers very unstable and I get errors > 50% of the time when I try to pull data out. I also tried their C# SDK and received a bunch of random errors (probably because it’s for version 2011.7 vs. 4.7). Since I needed the data, I did a quick google search and found somebody made a PHP wrapper called fishconnect, based on the SDK for version 2010.4. It says it requires Fishbowl Inventory 2010.4+ to work, but after reviewing the code (I don’t have a PHP server, so I didn’t test anything), I’m convinced it will probably work for all previous versions back to v4.7. After a few minutes reading the code, I figured I could make myself a nice C# wrapper to use on our ASP.NET intranet site.

Here is a quick summary of what I used to put this together…

1. Have XSD file, now generate me some objects!

I grabbed the “fbimsg.xsd” from fishbowlconnect PHP project, which is also provided with the official Fishbowl Java SDK.

Microsoft has had the “XSD.EXE” tool for a long time; it will generate your objects based on an XSD file. The command was moved into the Visual Studio Command Prompt with VS2010. To use the XSD tool, go to your Microsoft Visual Studio 2010 folder and launch “Visual Studio Command Prompt (2010)”. Once you’re at a command prompt, type the following.

xsd <location of XSD>.xsd /classes

Presto, you got a nice DAL that is strongly typed!!!
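
As a quick sanity check that the generated DAL works, the types serialize straight to XML with the stock XmlSerializer. A minimal sketch (PartTagQueryRqType is one of the classes xsd.exe emits from fbimsg.xsd; the session wrapper still has to frame the message before it goes on the wire):

using System.IO;
using System.Xml.Serialization;

// Serialize a generated request type to the XML Fishbowl expects.
public static string SerializeRequest(PartTagQueryRqType request)
{
	XmlSerializer serializer = new XmlSerializer(typeof(PartTagQueryRqType));

	using (StringWriter writer = new StringWriter())
	{
		serializer.Serialize(writer, request);
		return writer.ToString();
	}
}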

2. Communications with Fishbowl Requires a Big Endian Stream

Why MS only supports Little Endian streams out of the box is a mystery to me, maybe a push to use the MS stack? I don’t know, but regardless, you have to solve this problem to talk to Fishbowl. I did a quick and dirty sample to get it working, but it was a terrible “permanent” solution and required “Allow unsafe code” to be enabled. I ran to StackOverflow to find somebody smarter with better code, and ended up grabbing what I needed from Jon Skeet’s MiscUtil Library. With this, I now had full support for big-endian data streams.
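
For reference, here is roughly how the MiscUtil types get used. The 4-byte big-endian length prefix is my understanding of the framing, so treat that detail as an assumption:

using System.IO;
using System.Text;
using MiscUtil.Conversion;
using MiscUtil.IO;

// Write an XML message to a stream with a big-endian length prefix.
// The length-prefix framing is an assumption; the endianness handling
// comes from Jon Skeet's MiscUtil library.
public static void WriteMessage(Stream stream, string xml)
{
	byte[] payload = Encoding.UTF8.GetBytes(xml);

	EndianBinaryWriter writer = new EndianBinaryWriter(EndianBitConverter.Big, stream);
	writer.Write(payload.Length); // 4-byte length, big-endian
	writer.Write(payload);        // message body
	writer.Flush();
}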

3. Approving Fishbowl Integrated Applications

Fishbowl Inventory requires you to approve all 3rd party applications in the Fishbowl Server GUI before they become active. If you are running Fishbowl as a service, you’ll have to stop it and then run it interactively. Once started, go to “Integrated Applications” and you should see Fishbowl Connect C# listed; select it and click the green check mark to approve.

Fishbowl Inventory Integration - Approve
Fishbowl Server Integrated Applications – Approval Screen

4. Putting it all together and a working example.

The final product consists of a single class called “FishbowlSession” that implements IDisposable. This class handles all the communications with Fishbowl. In addition to this class, there is a DAL created from the XSD file and a Utilities library that provides a few helpers. I wrote the main class by writing NUnit tests; if you want to test against your Fishbowl server, change the values in the [Setup] test to reference valid data for your server (e.g. Server IP, Login, Password, Part Numbers).
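
A minimal sketch of what one of those fixtures can look like, using the FishbowlSession surface from the example further below (the fixture shape and values are illustrative):

using NUnit.Framework;

[TestFixture]
public class FishbowlSessionTests
{
	private string _server;
	private string _user;
	private string _password;

	[SetUp]
	public void Setup()
	{
		// Change these to valid values for your Fishbowl server
		_server = "192.168.168.168";
		_user = "app";
		_password = "app";
	}

	[Test]
	public void CanAuthenticate()
	{
		using (FishbowlSession session = new FishbowlSession(_server))
		{
			session.Connect(_user, _password);
			Assert.IsTrue(session.IsAuthenticated);
		}
	}
}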

If you find bugs, please let me know; I only tested sending/receiving 4 message types, so I’m not sure all message types will work.

Example showing how to get the inventory for a part in Fishbowl.

private static string GetFishbowlPartInventory(string part)
{
	string inventoryResults = "Fishbowl Inventory Offline!";

	using (FishbowlSession fishbowlInventory = new FishbowlSession("192.168.168.168"))
	{
		// Connect to fishbowl using a valid login/password
		fishbowlInventory.Connect("app","app");

		// Make sure we are authenticated before we try to send/receive messages
		if (fishbowlInventory.IsAuthenticated)
		{
			// Build PartTagQueryRqType Request
			PartTagQueryRqType partTagQueryRqType = new PartTagQueryRqType { LocationGroup = "Product", PartNum = part };

			// Submit Request and get Response
			PartTagQueryRsType partTagQueryRsType = fishbowlInventory.IssueRequest<PartTagQueryRsType>(partTagQueryRqType);

			// Status code other than "1000" (success) means an error, show a custom message
			if (partTagQueryRsType.statusCode != "1000")
			{
				inventoryResults = Utilities.StatusCodeMessage(partTagQueryRsType.statusCode);
			}

			// Tag object is optional, if there is no tag element you cannot get the quantity
			if (partTagQueryRsType.Tag != null && partTagQueryRsType.Tag.Length > 0)
			{
				Tag tag = partTagQueryRsType.Tag[0];

				inventoryResults = int.Parse(tag.Quantity) > 0
									   ? tag.Quantity + " Available in Fishbowl Inventory."
									   : "No Inventory Available in Fishbowl Inventory.";
			}
		}
	}

	return inventoryResults;
}

IMPORTANT NOTE: There is an important caveat to this solution… communication with the server will consume one of the seats available under your license key. This means if your key only allows 5 concurrent users and your app tries to connect when all the seats are full… it will fail! I have put a lot of other “things” in place to prevent this in my setup (short session timeouts, terminal server user caps, etc…), but it’s still possible…

Download Fishbowl Inventory C# Wrapper

jQuery Validation – Show & Focus on first error only!

Have you ever wanted to show one error message at a time, or create a different type of visual cue for a specific invalid element on a form? Using the validate options, you can easily create any effect you want.

The example below is designed to show the first error message in a form and to set focus on that first element. The message is displayed in a basic JavaScript alert box; not very elegant, but it makes for a simple demo.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Show JS Alert - First Validation Error</title>
    <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
    <script src="Scripts/jquery.validate.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready(function() {
            $("#elForm").validate({
                onfocusout: false,
                invalidHandler: function(form, validator) {
                    var errors = validator.numberOfInvalids();
                    if (errors) {
                        alert(validator.errorList[0].message);
                        validator.errorList[0].element.focus();
                    }
                },
                rules: {
                    TextBox1: { required: true, minlength: 2 },
                    TextBox2: { required: true, minlength: 2 },
                    TextBox3: { required: true, minlength: 2 }
                },
                messages: {
                    TextBox1: { required: "TextBox1 Required", minlength: "TextBox1 MinLength" },
                    TextBox2: { required: "TextBox2 Required", minlength: "TextBox2 MinLength" },
                    TextBox3: { required: "TextBox3 Required", minlength: "TextBox3 MinLength" }
                },
                errorPlacement: function(error, element) {
                    // Override error placement to not show error messages beside elements //
                }
            });
        });
    </script>

    <style type="text/css">
        label { width: 90px; display: block; float: left; }
        ul { list-style: none; }
        ul li { line-height: 1.8; }
    </style>
</head>
<body>
    <form id="elForm" action="#">
    <div>
        <h1>jQuery Validation - Show First Error Only</h1>
        <ul>
            <li><label>Text Box 1</label><input type="text" name="TextBox1" id="TextBox1" value="" /></li>
            <li><label>Text Box 2</label><input type="text" name="TextBox2" id="TextBox2" value="" /></li>
            <li><label>Text Box 3</label><input type="text" name="TextBox3" id="TextBox3" value="" /></li>
        </ul>
        <input type="submit" id="submit" value="Validate" />
    </div>
    </form>
</body>
</html>

Here are the important parts to focus on.

1. invalidHandler, this option gives you access to the current validator and all the error messages/items via the “validator.errorList”.

invalidHandler: function(form, validator) {
	var errors = validator.numberOfInvalids();
	if (errors) {
		alert(validator.errorList[0].message);  //Only show first invalid rule message!!!
		validator.errorList[0].element.focus(); //Set Focus
	}
}

2. errorPlacement, this function controls how error messages are displayed. Since we don’t want any error messages displayed by default, we override the default errorPlacement with an empty method.

errorPlacement: function(error, element) {
	// Override error placement to not show error messages beside elements //
}

The end result is an alert box showing the first broken rule for the first invalid element on the page. In the example, each input has 2 rules (required and minLength); if you enter a single character into the first text box, the message will change from “TextBox1 Required” to “TextBox1 MinLength”.

Download JSON to CSV using JavaScript

Here is a simple way to convert your JSON to CSV. If you’re using FF or Chrome, which support data URIs, this function converts your data and tells your browser to download the results as a text file. If you are using Internet Explorer (“IE”), a new window will pop up so you can copy & paste, or use File -> Save As to download the results as text. The JSON parsing is done with json.org’s open source JSON parser, which you can download here.

Why can’t IE download the file to CSV like Firefox? The long answer is due to IE’s data URI support: it was missing until IE8, and when implemented in IE8+, the security restrictions prevent solutions like this from working! To force a download of your JSON data in CSV via IE, you’ll need to use an alternative technique like server side parsing.

function DownloadJSON2CSV(objArray)
{
	var array = typeof objArray != 'object' ? JSON.parse(objArray) : objArray;
	var str = '';

	for (var i = 0; i < array.length; i++) {
		var line = '';
		for (var index in array[i]) {
			if(line != '') line += ','

			line += array[i][index];
		}

		str += line + '\r\n';
	}

	if (navigator.appName != 'Microsoft Internet Explorer')
	{
		window.open('data:text/csv;charset=utf-8,' + escape(str));
	}
	else
	{
		var popup = window.open('','csv','');
		popup.document.body.innerHTML = '<pre>' + str + '</pre>';
	}
}

WordPress 3.0 – A CMS Solution?

I started working on a project site that is going to use WordPress as the CMS engine. I wasn’t sure if this was going to work, but after a little time I found it was pretty easy to do. You start by deleting all the default content and disabling the default plug-ins and management screen options. Once everything is removed/disabled, you create pages (e.g. Home, About, News, etc…) that will display all the content you need on the website. Once you have your main page set up (e.g. Home), you need to change the default home page from a dynamic page of blog posts to a static page. You change the default home page under Settings –> Reading –> Front Page Display –> A Static Page –> choose an option from the Front Page drop down list. That’s it, you now have WordPress running as a basic CMS solution.

I wish I could say that’s it, but most likely you’ll need to download and install some additional plug-ins. I found many plug-ins still do not support WordPress 3.0, so you’ll need to be careful! If you happen to install a bad plug-in, there is an easy fix to get your site back online… Delete the plug-in folder (via FTP or Hosting Control Panel) and WordPress will automatically disable the plug-in.

Here is the list of plug-ins I added for my project:

  1. Contact Form 7
    This is a great plug-in, it provides an easy way to create forms that get emailed to you. You create the forms using HTML online and a shortcode style syntax for the input fields. The shortcode inputs are then accessible in your email, which you also create using HTML.
  2. WP Google Maps
    This is a very basic Google Maps plug-in; there are about 10 options, and the only thing extra you need is a Google Maps API key.
  3. WordPress.com Stats
    I’ve been using this on my blog and love the UI. I wish this was all being done client-side, but for some reason it was created as a service. By default it injects a smiley face into your page, so make sure you disable this in your theme’s style sheet.

The only other thing I added was a SSL plug-in that I created based on 3 other plug-ins. I created my own so I could force WP to load specific pages as HTTPS and then default back to HTTP if a page was not explicitly set to support SSL. I found most of the existing plug-ins will leave the user browsing in SSL after they visit a SSL page; I didn’t want this, and my plug-in solves the problem.

<?php
/*
Plugin Name: WPSSL Force SSL
Plugin URI:
Description: This plug-in is based on WPSSL, forcessl and various other posts/comments I found while searching for a solution.  The plug-in is used by adding the meta tag "force_ssl" with any value to any pages where you want SSL ("HTTPS") enabled.  If the page does NOT have this set and you're not looking at an admin page (you can enable SSL for admin/login pages in wpconfig.php), then display as HTTP.  This prevents links on the SSL page from displaying as SSL for non-secure pages.  This plug-in was tested on WordPress 3.0.
Author: Zachary Hunter (based on Austin Web Development)
Version: 1.0
Author URI: 

*/

function wpssl_forcessl()
{
	global $post;

    $post_id = $post;

    if (is_object($post_id))
    {
		$post_id = $post_id->ID;
	}

    $force_ssl  = get_post_meta($post_id, 'force_ssl');

    if(!empty($force_ssl))
    {
    	if(!stristr($_SERVER['REQUEST_URI'], 'wp-admin')) {
			if($_SERVER["HTTPS"] != "on") {
				$newurl = "https://" . $_SERVER["SERVER_NAME"] . $_SERVER["REQUEST_URI"];
            	header("Location: $newurl");
            	exit();
        	}
     	}
    } else {
		if(!stristr($_SERVER['REQUEST_URI'], 'wp-admin')) {
			if($_SERVER["HTTPS"] == "on") {
				$newurl = "http://" . $_SERVER["SERVER_NAME"] . $_SERVER["REQUEST_URI"];
				header("Location: $newurl");
				exit();
			}
		}
	}
}

add_action('wp', 'wpssl_forcessl');
?>

To use the plug-in follow these steps.

1. Copy the code above into a file in your plugin folder (e.g. /wp-content/plugins/wpssl/wpssl.php).
2. Go to Plugins -> Plugins and enable “WPSSL Force SSL”.
3. Go to the page you want to force SSL on and open the “Custom Fields” section.
4. Click “Add Custom Field”, and use the settings below (name = force_ssl, value = true).
** DONE **

Improved NPOI ExportToExcel Function

I was looking over my generic export DataTable to Excel function the other day and noticed an issue. My default method was throwing everything into a string format, which was preventing users from highlighting number columns for subtotals. To fix the problem they could use “Text to Columns”, but this was becoming a common complaint I wanted to resolve. I came up with an improved “ExportToExcel” function that inserts the correct data type.

/// <summary>
///   Render DataTable to Excel File
/// </summary>
/// <param name = "sourceTable">Source DataTable</param>
/// <param name = "fileName">Destination File Name</param>
public static void ExportToExcel(DataTable sourceTable, string fileName)
{
	HSSFWorkbook workbook = new HSSFWorkbook();
	Sheet sheet = workbook.CreateSheet("Sheet1");
	Row headerRow = sheet.CreateRow(0);

	// Create Header Style
	CellStyle headerCellStyle = workbook.CreateCellStyle();
	headerCellStyle.FillForegroundColor = HSSFColor.GREY_25_PERCENT.index;
	headerCellStyle.FillPattern = FillPatternType.SOLID_FOREGROUND;

	// Create Date Style
	CellStyle dateCellStyle = workbook.CreateCellStyle();
	dateCellStyle.DataFormat = HSSFDataFormat.GetBuiltinFormat("m/d/yy");

	// Build Header
	foreach (DataColumn column in sourceTable.Columns)
	{
		Cell headerCell = headerRow.CreateCell(column.Ordinal);
		headerCell.SetCellValue(column.ColumnName);
		headerCell.CellStyle = headerCellStyle;
	}

	// Build Details (rows)
	int rowIndex = 1;
	int sheetIndex = 1;
	const int maxRows = 65536;

	foreach (DataRow row in sourceTable.Rows)
	{
		// Start a new sheet when the max row count is reached
		if (rowIndex % maxRows == 0)
		{
			// Auto size columns on current sheet
			for (int h = 0; h < headerRow.LastCellNum; h++)
			{
				sheet.AutoSizeColumn(h);
			}

			sheetIndex++;
			sheet = workbook.CreateSheet("Sheet" + sheetIndex);
			Row additionalHeaderRow = sheet.CreateRow(0);

			for (int h = 0; h < headerRow.LastCellNum; h++)
			{
				Cell additionalHeaderColumn = additionalHeaderRow.CreateCell(h);
				additionalHeaderColumn.CellStyle = headerRow.GetCell(h).CellStyle;
				additionalHeaderColumn.SetCellValue(headerRow.GetCell(h).RichStringCellValue);
			}

			rowIndex = 1;
		}

		// Create new row in sheet
		Row dataRow = sheet.CreateRow(rowIndex);

		foreach (DataColumn column in sourceTable.Columns)
		{
			Cell dataCell = dataRow.CreateCell(column.Ordinal);

			switch (column.DataType.FullName)
			{
				case "System.String":
					dataCell.SetCellValue(row[column].ToString());
					break;
				case "System.Int":
				case "System.Int32":
				case "System.Int64":
				case "System.Double":
				case "System.Decimal":
					double val;
					dataCell.SetCellValue(Double.TryParse(row[column].ToString(), out val) ? val : 0);
					break;
				case "System.DateTime":
					DateTime dt;
					if (!DateTime.TryParse(row[column].ToString(), out dt))
						dt = new DateTime(1900, 01, 01); // fallback when the value can't be parsed

					dataCell.SetCellValue(dt);
					dataCell.CellStyle = dateCellStyle;
					break;
				default:
					dataCell.SetCellValue(row[column].ToString());
					break;
			}
		}

		rowIndex++;
	}

	for (int h = 0; h < headerRow.LastCellNum; h++)
	{
		sheet.AutoSizeColumn(h);
	}

	ExportToExcel(workbook, fileName);
}

The key part of the function above is the “switch (column.DataType.FullName)” code block. This grabs the DataTable column’s data type to use in the SetCellValue() call.

Once the workbook is built, we call an overload of the same function that expects a NPOI workbook. This overload sends an XLS file back to the user via their web browser using the HttpResponse stream.

/// <summary>
/// Render Excel File to HttpResponse (Browser)
/// </summary>
/// <param name="workbook">NPOI Workbook</param>
/// <param name="fileName">Destination File Name</param>
public static void ExportToExcel(HSSFWorkbook workbook, string fileName)
{
	using (MemoryStream memoryStream = new MemoryStream())
	{
		workbook.Write(memoryStream);
		memoryStream.Flush();

		try
		{
			HttpResponse response = HttpContext.Current.Response;
			response.ClearContent();
			response.ClearHeaders();
			response.Buffer = true;
			response.ContentType = "application/vnd.ms-excel";
			response.AddHeader("Content-Length", memoryStream.Length.ToString());
			response.AddHeader("Content-Disposition", string.Format("attachment;filename={0}", fileName));
			response.BinaryWrite(memoryStream.ToArray()); // ToArray avoids sending the buffer's unused padding
			response.Flush();
			response.End();
		}
		catch
		{
			// Do nothing, error expected due to Flush();
		}
	}
}

I’ve used the above function to send files with 3-4 sheets full of data back to the browser… but the file size is usually “VERY LARGE”. To work around this, I have a helper function which adds the excel file to a zip (in memory) before sending it back to the client, as sketched below.

In my last test, a 17.5MB document with 2 sheets of data was compressed to 3.5MB with my zip utility. There was still a ~13 second delay to generate the XLS file, but once prompted to download from the browser, the file download completed in a few seconds.
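
The zip utility itself isn’t included in this post; below is a minimal sketch of the idea using DotNetZip (the library choice here is an assumption, any in-memory zip library works the same way):

using System.IO;
using Ionic.Zip;

public static class ZipHelper
{
	// Compress the generated XLS (already in memoryStream) into a new
	// in-memory zip before it is written to the HttpResponse.
	public static MemoryStream ZipExcelFile(MemoryStream excelStream, string fileName)
	{
		MemoryStream zipStream = new MemoryStream();

		using (ZipFile zip = new ZipFile())
		{
			excelStream.Position = 0;
			zip.AddEntry(fileName, excelStream); // e.g. "Report.xls"
			zip.Save(zipStream);
		}

		zipStream.Position = 0;
		return zipStream;
	}
}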

jQuery Validation with ASP.NET

Over the past few weeks I’ve been working to migrate away from the Ajax Control Toolkit & various other ASP.NET built-in controls. My goal has been to get every page of our intranet portal down around 100K, with a maximum load time of < 1 second. I’ve made a lot of progress, but after cleaning up all the basic stuff (excess view state, duplicate references, compressed JS/CSS, sprites, etc…), I had to start looking for additional items…

Using Fiddler and FireBug, I started looking for resources that were increasing my download sizes, and found that the ASP.NET validation controls were generating ~10K each time they were used in a module. Since my portal consists of injecting multiple user controls into a page, this number would vary page-by-page based on the number of modules using the library. Since I was using jQuery for all my scripting, I decided to implement the jQuery Validation Plugin as a replacement for the built-in ASP.NET validation controls.

The idea was simple enough, but since I’m living in the ASP.NET Web Forms world (aka: one form per page), this was a bit more difficult to implement.

Issue #1 – jQuery Validation by FIELDSET

As noted, the default setup of the validation plug-in is to validate a FORM, which isn’t possible with ASP.NET Web Forms. To get around this, I decided I was going to wrap all my forms in a FIELDSET with the class “validationGroup”. Using this class as my selector, I’m able to have multiple groups of controls wrapped in a FIELDSET validated separately.

To get started, you need to create a custom validation function that will recursively move up the elements/groups to find a FIELDSET with the class “validationGroup”, and then check all the input elements against the rules assigned.

Here is my custom validation function:

function ValidateAndSubmit(evt) {
    var isValid = true;

    // Get Validator & Settings
    var validator = $("#aspnetForm").validate();
    var settings = validator.settings;

    // Find the parent control that contains the elements to be validated
    var $group = $(evt.currentTarget).parents('.validationGroup');

    // Grab all the input elements (minus items listed below)
    $group
        .find(":input")
        .not(":submit, :reset, :image, [disabled]")
        .not(settings.ignore)
        .each(function (i, item) {
            // Don't validate items without rules
            if (!validator.objectLength($(item).rules()))
                return true;

            if (!$(item).valid())
                isValid = false;
        });

    // If any control in the group fails, prevent default actions (aka: Submit)
    if (!isValid)
        evt.preventDefault();
}

Next, I need to trigger the validation for the current FIELDSET I’m working in, based on some action… like the click of a submit button. Since the default setting is to validate on submit, you need to disable this and register your ValidateAndSubmit function call when your event is triggered.

// Disable default/auto binding of all buttons
$("#aspnetForm").validate({
	onsubmit: false,
	ignore: ':hidden',
	focusInvalid: true
});

// Bind the ASP.NET button with the ID "Search" to the "ValidateAndSubmit" custom validation function.
$('#<%= Search.ClientID %>').click(ValidateAndSubmit);

That’s it, with those three little steps (Custom Validate Method, Disable Default/Auto Bind, Manual bind button to custom event), you now have a working validate-by-FIELDSET (or any other grouping control) solution.

Issue #2 – How to validate a group of ASP.NET Check Boxes

By default, ASP.NET will render a CheckBox or a CheckBoxList with an auto generated unique name attribute. This becomes a problem when you want to validate the group, since you can’t set a common name attribute for all the elements.

Here is an example of the HTML that would be rendered by a CheckBoxList with a layout of “Flow”.

<span id="CheckBoxGroup">
  <input id="cbk1" name="cbk1" type="checkbox">Check Box 1</input>
  <input id="cbk2" name="cbk2" type="checkbox">Check Box 2</input>
  <input id="cbk3" name="cbk3" type="checkbox">Check Box 3</input>
</span>
<span class="jQueryValError"></span>

My solution to the problem required a custom validator and a custom errorPlacement function for the jQuery validation plug-in.

First, I added a custom validator that looks at a group of check boxes inside a wrapping “Parent()” element. I then count the number of checked check boxes to make sure it meets the minimum we set (e.g. must check at least 3 check boxes).

jQuery.validator.addMethod("minChecked",
 function(value, element, param) {
	var $p = $(element).parent();
	var selected = $p.children('input[type=checkbox]:checked').length;

	if (selected >= param) {
		$p.children().removeClass('error');
		$p.siblings('.error').remove();
		return true;
	}

	$p.children('input[type=checkbox]').addClass('error');

	return false;
}, jQuery.validator.format("Please check at least {0} items."));

In order to hook up this custom rule to all the controls inside of a wrapping “Parent()” control, we use jQuery’s $().each function to call the validator’s rules() function.

$(".checkBoxGroup").children('input[type=checkbox]').each(function(){ $(this).rules("add", { minChecked: 3}); });

Second, the plug-in will by default put an error message next to the control(s) that triggered the error. To override the default behavior, we set up an “errorPlacement” function to put one error beside the wrapping “Parent()” control.

$("#<%= form1.ClientID %>").validate({
	errorPlacement: function(error, element){
		if(element.rules().minChecked > 0) {
			var $p = $(element).parent();
			if($p.siblings().hasClass("error") == false) {
				error.insertAfter($p);
			}
		}
		else {
			error.insertAfter(element);
		}
	}
});

Here is what the form looks like when you have an error with your check boxes.

jQuery Validation of ASP.NET CheckBoxes

Here is the source code used to create the example above.

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Contact Form Demo</title>

    <script src="styles/jQuery.js" type="text/javascript"></script>

    <script src="styles/jQuery.Validate.js" type="text/javascript"></script>

    <script src="styles/jQuery.Validate.AddOns.js" type="text/javascript"></script>

    <script type="text/javascript">

    jQuery.validator.addMethod("minChecked",
     function(value, element, param) {
        var $p = $(element).parent();
        var selected = $p.children('input[type=checkbox]:checked').length;

        if (selected >= param) {
            $p.children().removeClass('error');
            $p.siblings('.error').remove();
            return true;
            }

        $p.children('input[type=checkbox]').addClass('error');

        return false;}, jQuery.validator.format("Please check at least {0} items.")); 

        $(document).ready(function(){
            $("#<%= form1.ClientID %>").validate({
                rules: {
                    <%= FirstName.UniqueID %>: { required: true },
                    <%= LastName.UniqueID %>: { required: true },
                    <%= Email.UniqueID %>: { required: true, email: true },
                    <%= Phone.UniqueID %>: { required: true, phoneUS: true }
                },
                errorPlacement: function(error, element){
                    if(element.rules().minChecked > 0) {
                        var $p = $(element).parent();
                        if($p.siblings().hasClass("error") == false) {
                            error.insertAfter($p);
                        }
                    }
                    else {
                        error.insertAfter(element);
                    }
                }
            });

            $(".checkBoxGroup").children('input[type=checkbox]').each(function(){ $(this).rules("add", { minChecked: 3}); });

            $(".myGroupRandom").children('input[type=checkbox]').each(function(){ $(this).rules("add", { minChecked: 1}); });
        });
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <fieldset>
            <ol>
                <li>
                    <label class="left">
                        First Name
                    </label>
                    <input type="text" id="FirstName" runat="server" /></li>
                <li>
                    <label class="left">
                        Last Name
                    </label>
                    <input type="text" id="LastName" runat="server" /></li>
                <li>
                    <label class="left">
                        Email
                    </label>
                    <input type="text" id="Email" runat="server" /></li>
                <li>
                    <label class="left">
                        Phone
                    </label>
                    <input type="text" id="Phone" runat="server" /></li>
                <li>
                    <label class="left">
                        Contact Method
                    </label>
                    <span class="checkBoxGroup">
                        <input type="checkbox" id="ReqEmail" runat="server" /><label>Email</label>
                        <input type="checkbox" id="ReqMail" runat="server" /><label>Mail</label>
                        <input type="checkbox" id="ReqPhone" runat="server" /><label>Phone</label>
                        <input type="checkbox" id="ReqNoContact" runat="server" /><label>No Contact</label>
                    </span></li>
                <li>
                    <label class="left">
Newsletter Type
                    </label>
                    <span class="myGroupRandom" >
                        <input type="checkbox" id="Checkbox1" runat="server" /><label>Company News</label>
                        <input type="checkbox" id="Checkbox2" runat="server" /><label>Press Releases</label>
                        <input type="checkbox" id="Checkbox3" runat="server" /><label>Deals</label>
                        <input type="checkbox" id="Checkbox4" runat="server" /><label>Employement</label>
                    </span></li>
                <li>
                    <input type="submit" id="Submit" value="Submit" /></li>
            </ol>
        </fieldset>
    </div>
    </form>
</body>
</html>

NPOI – Set Cell Helper

I saw a discussion posting on the NPOI discussion forum on CodePlex today, asking if there was a function like SetCell(X,Y,Value) in NPOI. Unfortunately there isn’t… at least I was never able to find one, so I created my own. Since I only needed to set values, I added three basic overloads to my helper class to make them available to everybody using NPOI in my project. I was really tempted to add these to the HSSFWorksheet, but to keep my code safe I figured a helper class for all my extras would be enough. Here is my version of a SetCellValue() helper set of functions.

private static void SetCellValue(HSSFSheet worksheet, int columnPosition, int rowPosition, DateTime value)
{
	// Get or create row
	HSSFRow row = worksheet.GetRow(rowPosition) ?? worksheet.CreateRow(rowPosition);

	// Get or create cell
	HSSFCell cell = row.GetCell(columnPosition) ?? row.CreateCell(columnPosition);

	cell.SetCellValue(value);
	// 14 is the built-in Excel "m/d/yy" date format
	cell.CellStyle.DataFormat = 14;
}

private static void SetCellValue(HSSFSheet worksheet, int columnPosition, int rowPosition, double value)
{
	// Get or create row
	HSSFRow row = worksheet.GetRow(rowPosition) ?? worksheet.CreateRow(rowPosition);

	// Get or create cell
	HSSFCell cell = row.GetCell(columnPosition) ?? row.CreateCell(columnPosition);

	cell.SetCellValue(value);
}

private static void SetCellValue(HSSFSheet worksheet, int columnPosition, int rowPosition, string value)
{
	// Get or create row
	HSSFRow row = worksheet.GetRow(rowPosition) ?? worksheet.CreateRow(rowPosition);

	// Get or create cell
	HSSFCell cell = row.GetCell(columnPosition) ?? row.CreateCell(columnPosition);

	cell.SetCellValue(value);
}

// Set Date
SetCellValue(sheet, 9, 3, DateTime.Now);
// Set Number
SetCellValue(sheet, 9, 4, 100.01);
// Set Text
SetCellValue(sheet, 9, 5, "Zach Roxs!");

As you can see it’s pretty easy to create a SetCellValue() helper. I plan to create another version of these that uses Excel coordinates (e.g. A5, Z10, etc…), so my die hard Excel teammates can use their native Excel mapping syntax!
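
If you want a head start on that coordinate-based version, here is a rough sketch of how the overload might look (the parsing logic below is hypothetical, only handles simple references like “A5” or “Z10”, and requires System.Text.RegularExpressions):

using System.Text.RegularExpressions;

private static void SetCellValue(HSSFSheet worksheet, string cellReference, string value)
{
	// Split a reference like "Z10" into column letters and a 1-based row number
	Match match = Regex.Match(cellReference.ToUpper(), @"^([A-Z]+)(\d+)$");
	string columnLetters = match.Groups[1].Value;
	int rowPosition = int.Parse(match.Groups[2].Value) - 1;

	// Convert the column letters to a 0-based index (A=0, Z=25, AA=26, ...)
	int columnPosition = 0;
	foreach (char letter in columnLetters)
		columnPosition = (columnPosition * 26) + (letter - 'A' + 1);
	columnPosition--;

	// Hand off to the existing position-based helper
	SetCellValue(worksheet, columnPosition, rowPosition, value);
}

// Same as SetCellValue(sheet, 25, 9, "Zach Roxs!")
SetCellValue(sheet, "Z10", "Zach Roxs!");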

NPOI – Copy Row Helper

Another day and another little tidbit on using NPOI. I was doing a tool mock-up at work today when I ran across the need for a copy row function. After searching high and low (Google, NPOI and POI threads), I realized NPOI does not currently offer this capability, so I decided to create my own helper function. I’m sure there might be a few things I missed in my routine since the library is a bit new to me, but after testing this against a bunch of different scenarios I’m pretty confident this will work for 99% of my needs and maybe a high percentage of yours as well.

Here is the function in all its glory. I thought about modifying the NPOI source, but since I’m not sure where it’s going, I figured I’d just add this to my own little NPOI.CustomHelpers class that I can use with my NPOI project.

/// <summary>
/// HSSFRow Copy Command
///
/// Description:  Copies an existing row into a new row, automatically pushing down
///               any existing rows.  The copy is done cell by cell, and the command
///               tries to copy all available properties (style, merged cells, values, etc...)
/// </summary>
/// <param name="workbook">Workbook containing the worksheet that will be changed</param>
/// <param name="worksheet">WorkSheet containing rows to be copied</param>
/// <param name="sourceRowNum">Source Row Number</param>
/// <param name="destinationRowNum">Destination Row Number</param>
private void CopyRow(HSSFWorkbook workbook, HSSFSheet worksheet, int sourceRowNum, int destinationRowNum)
{
	// Get the source / new row
	HSSFRow newRow = worksheet.GetRow(destinationRowNum);
	HSSFRow sourceRow = worksheet.GetRow(sourceRowNum);

	// If the row exists in the destination, push down all rows by 1; else create a new row
	if (newRow != null)
	{
		worksheet.ShiftRows(destinationRowNum, worksheet.LastRowNum, 1);
	}
	else
	{
		newRow = worksheet.CreateRow(destinationRowNum);
	}

	// Loop through source columns to add to new row
	for (int i = 0; i < sourceRow.LastCellNum; i++)
	{
		// Grab a copy of the old cell
		HSSFCell oldCell = sourceRow.GetCell(i);

		// If the old cell is null, jump to the next cell
		if (oldCell == null)
		{
			continue;
		}

		// Create the new cell
		HSSFCell newCell = newRow.CreateCell(i);

		// Copy style from old cell and apply to new cell
		HSSFCellStyle newCellStyle = workbook.CreateCellStyle();
		newCellStyle.CloneStyleFrom(oldCell.CellStyle);
		newCell.CellStyle = newCellStyle;

		// If there is a cell comment, copy
		if (oldCell.CellComment != null) newCell.CellComment = oldCell.CellComment;

		// If there is a cell hyperlink, copy
		if (oldCell.Hyperlink != null) newCell.Hyperlink = oldCell.Hyperlink;

		// Set the cell data type
		newCell.SetCellType(oldCell.CellType);

		// Set the cell data value
		switch (oldCell.CellType)
		{
			case HSSFCellType.BLANK:
				newCell.SetCellValue(oldCell.StringCellValue);
				break;
			case HSSFCellType.BOOLEAN:
				newCell.SetCellValue(oldCell.BooleanCellValue);
				break;
			case HSSFCellType.ERROR:
				newCell.SetCellErrorValue(oldCell.ErrorCellValue);
				break;
			case HSSFCellType.FORMULA:
				newCell.SetCellFormula(oldCell.CellFormula);
				break;
			case HSSFCellType.NUMERIC:
				newCell.SetCellValue(oldCell.NumericCellValue);
				break;
			case HSSFCellType.STRING:
				newCell.SetCellValue(oldCell.RichStringCellValue);
				break;
			case HSSFCellType.Unknown:
				newCell.SetCellValue(oldCell.StringCellValue);
				break;
		}
	}

	// If there are are any merged regions in the source row, copy to new row
	for (int i = 0; i < worksheet.NumMergedRegions; i++)
	{
		CellRangeAddress cellRangeAddress = worksheet.GetMergedRegion(i);
		if (cellRangeAddress.FirstRow == sourceRow.RowNum)
		{
			CellRangeAddress newCellRangeAddress = new CellRangeAddress(newRow.RowNum,
																		(newRow.RowNum +
																		 (cellRangeAddress.LastRow -
																		  cellRangeAddress.FirstRow)),
																		cellRangeAddress.FirstColumn,
																		cellRangeAddress.LastColumn);
			worksheet.AddMergedRegion(newCellRangeAddress);
		}
	}

}

The code comments above should give you a good idea of what I’m doing; if something doesn’t make sense just ask. The key things I wanted to make sure got copied were: Cell Style, Cell Value, Cell Type, and Merged Cell Settings. In the end I noticed a few other things that I thought I might use in the future, so I included them as well. Here is an example of how to call CopyRow along with a snapshot of the end result.

// Grab my NPOI workbook memorystream
HSSFWorkbook workbook = new HSSFWorkbook(memoryStream);

// Grab my test worksheet
HSSFSheet sheet = workbook.GetSheet("Sheet1");

// Copy Excel Row 1 to Excel Row 3
CopyRow(workbook, sheet, 0, 2);

// Copy Excel Row 2 to Excel Row 4
CopyRow(workbook, sheet, 1, 3);
NPOI – Copy Row Helper Function Test Results

Using jQuery to prevent “Delete” mistakes

A few days ago I was reading a web design article that talked about the “Poka-Yoke” principle. It was such a funny phrase that I had to go to Wikipedia to figure out the definition. In a nutshell, the principle refers to the process of “idiot-proofing” human interaction with a process. The term is based on the Japanese words “yokeru” (avoid) and “poka” (mistakes), which is rather funny for me, since I work for a Japanese company that has clearly never heard of this principle!

This term came to mind today while helping another programmer develop a system to prevent accidental deletes in their UI. After discussing what needed to be done, it was clear that we needed to implement some client side logic to “prevent… guide…” the user into making the correct choices.

The UI was pretty straightforward: there are a few buttons that perform various actions on rows in a table. The user selects one or more rows from the table and then clicks the button for the action they want to perform. Sounds simple, right? Well, it is, and it’s also really easy to select the wrong row and/or the wrong button. Training is a big help in any process, but once you leave the users alone, the system will be the only thing enforcing the rules.

Here is an example of a table where rows will be selected for deletion. I used an “asp:Table” to simulate the markup that would be generated by an “asp:GridView”. In my past 5 years of working with ASP.NET, this was the first time I had ever used or seen this control.

<div class="mediumTable">
	<asp:Button ID="DeleteButton" runat="server" Text="Delete" />
	<asp:Table ID="Table1" runat="server">
		<asp:TableHeaderRow>
			<asp:TableHeaderCell>Chk</asp:TableHeaderCell>
			<asp:TableHeaderCell>Product</asp:TableHeaderCell>
			<asp:TableHeaderCell>Stock</asp:TableHeaderCell>
		</asp:TableHeaderRow>
		<asp:TableRow>
			<asp:TableCell>
				<asp:CheckBox ID="CheckBox1" runat="server" /></asp:TableCell>
			<asp:TableCell>Apples</asp:TableCell>
			<asp:TableCell>0</asp:TableCell>
		</asp:TableRow>
		<asp:TableRow>
			<asp:TableCell>
				<asp:CheckBox ID="CheckBox2" runat="server" /></asp:TableCell>
			<asp:TableCell>Oranges</asp:TableCell>
			<asp:TableCell>100</asp:TableCell>
		</asp:TableRow>
		<asp:TableRow>
			<asp:TableCell>
				<asp:CheckBox ID="CheckBox3" runat="server" /></asp:TableCell>
			<asp:TableCell>Pears</asp:TableCell>
			<asp:TableCell>150</asp:TableCell>
		</asp:TableRow>
		<asp:TableRow>
			<asp:TableCell>
				<asp:CheckBox ID="CheckBox4" runat="server" /></asp:TableCell>
			<asp:TableCell>Watermelon</asp:TableCell>
			<asp:TableCell>2</asp:TableCell>
		</asp:TableRow>
	</asp:Table>
</div>

Here is the delete “prevention” system we came up with. We had a few other ideas, but after doing a few demos of each solution we decided on the following code, which is discussed below.

$(document).ready(function() {
	$('#<%= DeleteButton.ClientID %>').click(function() {
		// Flag used to track if any selected item has stock
		var deletedProcessed = false;
		// Find all selected items
		var CheckedCheckBoxes = $('#<%= Table1.ClientID %>').find("input[type='checkbox']:checked");
		// If there are no selected items, exit
		if (CheckedCheckBoxes.length == 0) {
			alert("You must make a selection before you can delete an item.");
			return false;
		}
		// If there are selected items, loop
		CheckedCheckBoxes.each(function() {
			var isProcessed = $(this).parent().parent().find('td:last').text();
			// If the current item has stock, set the flag to true
			if (parseInt(jQuery.trim(isProcessed), 10) > 0) {
				deletedProcessed = true;
			}
		});
		// Check flag, if true stop ALL deletes and notify user
		if (deletedProcessed) {
			alert("You can't delete a product with stock!");
			return false;
		}
		else {
			// Do a final confirmation the user wants to delete the selection
			var answer = confirm('Are you sure you want to delete your selection?');
			return answer;
		}
	});
});

Since the selection boxes were used for a few different functions (process orders, hold orders and edit orders), we needed a way to add a little extra checking to just the delete button. Our goals with the logic above were the following.

  1. Disables “Delete” if nothing has been selected.
  2. Disables “Delete” if the user selected an item that should not be deleted (e.g. it has inventory).
  3. Prompts the user to confirm they really want to delete the item.

In addition to this logic, we also considered the following ideas…

  1. Alert the row # of any items that don’t meet the business rule
  2. Highlight the rows in “red” that don’t meet the business rule
  3. Physically “disable” the delete button if they select an item that doesn’t meet the business rules

Mistakes are human nature, and by double and triple checking what users are doing, we can save them and ourselves from many emails/phone calls/visits about how the system “deleted the data all by itself”.

Mistake proofing is such an easy idea to grasp and agree with that it’s surprising we don’t see more of it in the systems we use every day. I’m sure I’ve developed a few bad ones; hopefully the audience using them was small and the mistakes didn’t migrate into other things. Working for a Japanese company that has developed some of the worst internal tools I’ve ever seen (zero UI design effort), I’m sure to bring up the “Poka-Yoke” principle the next time I talk to one of the developers in Japan.

How to read in XLSX data for editing with NPOI

NPOI is a great solution for working with XLS documents, but what happens when you need to read in XLSX documents as part of your solution? The easiest solution I could think of was to use OLEDB to extract all the data (the data, the whole data and nothing but the data) from a worksheet into a new NPOI document.

If you need more support than just reading in data from Excel 2007+ (XLSX), you should look at using the ExcelPackage project that is hosted at CodePlex.

The benefit of my approach is that you’re able to use your existing NPOI solutions when data is provided in XLSX format. This is far from full Excel 2007+ support, but if your application’s only requirement is “reading in data” then your problem is solved. NPOI v1.6 is supposed to fully support the XLSX format; until then, this function will provide basic XLSX support.

Functions used to convert an XLSX document to an NPOI document
/// <summary>
/// Render a Excel 2007 (xlsx) Worksheet to NPOI Excel 2003 Worksheet, all excel formatting
/// from XLSX will be lost when converted.  NPOI roadmap shows v1.6 will support Excel 2007 (xlsx).
/// NPOI Roadmap  : http://npoi.codeplex.com/wikipage?title=NPOI%20Road%20Map&referringTitle=Documentation
/// NPOI Homepage : http://npoi.codeplex.com/
/// </summary>
/// <param name="excelFileStream">XLSX FileStream</param>
/// <param name="sheetName">Excel worksheet to convert</param>
/// <returns>MemoryStream containing NPOI Excel workbook</returns>
public static Stream ConvertXLSXWorksheetToXLSWorksheet(Stream excelFileStream, string sheetName)
{
	// Temp file name
	string tempFile = HttpContext.Current.Server.MapPath("~/uploads/" + HttpContext.Current.Session.LCID + ".tmp");

	// Temp data container (using DataTable to leverage existing RenderDataTableToExcel function)
	DataTable dt = new DataTable();

	try
	{
		// Create temp XLSX file
		FileStream fileStream = new FileStream(tempFile, FileMode.Create, FileAccess.Write);

		const int length = 256;
		Byte[] buffer = new Byte[length];
		int bytesRead = excelFileStream.Read(buffer, 0, length);

		while (bytesRead > 0)
		{
			fileStream.Write(buffer, 0, bytesRead);
			bytesRead = excelFileStream.Read(buffer, 0, length);
		}

		excelFileStream.Close();
		fileStream.Close();

		// Read temp XLSX file using OLEDB
		// Tested on Vista & Windows 2008 R2
		using (OleDbConnection con = new OleDbConnection(@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + tempFile + @";Extended Properties=Excel 12.0;"))
		{
			con.Open();
			string sql = String.Format("SELECT * FROM [{0}$]", sheetName);
			OleDbDataAdapter da = new OleDbDataAdapter(sql, con);

			da.Fill(dt);
		}
	}
	finally
	{
		// Make sure temp file is deleted
		File.Delete(tempFile);
	}

	// Return a new POI Excel 2003 Workbook
	return RenderDataTableToExcel(dt);
}

/// <summary>
/// Render DataTable to NPOI Excel 2003 MemoryStream
/// NOTE:  Limitation of 65,536 rows supported by XLS
/// </summary>
/// <param name="sourceTable">Source DataTable</param>
/// <returns>MemoryStream containing NPOI Excel workbook</returns>
public static Stream RenderDataTableToExcel(DataTable sourceTable)
{
	HSSFWorkbook workbook = new HSSFWorkbook();
	MemoryStream memoryStream = new MemoryStream();
	// By default NPOI creates "Sheet0" which is inconsistent with Excel using "Sheet1"
	HSSFSheet sheet = workbook.CreateSheet("Sheet1");
	HSSFRow headerRow = sheet.CreateRow(0);

	// Header Row
	foreach (DataColumn column in sourceTable.Columns)
		headerRow.CreateCell(column.Ordinal).SetCellValue(column.ColumnName);

	// Detail Rows
	int rowIndex = 1;

	foreach (DataRow row in sourceTable.Rows)
	{
		HSSFRow dataRow = sheet.CreateRow(rowIndex);

		foreach (DataColumn column in sourceTable.Columns)
		{
			dataRow.CreateCell(column.Ordinal).SetCellValue(row[column].ToString());
		}

		rowIndex++;
	}

	workbook.Write(memoryStream);
	memoryStream.Flush();
	memoryStream.Position = 0;

	return memoryStream;
}

NOTE: You could convert an entire workbook (multiple worksheets) into a new NPOI workbook if you looped over each worksheet in the XLSX document. Since all my tools are based on using a single worksheet, I’ll leave the full workbook conversion up to you, but a rough sketch follows below.
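
If you do want the whole workbook, here is a rough sketch of how you might enumerate the worksheet names with OLEDB’s schema table and fill a DataTable per sheet (this reuses tempFile and the connection string from the function above; the NPOI side is left as a comment):

using (OleDbConnection con = new OleDbConnection(@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + tempFile + @";Extended Properties=Excel 12.0;"))
{
	con.Open();

	// Each worksheet shows up as a "table" named "SheetName$"
	DataTable schema = con.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);

	foreach (DataRow sheetRow in schema.Rows)
	{
		string tableName = sheetRow["TABLE_NAME"].ToString();

		// Skip named ranges; real worksheets end with "$"
		if (!tableName.EndsWith("$"))
			continue;

		DataTable dt = new DataTable();
		new OleDbDataAdapter(String.Format("SELECT * FROM [{0}]", tableName), con).Fill(dt);

		// Add dt as a new sheet in your NPOI workbook here...
	}
}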

Example using XLSX function
if (FileUpload.PostedFile.ContentLength > 0)
{
	Stream uploadFileStream = FileUpload.PostedFile.InputStream;

	// If the file uploaded is "XLSX", convert its Sheet1 to a NPOI document
	if (FileUpload.PostedFile.FileName.EndsWith("xlsx"))
	{
		uploadFileStream = ExcelHelper.ConvertXLSXWorksheetToXLSWorksheet(uploadFileStream, "Sheet1");
	}
}

** All your NPOI logic that adds columns, changes cell/row formatting, etc… will now work with the data extracted from the XLSX document.

Use NPOI to populate an Excel template

So I’ve been using NPOI all week and decided to do a quick “demo” for my team today. My demo was to show how to use NPOI to populate (update) an Excel template that includes various charts.  Even though NPOI does not support creating charts from scratch, it does support updating files that already include charts (hence the template).  I started by going to the Microsoft website, where they have a bunch of free “pretty” templates, and randomly chose one with a bunch of formulas and charts.  It took me about an hour to build the complete demo using very simple and easy to read code. Most of the updates are just putting values in cells, but the actual process of opening/reading/inserting/saving a new or existing file in NPOI is very easy, even for novice programmers.

Development Summary ( Step-by-Step )

  1. Get a template, I grabbed mine from here ( office.microsoft.com ).
    ** I only used the first sheet and deleted all the sample data
  2. Create a new ASP.NET 3.5 Web Application project
  3. Download the NPOI binaries and include them in your project
  4. Build a UI that will be used to populate your template.
    ** This could also be populated by a data source (db, XML, etc..)
    ** NOTE:  I used Excel to create the form using Excel formulas
  5. Add some C# code “magic” to load data into the template using NPOI

Sounds simple because it is… Here is the code to add the form contents into the Excel template.

// Open Template
FileStream fs = new FileStream(Server.MapPath(@"\template\Template_EventBudget.xls"), FileMode.Open, FileAccess.Read);

// Load the template into a NPOI workbook
HSSFWorkbook templateWorkbook = new HSSFWorkbook(fs, true);

// Load the sheet you are going to use as a template into NPOI
HSSFSheet sheet = templateWorkbook.GetSheet("Event Budget");

// Insert data into template
sheet.GetRow(1).GetCell(1).SetCellValue(EventName.Value);  // Inserting a string value into Excel
sheet.GetRow(1).GetCell(5).SetCellValue(DateTime.Parse(EventDate.Value));  // Inserting a date value into Excel

sheet.GetRow(5).GetCell(2).SetCellValue(Double.Parse(Roomandhallfees.Value));  // Inserting a number value into Excel
sheet.GetRow(6).GetCell(2).SetCellValue(Double.Parse(Sitestaff.Value));
sheet.GetRow(7).GetCell(2).SetCellValue(Double.Parse(Equipment.Value));
sheet.GetRow(8).GetCell(2).SetCellValue(Double.Parse(Tablesandchairs.Value));
sheet.GetRow(12).GetCell(2).SetCellValue(Double.Parse(Flowers.Value));
sheet.GetRow(13).GetCell(2).SetCellValue(Double.Parse(Candles.Value));
sheet.GetRow(14).GetCell(2).SetCellValue(Double.Parse(Lighting.Value));
sheet.GetRow(15).GetCell(2).SetCellValue(Double.Parse(Balloons.Value));
sheet.GetRow(16).GetCell(2).SetCellValue(Double.Parse(Papersupplies.Value));
sheet.GetRow(20).GetCell(2).SetCellValue(Double.Parse(Graphicswork.Value));
sheet.GetRow(21).GetCell(2).SetCellValue(Double.Parse(Photocopying_Printing.Value));
sheet.GetRow(22).GetCell(2).SetCellValue(Double.Parse(Postage.Value));
sheet.GetRow(26).GetCell(2).SetCellValue(Double.Parse(Telephone.Value));
sheet.GetRow(27).GetCell(2).SetCellValue(Double.Parse(Transportation.Value));
sheet.GetRow(28).GetCell(2).SetCellValue(Double.Parse(Stationerysupplies.Value));
sheet.GetRow(29).GetCell(2).SetCellValue(Double.Parse(Faxservices.Value));
sheet.GetRow(33).GetCell(2).SetCellValue(Double.Parse(Food.Value));
sheet.GetRow(34).GetCell(2).SetCellValue(Double.Parse(Drinks.Value));
sheet.GetRow(35).GetCell(2).SetCellValue(Double.Parse(Linens.Value));
sheet.GetRow(36).GetCell(2).SetCellValue(Double.Parse(Staffandgratuities.Value));
sheet.GetRow(40).GetCell(2).SetCellValue(Double.Parse(Performers.Value));
sheet.GetRow(41).GetCell(2).SetCellValue(Double.Parse(Speakers.Value));
sheet.GetRow(42).GetCell(2).SetCellValue(Double.Parse(Travel.Value));
sheet.GetRow(43).GetCell(2).SetCellValue(Double.Parse(Hotel.Value));
sheet.GetRow(44).GetCell(2).SetCellValue(Double.Parse(Other.Value));
sheet.GetRow(48).GetCell(2).SetCellValue(Double.Parse(Ribbons_Plaques_Trophies.Value));
sheet.GetRow(49).GetCell(2).SetCellValue(Double.Parse(Gifts.Value));

// Force formulas to update with new data we added
sheet.ForceFormulaRecalculation = true;

// Save the NPOI workbook into a memory stream to be sent to the browser, could have saved to disk.
MemoryStream ms = new MemoryStream();
templateWorkbook.Write(ms);

// Send the memory stream to the browser
ExportDataTableToExcel(ms, "EventExpenseReport.xls");
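
ExportDataTableToExcel() at the end is just a small overload that streams the finished workbook to the browser. Here is a minimal sketch of it, assuming the same Response-writing approach as my other NPOI export functions:

private static void ExportDataTableToExcel(MemoryStream memoryStream, string fileName)
{
	HttpResponse response = HttpContext.Current.Response;
	response.Clear();
	response.ContentType = "application/vnd.ms-excel";
	response.AddHeader("Content-Disposition", String.Format("attachment;filename={0}", fileName));

	// ToArray() copies only the bytes actually written to the stream
	response.BinaryWrite(memoryStream.ToArray());
	response.End();
}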

Here are the screen shots of the form and completed template…

Here is the demo UI.
NPOI - ASP.NET Form used to populate a Excel Template
Here is a populated template.
NPOI - ASP.NET Form merged with Excel template

Here is a copy of the XLS file created in the program:  NPOI – ASP.NET Form merged with Excel template

Here is a copy of the source code for the demo:  NPOI – Visual Studio 2008 (ASP.NET 3.5) Template Merge Demo

** There was a caveat during the development process.  During the demo I found a bug with NPOI opening my XLS file, and I had to download the source code and make two changes to get NPOI to open my XLS template (the bug report states this issue is caused when you save an XLSX file as XLS in Excel 2007). Read this post to see the fix or use the DLLs for NPOI included in my demo project.

** NPOI is an active Open Source project; the bugs I found have probably already been fixed, and if you download the latest code from CodePlex you’ll probably have no problems running the demo.

NPOI and the Excel 2003 row limit

I ran into a problem today, testing my new NPOI DataTable web export function… While running some large queries that resulted in 65K+ rows of data, my function blew up. As you know, Excel 2003 and the BIFF format only support 65,536 rows! To make sure this never happens again, I’ve added a little block of code around my details loop to create an additional sheet every time you reach row 65,536.

Code excerpt based on using a DataTable as your data source
int rowIndex = 1;               // Starting Row (0 = Header)
int sheetIndex = 1;             // Starting sheet is always set to "Sheet1"
const int maxRows = 65536;      // Max rows p/sheet in Excel 2003

// Start loop of details to write to sheet
foreach (DataRow row in DataTableToExport.Rows)
{
	// Check if max rows hit, if so start new sheet and copy headers from current sheet.
	if(rowIndex % maxRows == 0)
	{
		// Auto size columns on current sheet
		for (int h = 0; h < headerRow.LastCellNum; h++)
		{
			sheet.AutoSizeColumn(h);
		}

		// Increment sheet counter
		sheetIndex++;

		// Create new sheet
		sheet = workbook.CreateSheet("Sheet" + sheetIndex);

		// Create header on new sheet
		HSSFRow additionalHeaderRow = sheet.CreateRow(0);

		// Copy headers from first sheet
		for (int h = 0; h < headerRow.LastCellNum; h++)
		{
			HSSFCell additionalHeaderColumn = additionalHeaderRow.CreateCell(h);
			additionalHeaderColumn.CellStyle = headerRow.GetCell(h).CellStyle;
			additionalHeaderColumn.SetCellValue(headerRow.GetCell(h).RichStringCellValue);
		}

		rowIndex = 1;
	}

	// Create new detail row in sheet
	HSSFRow dataRow = sheet.CreateRow(rowIndex);
	// Loop the columns from the DataRow and add using dataRow.CreateCell(#)....
}

In a nutshell, I create some counters before going into the detail row loop to track the Row and Sheet number. When I hit the max row number on a sheet, I create a new sheet. To keep everything pretty, I copy the header row from the first sheet to the first row of the new sheet. The only output limitation now is the max sheet count of 255.

Build beautiful XLS documents in .NET with NPOI

I’ve seen Excel used for a ton of different solutions; Flow Charts, Business Forms, EDI, Photo Albums, etc… Regardless of how Excel is being used, it’s one of the most common business document formats I know.  In the office, we commonly refer to Excel as duct tape.  I’ve traditionally limited my web & Excel integration to the capabilities of the System.Data.OleDb namespace. This has been a great approach, and when you can’t buy 3rd party controls or run code in full trust, it’s a proven solution. It has solved 99% of my problems, but there’s always been a need to do a bit more than use Excel as a basic data source.  Fortunately for me, I found a trail of posts on StackOverflow that led me to NPOI.  It is a .NET implementation of the Apache POI Project.  NPOI is a bit behind POI, but this amazing library gives you the ability to develop rich Excel documents with an easy-to-use API.

I’ve run into a few bugs since starting, but I found lots of posts on CodePlex where the project is hosted.  I’ve found that running 1.2.2 (alpha) solves a bunch of bugs I immediately found when running 1.2.1.  I downloaded the latest build from CodePlex, but it was still on a 1.2.1 branch and was missing some of the 1.2.2 fixes.

Now that we know what NPOI is going to provide us, let’s go grab the files we’ll need to get started.

  1. NPOI.Util Basic assistant class library
  2. NPOI.POIFS OLE2 format read/write library
  3. NPOI.DDF Drawing format read/write library
  4. NPOI.SS Formula evaluation library
  5. NPOI.HPSF Summary Information and Document Summary Information read/write library
  6. NPOI.HSSF Excel BIFF format read/write library

To keep the demo easy to follow and provide some good ways to refactor NPOI into your existing project, I’ve taken an old function that would output a DataTable to a HTML formatted document with an Excel (xls) extension.  This solution has worked for years, but every time you open one of these XLS files you get the following message (“The file you are trying to open “”, is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now?“).  It’s a bit annoying and scary to the user, but this approach was used to solve the problem of Excel truncating leading zeros in numeric data (e.g. 00002345).

Before NPOI Function (DataTable to HTML document saved with an XLS extension)
/// <summary>
/// Exports a DataTable as a HTML formatted table and forces the results to be downloaded in an .XLS file.
/// </summary>
/// <param name="dataTable">DataTable</param>
/// <param name="fileName">Output File Name</param>
static public void DataTableToHtmlXLS(DataTable dataTable, string fileName)
{
	//Add _response header
	HttpResponse response = HttpContext.Current.Response;
	response.Clear();
	response.ClearHeaders();
	response.AddHeader("content-disposition", string.Format("attachment;filename={0}.xls", fileName));
	response.ContentEncoding = Encoding.Default;
	response.Charset = String.Empty;
	response.ContentType = "text/HTML";

	DataGrid dataGrid = new DataGrid();
	dataGrid.EnableViewState = false;
	dataGrid.DataSource = dataTable;
	dataGrid.DataBind();

	StringWriter sw = new StringWriter();
	HtmlTextWriter hw = new HtmlTextWriter(sw);

	dataGrid.RenderControl(hw);
	response.Write(sw.ToString());

	response.End();
}
Error opening HTML file saved as XLS

** If you were to look at the XLS file in notepad, you’d see the contents are HTML.

NPOI Function (DataTable to Excel)
/// <summary>
/// Render DataTable to Excel File
/// </summary>
/// <param name="sourceTable">Source DataTable</param>
/// <param name="fileName">Destination File name</param>
public static void ExportDataTableToExcel(DataTable sourceTable, string fileName)
{
	HSSFWorkbook workbook = new HSSFWorkbook();
	MemoryStream memoryStream = new MemoryStream();
	HSSFSheet sheet = workbook.CreateSheet("Sheet1");
	HSSFRow headerRow = sheet.CreateRow(0);

	// handling header.
	foreach (DataColumn column in sourceTable.Columns)
		headerRow.CreateCell(column.Ordinal).SetCellValue(column.ColumnName);

	// handling value.
	int rowIndex = 1;

	foreach (DataRow row in sourceTable.Rows)
	{
		HSSFRow dataRow = sheet.CreateRow(rowIndex);

		foreach (DataColumn column in sourceTable.Columns)
		{
			dataRow.CreateCell(column.Ordinal).SetCellValue(row[column].ToString());
		}

		rowIndex++;
	}

	workbook.Write(memoryStream);
	memoryStream.Flush();

	HttpResponse response = HttpContext.Current.Response;
	response.ContentType = "application/vnd.ms-excel";
	response.AddHeader("Content-Disposition", string.Format("attachment;filename={0}", fileName));
	response.Clear();

	// ToArray() copies only the bytes actually written (GetBuffer() can include unused buffer space)
	response.BinaryWrite(memoryStream.ToArray());
	response.End();
}

** Pretty simple right?  In my production code this is actually split into multiple functions so I can reuse the DataTable to Excel code.  The output of this function is an Excel 2003 (XLS) BIFF file.

Now that we are using NPOI to create “real” XLS documents, we can do something cool like add formatting to our header row.

// handling header.
foreach (DataColumn column in sourceTable.Columns)
{
	// Create New Cell
	HSSFCell headerCell = headerRow.CreateCell(column.Ordinal);

	// Set Cell Value
	headerCell.SetCellValue(column.ColumnName);

	// Create Style
	HSSFCellStyle headerCellStyle = workbook.CreateCellStyle();
	headerCellStyle.FillForegroundColor = HSSFColor.AQUA.index;
	headerCellStyle.FillPattern = CellFillPattern.SOLID_FOREGROUND;

	// Add Style to Cell
	headerCell.CellStyle = headerCellStyle;
}
Adding Color Headers to POI Excel File

There are a million more formatting options possible, and by the time you read this you should find a few more examples on my site.
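
For example, making the header font bold only takes a couple more lines inside the loop above (a sketch against the same NPOI version; the exact constant names vary a bit between NPOI releases):

// Create a bold font and attach it to the header style
HSSFFont headerFont = workbook.CreateFont();
headerFont.Boldweight = HSSFFont.BOLDWEIGHT_BOLD;
headerCellStyle.SetFont(headerFont);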

Reset Network Password in C#

If you have a small portal at work and need to provide a way for people to reset their password(s), or have certain authorized people reset password(s), here is an easy solution. In addition to the password reset, I also unlock the account, since they have probably locked it by trying to log in with the wrong password a million times!

Before you start, you need to know a few things first. Use Microsoft’s Active Directory Users and Computers (“ADUC”) Administrator tool on a Domain Controller to find these facts.

  1. Domain Name (ADUC)
    e.g. microsoft.local and MICROSOFT
  2. Domain Controller Name (ADUC, look under Domain Controllers)
  3. User Container (e.g. Item where all your users are located, default is “Users”)

In the example below, the company’s local domain is “MICROSOFT.LOCAL” and the pre-Windows 2000 name is “MICROSOFT”. My domain controller is called “SERVER1” and all my users are in a “USERS” OU. (Note that the built-in default “Users” container is actually a CN, not an OU, so adjust the LDAP path to match where your accounts really live.)

// Import the following namespace
using System.DirectoryServices;

        /// <summary>
        /// Reset a user's domain password, using a domain administrators account.
        /// </summary>
        /// <param name="acct">User who needs a password reset.</param>
        /// <param name="username">Administrator User Name</param>
        /// <param name="password">Administrator Password</param>
        /// <param name="newpassword">User's new password</param>
        public void resetPassword(string acct, string username, string password, string newpassword)
        {

            try
            {
                // LDAP Connection String
                string path = "LDAP://SERVER1/OU=USERS,DC=MICROSOFT,DC=LOCAL";

                // Prefix to Add to User Name (e.g. MICROSOFT\Administrator)
                string Domain = "MICROSOFT\\";  //Pre-Windows 2000 Name

                // Establish LDAP Connection
                DirectoryEntry de = new DirectoryEntry(path, Domain + username, password, AuthenticationTypes.Secure);

                // LDAP Search Filter
                DirectorySearcher ds = new DirectorySearcher(de);
                ds.Filter = "(&(objectClass=user)(|(sAMAccountName=" + acct + ")))";

                // LDAP Properties to Load
                ds.PropertiesToLoad.Add("displayName");
                ds.PropertiesToLoad.Add("sAMAccountName");
                ds.PropertiesToLoad.Add("DistinguishedName");
                ds.PropertiesToLoad.Add("CN");

                // Execute Search
                SearchResult result = ds.FindOne();

                string dn = result.Properties["DistinguishedName"][0].ToString();

                DirectoryEntry uEntry = new DirectoryEntry("LDAP://" + dn, username, password);

                uEntry.Invoke("SetPassword", new object[] { newpassword });  //Set New Password
                uEntry.Properties["LockOutTime"].Value = 0;  //Unlock Account
                uEntry.CommitChanges();
                uEntry.Close();

            }
            catch (Exception e)
            {
                // Log Error
            }
        }

The function above does all the work, but this probably won’t work by default since IIS normally runs under a low privileged local account. In order to change somebody’s password, you need to impersonate a Domain Administrator’s account.
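
Here is a minimal sketch of that impersonation using the classic LogonUser P/Invoke pattern (the account values are placeholders; you would wire resetPassword() into the work delegate):

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

public class ImpersonationHelper
{
    [DllImport("advapi32.dll", SetLastError = true)]
    private static extern bool LogonUser(string lpszUsername, string lpszDomain,
        string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);

    [DllImport("kernel32.dll")]
    private static extern bool CloseHandle(IntPtr handle);

    private const int LOGON32_LOGON_INTERACTIVE = 2;
    private const int LOGON32_PROVIDER_DEFAULT = 0;

    /// <summary>Runs the supplied delegate under the given domain account.</summary>
    public static void RunAs(string username, string domain, string password, Action work)
    {
        IntPtr token;

        if (!LogonUser(username, domain, password,
            LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
        {
            throw new InvalidOperationException("LogonUser failed.");
        }

        try
        {
            // Everything inside the using block executes as the supplied account
            using (WindowsImpersonationContext ctx = new WindowsIdentity(token).Impersonate())
            {
                work();
            }
        }
        finally
        {
            CloseHandle(token);
        }
    }
}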

** Important note: if your admin accounts are stored with your user accounts, then this code could be used to reset your admin password! This is a big NO-NO, since it could effectively lock you out of the network. Consider putting your users in a different container/OU and setting the filter to only look in that one place!

Convert JSON to HTML using JavaScript

So you’ve just called your first REST service and have a JSON array full of data… where do you put it? I noticed lots of people using various plug-ins to create HTML tables by defining columns, data types, etc… but what happens when you don’t know the schema ahead of time? Since I wanted to throw random JSON data into HTML tables, I created two helper functions that will iterate over the data to create the views.

The code below is an improvement on the ad-hoc JavaScript solution I created a few weeks ago for an ASP.NET project (link below). As you will see, it’s pretty easy to render an HTML table from an object array using plain JavaScript. Since most results are in a Table or Detail layout, I created 2 functions to return the data in either format. I also added some optional parameters that you can set to control formatting. The Details View was designed to show a single row/object; the headers will display on the left and the data will display on the right (see second example).

// This function creates a standard table with column/rows
// Parameter Information
// objArray = Anytype of object array, like JSON results
// theme (optional) = A css class to add to the table (e.g. <table class="<theme>">
// enableHeader (optional) = Controls if you want to hide/show, default is show
function CreateTableView(objArray, theme, enableHeader) {
    // set optional theme parameter
    if (theme === undefined) {
        theme = 'mediumTable'; //default theme
    }

    if (enableHeader === undefined) {
        enableHeader = true; //default enable headers
    }

    // If the returned data is an object do nothing, else try to parse
    var array = typeof objArray != 'object' ? JSON.parse(objArray) : objArray;

    var str = '<table class="' + theme + '">';

    // table head
    if (enableHeader) {
        str += '<thead><tr>';
        for (var index in array[0]) {
            str += '<th scope="col">' + index + '</th>';
        }
        str += '</tr></thead>';
    }

    // table body
    str += '<tbody>';
    for (var i = 0; i < array.length; i++) {
        str += (i % 2 == 0) ? '<tr class="alt">' : '<tr>';
        for (var index in array[i]) {
            str += '<td>' + array[i][index] + '</td>';
        }
        str += '</tr>';
    }
    str += '</tbody>'
    str += '</table>';
    return str;
}

// This function creates a details view table with column 1 as the header and column 2 as the details
// Parameter Information
// objArray = Anytype of object array, like JSON results
// theme (optional) = A css class to add to the table (e.g. <table class="<theme>">
// enableHeader (optional) = Controls if you want to hide/show, default is show
function CreateDetailView(objArray, theme, enableHeader) {
    // set optional theme parameter
    if (theme === undefined) {
        theme = 'mediumTable';  //default theme
    }

    if (enableHeader === undefined) {
        enableHeader = true; //default enable headers
    }

    // If the returned data is an object do nothing, else try to parse
    var array = typeof objArray != 'object' ? JSON.parse(objArray) : objArray;

    var str = '<table class="' + theme + '">';
    str += '<tbody>';

    for (var i = 0; i < array.length; i++) {
        var row = 0;
        for (var index in array[i]) {
            str += (row % 2 == 0) ? '<tr class="alt">' : '<tr>';

            if (enableHeader) {
                str += '<th scope="row">' + index + '</th>';
            }

            str += '<td>' + array[i][index] + '</td>';
            str += '</tr>';
            row++;
        }
    }
    str += '</tbody>'
    str += '</table>';
    return str;
}

Standard Table Example Usage:

$(document).ready(function() {
	$.ajax({
		type: "POST",
		url: "/SampleRestService",
		contentType: "application/json; charset=utf-8",
		dataType: "json",
		data: "{}",
		success: function(res) {
			$('#Results').append(CreateTableView(res)).fadeIn();
		}
	});
});
JSON to HTML Table - Standard View

Details View Example Usage:

$(document).ready(function() {
	$.ajax({
		type: "POST",
		url: "/SampleRestService",
		contentType: "application/json; charset=utf-8",
		dataType: "json",
		data: "{}",
		success: function(res) {
			$('#Results').append(CreateDetailView(res,"CoolTableTheme",true)).fadeIn();
		}
	});
});
JSON to HTML Table - Details View

JSON to HTML JavaScript Source Code

That’s it, just do your normal jQuery ajax call and you can put your JSON into an HTML table. This is a great way to test your JSON objects. I’ve tested the JSON to HTML on various object(s) and the results have been pretty good. If you find you need advanced features when you’re building your HTML table, you should look at something like jqGrid that has paging, sorting, etc…

[Update 10/6/2010]
A lot of people have asked for a full working HTML demo using JSON strings. The link below is a demo showing how to use the helpers with various JSON strings. Let me know if this helps!

JSON to HTML – Complete HTML Demo Project

Show a List of Posts for a Category on your 404 page

Today is my second day of being back online since moving hosts to GoDaddy from WebHost4Life. I would like to say the process was painless, but I had to call tech support at GoDaddy 5 times to get my hosting account set up right (Win Web vs. Windows). This undocumented but important configuration setting will determine whether you can run PHP on Windows at GoDaddy. So if you plan to sign up for a Windows account and run a PHP application (like WordPress), make sure you call and order a “NON-GRID” account…

Now that things are back up and running, I’ve noticed that I’m getting lots of hits on my old permalinks. I had some problems moving things off my old host, since each button click on their site would take 3-5 minutes (and time out 90% of the time). To help track what was happening, I set my 404 to go to WordPress’s “index.php” so I could track the hits and throw a pretty 404 message. After getting something set up, I decided I wanted to show a list of my posts so the user could quickly find what they were looking for. Since people only hit my Web Development posts, I inserted the logic below into my 404.php using the WordPress Editor under Appearance.

<?php
	$out = '';

	$query = array();

	$query[] = 'category=web development';
	$query[] = 'numberposts=100';

	$posts_to_show = get_posts( implode( '&', $query ) );

	$out = '<ul>';

	foreach ($posts_to_show as $post_to_show) {
		$permalink = get_permalink( $post_to_show->ID );
		$out .= <<<HTML
		<li>
			<a href ="{$permalink}" title="{$post_to_show->post_title}">{$post_to_show->post_title}</a>
		</li>
HTML;
	}

	$out .= '</ul>';

    echo $out;
?>

This little snippet is based on the WP Show Post Shortcode. I basically hard-coded the query category and set the number of results to 100. This results in a list of my last 100 blog posts, sorted by date posted.

If you want to see it in action, type an invalid URL into my site and you’ll get a 404 with a list of links to my blog posts!

Note… This is not going to help with search engine rankings because it’s a 404 error page. My goal was to give the lost visitor quick access to my development posts, since I’m 99% sure they used an old link indexed by Google.

ASP.NET JSON Response to Dynamic HTML Table

Do you need a way to quickly display an HTML table of data from a JSON response? I’ve got a solution. The key to doing all this is just a little JavaScript to parse your JSON results. By looping over the array of objects, you can build a dynamic table (no predefined columns) in just a few seconds. This solution can be used with any language; since I use .NET / jQuery most of my day, this is an ASP.NET demo.

If you just want to see a full working demo, download the sample project at the bottom of the article. It’s a VS 2008 project build-ready!

This user control was an enhancement to an existing user control whose core functionality is to take a user supplied T-SQL statement and generate a nicely formatted HTML table. The existing control has worked great for years, but since I recently moved my code from 2.0 to 3.5, I figured I’d give this module a little attention. I found pages using the module would sometimes take 15+ seconds to load because of slow database servers. I enabled caching to try and work around the problem, but users told me that some queries need to be real-time. The solution was to move from a standard GridView control to an AJAX jQuery solution that pulls the data and converts the results to a pretty HTML table. I used Google to search for an existing solution, but everything I found used 3rd party components or required a results template with defined columns (e.g. Name and Object[i].Name). Since my tool is designed for ad-hoc queries, I wanted the rendering logic to require no predefined facts about the data.

My demo has 3 external requirements, which are all FREE (jQuery, JavaScript JSON helper, JSON.NET).

Newtonsoft.Json.dll
jquery-1.3.2.debug.js
json.debug.js

In addition to these requirements, I added a few layout files to the demo to make the HTML table look nice. Nobody likes an ugly demo…

images / loading.gif
styles / default.css

The only thing left was the results page, rendering JS logic and the web service.

Here is the JavaScript function used to convert a JSON array of objects into an HTML table.

function CreateDynamicTable(objArray) {
    var array = JSON.parse(objArray);

    var str = '<table class="lightPro">';

    str += '<tbody><tr>';

    for (var index in array[0]) {
        str += '<th scope="col">' + index + '</th>';
    }

    str += '</tr>';

    str += '</tbody><tbody>';

    for (var i = 0; i < array.length; i++) {
        str += (i % 2 == 0) ? '<tr class="alt">' : '<tr>';

        for (var index in array[i]) {
            str += '<td>' + array[i][index] + '</td>';
        }

        str += '</tr>';
    }

    str += '</tbody>'

    str += '</table>';

    return str;
}

Pretty simple, right… I pass in an object array that I iterate over to grab all the row/column values. I use the first row to grab the column names and place them inside the table header.

[Update 4/30/10] – I created 2 functions that are improvements on this in a new posting. Check it out to see example usage and download the JS code. It’s a much better solution than hard-coding the values. Read Now…

Here is the actual page that makes the web service call and displays the HTML table.

<script src="scripts/jquery-1.3.2.debug.js" type="text/javascript"></script>
<script src="scripts/json.htmTable.js" type="text/javascript"></script>
<script src="scripts/json.debug.js" type="text/javascript"></script>

<script type="text/javascript">
        $(document).ready(function() {
            $.ajax({
                type: "POST",
                url: "DemoSvc.asmx/GetJSONTableContents",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                data: "{}",
                success: function(res) {
                    $('#<%= DynamicGridLoading.ClientID %>').hide();
                    $('#<%= DynamicGrid.ClientID %>').append(CreateDynamicTable(res.d)).fadeIn();
                }
            });
        });
</script>

<form id="form1">

<div id="DynamicGrid">
<div id="DynamicGridLoading">
            <img src="images/loading.gif" alt="" /><span> Loading Data... </span></div>
</div>

</form>

The code is pretty simple and there is no code behind; everything is done in the web service and injected via jQuery. This could have been done in an HTML page or any other language, nothing here is specific to ASP.NET.

The code for the web service is pretty basic, just be sure to look at the Service() method that is used to convert the DataTable to JSON. This is where the magic happens. This method is a wrapper around the JSON.NET library and chooses some default settings needed for the conversion.
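
The full web service source is in the demo project below, but the basic shape is simple. Here is a rough sketch of what such an ASMX method can look like with JSON.NET doing the DataTable conversion (GetDataTableFromQuery is a hypothetical stand-in for your data access code):

using System.Data;
using System.Web.Services;
using System.Web.Script.Services;
using Newtonsoft.Json;

[WebService(Namespace = "http://tempuri.org/")]
[ScriptService] // Lets ASP.NET AJAX call the service and wrap the result in "d"
public class DemoSvc : WebService
{
    [WebMethod]
    public string GetJSONTableContents()
    {
        // Hypothetical helper that runs the ad-hoc T-SQL and returns a DataTable
        DataTable dt = GetDataTableFromQuery("SELECT * FROM Products");

        // JSON.NET turns the DataTable into a JSON array of row objects,
        // which is exactly what CreateDynamicTable() parses on the client
        return JsonConvert.SerializeObject(dt);
    }
}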

In summary, this little switch removed the blocking being caused by my slow SQL queries. My pages now instantly load **POOF**. During loading, they display a loading icon/text so the users know something is loading! Since caching was already being used for some queries, I changed the code to cache the DataTable in the web service call.

VS2008 – ASP.NET DataTable to JSON to Dynamic HTML Table Demo

T-SQL Stored Procedure (Optional Parameters)

I use a lot of queries to drive reports and interactive search tools, and the most common requirement across all of them is optional parameters. I’ve used a ton of different methods to pull this off in the past, but these days I primarily use one approach that is efficient and easy to read when I build stored procedures with optional parameters.

-- //////////////////////////////////////////
-- Stored Procedure - Optional Params (NEW WAY)
-- //////////////////////////////////////////
CREATE PROCEDURE GetOrders
	@OrderNumber VARCHAR(10),
	@TrackingNumber VARCHAR(10)
AS
	SELECT * FROM SALES
	WHERE (@OrderNumber IS NULL OR @OrderNumber = '' OR ORDERNUMBER = @OrderNumber)
		AND (@TrackingNumber IS NULL OR @TrackingNumber = '' OR TRACKINGNUMBER = @TrackingNumber)

-- //////////////////////////////////////////////////
-- Stored Procedure - Optional Params - Dynamic SQL (OLD WAY)
-- //////////////////////////////////////////////////
CREATE PROCEDURE GetOrders
	@OrderNumber VARCHAR(10),
	@TrackingNumber VARCHAR(10)
AS
	DECLARE @STRSQL VARCHAR(200)
	SET @STRSQL = 'SELECT * FROM SALES WHERE 1=1 '

	IF @OrderNumber <> ''
	BEGIN
		SET @STRSQL = @STRSQL + ' AND ORDERNUMBER = ''' + @OrderNumber + ''''
	END

	IF @TrackingNumber <> ''
	BEGIN
		SET @STRSQL = @STRSQL + ' AND TRACKINGNUMBER = ''' + @TrackingNumber + ''''
	END

EXECUTE(@STRSQL)

As you can see, the “New Way” has a lot less code and does not use dynamic T-SQL. In addition to checking for NULL parameter values, I also suggest you check for empty strings since text boxes normally return an empty string.

Now you have the choice of sending NULL or “” and the criteria will be skipped.
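
Here is a quick sketch of calling the “new way” procedure from ADO.NET; pass DBNull.Value (or an empty string) for any parameter you want skipped (the connection string is assumed to exist elsewhere in your code):

using System;
using System.Data;
using System.Data.SqlClient;

using (SqlConnection con = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("GetOrders", con))
{
	cmd.CommandType = CommandType.StoredProcedure;

	// Filter on order number only; the tracking number criterion is skipped
	cmd.Parameters.AddWithValue("@OrderNumber", "SO12345678");
	cmd.Parameters.AddWithValue("@TrackingNumber", DBNull.Value);

	con.Open();
	using (SqlDataReader reader = cmd.ExecuteReader())
	{
		while (reader.Read())
		{
			// Process each matching SALES row...
		}
	}
}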

Download Your Site to HTML

HTTrack website copier is awesome and FREE! I’ve been using it for a few months and I’ve already saved myself and my team members hundreds of hours by downloading full working copies of various websites locally.

Here are a few examples of websites I’ve downloaded.

  1. A client was having problems with their hosting company, who refused to give them access to the source code (developed by a 3rd party) for their CF (ColdFusion) website. After weeks of trying to solve the problem, they decided to switch hosting companies. During their migration to a new hosting company and a new CMS solution, they wanted to keep a copy of the old site online. Regardless of the engine running the website, HTTrack was able to download a working copy of the site to HTML, minus the dynamic features (Search, CMS Admin, etc…), in a few minutes.
  2. We inherited an old web server at work running Plone 2.0.4. The machine was running on old hardware that was failing (2/3 dead HDDs, broken RAID 1) and Zope was randomly crashing. After spending 3 days trying to get the machine stable, I decided to convert the site to HTML and put it on one of our newer “stable” web servers. The process was a little messy; since our Plone instance pointed to folder paths vs. actual files (/index_html vs. /index.html), the final static site had a lot of weird and randomly directed links! The final static HTML site probably took ~8 solid hours of HTML clean-up, but there was also a lot of bad/obsolete content in the existing portal that made this process painful!
  3. I have a few websites I like to use as samples/demos that show better as interactive pages! Using HTTrack and controlling the depth of link following/downloading (like first page only), I can download an offline working copy of a site’s main page! This is a great way to demo a feature/capability/competitor website…

A few other good things to know: a couple of sites I’ve downloaded require you to log in first. This isn’t an issue for HTTrack, since you can build your own cookies to apply to your capture. It was a bit tricky getting the cookie format correct (I used the Netscape cookie format), but after a few minutes I was able to take the cookie data from IE/Firefox and have a perfect working cookie for HTTrack to use!

Once again, this is a FREE utility that is handy to have around!

HTTrack WEBSITE COPIER

SubSonic 2.2 – RESTHandler (Add ProviderName Support)

I’ve been using SubSonic’s built-in RESTHandler for some basic AJAX features in my portal at work. It’s a really nice extra feature and provides a quick way to pull XML/JSON/RSS from your DAL. A few weeks ago I noticed that some features based on this handler had stopped working. The cause: SubSonic’s RESTHandler had no way to pass in the provider name as part of the URL. I currently have 5 different providers (databases) set up for SubSonic, so I needed a way to point at the various providers when calling a URL.

Edit the following file “SubSonic/HttpHandlers/RESTfullUrl.cs”

1. Add new property to access “ProviderName”

private string _providerName = String.Empty;
public string ProviderName
{
    get { return _providerName; }
    set { _providerName = value; }
}

2. Modify ParseUrl()

private void ParseUrl()
{
    //work backwards up the URL
    //first thing, check for a query string and strip it
    string workingUrl = _rawUrl;

    if(workingUrl.Contains("?"))
    {
        //strip off the query string - it'll be used later
        workingUrl = Strings.Chop(workingUrl, "?");
    }

    string[] bits = workingUrl.Split(new[] {'/'}, StringSplitOptions.RemoveEmptyEntries);
    //the URL should be split out into
    //protocol ("http");
    //domain ("foo.com");
    //service directory ("services");
    //provider name ("northwinds")
    //table or sp ("products")
    //command.format ("list.xml")

    //the index of each item within the URL
    int commandIndex = bits.Length - 1;
    int tableSPIndex = bits.Length - 2;
    int providerIndex = bits.Length - 3;

    //now check to see if the last item on the URL is a command or a key
    if(IsNonKeyCommand(workingUrl))
    {
        //the last item is the command - list/show/etc; nothing extra to set
    }
    else
    {
        //the last item is a key; set it, then walk each index back one slot
        _key = Path.GetFileNameWithoutExtension(bits[bits.Length - 1]);
        commandIndex--;
        tableSPIndex--;
        providerIndex--;
    }

    //the command is the next item up the chain
    _restCommand = DecideCommand(Path.GetFileNameWithoutExtension(bits[commandIndex]));

    //evaluate the command
    if(_restCommand == RESTCommand.Exec)
        _spName = bits[tableSPIndex];
    else
        _tableName = bits[tableSPIndex];

    _providerName = bits[providerIndex];

    ParseQueryString();
}


Edit the following file “SubSonic/HttpHandlers/RESTHandler.cs”

3. Update GenerateReturnSet(), line #286.

q = new Query(_url.TableName);  //before
q = new Query(_url.TableName,_url.ProviderName);  //after


Here is an example using the new syntax that supports provider name.

http://localhost/services/northwinds/customers/list.xml
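For reference, the handler change in step 3 maps this URL onto SubSonic’s two-argument Query constructor, so the equivalent DAL call looks roughly like the sketch below (the table and provider names simply mirror the example URL):

// Equivalent DAL call using the overload from step 3; "customers" and
// "northwinds" mirror the table and provider segments of the URL above.
Query q = new Query("customers", "northwinds");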

How to build a Date Dimension Table for OLAP

OLAP Date Dimension Table Script

This query will build a table called “OLAP_DATE_DIMENSION” based upon a starting and ending date. As you can see, all of the work is done by the “DATENAME” and “DATEPART” functions in MS SQL Server. If you want information on using these, Query Analyzer for MS SQL Server 2000 is great! I’m not sure what happened at MS, but after SQL Server 2000 the built-in help documents suck and you’re better off using Google.

** This query has a little hard-coded logic for my specific company’s fiscal year (April – March), adjust accordingly.

-- =============================================
-- Build Dimension Date Table
-- =============================================
DECLARE @StartDate as smalldatetime, @EndDate as smalldatetime

SET @StartDate = '04/01/2010'
SET @EndDate = '03/31/2011'

BEGIN
SELECT
DATEPART(dy, @StartDate) as DAY_OF_YEAR,
CASE
WHEN DATENAME(qq,@StartDate)-1=0 THEN
4
ELSE
DATENAME(qq,@StartDate)-1
END AS FISCAL_PERIOD,
DATENAME(m,@StartDate) AS MONTH_DESC,
DATEPART(m,@StartDate) AS MONTH_NUM,
DATEPART(qq,@StartDate) AS QUARTER_NUM,
CONVERT(smalldatetime, CONVERT(CHAR(10),@StartDate,110)) AS SALES_DATE,
REPLACE(CONVERT(CHAR(10),@StartDate,06),' ','-') AS SALES_DATE_SPL,
DATEPART(yy,@StartDate) AS YEAR_NUM,
DATEPART(d,@StartDate) AS DAY_OF_MONTH,
CASE
WHEN DATEPART(m,@StartDate) < 4 THEN
DATENAME(yy,@StartDate)-1
ELSE
DATENAME(yy,@StartDate)
END AS FISCAL_YEAR,
CASE
WHEN DATEPART(m,@StartDate) > 3 THEN
DATEPART(m,@StartDate)-3
ELSE
12-(3-DATEPART(m,@StartDate))
END AS FISCAL_MONTH
INTO OLAP_DATE_DIMENSION

SELECT @StartDate = @StartDate + 1

END

WHILE (@StartDate <= @EndDate)
    BEGIN
     BEGIN
     INSERT INTO OLAP_DATE_DIMENSION SELECT
     DATEPART(dy, @StartDate) as DAY_OF_YEAR,
     CASE
     WHEN DATENAME(qq,@StartDate)-1=0 THEN
     4
     ELSE
     DATENAME(qq,@StartDate)-1
     END AS FISCAL_PERIOD,
     DATENAME(m,@StartDate) AS MONTH_DESC,
     DATEPART(m,@StartDate) AS MONTH_NUM,
     DATEPART(qq,@StartDate) AS QUARTER_NUM,
         CONVERT(smalldatetime, CONVERT(CHAR(10),@StartDate,110)) AS SALES_DATE,
     REPLACE(CONVERT(CHAR(10),@StartDate,06),' ','-') AS SALES_DATE_SPL,
     DATEPART(yy,@StartDate) AS YEAR_NUM,
     DATEPART(d,@StartDate) AS DAY_OF_MONTH,
     CASE
     WHEN DATEPART(m,@StartDate) < 4 THEN
     DATENAME(yy,@StartDate)-1
     ELSE
     DATENAME(yy,@StartDate)
     END AS FISCAL_YEAR,
     CASE
     WHEN DATEPART(m,@StartDate) > 3 THEN
     DATEPART(m,@StartDate)-3
     ELSE
     12-(3-DATEPART(m,@StartDate))
     END AS FISCAL_MONTH
     END
     SELECT @StartDate = @StartDate + 1
    END

jQuery AssignEnterKey Plug-in

A few days ago, I was trying to figure out what to do about a user control with multiple fieldsets that each needed a different default button triggered when the enter key was pressed… Too bad the first button on the page is the default, by default! Since there were ~4 different forms that each had their own button, I came up with a hard-coded solution that solved the problem. Somebody suggested I make a jQuery plug-in, so I tweaked the code a bit more and came up with the plug-in below (I don’t think this qualifies as anything amazing, but for a simpleton like me, it has a place).

A few examples of how to assign the default button for a field/form/etc…

1. You have 2+ fieldsets with their own button to perform an action: $("FIELDSET").AssignEnterKey();

2. A div called Search that provides site-wide search: $("#Search").AssignEnterKey();

3. Multiple look-ups on one page that share the class name Lookup: $(".Lookup").AssignEnterKey();

/*
 * jQuery AssignEnterKey Plug-in
 *
 * Summary: This plug-in attaches click/blur events to all input elements
 * inside a parent container; the events add and remove a class on the
 * parent container that is used to associate the enter key with a
 * button or submit button.
 *
 * Notes: Using the options, you can target different controls for both
 * triggering the focus and the target to be executed.
 *
 * Example: $("FIELDSET").AssignEnterKey();
 *
 * Developed by Zachary.Hunter@gmail.com
 * Date: 2009-05-29 23:05
 */

(function($) {
   $.fn.AssignEnterKey = function(options) {
      var defaults = {triggerExp : ":input", targetExp : ":submit,:button" };
      var options = $.extend(defaults, options);

      $(document).keypress(function(e) {
         if ((e.which && e.which == 13) || (e.keyCode && e.keyCode == 13)) {
            $(".EnterKeyAssigned").find(options.targetExp).click();
            return false;
         }
      });

      return this.each(function() {
         var obj = $(this);
         obj.find(options.triggerExp)
             .click(function(){ obj.addClass("EnterKeyAssigned"); })
             .blur(function(){ obj.removeClass("EnterKeyAssigned"); });
         }
      );
   };
})(jQuery);

I think this came out much more usable than my original code. I set up two parameters you can override so it can look for specific trigger or target controls.

Overall, this is a very basic plug-in and I can already see a few things that should be addressed (What to do when nothing is selected? What to do when you have multiple buttons in a container?). I think I’ll add a few more default options to address these. I’m very happy with what I’ve got, since this was my first attempt at developing a jQuery plug-in…

Feedback is always appreciated!!!

[EDIT 5/19] by Zach: I have submitted this to the jQuery website as a plug-in. Hopefully somebody else finds this useful…

Using jQuery Autocomplete Plugin with ASMX & XML

I’m really surprised that the jQuery autocomplete plug-in does not support XML by default. Almost all web services have the option to transmit XML, so it should be a requirement that every Ajax plug-in or widget can consume XML. If you’re using ASP.NET 2.0 and you want to use the jQuery autocomplete plug-in, you’ll notice this isn’t an easy task, since the default plug-in requires a flat file with a list of values separated by a new row (“\r”). To support this natively you can use an ASHX handler, but I wanted to use my existing web services. To get around the problem, I created a “parseXML” function and added an if statement to the $.ajax results function.

The change really came down to adding an if statement for dataType = “xml” and adding a new property, datakey, to support parsing of the XML. This way, I can parse ASMX 2.0 web service calls or static XML files.

[UPDATE] There is a catch with “STATIC” XML data: the default plug-in is based on the assumption that your results (aka XML data) will only contain filtered matches. If your data contains all possible results, you’ll need to filter the data… See comments for instructions! The XML parsing is done using jQuery’s built-in XML parser, so you don’t have to worry about dealing with browser-specific parsers.

Here is the modified request function and the new parseXML function that I added to the “jQuery.autocomplete.js” plug-in:

function request(term, success, failure) {
	if (!options.matchCase)
		term = term.toLowerCase();
	var data = cache.load(term);
	// retrieve the cached data
	if (data && data.length) {
		success(term, data);
		// if an AJAX url has been supplied, try loading the data now
	} else if ((typeof options.url == "string") && (options.url.length > 0)) {

		var extraParams = {
			timestamp: +new Date()
		};
		$.each(options.extraParams, function(key, param) {
			extraParams[key] = typeof param == "function" ? param() : param;
		});

		$.ajax({
			// try to leverage ajaxQueue plugin to abort previous requests
			mode: "abort",
			// limit abortion to this input
			// port: "autocomplete" + input.name,
			dataType: options.dataType,
			url: options.url,
			data: $.extend({
				q: lastWord(term),
				limit: options.max
			}, extraParams),
			success: function(data) {
				var parsed = [];

				// Added Logic by Zach
				// If dataType = "xml" use the parseXML function
				if (options.dataType == "xml") {
					parsed = parseXML(data);
				}
				else {
					parsed = options.parse && options.parse(data) || parse(data);
				}
				cache.add(term, parsed);
				success(term, parsed);
			}
		});
	} else {
		// if we have a failure, we need to empty the list -- this prevents the [TAB] key from selecting the last successful match
		select.emptyList();
		failure(term);
	}
};

// Added Logic by Zach
// Added XML Parse Function
function parseXML(xml) {
	var parsed = [];

	$(xml).find(options.datakey).each(function() {
		parsed[parsed.length] = {
			data: [$(this).text()],
			value: $(this).text(),
			result: options.formatResult && options.formatResult($(this).text(), [$(this).text()]) || [$(this).text()]
		};
	});
	return parsed;
}

In order to use this new functionality, you’ll need to add the following options when you bind to an XML data source.

    $(document).ready(function() {
        $("#customerCode").autocomplete("Customers.xml", { dataType: "xml", datakey: "customer" });
    });

Here is the sample XML file used in the autocomplete instance above.

<?xml version="1.0" encoding="utf-8" ?>
<CUSTOMERS>
    <CUSTOMER>Albert</CUSTOMER>
    <CUSTOMER>Allen</CUSTOMER>
    <CUSTOMER>Brandy</CUSTOMER>
    <CUSTOMER>Brink</CUSTOMER>
    <CUSTOMER>Cathy</CUSTOMER>
    <CUSTOMER>Candy</CUSTOMER>
    <CUSTOMER>Crystal</CUSTOMER>
    <CUSTOMER>Deon</CUSTOMER>
    <CUSTOMER>Dirk</CUSTOMER>
    <CUSTOMER>Erie</CUSTOMER>
    <CUSTOMER>Eve</CUSTOMER>
    <CUSTOMER>Francis</CUSTOMER>
    <CUSTOMER>Gina</CUSTOMER>
    <CUSTOMER>Harry</CUSTOMER>
    <CUSTOMER>Ingrid</CUSTOMER>
    <CUSTOMER>Janice</CUSTOMER>
    <CUSTOMER>Kelly</CUSTOMER>
    <CUSTOMER>Kylee</CUSTOMER>
    <CUSTOMER>Nathan</CUSTOMER>
    <CUSTOMER>Zachary</CUSTOMER>
</CUSTOMERS>

The only requirement for your ASMX web service is that your method signature must accept a string parameter called “q”. You can add additional parameters to your method; by default the autocomplete sends (q, limit, timestamp), but only “q” is required.
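For reference, here is a minimal sketch (not from the demo download; the class name and customer list are made up) of what such an ASMX method could look like. A string[] return value serializes as <ArrayOfString><string>…</string></ArrayOfString>, which is why the parser in the 5/14 edit below looks for “string” elements; the GET/POST protocols from the web.config note in the 5/19 edit must also be enabled.

// Illustrative sketch, not the demo code. Requires: using System;
// using System.Collections.Generic; using System.Web.Services;
[WebService(Namespace = "http://tempuri.org/")]
public class Autocomplete : WebService
{
    private static readonly string[] Customers =
        { "Albert", "Allen", "Brandy", "Brink", "Cathy", "Candy" };

    // The plug-in sends q (plus limit/timestamp); only q is handled here.
    [WebMethod]
    public string[] GetCustomers(string q)
    {
        List<string> matches = new List<string>();
        foreach (string c in Customers)
            if (c.StartsWith(q, StringComparison.OrdinalIgnoreCase))
                matches.Add(c); // filter server-side so only matches are returned
        return matches.ToArray();
    }
}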

Here is a link to the official jQuery autocomplete plug-in.

Here is a complete ASP.NET 2.0 Web Application sample with the modified autocomplete plug-in: jQueryAutocompleteASMX.zip

[EDIT 5/14] by Zach:

$(document).ready(function() {
	$("#textboxCustomer").autocomplete("/Autocomplete.asmx/GetCustomers", {
		parse: function(data) {
			var parsed = [];

			$(data).find("string").each(function() {
				parsed[parsed.length] = {
					data: [$(this).text()],
					value: $(this).text(),
					result: [$(this).text()]
				};
			});
			return parsed;
		},
		dataType: "xml", datakey: "string", max: 5
	});
});

Using the function above, you can replace the built-in autocomplete parse function with a dynamic parser. The only requirement is returning a string array, which is the same object returned by the default function. I think the original example is useful if you are using the jQuery plug-in with ASP.NET and .NET web services, but if I was only going to use the plug-in a few times… I’d use this solution. -Zach

[EDIT 5/19] by Zach:
Don’t forget that the following lines should be added to your web.config under “<system.web>” to support GET & POST requests for your web services.

<!-- Added WebServices Section -->
<webServices>
	<protocols>
		<add name="HttpGet"/>
		<add name="HttpPost"/>
	</protocols>
</webServices>

jQuery REST Demo Application

Over the past few weeks I’ve been trying to learn how WCF can be used for REST. I found lots of information, but I also found lots of broken and incomplete demonstrations. Being new to WCF/REST, I really needed a working demo to see what I was doing wrong in my setup. After using the mighty Google for a few days and reading a ton of blog posts, I was able to piece together a working solution. I then played for a few days to make sure I could consistently reproduce working code, and decided to build a working sample showing how to use WCF/ASMX services with ASP.NET 3.5 SP1 and jQuery. The demo is designed to give people basic instructions on what to set up to send and receive data between the browser and the server using jQuery.

Demos 1-4 show you the ASMX and WCF equivalents for doing the basics (Get String, Get Array, Send String, Send Array), and Demo 5 shows you how to use a WCF Data Contract with a Wrapped/Bare message format. To make things easier to see, I log all JSON requests/results to the web page for inspection. This also lets you see how requests are non-blocking and can be received out of order depending on processing time.

Demo Screen Shot

jQuery & JSON using ASP.NET

Download Demo Application

How to add an ASMX Web Service (ASP.NET Web Application w/ .NET 3.5 SP1 Framework)

  1. Add New Item – Web Service
  2. Un-comment “[ScriptService]”

That’s it, you’re done. To access your web service via jQuery, use the following syntax.

    $.ajax({
        type: "POST",
        url: "WebService1.ASMX/HelloWorld",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        data: "{}",
        success: function(res) {
        // Do your work here.
        // Remember, the results for a ASMX Web Service are wrapped
        // within the key "d" by default. e.g. {"d" : "Hello World"}
        }
    });
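For context, the service side is just the stock item-template code; the only change from step 2 is un-commenting [ScriptService] so ASP.NET serializes the response as JSON (wrapped in “d” as noted above):

// The ASMX side of the call above: the default "Hello World" template
// method with [ScriptService] un-commented, returning {"d":"Hello World"}.
// Requires: using System.Web.Services; using System.Web.Script.Services;
[WebService(Namespace = "http://tempuri.org/")]
[ScriptService]
public class WebService1 : WebService
{
    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World";
    }
}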

How to add a WCF Service (ASP.NET Web Application w/ .NET 3.5 SP1 Framework)

  1. Add New Item – WCF Service
  2. Right-click on the service and choose “View Markup”.
  3. Add the following line to the ServiceHost declaration: Factory=”System.ServiceModel.Activation.WebServiceHostFactory”
  4. Add a reference to the project for “System.ServiceModel.Web”.
  5. Open the interface associated with the service (e.g. IService1.cs) and add the following using statement “using System.ServiceModel.Web;”.
  6. Inside the interface, set the attributes on the method to support JSON & POST or GET request types:
     [OperationContract]
     [WebInvoke( Method = "POST", ResponseFormat = WebMessageFormat.Json)]
  7. Open the web.config and comment out the following lines:
  <!--<behaviors>
   <endpointBehaviors>
    <behavior name="Web.Service1AspNetAjaxBehavior">
     <enableWebScript />
    </behavior>
   </endpointBehaviors>
  </behaviors>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
  <services>
   <service name="Web.Service1">
    <endpoint address="" behaviorConfiguration="Web.Service1AspNetAjaxBehavior"
     binding="webHttpBinding" contract="Web.Service1" />
   </service>
  </services>-->

That’s it, you’re done. To access your web service via jQuery, use the following syntax.

    $.ajax({
        type: "POST",
        url: "Service1.svc/DoWork",
        contentType: "application/json",
        dataType: "json",
        success: function(res) {
        // Do your work here.
        }
    });

I found a lot of blog posts pointing to WCF Service Factory projects, which require a completely different setup. The sample above is for adding a WCF Service to a Web Application.
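To make steps 5-7 concrete, here is a minimal sketch of what the decorated contract and service could look like; the names follow the Visual Studio template defaults, and the return value is illustrative rather than taken from the demo download:

// Minimal sketch of the decorated contract/service from steps 5-7.
// Requires: using System.ServiceModel; using System.ServiceModel.Web;
[ServiceContract]
public interface IService1
{
    [OperationContract]
    [WebInvoke(Method = "POST", ResponseFormat = WebMessageFormat.Json)]
    string DoWork();
}

public class Service1 : IService1
{
    public string DoWork()
    {
        return "Hello from WCF"; // illustrative return value
    }
}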

While debugging this, I used Firebug for Firefox and Fiddler to see the Ajax & JSON data fired between the client and server. When you use Fiddler to debug a local website, you need to add a “.” after the domain name to make Fiddler catch the traffic (e.g. http://localhost.:1234/WebSite1). Hopefully the demo compiles and runs without problems, so you can see a quick 1-2-3 on how to get WCF up and running quickly!

I’m by no means an expert on these technologies, so if anybody has suggestions on what I could do to make this demo better… let me know! Hopefully seeing a working demo will help somebody else get a jump start on using jQuery to consume REST services.

Merge and Combine PDF’s in C#

I was working on an email invoicing application a few days ago, and I needed a way to consolidate multiple PDF invoices (~50-200 invoices per week). I wanted something that was free and worked with C#; I found PdfSharp, and after looking at the demos I came up with a quick solution. The process is really simple and can be summarized as: create a new empty PDF, then open each existing PDF and copy its pages into the new PDF. Here is a code snippet showing how I combined the PDF files, using a DataReader that has the source URL of each invoice:

	// ********************************************************************************************* //
	// This is just a simplified excerpt, I was looping over all the invoices from the previous week
	// by customer, and aggregating their PDF invoices into a single document.  In addition to the
	// generating the PDF, I used the DataReader to also generate a CSV file of their invoice details.
	// ********************************************************************************************* //

	// Namespaces needed by this excerpt: PdfSharp.Pdf (PdfDocument, PdfPage)
	// and PdfSharp.Pdf.IO (PdfReader, PdfDocumentOpenMode).

	// Combined PDF
	var customerPdf = new PdfDocument();

	// Hold copy of last document
	string lastDocumentUrl = string.Empty;

	while (dr.Read())
	{
		// Get current document
		string documentUrl = dr["InvoiceURL"].ToString();

		// Check if last document added is the same as current document
		if (lastDocumentUrl != documentUrl)
		{
			// Set the last document equal to the current document
			lastDocumentUrl = documentUrl;

			// Get the current file (e.g. \\\\\.pdf)
			string filePath = documentUrl;

			// Read in the existing document
			PdfDocument inputDocument = PdfReader.Open(filePath, PdfDocumentOpenMode.Import);

			// Get # of pages
			var count = inputDocument.PageCount;

			// Add each page to "Combined PDF"
			for (int idx = 0; idx < count; idx++)
			{
				PdfPage page = inputDocument.Pages[idx];
				customerPdf.AddPage(page);
			}
		}

	}
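One thing the excerpt doesn’t show is writing out the result; once the loop finishes, PdfSharp just needs a Save call (the output path below is only an example):

	// After the reader loop completes, persist the combined document.
	// The output path is illustrative.
	customerPdf.Save(@"C:\Invoices\CombinedInvoices.pdf");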

Tracking User Statistics with ColdFusion

I love to dabble in new things. About 3 months ago I started helping a friend fix up their ColdFusion 7 website. I noticed a bunch of things I consider important missing from the site and was curious about the user demographics/stats (who are they, what browser are they using, what is the max resolution we can use on the site, etc…). After talking a bit, I decided to implement the following items to gather the data:

  1. Implement AWStats to monitor the website logs.
  2. Implement a custom internal stat tracker to gather info about users.

Here are the fields we required:

  1. User Name
  2. Browser Type & Version (e.g. IE5, IE6, IE7, FF3, Chrome ???)
  3. Operating System (e.g. Linux, Mac, Windows)
  4. Default Language Supported (e.g. en, jp )
  5. Screen Resolution (e.g. monitor resolution 1024×768, 1280×1024, or 800×600)
  6. Screen Colors (e.g. 16bit, 24bit, 32bit)
  7. IP Address
  8. URL (with only the first query string parameter, since he uses FuseBox)
  9. Date
  10. Time

Some of the variables in our list are not available in ColdFusion; they are browser-based data that the DOM exposes via JavaScript. To get the data to the server, I created a page called “count.cfm” that accepts query string parameters with the data from the client. This is just JS 101; as far as I know there is no other way to get client-side information about the user to the server. There are a lot more stats you can obtain from the client; to find them, look at the JavaScript “screen” and “navigator” objects. I started to stub out a little extra in my code below (e.g. navigator.javaEnabled); if you want to use this data, you’ll need to add the logic to include it in the GET request to “count.cfm”.

    var file = '/fusebox/count.cfm';
    w = screen.width;
    h = screen.height;
    v = navigator.appName;
    j = navigator.javaEnabled();

    if (v != 'Netscape') {
        c = screen.colorDepth;
    }
    else {
        c = screen.pixelDepth;
    }

    info = 'res=' + w + 'x' + h + '&js=' + j + '&col=' + c;

    document.write('<img style="display: none;" src="http://www.zachhunter.net/blog/' + file + '?' + info + '" alt="" />');

After the data is received by the “count.cfm” page, I normalize it by replacing some of the values with foreign keys. This keeps the stat log normalized and decreases the table size.

function LogStats()
{
if(isDefined("session.userid"))
{
    UserId = session.userid;
    UserName = session.username;
    AuthorityLevel = session.authoritylevel;
}
else
{
    UserName = "Anonymous";
    UserId = "";
    AuthorityLevel = "";
}

OS = getOs(CGI.HTTP_USER_AGENT);
BrowserType = getBrowser(CGI.HTTP_USER_AGENT);

if(isDefined("CGI.HTTP_ACCEPT_LANGUAGE"))
{
    Language = CGI.HTTP_ACCEPT_LANGUAGE;

    if(Language.IndexOf(";") GT 0)
    {
        Language = Mid(Language, 1, Language.IndexOf(";"));
    }
}
else
{
    Language = "(Unknown)";
}

IP = CGI.REMOTE_ADDR;
LogDate = DateFormat(Now(),"mm/dd/yyyy");
LogTime = TimeFormat(Now(), "hh:mm");

if(isDefined("CGI.HTTP_REFERER"))
{
    Source = CGI.HTTP_REFERER;

    if(Source.IndexOf("&") GT 0)
    {
        Source = Mid(Source, 1, Source.IndexOf("&"));
    }
    }
}
else
{
    Source = "(Home Page)";
}

if(isDefined("CGI.HTTP_REFERER"))
{
    Referrer = CGI.HTTP_REFERER;
}
else
{
    Referrer = "";
}

if(isDefined("url.res"))
{
    Resolution = url.res;
}
else
{
    Resolution = "(Unknown)";
}

if(isDefined("url.js"))
{
    JavaScriptEnabled = url.js;
}
else
{
    JavaScriptEnabled = "(Unknown)";
}

if(isDefined("url.col"))
{
    Colors = url.col;
}
else
{
    Colors = "(Unknown)";
}

IdOs = GetIdOS(OS);
IdColors = GetIdColors(Colors);
IdRes = GetIdRes(Resolution);
IdBrowser = GetIdBrowser(BrowserType);
IdLang = GetIdLang(Language);
IdPath =  GetIdPath(Source);

Insert = "SET NOCOUNT ON;
          INSERT INTO tbl_Stats_Stats
          (username, [date], [time], ip, osid, colorid, browserid, resid, pathid,languageid)
          VALUES
          ('#UserName#', '#LogDate#', '#LogTime#', '#IP#', #IdOs#, #IdColors#, #IdBrowser#, #IdRes#, #IdPath#, #IdLang#);
          SELECT @@identity as LanguageId from tbl_stats_Languages";
InsertResults = cfquery(dsn="",sqlstring=Insert);
return InsertResults.LanguageId;
}

This probably took about 2 hours to throw together. I’m sure there is more that can be done to optimize the code, but this was my first time programming with ColdFusion. Additionally, since my script is targeted at an intranet application, I didn’t implement parameter validation, which is a must for external websites! The largest amount of work was building the logic to parse the USER_AGENT so I could determine the browser and OS. These are very important, since our goal was to determine how we could redesign without negatively impacting the users.

This was deployed by putting the JS into the footer template (footer.cfm). The database script is included in the source code download, along with a data reset script.

Hopefully the logic or code can help you solve your problem.

Cold Fusion Web User Statistics

ASP.NET Mixing Forms & Windows Authentication

This logic can be used to give you a SINGLE SIGN-ON / AUTO-LOGIN solution with ASP.NET 2.0 or higher, using a combination of authentication providers on IIS 5.1 (Windows 2000). The demo uses C# and does require a network administrator account to make the magic work. I’ve tried using other accounts, but was only able to get this to work with a “Domain Administrator” account.

I tried so many solutions (an application sub-folder using Windows auth, a web service, every possible IIS 5.1 setting, a WMI query) and nothing worked. After trying to figure this out for a few weeks, I realized that I already had a single sign-on solution in use for our SonicWall content filtering appliances. I looked into how it was being done and decided to put the same logic in the Page_Load event of my login control.

After an hour of playing around, I had a working ASP.NET single sign-on solution. Using impersonation, I create a thread that runs under a domain administrator account (a requirement, which the SonicWall solution shares) to perform a remote call to the client workstation and obtain the currently logged-on user(s). The magic is done with “netapi32.dll”: by passing the NetWkstaUserEnum method a hostname, you’re able to obtain an array of the users logged into that computer.
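For readers curious what sits behind the NetWorkstationUserEnum helper used below, here is a minimal P/Invoke sketch of NetWkstaUserEnum. This is my own reconstruction, not the code from the download; the sketch uses info level 0 (bare user names), while the sample presumably uses level 1, which also exposes the logon domain for the “DOMAIN\user” strings.

// Minimal P/Invoke sketch of netapi32's NetWkstaUserEnum (info level 0).
// Reconstruction for illustration; not the downloadable sample's code.
// Requires: using System; using System.Runtime.InteropServices;
public class NetWorkstationUserEnumSketch
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    private struct WKSTA_USER_INFO_0
    {
        [MarshalAs(UnmanagedType.LPWStr)]
        public string wkui0_username;
    }

    [DllImport("netapi32.dll", CharSet = CharSet.Unicode)]
    private static extern int NetWkstaUserEnum(
        string servername, int level, out IntPtr bufptr, int prefmaxlen,
        out int entriesread, out int totalentries, ref int resume_handle);

    [DllImport("netapi32.dll")]
    private static extern int NetApiBufferFree(IntPtr buffer);

    // Enumerate users logged on to a remote host; empty array on failure.
    public string[] ScanHost(string host)
    {
        IntPtr buf = IntPtr.Zero;
        int read, total, resume = 0;
        try
        {
            // -1 = MAX_PREFERRED_LENGTH (let the API allocate the buffer)
            int status = NetWkstaUserEnum(host, 0, out buf, -1, out read, out total, ref resume);
            if (status != 0)
                return new string[0];

            string[] users = new string[read];
            int size = Marshal.SizeOf(typeof(WKSTA_USER_INFO_0));
            for (int i = 0; i < read; i++)
            {
                WKSTA_USER_INFO_0 info = (WKSTA_USER_INFO_0)Marshal.PtrToStructure(
                    new IntPtr(buf.ToInt64() + i * size), typeof(WKSTA_USER_INFO_0));
                users[i] = info.wkui0_username;
            }
            return users;
        }
        finally
        {
            if (buf != IntPtr.Zero)
                NetApiBufferFree(buf); // always release the API-allocated buffer
        }
    }
}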

protected void Page_Load(object sender, EventArgs e)
{
     if (!IsPostBack) { // Attempt Autologin
         string account = String.Empty;
         string domain = String.Empty;
         string user = String.Empty;
         string email = String.Empty;

         try
         {

             ImpersonateUser iu = new ImpersonateUser();
             if (iu.impersonateValidUser("", "", ""))
             {

                 NetWorkstationUserEnum nws = new NetWorkstationUserEnum();
                 string host = nws.DNSLookup(Request.UserHostAddress);

                 string[] users = nws.ScanHost(host);

                 if (users.Length > 0)
                 {
                     account = users[users.Length - 1];
                     domain = account.Substring(0, account.IndexOf("\\"));
                     user = account.Substring(account.IndexOf("\\") + 1, account.Length - account.IndexOf("\\") - 1);
                 }

                 iu.undoImpersonation();
             }
         }
         catch (Exception logex)
         {
             Log.Debug("Autologin Failure: " + logex.Message);
         }

         if (!String.IsNullOrEmpty(account))
         {
             Log.Info("Account: " + account);

             if (domain.ToUpper() == "")
             {
                 email = user.ToLower() + "@";
                 String strRole = AssignRoles(email);

                 if (!String.IsNullOrEmpty(strRole))
                 {
                     FormsAuthentication.Initialize();

                     //The AddMinutes determines how long the user will be logged in after leaving
                     //the site if he doesn't log off.
                     FormsAuthenticationTicket fat = new FormsAuthenticationTicket(1,
                         email, DateTime.Now,
                         DateTime.Now.AddDays(7), true, strRole,
                         FormsAuthentication.FormsCookiePath);

                     HttpCookie ck;
                     ck = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(fat));
                     ck.Expires = fat.Expiration;
                     ck.Path = FormsAuthentication.FormsCookiePath;

                     Response.Cookies.Add(ck);
                     Response.Redirect(FormsAuthentication.GetRedirectUrl(email, true));
                 }
                 else
                 {
                     Log.Info("Unable to create FAT, user does not have any roles assigned.");
                 }
             }
         }

     }
}

I always take the last user in the list, since the machine account (“domain\machinename$”) is always first. I’ve not had it return the wrong person yet. Even if a user is not automatically logged in, they can still manually enter their login/password to access the site.

** On my development machine, when I checked the array results I would see a bunch of accounts that were generated each time I ran VS. Each time I ran my demo application, I’d see a different account (like ASPNET) as the last user. I think this has something to do with the threads VS uses for the built-in web server.

To keep my original Forms Authentication solution in place, I parse the account (e.g. DOMAIN\User => User) and check the Forms authentication database for the user’s name, skipping the password check. If they exist, I create a FormsAuthenticationTicket and the process is complete.

** This code is part of a C# ASP.NET Website Project. I’ve tested this code on a few websites, all using C#, and it does work. The caveat of requiring a high-security account is a pain, but it works. If anybody knows of another way to pull this off, let me know…

By default, the login control is only displayed when a user is not authenticated. This is because the auto-login logic is hooked onto the module’s load event.

This solution is currently working on Windows 2000 with IIS 5.1, for a portal developed with ASP.NET 2.0 and C#. The portal uses forms authentication but exists on an internal domain. The site has a few hundred users; some are remote agents that connect via VPN and are not part of the domain.

This has made a huge difference, since we now set the company default web page to the site, knowing users will be able to see everything because we’ve automatically logged them in. The first day I made this change, I got 3 emails about some major changes users saw on the site. I found this really funny, since we didn’t make any changes to the content… it just so happened that this was the first time they had ever logged in!

Good References for the solution:

http://www.codeproject.com/KB/IP/LoggedOnUsersPart2.aspx (had a good note about security)

Other Notes:

I found another solution saying that a WMI query should be able to obtain the same data, but when I tried this using impersonation it did not work. I read that WMI queries, even when run via impersonation, are limited by the ASP.NET worker process thread. I can’t explain the details, but I did find two posts about this and took them at face value.

You can download the C# ASP.NET code here. This is an excerpt from my project, but it’s working code. If anybody else has a working solution that does not require a domain admin account, let me know. Having a Single Sign-On (“SSO”) solution for an ASP.NET intranet site is awesome!

FormsWindowsMixxed_Source.zip