Monday, April 23, 2012

Vote For Leon. Shining Like A Shaft of Gold When All Around is Dark

It is that time of year again when the vague promise of a vote will guarantee a free beer/lunch/shoe shine.

If you find my random mental eruptions influential, feel free to vote: http://www.dynamicsworld.co.uk/top-100-voting-page-5/ If not, vote for everyone else.

If I make the cut, I promise to continue to write handy tips and tricks, do the odd book review (hint, hint publishers) and highlight why the rudderless SalesForce ship is lurching unsteadily onto the rocks of bankruptcy and ruin, while the SalesForce executives jump into their golden lifeboat crafted lovingly from relentless share sales. If you are a SalesForcer and are gnashing your teeth, that means I’ve influenced you so do the right thing. If you are laughing because you know it is true, vote for me as a beacon of light and purity in a world of marketing hype and hyperbole.

Otherwise vote for Matt Wittemann. He’s a good bloke and will keep me in Mac and Jacks on my next visit to Seattle if he gets up Winking smile

Outlook’s Best Kept Secret: Shortcuts

I am quite a power user of Outlook and, given the tight integration Dynamics CRM has with Outlook, I often find myself poking around in people’s inboxes. One feature which is often overlooked, and which is indispensable to me, is Outlook shortcuts. If you loved using Outlook favourites with CRM but have discovered this is no longer an option in 2011 (short of registry hacking), this is a great alternative.

If you are unfamiliar with Outlook shortcuts, it is the little image icon at the end of the other icons.

image

Clicking on it shows a big screen of nothing.

image

Not impressive so far.

So What Is It Good For?

Essentially, you can add any folder as a shortcut. This can be a search folder, an RSS feed, a Calendar, a CRM folder or anything else within the Outlook folder structure. No more jumping around to find the right information.

Configuring Shortcuts

To configure them, you create a group and then add shortcuts to it. Right-click on an existing group to do this.

image

Most of the options are quite self-explanatory, so I will not go into more detail.

Once set up, you will have the dozen or so folders you reference in a day with nothing else to get in the way. Once you have added a new group, you can even delete the default Shortcuts group.

The only other trick is making Outlook start in shortcuts by default. This is achieved by right-clicking the icons at the bottom and selecting Navigation Pane Options.

image

Move Shortcuts to the top and you’re done.

Here is a shortcut group I have prepared combining Outlook and CRM elements.

image

Combined with Outlook rules, it becomes easy to route e-mails to different folders and display them for review. Then, with Quick Steps, moving e-mails out of the folders to the right archive folder once they have been read is as simple as a click.

image

Here are the shortcuts on my work machine (no CRM in this one as it is a new machine and I have not got around to installing the client yet).

image

In this case I am combining folders from two different sources, my work e-mail server and my personal e-mail server (yes, I am a little behind in my personal e-mail), and splitting them into two shortcut groups. No more browsing through two trees to find the folders you want. Shortcuts include e-mail folders, tweet lists (via TwInbox), my tasks, RSS feeds (via searches to further refine what is returned to only those which are unread) and my calendar.

Conclusions

Using Outlook shortcuts is a great way to display and access the information you need. In combination with Outlook rules and Quick Steps, it becomes a case of clicking a folder, reading an e-mail and clicking a Quick Step to pass it on to the right archive. If, like me, you are looking to move to a slate/tablet world, this combination essentially eliminates the need to right-click or drag and drop and will save you significant time in reading and actioning your e-mails, checking your CRM Opportunities etc.

Monday, April 16, 2012

Moving To The Cloud Part Three: Shifting Data

For those that are still unsure, my last post, ‘Microsoft’s Polar Data Centres and World Wireless System’, was posted on 1st April with good reason. It was an April Fools’ posting. I am glad to say Microsoft took it in good humour so I am still an MVP, for now.

This post is the final part in a series of three about my experience of moving to the cloud. In part one I outlined my problem. Essentially, I was carrying around too much data (10 years of e-mails in a PST, 35G of historical data, much music) on my laptop and it was a data-loss disaster waiting to happen. Other frustrations included:

  • time being eaten up in doing regular backups and maintenance
  • having to haul my personal laptop everywhere to read my personal e-mail

I signed up to Office 365 with an intention of using this to solve some of my issues.

Part two talked about migrating my 12G PST file across to the Office 365 Exchange server. This was an excellent decision and, four months later, I have no regrets. I still have the niggle of the ‘on behalf of’ for mail sent out via Gmail but this pales next to the benefits. Mailbox usage is now at 7.4G

image

meaning I have gobbled up about 0.8G in four months and have about seven years to go before I reach my quota.
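As a rough sketch of the arithmetic behind that estimate (the 25G quota figure is my assumption, based on the Office 365 Exchange mailbox size of the time):

```python
# Rough projection of when the mailbox will hit its quota.
# The 25G quota is an assumption; usage and growth figures are from above.
quota_gb = 25.0
used_gb = 7.4
growth_gb_per_month = 0.8 / 4  # 0.8G consumed over four months

months_left = (quota_gb - used_gb) / growth_gb_per_month
print(f"About {months_left / 12:.1f} years until the quota is reached")
# → About 7.3 years until the quota is reached
```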

In this part I will talk about my movement of the 35G of data (plus music).

Going All-In With a Tablet

I live in Outlook. It is my one-stop shop for e-mails, RSS feeds and tweets. Everything is in the one spot and, with Outlook rules, filtering what is important from what is ‘nice to read’ is relatively simple. I often see tablet devices, such as Android tablets and the iPad, but, without Outlook, their use for me is limited. Sure, you can read e-mail and then, with another app, you can check your calendar, but the ‘dashboard’ of Outlook gives it to me in one hit. For me, it just works. I love the form factor of these devices but the OS kills me.

To this end, I decided to take the plunge on a PC tablet, aka a slate. This is a device which looks like an iPad but runs Windows. CSG, where I work, has an annual gadget allowance of $500 so, as long as I could find a tablet for $500 or less, I could test the waters and not be out of pocket.

This gave me the opportunity to test out a site I have known about for a while: AliExpress. This is a retail site for the slightly better-known Alibaba.com. Alibaba is a front-end for manufacturing factories in places like China. If you want something, and brand is not so important, you can buy it on Alibaba; from bookmarks and necklaces to high-end electronics, it is there. In the case of Alibaba, you deal directly with the manufacturer and, generally, you will need to buy multiple units. AliExpress deals mostly in single-unit purchases and offers an escrow service as standard. In other words, when you pay, AliExpress holds the money and, when you receive the item and confirm it is what you paid for, they release it to the manufacturer. You get the goods at factory prices and there is no risk; everyone wins.

In the end, for US$472 I got:

  • 10 inch touch-screen tablet running Windows 8 Consumer Preview edition
  • Intel N570 dual core processor
  • Wi-fi, bluetooth
  • 2G RAM and 2*32G solid state drives and a micro-SD expansion slot

The memory was a bit low but, of all the machines on Alibaba, it was either high memory or a multicore processor, not both. My laptop at the time had 4G of RAM but a single-core processor and it drove me nuts, so I was happy to try this configuration instead.

The transaction went without a hitch and I now use the tablet as my personal laptop. Generally I connect a USB mouse to it as my fingers are too large to interact with the screen at the higher resolutions, but this is fine and there are plenty of mini-mice on the market. In hindsight I should have chosen 3G capability over Bluetooth (it’s one or the other) but I can still tether to a device via USB so it’s no big deal, and wi-fi hotspots are becoming more and more commonplace.

So what does this have to do with moving the cloud? Well, a few things:

  • This is a no-brand machine direct from a factory in Guangzhou, China. Quality is not guaranteed.
  • It is running a beta operating system (Windows 8 Consumer Preview)
  • It has only got 64G of space

These factors mean I have a strong motivation NOT to store any data locally. For what it’s worth, after a couple of weeks, the hardware has had no issues and the OS has only blue-screened once. With the page file and search index files moved across to the second drive, and Office 2010 installed, I still had about 15G spare on the primary drive, so lots of room for programs.

Using the Right Cloud Solution For The Right Data

Another recent purchase was a Windows Phone, a Nokia Lumia 800. As well as the clear parallels in the interface between Windows Phone and Windows 8 Consumer Preview, another clear message is that SkyDrive is going to be a big part of the plan. If you are not familiar with SkyDrive, Microsoft offers 25G of storage space, in the cloud, for free. The ability to easily access SkyDrive is baked into Windows 8 and Windows Phone with the clear intention that this becomes the central store for all devices.

There are a few issues to consider with SkyDrive:

  • There is no local cache of the data, so if you have no internet, you have no data
  • File sizes are limited to 100M (this may be increased to 300M in the future)
  • It is really only designed to handle certain file types i.e. pictures, videos and office files

The last two issues can be overcome by using Gladinet. Gladinet is a front-end to many different cloud storage services, including SkyDrive. It will auto-chunk larger files to comply with the 100M limit, i.e. break them up for storage but still allow them to be reconstructed later, and it will allow any kind of file to be uploaded with no auto-conversion (something SkyDrive does to things like picture files).
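Gladinet’s internals are its own, but the auto-chunking idea is simple enough to sketch: split a file into pieces no larger than the limit, then concatenate the pieces back together later. A minimal illustration (the part-file naming scheme is my own invention, not Gladinet’s):

```python
CHUNK_SIZE = 100 * 1024 * 1024  # the 100M per-file limit

def split_file(path, chunk_size=CHUNK_SIZE):
    """Split a file into numbered chunk files no larger than chunk_size."""
    chunks = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            chunk_path = f"{path}.part{index:03d}"
            with open(chunk_path, "wb") as dst:
                dst.write(data)
            chunks.append(chunk_path)
            index += 1
    return chunks

def join_chunks(chunks, out_path):
    """Reassemble the chunk files (in order) back into the original file."""
    with open(out_path, "wb") as dst:
        for chunk_path in chunks:
            with open(chunk_path, "rb") as src:
                dst.write(src.read())
```

The key property is that the chunks carry no special format of their own; reassembly is just concatenation in order, so any storage service that can hold plain files can hold them.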

I got the Professional version through a scheme where you sign up for some online service in exchange for getting the software. In my case, I signed up for a trial online brain-training program and got the software for free. The Professional version made uploading my files much easier via the backup function.

image

After a couple of weeks, I had filled the 25G of space.

image

Via Gladinet, I can access this as a drive on my laptop (I have not installed Gladinet on the slate yet, but can still access SkyDrive easily) and, via WinDirStat, can browse this drive to see where all the space is being taken up (although, as you can see above, the online interface to SkyDrive is also quite friendly in this regard).

So What About Data You Need When Offline?

One aspect a lot of cloud zealots fail to cover is the issue of being offline. While SkyDrive is an excellent repository for archived data and things you need when online, there is some data you need offline. For example, I have a list of blog topics in a Word document. If I am on a flight with a few hours to spare, it is likely I will not have an internet connection but could use the time to write a blog article. This is where Mesh comes in. Mesh is part of Windows Live Essentials. It gives you 5G of additional storage which can be accessed in the cloud (think of it as a mini-SkyDrive) but it also syncs with a local folder on a PC or laptop.

image

I imagine SkyDrive and Mesh will eventually merge but, at this stage, this is not the case.

So, all up, this gives me 30G of data for free, all in the cloud, with the option of 5G of it being synced to a local folder. This is 5G short of my 35G of data, which means I can use a third-party product like Dropbox or a-drive, review whether I really need all my data, or wait until Microsoft offers additional SkyDrive storage for a price. As you can see above, almost 5G of my SkyDrive is taken up by ‘Multimedia’. This is photos and videos of the kids. There are plenty of sites out there which host this for free so I am not overly concerned.

What About The Music?

For my music collection I am using Google Music, now Google Play. This lets me upload up to 20,000 songs for free, all of which can be played via a browser. Although there is only an app for Android, I can happily play my music via Internet Explorer on my Windows Phone and pin the web page as a tile, turning it into a pseudo-app. Hopefully, Google will extend Google Play to cover home movies, meaning I will have a one-stop shop for my music and movies. For those of you outside of the USA, when you initially sign up for Google Play, Google checks whether you are connecting from a US machine. If not, you are not allowed to use the service. Using Tor servers and web proxies does not fool it. Therefore, you may need a friend in the USA to help you with the initial log-in. Once you set it up that first time, Google never checks your location again.

Conclusions

While it is still early days, I could not be happier with my decision to ‘go cloud’. E-mail, past and present, is available anywhere. My large backlog of data is only a browser away and the data which I need offline access to is auto-synced to the cloud and to all my other devices (personal slate, work PC, wife’s laptop etc.). Any one of my devices could catch fire tomorrow and I would not care. Also, moving to a new device is simply a case of telling it about my Office 365 account, giving it my Live ID (for SkyDrive) and setting up Mesh. All the associated file movements happen automatically.

If you are considering doing something similar, the barriers are quite low from a cost perspective (all of this is costing me just US$6 per month for e-mail and nothing for data/music) and there is nothing stopping you keeping your data in both SkyDrive and a local store to see how it goes. My recommendation is to give it a try and, like when you buy insurance, my prediction is you will feel a zen-like relief that all the worries are now someone else’s problem.

Sunday, April 1, 2012

Microsoft’s Polar Data Centres and World Wireless System

 

image

Introduction

This past month has been quite unique for me, with many messages going back and forth between myself and various Microsoft departments. Microsoft, being the reasonable people they are, have given me the green light to talk about one of their major projects. They were planning to announce it at the Worldwide Partner Conference (WPC) but are happy for some information to be leaked early as a bit of a teaser, i.e. what was talked about at the MVP Summit plus a few bits I have gathered from the net to put the pieces together.

How It All Began

Back in December, Microsoft announced the dates of the MVP Summit and sent out their usual pack of information about what products were going to be covered, the agenda, likely topics etc. My pack had an additional insert referring to a ‘special feedback session’ on the Wednesday (29 February). As I discovered a little later, only a few of us in the CRM MVP group had this. It also included a ‘special NDA’ agreement.

I am yet to discover how it happened but, in their rush to put the inserts in the pack, Microsoft appears to have used one of their standard journalist embargo documents as a template. The upshot being that, while the standard MVP NDA essentially says I can say nothing until Microsoft makes information public, this one just restricted public disclosure for 30 days, i.e. until the end of March.

Here is the first page of the document (I’ve blurred out the body as it mentions dates and deadlines for the project which Microsoft do not want to be made public yet.)

obscured nda

Essentially you signed the document either physically or online and you were good for the three-hour session on the Wednesday morning.

How Did the Session Go?

The session was attended by MVPs across various ‘Microsoft cloud products’, e.g. SQL/Windows Azure, Dynamics CRM etc. Many may not have realised the loophole yet but, once they see my article, I imagine they will all be writing a piece on their take on the session.

The plan was for key Azure staff to be present but the Azure Leap Year bug meant all were out of action. However, key staff on the project, including some who had been on site, were there to present. We heard from two speakers mainly, both from Microsoft’s research division; Dr. Charles Lippit who specialises in transmission technologies and Max Bosgrove who was one of the pioneers of the server container design used by Microsoft throughout its data centers.

So What Is The Big News?

In short, Microsoft is halfway through building two major data centers at the north and south poles, using the Earth’s magnetic field for transmission. The centers will be complete close to the time of WPC (July this year), thus the announcement timing.

Given the technical nature of the subject, Microsoft only invited MVPs with an established physics or electrical engineering background. Of the CRM MVPs, two of us qualified: myself (my university degree was in quantum mechanics) and George Doubinski (whose background is in nuclear physics).

Where Specifically Are They Located?

While the magnetic poles do shift over time, Microsoft has located the bases as close as they practically can to the magnetic poles’ current locations. In the Antarctic, this is Vostok Station. Its proximity to the south pole and existing magnetic research equipment there made it ideal. Drilling towards the magnetic pole, and also towards the essential thermal vents, had been started by the Russian researchers with drilling stopped last year with only a few hundred meters of ice to go (http://www.jalcnews.com/?p=3204).

In the Arctic, construction has begun on Ellesmere Island. The northern facility is starting from scratch, with learning from the construction of the southern base feeding into the design on the northern base.

How Will It Work?

Microsoft’s aim is for the stations to be self-operating, given the remote location. No ice trucks in Antarctica. Fortunately, Microsoft has already worked on a similar project with Antarctica New Zealand and extended it for remote management.

The container server farms used by Microsoft in their data centers were built to allow for scalability. A secondary advantage is that they are easy to ship and are ‘rugged-ised’, making them perfect for remote locations. Initially Microsoft is planning four containers at each location, giving a total of 8,000 servers. As Dynamics CRM Online runs on 1,000 servers (which seems incredible to me), this should be plenty for an initial trial. Data will be backed up to the other station for redundancy.

Obviously cooling will not be a problem and the low temperatures will allow the servers to operate very efficiently. The lack of moisture in the air and the high wind mean, in the case of Vostok Station, the air can be blown directly through the server cluster. The plan is to emulate a similar setup at Ellesmere. The containers are traditionally water-cooled, so making them air-cooled required some minor modification.

In terms of power, while Vostok has its own power station, this would need to be manned for refuelling, which rules it out. Both locations have geothermal sources so the plan is to use steam from these to generate power, as well as harness wind power where available. After a bit of digging on the internet, I found this research, with Microsoft’s Paul Allen as one of the investors. I am guessing this may be the source of the innovation. The heat removed from the servers will be recycled either to assist with steam generation or to heat the living quarters of the researchers.

In the case of the southern station, the thermal source lies underneath Lake Vostok. The plan is to work with the researchers to continue the drilling, ensuring the lake remains untouched, while accessing the heat source for power generation. In return, Microsoft is funding all research conducted on Lake Vostok by the governing body, the Scientific Committee on Antarctic Research (SCAR).

image

Ellesmere has sulphur springs which will be used in the same way, funding research and working with the University of Calgary's Arctic Institute of North America.

How Will Transmission Work?

This is the part of the project that piques my interest the most. In short, a large magnetic coil will be placed in the ground and be used to modulate the Earth’s magnetic field in much the same way a transmitter modifies a carrier wave to send a signal.

It turns out that the power required for this is quite small as the Earth is already generating the ‘carrier signal’ and the coil is merely fluctuating it to transmit data. While Microsoft believes they could cancel the magnetic field altogether, they see no benefit in this (and given the magnetic field of the Earth protects us from solar radiation via the Van Allen belt, that is probably a good thing). They also mentioned the ‘inductive feedback’ had a remote chance of reversing the magnetic poles but this was also dismissed as extremely unlikely.

The ‘signal’ then goes everywhere a compass works. While large iron and magnetite deposits can interfere with the signal, it is believed this will only affect a small fraction of urban areas.

Microsoft is referring to this as the WWS or World Wireless System. With exclusive access to the poles and a transmission system which can reach the world with minimal overheads, Microsoft sees this as their ‘next big thing’.

When Will The Project Be Complete?

This is the bit that Microsoft is very quiet about and thus the blurring of the NDA document. Construction can only really happen in the local summer, assuming weather conditions are right. This is why the Vostok data center compound is all but finished but Ellesmere is just beginning. Power generation via the thermal vents will likely be finalised next year but Microsoft is making no commitments on this.

Conclusions

My thoughts are that this is an example of everything that makes Microsoft the great organisation it is: innovative, game-changing design combined with collaboration with government bodies and research organisations to generate a positive outcome for all involved. While Microsoft’s heritage is with traditional, on-premise servers, this play, much like the Kinect play in the game console market, has the potential to put them in a commanding position in both the PaaS and SaaS markets.

Microsoft have created a web site for the WWS project with a lot more details which can be found here. Have a great day Winking smile

Saturday, March 24, 2012

Creating Icons for Dynamics CRM 2011

 

***STOP PRESS***

If you follow my tweets you will have seen the odd mention of a session at the MVP Summit regarding the new polar data centers Microsoft is rolling out. I am still under the embargo/NDA from Summit (and probably should not have mentioned as much as I have) but have finally been given the green light to talk about it now that Convergence is finished so this will be coming out next week. Serious game-changing technology on many levels and very exciting for Azure and CRM Online customers. Not to be missed. Now back to regular broadcasting…

The Problem

One of the key strengths of Dynamics CRM is the ability to go beyond the traditional domains of sales force automation, case management and direct marketing. To give an example, the last two projects I worked on involved:

  • Contract execution for an international transportation company i.e. sourcing the goods, sourcing the transport and ensuring all customs regulations were followed
  • Management of the committees and the associated projects in developing international standards for industry

Both of these were pure CRM systems, i.e. they used minimal third-party add-ons and required minimal code. Neither fell into the classic baskets for CRM systems but this is the norm for Dynamics CRM.

But I digress, as this is not a blog post about xRM. An inevitable consequence of going outside the traditional areas is that you must create additional entities to hold the data. Entities are the ‘record types’ or, for the more traditional of us, the additional database tables to hold different kinds of data (it is not as simple as one entity = one table in Dynamics CRM but the analogy is sound).

Creating the entity is easy, as is linking it to existing record types in CRM. You simply hit the new button and give it a name.

image

One of the trickier aspects is updating the icons for the new entity. Sure, there is the ‘Update Icons’ button on the top of the entity screen but what then?

Well the screen that pops up highlights the problem nicely.

image

For any given entity we need to create:

  • A PNG file of 16x16 pixels
  • An ICO file of 32x32 pixels
  • A PNG file of 66x48 pixels

but there is nothing, directly in the product, to help us do it.
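While nothing in the product helps, the three files are straightforward to generate yourself if you have Python handy. Here is a sketch using the third-party Pillow imaging library; the output file names, and the choice to pad rather than stretch the 66x48 image, are my own:

```python
from PIL import Image  # Pillow, the Python imaging library

def make_entity_icons(source_path, name):
    """Generate the three icon files CRM 2011 asks for from one source image."""
    img = Image.open(source_path).convert("RGBA")
    # 16x16 PNG for the web application grids
    img.resize((16, 16), Image.LANCZOS).save(f"{name}_16.png")
    # 32x32 ICO for the Outlook client
    img.resize((32, 32), Image.LANCZOS).save(f"{name}_32.ico", sizes=[(32, 32)])
    # 66x48 PNG for the entity form; pad onto a transparent canvas
    # rather than stretch, to keep the source image's aspect ratio
    canvas = Image.new("RGBA", (66, 48), (0, 0, 0, 0))
    thumb = img.copy()
    thumb.thumbnail((66, 48), Image.LANCZOS)
    canvas.paste(thumb, ((66 - thumb.width) // 2, (48 - thumb.height) // 2))
    canvas.save(f"{name}_66x48.png")
```

You would then upload the three files as Web Resources in the usual way.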

The Solution (almost)

For version 4, Tanguy (an all-round great guy) wrote an excellent article on how to use the Microsoft Demonstration Tool to automatically generate the icons and upload them to CRM. In fact the article was so good, these guys ripped it off a year later Winking smile

Despite our German plagiarists publishing the article under a 2011 blog, the tool does not quite work for CRM 2011. While you can connect the tool to an on-premise implementation, it will not connect to an online one. Also, while you can connect, you cannot directly upload.

So what do you do?

Well, assuming you have access to an on-premise implementation (perhaps you have a copy of the CRM demo VPC from a friendly partner) you can still create the icons.

First of all, download the Demonstration Tool program, run the exe and click the link at the top to connect.

image

The box that pops up asks for the usual login, password etc. Once in, click the Icon maker icon to take you to the icon maker page.

image

Click on the first Select Image… and add in a picture. Then click the ‘Use for all icons’.

image

The image above is taken directly from Tanguy (saved me firing up my VPC). He has chosen to click the ‘Add Background’ box as well. I generally don’t but it is up to you.

If this was CRM 4, at this point you would click ‘Publish to CRM…’, load up the icons and congratulate yourself on a job well done. No such joy for us CRM 2011 folk. We need to click the ‘Save…’ button. The usual file save box opens and, while it gives every indication that it is just saving a PNG file, it will in fact save three files, with semi-meaningful names (‘application’, ‘outlook’ and ‘form’), for manual uploading to CRM.

Loading up Icons

CRM 2011 treats images as Web Resources. So, in order to load an image up for our entity, we must add it as a Web Resource. Generally I do this on the fly. First, go to the entity and click on the ‘Update Icons’ button. Now click on the first lookup.

image

Click the ‘New’ button to add a new Web Resource. Click OK a lot and you will have your first image in CRM. Rinse and repeat for the other two lookups and you are done.

Conclusions

The process for adding custom icons to CRM has not changed significantly since, from memory, version 3, so it is a little clunky. If Dynamics CRM were getting a performance review from its boss, this would fall under the ‘needs improvement’ column (and I am sure it is on the list back in Redmond). However, assuming you have access to an on-premise install, you can generate the required icon files relatively painlessly with the version 4 Demonstration Tool. Download it and go from image to image. If you feel Microsoft should invest some time in improving the icon-updating experience, you can also drop by http://connect.microsoft.com and add a request. This list is the driver for new features in the product, so have your say.

Sunday, March 18, 2012

Using An ‘OR’ Condition In Workflows

The Problem

One of the problems with workflows is that, while you can group conditions with an OR in Advanced Find, you do not have the same ability within workflows.

Here are the advanced find grouping buttons.

image

Here is the workflow box for a ‘Check Condition’ step.

image

We can list multiple conditions but these are automatically grouped with an ‘AND’ clause i.e. all must be fulfilled.

This issue has come up in the forums in the past and, usually, I recommend the creation of two workflows, one for each condition. While this approximates an ‘OR’, it is a lot of work.

The problem with this solution is we now have two workflows to maintain which means more overhead.

An Alternative Solution

De Morgan’s Law tells us that ‘A OR B’ is logically equivalent to NOT (NOT A AND NOT B). So how does this help us?

Well, to deal with the outermost ‘NOT’ we use the ‘Default Action’, which gives us an ‘Otherwise’ for our ‘If-Then’.

image

Therefore, if we say ‘if (NOT A AND NOT B) then do nothing otherwise do x’ we have a winner.

So, in practice, let us say we want the workflow to trigger if either of two tickboxes is ticked. In an Advanced Find it is easy to find records where Tickbox A or Tickbox B has been ticked.

image

For our workflow it is going to read thus:

image

If we trigger this off the two fields changing then, if neither field is ticked, nothing happens, but if either or both are ticked, our action fires, which is what we want.

Doing the Maths

If you have a more complex problem, this is how to approach it.

First of all we have De Morgan’s Law: A OR B = NOT (NOT A AND NOT B)

Let us define our logical tests:

A: Tickbox A = Yes
B: Tickbox B = Yes

Therefore:

NOT A: Tickbox A = No
NOT B: Tickbox B = No

So Tickbox A = Yes OR Tickbox B = Yes is equivalent to NOT (Tickbox A = No AND Tickbox B = No) and this is expressed as we saw above using an if-then-otherwise step.
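The equivalence above is easy to verify exhaustively. A quick sketch that checks every combination of truth values, and also shows the law extends to more than two conditions:

```python
import itertools

# Exhaustively verify De Morgan's Law: A OR B == NOT (NOT A AND NOT B)
for a, b in itertools.product([False, True], repeat=2):
    assert (a or b) == (not (not a and not b))

# The same identity holds for any number of conditions:
# A OR B OR C == NOT (NOT A AND NOT B AND NOT C)
for values in itertools.product([False, True], repeat=3):
    assert any(values) == (not all(not v for v in values))

print("De Morgan's Law holds for all combinations")
```

The three-condition form is what lets the workflow trick scale past two tickboxes: negate every condition, AND them together in the Check Condition step and put the action in the Otherwise branch.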

Conclusions

Using this technique, while a little harder to read (although the comment line can help), we maintain just one workflow, saving on future administration. The technique is also applicable to any condition for which there is a straightforward ‘opposite’ and can be extended to multiple attributes, not just two, without having to resort to creating multiple, mostly identical workflows.

Saturday, March 10, 2012

Triggering a Workflow off a Field’s Initial Population

The Problem

When triggering workflows, there are seven options:

  • When the record is created
  • When the status of the record is changed
  • When the record is assigned (ownership changes)
  • When a field on the record changes
  • When the record is deleted
  • Manually (an on-demand workflow)
  • As part of another workflow (child workflow)

image

This covers a wealth of scenarios. However, there are times when we need more. Recently I worked on a project where we needed to trigger off the initial population of a field. The company was, in essence, a courier company. Customers booked international deliveries and, while the bookings were entered into the system when taken, it was only later that my client knew which plane was taking the goods.

Once the details of the plane were entered in the system, escalation checks needed to be put in place to ensure paperwork was completed on time, deliveries reached their destination on time etc.

The obvious choice was to trigger off a field change but, if the plane details changed prior to take-off, this would trigger two escalation workflows, as it is not possible in a workflow to check what the value was prior to the change. If the details changed a number of times, suddenly users would be receiving multiple notifications for the same delivery. This was undesirable.

The Solution

The first option in dealing with this scenario is to use a check box on the record. When the workflow is first triggered, the tickbox is ticked and future workflows, checking for the tickbox, no longer fire. Another related option is to have a field keep a copy of the previous value, managed through workflows. I outline this approach here.

This would have worked but there were a number of date fields I needed to trigger off (about ten in total) and the overhead of adding another ten checkbox fields, or extra fields and workflows, was a bit high for me.

In the end I settled on a different approach; instead of using the ‘field change’ event, which can be triggered multiple times, I used the ‘record creation’ event, which will only ever trigger once. The screenshot below shows the trigger on the ‘Vessel ETD’ field, the plane’s estimated time of departure (OK, so the project is on a v4 system, but the approach also works in 2011).

image

Here we are saying when the delivery contract is entered into the system, fire the workflow and then wait until the ‘Vessel ETD’ field is populated. When this happens, fire a number of ‘checks and measures’ workflows.

These child workflows then kick in. Here is one of them:

image

In this case, once the field is populated, we check the contract is a parent contract (the system allows for subcontracts on the same flight contract), double-check the field contains data (not really needed) and then set up a reminder a couple of weeks before departure.

The advantage of this approach is that we guarantee that the workflow will only trigger once and, if we have a number of checks and measures running off of the same field, it is quite easy to manage this through the use of child workflows stemming from the one trigger workflow.

The disadvantage of this approach is, if there is significant time between the creation of the record and the population of the trigger field, we have a workflow in a waiting state gobbling up processor cycles. This is more of an issue if we have a high volume of contracts, which I did not have in this case.

Conclusions

If you need to trigger off the initial population of a field but do not want to fire the same workflow when the field gets updated, there are a few approaches with workflows before resorting to code. Using a check box can work but can be tricky to manage and adds the overhead of a bunch of fields which are just used for workflow management. Adding a mini-audit function can also work but requires additional workflows and fields to be added to the system, again adding to the management overhead.

By triggering off of the creation of the record, and waiting for the field to be filled in before acting, we create an intuitive system with minimal overhead in terms of fields or workflows which can be easily adjusted as the processes of the business evolve.