Inbound Actions

Managing inbound actions can be a nightmare. I often see the out-of-box inbound actions duplicated and modified. The duplication itself is not really the problem, because we do require three inbound actions per table we want to handle inbound emails for.

The problem is that nobody seems to consider centralizing the code so it only needs to be maintained in one place. In the worst example I have seen, the same code had been duplicated 200 times, and if for some reason there is a change in how the emails should be registered, all 200 inbound actions need to be updated.

So I have designed a model for how I propose new inbound actions should be managed.

The code is based on the out-of-box inbound actions for creating and updating incidents (“Create Incident”, “Update Incident” & “Create Incident (Forwarded)”).

The new inbound action

This is how I would create the inbound action for new emails:

(function runAction( /*GlideRecord*/ current, /*GlideRecord*/ event, /*EmailWrapper*/ email, /*ScopedEmailLogger*/ logger, /*EmailClassifier*/ classifier) {

  new global.InboundActionHelper().newEmail(current, event, email, logger, classifier);

})(current, event, email, logger, classifier);

The update inbound action

(function runAction( /*GlideRecord*/ current, /*GlideRecord*/ event, /*EmailWrapper*/ email, /*ScopedEmailLogger*/ logger, /*EmailClassifier*/ classifier) {

  new global.InboundActionHelper().replyEmail(current, event, email, logger, classifier);

})(current, event, email, logger, classifier);

The forward inbound action

(function runAction( /*GlideRecord*/ current, /*GlideRecord*/ event, /*EmailWrapper*/ email, /*ScopedEmailLogger*/ logger, /*EmailClassifier*/ classifier) {

  new global.InboundActionHelper().forwardEmail(current, event, email, logger, classifier);

})(current, event, email, logger, classifier);

The script include

All the code then lives in the script include. This code could be made even shorter: for example, a lot of the code in new and forward is identical and could be moved to a common function executed by both, reducing the redundant code and leaving only the code that really is specific to new, forward and reply in the individual functions. A sketch of that refactoring follows right after the script include.

var InboundActionHelper = Class.create();
InboundActionHelper.prototype = {
  initialize: function () {
  },

  newEmail: function (current, event, email, logger, classifier) {
    current.caller_id = gs.getUserID();
    current.comments = "received from: " + email.origemail + "\n\n" + email.body_text;
    current.short_description = email.subject;
    current.category = "inquiry";
    current.incident_state = IncidentState.NEW;
    current.notify = 2;
    current.contact_type = "email";

    if (email.body.assign != undefined)
      current.assigned_to = email.body.assign;

    if (email.importance != undefined) {
      if (email.importance.toLowerCase() == "high") {
        current.impact = 1;
        current.urgency = 1;
      }
    }
    current.insert();
  },

  replyEmail: function (current, event, email, logger, classifier) {
    gs.include('validators');

    if (current.getTableName() == "incident") {
      current.comments = "reply from: " + email.origemail + "\n\n" + email.body_text;

      if (gs.hasRole("itil")) {
        if (email.body.assign != undefined)
          current.assigned_to = email.body.assign;

        if (email.body.priority != undefined && isNumeric(email.body.priority))
          current.priority = email.body.priority;
      }

      current.update();
    }
  },

  forwardEmail: function (current, event, email, logger, classifier) {
    current.caller_id = gs.getUserID();
    current.comments = "forwarded by: " + email.origemail + "\n\n" + email.body_text;
    current.short_description = email.subject;

    current.category = "inquiry";
    current.incident_state = IncidentState.NEW;
    current.notify = 2;
    current.contact_type = "email";

    if (email.body.assign != undefined)
      current.assigned_to = email.body.assign;

    current.insert();
  },

  type: 'InboundActionHelper'
};
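
Here is a rough sketch of that refactoring, built only from the code above. The helper name _populateFromEmail is my own and not part of the out-of-box actions:

  // Shared field mapping for new and forwarded emails (hypothetical helper).
  _populateFromEmail: function (current, email, commentPrefix) {
    current.caller_id = gs.getUserID();
    current.comments = commentPrefix + email.origemail + "\n\n" + email.body_text;
    current.short_description = email.subject;
    current.category = "inquiry";
    current.incident_state = IncidentState.NEW;
    current.notify = 2;
    current.contact_type = "email";

    if (email.body.assign != undefined)
      current.assigned_to = email.body.assign;
  },

  // newEmail and forwardEmail then only keep what is specific to each case.
  newEmail: function (current, event, email, logger, classifier) {
    this._populateFromEmail(current, email, "received from: ");

    if (email.importance != undefined && email.importance.toLowerCase() == "high") {
      current.impact = 1;
      current.urgency = 1;
    }

    current.insert();
  },

  forwardEmail: function (current, event, email, logger, classifier) {
    this._populateFromEmail(current, email, "forwarded by: ");
    current.insert();
  },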

Why do this

As you can see, the inbound actions are very similar, and since ServiceNow has initialized current with the target table, you can use that to determine what else you need to do, for example if the email comes from a specific sender, was sent to a specific address, or contains certain words.
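
For example, newEmail could start like this; the second table name and the keyword are only illustrations and not part of the model:

  newEmail: function (current, event, email, logger, classifier) {
    // current is already initialized with the target table of the inbound action
    if (current.getTableName() == "incident") {
      // incident specific handling goes here
    } else if (current.getTableName() == "sn_customerservice_case") {
      // case specific handling goes here
    }

    // illustration only: raise the urgency when the subject contains a certain word
    if (email.subject.toLowerCase().indexOf("outage") > -1)
      current.urgency = 1;

    // ... the rest of the field mapping and current.insert() as shown above
  },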

All the code is centralized in the script include so it will be much easier to make a general change to all inbound actions just by editing in one place.

Another advantage is that it becomes easier to implement a method for testing inbound actions in a test instance instead of in prod. A customer often has several email addresses, and the cases created are often assigned based on the address the email was sent to. This can only truly be tested in prod, as there is only one mailbox and it is redirected to the prod instance. Instead, one could implement a test model that simulates which address the email was sent to by looking at who sent it: if user A sent the email, treat it as if it was sent to email1@domain.com; if user B sent it, treat it as if it was sent to email2@domain.com; and so on.
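
A minimal sketch of such a helper in the script include; the property name and the addresses are made up for illustration:

  // Hypothetical test mode: translate the sender into a simulated "sent to"
  // address so address-based routing can be tested on a sub-prod instance.
  _getSimulatedRecipient: function (email) {
    if (gs.getProperty("x.inbound.test_mode", "false") != "true")
      return email.direct; // normal behaviour: use the real recipients

    var map = {
      "user.a@domain.com": "email1@domain.com",
      "user.b@domain.com": "email2@domain.com"
    };

    return map[email.origemail] || email.direct;
  },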

Suggestions

Suggestions are welcome. I hope developers will start adopting this idea/model so customers can have a manageable set of inbound actions.

Widget became slow after upgrading from Paris to Quebec

Applies to Quebec and Rome releases.

Yesterday I learned something surprising about Service Portal widgets. Maybe other developers already know this, but it was a surprise to me, even though I have worked with the Service Portal since it was released. The problem first appeared after my PDI was upgraded from Paris to Rome.

I am building a Service Portal application with multiple widgets that communicate via an AngularJS service. When a button is clicked in one widget a message is sent, and the other widgets act accordingly, for example by showing or hiding information.

I started this on a Paris PDI before Quebec was released and everything was fine. Then, because of other work, I did not continue until after my PDI was upgraded to Rome, and the problem appeared: one widget (out of ten) suddenly reacted very slowly to messages. The HTML would not update immediately; it could take up to about one minute.

I spent many hours debugging and commenting code out to try to isolate what was causing this. Finally I found the answer in a Stack Overflow question that mentioned $scope.$apply(). I had never heard of it and had never noticed it used in an OOB widget, although it turns out it is used in about 10 of them.

The AngularJS documentation says

$apply() is used to execute an expression in AngularJS from outside of the AngularJS framework.

LINK

So of course I immediately added this simple line of code and voila, the widget in question updated fast again. A simple solution that took many hours to discover.

Consider the following code in the client controller part of a widget. It subscribes to a service and waits for a specific message. When that message arrives it updates a variable in $scope that is displayed in the HTML with the {{}} binding syntax.

$(document).ready(function () {

  var removeListener = someService.addListener(function (action) {

    if (action.action == 'doSomething') {
      $scope.field = action.someData;
      $scope.$apply();
    }

  });

});

Without $scope.$apply() the HTML update can take as long as one minute on Quebec or later releases. With the line it is updated immediately.

I have confirmed that Quebec introduced changes to the Service Portal that make this necessary. I loaded my application on a Paris PDI and ran the widget without the $apply(), and it worked as expected with no delays. Then I upgraded the PDI to Quebec, made no changes to the widget, and now it was extremely slow. Adding the $apply() made the widget fast again. To re-confirm, I upgraded the PDI to Rome and saw the same behavior.

So the question is when to use $apply(). The documentation says: whenever you update $scope from outside a digest. A simple way to find out is to add it whenever you update $scope; if it is not necessary you will get an error in the console stating that a digest is already in progress, so $apply() is not required.
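
If you want to avoid that error entirely, a common community pattern is to only call $apply() when a digest is not already running. This is a sketch using the AngularJS internal $$phase flag; the helper name is my own:

function safeApply(scope, fn) {
  // only start a new digest when one is not already in progress
  if (scope.$$phase || scope.$root.$$phase) {
    fn();
  } else {
    scope.$apply(fn);
  }
}

// usage in the listener from the example above
safeApply($scope, function () {
  $scope.field = action.someData;
});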

As I mentioned, this call is not widely used in OOB widgets, which is why I never ran into it. I usually check out new widgets when a new release is available to learn about new features in the portal.

Containerizing MID Server

From the Rome release it is possible to run MID Servers in containers.

This means that maintaining and deploying extra MID Servers can now be done very fast.

Official documentation HERE.

In the old days setting up multiple MID servers on the same host would require going through the installation multiple times. Now you just build the MID server image one time for the relevant ServiceNow release and push it to a repository such as Docker Hub.

In the following I will describe how I set up an image for running multiple MID Servers.

Step one: Download the correct recipe from your ServiceNow instance. I chose the Linux recipe as I could not get the Windows recipe to work; it might be an issue only with the first release of Rome.

Step two: Unzip the downloaded zip file and, if you like, rename or move it to a suitable folder.

Step three: Build. Now begins the fun stuff. My preferred way is to open the folder with Visual Studio Code and then open a terminal window. You can also use a command prompt or PowerShell window.

Then run the command: docker build . --tag dockerid/midserver-rome

Replace "dockerid" with your Docker Hub ID if you want to push the image to the hub.

Building the first time will take a while, so go grab a cup of coffee and enjoy the fact that you are going to save time setting up MID Servers in the future 🙂

Step four: Set up an env file with the parameters to start the MID Server so it can connect to your instance.

Filename: mid1.env

Add following parameters in the file and save:

MID_INSTANCE_URL=https://your-instance-name.service-now.com
MID_INSTANCE_USERNAME=user id set up with MID Server roles
MID_INSTANCE_PASSWORD=password for userid
MID_SERVER_NAME=the name you will see inside ServiceNow

Step five: Run. Then you can start the MID server with

docker run --env-file mid1.env dockerid/midserver-rome


After a few minutes you can see the server in ServiceNow. You validate it in the usual way.

To run a second MID Server, just copy mid1.env to mid2.env, adjust the server name, and then run the container again with this file as the env file instead.

Step six: Push to docker hub.

First log in with docker login

Then run docker image push dockerid/midserver-rome

And now you can run it on other hosts.