A Case for Collective UX and User Interface Engineering

I recently read an article on Huffington Post that said group brainstorming is a waste of time. The author made a good case based on his years of experience in business psychology and research. But I respectfully disagree, as the article made generalizations that, at the very least, cannot be applied to all group brainstorming activities. I would argue that there are steps that can be taken to counter the problems outlined in that article. And I have seen them work.

In recent years, I have had the pleasure of being on multiple teams that built UI for awesome enterprise-grade products. On one of those teams, we simply decided that we would brainstorm the mock-ups of new features as a team. The operative word here is team. You see, the thinking was (and still is with me) that once functional features are defined, the team collectively owns the product. That code and its quality are the team’s responsibility. Well, then why stop short of including user experience as part of that? Should developers not care about the target audience of the software? Should developers be agnostic to the personas that their daily work is catering to?

I firmly believe they should. The era of mutually exclusive, agnostic specialty roles in software development is over. Companies are adopting development processes that hold the entire engineering team (in some cases, the whole company) responsible for the product, while the jobs of producing the artifacts (code, mock-ups, requirements, etc.) remain well defined and individually assigned. For example, engineers do not need to travel every week interviewing the customer base to understand personas, but they do need to understand what those personas are and what use cases the software serves. Over time, developers come to understand the users’ pains, become more empathetic to UX and avoid surprises during architectural refactors.

Based on that understanding, the team decided to do mock-ups for a new dashboard feature collectively. The UX Director had done a great job of laying out the personas, users and high-level stories. She had also coached us on the basics of UX. For a side project, I had personally learned how to conduct KJ sessions and usability test sessions. Armed with just white paper and pencils, all of us sketched mock-ups of what the page would look like, what data would be presented and how, state transitions, user messages, the whole nine yards. After a round, each one of us presented our mock-ups and explained the page.

Remarkably, our designs did not deviate drastically from each other’s. As it turned out, the final design naturally consisted of at least one idea from every engineer. I believe this happened because of the experience everyone brought to the table. All along, the UX Director was there to keep us on track and do a sanity check of the design. She was not there to dictate the design she would have liked or to sideline ideas that were better than hers. She was there to make sure that the design stayed true to the end users and that we avoided violations of basic UX principles. She was an integral part of this team exercise.

Now, in the context of that little story, which I have seen successfully repeated multiple times, let us look at the four explanations in the brainstorming article and see how they may not hold water.

1. Social Loafing: That simply did not apply in the case I cited, because the output demanded from each team member was well outlined. The goal was team-based, but the tasks were not. If engineers shy away from having individual tasks in a team environment, then you have larger problems.

2. Social Anxiety: Not that I am a face reader, but yes, you can sometimes see a team member looking dejected. That is where a team leader can step in as a facilitator and make sure that opinions are not suppressed and all ideas are discussed thoroughly. The fact that each member must come up with something (anything) is good enough; confidence will follow. By any measure, this did not happen in our exercise.

3. Regression to the Mean: A good point by the author. But the way we avoided this in our case is that the more talented people (or let us say, the more experienced UI engineers) did their own thing without worrying about what the junior engineers were doing. Again, the expert in the room (the UX Director) can strike a balance here by calling out the good parts of the junior engineers’ designs.

4. Production Blocking: Yes, but then again, if the group size is too large to begin with, you are setting yourself up for inefficiencies. Companies often fail to get consensus on trivial issues in large groups, let alone brainstorming. Software teams need to be small and lean. The team in my story was four people, and everyone’s ideas were heard and accepted to a great extent because the group was small and we were able to time-box the exercise.

So in short, group brainstorming does not have to be a waste of time if it is planned right, starts with a reasonable group size and is well facilitated. I believe starting off with this approach will generally help. But again, repeat what works and change what does not.

How To Test a Ruby Gem That Wraps a JavaScript Library

In order to help a teammate, I recently took a shot at writing a public Ruby gem, "angular-translate-rails". For the newbies: if you are working inside a Ruby on Rails application and you would like to include a front-end asset, one way to do it is to acquire it as a Ruby gem, a.k.a. a glorified dependency. The main advantage is that you do not have to worry about maintaining it in your own code base, including its licenses. If you want to understand more about gems and how to create them, this is a great resource.

Now, the gem I created started out basically as a wrapper of a JavaScript library called "angular-translate". The documentation on creating gems is helpful and building one was easy, but it was the testing that took most of my time, so let me speak mainly to that.
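Under the hood, a wrapper gem like this is typically little more than a Rails engine plus the vendored JavaScript file, so that the asset pipeline can find it. Here is a minimal sketch, assuming the conventional file layout (the module names mirror the test further down; the actual gem may differ):

# lib/angular-translate-rails.rb -- hedged sketch, not the gem's exact source
module AngularTranslate
  module Rails
    # Subclassing ::Rails::Engine adds the gem's app/, lib/ and vendor/
    # asset directories to the host application's asset paths, which is
    # what lets `//= require angular-translate` resolve.
    class Engine < ::Rails::Engine
    end
  end
end

The angular-translate.js file itself would then sit under something like vendor/assets/javascripts inside the gem.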

A spec to test the internals of the library was not really what was needed; that seems to be done well in the library itself. What I wanted to test was that the gem does what it is supposed to do, i.e. deliver the asset to the right place in the application when included.

I am not going to go into every detail of the tests, but here is a high-level overview.

I see mainly two ways to test a gem like this, i.e. a wrapper around a third-party JavaScript library.

Scenario A. Create a vanilla Rails application and make it use the gem. That is it! The steps basically include:

$ rails new angular-translate-gem-test-app
$ cd angular-translate-gem-test-app
$ echo 'gem "angular-translate-rails", :path => "/Path To Gem Directory"' >> Gemfile
$ bundle install
$ echo '//= require angular-translate' >> app/assets/javascripts/application.js
$ rails server &
$ curl http://localhost:3000/assets/angular-translate.js
$ fg
<ctrl-c>

Scenario B. If you created the gem with the standard default scaffolding, you will see a test directory with a dummy application. Think of it as a bundled application that serves as an integration test, mimicking an app that uses the gem.

The dummy application is defined in application.rb with defaults. It has all the nuts and bolts it needs, i.e. environments and initializers.
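For reference, the dummy app's config/application.rb produced by the default scaffolding looks roughly like this (a sketch; the exact require line depends on how the gem's files are laid out):

# test/dummy/config/application.rb -- roughly what the scaffolding generates
require File.expand_path('../boot', __FILE__)

require 'rails/all'

Bundler.require(*Rails.groups)
require 'angular-translate-rails' # the gem under test

module Dummy
  class Application < Rails::Application
    # The default configuration is enough for these tests.
  end
end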

There are two main tests:

1. resource_test.rb

This is the integration test, which runs the dummy app on the fly, requests the library and checks that the app serves the asset with the correct version.

These lines are the heart of this test:

test 'can access angular-translate' do
  get '/assets/angular-translate.js'
  assert_response :success
end

We simply request the library as if we were in the browser and check that the response was a success.

2. angular-translate-rails_test.rb

This tests whether the gem is being offered as a Rails module or not.

These lines are the crux of this test:

test "truth" do
...assert_kind_of Module, AngularTranslate::Rails
end

Usually, going with Scenario A should be more than enough to test the integrity of the gem. But personally, as I learn more about the Rails world, which has way too many moving parts, having a test bundled within the gem gives me, and perhaps other developers using it, more confidence.

Here is my gem repo in case you feel like forking and playing with it.

7 Things Engineers Can Do On Their Commute

I commute. About 120 minutes on the commuter rail, daily. Nothing exciting there; a lot of you do the same. But it is the activities I try to focus on during my commute that made me jot down these thoughts.

Leonardo da Vinci once said

Time stays long enough for anyone who will use it.

While I agree with Mr. da Vinci, the practical question is how to use it!

I confess, I still occasionally have a hard time trying to do something productive while sitting on the train, because the environment does not naturally lend itself to being “productive”. When I first started commuting, I usually wanted to catch some Zs or scroll through Facebook incessantly. But these days I am mostly doing one of the following, which I recommend.

Going to Work

1. Start with a clear desk

For most of us, our laptop is our mobile desk. Start by clearing the outbox/inbox rack by answering emails or writing thoughts.

I recently read through the 99U book, and one of the prime suggestions from Scott Belsky is that we need to set aside time for our creative work. Clearing your desk sets you up for it; otherwise things pile up and you accumulate creative-work debt.

2. Make a To-Do list.

Over the years, this silly little exercise has proven to be the best investment in a day’s good work. I learned to use the Getting Things Done (GTD) method and I currently use Trello for the filing system, but for time management I use Stephen Covey’s First Things First approach.

This list may include work as well as chores that you may need to do AFTER work. Our day doesn’t end on the way back home, and not taking note of things to do makes them overflow into the next day.

3. Code Review

Doing code or peer reviews during your creative time can be a drag. The commute is a great block of time to put in these important contributions. Also, you always learn a thing or two from your teammates.

Going to Work/Going Home

The following may be done both during morning and evening commutes:

4. Write blogs.

Yes, my favorite activity in recent months! Chances are we engineers are introverted, but we have a lot of thoughts we want to share. Just logging thoughts, ideas, good-to-know things, things we have learned, etc. really helps in a fulfilling and therapeutic way. Even if you are not ready to share your thoughts with the rest of us, just creating a notebook for them in Evernote will do wonders.

5. Books / Articles

If you are lucky enough to have a book club at work, like I do, this option is great. Awesome engineers get together every Wednesday and discuss the chapters they cover incrementally as a group. Even if you can’t participate consistently, trying to keep pace with them while commuting works well if you have the e-book version handy.

You can also catch up on reading articles that you may have clipped on Evernote. Chances are you are reading this in the train!

6. Sketching

I got a book on basic sketching from Barnes & Noble. Not only has it helped me sharpen sketching skills I can show off, it has also helped me relax after a day’s work, especially one with multiple tough meetings, if you catch my drift.

7. Focused Breathing Meditation

This is the most useful and rewarding activity you can do while commuting, even with a train full of people around you. The technique I learned is part of a larger practice called Vipassana. It helps relax and sharpen the mind. As an engineer, good sleep and a fresh, uncluttered mind the next morning are great things to have.

On a side note, here are a couple of things I would advise against for the evening commute, based on my own experience.

Avoid writing code. It is the end of the business day, so give your mind a rest; you are getting ready to crash, so unwind. The quality of your code is not top-notch when you are tired. Reading code at this time can also cause frustration if you find yourself asking “who the heck wrote this?” 🙂

Avoid arcade and first-person shooter games that instigate anxiety. Play 2048 or Lumosity instead; I found these two games (exercises, rather) to be cool mind-sharpening tools. Think of it as sharpening the knife in the kitchen at the end of the day’s work. Sudoku is up there too.

FWIW, I wrote this on my way back home yesterday 🙂

Cheers!

P.S. I don’t recommend doing most of these things if you drive to work. 🙂

Fun with Arduino…

So check this out!

[Photo: the breadboard circuit showing a live temperature reading on the LCD]

Yes! That is a live temperature sensor with an LCD display.

OK. Let me rewind. We had an Arduino workshop today at work. We were given starter kits, and an expert walked through projects with us and explained the basics. After a few exercises of increasing difficulty, we were let loose to do our “own thing”. Since I had only 40 minutes, I just combined a TMP36 temperature sensor, a 16×2 white-on-black LCD and a whole lot of connectors on the breadboard. The rest of the magic was in C code. Yes, C! My long-lost love. How awesome is it that you can include a library in a C program and interact with an LCD?

The following code reads the temperature as a voltage, where pin is the analog input A0 on the SparkFun board:

...
voltage = (analogRead(pin) * 0.004882814);

degreesC = (voltage - 0.5) * 100.0;
degreesF = degreesC * (9.0/5.0) + 32.0;
...

and these lines print to LCD:

...
lcd.begin(16, 2);
...
lcd.print(degreesF);
lcd.print("F");
...

How (low level) cool is that!

The code magic happens in the Arduino IDE (1.0.5), where you write the C program and upload it to the device. You also need the FTDI drivers.

Here is my GitHub repo containing the .ino file that does the software magic.

Going to give Raspberry Pi a try next!

How to Have Efficient Sprint Planning Meetings

Too many of us have been burnt by this. There we are again, getting together every other Monday, usually devoid of a clear agenda, trying to “plan” a sprint. We start OK, but before you know it, we are all over the place. We argue in a civilized manner about how this story is not clear, or how that story needs to be discussed a lot more before it can even be estimated. Confusion creeps in. Soon the engineers take over the “planning” part of the meeting and morph it into a “clarification” frenzy. The Product Owner (PO) is overwhelmed as technical terms are thrown in the air and on-the-fly technical solutions are spit-balled. Sadly, sometimes legitimately good solutions get lost in translation and noise. The Scrum Master or team lead brings things under control, but time has been lost.

Then we bandage this process by following up with emails throwing in definitions of “Sprint Planning”. The Scrum Master sends out his/her emails on how the process is supposed to be. The next couple of meetings are acceptable but we soon relapse as a team and go back to our old habits. The root cause has still not been addressed.

Often, the main problem is that some engineers are not confident about taking on a story until they have talked through it “technically”. Doing so gives them an idea of whether they will slug it out or sail through the planned work. And no one wants to slug it out! Some engineers would rather do this thinking while sitting in their bullpen; others would like a discussion or debate around it because they would rather go with a team consensus. Sprint planning is way too critical a meeting to be used for this, but it keeps happening because the team is not organized.

In the end, the lost productivity is unfortunate and costly. But the problem is not that complex and can be solved with a small tweak. Rather than misusing the planning meeting by focusing on definitions, “supposed-to-be”s or process reminders, how about preparing for its success by setting the stage beforehand? What if the planning meeting, in the strictest sense, were only about… planning! Stories are ready, tasks are cut out, story points are estimated. All we do is finalize the Definition of Done, with a deployment as one of the goals. Capisce! Think about it! Even doing that efficiently, sprint to sprint, requires discipline and focus. It becomes even more challenging when agile teams are abnormally sized.

So again, how do we practically get there? Well, what has worked for me in the past, to a good extent, is to have "Grooming Huddles". Remember the grooming that POs are supposed to do? Well, make it a developer-focused mini-summit. We start with what the POs have prioritized. Then we look at well-defined specs and mock-ups of the stories in that order and see if anything is missing. And no, not all questions have to be answered in this meeting. The high-level steps that usually do the job are:

  1. Take a pass at a user story.
  2. Read through the mockups and specifications.
  3. Deliberately throw out questions on 4 levels:
    •    Front-end work
    •    Back-end work
    •    Dependencies
    •    Unknowns/research

Dependencies and unknowns usually surface when you talk about the work to be done, i.e. the front-end and back-end work above. And it is perfectly OK to have dependencies and unknowns; I actually get nervous when I do not see them on non-trivial stories. Having them is a signal that we have thought out the work well enough, if not perfectly.

So now this has become a developers’ meeting. They will start owning it and looking forward to it. The exit criteria of the meeting have to be well-thought-out stories, from a high-to-mid-level implementation point of view, that all engineers are comfortable with. Discussing pseudocode or very fine development details, however, would be overkill. From another perspective, think of this as an extension of Planning Poker sessions. Too many engineers resist Planning Poker in its purest form, which does not telescope into development details. So why not actually delve into those details?

Let us do a dry run of this. Say you have a screen that encompasses a feature, and the screen has two tabs. Each tab is a story in itself, of course, and for each story the following high-level tasks are agreed on:

  • Implement REST API for Tab A
  • Implement HTML and Styling for Tab A
  • Implement Angular module, controllers, services etc.

As soon as you think this is going to be easy, a developer complicates things by mentioning that we have to use the style sheets supplied by our UX folks, as per the Human Interaction Guidelines. Well, it turns out that this is the first time that is being done! So right there we have a dependency AND an unknown, both with a bearing on estimates. But as a team we are equipped to deal with these. I have noticed that, over time, developers get tuned to each other and estimate scope well when we do these grooming meetings as a team. Statements like “yeah, I can help you with that”, “let us talk to that other team, they did it last month” or “I can put you in touch with John to get you access” are signs that we are pushing through as a unit and minimizing the risks of dependencies and unknowns.

So coming into the planning meeting now, we have already discussed the stories, so they are no longer the big question marks that make developers nervous and give them an excuse to call them “ambiguous”. Having hourly estimates when starting the planning is a bonus, but estimates in days should be the goal. Over time, I have also noticed that grooming meetings consistently end with all stories estimated. All that is left to do now is to create a sprint and target the stories that add a meaningful feature set. Sprint planning ends in 30 minutes!

I believe this approach can complement any agile methodology that has the elements of time-boxing and planning. Ultimately, the team has to be conscious of the fact that they own the process; it is there to serve them and to make agile development more disciplined.

How to enable JavaScript Code Coverage When Using Rails, CircleCI, AngularJS, Jasmine and Teaspoon

I am a big fan of code coverage. Enough has been written about the need for it. It helps raise confidence within development teams, solidifies the unit testing process and alleviates some of the risks that lurk when there is not a plethora of quality assurance resources. When this is coupled with good peer code review and inspection practices, such as those that come with Git pull requests or Atlassian Crucible, you can be reasonably certain that the functionality, at least at the unit level, is not brittle. Even though it is metrics versus inspections, I find that investing in tracking code coverage metrics beats holding Fagan inspections or reading code line by line with the whole team sitting around a table taking multiple coffee breaks.

I have played with Emma and Cobertura before. But recently I found myself in a spot where I needed to review code coverage metrics for the JavaScript assets within a Rails-based app. This may have been done before, but the combination seemed unique to me: the app is heavy on Ruby on Rails, the JavaScript is AngularJS based, unit tests are written with Jasmine, and we run them with Teaspoon as the test runner. Builds run on CircleCI. Coverage needs to run locally and as part of the continuous integration builds. Unless your app setup is completely non-standard, the following steps are what you need to enable code coverage. Of course, you may be using a different JavaScript library, or Karma instead of Teaspoon.

Note that I am working on a sexy MacBook Pro. So if you are using a Windows machine (bless your heart), the commands may not be that different. Please feel free to add them as comments if you end up trying this on Windows.

Now, let us first walk through the steps you will need to get going locally.

1. Install Node.js. We need it to run Istanbul, the engine that will actually instrument the code.

2. Teaspoon depends on Istanbul for coverage. Install it as:

npm install -g istanbul

3. Edit the Gemfile of your Rails app and enable the phantomjs gem as:

group :development, :test do
  ...
  gem 'phantomjs'
  ...
end

FYI, phantomjs is a headless WebKit browser.

4. Now run bundle install to make sure phantomjs is installed, and then do a sanity run of the test suite:

bundle exec teaspoon

5. Edit teaspoon_env.rb and enable at least these lines in the coverage area:

#...
config.use_coverage = nil
config.coverage do |coverage|
  # Which coverage reports Istanbul should generate...
  coverage.reports = ["html"]
end

6. Now, in order to test and generate a default configuration report locally, do:

bundle exec teaspoon --coverage=default

The report is generated under the coverage folder, in a subfolder named after the configuration you specified. There you will find index.html; open it and voila!

[Screenshot: the Istanbul HTML coverage report]

Now, to get the same effect going for the CircleCI build (that is the cloud, baby!), you first have to tell the CircleCI environment that coverage is on. This is done through circle.yml.

So edit that file and add the following line under the dependencies / pre section. This installs Istanbul as part of the CI build, if it has not already been done at a higher level.

- npm install -g istanbul

Also, in the test / post section, enable coverage and then copy the artifacts to the default CircleCI artifacts location (note that this may be different for you). There is no point making this a parallel task, as Teaspoon tests (if well written) execute very quickly. But note that the copy should be done after the Teaspoon tests have run; in other words, the commands are sequential.

- bundle exec teaspoon --coverage=default
- cp -r ./coverage/default $CIRCLE_ARTIFACTS
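Putting the two pieces together, the relevant portion of circle.yml ends up looking roughly like this (a sketch following the classic circle.yml layout; your file will likely contain other sections as well):

dependencies:
  pre:
    - npm install -g istanbul   # instrumenter used by Teaspoon for coverage
test:
  post:
    - bundle exec teaspoon --coverage=default   # run specs with coverage on
    - cp -r ./coverage/default $CIRCLE_ARTIFACTS  # publish the HTML report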

Run the build. When it has run successfully, check the Artifacts tab and scroll down for index.html.

[Screenshot: the coverage report under the CircleCI Artifacts tab]

More configuration options can be set in teaspoon_env.rb; check out the teaspoon gem site for additional settings.

Cheers!

Regression Dandelions Be Gone…

Do you like dandelions? I hate them. You mow or pluck a few, yet many others crop up. The main reason that happens is that, chances are, you left some behind; they matured and spread their seeds. What needs to be done is to remove them by the roots, using a weed plucker. This way you reduce the occurrence of weeds and, in turn, dandelions. Of course, this is hard 🙂

Regression issues stemming from browser-based user interface code are just like dandelions. Our code, however well written it may be, using the best styles, conventions and readability practices, still has the potential for such weeds to creep in. You go and fix one thing, and it breaks something else in another place. This is especially true where development teams are fast-paced and the work is JavaScript intensive, but strategies to check regressions at development time are either weak or missing completely. Teams are agile and there is pressure to get features in quickly. Refactors are applauded. Code is constantly shifting. And while all of this is happening, the risk of regression keeps increasing.

As I grew into JavaScript over the years (I now work with AngularJS), I also found myself dealing with this problem more often. But the more I relied on unit tests while focusing on the end user instead of just the tests, the fewer regressions I witnessed.

See, it is a very simple strategy and many have vouched for it. Say you want to release 1.1. Changes and improvements abound. You touch JavaScript code in many places. You update the unit tests in place (God be with you if you do not) to account for the new code, and they pass. Then QA starts manual and final regression testing, mainly running their awesome test suites full of use cases, and voila! They find and log stupid UI issues: buttons not clicking, DIV tags not populating, the DOM updating wrongly, browser console errors, etc., all things that were working before you made the changes. This happens more often because, like a good engineer, you did not repeat yourself and wrote componentized code, but you missed out on good refactoring, and those components are now bloated with options that work in different ways.

So what did you miss? Well, leaving aside the argument about simple and stupid components, you missed covering those new or updated use cases. Now you have broken dependencies and chain reactions therein! Bloated UI code breaking is but a symptom. You were not only supposed to update the tests, but also to make sure the tests were representative of the use cases and workflows, i.e. written mainly from the point of view of user flows. Then complete the tests and rerun the suites to seek out regressions. This is more or less Behaviour Driven Development, or BDD. This cycle has to be followed ruthlessly every time you touch the code. This way you can easily catch behaviors that broke due to code changes. Silly unit test failures should be expected, but the idea is to catch them well before integration tests or the QA manual testing cycle come into play. On top of this, you maintain good coverage numbers, ensuring you have hit a lot of the code branches with your unit tests. This builds confidence within the team.

I have paid a lot of attention to unit testing over the years, but BDD was not on my radar for the longest time. We just look at coverage numbers and tests on new code, but often get lazy about the bigger picture. The crux is that when you add features to your product, it usually adds to and/or affects user behaviors. Someone has to keep an eye on that from an engineering point of view and make sure the tests reflect it, with every release. Someone has to play the role of what I term the “Night Watchman”.

So here are some practical mini-strategies that have worked well in my experience:

  1. If cost and time permit, use every development iteration as an opportunity to refactor UI code into smaller and smaller manageable chunks.
  2. Assuming you are agile, make it a point that in every sprint a team member steps up to become the night watchman. Rotating this role helps team members, especially junior ones, become empathetic towards end users, the QA team and their own time!
  3. Bring the developers onto the same page so that they write unit tests that are as representative of a user task as possible.
  4. The night watchman works with the Product Owner and/or simply references well-written QA test suites to code review the unit tests.
  5. Enable JavaScript code coverage and maintain high but achievable and acceptable metrics on classes, functions and lines.
  6. Track the regression metrics over time and determine which code is usually more susceptible to breakage; that code is the prime candidate for your next refactor.
  7. Tag-team and review each other’s test suites just like you would review code. Over time these tests become living artifacts.

Now, there are other theories and tools that help you follow these strategies, but I have found BDD to be the one that gets you closest to how the user thinks and performs his or her tasks. If our tests are written to complement that, we have a better chance of clamping down on regressions with successive development iterations. It also adds speed in iterations where the engineering effort is lighter. I have been using Jasmine for writing BDD-style tests for my AngularJS code. Give it a try, if you have not already!

Smooth SAILing with AIE 4.0 (For Attivio)

I wrote this during my time at Attivio. I am sharing it here as a contribution, since the main website may have been revamped. I am not representing Attivio here or marketing its products.

Steve Jobs once said, “Some people think design means how it looks but of course, if you dig deeper it’s really how it works.” So true.

However, Attivio believes that AIE’s new Search and Analytics Interactive Layer (SAIL) excels at both of those criteria. In Active Intelligence Engine™ (AIE™) 4.0, SAIL replaces the older Simple Search User Interface (SSUI) module. SAIL makes it easy to build an AIE search interface from scratch, and thus unlock the potential of your structured and unstructured data.

During development, the SAIL team identified the key personas that are responsible for building such interfaces and targeted SAIL to meet their needs. For example, solution engineers with customers and partners can leverage the configurability aspect of SAIL to meet the following needs:

  • Easily configure the user interface, including the skin
  • Set search parameters
  • Change visualizations (such as tag clouds or maps)
  • Minimize code changes

SAIL lets such engineers achieve these critical milestones easily, and thus quickly create demos that show customers the value of AIE.

More technical users are driven by the need to come up to speed quickly. They have requirements such as:

  • Self-explanatory user interface elements and backend
  • Ease of use (i.e. working with the codebase and changing default aspects)
  • Straightforward coding

Meeting these requirements helps such users quickly build search interfaces that they can tailor for specific functional requirements by extending code, adding widgets, changing branding, etc.

And finally, non-technical users may just want to use SAIL to attain valuable insights on the data.

SAIL is designed and developed to specifically help these and various other users successfully achieve their goals, and create new and happy customers along the way.


SAIL is a minimally-configured, yet ready-to-run, web application packaged within AIE. You can also run SAIL in a non-AIE web container such as Tomcat. SAIL provides an interactive learning platform for exploring AIE Search APIs (Application Programming Interface).

You can also use SAIL to rapidly prototype an AIE application on top of any data set by customizing SAIL’s boilerplate code. This design reaps benefits quickly. SAIL features componentized functionalities in individual dynamic templates, Spring MVC, JavaScript, CSS3 (Cascading Style Sheets) and HTML5. Therefore, adding or customizing features where compilation isn’t needed is as simple as changing a couple of files without restarting AIE.

Coupled with a contemporary look and feel, SAIL demonstrates the many powerful features that AIE offers. SAIL’s landing page provides a dashboard with facets served by Facet Finder™. Upon searching with terms, SAIL provides a complete view that offers the following features and much more:

  • A results view
  • AIE Simple and Advanced Query Languages
  • Facets
  • Auto Complete
  • Field Expressions
  • Image thumbnails and Previews
  • Content Security
  • Relevancy Tuning
  • Query Time Join
  • Highlighting
  • Sorting
  • Spell Suggestions
  • Synonym Expansion


Additionally, SAIL makes some AIE features available as visualizations, including a Tag Cloud on Keyphrases, Maps with Geo Search, and Range Facets as a Time Series chart. SAIL is also integrated with Business Center, which lets you control the user’s search experience by boosting and blocking documents while pushing promotions to users. Moreover, Scope Search is enabled for SAIL by default. You can easily tweak these functionalities from SAIL’s preferences panel. SAIL’s capabilities let you instantly appreciate the Unified Information Access into your data silos powered by AIE.

SAIL is available as an optional AIE module. Once you try it, we think you’ll see SAIL’s potential for enabling your success.

Happy SAILing!

AIE 3.5: Event Driven Monitoring & Management (For Attivio Inc)

I wrote this during my time at Attivio. I am sharing it here as a contribution, since the main website may have been revamped. I am not representing Attivio here or marketing its products.

The release of Active Intelligence Engine® (AIE®) Version 3.5 introduces an important set of new functions. Besides new modules like Ontologies, features such as index rollback, image thumbnailing and document preview, and many essential enhancements including SQL support, we have improved the built-in support for monitoring the performance and health of your AIE infrastructure.

Version 3.5 offers a new user experience for those responsible for administering AIE, clarifying the steps they need to take to act on the areas that need prompt attention and quick turnaround. Monitoring and diagnosing AIE now involves little more than checking a screen or two in the AIE Administrator Console.

The key new administration features in AIE 3.5 include:

  • System Status — a one stop view of AIE system health
  • Performance Monitor — a rich visualization of key AIE performance metrics
  • System Events — a filterable collection of all system events emanating from AIE

Each view offers visual insights into what’s happening in AIE at any given moment, simplifying system administration. As a result, the time to discover and resolve issues is shortened, and the user experience is improved.

A new System Health Banner, which appears on every screen in the Administrator Console, summarizes the holistic picture of AIE health for administrators. The banner complements the System Status view, which — as the default view in the console — breaks AIE’s status down into individual areas: Node Health, Connector Status, Important Events, Index Status and System Performance.

This view allows administrators to easily connect the dots and essentially helps them answer the question, “Is there a problem with my system right now that needs my attention?” They are able to consume larger sets of information quickly and target the parts that may be relevant without visiting the respective detailed views in other parts of the Admin Console.

The Administrator Console presents you with unified answers rather than forcing you to sort through the lengthy tables of numbers we see in legacy management user interfaces. For instance, with the Connectors view, users can now see connectors that are not running as expected, scan the list of recent events in the adjacent Important Events view and start to compile a set of root causes behind any connector issues without ever leaving the System Status view. Events at a warning or fatal level can be investigated to quickly ascertain which specific AIE instances or connectors were affected and why. The Event Acknowledgement Dialog serves the dual purpose of providing all pertinent information on the event and letting the user mark the event as seen and/or resolved.


Index Status not only describes a selected index’s configuration, but also related statistics. Looking at this information, one can determine for instance, if the index size has doubled within a day, indicating a potential performance spike. The Performance view’s graphical visualization of crucial metrics lets you pinpoint bottlenecks that may warrant immediate attention.

Consider a scenario where, as an administrator, you notice that the System Health Banner is reporting a warning event that occurred in the past 24 hours. You go over to the System Status view and find that an event occurred pointing to low disk space on a node. If unchecked, this might become a factor in creating search performance and node health related problems. You open the Event Details dialog and dig deeper into the event description to find more information about the node. At this point, you can make an informed decision on adding more disk space and taking additional measures.

On the new Performance Monitor view, we provide even deeper insight and flexibility by giving you more than 700 pre-built metrics to graph. By creating multiple graphs, grouping specific metrics together onto one graph and zooming and panning, while having system events superimposed on the timeline, you can chart out a more informed investigation. Unlike the System Status view, which is intended to push summarized, timely and relevant information to you, the Performance Monitor helps you understand specific aspects of the system.

For instance, let’s say that users of an application using AIE reported issues with sluggish search functionality today around noon. One could quickly create a graph and add specific metrics like:

  • Uptime across all nodes
  • memoryPct — i.e. percentage of memory being used
  • os.memory.free — i.e. free memory in the OS
  • nodeCPU across nodes — the CPU usage across all nodes

Looking at these metrics in parallel over a timeframe, one can zero in on the bottleneck that was causing search to be slow.

For our customers, these are game changing capabilities. As Fahim Siddiqui, Chief Product Officer, IntraLinks states, “AIE’s event-driven system and performance management have set a new standard of excellence that means we are able to proactively identify and resolve issues. For us, Attivio’s Active Intelligence Engine is not just a technology; it’s a key piece of our overall commitment to providing the best possible experience for our customers.”

As we move forward, we could not be more excited. We have some cool new features lined up that will showcase Attivio’s innovative user centric approach, not only in managing AIE but also in effectively putting the power of unified information access to work in your organization.