Backlog / deficit 

There are a lot of tests and we have decent functional coverage, but even so there are many accumulated issues that we should address.
In rough priority order, the main categories for the tests themselves are:

Writing stable automated client tests

Reliable automated GUI testing is hard work, since rendering may be

  1. different from platform to platform
  2. affected by O/S desktop changes
  3. affected by platform specifics such as display resolution, color profiles, and keyboard shortcuts
  4. affected by timing issues

and so on.

We expect automated tests to be resilient against these and to pass reliably 99.99% of the time.
Tests which are considered to be "important" should pass with even greater reliability.

Headless tests are those which do nothing that would cause a java.awt.HeadlessException to be thrown if there is no display.

So you cannot even create (never mind show) a Window/Frame/Dialog.
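To illustrate the distinction, here is a minimal sketch (class name is hypothetical) showing that in headless mode even constructing a top-level window, not just showing it, throws HeadlessException:

```java
import java.awt.Frame;
import java.awt.GraphicsEnvironment;
import java.awt.HeadlessException;

public class HeadlessCheck {

    // In headless mode, even constructing a top-level window throws
    // HeadlessException; no call to setVisible(true) is needed.
    static boolean frameCreationThrows() {
        try {
            new Frame().dispose();
            return false;
        } catch (HeadlessException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Run with -Djava.awt.headless=true to exercise the exception path.
        System.out.println("isHeadless: " + GraphicsEnvironment.isHeadless());
        System.out.println("Frame construction throws: " + frameCreationThrows());
    }
}
```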

Headful tests are therefore the opposite - they do create a UI on screen. Tests which want to verify D3D/OpenGL/other accelerated pipelines will be headful.

Headless tests run entirely in software mode, e.g. drawing to a BufferedImage allocated on the Java heap.
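A minimal sketch of that pattern (class name is hypothetical): rendering into a heap-allocated BufferedImage uses the software pipeline and never touches the display, so it is safe for headless tests.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class HeadlessRender {

    // Renders into a heap-allocated image; never touches the display.
    static BufferedImage render() {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(10, 10, 50, 50);
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage img = render();
        // Sample a pixel inside the filled rectangle: 0xffff0000 (opaque red).
        System.out.println(Integer.toHexString(img.getRGB(20, 20)));
    }
}
```

A headless test can then verify pixel values directly with getRGB rather than capturing the screen.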

Tests which ARE headful must specify "@key headful" as a jtreg tag. This is a convention used so that test frameworks can assign tests to appropriate testing resources.

Do not specify it if your test does not need it, since headless tests can be run (1) in parallel on a host, and (2) on "cloud" / "server" test hosts which do not require any graphics resources.
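For reference, a minimal jtreg skeleton showing where the tag goes (the test name, summary, and body are hypothetical placeholders):

```java
/*
 * @test
 * @key headful
 * @summary Sample headful test skeleton; name and summary are placeholders
 */
import java.awt.Frame;

public class SampleHeadfulTest {
    public static void main(String[] args) throws Exception {
        Frame frame = new Frame("SampleHeadfulTest");
        try {
            frame.setSize(200, 200);
            frame.setVisible(true);
            // ... verification against the on-screen rendering goes here ...
        } finally {
            frame.dispose(); // always release the window, even on failure
        }
    }
}
```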

Many of our automated tests that are headful make use of the java.awt.Robot API to

- deliver mouse and key input events
- grab portions of the screen for comparison against expected rendering

These tests have additional considerations.
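The basic Robot pattern can be sketched as follows (class name and coordinates are hypothetical; a real test would move the mouse over a window it created itself, not arbitrary screen coordinates):

```java
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.image.BufferedImage;

public class RobotSketch {

    // Returns false when no display is available, true after driving
    // input events and capturing a screen region.
    static boolean runDemo() throws Exception {
        if (GraphicsEnvironment.isHeadless()) {
            return false; // Robot cannot be constructed in headless mode
        }
        Robot robot = new Robot();
        robot.setAutoDelay(50); // pace synthetic events so the toolkit keeps up
        robot.waitForIdle();    // wait for pending events to be processed first
        robot.mouseMove(100, 100);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
        // Grab a region of the screen for pixel comparison.
        BufferedImage capture = robot.createScreenCapture(new Rectangle(0, 0, 50, 50));
        return capture.getWidth() == 50 && capture.getHeight() == 50;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ran demo: " + runDemo());
    }
}
```

Calls like waitForIdle and setAutoDelay are exactly the kind of pacing that the stability guidelines below are concerned with.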

The following are some guidelines that should be followed when writing client tests, to ensure the stability and reliability of those tests. This list will be added to and refined over time.