Age | Commit message | Author |
|
On Python 3.x this has the side-effect of returning a str instead of a bytes.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
It is no longer supported in Python 3.x, but unpacking the tuple parameter
inside the function works just fine on both old and new Python versions.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
The old syntax is no longer supported in Python 3.x.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
These warnings have become fatal errors in Python 3.x.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
|
|
Move the re.compile into the core.Environment constructor, which
reduces code duplication. It also allows us to pass environment data on
initialization of the object, rather than having to edit its attributes
individually.
V2: - Does not remove deprecated options, only marks them as such
V3: - Fixes the deprecation warning for tests from V2 always being triggered
Signed-off-by: Dylan Baker <baker.dylan.c@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
|
|
Was getting an error about an unknown keyword 'wglinfo'.
Signed-off-by: José Fonseca <jfonseca@vmware.com>
|
|
Subtests allow a test to produce more than one result. This is useful
for tests where it may be convenient to combine several somewhat
unrelated tests into a single program. For example, a cl test like:
kernel void test(global TYPE *out, global TYPE *in) {
    out[0] = some_builtin_function(in[0], in[1]);
}
uses only one kernel, but recompiles it several times with different
definitions of the macro 'TYPE' in order to test the builtin with all
the possible types.
To take advantage of the subtest feature, programs should output one
PIGLIT line per subtest as in the following example:
PIGLIT:subtest {'testA' : 'pass'}
PIGLIT:subtest {'testB' : 'fail'}
Where testA and testB are the names of the subtests.
In the result summary, this will be displayed as:
TestName 1/2
testA pass
testB fail
v2:
- Print one line for each subtest rather than printing them all
together.
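The protocol above is simple enough that a runner can collect the subtest lines with a few lines of Python. This is a hypothetical sketch, not the actual Piglit implementation: the function name is invented, and it assumes each 'PIGLIT:subtest' line carries a one-entry Python-literal dict mapping a subtest name to its result.

```python
import ast

def parse_subtests(stdout):
    """Collect subtest results from a test's output.

    Hypothetical helper: each 'PIGLIT:subtest' line is assumed to carry
    a one-entry dict literal mapping a subtest name to its result.
    """
    results = {}
    prefix = "PIGLIT:subtest"
    for line in stdout.splitlines():
        if line.startswith(prefix):
            results.update(ast.literal_eval(line[len(prefix):].strip()))
    return results

output = "PIGLIT:subtest {'testA' : 'pass'}\nPIGLIT:subtest {'testB' : 'fail'}\n"
print(parse_subtests(output))  # {'testA': 'pass', 'testB': 'fail'}
```

Using ast.literal_eval rather than eval keeps the parser safe against arbitrary code in test output.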
Acked-by: Chad Versace <chad.versace@linux.intel.com>
|
|
At this point, Glean wasn't writing anything interesting anyway;
it was just clutter.
Since there's no need to specify a results directory on the command line
any longer, this patch also removes the -r option, making "run tests"
the default action.
This also allows us to simplify the Piglit runner framework a little:
it no longer has to pass around the results directory just to pass to
Glean.
|
|
As we do with glxinfo on Linux. Note that if wglinfo.exe isn't found
you'll see a line like this in the results file:
"wglinfo": "Failed to run wglinfo",
Reviewed-by: José Fonseca <jfonseca@vmware.com>
|
|
check_for_skip_scenario allows a test to be marked as 'always skip'.
Currently it doesn't mark any tests for skipping.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
|
|
Otherwise, if the user forgets to set PIGLIT_SOURCE_DIR himself, then
tests that use piglit_source_dir() will fail.
Signed-off-by: Chad Versace <chad.versace@linux.intel.com>
|
|
Before scheduling or running the tests, run() prepared the final list of
tests to run: first, flatten the Group() hierarchy; second, filter out
tests based on the -t and -x options.
It makes sense to have this as a helper function. Doing so will also
enable other utilities that (for example) print a list of tests that
would be run and their command line programs.
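The two steps (flatten, then filter with -t/-x) can be sketched as follows. This is a minimal illustration, not the actual helper: the function name, the tuple-of-regexes parameters, and the plain-dict group representation are all assumptions.

```python
import re

def prepare_test_list(groups, include_res=(), exclude_res=(), prefix=""):
    """Hypothetical sketch: flatten a nested group dict into
    {'fully/qualified/name': test}, then apply the -t (include) and
    -x (exclude) regex filters."""
    tests = {}
    for name, item in groups.items():
        path = prefix + "/" + name if prefix else name
        if isinstance(item, dict):  # a sub-group: recurse into it
            tests.update(prepare_test_list(item, include_res, exclude_res, path))
            continue
        if include_res and not any(r.search(path) for r in include_res):
            continue  # -t was given and nothing matched
        if any(r.search(path) for r in exclude_res):
            continue  # matched a -x pattern
        tests[path] = item
    return tests

profile = {"spec": {"glsl-1.30": {"void.frag": "test-obj"}}, "sanity": "sanity-obj"}
print(prepare_test_list(profile, exclude_res=[re.compile("sanity")]))
# {'spec/glsl-1.30/void.frag': 'test-obj'}
```

With the flattened dict in hand, a "list tests that would run" utility is just a loop over the keys.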
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
|
|
Valgrind testing is useful, but really should be done as a separate
exercise from the usual regression testing, as it takes way too long.
Rather than including it by default in all.tests and making people
exclude it with the -x valgrind option (or by using quick.tests), it
makes sense to explicitly request valgrind testing with --valgrind.
To perform valgrind testing:
$ piglit-run.py --valgrind <options> tests/quick.tests results/vg-1
The ExecTest class now handles Valgrind wrapping natively, rather than
relying on the tests/valgrind-test/valgrind-test shell script wrapper.
This provides a huge benefit: we can leverage the interpretResult()
function to make it work properly for any subclass of ExecTest. The
old shell script only worked for PlainExecTest (piglit) and GleanTest.
Another small benefit is that you can now use --valgrind with any test
profile (such as quick.tests). Also, you can use all.tests without
having to remember to specify "-x valgrind".
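Handling Valgrind natively can be as simple as prepending the wrapper to the command before execution. This is a sketch under assumptions: the function name is invented, and the flags shown are common memcheck options rather than the exact ones the ExecTest class uses.

```python
def build_command(command, use_valgrind=False):
    """Sketch: wrap a test command in valgrind natively rather than via a
    shell-script wrapper. With --error-exitcode set, interpretResult-style
    logic in any ExecTest subclass can treat a nonzero exit status as a
    valgrind failure."""
    if not use_valgrind:
        return list(command)
    return ["valgrind", "--quiet", "--error-exitcode=1",
            "--tool=memcheck"] + list(command)

print(build_command(["glxinfo"], use_valgrind=True))
# ['valgrind', '--quiet', '--error-exitcode=1', '--tool=memcheck', 'glxinfo']
```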
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
GPUs like to hang, especially when barraged with lots of mean Piglit
tests. Usually this results in the poor developer having to figure out
what test hung, blacklist it via -x, and start the whole test run over.
This can waste a huge amount of time, especially when many tests hang.
This patch adds the ability to resume a Piglit run where you left off.
The workflow is:
$ piglit-run.py -t foo tests/quick.tests results/foobar-1
<interrupt the test run somehow>
$ piglit-run.py -r -x bad-test results/foobar-1
To accomplish this, piglit-run.py now stores the test profile
(quick.tests) and -t/-x options in the JSON results file so it can tell
what you were originally running. When run with the --resume option, it
re-reads the results file to obtain this information (repairing broken
JSON if necessary), rewrites the existing results, and runs any
remaining tests.
WARNING:
Results files produced after this commit are incompatible with older
piglit-summary-html.py (due to the extra "option" section).
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
|
|
When resuming an interrupted piglit run, we'll want to output both
existing and new results into the same 'tests' section. Since
TestProfile.run only handles newly run tests, it can't open/close the
JSON dictionary.
So, move it to the caller in piglit-run.py.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
|
|
Commit 4fbe147b5817b2ba0fe44fe9db96a2d05288aee introduced a regression
where not specifying a -t option would result in all tests being
excluded.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
del dict[key] will raise a KeyError if the key isn't actually in the
dictionary. This can happen if, say, a test didn't match the -t option
and subsequently matched a -x option.
Also, deleting dictionary items while iterating over them is apparently
not safe in Python 3, so we may as well just create a new dictionary
with only the entries we want.
This is simpler anyway.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
It was polite to print a message shortly after the conversion, but at
this point, it's basically dead code.
|
|
all.tests includes a series of regular expressions to discard driver
chatter that Piglit shouldn't consider a warning. Unfortunately, it
got copied and pasted to a few more files.
Move it back into one place, in core. While we're at it, use 'map' to
avoid having to write Test.ignoreErrors.append(re.compile(...)) every
time.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Because seriously, who wants to sleep between tests? Under normal
circumstances, this just pointlessly slows test runs down. During
platform bring-up, or when debugging flushing issues, it can be useful,
but it's just as easy to hack in a sleep(0.2) call yourself.
|
|
It doesn't make much sense for Tests to filter themselves out based on
global options just before they're about to run; they should be filtered
out ahead of time. This avoids scheduling a bunch of useless tests.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
Test.doRun was serving two purposes:
1. Queueing up a concurrent test for later execution.
2. Running a test immediately (in the current thread).
This behavior was controlled by an effectively global parameter,
Environment.run_concurrent, which could easily be confused with
Test.runConcurrent (whether the test is safe to run concurrently) and
Environment.concurrent (whether to use multiple threads/the -c flag).
Split it into two functions for clarity.
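The shape of that split can be sketched with the standard library's executor in place of Piglit's own pool. The class and method names here are illustrative, not the actual ones adopted by the commit.

```python
from concurrent.futures import ThreadPoolExecutor

class Test:
    """Sketch of the two-function split: one method runs the test
    immediately in the calling thread, the other queues it for a worker."""
    def __init__(self, func):
        self.func = func

    def execute(self):
        # Run right now, in the current thread.
        return self.func()

    def schedule(self, executor):
        # Queue for concurrent execution; the caller collects the future.
        return executor.submit(self.func)

t = Test(lambda: "pass")
print(t.execute())  # pass
with ThreadPoolExecutor(max_workers=2) as pool:
    print(t.schedule(pool).result())  # pass
```

Two explicit entry points make the caller's intent visible at the call site, instead of hiding it behind a global flag.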
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
In the past, Piglit has stored test profiles as a hierarchy of Test and
Group objects: TestProfile contained a top-level Group, and each Group
could contain either Tests or Groups. To refer to a test, one could do:
tests['spec']['glsl-1.30']['preprocessor']['compiler']['keywords']['void.frag']
A second way of referring to tests is via their "fully qualified name",
using slashes to delimit groups. The above example would be:
tests['spec/glsl-1.30/preprocessor/compiler/keywords/void.frag']
This fully qualified name is used for regular expression matching
(piglit-run.py's -t and -x options). Since the advent of JSON, it's
also what gets stored in the results file.
Using the Group hierarchy is both complicated and inconvenient:
1. When running tests, we have to construct the fully qualified
name in order to write it to the JSON (rather than just having it).
2. Adding the ability to "resume" a test run by re-reading the JSON
results file and skipping tests that have already completed would
require the opposite conversion (qualified name -> hierarchy).
3. Every Group has to be manually instantiated and added to the parent
group (rather than just being created as necessary).
4. The Group class is actually just a dumb wrapper around Python's dict
class, but with extra inconvenience.
This patch is an incremental step toward dropping the Group hierarchy.
It converts TestProfile.run() to use a simple dictionary where the keys
are fully qualified test names and the values are the Test objects. To
avoid having to convert all.tests right away, it converts the Group
hierarchy into this data structure before running.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
I think that no Python file in framework/ or tests/ should have a shebang, and
tests/shaders/gen_equal_tests.py shouldn't have the executable bit set.
chad: I modified Matěj's patch to retain the shebang for
glsl_parser_test.py. That file can be run as a standalone script.
Signed-off-by: Chad Versace <chad@chad-versace.us>
Signed-off-by: Matěj Cepl <mcepl@redhat.com>
|
|
The main purpose of this patch is to make piglit independent of the
current working directory, so it is possible to package piglit as a RPM
package (with binaries symlinked to /usr/bin, most of the files in
read-only /usr/lib*/piglit directory, and results somewhere else).
So it is now possible to run
$ piglit-run tests/quick-driver.tests /tmp/piglit
and then with this command
$ piglit-summary-html --overwrite /tmp/piglit/results /tmp/piglit/main
generate a report in /tmp/piglit/results/index.html and related files.
Signed-off-by: Matěj Cepl <mcepl@redhat.com>
Reviewed-by: Paul Berry <stereotype441@gmail.com>
|
|
Previously, there would be bursts of concurrent tests as we ran into a
series of them while walking the list of tests. If we reached the point
where the serial tests were not the limiting factor, this could have
hurt our total run time.
Acked-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
-c bool, --concurrent=bool Enable/disable concurrent test runs. Valid
option values are: 0, 1, on, off. (default: on)
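Parsing those four accepted values is straightforward; a minimal sketch (the function name is an assumption, not the actual option-handling code):

```python
def parse_concurrent(value):
    """Sketch: map the -c/--concurrent option values to a bool."""
    if value in ("1", "on"):
        return True
    if value in ("0", "off"):
        return False
    raise ValueError("invalid --concurrent value: %r" % value)

print(parse_concurrent("on"), parse_concurrent("0"))  # True False
```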
CC: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Chad Versace <chad@chad-versace.us>
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
|
|
If the JSON result file was not closed properly, perhaps due to a system
crash during a test run, then TestrunResult.parseFile will attempt to
repair the file before parsing it.
The repair is performed on a string buffer, and the result file is never
written to. This allows the result file to be safely read during a test
run.
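One way such an in-memory repair can work is to trim the buffer back to the last complete test entry and then close the dangling dicts. This is a hypothetical sketch, not the actual parseFile logic: it assumes a {"options": ..., "tests": {...}} layout and a truncation point inside the "tests" section.

```python
import json

def repair_results(buf):
    """Hypothetical sketch of repairing a truncated results buffer.
    The on-disk file is never modified; only this string copy is."""
    try:
        return json.loads(buf)
    except ValueError:
        end = buf.rfind("},")          # back up to the last complete entry
        fixed = buf[:end + 1] + "}}"   # close "tests" and the top level
        return json.loads(fixed)

truncated = '{"options": {"profile": "quick"}, "tests": {"a": {"result": "pass"}, "b": {"res'
print(repair_results(truncated))
# {'options': {'profile': 'quick'}, 'tests': {'a': {'result': 'pass'}}}
```

A partially written final entry is dropped rather than guessed at, which matches what an interrupted run can actually tell us.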
CC: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
Factor out from TestrunResult.parseFile the code that checks if the file
is in the old format. Place that code into a separate method,
__checkFileIsNotInOldFormat.
CC: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
When a test run is interrupted, perhaps by a system crash, we often want
the test results. To accomplish this, Piglit must write each test result
to the result file as the test completes.
If the test run is interrupted, the result file will be corrupt. This is
corrected in a subsequent commit.
CC: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
JSONWriter writes to a JSON file stream.
It will be used by a subsequent commit to write each test result to the
result file as the test completes.
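The core idea, emitting one top-level JSON dict incrementally so each entry reaches the file as soon as it is produced, can be sketched like this. The method names are assumptions; the real JSONWriter API may differ.

```python
import io
import json

class JSONWriter:
    """Hypothetical sketch of a streaming JSON writer."""
    def __init__(self, f):
        self.f = f
        self._first = True

    def open_dict(self):
        self.f.write("{")

    def write_item(self, key, value):
        if not self._first:
            self.f.write(", ")
        self._first = False
        self.f.write(json.dumps(key) + ": " + json.dumps(value))
        self.f.flush()  # for a real file, land each entry on disk now

    def close_dict(self):
        self.f.write("}")

buf = io.StringIO()
w = JSONWriter(buf)
w.open_dict()
w.write_item("sanity/test-a", {"result": "pass"})
w.write_item("sanity/test-b", {"result": "fail"})
w.close_dict()
print(json.loads(buf.getvalue()))
# {'sanity/test-a': {'result': 'pass'}, 'sanity/test-b': {'result': 'fail'}}
```

If the run is interrupted before close_dict, the file is left without its closing brace, which is exactly the corruption the repair step addresses.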
CC: Ian Romanick <ian.d.romanick@intel.com>
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
If TestrunResult.parseFile is passed a file in the old, pre-json format,
then raise ResultFileInOldFormatError.
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
The results file produced by piglit-run.py contains a serialized
TestrunResult, and the serialization format was horridly homebrew. This
commit replaces that insanity with json.
Benefits:
- Net loss of 113 lines of code (ignoring comments and empty lines).
- By using the json module in the Python standard library, serializing
and unserializing is nearly as simple as `json.dump(object, file)`
and `json.load(file)`.
- By using a format that is easy to manipulate, it is now simple to
extend Piglit to allow users to embed custom data into the results
file.
As a side effect, the summary file is no longer needed, so it is no longer
produced.
Reviewed-by: Paul Berry <stereotype441@gmail.com>
Signed-off-by: Chad Versace <chad@chad-versace.us>
|
|
When reading in test run results we were mistakenly not decoding
escape sequences inside arrays. This resulted in occasional extra
backslashes in lists such as "errors" and "errors_ignored".
Reviewed-by: Chad Versace <chad@chad-versace.us>
|
|
Signed-off-by: Chad Versace <chad.versace@intel.com>
|
|
To run Piglit with an out-of-tree build, set the environment variable
PIGLIT_BUILD_DIR. For example:
$ env PIGLIT_BUILD_DIR=/path/to/piglit/build/dir \
./piglit-run.py tests/sanity.tests results/sanity.results
Signed-off-by: Chad Versace <chad.versace@intel.com>
|
|
Replaced poolName in the Test constructor with runConcurrent
boolean option. runConcurrent defaults to False. When True, the
Test pushes its doRunWork to the ConcurrentTestPool to be executed
in another thread. When False, doRunWork is executed immediately
on the calling thread (main thread).
Reviewed-by: Chad Versace <chad.versace@intel.com>
|
|
This is an intermediate change that moves the "base" (or gpu)
tests to the main thread and the other (cpu-only) tests to the
ConcurrentTestPool threads. This restores ctrl-c behavior for
gpu tests.
Reviewed-by: Chad Versace <chad.versace@intel.com>
|
|
Add SyncFileWriter class to synchronize writes to the 'main' results
file from multiple threads. This helps to ensure that writes to this
file are not intermingled.
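A lock around each write is the essence of such a class; a minimal sketch (the actual SyncFileWriter may buffer or flush differently):

```python
import io
import threading

class SyncFileWriter:
    """Sketch: serialize writes from multiple threads with a lock so
    concurrently written results are never interleaved."""
    def __init__(self, f):
        self._f = f
        self._lock = threading.Lock()

    def write(self, data):
        with self._lock:
            self._f.write(data)

buf = io.StringIO()
writer = SyncFileWriter(buf)
threads = [threading.Thread(target=writer.write, args=("result-%d\n" % i,))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(buf.getvalue().splitlines()))
# ['result-0', 'result-1', 'result-2', 'result-3']
```

Each line arrives whole regardless of thread scheduling; only the ordering of whole lines can vary.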
|
|
Modify Test class to use ThreadPools in its doRun method. All
tests now execute in the default ThreadPool named 'base'.
A Test instance can be configured to run in a different (named)
ThreadPool instance by setting its poolName member variable to
that name.
|
|
Added log.py which includes a simple Logger class that wraps some
basic functions from the Python logging module. The log wrapper
simplifies setup and will accommodate thread synchronization in the
future. Test::doRun now uses the new log facility.
NOTE: this changes the format of the 'test progress' previously
printed via stdout.
Added patterns.py which includes a Singleton class design pattern
to support the Logger class. Future design patterns can be added
to this file.
Tested with Python 2.7 on Linux. All should be compatible with
Windows and Mac and most earlier widely-used versions of Python.
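The log doesn't show patterns.py itself, but a Singleton base class of the kind described is commonly written like this sketch (names and details assumed):

```python
class Singleton(object):
    """Sketch of a Singleton base class like the one patterns.py might
    provide; the real implementation is not shown in this log."""
    def __new__(cls, *args, **kwargs):
        # Store the instance on each concrete class, not on Singleton,
        # so independent subclasses get independent instances.
        if "_instance" not in cls.__dict__:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

class Logger(Singleton):
    pass

print(Logger() is Logger())  # True
```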
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Chad Versace <chad.versace@intel.com>
|
|
Made TestResults::write() thread-safe by writing results to the file in
one chunk, so that when threaded tests are implemented there will be no
interleaving.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
|
|
This is a blacklist to complement the existing -t|--tests whitelist.
It works similarly: it accepts a regular expression and can be specified
multiple times.
|
|
This just plain looks more pleasant.
|
|
|
|
OpenSUSE 11.1 has its lspci in /sbin/, which isn't in PATH. Add a
try/except to catch any failure to open lspci or glxinfo.
|
|
|
|
|