The cpu controller testplan includes a complete set of testcases that test the
cpu controller in different scenarios.

**These testcases test the cpu controller under a flat hierarchy.**

General Note:-
==============

How to calculate expected cpu time:

Say there are n groups. Then

total share weight = grp1 shares + grp2 shares + ...... up to n grps

and the expected cpu time of any group (say grp1) is

expected time of grp1 = 100 * (grp1 shares) / (total share weight) %

(* If there are, say, 2 tasks in a group, then both tasks divide the time
 * of this group, which has no effect on tasks in other groups. So a task
 * will always get time from its own group's time only and will never
 * affect another group's share of time.
 * Even if a task is run with a different nice value, it will affect only
 * the tasks in that group.
 )

!!! The user need not worry about this calculation at all, as the test
dynamically calculates the expected cpu time of each task and prints it for
each task in the results file. The calculation is given here for reference
for interested users (see the sketch below).

For any other information please refer to cgroup.txt in the kernel
documentation.
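
As an illustration only (a sketch, not part of the test code), the following
C snippet computes the expected cpu time for the example share values used in
Test 01 below; the share values here are just example inputs:

    #include <stdio.h>

    /* Expected cpu time of one group, in percent, given its shares and
     * the sum of the shares of all groups (1 task per group assumed). */
    static double expected_time(unsigned long grp_shares,
                                unsigned long total_shares)
    {
        return 100.0 * grp_shares / total_shares;
    }

    int main(void)
    {
        /* Example: 3 groups A, B, C with 2, 4 and 6 shares. */
        unsigned long shares[] = { 2, 4, 6 };
        unsigned long total = 0;
        unsigned int i;

        for (i = 0; i < 3; i++)
            total += shares[i];

        for (i = 0; i < 3; i++)
            printf("group %u: expected cpu time = %.2f%%\n",
                   i + 1, expected_time(shares[i], total));

        return 0;    /* prints 16.67%, 33.33% and 50.00% */
    }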

TESTCASE DESCRIPTION:
=====================

Test 01: FAIRNESS TEST
----------------------

This test consists of two test cases.
The test plan for these test cases is as follows:

First of all, mount the cpu controller on /dev/cpuctl and create n groups
(a sketch of this setup is shown below). The number of groups should be
greater than the number of cpus for checking scheduling fairness, as we run
1 task per group. By default each group is assigned 1024 shares. The cpu
controller schedules the tasks in different groups on the basis of the
shares assigned to that group. So the cpu usage of a task depends on the
amount of shares its group has out of the total number of shares (no upper
limit, but a lower limit of 2) and on the number of tasks in that group (in
this case only 1). So unless this ratio (group A's shares / total shares of
all groups) changes, the cpu time for group A remains constant.
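
A minimal sketch, for illustration only, of how such a setup can be done
through the cgroup v1 interface; the mount point /dev/cpuctl matches the
description above, while the group names and share values are only examples
and not necessarily the ones used by the test:

    #include <stdio.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    /* Write a single value into a cgroup control file, e.g. cpu.shares. */
    static int write_val(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fprintf(f, "%s\n", val);
        return fclose(f);
    }

    int main(void)
    {
        /* Mount the cpu controller (cgroup v1) on /dev/cpuctl. */
        mkdir("/dev/cpuctl", 0755);
        if (mount("cpuctl", "/dev/cpuctl", "cgroup", 0, "cpu") == -1)
            perror("mount");

        /* Create two example groups with different share values. */
        mkdir("/dev/cpuctl/group_1", 0755);
        mkdir("/dev/cpuctl/group_2", 0755);
        write_val("/dev/cpuctl/group_1/cpu.shares", "2048");
        write_val("/dev/cpuctl/group_2/cpu.shares", "1024");

        /* Tasks are then moved into the groups by writing their pids
         * into each group's tasks file (see Test 05 below). */
        return 0;
    }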

Let us say we have 3 groups (1 task each) A, B, C having 2, 4 and 6 shares
respectively. If the tasks run indefinitely, they are supposed to get
16.66%, 33.33% and 50% of the cpu time respectively. This test case checks
that each group gets cpu time in the same (above) ratio irrespective of the
absolute share values, provided the ratio is unchanged, i.e. the cpu time
per group should not change if we change the shares from 2, 4, 6 to
200, 400, 600 or to 20K, 40K, 60K etc. (provided the working conditions do
not change). Thus the scheduling is proportional bandwidth scheduling and
not absolute bandwidth scheduling.

This was the test and outcome for test 01. For test 02 the setup is kept
the same. Test 02 checks whether the fairness persists across different
runs over a period of time. So in this test more than one set of readings
is taken, and the expected outcome is that the cpu time for a task remains
constant across all the runs, provided the working environment is the same
for the test.
Currently the support to create an ideal environment for all the runs is
not available in the test because of a required feature missing in the
kernel. Hence there may be some variation among different runs, depending
on the execution of system default tasks, which can run at any time.
The fix for this is supposed to be merged with the next release.

How to view the results:
------------------------

The cpu time for each group (task) is calculated in %. There are two
expected outcomes of the test:
1. A group should get cpu time in the same ratio as its shares.
2. This time should not change with changes in the share values as long as
   the ratio between those values stays the same.

NOTE: In case 1 a variation of 1-2 % is acceptable.
(here we have 1 task per group)

Test 03: GRANULARITY TEST
-------------------------
Granularity test with respect to share values.
In this test the shares value of some of the groups is increased and for
some groups it is decreased. Accordingly the expected cpu time of each task
is calculated. The expected outcome is that the calculated cpu time changes
in accordance with the change in the shares value (i.e. the calculated cpu
time and the expected cpu time should be the same).

Test 04: NICE VALUE TEST
------------------------
Renice all tasks of one group to -20 and let the tasks in all other groups
run with normal priority. The aim is to test that the effect of the nice
value stays within the group, i.e. the group shares remain dominant over
the nice values (a renice sketch is shown below).
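
For illustration only, renicing a single task could look roughly like this;
the pid used here is just a placeholder:

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t task = 1234;    /* placeholder pid of a task in the group */

        /* Renice the task to the highest priority, -20. */
        if (setpriority(PRIO_PROCESS, task, -20) == -1)
            perror("setpriority");

        return 0;
    }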

Test 05: TASK MIGRATION TEST
----------------------------
In the first run the test is run with, say, n tasks in m groups. After the
first run a task is migrated from one group to another group. This task now
consumes cpu time from the new group (a migration sketch is shown below).
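
A minimal sketch, for illustration only, of migrating the calling task into
another group by writing its pid into the destination group's tasks file
(cgroup v1 interface); the group path is only an example:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Move the calling task into the destination group by writing
         * its pid into that group's tasks file. */
        FILE *f = fopen("/dev/cpuctl/group_2/tasks", "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "%d\n", (int)getpid());
        return fclose(f);
    }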

Test 06-08: NUM GROUPS vs NUMBER OF TASKS TEST
----------------------------------------------
In the following three testcases the total number of tasks is the same and
each task is expected to get the same cpu time. These testcases test the
effect of creating more groups on fairness. (However, a latency check will
be added in the future.)

Test 06: N X M (N groups with M tasks each)
-------

Test 07: N*M X 1 (N*M groups with 1 task each)
-------

Test 08: 1 X N*M (1 group with N*M tasks)
-------

Test 09-10: STRESS TEST
----------
The next two testcases put stress on the system by creating a large number
of groups, each running a large number of tasks.

Test 09: Heavy stress test with nice value change
-------
Creates 4 windows with different NICE values. Each window runs some n
groups.

Test 10: Heavy stress test (effect of a heavy group on a light group)
-------
In this test one group has very few tasks while the others have a large
number of tasks. This tests whether fairness is still maintained.

Test 11-12: LATENCY TESTS
----------
The latency tests check whether the cpu is available (within a reasonable
amount of time) to a task that wakes up after a sleep(), while the system
is under sufficient load.

In the following two testcases we run n (NUM_TASKS, set in the script)
tasks as load tasks, which simply hog the cpu by doing sqrt calculations on
a number of type double. A latency check task is launched after these
tasks. This task sleeps frequently and measures the latency as the
difference between the actual and the expected sleep durations (see the
sketch below).
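
For illustration only, a minimal sketch of how a wakeup latency can be
measured by comparing the requested and the actual sleep duration; the
sleep duration used here is just an example value:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long sleep_ns = 1000000;    /* request a 1 ms sleep */
        struct timespec req = { 0, sleep_ns };
        struct timespec before, after;
        long slept_ns;

        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        /* Actual time spent sleeping, in nanoseconds. */
        slept_ns = (after.tv_sec - before.tv_sec) * 1000000000L +
                   (after.tv_nsec - before.tv_nsec);

        /* Latency = actual sleep duration - requested sleep duration. */
        printf("wakeup latency: %ld ns\n", slept_ns - sleep_ns);

        return 0;
    }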

In the case of test 12 the tasks run under different groups, created
dynamically depending on the number of cpus in the machine (min 2, else
1.5 * NUM_CPUS). The tasks migrate to their groups automatically before
they start hogging the cpu. The latency check task also runs under one of
the groups.

Test 11: cpuctl latency test 1
-------
This test adds one testcase for testing the latency when the group
scheduler is compiled into the kernel but not mounted (i.e. no task group
is created).

Test 12: cpuctl latency test 2
-------
This test adds one testcase for testing the latency when the group
scheduler is mounted and has tasks running in different groups.

NOTE: There is no clear consensus on the maximum latency that the scheduler
should guarantee. Latency may vary from a few milliseconds on normal
desktops to even a minute in virtual guests. However, a latency of more
than 20 ms (under normal load, as it is load dependent) is not considered
good. This test is meant to keep an eye on the maximum latency in different
kernel versions as the group scheduler develops further.

The threshold for this test is calculated from the value exported by the
kernel in /proc/sys/kernel/sched_wakeup_granularity_ns. In case the kernel
is not compiled to export it, we use the same logic as the kernel:

sysctl_sched_wakeup_granularity = (1 + ln(nr_cpus)) * default_gran_logic

To make the allowed latency more practical we multiply it by 2 (see the
sketch below).

So even if the test shows FAIL, it may not be an actual failure.
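
For illustration only, a minimal sketch of this threshold calculation; the
/proc path is the one mentioned above, while DEFAULT_GRAN_NS is only an
assumed base value for the fallback case, not necessarily the one used by
the kernel or the test:

    #include <math.h>
    #include <stdio.h>
    #include <unistd.h>

    #define PROC_WAKEUP_GRAN "/proc/sys/kernel/sched_wakeup_granularity_ns"
    #define DEFAULT_GRAN_NS  1000000.0    /* assumed base granularity (1 ms) */

    int main(void)
    {
        double gran_ns;
        FILE *f = fopen(PROC_WAKEUP_GRAN, "r");

        if (f && fscanf(f, "%lf", &gran_ns) == 1) {
            fclose(f);
        } else {
            /* Fallback: mirror the logic described above, based on the
             * number of online cpus. */
            long nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);

            gran_ns = (1 + log((double)nr_cpus)) * DEFAULT_GRAN_NS;
            if (f)
                fclose(f);
        }

        /* Allowed latency threshold: twice the wakeup granularity. */
        printf("latency threshold: %.0f ns\n", 2 * gran_ns);
        return 0;
    }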

(
In all tests 1-10 the calculated cpu time and the expected cpu time should
be the same.
Time:: calc:- calculated cpu time obtained for a running task
       exp:-  expected cpu time as per the shares of the group and the
              number of tasks in the group
)