diff mbox

[U-Boot] test/py: Add basic QSPI testing

Message ID 35bd00bddb56652007a7bac71e853f6293b2cb8e.1463493928.git.michal.simek@xilinx.com
State Rejected
Delegated to: Jagannadha Sutradharudu Teki

Commit Message

Michal Simek May 17, 2016, 2:05 p.m. UTC
This is a first attempt at testing qspi.
Detect the SPI flash size. Read it all with random sizes, erase every
block, write random data with a random size and read it back, erase the
whole qspi, and at the end load a kernel image via tftp and save it to
memory.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
---

 test/py/tests/test_qspi.py | 134 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 134 insertions(+)
 create mode 100644 test/py/tests/test_qspi.py

Comments

Stephen Warren May 17, 2016, 5:30 p.m. UTC | #1
On 05/17/2016 08:05 AM, Michal Simek wrote:
> This is a first attempt at testing qspi.
> Detect the SPI flash size. Read it all with random sizes, erase every
> block, write random data with a random size and read it back, erase the
> whole qspi, and at the end load a kernel image via tftp and save it to
> memory.

I don't see anything that downloads a kernel image via tftp. That's good 
because I'd rather not tie SPI testing to working networking; we should 
keep tests independent as much as possible.

I think this (test filename, all symbols/strings in the patch, etc.) 
should be named "sf" not "qspi" since (a) it tests serial flash 
specifically, not SPI in general (other device types besides flash might 
exist on SPI), and (b) the test looks like it will work for any SPI 
flash, not just QSPI; IIUC the U-Boot commands work identically for 
single-, dual- and quad-wide buses.

I would rather this test were explicitly parametrized via the boardenv:

* The boardenv should specify whether sf testing is expected to occur. 
The current test attempts to detect sf, and if U-Boot can't, it just 
skips the test. I'd rather the boardenv say whether sf detection should 
work, and if it fails when boardenv says it should work, the test should 
fail rather than be skipped.

* Some devices might store useful data in their SPI flash (e.g. some 
Tegra devices boot from SPI) and we should not destroy that data in this 
test. As such, the boardenv should indicate whether the test should 
erase anything at all, and specify what part(s) of the flash can be 
erased if so (e.g. offset/size of a writable region, or a flag to 
indicate whether erasing the entire flash can be tested).

* A board might have multiple sf chips attached. The boardenv could list 
which to test, whereas this test currently only tests the default 
device, I think.

> diff --git a/test/py/tests/test_qspi.py b/test/py/tests/test_qspi.py

> +# Find out qspi memory parameters
> +def qspi_pre_commands(u_boot_console):
> +    output = u_boot_console.run_command('sf probe')
> +    if not "SF: Detected" in output:
> +        pytest.skip('No QSPI device available')

I think that should be an error.

> +    m = re.search('page size (.+?) Bytes', output)
> +    if m:
> +        try:
> +            global page_size
> +            page_size = int(m.group(1))
> +        except ValueError:
> +            pytest.fail("QSPI page size not recognized")
> +
> +        print 'Page size is: ' + str(page_size) + " B"

page_size appears to be used unconditionally elsewhere. I think that 
should be written as:

m = re.search(...)
if not m:
    raise Exception(...)
try:
    global page_size
...

I wonder if it makes sense to apply further validation to these values. 
i.e. at least check the sizes aren't zero or at least some minimal size 
and alignment, so that later test code is guaranteed not to do nothing 
or fail in unexpected ways. Perhaps the boardenv should list the values 
that the HW is expected to show, so the test can validate that the probe 
command generates the expected output? I'm not totally sure on that last 
point, but maybe.

Rather than using print, perhaps use u_boot_console.log.info() so that 
the text shows up separately from any stream content in the log file?
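For instance, a parsing helper along these lines would fold the match
check, the zero-value validation, and the conversion into one place (the
helper name and the sample probe line below are just illustrative, not
taken from the patch):

```python
import re

def parse_sf_param(output, pattern, name):
    # Extract one numeric parameter from 'sf probe' output; raise
    # (i.e. error out rather than skip) if it is missing or zero.
    m = re.search(pattern, output)
    if not m:
        raise Exception('sf probe output lacks %s' % name)
    value = int(m.group(1))
    if not value:
        raise Exception('sf %s is zero' % name)
    return value

# Sample probe line, for illustration only:
output = 'SF: Detected n25q512a with page size 256 Bytes, erase size 64 KiB, total 64 MiB'
page_size = parse_sf_param(output, r'page size (\d+) Bytes', 'page size')
erase_size = parse_sf_param(output, r'erase size (\d+) KiB', 'erase size') * 1024
total_size = parse_sf_param(output, r'total (\d+) MiB', 'total size') * 1024 * 1024
```

In the real test the informational message would then go through
u_boot_console.log.info() rather than print.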

> +    global qspi_detected
> +    qspi_detected = True

I think you can drop that. Rather, qspi_pre_commands should raise an 
exception (either error or skip) in any case where the test should not 
continue ...

> +# Read the whole QSPI flash twice, random_size till full flash size, random till page size
> +@pytest.mark.buildconfigspec('cmd_sf')
> +@pytest.mark.buildconfigspec('cmd_memory')
> +def test_qspi_read_twice(u_boot_console):
> +    qspi_pre_commands(u_boot_console)
> +
> +    if not qspi_detected:
> +        pytest.skip('QSPI not detected')

... which would allow that if condition to be removed from all test 
functions.

> +    expected_read = "Read: OK"

I'm not totally sure it's worth putting that into a string; while in 
this case it slightly reduces duplication (although in other cases in 
this file where the same style is used, there's no duplication being 
avoided), it makes it harder to see what the asserts are actually 
checking later in the code.

Simon Glass has requested that all strings use ' not ", except perhaps 
where there's an embedded ' in the string. This comment applies 
elsewhere in the file too.

> +    # TODO maybe add alignment and different start for pages
> +    for size in random.randint(4, page_size), random.randint(4, total_size), total_size:

I'm not sure it's a good idea for tests to use random numbers and hence 
operate differently each time. This can lead to intermittent failures 
that are hard to track down. Rather, if there are known corner-cases, 
the test should always test them all. Take a look at the DFU test (and 
its example boardenv content) for how that's handled there.

> +        # FIXME using 0 is failing for me
> +        output = u_boot_console.run_command('sf read %x 0 %x' % (addr + total_size, size))

"using 0" for what fails? That's confusing since the second parameter is 
"0" so I wonder if this test fails or passes for you? Perhaps that 
comment applies to the address parameter? If so, explicitly mentioning 
the parameter name would be useful to remove the ambiguity. Are you 
investigating the problem?

> +        assert expected_read in output
> +        output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size, size))
> +        m = re.search('==> (.+?)', output)
> +        if not m:
> +            pytest.fail("CRC32 failed")
> +        expected_crc32 = m.group(1)
> +        output = u_boot_console.run_command('sf read %x 0 %x' % (addr + total_size + 10, size))
> +        assert expected_read in output
> +        output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size + 10, size))
> +        assert expected_crc32 in output

That just proves that the SPI read wrote the same data into memory both 
times, or failed to write anything both times thus leaving the same 
pre-existing content in memory each time. It would be better to at least 
clear the memory before each sf read (via mw I imagine) to ensure that 
sf read actually wrote something to memory, and even better to compare 
against some known crc32 rather than just assuming that anything read 
from SPI is correct. The latter could be achieved by either:

a) In boardenv, specify the offset/length/crc32 of a region to be 
verified by read. This would make sense if at least part of the flash 
content is invariant across test runs.

b) Do a write-then-read test of known data. Something like:

1) erase the region
2) read the region, validate crc mismatches (i.e. the erase worked)
3) write the region
4) read the region, validate crc matches written data

Make sure that in steps 2 and 4, the memory buffer used is cleared 
before the sf read to guarantee that sf read actually read the data, 
rather than simply leaving existing RAM content in place.
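As a sketch, the whole sequence could look like this (the helper names
and the 0xaa fill value are illustrative; zlib computes the expected
CRCs host-side, in the same 8-hex-digit form U-Boot's crc32 prints):

```python
import zlib

def host_crc32(data):
    # CRC32 formatted like U-Boot's 'crc32' command output
    return '%08x' % (zlib.crc32(data) & 0xffffffff)

def sf_write_read_test(ubc, ram_addr, offset, size, fill=0xaa):
    # ubc is the u_boot_console fixture; the commands are standard
    # U-Boot 'sf'/'mw'/'crc32' usage, but the helper itself is a sketch.
    # 1) erase the region
    assert 'Erased: OK' in ubc.run_command('sf erase %x %x' % (offset, size))
    # clear the RAM buffer so 'sf read' must actually write it
    ubc.run_command('mw.b %x 0 %x' % (ram_addr, size))
    # 2) read back; erased flash must be all 0xff
    assert 'Read: OK' in ubc.run_command('sf read %x %x %x' % (ram_addr, offset, size))
    assert host_crc32(b'\xff' * size) in ubc.run_command('crc32 %x %x' % (ram_addr, size))
    # 3) write known data
    ubc.run_command('mw.b %x %x %x' % (ram_addr, fill, size))
    assert 'Written: OK' in ubc.run_command('sf write %x %x %x' % (ram_addr, offset, size))
    # 4) clear the buffer again, read back, compare against the known CRC
    ubc.run_command('mw.b %x 0 %x' % (ram_addr, size))
    assert 'Read: OK' in ubc.run_command('sf read %x %x %x' % (ram_addr, offset, size))
    assert host_crc32(bytes([fill]) * size) in ubc.run_command('crc32 %x %x' % (ram_addr, size))
```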

> +@pytest.mark.buildconfigspec('cmd_sf')
> +def test_qspi_erase_all(u_boot_console):
> +    qspi_pre_commands(u_boot_console)
> +
> +    if not qspi_detected:
> +        pytest.skip('QSPI not detected')
> +
> +    expected_erase = "Erased: OK"
> +    start = 0
> +    output = u_boot_console.run_command('sf erase 0 ' + str(hex(total_size)))
> +    assert expected_erase in output

That should probably read the region and validate that the erase 
actually happened?
Michal Simek May 17, 2016, 5:55 p.m. UTC | #2
On 17.5.2016 19:30, Stephen Warren wrote:
> On 05/17/2016 08:05 AM, Michal Simek wrote:
>> This is a first attempt at testing qspi.
>> Detect the SPI flash size. Read it all with random sizes, erase every
>> block, write random data with a random size and read it back, erase the
>> whole qspi, and at the end load a kernel image via tftp and save it to
>> memory.
> 
> I don't see anything that downloads a kernel image via tftp. That's good
> because I'd rather not tie SPI testing to working networking; we should
> keep tests independent as much as possible.

Ah yeah. I had that in there, but removed it before sending this out.

> 
> I think this (test filename, all symbols/strings in the patch, etc.)
> should be named "sf" not "qspi" since (a) it tests serial flash
> specifically, not SPI in general (other device types besides flash might
> exist on SPI), and (b) the test looks like it will work for any SPI
> flash, not just QSPI; IIUC the U-Boot commands work identically for
> single-, dual- and quad-wide buses.

ok. No problem to change it.


> 
> I would rather this test were explicitly parametrized via the boardenv:
> 
> * The boardenv should specify whether sf testing is expected to occur.
> The current test attempts to detect sf, and if U-Boot can't, it just
> skips the test. I'd rather the boardenv say whether sf detection should
> work, and if it fails when boardenv says it should work, the test should
> fail rather than be skipped.

ok. Good idea. It can be added as a separate patch on top of this.


> 
> * Some devices might store useful data in their SPI flash (e.g. some
> Tegra devices boot from SPI) and we should not destroy that data in this
> test. As such, the boardenv should indicate whether the test should
> erase anything at all, and specify what part(s) of the flash can be
> erased if so (e.g. offset/size of a writable region, or a flag to
> indicate whether erasing the entire flash can be tested).
> 

Ok, good idea - add protection. Is this just space from the start?
Or does it make sense to design it in a more generic way - start/end
regions which you can't rewrite?

> * A board might have multiple sf chips attached. The boardenv could list
> which to test, whereas this test currently only tests the default
> device, I think.

It can be like that. The question is whether it makes sense to add all
of these features to this particular patch, or to create follow-up
patches on top of this one.

> 
>> diff --git a/test/py/tests/test_qspi.py b/test/py/tests/test_qspi.py
> 
>> +# Find out qspi memory parameters
>> +def qspi_pre_commands(u_boot_console):
>> +    output = u_boot_console.run_command('sf probe')
>> +    if not "SF: Detected" in output:
>> +        pytest.skip('No QSPI device available')
> 
> I think that should be an error.
> 
>> +    m = re.search('page size (.+?) Bytes', output)
>> +    if m:
>> +        try:
>> +            global page_size
>> +            page_size = int(m.group(1))
>> +        except ValueError:
>> +            pytest.fail("QSPI page size not recognized")
>> +
>> +        print 'Page size is: ' + str(page_size) + " B"
> 
> page_size appears to be used unconditionally elsewhere. I think that
> should be written as:
> 
> m = re.search(...)
> if not m:
>     raise Exception(...)
> try:
>    global page_size
> ...

ok.

> 
> I wonder if it makes sense to apply further validation to these values.
> i.e. at least check the sizes aren't zero or at least some minimal size
> and alignment, so that later test code is guaranteed not to do nothing
> or fail in unexpected ways. Perhaps the boardenv should list the values
> that the HW is expected to show, so the test can validate that the probe
> command generates the expected output? I'm not totally sure on that last
> point, but maybe.
> 
> Rather than using print, perhaps use u_boot_console.log.info() so that
> the text shows up separately from any stream content in the log file?

ok. Just keep in mind that I am not a Python expert.
I used print for debugging purposes. :-)

> 
>> +    global qspi_detected
>> +    qspi_detected = True
> 
> I think you can drop that. Rather, qspi_pre_commands should raise an
> exception (either error or skip) in any case where the test should not
> continue ...

ok


> 
>> +# Read the whole QSPI flash twice, random_size till full flash size,
>> random till page size
>> +@pytest.mark.buildconfigspec('cmd_sf')
>> +@pytest.mark.buildconfigspec('cmd_memory')
>> +def test_qspi_read_twice(u_boot_console):
>> +    qspi_pre_commands(u_boot_console)
>> +
>> +    if not qspi_detected:
>> +        pytest.skip('QSPI not detected')
> 
> ... which would allow that if condition to be removed from all test
> functions.

ok.

> 
>> +    expected_read = "Read: OK"
> 
> I'm not totally sure it's worth putting that into a string; while in
> this case it slightly reduces duplication (although in other cases in
> this file where the same style is used, there's no duplication being
> avoided), it makes it harder to see what the asserts are actually
> checking later in the code.

ok will fix.


> 
> Simon Glass has requested that all strings use ' not ", except perhaps
> where there's an embedded ' in the string. This comment applies
> elsewhere in the file too.


What do you mean by this?

> 
>> +    # TODO maybe add alignment and different start for pages
>> +    for size in random.randint(4, page_size), random.randint(4,
>> total_size), total_size:
> 
> I'm not sure it's a good idea for tests to use random numbers and hence
> operate differently each time. This can lead to intermittent failures
> that are hard to track down. Rather, if there are known corner-cases,
> the test should always test them all. Take a look at the DFU test (and
> its example boardenv content) for how that's handled there.


I chose to use random sizes to get more variants, because that can
discover problems.
I am not saying that there shouldn't be tests for fixed sizes, but I
still think that some sort of randomization is good for testing because
it can discover errors which the fixed cases don't cover.


>> +        # FIXME using 0 is failing for me
>> +        output = u_boot_console.run_command('sf read %x 0 %x' % (addr
>> + total_size, size))
> 
> "using 0" for what fails? That's confusing since the second parameter is
> "0" so I wonder if this test fails or passes for you? Perhaps that
> comment applies to the address parameter? If so, explicitly mentioning
> the parameter name would be useful to remove the ambiguity. Are you
> investigating the problem?

A size of 0 was causing the problem. That's why the range starts from 4.
I'm not sure what the standard sf behavior is if you request a size of 0.

> 
>> +        assert expected_read in output
>> +        output = u_boot_console.run_command('crc32 %x %x' % (addr +
>> total_size, size))
>> +        m = re.search('==> (.+?)', output)
>> +        if not m:
>> +            pytest.fail("CRC32 failed")
>> +        expected_crc32 = m.group(1)
>> +        output = u_boot_console.run_command('sf read %x 0 %x' % (addr
>> + total_size + 10, size))
>> +        assert expected_read in output
>> +        output = u_boot_console.run_command('crc32 %x %x' % (addr +
>> total_size + 10, size))
>> +        assert expected_crc32 in output
> 
> That just proves that the SPI read wrote the same data into memory both
> times, or failed to write anything both times thus leaving the same
> pre-existing content in memory each time. It would be better to at least
> clear the memory before each sf read (via mw I imagine) to ensure that
> sf read actually wrote something to memory, and even better to compare
> against some known crc32 rather than just assuming that anything read
> from SPI is correct. The latter could be achieved by either:

That's why there is the offset of 10. It means the second read does not
overwrite the same data.

> 
> a) In boardenv, specify the offset/length/crc32 of a region to be
> verified by read. This would make sense if at least part of the flash
> content is invariant across test runs.

What should the transfer mechanism be to get the data onto the board so
that we can say it has this crc? For ethernet this is easy; for qspi I'm
not sure.

> 
> b) Do a write-then-read test of known data. Something like:
> 
> 1) erase the region
> 2) read the region, validate crc mismatches (i.e. the erase worked)

This is a good one. I should definitely check that the flash was erased.

> 3) write the region
> 4) read the region, validate crc matches written data
> 
> Make sure that in steps 2 and 4, the memory buffer used is cleared
> before the sf read to guarantee that sf read actually read the data,
> rather than simply leaving existing RAM content in place.

Is this really needed? I use a small offset so that the data is not
saved to the same address range.

> 
>> +@pytest.mark.buildconfigspec('cmd_sf')
>> +def test_qspi_erase_all(u_boot_console):
>> +    qspi_pre_commands(u_boot_console)
>> +
>> +    if not qspi_detected:
>> +        pytest.skip('QSPI not detected')
>> +
>> +    expected_erase = "Erased: OK"
>> +    start = 0
>> +    output = u_boot_console.run_command('sf erase 0 ' +
>> str(hex(total_size)))
>> +    assert expected_erase in output
> 
> That should probably read the region and validate that the erase
> actually happened?

Agree. A read-back and check should be there.
Is using 'mw addr val size', crc over it, erase, read back, crc, compare
good for you?

Thanks,
Michal
Stephen Warren May 17, 2016, 7:27 p.m. UTC | #3
On 05/17/2016 11:55 AM, Michal Simek wrote:
> On 17.5.2016 19:30, Stephen Warren wrote:
>> On 05/17/2016 08:05 AM, Michal Simek wrote:
>>> This is a first attempt at testing qspi.
>>> Detect the SPI flash size. Read it all with random sizes, erase every
>>> block, write random data with a random size and read it back, erase the
>>> whole qspi, and at the end load a kernel image via tftp and save it to
>>> memory.
...
>> I would rather this test were explicitly parametrized via the boardenv:
>>
>> * The boardenv should specify whether sf testing is expected to occur.
>> The current test attempts to detect sf, and if U-Boot can't, it just
>> skips the test. I'd rather the boardenv say whether sf detection should
>> work, and if it fails when boardenv says it should work, the test should
>> fail rather than be skipped.
>
> ok. Good idea. It can be added as a separate patch on top of this.

My naive expectation is that the patch would change so much that you may 
as well fold it all into a revised patch.

>> * Some devices might store useful data in their SPI flash (e.g. some
>> Tegra devices boot from SPI) and we should not destroy that data in this
>> test. As such, the boardenv should indicate whether the test should
>> erase anything at all, and specify what part(s) of the flash can be
>> erased if so (e.g. offset/size of a writable region, or a flag to
>> indicate whether erasing the entire flash can be tested).
>>
>
> Ok, good idea - add protection. Is this just space from the start?
> Or does it make sense to design it in a more generic way - start/end
> regions which you can't rewrite?

It's probably best to have the boardenv explicitly list the region(s) 
which can be written, rather than acting as an exclusion list. That way 
it's directly specified where writes will occur. I'd suggest an 
offset/length for the/each writeable region.

Perhaps something like:

env__sf_tests = (
    {
        "id": "0", # "sf probe" parameter
        "offset": 0x1000,
        "size": 0x1000,
        "writeable": True, # Trigger erase/write/read test
    },
    {
        "id": "1:2", # "sf probe" parameter
        "offset": 0x10000,
        "size": 0x1000,
        # Lack of writeable: True triggers just a read test
        # The following is then used to validate the read data
        "read_crc32": 0x1234,
    },
)
?
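Consuming that from a test could then be as simple as the sketch below
(the helper name is hypothetical; u_boot_console.config.env is how
test/py exposes boardenv content to tests):

```python
def sf_test_params(ubc):
    # Return (id, offset, size, writeable) tuples from the boardenv;
    # an empty list means the board declared no sf test regions, which
    # the caller can turn into a skip or a failure as appropriate.
    entries = ubc.config.env.get('env__sf_tests', ())
    return [(e['id'], e['offset'], e['size'], e.get('writeable', False))
            for e in entries]
```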

>> Simon Glass has requested that all strings use ' not ", except perhaps
>> where there's an embedded ' in the string. This comment applies
>> elsewhere in the file too.
>
> What do you mean by this?

Python allows you to use ' or " to delimit strings; 'abc' and "abc" mean 
the same thing. Simon requested we standardize on using ' not ".

>>> +    # TODO maybe add alignment and different start for pages
>>> +    for size in random.randint(4, page_size), random.randint(4,
>>> total_size), total_size:
>>
>> I'm not sure it's a good idea for tests to use random numbers and hence
>> operate differently each time. This can lead to intermittent failures
>> that are hard to track down. Rather, if there are known corner-cases,
>> the test should always test them all. Take a look at the DFU test (and
>> its example boardenv content) for how that's handled there.
>
> I chose to use random sizes to get more variants, because that can
> discover problems.
> I am not saying that there shouldn't be tests for fixed sizes, but I
> still think that some sort of randomization is good for testing because
> it can discover errors which the fixed cases don't cover.

Yes, randomization can discover additional problems. This means that 
repeatedly running the same test on the same SW/HW can see different 
results. Personally, I'd rather avoid such intermittency, and use 
test/py as a regression test that covers an explicit set of conditions 
that are known to work.

I think we can achieve both our desires by having the test code NOT 
perform any randomization, having the code take the list of tests to 
perform from boardenv (via an example data structure similar to what I 
quoted above), and since the boardenv is itself Python code, I can write 
"0x1000" for the test size, whereas you could import the random module 
into your boardenv, and write "random.randint(4, 0x1000)" in one entry 
in the list there, "random.randint(4, 0x100000)" in another, and so on.
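i.e. a boardenv fragment along these lines (values are illustrative):

```python
import random

# Fixed sizes give a stable regression test; a board maintainer who
# wants fuzzing can add randomized entries, since boardenv is plain
# Python and is evaluated per test run.
env__sf_tests = (
    {'id': '0', 'offset': 0x0, 'size': 0x1000},
    {'id': '0', 'offset': 0x0, 'size': random.randint(4, 0x1000)},
)
```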

>>> +        # FIXME using 0 is failing for me
>>> +        output = u_boot_console.run_command('sf read %x 0 %x' % (addr
>>> + total_size, size))
>>
>> "using 0" for what fails? That's confusing since the second parameter is
>> "0" so I wonder if this test fails or passes for you? Perhaps that
>> comment applies to the address parameter? If so, explicitly mentioning
>> the parameter name would be useful to remove the ambiguity. Are you
>> investigating the problem?
>
> A size of 0 was causing the problem. That's why the range starts from 4.
> I'm not sure what the standard sf behavior is if you request a size of 0.

Ah. I would expect a size of 0 to be rejected, since it implies 
reading/writing nothing. It sounds like that comment applies more to the 
randint() calls in the for loop rather than the run_command() function then?

>>> +        assert expected_read in output
>>> +        output = u_boot_console.run_command('crc32 %x %x' % (addr +
>>> total_size, size))
>>> +        m = re.search('==> (.+?)', output)
>>> +        if not m:
>>> +            pytest.fail("CRC32 failed")
>>> +        expected_crc32 = m.group(1)
>>> +        output = u_boot_console.run_command('sf read %x 0 %x' % (addr
>>> + total_size + 10, size))
>>> +        assert expected_read in output
>>> +        output = u_boot_console.run_command('crc32 %x %x' % (addr +
>>> total_size + 10, size))
>>> +        assert expected_crc32 in output
>>
>> That just proves that the SPI read wrote the same data into memory both
>> times, or failed to write anything both times thus leaving the same
>> pre-existing content in memory each time. It would be better to at least
>> clear the memory before each sf read (via mw I imagine) to ensure that
>> sf read actually wrote something to memory, and even better to compare
>> against some known crc32 rather than just assuming that anything read
>> from SPI is correct. The latter could be achieved by either:
>
> That's why there is the offset of 10. It means the second read does not
> overwrite the same data.

Ah right, I hadn't noticed that part. Still, I'd suggest that just
clearing the memory before the read would be just as good, plus it would
validate the entire written data.

>> a) In boardenv, specify the offset/length/crc32 of a region to be
>> verified by read. This would make sense if at least part of the flash
>> content is invariant across test runs.
>
> What should the transfer mechanism be to get the data onto the board so
> that we can say it has this crc? For ethernet this is easy; for qspi I'm
> not sure.

For a read-only test of flash content, I would expect the user to 
pre-configure the flash content before running test/py. This would be 
the equivalent of ensuring that a TFTP server's data directory contained 
the filename that the test attempts to download.

>> b) Do a write-then-read test of known data. Something like:
>>
>> 1) erase the region
>> 2) read the region, validate crc mismatches (i.e. the erase worked)
>
> This is a good one. I should definitely check that the flash was erased.
>
>> 3) write the region
>> 4) read the region, validate crc matches written data
>>
>> Make sure that in steps 2 and 4, the memory buffer used is cleared
>> before the sf read to guarantee that sf read actually read the data,
>> rather than simply leaving existing RAM content in place.
>
> Is this really needed? I use a small offset so that the data is not
> saved to the same address range.

Perhaps not, but with the erase, you can validate the entire data. That 
said, I suppose it's fairly unlikely a bug could exist that caused a 
problem only in the first 10 bytes, so perhaps it's fine as-is.

>>> +@pytest.mark.buildconfigspec('cmd_sf')
>>> +def test_qspi_erase_all(u_boot_console):
>>> +    qspi_pre_commands(u_boot_console)
>>> +
>>> +    if not qspi_detected:
>>> +        pytest.skip('QSPI not detected')
>>> +
>>> +    expected_erase = "Erased: OK"
>>> +    start = 0
>>> +    output = u_boot_console.run_command('sf erase 0 ' + str(hex(total_size)))
>>> +    assert expected_erase in output
>>
>> That should probably read the region and validate that the erase
>> actually happened?
>
> Agree. A read-back and check should be there.
> Is using 'mw addr val size', crc over it, erase, read back, crc, compare
> good for you?

I don't think you need to CRC after the mw, since the mw only exists to 
ensure that the memory content doesn't match what you expect to read, 
and you know exactly what you expect to read (all 0xff) so can mw 
something else. So, I think an erase test would be:

mw addr val size # val anything other than 0xff
erase
read back to addr
CRC addr size
Calculate CRC of that many 0xff's in Python only
Compare the two CRCs.
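The Python-only step is trivial with zlib, which produces the same
8-hex-digit form that U-Boot's crc32 command prints (helper name is
illustrative):

```python
import zlib

def erased_crc32(size):
    # CRC32 that U-Boot's 'crc32' command should report for an erased
    # (all-0xff) region of the given size
    return '%08x' % (zlib.crc32(b'\xff' * size) & 0xffffffff)
```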

Patch

diff --git a/test/py/tests/test_qspi.py b/test/py/tests/test_qspi.py
new file mode 100644
index 000000000000..f8dea6de83da
--- /dev/null
+++ b/test/py/tests/test_qspi.py
@@ -0,0 +1,134 @@ 
+# Copyright (c) 2016, Xilinx Inc. Michal Simek
+#
+# SPDX-License-Identifier: GPL-2.0
+
+import pytest
+import re
+import random
+import u_boot_utils
+
+qspi_detected = False
+page_size = 0
+erase_size = 0
+total_size = 0
+
+# Find out qspi memory parameters
+def qspi_pre_commands(u_boot_console):
+    output = u_boot_console.run_command('sf probe')
+    if not "SF: Detected" in output:
+        pytest.skip('No QSPI device available')
+
+    m = re.search('page size (.+?) Bytes', output)
+    if m:
+        try:
+            global page_size
+            page_size = int(m.group(1))
+        except ValueError:
+            pytest.fail("QSPI page size not recognized")
+
+        print 'Page size is: ' + str(page_size) + " B"
+
+    m = re.search('erase size (.+?) KiB', output)
+    if m:
+        try:
+           global erase_size
+           erase_size = int(m.group(1))
+        except ValueError:
+           pytest.fail("QSPI erase size not recognized")
+
+        erase_size *= 1024
+        print 'Erase size is: ' + str(erase_size) + " B"
+
+    m = re.search('total (.+?) MiB', output)
+    if m:
+        try:
+            global total_size
+            total_size = int(m.group(1))
+        except ValueError:
+            pytest.fail("QSPI total size not recognized")
+
+        total_size *= 1024 * 1024
+        print 'Total size is: ' + str(total_size) + " B"
+
+    global qspi_detected
+    qspi_detected = True
+
+# Read the whole QSPI flash twice, random_size till full flash size, random till page size
+@pytest.mark.buildconfigspec('cmd_sf')
+@pytest.mark.buildconfigspec('cmd_memory')
+def test_qspi_read_twice(u_boot_console):
+    qspi_pre_commands(u_boot_console)
+
+    if not qspi_detected:
+        pytest.skip('QSPI not detected')
+
+    expected_read = "Read: OK"
+
+    # TODO maybe add alignment and different start for pages
+    for size in random.randint(4, page_size), random.randint(4, total_size), total_size:
+        addr = u_boot_utils.find_ram_base(u_boot_console)
+        # FIXME using 0 is failing for me
+        output = u_boot_console.run_command('sf read %x 0 %x' % (addr + total_size, size))
+        assert expected_read in output
+        output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size, size))
+        m = re.search('==> (.+?)', output)
+        if not m:
+            pytest.fail("CRC32 failed")
+        expected_crc32 = m.group(1)
+        output = u_boot_console.run_command('sf read %x 0 %x' % (addr + total_size + 10, size))
+        assert expected_read in output
+        output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size + 10, size))
+        assert expected_crc32 in output
+
+# This test checks boundary crossing for dual/parallel configurations
+@pytest.mark.buildconfigspec('cmd_sf')
+def test_qspi_erase_block(u_boot_console):
+    qspi_pre_commands(u_boot_console)
+
+    if not qspi_detected:
+        pytest.skip('QSPI not detected')
+
+    expected_erase = "Erased: OK"
+    for start in range(0, total_size, erase_size):
+        output = u_boot_console.run_command('sf erase %x %x' % (start, erase_size))
+        assert expected_erase in output
+
+# Random write till page size, random till size and full size
+@pytest.mark.buildconfigspec('cmd_sf')
+@pytest.mark.buildconfigspec('cmd_memory')
+def test_qspi_write_twice(u_boot_console):
+    qspi_pre_commands(u_boot_console)
+
+    if not qspi_detected:
+        pytest.skip('QSPI not detected')
+
+    expected_write = "Written: OK"
+    expected_read = "Read: OK"
+
+    # TODO maybe add alignment and different start for pages
+    for size in random.randint(4, page_size), random.randint(page_size, total_size), total_size:
+        addr = u_boot_utils.find_ram_base(u_boot_console)
+        output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size, size))
+        m = re.search('==> (.+?)', output)
+        if not m:
+            pytest.fail("CRC32 failed")
+        expected_crc32 = m.group(1)
+
+        output = u_boot_console.run_command('sf write %x 0 %x' % (addr + total_size, size))
+        assert expected_write in output
+        output = u_boot_console.run_command('sf read %x 0 %x' % (addr + total_size + 10, size))
+        assert expected_read in output
+        output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size + 10, size))
+        assert expected_crc32 in output
+
+@pytest.mark.buildconfigspec('cmd_sf')
+def test_qspi_erase_all(u_boot_console):
+    qspi_pre_commands(u_boot_console)
+
+    if not qspi_detected:
+        pytest.skip('QSPI not detected')
+
+    expected_erase = "Erased: OK"
+    start = 0
+    output = u_boot_console.run_command('sf erase 0 ' + str(hex(total_size)))
+    assert expected_erase in output