[U-Boot,v4,04/16] sandbox: smbios: Update to support sandbox

Message ID 20180516154233.21457-5-sjg@chromium.org
State Superseded
Delegated to: Alexander Graf
Series efi: Enable basic sandbox support for EFI loader

Commit Message

Simon Glass May 16, 2018, 3:42 p.m. UTC
At present this code casts addresses to pointers so cannot be used with
sandbox. Update it to use mapmem instead.

Signed-off-by: Simon Glass <sjg@chromium.org>
---

Changes in v4: None
Changes in v3:
- Drop incorrect map_sysmem() in write_smbios_table()

Changes in v2: None

 lib/smbios.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

Comments

Alexander Graf May 24, 2018, 12:24 p.m. UTC | #1
On 16.05.18 17:42, Simon Glass wrote:
> At present this code casts addresses to pointers so cannot be used with
> sandbox. Update it to use mapmem instead.
> 
> Signed-off-by: Simon Glass <sjg@chromium.org>

I really dislike the whole fact that you have to call map_sysmem() at
all. I don't quite understand the whole point of it either - it just
seems to clutter the code and make it harder to follow.

Can't we just simply make sandbox behave like any other target instead?


Alex
Simon Glass May 25, 2018, 2:42 a.m. UTC | #2
Hi Alex,

On 24 May 2018 at 06:24, Alexander Graf <agraf@suse.de> wrote:
>
>
> On 16.05.18 17:42, Simon Glass wrote:
>> At present this code casts addresses to pointers so cannot be used with
>> sandbox. Update it to use mapmem instead.
>>
>> Signed-off-by: Simon Glass <sjg@chromium.org>
>
> I really dislike the whole fact that you have to call map_sysmem() at
> all. I don't quite understand the whole point of it either - it just
> seems to clutter the code and make it harder to follow.

The purpose is to map U-Boot addresses (e.g. 0x1234) to actual
user-space addresses in sandbox (gd->arch.ram_buf + 0x1234).

Otherwise we cannot write tests which use particular addresses, and
people have to worry about the host memory layout when using sandbox.
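
To make that concrete, here is a simplified sketch of the idea (not
the actual U-Boot code; the real definitions live in include/mapmem.h
and the sandbox architecture code, and ram_buf here stands in for
gd->arch.ram_buf):

#include <stdint.h>

extern uint8_t *ram_buf;	/* sandbox's allocated "RAM" buffer */

static inline void *map_sysmem(uintptr_t paddr, unsigned long len)
{
	(void)len;		/* fuller versions use len for tracking */
#ifdef CONFIG_SANDBOX
	return ram_buf + paddr;	/* e.g. 0x1234 -> ram_buf + 0x1234 */
#else
	return (void *)paddr;	/* the address already is a pointer */
#endif
}

static inline void unmap_sysmem(const void *vaddr)
{
	(void)vaddr;		/* nothing to undo in this sketch */
}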

>
> Can't we just simply make sandbox behave like any other target instead?

Actually that's the goal of the sandbox support. Memory is modelled as
a contiguous chunk starting at 0x0, regardless of what the OS actually
gives U-Boot in terms of addresses.

Regards,
Simon
Alexander Graf June 3, 2018, 12:13 p.m. UTC | #3
On 25.05.18 04:42, Simon Glass wrote:
> Hi Alex,
> 
> On 24 May 2018 at 06:24, Alexander Graf <agraf@suse.de> wrote:
>>
>>
>> On 16.05.18 17:42, Simon Glass wrote:
>>> At present this code casts addresses to pointers so cannot be used with
>>> sandbox. Update it to use mapmem instead.
>>>
>>> Signed-off-by: Simon Glass <sjg@chromium.org>
>>
>> I really dislike the whole fact that you have to call map_sysmem() at
>> all. I don't quite understand the whole point of it either - it just
>> seems to clutter the code and make it harder to follow.
> 
> The purpose is to map U-Boot addresses (e.g. 0x1234) to actual
> user-space addresses in sandbox (gd->arch.ram_buf + 0x1234).
> 
> Otherwise we cannot write tests which use particular addresses, and
> people have to worry about the host memory layout when using sandbox.

Not if we write a smart enough linker script. I can try to put
together an example when I get around to it. But basically all we
need to do is reserve a section for guest RAM at a constant virtual
address.
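
For illustration, the same effect can be sketched at run time with an
mmap() at a fixed address (this is not existing U-Boot code, and the
base address is an arbitrary example; a linker-script section would
reserve the region at link time instead):

#include <stdio.h>
#include <sys/mman.h>

#define RAM_BASE ((void *)0x10000000UL)	/* constant on every run */
#define RAM_SIZE (256 * 1024 * 1024)

int main(void)
{
	/*
	 * MAP_FIXED places the mapping exactly at RAM_BASE (and would
	 * silently replace anything already mapped there; Linux's
	 * MAP_FIXED_NOREPLACE is the safer variant).
	 */
	void *ram = mmap(RAM_BASE, RAM_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

	if (ram == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("guest RAM at %p on every invocation\n", ram);

	return 0;
}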

>> Can't we just simply make sandbox behave like any other target instead?
> 
> Actually that's the goal of the sandbox support. Memory is modelled as
> a contiguous chunk starting at 0x0, regardless of what the OS actually
> gives U-Boot in terms of addresses.

Most platforms don't have RAM start at 0x0 (and making sure nobody
assumes it does start at 0 is a good thing). The only bit we need to
make sure is that it always starts at *the same* address on every
invocation. But if that address is 256MB, things should still be fine.


Alex
Simon Glass June 7, 2018, 8:25 p.m. UTC | #4
Hi Alex,

On 3 June 2018 at 04:13, Alexander Graf <agraf@suse.de> wrote:
>
>
> On 25.05.18 04:42, Simon Glass wrote:
>> Hi Alex,
>>
>> On 24 May 2018 at 06:24, Alexander Graf <agraf@suse.de> wrote:
>>>
>>>
>>> On 16.05.18 17:42, Simon Glass wrote:
>>>> At present this code casts addresses to pointers so cannot be used with
>>>> sandbox. Update it to use mapmem instead.
>>>>
>>>> Signed-off-by: Simon Glass <sjg@chromium.org>
>>>
>>> I really dislike the whole fact that you have to call map_sysmem() at
>>> all. I don't quite understand the whole point of it either - it just
>>> seems to clutter the code and make it harder to follow.
>>
>> The purpose is to map U-Boot addresses (e.g. 0x1234) to actual
>> user-space addresses in sandbox (gd->arch.ram_buf + 0x1234).
>>
>> Otherwise we cannot write tests which use particular addresses, and
>> people have to worry about the host memory layout when using sandbox.
>
> Not if we write a smart enough linker script. I can try to put
> together an example when I get around to it. But basically all we
> need to do is reserve a section for guest RAM at a constant virtual
> address.

Yes, but ideally that would be 0, or something small.

>
>>> Can't we just simply make sandbox behave like any other target instead?
>>
>> Actually that's the goal of the sandbox support. Memory is modelled as
>> a contiguous chunk starting at 0x0, regardless of what the OS actually
>> gives U-Boot in terms of addresses.
>
> Most platforms don't have RAM start at 0x0 (and making sure nobody
> assumes it does start at 0 is a good thing). The only bit we need to
> make sure is that it always starts at *the same* address on every
> invocation. But if that address is 256MB, things should still be fine.

Yes, but putting a 0x10000000 base address on everything is a bit of a pain.

Regards,
Simon
Alexander Graf June 7, 2018, 8:36 p.m. UTC | #5
On 07.06.18 22:25, Simon Glass wrote:
> Hi Alex,
> 
> On 3 June 2018 at 04:13, Alexander Graf <agraf@suse.de> wrote:
>>
>>
>> On 25.05.18 04:42, Simon Glass wrote:
>>> Hi Alex,
>>>
>>> On 24 May 2018 at 06:24, Alexander Graf <agraf@suse.de> wrote:
>>>>
>>>>
>>>> On 16.05.18 17:42, Simon Glass wrote:
>>>>> At present this code casts addresses to pointers so cannot be used with
>>>>> sandbox. Update it to use mapmem instead.
>>>>>
>>>>> Signed-off-by: Simon Glass <sjg@chromium.org>
>>>>
>>>> I really dislike the whole fact that you have to call map_sysmem() at
>>>> all. I don't quite understand the whole point of it either - it just
>>>> seems to clutter the code and make it harder to follow.
>>>
>>> The purpose is to map U-Boot addresses (e.g. 0x1234) to actual
>>> user-space addresses in sandbox (gd->arch.ram_buf + 0x1234).
>>>
>>> Otherwise we cannot write tests which use particular addresses, and
>>> people have to worry about the host memory layout when using sandbox.
>>
>> Not if we write a smart enough linker script. I can try to put
>> together an example when I get around to it. But basically all we
>> need to do is reserve a section for guest RAM at a constant virtual
>> address.
> 
> Yes, but ideally that would be 0, or something small.

You can't do 0 because 0 is protected on a good number of OSs. And if it
can't be 0, better use something that makes pointers easy to read.

> 
>>
>>>> Can't we just simply make sandbox behave like any other target instead?
>>>
>>> Actually that's the goal of the sandbox support. Memory is modelled as
>>> a contiguous chunk starting at 0x0, regardless of what the OS actually
>>> gives U-Boot in terms of addresses.
>>
>> Most platforms don't have RAM start at 0x0 (and making sure nobody
>> assumes it does start at 0 is a good thing). The only bit we need to
>> make sure is that it always starts at *the same* address on every
>> invocation. But if that address is 256MB, things should still be fine.
> 
> Yes, but putting a 0x10000000 base address on everything is a bit of a pain.

Why? It's what we do on arm systems that have ram starting at higher
offsets already.

Alex
Simon Glass June 7, 2018, 8:41 p.m. UTC | #6
Hi Alex,

On 7 June 2018 at 12:36, Alexander Graf <agraf@suse.de> wrote:
>
>
> On 07.06.18 22:25, Simon Glass wrote:
>> Hi Alex,
>>
>> On 3 June 2018 at 04:13, Alexander Graf <agraf@suse.de> wrote:
>>>
>>>
>>> On 25.05.18 04:42, Simon Glass wrote:
>>>> Hi Alex,
>>>>
>>>> On 24 May 2018 at 06:24, Alexander Graf <agraf@suse.de> wrote:
>>>>>
>>>>>
>>>>> On 16.05.18 17:42, Simon Glass wrote:
>>>>>> At present this code casts addresses to pointers so cannot be used with
>>>>>> sandbox. Update it to use mapmem instead.
>>>>>>
>>>>>> Signed-off-by: Simon Glass <sjg@chromium.org>
>>>>>
>>>>> I really dislike the whole fact that you have to call map_sysmem() at
>>>>> all. I don't quite understand the whole point of it either - it just
>>>>> seems to clutter the code and make it harder to follow.
>>>>
>>>> The purpose is to map U-Boot addresses (e.g. 0x1234) to actual
>>>> user-space addresses in sandbox (gd->arch.ram_buf + 0x1234).
>>>>
>>>> Otherwise we cannot write tests which use particular addresses, and
>>>> people have to worry about the host memory layout when using sandbox.
>>>
>>> Not if we write a smart enough linker script. I can try to put
>>> together an example when I get around to it. But basically all we
>>> need to do is reserve a section for guest RAM at a constant virtual
>>> address.
>>
>> Yes, but ideally that would be 0, or something small.
>
> You can't do 0 because 0 is protected on a good number of OSs. And if it
> can't be 0, better use something that makes pointers easy to read.

Yes this is one reason for map_sysmem().

>
>>
>>>
>>>>> Can't we just simply make sandbox behave like any other target instead?
>>>>
>>>> Actually that's the goal of the sandbox support. Memory is modelled as
>>>> a contiguous chunk starting at 0x0, regardless of what the OS actually
>>>> gives U-Boot in terms of addresses.
>>>
>>> Most platforms don't have RAM start at 0x0 (and making sure nobody
>>> assumes it does start at 0 is a good thing). The only bit we need to
>>> make sure is that it always starts at *the same* address on every
>>> invocation. But if that address is 256MB, things should still be fine.
>>
>> Yes, but putting a 0x10000000 base address on everything is a bit of a pain.
>
> Why? It's what we do on arm systems that have ram starting at higher
> offsets already.

It's a pain because you have to type 1 and 5-6 zeroes before you can
get to the address you want. Otherwise sandbox just segfaults, which
is annoying.

Regards,
Simon
Alexander Graf June 7, 2018, 8:47 p.m. UTC | #7
On 07.06.18 22:41, Simon Glass wrote:
> Hi Alex,
> 
> On 7 June 2018 at 12:36, Alexander Graf <agraf@suse.de> wrote:
>>
>>
>> On 07.06.18 22:25, Simon Glass wrote:
>>> Hi Alex,
>>>
>>> On 3 June 2018 at 04:13, Alexander Graf <agraf@suse.de> wrote:
>>>>
>>>>
>>>> On 25.05.18 04:42, Simon Glass wrote:
>>>>> Hi Alex,
>>>>>
>>>>> On 24 May 2018 at 06:24, Alexander Graf <agraf@suse.de> wrote:
>>>>>>
>>>>>>
>>>>>> On 16.05.18 17:42, Simon Glass wrote:
>>>>>>> At present this code casts addresses to pointers so cannot be used with
>>>>>>> sandbox. Update it to use mapmem instead.
>>>>>>>
>>>>>>> Signed-off-by: Simon Glass <sjg@chromium.org>
>>>>>>
>>>>>> I really dislike the whole fact that you have to call map_sysmem() at
>>>>>> all. I don't quite understand the whole point of it either - it just
>>>>>> seems to clutter the code and make it harder to follow.
>>>>>
>>>>> The purpose is to map U-Boot addresses (e.g. 0x1234) to actual
>>>>> user-space addresses in sandbox (gd->arch.ram_buf + 0x1234).
>>>>>
>>>>> Otherwise we cannot write tests which use particular addresses, and
>>>>> people have to worry about the host memory layout when using sandbox.
>>>>
>>>> Not if we write a smart enough linker script. I can try to put
>>>> together an example when I get around to it. But basically all we
>>>> need to do is reserve a section for guest RAM at a constant virtual
>>>> address.
>>>
>>> Yes, but ideally that would be 0, or something small.
>>
>> You can't do 0 because 0 is protected on a good number of OSs. And if it
>> can't be 0, better use something that makes pointers easy to read.
> 
> Yes this is one reason for map_sysmem().
> 
>>
>>>
>>>>
>>>>>> Can't we just simply make sandbox behave like any other target instead?
>>>>>
>>>>> Actually that's the goal of the sandbox support. Memory is modelled as
>>>>> a contiguous chunk starting at 0x0, regardless of what the OS actually
>>>>> gives U-Boot in terms of addresses.
>>>>
>>>> Most platforms don't have RAM start at 0x0 (and making sure nobody
>>>> assumes it does start at 0 is a good thing). The only bit we need to
>>>> make sure is that it always starts at *the same* address on every
>>>> invocation. But if that address is 256MB, things should still be fine.
>>>
>>> Yes, but putting a 0x10000000 base address on everything is a bit of a pain.
>>
>> Why? It's what we do on arm systems that have ram starting at higher
>> offsets already.
> 
> It's a pain because you have to type 1 and 5-6 zeroes before you can
> get to the address you want. Otherwise sandbox just segfaults, which
> is annoying.

It's the same as any other device that does not have RAM starting at 0.
The benefit of it is that you manage to catch NULL pointer accesses
quite easily, which I guess is something you'll want from a testing target.

Also, you shouldn't use hard addresses in the first place. That's why we
have $kernel_addr_r and friends. As long as you stick to those, nothing
should change for you at all.


Alex
Simon Glass June 8, 2018, 9:59 p.m. UTC | #8
Hi Alex,

On 7 June 2018 at 12:47, Alexander Graf <agraf@suse.de> wrote:
>
>
> On 07.06.18 22:41, Simon Glass wrote:
>> Hi Alex,
>>
>> On 7 June 2018 at 12:36, Alexander Graf <agraf@suse.de> wrote:
>>>
>>>
>>> On 07.06.18 22:25, Simon Glass wrote:
>>>> Hi Alex,
>>>>
>>>> On 3 June 2018 at 04:13, Alexander Graf <agraf@suse.de> wrote:
>>>>>
>>>>>
>>>>> On 25.05.18 04:42, Simon Glass wrote:
>>>>>> Hi Alex,
>>>>>>
>>>>>> On 24 May 2018 at 06:24, Alexander Graf <agraf@suse.de> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 16.05.18 17:42, Simon Glass wrote:
>>>>>>>> At present this code casts addresses to pointers so cannot be used with
>>>>>>>> sandbox. Update it to use mapmem instead.
>>>>>>>>
>>>>>>>> Signed-off-by: Simon Glass <sjg@chromium.org>
>>>>>>>
>>>>>>> I really dislike the whole fact that you have to call map_sysmem() at
>>>>>>> all. I don't quite understand the whole point of it either - it just
>>>>>>> seems to clutter the code and make it harder to follow.
>>>>>>
>>>>>> The purpose is to map U-Boot addresses (e.g. 0x1234) to actual
>>>>>> user-space addresses in sandbox (gd->arch.ram_buf + 0x1234).
>>>>>>
>>>>>> Otherwise we cannot write tests which use particular addresses, and
>>>>>> people have to worry about the host memory layout when using sandbox.
>>>>>
>>>>> Not if we write a smart enough linker script. I can try to put
>>>>> together an example when I get around to it. But basically all we
>>>>> need to do is reserve a section for guest RAM at a constant virtual
>>>>> address.
>>>>
>>>> Yes, but ideally that would be 0, or something small.
>>>
>>> You can't do 0 because 0 is protected on a good number of OSs. And if it
>>> can't be 0, better use something that makes pointers easy to read.
>>
>> Yes this is one reason for map_sysmem().
>>
>>>
>>>>
>>>>>
>>>>>>> Can't we just simply make sandbox behave like any other target instead?
>>>>>>
>>>>>> Actually that's the goal of the sandbox support. Memory is modelled as
>>>>>> a contiguous chunk starting at 0x0, regardless of what the OS actually
>>>>>> gives U-Boot in terms of addresses.
>>>>>
>>>>> Most platforms don't have RAM start at 0x0 (and making sure nobody
>>>>> assumes it does start at 0 is a good thing). The only bit we need to
>>>>> make sure is that it always starts at *the same* address on every
>>>>> invocation. But if that address is 256MB, things should still be fine.
>>>>
>>>> Yes, but putting a 0x10000000 base address on everything is a bit of a pain.
>>>
>>> Why? It's what we do on arm systems that have ram starting at higher
>>> offsets already.
>>
>> It's a pain because you have to type 1 and 5-6 zeroes before you can
>> get to the address you want. Otherwise sandbox just segfaults, which
>> is annoying.
>
> It's the same as any other device that does not have RAM starting at 0.
> The benefit of it is that you manage to catch NULL pointer accesses
> quite easily, which I guess is something you'll want from a testing target.

You're confusing the U-Boot memory address with the pointer address.
If you use NULL in sandbox it will fault. But address 0 is valid.
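
A contrived standalone sketch of the distinction (again using ram_buf
as a stand-in for gd->arch.ram_buf, with a simplified map_sysmem()):

#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint8_t ram_buf[1024];	/* models gd->arch.ram_buf */

static void *map_sysmem(uintptr_t paddr, unsigned long len)
{
	assert(paddr + len <= sizeof(ram_buf));	/* stay inside "RAM" */
	return ram_buf + paddr;
}

int main(void)
{
	void *p = map_sysmem(0, 16);	/* U-Boot address 0... */

	assert(p != NULL);		/* ...maps to valid host memory */
	memset(p, 0xaa, 16);		/* fine: writes ram_buf[0..15] */
	/* *(volatile char *)NULL = 0;	would still fault */

	return 0;
}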

>
> Also, you shouldn't use hard addresses in the first place. That's why we
> have $kernel_addr_r and friends. As long as you stick to those, nothing
> should change for you at all.

See for example test_fit.py where it is very useful to know the
addresses we are using.

Of course we can remove this constraint, but it does make things more
painful in sandbox. Also IMO using open casts to convert between
addresses and pointers is not desirable, as they are not really tagged
in any way. With map_sysmem(), etc., they are explicit and obvious.

Regards,
Simon

Patch

diff --git a/lib/smbios.c b/lib/smbios.c
index df3d26b0710..fc3dabcbc1b 100644
--- a/lib/smbios.c
+++ b/lib/smbios.c
@@ -6,6 +6,7 @@ 
  */
 
 #include <common.h>
+#include <mapmem.h>
 #include <smbios.h>
 #include <tables_csum.h>
 #include <version.h>
@@ -72,9 +73,10 @@  static int smbios_string_table_len(char *start)
 
 static int smbios_write_type0(ulong *current, int handle)
 {
-	struct smbios_type0 *t = (struct smbios_type0 *)*current;
+	struct smbios_type0 *t;
 	int len = sizeof(struct smbios_type0);
 
+	t = map_sysmem(*current, len);
 	memset(t, 0, sizeof(struct smbios_type0));
 	fill_smbios_header(t, SMBIOS_BIOS_INFORMATION, len, handle);
 	t->vendor = smbios_add_string(t->eos, "U-Boot");
@@ -101,16 +103,18 @@  static int smbios_write_type0(ulong *current, int handle)
 
 	len = t->length + smbios_string_table_len(t->eos);
 	*current += len;
+	unmap_sysmem(t);
 
 	return len;
 }
 
 static int smbios_write_type1(ulong *current, int handle)
 {
-	struct smbios_type1 *t = (struct smbios_type1 *)*current;
+	struct smbios_type1 *t;
 	int len = sizeof(struct smbios_type1);
 	char *serial_str = env_get("serial#");
 
+	t = map_sysmem(*current, len);
 	memset(t, 0, sizeof(struct smbios_type1));
 	fill_smbios_header(t, SMBIOS_SYSTEM_INFORMATION, len, handle);
 	t->manufacturer = smbios_add_string(t->eos, CONFIG_SMBIOS_MANUFACTURER);
@@ -122,15 +126,17 @@  static int smbios_write_type1(ulong *current, int handle)
 
 	len = t->length + smbios_string_table_len(t->eos);
 	*current += len;
+	unmap_sysmem(t);
 
 	return len;
 }
 
 static int smbios_write_type2(ulong *current, int handle)
 {
-	struct smbios_type2 *t = (struct smbios_type2 *)*current;
+	struct smbios_type2 *t;
 	int len = sizeof(struct smbios_type2);
 
+	t = map_sysmem(*current, len);
 	memset(t, 0, sizeof(struct smbios_type2));
 	fill_smbios_header(t, SMBIOS_BOARD_INFORMATION, len, handle);
 	t->manufacturer = smbios_add_string(t->eos, CONFIG_SMBIOS_MANUFACTURER);
@@ -140,15 +146,17 @@  static int smbios_write_type2(ulong *current, int handle)
 
 	len = t->length + smbios_string_table_len(t->eos);
 	*current += len;
+	unmap_sysmem(t);
 
 	return len;
 }
 
 static int smbios_write_type3(ulong *current, int handle)
 {
-	struct smbios_type3 *t = (struct smbios_type3 *)*current;
+	struct smbios_type3 *t;
 	int len = sizeof(struct smbios_type3);
 
+	t = map_sysmem(*current, len);
 	memset(t, 0, sizeof(struct smbios_type3));
 	fill_smbios_header(t, SMBIOS_SYSTEM_ENCLOSURE, len, handle);
 	t->manufacturer = smbios_add_string(t->eos, CONFIG_SMBIOS_MANUFACTURER);
@@ -160,6 +168,7 @@  static int smbios_write_type3(ulong *current, int handle)
 
 	len = t->length + smbios_string_table_len(t->eos);
 	*current += len;
+	unmap_sysmem(t);
 
 	return len;
 }
@@ -198,9 +207,10 @@  static void smbios_write_type4_dm(struct smbios_type4 *t)
 
 static int smbios_write_type4(ulong *current, int handle)
 {
-	struct smbios_type4 *t = (struct smbios_type4 *)*current;
+	struct smbios_type4 *t;
 	int len = sizeof(struct smbios_type4);
 
+	t = map_sysmem(*current, len);
 	memset(t, 0, sizeof(struct smbios_type4));
 	fill_smbios_header(t, SMBIOS_PROCESSOR_INFORMATION, len, handle);
 	t->processor_type = SMBIOS_PROCESSOR_TYPE_CENTRAL;
@@ -214,32 +224,37 @@  static int smbios_write_type4(ulong *current, int handle)
 
 	len = t->length + smbios_string_table_len(t->eos);
 	*current += len;
+	unmap_sysmem(t);
 
 	return len;
 }
 
 static int smbios_write_type32(ulong *current, int handle)
 {
-	struct smbios_type32 *t = (struct smbios_type32 *)*current;
+	struct smbios_type32 *t;
 	int len = sizeof(struct smbios_type32);
 
+	t = map_sysmem(*current, len);
 	memset(t, 0, sizeof(struct smbios_type32));
 	fill_smbios_header(t, SMBIOS_SYSTEM_BOOT_INFORMATION, len, handle);
 
 	*current += len;
+	unmap_sysmem(t);
 
 	return len;
 }
 
 static int smbios_write_type127(ulong *current, int handle)
 {
-	struct smbios_type127 *t = (struct smbios_type127 *)*current;
+	struct smbios_type127 *t;
 	int len = sizeof(struct smbios_type127);
 
+	t = map_sysmem(*current, len);
 	memset(t, 0, sizeof(struct smbios_type127));
 	fill_smbios_header(t, SMBIOS_END_OF_TABLE, len, handle);
 
 	*current += len;
+	unmap_sysmem(t);
 
 	return len;
 }
@@ -268,7 +283,7 @@  ulong write_smbios_table(ulong addr)
 	/* 16 byte align the table address */
 	addr = ALIGN(addr, 16);
 
-	se = (struct smbios_entry *)(uintptr_t)addr;
+	se = map_sysmem(addr, sizeof(struct smbios_entry));
 	memset(se, 0, sizeof(struct smbios_entry));
 
 	addr += sizeof(struct smbios_entry);
@@ -297,6 +312,7 @@  ulong write_smbios_table(ulong addr)
 	isize = sizeof(struct smbios_entry) - SMBIOS_INTERMEDIATE_OFFSET;
 	se->intermediate_checksum = table_compute_checksum(istart, isize);
 	se->checksum = table_compute_checksum(se, sizeof(struct smbios_entry));
+	unmap_sysmem(se);
 
 	return addr;
 }