
Add e500 instructions dcblc, dcbtls and dcbtstls as no-ops

Message ID 1309180555-3942-1-git-send-email-chouteau@adacore.com
State New

Commit Message

Fabien Chouteau June 27, 2011, 1:15 p.m. UTC
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
---
 target-ppc/translate.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

Comments

Scott Wood June 27, 2011, 4:28 p.m. UTC | #1
On Mon, 27 Jun 2011 15:15:55 +0200
Fabien Chouteau <chouteau@adacore.com> wrote:

> +/* dcbtls */
> +static void gen_dcbtls(DisasContext *ctx)
> +{
> +    /* interpreted as no-op */
> +}
> +
> +/* dcbtstls */
> +static void gen_dcbtstls(DisasContext *ctx)
> +{
> +    /* interpreted as no-op */
> +}

Set L1CSR0[CUL] (unable to lock)?

-Scott
Fabien Chouteau June 28, 2011, 8:17 a.m. UTC | #2
On 27/06/2011 18:28, Scott Wood wrote:
> On Mon, 27 Jun 2011 15:15:55 +0200
> Fabien Chouteau <chouteau@adacore.com> wrote:
>
>> +/* dcbtls */
>> +static void gen_dcbtls(DisasContext *ctx)
>> +{
>> +    /* interpreted as no-op */
>> +}
>> +
>> +/* dcbtstls */
>> +static void gen_dcbtstls(DisasContext *ctx)
>> +{
>> +    /* interpreted as no-op */
>> +}
>
> Set L1CSR0[CUL] (unable to lock)?

Why do you want to set this bit? Can't we consider that the instruction is
always effective?
Scott Wood June 28, 2011, 4:20 p.m. UTC | #3
On Tue, 28 Jun 2011 10:17:39 +0200
Fabien Chouteau <chouteau@adacore.com> wrote:

> On 27/06/2011 18:28, Scott Wood wrote:
> > On Mon, 27 Jun 2011 15:15:55 +0200
> > Fabien Chouteau <chouteau@adacore.com> wrote:
> >
> >> +/* dcbtls */
> >> +static void gen_dcbtls(DisasContext *ctx)
> >> +{
> >> +    /* interpreted as no-op */
> >> +}
> >> +
> >> +/* dcbtstls */
> >> +static void gen_dcbtstls(DisasContext *ctx)
> >> +{
> >> +    /* interpreted as no-op */
> >> +}
> >
> > Set L1CSR0[CUL] (unable to lock)?
> 
> Why do you want to set this bit? Can't we consider that the instruction is
> always effective?

But it's not.  Why claim it is, in the absence of some specific workload
that needs to be fooled (which could take many different forms, not all of
which are appropriate defaults)?

-Scott
Fabien Chouteau June 30, 2011, 8:25 a.m. UTC | #4
On 28/06/2011 18:20, Scott Wood wrote:
> On Tue, 28 Jun 2011 10:17:39 +0200
> Fabien Chouteau <chouteau@adacore.com> wrote:
> 
>> On 27/06/2011 18:28, Scott Wood wrote:
>>> On Mon, 27 Jun 2011 15:15:55 +0200
>>> Fabien Chouteau <chouteau@adacore.com> wrote:
>>>
>>>> +/* dcbtls */
>>>> +static void gen_dcbtls(DisasContext *ctx)
>>>> +{
>>>> +    /* interpreted as no-op */
>>>> +}
>>>> +
>>>> +/* dcbtstls */
>>>> +static void gen_dcbtstls(DisasContext *ctx)
>>>> +{
>>>> +    /* interpreted as no-op */
>>>> +}
>>>
>>> Set L1CSR0[CUL] (unable to lock)?
>>
>> Why do you want to set this bit? Can't we consider that the instruction is
>> always effective?
> 
> But it's not.  Why claim it is, in the absence of some specific workload
> that needs to be fooled (which could take many different forms, not all of
> which are appropriate defaults)?

Reading the e500 manual again, it's not clear to me what can cause
L1CSR0[CUL] to be set. If you have a better understanding, can you please
explain?
Scott Wood June 30, 2011, 4:17 p.m. UTC | #5
On Thu, 30 Jun 2011 10:25:31 +0200
Fabien Chouteau <chouteau@adacore.com> wrote:

> On 28/06/2011 18:20, Scott Wood wrote:
> > On Tue, 28 Jun 2011 10:17:39 +0200
> > Fabien Chouteau <chouteau@adacore.com> wrote:
> > 
> >> Why do you want to set this bit? Can't we consider that the instruction is
> >> always effective?
> > 
> > But it's not.  Why claim it is, in the absence of some specific workload
> > that needs to be fooled (which could take many different forms, not all of
> > which are appropriate defaults)?
> 
> Reading the e500 manual again, it's not clear to me what can cause
> L1CSR0[CUL] to be set. If you have a better understanding, can you please
> explain?
> 

L1CSR0[CUL] is set whenever a lock set instruction fails to lock
(typically because all ways of the set are already locked).  Since we don't
support cache locking in qemu, these instructions always fail, and thus CUL
should always be set.

Regarding what behavior would maximize compatibility, while it's
conceivable that some program could break if it sees the bit set when it
was expecting to be able to succeed, also consider a program that might
keep locking until it sees the bit set to determine how much cache it can
lock.

-Scott
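
For reference, a minimal sketch of what Scott's suggestion could look like in
target-ppc/translate.c, assuming the gen_load_spr/gen_store_spr helpers already
used elsewhere in that file and an L1CSR0_CUL "unable to lock" bit mask (the
mask name is assumed); illustrative only, not the patch under review:

    /* dcbtls - sketch: we never actually lock anything, so report the failure */
    static void gen_dcbtls(DisasContext *ctx)
    {
        TCGv t0 = tcg_temp_new();
        gen_load_spr(t0, SPR_Exxx_L1CSR0);
        /* set L1CSR0[CUL]: the lock set request was unable to lock */
        tcg_gen_ori_tl(t0, t0, L1CSR0_CUL);
        gen_store_spr(SPR_Exxx_L1CSR0, t0);
        tcg_temp_free(t0);
    }

dcbtstls could do the same; dcblc, being an unlock, would presumably stay a
plain no-op.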
Alexander Graf June 30, 2011, 9:34 p.m. UTC | #6
On 30.06.2011, at 18:17, Scott Wood wrote:

> On Thu, 30 Jun 2011 10:25:31 +0200
> Fabien Chouteau <chouteau@adacore.com> wrote:
> 
>> On 28/06/2011 18:20, Scott Wood wrote:
>>> On Tue, 28 Jun 2011 10:17:39 +0200
>>> Fabien Chouteau <chouteau@adacore.com> wrote:
>>> 
>>>> Why do you want to set this bit? Can't we consider that the instruction is
>>>> always effective?
>>> 
>>> But it's not.  Why claim it is, in the absence of some specific workload
>>> that needs to be fooled (which could take many different forms, not all of
>>> which are appropriate defaults)?
>> 
>> Reading the e500 manual again, it's not clear to me what can cause
>> L1CSR0[CUL] to be set. If you have a better understanding, can you please
>> explain?
>> 
> 
> L1CSR0[CUL] is set whenever a lock set instruction fails to lock
> (typically because all ways of the set are already locked).  Since we don't
> support cache locking in qemu, these instructions always fail, and thus CUL
> should always be set.
> 
> Regarding what behavior would maximize compatibility, while it's
> conceivable that some program could break if it sees the bit set when it
> was expecting to be able to succeed, also consider a program that might
> keep locking until it sees the bit set to determine how much cache it can
> lock.

We could just keep an internal counter that memorizes how much memory is locked and sets the bit after exceeding the fake cache size.

The only problem I could see remaining is that CAR could potentially fail, as it can access addresses in cache directly that don't have to have underlying RAM mapped. However, I'd hope that only firmware does this and we usually don't execute real firmware in qemu :)

Also, lock set instructions seem to raise DSIs, so we need to generate some loads that don't go anywhere.


Alex
Scott Wood June 30, 2011, 9:46 p.m. UTC | #7
On Thu, 30 Jun 2011 23:34:37 +0200
Alexander Graf <agraf@suse.de> wrote:

> We could just keep an internal counter that memorizes how much memory is locked and sets the bit after exceeding the fake cache size.

And keep track of unlocks, decrementing the counter only if the address was
already locked...  seems better to keep it simple and just be honest about
the failure until a real need for trickery arises.

> The only problem I could see remaining is that CAR could potentially fail, as it can access addresses in cache directly that don't have to have underlying RAM mapped. However, I'd hope that only firmware does this and we usually don't execute real firmware in qemu :)

This is a case where it would be nice for the guest to see the failure
indication, if we're lucky enough that it bothers to check.

But yeah, it's unlikely to happen outside of firmware.

> Also, lock set instructions seem to raise DSIs, so we need to generate some loads that don't go anywhere.

Yes.

Note that dcbtstls is treated as a store, which is a little trickier.

-Scott
Alexander Graf June 30, 2011, 9:56 p.m. UTC | #8
On 30.06.2011, at 23:46, Scott Wood wrote:

> On Thu, 30 Jun 2011 23:34:37 +0200
> Alexander Graf <agraf@suse.de> wrote:
> 
>> We could just keep an internal counter that memorizes how much memory is locked and sets the bit after exceeding the fake cache size.
> 
> And keep track of unlocks, decrementing the counter only if the address was
> already locked...  seems better to keep it simple and just be honest about
> the failure until a real need for trickery arises.
> 
>> The only problem I could see remaining is that CAR could potentially fail, as it can access addresses in cache directly that don't have to have underlying RAM mapped. However, I'd hope that only firmware does this and we usually don't execute real firmware in qemu :)
> 
> This is a case where it would be nice for the guest to see the failure
> indication, if we're lucky enough that it bothers to check.
> 
> But yeah, it's unlikely to happen outside of firmware.
> 
>> Also, lock set instructions seem to raise DSIs, so we need to generate some loads that don't go anywhere.
> 
> Yes.
> 
> Note that dcbtstls is treated as a store, which is a little trickier.

Not that much. It should be enough to simply do:

  st(addr, ld(addr));

no?


Alex
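
A rough translate.c rendering of that st(addr, ld(addr)) idea, assuming the
existing gen_addr_reg_index, gen_qemu_ld8u and gen_qemu_st8 helpers; it
deliberately keeps the read-permission gap Scott raises next, so treat it as a
sketch rather than a finished handler:

    /* dcbtstls - sketch: touch the EA with a byte load and store so the
     * normal MMU checks (and any resulting DSI) fire; the locking itself
     * is still not modelled */
    static void gen_dcbtstls(DisasContext *ctx)
    {
        TCGv EA = tcg_temp_new();
        TCGv val = tcg_temp_new();

        gen_set_access_type(ctx, ACCESS_CACHE);
        gen_addr_reg_index(ctx, EA);
        gen_qemu_ld8u(ctx, val, EA);   /* may raise a read DSI */
        gen_qemu_st8(ctx, val, EA);    /* may raise a write DSI */

        tcg_temp_free(val);
        tcg_temp_free(EA);
    }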
Scott Wood June 30, 2011, 10:11 p.m. UTC | #9
On Thu, 30 Jun 2011 23:56:17 +0200
Alexander Graf <agraf@suse.de> wrote:

> 
> On 30.06.2011, at 23:46, Scott Wood wrote:
> 
> > On Thu, 30 Jun 2011 23:34:37 +0200
> > Alexander Graf <agraf@suse.de> wrote:
> > 
> >> We could just keep an internal counter that memorizes how much memory is locked and sets the bit after exceeding the fake cache size.
> > 
> > And keep track of unlocks, decrementing the counter only if the address was
> > already locked...  seems better to keep it simple and just be honest about
> > the failure until a real need for trickery arises.
> > 
> >> The only problem I could see remaining is that CAR could potentially fail, as it can access addresses in cache directly that don't have to have underlying RAM mapped. However, I'd hope that only firmware does this and we usually don't execute real firmware in qemu :)
> > 
> > This is a case where it would be nice for the guest to see the failure
> > indication, if we're lucky enough that it bothers to check.
> > 
> > But yeah, it's unlikely to happen outside of firmware.
> > 
> >> Also, lock set instructions seem to raise DSIs, so we need to generate some loads that don't go anywhere.
> > 
> > Yes.
> > 
> > Note that dcbtstls is treated as a store, which is a little trickier.
> 
> Not that much. It should be enough to simply do:
> 
>   st(addr, ld(addr));
> 
> no?

Almost, but what if we have write permission but not read?

I assume races among threads are taken care of by some mechanism whereby
qemu only takes an interrupt on target instruction boundaries (does/will
qemu simulate guest SMP?), but what about a race with DMA from the I/O
thread?

-Scott
Alexander Graf June 30, 2011, 10:18 p.m. UTC | #10
On 01.07.2011, at 00:11, Scott Wood wrote:

> On Thu, 30 Jun 2011 23:56:17 +0200
> Alexander Graf <agraf@suse.de> wrote:
> 
>> 
>> On 30.06.2011, at 23:46, Scott Wood wrote:
>> 
>>> On Thu, 30 Jun 2011 23:34:37 +0200
>>> Alexander Graf <agraf@suse.de> wrote:
>>> 
>>>> We could just keep an internal counter that memorizes how much memory is locked and sets the bit after exceeding the fake cache size.
>>> 
>>> And keep track of unlocks, decrementing the counter only if the address was
>>> already locked...  seems better to keep it simple and just be honest about
>>> the failure until a real need for trickery arises.
>>> 
>>>> The only problem I could see remaining is that CAR could potentially fail, as it can access addresses in cache directly that don't have to have underlying RAM mapped. However, I'd hope that only firmware does this and we usually don't execute real firmware in qemu :)
>>> 
>>> This is a case where it would be nice for the guest to see the failure
>>> indication, if we're lucky enough that it bothers to check.
>>> 
>>> But yeah, it's unlikely to happen outside of firmware.
>>> 
>>>> Also, lock set instructions seem to raise DSIs, so we need to generate some loads that don't go anywhere.
>>> 
>>> Yes.
>>> 
>>> Note that dcbtstls is treated as a store, which is a little trickier.
>> 
>> Not that much. It should be enough to simply do:
>> 
>>  st(addr, ld(addr));
>> 
>> no?
> 
> Almost, but what if we have write permission but not read?

How would you write back data from a cache line when you haven't read it earlier?

> I assume races among threads are taken care of by some mechanism whereby
> qemu only takes an interrupt on target instruction boundaries (does/will
> qemu simulate guest SMP?),

Qemu emulates SMP by rescheduling guest CPUs on translation block boundaries. So instructions are always guaranteed to be atomic. Of course, that means only a single host CPU is used.

There has been some work by different people to get rid of that limitation, but so far none has proven to be generic enough.

> but what about a race with DMA from the I/O thread?

That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).


Alex
Scott Wood June 30, 2011, 10:23 p.m. UTC | #11
On Fri, 1 Jul 2011 00:18:22 +0200
Alexander Graf <agraf@suse.de> wrote:

> 
> On 01.07.2011, at 00:11, Scott Wood wrote:
> 
> > Almost, but what if we have write permission but not read?
> 
> How would you write back data from a cache line when you haven't read it earlier?

The CPU can read it.  The program can't.

> > but what about a race with DMA from the I/O thread?
> 
> That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).

Real hardware doesn't generate a load/store sequence that the program didn't
ask for -- where's the breakage?

-Scott
Alexander Graf June 30, 2011, 10:28 p.m. UTC | #12
On 01.07.2011, at 00:23, Scott Wood wrote:

> On Fri, 1 Jul 2011 00:18:22 +0200
> Alexander Graf <agraf@suse.de> wrote:
> 
>> 
>> On 01.07.2011, at 00:11, Scott Wood wrote:
>> 
>>> Almost, but what if we have write permission but not read?
>> 
>> How would you write back data from a cache line when you haven't read it earlier?
> 
> The CPU can read it.  The program can't.

Hrm. We can always just call the check manually and trigger the respective interrupt :).

>>> but what about a race with DMA from the I/O thread?
>> 
>> That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).
> 
> Real hardware doesn't generate a load/store sequence that the program didn't
> ask for -- where's the breakage?

Real hardware flushes whatever contents are in that cache line to RAM, no? So it would collide with the DMA just as much. Or am I missing something?


Alex
Scott Wood June 30, 2011, 10:32 p.m. UTC | #13
On Fri, 1 Jul 2011 00:28:19 +0200
Alexander Graf <agraf@suse.de> wrote:

> 
> On 01.07.2011, at 00:23, Scott Wood wrote:
> 
> > On Fri, 1 Jul 2011 00:18:22 +0200
> > Alexander Graf <agraf@suse.de> wrote:
> > 
> >> 
> >> On 01.07.2011, at 00:11, Scott Wood wrote:
> >> 
> >>> Almost, but what if we have write permission but not read?
> >> 
> >> How would you write back data from a cache line when you haven't read it earlier?
> > 
> > The CPU can read it.  The program can't.
> 
> Hrm. We can always just call the check manually and trigger the respective interrupt :).

Yep.  A little trickier, but doable.

> >>> but what about a race with DMA from the I/O thread?
> >> 
> >> That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).
> > 
> > Real hardware doesn't generate a load/store sequence that the program didn't
> > ask for -- where's the breakage?
> 
> Real hardware flushes whatever contents are in that cache line to RAM, no? So it would collide with the DMA just as much. Or am I missing something?

If the DMA happens after the cache line is fetched, it'll be flushed,
whether locked or not.  But that's different from losing some of what the
device wrote.

-Scott
Alexander Graf June 30, 2011, 10:38 p.m. UTC | #14
On 01.07.2011, at 00:32, Scott Wood wrote:

> On Fri, 1 Jul 2011 00:28:19 +0200
> Alexander Graf <agraf@suse.de> wrote:
> 
>> 
>> On 01.07.2011, at 00:23, Scott Wood wrote:
>> 
>>> On Fri, 1 Jul 2011 00:18:22 +0200
>>> Alexander Graf <agraf@suse.de> wrote:
>>> 
>>>> 
>>>> On 01.07.2011, at 00:11, Scott Wood wrote:
>>>> 
>>>>> Almost, but what if we have write permission but not read?
>>>> 
>>>> How would you write back data from a cache line when you haven't read it earlier?
>>> 
>>> The CPU can read it.  The program can't.
>> 
>> Hrm. We can always just call the check manually and trigger the respective interrupt :).
> 
> Yep.  A little trickier, but doable.
> 
>>>>> but what about a race with DMA from the I/O thread?
>>>> 
>>>> That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).
>>> 
>>> Real hardware doesn't generate a load/store sequence that the program didn't
>>> ask for -- where's the breakage?
>> 
>> Real hardware flushes whatever contents are in that cache line to RAM, no? So it would collide with the DMA just as much. Or am I missing something?
> 
> If the DMA happens after the cache line is fetched, it'll be flushed,
> whether locked or not.  But that's different from losing some of what the
> device wrote.

Ah, so DMA flushes even locked cache lines? Then it makes sense. Well, I guess the best choice here really is to merely do a manual storage protection check and be done with it.


Alex
Fabien Chouteau July 1, 2011, 2:59 p.m. UTC | #15
On 01/07/2011 00:38, Alexander Graf wrote:
> 
> On 01.07.2011, at 00:32, Scott Wood wrote:
> 
>> On Fri, 1 Jul 2011 00:28:19 +0200
>> Alexander Graf <agraf@suse.de> wrote:
>>
>>>
>>> On 01.07.2011, at 00:23, Scott Wood wrote:
>>>
>>>> On Fri, 1 Jul 2011 00:18:22 +0200
>>>> Alexander Graf <agraf@suse.de> wrote:
>>>>
>>>>>
>>>>> On 01.07.2011, at 00:11, Scott Wood wrote:
>>>>>
>>>>>> Almost, but what if we have write permission but not read?
>>>>>
>>>>> How would you write back data from a cache line when you haven't read it earlier?
>>>>
>>>> The CPU can read it.  The program can't.
>>>
>>> Hrm. We can always just call the check manually and trigger the respective interrupt :).
>>
>> Yep.  A little trickier, but doable.
>>
>>>>>> but what about a race with DMA from the I/O thread?
>>>>>
>>>>> That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).
>>>>
>>>> Real hardware doesn't generate a load/store sequence that the program didn't
>>>> ask for -- where's the breakage?
>>>
>>> Real hardware flushes whatever contents are in that cache line to RAM, no? So it would collide with the DMA just as much. Or am I missing something?
>>
>> If the DMA happens after the cache line is fetched, it'll be flushed,
>> whether locked or not.  But that's different from losing some of what the
>> device wrote.
> 
> Ah, so DMA flushes even locked cache lines? Then it makes sense. Well, I guess the best choice here really is to merely do a manual storage protection check and be done with it.
> 

Well, this is far beyond the scope of my knowledge of e500 and the current
patch is sufficient for me, so I will let you implement this if you want to...
Alexander Graf July 1, 2011, 3:05 p.m. UTC | #16
On 01.07.2011, at 16:59, Fabien Chouteau wrote:

> On 01/07/2011 00:38, Alexander Graf wrote:
>> 
>> On 01.07.2011, at 00:32, Scott Wood wrote:
>> 
>>> On Fri, 1 Jul 2011 00:28:19 +0200
>>> Alexander Graf <agraf@suse.de> wrote:
>>> 
>>>> 
>>>> On 01.07.2011, at 00:23, Scott Wood wrote:
>>>> 
>>>>> On Fri, 1 Jul 2011 00:18:22 +0200
>>>>> Alexander Graf <agraf@suse.de> wrote:
>>>>> 
>>>>>> 
>>>>>> On 01.07.2011, at 00:11, Scott Wood wrote:
>>>>>> 
>>>>>>> Almost, but what if we have write permission but not read?
>>>>>> 
>>>>>> How would you write back data from a cache line when you haven't read it earlier?
>>>>> 
>>>>> The CPU can read it.  The program can't.
>>>> 
>>>> Hrm. We can always just call the check manually and trigger the respective interrupt :).
>>> 
>>> Yep.  A little trickier, but doable.
>>> 
>>>>>>> but what about a race with DMA from the I/O thread?
>>>>>> 
>>>>>> That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).
>>>>> 
>>>>> Real hardware doesn't generate a load/store sequence that the program didn't
>>>>> ask for -- where's the breakage?
>>>> 
>>>> Real hardware flushes whatever contents are in that cache line to RAM, no? So it would collide with the DMA just as much. Or am I missing something?
>>> 
>>> If the DMA happens after the cache line is fetched, it'll be flushed,
>>> whether locked or not.  But that's different from losing some of what the
>>> device wrote.
>> 
>> Ah, so DMA flushes even locked cache lines? Then it makes sense. Well, I guess the best choice here really is to merely do a manual storage protection check and be done with it.
>> 
> 
> Well, this is far beyond the scope of my knowledge of e500 and the current
> patch is sufficient for me, so I will let you implement this if you want to...

Well, all it needs is to call mmubooke206_get_physical_address on the address (or each page of the destination) with access_type set to write and check for the result. If it's protected, inject a DSI (see cpu_ppc_handle_mmu_fault).

But yeah, I can try and see if I get around to implementing it. No promises though. Do you have a test case I can verify this against?


Alex
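
For illustration, a rough sketch of the storage-protection probe described
above, written as a runtime helper (e.g. in target-ppc/op_helper.c, where env
is the global CPU state). The helper name is made up, and it assumes
cpu_ppc_handle_mmu_fault and helper_raise_exception_err keep their current
shapes -- a sketch under those assumptions, not a tested implementation:

    /* Hypothetical helper: check the effective address for write permission
     * the way a real store would, and deliver the DSI that the MMU code
     * prepared on failure. */
    void helper_lock_set_check_store(target_ulong addr)
    {
        /* rw = 1 requests a write-permission check */
        if (cpu_ppc_handle_mmu_fault(env, addr, 1, cpu_mmu_index(env))) {
            /* exception_index/error_code (and DEAR/ESR on BookE) were
             * filled in by cpu_ppc_handle_mmu_fault */
            helper_raise_exception_err(env->exception_index, env->error_code);
        }
    }

The translated dcbtstls would then just compute the EA and call this helper
instead of generating a real store.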
Fabien Chouteau July 1, 2011, 3:39 p.m. UTC | #17
On 01/07/2011 17:05, Alexander Graf wrote:
>
> On 01.07.2011, at 16:59, Fabien Chouteau wrote:
>
>> On 01/07/2011 00:38, Alexander Graf wrote:
>>>
>>> On 01.07.2011, at 00:32, Scott Wood wrote:
>>>
>>>> On Fri, 1 Jul 2011 00:28:19 +0200
>>>> Alexander Graf <agraf@suse.de> wrote:
>>>>
>>>>>
>>>>> On 01.07.2011, at 00:23, Scott Wood wrote:
>>>>>
>>>>>> On Fri, 1 Jul 2011 00:18:22 +0200
>>>>>> Alexander Graf <agraf@suse.de> wrote:
>>>>>>
>>>>>>>
>>>>>>> On 01.07.2011, at 00:11, Scott Wood wrote:
>>>>>>>
>>>>>>>> Almost, but what if we have write permission but not read?
>>>>>>>
>>>>>>> How would you write back data from a cache line when you haven't read it earlier?
>>>>>>
>>>>>> The CPU can read it.  The program can't.
>>>>>
>>>>> Hrm. We can always just call the check manually and trigger the respective interrupt :).
>>>>
>>>> Yep.  A little trickier, but doable.
>>>>
>>>>>>>> but what about a race with DMA from the I/O thread?
>>>>>>>
>>>>>>> That'd simply be broken, but I don't quite see how it wouldn't with real hardware either :).
>>>>>>
>>>>>> Real hardware doesn't generate a load/store sequence that the program didn't
>>>>>> ask for -- where's the breakage?
>>>>>
>>>>> Real hardware flushes whatever contents are in that cache line to RAM, no? So it would collide with the DMA just as much. Or am I missing something?
>>>>
>>>> If the DMA happens after the cache line is fetched, it'll be flushed,
>>>> whether locked or not.  But that's different from losing some of what the
>>>> device wrote.
>>>
>>> Ah, so DMA flushes even locked cache lines? Then it makes sense. Well, I guess the best choice here really is to merely do a manual storage protection check and be done with it.
>>>
>>
>> Well, this is far beyond the scope of my knowledge of e500 and the current
>> patch is sufficient for me, so I will let you implement this if you want to...
>
> Well, all it needs is to call mmubooke206_get_physical_address on the address (or each page of the destination) with access_type set to write and check for the result. If it's protected, inject a DSI (see cpu_ppc_handle_mmu_fault).
>
> But yeah, I can try and see if I get around to implementing it. No promises though. Do you have a test case I can verify this against?

No, I just implemented these no-oped instructions to get rid of an illegal
instruction exception while running u-boot on my emulated p2010.
Scott Wood July 1, 2011, 4 p.m. UTC | #18
On Fri, 1 Jul 2011 17:39:49 +0200
Fabien Chouteau <chouteau@adacore.com> wrote:

> No, I just implemented these no-oped instructions to get rid of an illegal
> instruction exception while running u-boot on my emulated p2010.
> 

Heh, so someone *is* trying to run firmware. :-)

It would take a lot of low-level hardware simulation to get unmodified
U-Boot to run -- probably not worth it unless your goal is an environment
for debugging U-Boot.

It uses cache locking to create working space before the memory controller
is initialized, so it's really expecting those instructions to work.

-Scott
Fabien Chouteau July 1, 2011, 4:21 p.m. UTC | #19
On 01/07/2011 18:00, Scott Wood wrote:
> On Fri, 1 Jul 2011 17:39:49 +0200
> Fabien Chouteau <chouteau@adacore.com> wrote:
> 
>> No, I just implemented these no-oped instructions to get rid of an illegal
>> instruction exception while running u-boot on my emulated p2010.
>>
> 
> Heh, so someone *is* trying to run firmware. :-)
> 
> It would take a lot of low-level hardware simulation to get unmodified
> U-Boot to run -- probably not worth it unless your goal is an environment
> for debugging U-Boot.
> 
> It uses cache locking to create working space before the memory controller
> is initialized, so it's really expecting those instructions to work.

It was mainly to test the correctness of emulation, but it appears that I have
to implement more devices (eSPI, I2C, LBC...) than I initially thought.

Patch

diff --git a/target-ppc/translate.c b/target-ppc/translate.c
index 0a03b44..d0ca821 100644
--- a/target-ppc/translate.c
+++ b/target-ppc/translate.c
@@ -3995,6 +3995,24 @@  static void gen_dcbt(DisasContext *ctx)
      */
 }
 
+/* dcblc */
+static void gen_dcblc(DisasContext *ctx)
+{
+    /* interpreted as no-op */
+}
+
+/* dcbtls */
+static void gen_dcbtls(DisasContext *ctx)
+{
+    /* interpreted as no-op */
+}
+
+/* dcbtstls */
+static void gen_dcbtstls(DisasContext *ctx)
+{
+    /* interpreted as no-op */
+}
+
 /* dcbtst */
 static void gen_dcbtst(DisasContext *ctx)
 {
@@ -8404,6 +8422,9 @@  GEN_HANDLER(dcbi, 0x1F, 0x16, 0x0E, 0x03E00001, PPC_CACHE),
 GEN_HANDLER(dcbst, 0x1F, 0x16, 0x01, 0x03E00001, PPC_CACHE),
 GEN_HANDLER(dcbt, 0x1F, 0x16, 0x08, 0x02000001, PPC_CACHE),
 GEN_HANDLER(dcbtst, 0x1F, 0x16, 0x07, 0x02000001, PPC_CACHE),
+GEN_HANDLER(dcblc, 0x1F, 0x6, 0x0C, 0x02000001, PPC_CACHE),
+GEN_HANDLER(dcbtls, 0x1F, 0x6, 0x05, 0x02000001, PPC_CACHE),
+GEN_HANDLER(dcbtstls, 0x1F, 0x6, 0x04, 0x02000001, PPC_CACHE),
 GEN_HANDLER(dcbz, 0x1F, 0x16, 0x1F, 0x03E00001, PPC_CACHE_DCBZ),
 GEN_HANDLER2(dcbz_970, "dcbz", 0x1F, 0x16, 0x1F, 0x03C00001, PPC_CACHE_DCBZT),
 GEN_HANDLER(dst, 0x1F, 0x16, 0x0A, 0x01800001, PPC_ALTIVEC),