diff mbox series

[v4,4/5] board: add nvidia jetson tx2 support

Message ID 20201119075328.8599-4-christian@paral.in
State Changes Requested, archived
Headers show
Series [v4,1/5] package/nvidia-modprobe: new package | expand

Commit Message

Christian Stewart Nov. 19, 2020, 7:53 a.m. UTC
Tested-by: Asaf Kahlon <asafka7@gmail.com>
Signed-off-by: Christian Stewart <christian@paral.in>

---

v3 -> v4:

 - thanks Romain for the review
 - cjs: added gcc + binutils version specifiers
 - tested against devkit hardware

Signed-off-by: Christian Stewart <christian@paral.in>
---
 board/jetson/tx2/readme.txt | 83 +++++++++++++++++++++++++++++++++++++
 board/jetsontx2             |  1 +
 configs/jetsontx2_defconfig | 64 ++++++++++++++++++++++++++++
 3 files changed, 148 insertions(+)
 create mode 100644 board/jetson/tx2/readme.txt
 create mode 120000 board/jetsontx2
 create mode 100644 configs/jetsontx2_defconfig

Comments

Romain Naour Nov. 19, 2020, 1:40 p.m. UTC | #1
Hello Christian,

On 19/11/2020 at 08:53, Christian Stewart wrote:
> Tested-by: Asaf Kahlon <asafka7@gmail.com>
> Signed-off-by: Christian Stewart <christian@paral.in>
> 
> ---
> 
> v3 -> v4:
> 
>  - thanks Romain for the review
>  - cjs: added gcc + binutils version specifiers
>  - tested against devkit hardware
> 
> Signed-off-by: Christian Stewart <christian@paral.in>
> ---
>  board/jetson/tx2/readme.txt | 83 +++++++++++++++++++++++++++++++++++++
>  board/jetsontx2             |  1 +
>  configs/jetsontx2_defconfig | 64 ++++++++++++++++++++++++++++
>  3 files changed, 148 insertions(+)
>  create mode 100644 board/jetson/tx2/readme.txt
>  create mode 120000 board/jetsontx2
>  create mode 100644 configs/jetsontx2_defconfig
> 
> diff --git a/board/jetson/tx2/readme.txt b/board/jetson/tx2/readme.txt
> new file mode 100644
> index 0000000000..45fdb50a56
> --- /dev/null
> +++ b/board/jetson/tx2/readme.txt
> @@ -0,0 +1,83 @@
> +NVIDIA Jetson TX2
> +
> +Intro
> +=====
> +
> +This configuration supports the Jetson TX2 devkit.
> +
> +Building
> +========
> +
> +Configure Buildroot
> +-------------------
> +
> +For Jetson TX2:
> +
> +  $ make jetsontx2_defconfig
> +
> +Build the rootfs
> +----------------
> +
> +You may now build your rootfs with:
> +
> +  $ make
> +
> +
> +Flashing
> +========
> +
> +Once the build process is finished you will have the target binaries in the
> +output/images directory, with a copy of linux4tegra.
> +
> +Flashing to the internal eMMC is done by booting to the official recovery mode,
> +and flashing the system from there. The default factory-flashed TX2 is suitable.
> +
> +There are a lot of cases where the TX2 will not boot properly unless all of the
> +peripherals are fully disconnected, power is disconnected, everything fully
> +resets, and then the power is introduced back again.
> +
> +The recovery mode of the Jetson is used to flash. Entering recovery:
> +
> + - Start with the machine powered off + fully unplugged.
> + - Plug in the device to power, and connect a HDMI display.
> + - Connect a micro-USB cable from the host PC to the target board.
> + - Power on the device by holding the start button until the red light is lit.
> + - Hold down the RST button and REC button simultaneously.
> + - Release the RST button while holding down the REC button.
> + - Wait a few seconds, then release the REC button.
> +
> +To flash over USB:
> +
> +```
> +cd output/images/linux4tegra
> +sudo bash ./flash.sh \
> +     -I ../rootfs.ext2 \
> +     -K ../Image \
> +     -L ../u-boot-dtb.bin \
> +     -C "ramdisk_size=100000 net.ifnames=0 elevator=deadline" \
> +     -d ../tegra186-quill-p3310-1000-c03-00-base.dtb \
> +     jetson-tx2-devkit mmcblk0p1
> +```
> +
> +This will run the `flash.sh` script from L4T, and will set up the kernel, u-boot,
> +persist, and boot-up partition mmcblk0p1. This may overwrite your existing work,
> +so use it for initial setup only.
> +
> +Bootup Process
> +==============
> +
> +The TX2 boots from the internal eMMC, at mmcblk0p1.
> +
> +A "secure boot" process is used, with multiple bootloaders:
> +
> + - BootROM -> MB1 (TrustZone)
> + - MB2/BPMP -> (Non-Trustzone)
> + - Cboot (uses Little Kernel)
> + - Uboot
> + - Kernel
> + 
> +Uboot is flashed to the mmcblk0p1 emmc partition.
> +
> +Cboot could be compiled from source, which is available from the official
> +download site; however, we do not (yet) compile cboot.
> +
> diff --git a/board/jetsontx2 b/board/jetsontx2
> new file mode 120000
> index 0000000000..7404114cc3
> --- /dev/null
> +++ b/board/jetsontx2
> @@ -0,0 +1 @@
> +./jetson/tx2
> \ No newline at end of file
> diff --git a/configs/jetsontx2_defconfig b/configs/jetsontx2_defconfig
> new file mode 100644
> index 0000000000..5ca832524e
> --- /dev/null
> +++ b/configs/jetsontx2_defconfig
> @@ -0,0 +1,64 @@
> +BR2_aarch64=y
> +BR2_cortex_a57=y
> +BR2_ARM_FPU_FP_ARMV8=y
> +
> +# enable specific optimizations
> +BR2_TARGET_OPTIMIZATION="-march=armv8-a+crypto -mcpu=cortex-a57+crypto"
> +
> +# Toolchain reference: docs.nvidia.com: "Jetson Linux Driver Package Toolchain"
> +BR2_TOOLCHAIN_BUILDROOT=y
> +BR2_TOOLCHAIN_BUILDROOT_CXX=y
> +BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
> +BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
> +BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
> +BR2_BINUTILS_VERSION_2_32_X=y
> +BR2_GCC_VERSION_7_X=y

This means that you are not working on Buildroot master because gcc 7 has been
removed already.

This is annoying... either the latest NVIDIA SDK (jetpack 4.4.1) is already out
of date because it requires an old gcc version, or gcc is moving too fast for such
an SDK.

Your work on this series must be discussed with other maintainers.
Wait some time before sending v5.

> +BR2_GCC_ENABLE_LTO=n

Should be:
# BR2_GCC_ENABLE_LTO is not set

Best regards,
Romain


> +BR2_USE_MMU=y
> +
> +BR2_SYSTEM_DHCP="eth0"
> +
> +# Linux headers same as kernel, a 4.9 series
> +BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9=y
> +BR2_KERNEL_HEADERS_AS_KERNEL=y
> +
> +BR2_LINUX_KERNEL=y
> +BR2_LINUX_KERNEL_CUSTOM_TARBALL=y
> +# patches-l4t-r32.4
> +BR2_LINUX_KERNEL_CUSTOM_TARBALL_LOCATION="$(call github,madisongh,linux-tegra-4.9,0be1a57448010ae60505acf4e2153638455cee7c)/linux-tegra-4.9.140-r1.tar.gz"
> +BR2_LINUX_KERNEL_DEFCONFIG="tegra"
> +
> +# Build the DTB from the kernel sources
> +BR2_LINUX_KERNEL_DTS_SUPPORT=y
> +BR2_LINUX_KERNEL_INTREE_DTS_NAME="_ddot_/_ddot_/_ddot_/_ddot_/nvidia/platform/t18x/quill/kernel-dts/tegra186-quill-p3310-1000-c03-00-base"
> +
> +BR2_LINUX_KERNEL_NEEDS_HOST_OPENSSL=y
> +
> +BR2_PACKAGE_LINUX4TEGRA=y
> +BR2_PACKAGE_LINUX4TEGRA_PLATFORM_T186REF=y
> +
> +# TODO: NVIDIA_CONTAINER_TOOLKIT requires a go-module integration.
> +# BR2_PACKAGE_NVIDIA_CONTAINER_TOOLKIT=y
> +
> +BR2_PACKAGE_LINUX_FIRMWARE=y
> +BR2_PACKAGE_LINUX_FIRMWARE_RTL_88XX=y
> +
> +# Required tools to create the image
> +BR2_PACKAGE_HOST_DOSFSTOOLS=y
> +BR2_PACKAGE_HOST_JQ=y
> +BR2_PACKAGE_HOST_PARTED=y
> +
> +# Filesystem / image
> +BR2_TARGET_ROOTFS_EXT2=y
> +BR2_TARGET_ROOTFS_EXT2_4=y
> +BR2_TARGET_ROOTFS_EXT2_SIZE="2000M"
> +# BR2_TARGET_ROOTFS_TAR is not set
> +
> +# Uboot
> +BR2_TARGET_UBOOT=y
> +BR2_TARGET_UBOOT_BOARD_DEFCONFIG="p2771-0000-500"
> +BR2_TARGET_UBOOT_BUILD_SYSTEM_KCONFIG=y
> +BR2_TARGET_UBOOT_CUSTOM_TARBALL=y
> +BR2_TARGET_UBOOT_CUSTOM_TARBALL_LOCATION="$(call github,paralin,u-boot-tegra,e6da093be3cc593ef4294e1922b3391ede9c94da)/u-boot-tegra-l4t-r32.4-v2016.7.tar.gz"
> +BR2_TARGET_UBOOT_FORMAT_DTB_BIN=y
> +BR2_TARGET_UBOOT_NEEDS_DTC=y
>
Romain Naour Nov. 21, 2020, 10:06 a.m. UTC | #2
Hello Christian,

On 19/11/2020 at 14:40, Romain Naour wrote:
> Hello Christian,
> 
> On 19/11/2020 at 08:53, Christian Stewart wrote:
>> Tested-by: Asaf Kahlon <asafka7@gmail.com>
>> Signed-off-by: Christian Stewart <christian@paral.in>
>>
>> ---
>>
>> v3 -> v4:
>>
>>  - thanks Romain for the review
>>  - cjs: added gcc + binutils version specifiers
>>  - tested against devkit hardware
>>
>> Signed-off-by: Christian Stewart <christian@paral.in>
>> ---
>>  board/jetson/tx2/readme.txt | 83 +++++++++++++++++++++++++++++++++++++
>>  board/jetsontx2             |  1 +
>>  configs/jetsontx2_defconfig | 64 ++++++++++++++++++++++++++++
>>  3 files changed, 148 insertions(+)
>>  create mode 100644 board/jetson/tx2/readme.txt
>>  create mode 120000 board/jetsontx2
>>  create mode 100644 configs/jetsontx2_defconfig
>>

[...]

>> +++ b/configs/jetsontx2_defconfig
>> @@ -0,0 +1,64 @@
>> +BR2_aarch64=y
>> +BR2_cortex_a57=y
>> +BR2_ARM_FPU_FP_ARMV8=y
>> +
>> +# enable specific optimizations
>> +BR2_TARGET_OPTIMIZATION="-march=armv8-a+crypto -mcpu=cortex-a57+crypto"
>> +
>> +# Toolchain reference: docs.nvidia.com: "Jetson Linux Driver Package Toolchain"
>> +BR2_TOOLCHAIN_BUILDROOT=y
>> +BR2_TOOLCHAIN_BUILDROOT_CXX=y
>> +BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
>> +BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
>> +BR2_TOOLCHAIN_BUILDROOT_LOCALE=y

Glibc toolchains already select locale and wchar support.

>> +BR2_BINUTILS_VERSION_2_32_X=y

I'm not sure why you need this version.

>> +BR2_GCC_VERSION_7_X=y
> 
> This means that you are not working on Buildroot master because gcc 7 has been
> removed already.
> 
> This is annoying... either the latest NVIDIA SDK (jetpack 4.4.1) is already out
> of date because it requires an old gcc version, or gcc is moving too fast for such
> an SDK.
> 
> Your work on this series must be discussed with other maintainers.
> Wait some time before sending v5.

Actually I would suggest to use the Linaro aarch64 2018.05
(BR2_TOOLCHAIN_EXTERNAL_LINARO_AARCH64) because it's the toolchain recommended
and used by Nvidia.

See: https://developer.nvidia.com/gcc-linaro-731-201805-sources
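
For reference, the defconfig change would look roughly like this (a sketch;
the external-toolchain option names are taken from the 2020-era Buildroot
tree):

```
# Replace the BR2_TOOLCHAIN_BUILDROOT_* and gcc/binutils version lines with:
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_LINARO_AARCH64=y
```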

> 
>> +BR2_GCC_ENABLE_LTO=n
> 
> Should be:
> # BR2_GCC_ENABLE_LTO is not set
> 
> Best regards,
> Romain
> 
> 
>> +BR2_USE_MMU=y
>> +
>> +BR2_SYSTEM_DHCP="eth0"
>> +
>> +# Linux headers same as kernel, a 4.9 series
>> +BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9=y
>> +BR2_KERNEL_HEADERS_AS_KERNEL=y

We don't need this if we use the Linaro toolchain.

>> +
>> +BR2_LINUX_KERNEL=y
>> +BR2_LINUX_KERNEL_CUSTOM_TARBALL=y
>> +# patches-l4t-r32.4
>> +BR2_LINUX_KERNEL_CUSTOM_TARBALL_LOCATION="$(call github,madisongh,linux-tegra-4.9,0be1a57448010ae60505acf4e2153638455cee7c)/linux-tegra-4.9.140-r1.tar.gz"
>> +BR2_LINUX_KERNEL_DEFCONFIG="tegra"
>> +
>> +# Build the DTB from the kernel sources
>> +BR2_LINUX_KERNEL_DTS_SUPPORT=y
>> +BR2_LINUX_KERNEL_INTREE_DTS_NAME="_ddot_/_ddot_/_ddot_/_ddot_/nvidia/platform/t18x/quill/kernel-dts/tegra186-quill-p3310-1000-c03-00-base"
>> +
>> +BR2_LINUX_KERNEL_NEEDS_HOST_OPENSSL=y
>> +
>> +BR2_PACKAGE_LINUX4TEGRA=y
>> +BR2_PACKAGE_LINUX4TEGRA_PLATFORM_T186REF=y
>> +
>> +# TODO: NVIDIA_CONTAINER_TOOLKIT requires a go-module integration.
>> +# BR2_PACKAGE_NVIDIA_CONTAINER_TOOLKIT=y
>> +
>> +BR2_PACKAGE_LINUX_FIRMWARE=y
>> +BR2_PACKAGE_LINUX_FIRMWARE_RTL_88XX=y
>> +
>> +# Required tools to create the image
>> +BR2_PACKAGE_HOST_DOSFSTOOLS=y
>> +BR2_PACKAGE_HOST_JQ=y
>> +BR2_PACKAGE_HOST_PARTED=y
>> +
>> +# Filesystem / image
>> +BR2_TARGET_ROOTFS_EXT2=y
>> +BR2_TARGET_ROOTFS_EXT2_4=y
>> +BR2_TARGET_ROOTFS_EXT2_SIZE="2000M"

This is huge; here the target directory is only 215MB after building this defconfig.

[target]$ du -hs .
215M

Best regards,
Romain

>> +# BR2_TARGET_ROOTFS_TAR is not set
>> +
>> +# Uboot
>> +BR2_TARGET_UBOOT=y
>> +BR2_TARGET_UBOOT_BOARD_DEFCONFIG="p2771-0000-500"
>> +BR2_TARGET_UBOOT_BUILD_SYSTEM_KCONFIG=y
>> +BR2_TARGET_UBOOT_CUSTOM_TARBALL=y
>> +BR2_TARGET_UBOOT_CUSTOM_TARBALL_LOCATION="$(call github,paralin,u-boot-tegra,e6da093be3cc593ef4294e1922b3391ede9c94da)/u-boot-tegra-l4t-r32.4-v2016.7.tar.gz"
>> +BR2_TARGET_UBOOT_FORMAT_DTB_BIN=y
>> +BR2_TARGET_UBOOT_NEEDS_DTC=y
>>
> 
> _______________________________________________
> buildroot mailing list
> buildroot@busybox.net
> http://lists.busybox.net/mailman/listinfo/buildroot
>
Christian Stewart Nov. 21, 2020, 9:12 p.m. UTC | #3
Hi Romain,


On Sat, Nov 21, 2020 at 2:06 AM Romain Naour <romain.naour@gmail.com> wrote:
>
> Actually I would suggest to use the Linaro aarch64 2018.05
> (BR2_TOOLCHAIN_EXTERNAL_LINARO_AARCH64) because it's the toolchain recommended
> and used by Nvidia.
>
> See: https://developer.nvidia.com/gcc-linaro-731-201805-sources

When using this EXTERNAL_LINARO_AARCH64 I'm getting the build error -

libnvidia-container-1.2.0/src/drivera-container-1.2.0/src/nvc_ldcache.lo]
Error 1
   10 | #include <rpc/rpc.h>

Even with libtirpc in the dependencies. So I guess a few things:

 - rpc.h is coming from the toolchain
 - linaro toolchain doesn't have it (?)
 - libtirpc is probably unnecessary since it's not actually being used

How to fix?

Best,
Christian
Peter Seiderer Nov. 21, 2020, 10:12 p.m. UTC | #4
Hello Christian,

On Sat, 21 Nov 2020 13:12:46 -0800, Christian Stewart <christian@paral.in> wrote:

> hi Romain,
>
>
> On Sat, Nov 21, 2020 at 2:06 AM Romain Naour <romain.naour@gmail.com> wrote:
> >
> > Actually I would suggest to use the Linaro aarch64 2018.05
> > (BR2_TOOLCHAIN_EXTERNAL_LINARO_AARCH64) because it's the toolchain recommended
> > and used by Nvidia.
> >
> > See: https://developer.nvidia.com/gcc-linaro-731-201805-sources
>
> When using this EXTERNAL_LINARO_AARCH64 I'm getting the build error -
>
> libnvidia-container-1.2.0/src/drivera-container-1.2.0/src/nvc_ldcache.lo]
> Error 1
>    10 | #include <rpc/rpc.h>
>
> Even with libtirpc in the dependencies. So I guess a few things:
>
>  - rpc.h is coming from the toolchain
>  - linaro toolchain doesn't have it (?)
>  - libtirpc is probably unnecessary since it's not actually being used
>
> How to fix?

The libtirpc include files are installed to ./host/aarch64-buildroot-linux-gnu/sysroot/usr/include/tirpc,
e.g.:
	./host/aarch64-buildroot-linux-gnu/sysroot/usr/include/tirpc/rpc/rpc.h

Add the /usr/include/tirpc directory to the libnvidia-container compile header search
path...
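
As a sketch (the variable names below are hypothetical and assume
libnvidia-container is built with Buildroot's generic-package
infrastructure), the header path could be added like this:

```
# Sketch only: point the compiler at the staging tirpc headers so
# <rpc/rpc.h> resolves when the toolchain's libc does not ship SunRPC.
LIBNVIDIA_CONTAINER_DEPENDENCIES += libtirpc
LIBNVIDIA_CONTAINER_MAKE_ENV += \
	CFLAGS="$(TARGET_CFLAGS) -I$(STAGING_DIR)/usr/include/tirpc"
```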

Regards,
Peter

>
> Best,
> Christian
Romain Naour Nov. 21, 2020, 11:47 p.m. UTC | #5
On 21/11/2020 at 22:12, Christian Stewart wrote:
> hi Romain,
> 
> 
> On Sat, Nov 21, 2020 at 2:06 AM Romain Naour <romain.naour@gmail.com> wrote:
>>
>> Actually I would suggest to use the Linaro aarch64 2018.05
>> (BR2_TOOLCHAIN_EXTERNAL_LINARO_AARCH64) because it's the toolchain recommended
>> and used by Nvidia.
>>
>> See: https://developer.nvidia.com/gcc-linaro-731-201805-sources
> 
> When using this EXTERNAL_LINARO_AARCH64 I'm getting the build error -
> 
> libnvidia-container-1.2.0/src/drivera-container-1.2.0/src/nvc_ldcache.lo]
> Error 1
>    10 | #include <rpc/rpc.h>
> 
> Even with libtirpc in the dependencies. So I guess a few things:
> 
>  - rpc.h is coming from the toolchain
>  - linaro toolchain doesn't have it (?)
>  - libtirpc is probably unnecessary since it's not actually being used
> 
> How to fix?

The Linaro toolchain provide /usr/include/rpc/rpc.h in staging directory.

libnvidia-container build fine here.

Best regards,
Romain

> 
> Best,
> Christian
>
Christian Stewart Nov. 23, 2020, 11:07 p.m. UTC | #6
Hi Romain,

On Thu, Nov 19, 2020 at 5:40 AM Romain Naour <romain.naour@smile.fr> wrote:
> > +# Toolchain reference: docs.nvidia.com: "Jetson Linux Driver Package Toolchain"
> > +BR2_TOOLCHAIN_BUILDROOT=y
> > +BR2_TOOLCHAIN_BUILDROOT_CXX=y
> > +BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
> > +BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
> > +BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
> > +BR2_BINUTILS_VERSION_2_32_X=y
> > +BR2_GCC_VERSION_7_X=y
>
> This means that you are not working on Buildroot master because gcc 7 has been
> removed already.
>
> This is annoying... either the latest NVIDIA SDK (jetpack 4.4.1) is already out
> of date because it requires an old gcc version, or gcc is moving too fast for such
> an SDK.

I have tested this against GCC 8 and Buildroot 2020.08.x:

BR2_TOOLCHAIN_BUILDROOT=y
BR2_TOOLCHAIN_BUILDROOT_CXX=y
BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
BR2_BINUTILS_VERSION_2_32_X=y
BR2_GCC_VERSION_8_X=y
BR2_USE_MMU=y

... and all works fine. Why do you think it won't work with GCC 8?

Best,
Christian
Romain Naour Nov. 24, 2020, 2:46 p.m. UTC | #7
Hello Christian,

On 24/11/2020 at 00:07, Christian Stewart wrote:
> Hi Romain,
> 
> On Thu, Nov 19, 2020 at 5:40 AM Romain Naour <romain.naour@smile.fr> wrote:
>>> +# Toolchain reference: docs.nvidia.com: "Jetson Linux Driver Package Toolchain"
>>> +BR2_TOOLCHAIN_BUILDROOT=y
>>> +BR2_TOOLCHAIN_BUILDROOT_CXX=y
>>> +BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
>>> +BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
>>> +BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
>>> +BR2_BINUTILS_VERSION_2_32_X=y
>>> +BR2_GCC_VERSION_7_X=y
>>
>> This means that you are not working on Buildroot master because gcc 7 has been
>> removed already.
>>
>> This is annoying... either the latest NVIDIA SDK (jetpack 4.4.1) is already out
>> of date because it requires an old gcc version, or gcc is moving too fast for such
>> an SDK.
> 
> I have tested this against GCC 8 and Buildroot 2020.08.x:
> 
> BR2_TOOLCHAIN_BUILDROOT=y
> BR2_TOOLCHAIN_BUILDROOT_CXX=y
> BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
> BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
> BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
> BR2_BINUTILS_VERSION_2_32_X=y
> BR2_GCC_VERSION_8_X=y
> BR2_USE_MMU=y
> 
> ... and all works fine. Why do you think it won't work with GCC 8?

Cuda libraries require a specific gcc version, see the tegra-demo-distro layer
[1]. I guess Nvidia only uses gcc 7 (or maybe gcc 8) because they are using
Ubuntu on this platform.

Also, the kernel you use comes from github OE4T [2], where ~20 kernel patches
have been backported to fix gcc >= 8 issues. But this is not really the kernel
from the Nvidia SDK.

I understand that the nvidia sdk is difficult to package into Buildroot or
Yocto. My review is absolutely not a no-go for merging this BSP. You did a great
job since you're able to use it with Buildroot :)

[1]
https://github.com/OE4T/tegra-demo-distro/blob/master/layers/meta-tegrademo/conf/distro/tegrademo.conf#L58
[2] https://github.com/OE4T/linux-tegra-4.9

Best regards,
Romain

> 
> Best,
> Christian
>
Graham Leva Nov. 24, 2020, 4:30 p.m. UTC | #8
Hi Christian,

I wanted to follow up here with some feedback. I work for NVIDIA and have
been working on similar things as you, integrating the Jetson line of
boards into Buildroot. Usual disclaimer, all opinions expressed here are my
own and do not reflect those of my employer. I'm very excited to see others
working on this though!

I don't want to derail this discussion in any way, but I thought I might be
able to help some. A couple of issues I noticed:

1. Package name -- this should be "tegra210" or "tegra210-linux" -- this
package is for the BSP (Board Support Package), not Linux4Tegra, the custom
kernel NVIDIA provides for its Tegra line of chips. Other boards require
different BSPs, and still use Linux4Tegra for the kernel.

2. Root permissions -- you can remove the root permissions requirement by
not using the flash.sh script NVIDIA provides. It's really a high-level
wrapper around other scripts and image signing tools that require root.
This should eliminate the need for your custom patch
(0001-Adjust-flash.sh-for-flashing-Buildroot-produced-disk.patch). Happy to
work with you on this. The way NVIDIA flashes boards and defines the
partition layout through xml files and parsing can be difficult to
translate to genimage or another Buildroot-compatible tool. The way I've
gone here was to define a layout based on the output from NVIDIA's scripts,
and then target different layouts based on the board configuration
parameters. This is more work up-front and requires some thinking about how
each of the boards can be structured within Buildroot, but I think the
flexibility (and not having root permissions) outweighs. I personally find
having a genimage.cfg much clearer than the XML files for referencing
partition layouts too.
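
As a purely hypothetical sketch of that direction (real Jetson layouts carry
many more partitions, which NVIDIA defines in its XML files), a minimal
genimage.cfg could look like:

```
# Sketch: a single APP/rootfs partition on a GPT disk image
image emmc.img {
	hdimage {
		gpt = true
	}
	partition APP {
		image = "rootfs.ext2"
	}
}
```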

3. BSP software (everything under nv_tegra directory) -- this is a tough
issue. Ideally, I would like to see NVIDIA offer some static download URLs
for each of these pieces of software so we could create them as individual
Buildroot packages, rather than just installed altogether as part of the
BSP. I think this would be more in line with Buildroot's approach towards
building minimal firmware with only the packages you need. I understand if
this works for your use case, but there's a lot of system setup also
included in this directory (nv_tegra/config.tbz2) that has implications on
the Buildroot port and currently assumes you're building an Ubuntu-based
system. How to handle udev configuration, for example -- I would suggest
copying configurations over should be opt-in based on whether the user has
selected BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_EUDEV=y.
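
That opt-in could be expressed in the package .mk roughly as follows (a
sketch; the install hook and rules filename are hypothetical, while the
condition is the real Buildroot config symbol):

```
# Sketch: only install the BSP udev rules when eudev is selected
ifeq ($(BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_EUDEV),y)
define LINUX4TEGRA_INSTALL_UDEV_RULES
	$(INSTALL) -D -m 0644 $(@D)/nv_tegra/99-tegra-devices.rules \
		$(TARGET_DIR)/etc/udev/rules.d/99-tegra-devices.rules
endef
endif
```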

> Cuda libraries require a specific gcc version, see the tegra-demo-distro
> layer [1]. I guess Nvidia only uses gcc 7 (or maybe gcc 8) because they are
> using Ubuntu on this platform.
>
> > Also, the kernel you use comes from github OE4T [2], where ~20 kernel
> > patches have been backported to fix gcc >= 8 issues. But this is not
> > really the kernel from the Nvidia SDK.

Romain is correct about the Linux4Tegra kernel here. I have a patch (really
a series) I started to submit to add this in to Buildroot (see:
http://buildroot-busybox.2317881.n4.nabble.com/PATCH-0-1-package-linux-nvidia-for-Jetson-Nano-SD-td269064.html#a269065),
and hopefully you can build on it. L4T should be able to work and compile
fine with GCC 8 or 9, but the kernel compilation currently breaks with 10.x.

Kind regards,
Graham Leva

On Tue, Nov 24, 2020 at 8:52 AM Romain Naour <romain.naour@smile.fr> wrote:

> Hello Christian,
>
> On 24/11/2020 at 00:07, Christian Stewart wrote:
> > Hi Romain,
> >
> > On Thu, Nov 19, 2020 at 5:40 AM Romain Naour <romain.naour@smile.fr>
> wrote:
> >>> +# Toolchain reference: docs.nvidia.com: "Jetson Linux Driver Package
> >>> Toolchain"
> >>> +BR2_TOOLCHAIN_BUILDROOT=y
> >>> +BR2_TOOLCHAIN_BUILDROOT_CXX=y
> >>> +BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
> >>> +BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
> >>> +BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
> >>> +BR2_BINUTILS_VERSION_2_32_X=y
> >>> +BR2_GCC_VERSION_7_X=y
> >>
> >> This means that you are not working on Buildroot master because gcc 7
> >> has been removed already.
> >>
> >> This is annoying... either the latest NVIDIA SDK (jetpack 4.4.1) is
> >> already out of date because it requires an old gcc version, or gcc is
> >> moving too fast for such an SDK.
> >
> > I have tested this against GCC 8 and Buildroot 2020.08.x:
> >
> > BR2_TOOLCHAIN_BUILDROOT=y
> > BR2_TOOLCHAIN_BUILDROOT_CXX=y
> > BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
> > BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
> > BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
> > BR2_BINUTILS_VERSION_2_32_X=y
> > BR2_GCC_VERSION_8_X=y
> > BR2_USE_MMU=y
> >
> > ... and all works fine. Why do you think it won't work with GCC 8?
>
> Cuda libraries require a specific gcc version, see the tegra-demo-distro
> layer [1]. I guess Nvidia only uses gcc 7 (or maybe gcc 8) because they are
> using Ubuntu on this platform.
>
> Also, the kernel you use comes from github OE4T [2], where ~20 kernel
> patches have been backported to fix gcc >= 8 issues. But this is not really
> the kernel from the Nvidia SDK.
>
> I understand that the nvidia sdk is difficult to package into Buildroot or
> Yocto. My review is absolutely not a no-go for merging this BSP. You did a
> great
> job since you're able to use it with Buildroot :)
>
> [1]
>
> https://github.com/OE4T/tegra-demo-distro/blob/master/layers/meta-tegrademo/conf/distro/tegrademo.conf#L58
> [2] https://github.com/OE4T/linux-tegra-4.9
>
> Best regards,
> Romain
>
> >
> > Best,
> > Christian
> >
>
>
Christian Stewart Nov. 25, 2020, 1:53 a.m. UTC | #9
Hi Graham,

On Tue, Nov 24, 2020 at 8:30 AM Graham Leva <celaxodon@gmail.com> wrote:
> I wanted to follow up here with some feedback. I work for NVIDIA and have been working on similar things as you, integrating the Jetson line of boards into Buildroot. Usual disclaimer, all opinions expressed here are my own and do not reflect those of my employer. I'm very excited to see others working on this though!

Glad to hear there is interest in supporting this effort in Buildroot
from NVIDIA.

If you have a series you've written for L4T that provides LibGL
properly, it'd be quite helpful if that could be merged into this
and/or if we could use your series instead.

My primary goal is to provide support for the Jetson TX2 and Nano that
works as reliably as possible while not falling too far behind on
compiler and/or glibc versions.

> I don't want to derail this discussion in any way, but I thought I might be able to help some. A couple of issues I noticed:
>
> 1. Package name -- this should be "tegra210" or "tegra210-linux" -- this package is for the BSP (Board Support Package), not Linux4Tegra, the custom kernel NVIDIA provides for its Tegra line of chips. Other boards require different BSPs, and still use Linux4Tegra for the kernel.

The download for this package comes from this page -

https://developer.nvidia.com/embedded/linux-tegra-archive

Which is named "linux-tegra-archive" - "L4T Archive" - "NVIDIA L4T is
the board support package for Jetson."

The branding is pretty explicit that this board support package is
named "linux4tegra" which includes the kernel as part of the bundle.
This is why I chose this package name for the series.

The package supports fetching multiple variants, either the t210 or t186ref.

Given the large quantity of license files and hashes in this package,
it would be (IMO) best to maintain a single package for all of the
board variants, if possible. The contents of the different versions of
l4t are nearly identical and are processed identically.

If it is indeed necessary / better to duplicate the package several
times and make one per variant, I can make that change + the naming
changes.

> 2. Root permissions -- you can remove the root permissions requirement by not using the flash.sh script NVIDIA provides. It's really a high-level wrapper around other scripts and image signing tools that require root. This should eliminate the need for your custom patch (0001-Adjust-flash.sh-for-flashing-Buildroot-produced-disk.patch). Happy to work with you on this.

Actually, the purpose of the patch is not to remove the requirement
for root. It's to prevent the script from generating a new ext4 rootfs
image. This step is unnecessary, Buildroot does it already.

Instead what happens, with the patch applied, is that the script will
flash the buildroot produced ext4 partition image directly to the
target emmc device, rather than building a ext4 image from scratch
unnecessarily.

I use this to flash the emmc on the TX2 over USB (as mentioned in the
readme.txt).

> The way NVIDIA flashes boards and defines the partition layout through xml files and parsing can be difficult to translate to genimage or another Buildroot-compatible tool.

Indeed but this is a separate issue than the flash.sh patch. I believe
flash.sh can still produce rootless images anyway even with the patch
applied.

> The way I've gone here was to define a layout based on the output from NVIDIA's scripts, and then target different layouts based on the board configuration parameters.
> This is more work up-front and requires some thinking about how each of the boards can be structured within Buildroot, but I think the flexibility (and not having root permissions) outweighs. I personally find having a genimage.cfg much clearer than the XML files for referencing partition layouts too.

This is a lot of extra effort that I don't think I would be willing to
maintain (can't speak for the other developers).

At the moment for practical purposes the flash.sh script works
perfectly to flash the device over USB and/or produce the device
images. Is there any reason why we shouldn't use that script?

If a custom partition layout is possible it would be quite good.

> 3. BSP software (everything under nv_tegra directory) -- this is a tough issue. Ideally, I would like to see NVIDIA offer some static download URLs for each of these pieces of software so we could create them as individual Buildroot packages, rather than just installed altogether as part of the BSP. I think this would be more in line with Buildroot's approach towards building minimal firmware with only the packages you need. I understand if this works for your use case, but there's a lot of system setup also included in this directory (nv_tegra/config.tbz2) that has implications on the Buildroot port and currently assumes you're building an Ubuntu-based system. How to handle udev configuration, for example -- I would suggest copying configurations over should be opt-in based on whether the user has selected BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_EUDEV=y.

Well, it's of course possible to filter which files we copy based on
these conditions, even without splitting this into separate packages.
I wonder though if (for an initial version in the tree) there's any
harm in installing udev rules when udev is not enabled? This won't
have any effect, right? So it shouldn't be a showstopper for v0.

> > Cuda libraries requires a specific gcc version, see the tegra-demo-distro layer
> > [1]. I guess nivida only use gcc 7 (or maybe gcc 8) because they are using
> > ubuntu on this platform.
> >
> > Also, the kernel you use come from github OE4T [2] where ~20 kernel patches have
> > been backported to fix gcc >= 8 issues. But this is not really the kernel from
> > Nvidia SDK.

It works with GCC 8 but I am seeing some errors loading the tegra_xusb firmware.

> Romain is correct about the Linux4Tegra kernel here. I have a patch (really a series) I started to submit to add this in to Buildroot (see: http://buildroot-busybox.2317881.n4.nabble.com/PATCH-0-1-package-linux-nvidia-for-Jetson-Nano-SD-td269064.html#a269065), and hopefully you can build on it. L4T should be able to work and compile fine with GCC 8 or 9, but the kernel compilation currently breaks with 10.x.

That patch, as far as I can see, just downloads l4t from a git source
via package-generic?

Which parts should I take from there for this series?

Thanks again for the support on this.

Best regards,
Christian Stewart
Graham Leva Nov. 26, 2020, 4:45 p.m. UTC | #10
Christian,

> Glad to hear there is interest in supporting this effort in Buildroot
> from NVIDIA.

There's really no official support from NVIDIA at this time, unfortunately;
it's just me working in my spare time right now. But I'm happy to help to
see the Jetson series be more successful in Buildroot :)

> My primary goal is to provide support for the Jetson TX2 and Nano that
> works as reliably as possible while not falling too far behind on
> compiler and/or glibc versions.

This is a great goal! I would very much like to see this happen too.

> If you have a series you've written for L4T that provides LibGL
> properly, it'd be quite helpful if that could be merged into this
> and/or if we could use your series instead.

I wasn't aware there was an issue with LibGL and Buildroot. Feel free to
reach out to me through email to discuss this in more detail.

> The download for this package comes from this page -
>
> https://developer.nvidia.com/embedded/linux-tegra-archive
> ...
> The branding is pretty explicit that this board support package is
> named "linux4tegra" which includes the kernel as part of the bundle.
> This is why I chose this package name for the series.
>
> The package supports fetching multiple variants, either the t210 or
> t186ref.

Yeah, I can definitely see how you would come to that conclusion based on
the website and marketing. I think maybe this is more of an internal
distinction than an external one, and not a big issue.

> Given the large quantity of license files and hashes in this package,
> it would be (IMO) best to maintain a single package for all of the
> board variants, if possible. The contents of the different versions of
> l4t are nearly identical and are processed identically.

I agree with you that this is a major issue with using the BSP for
everything. The way NVIDIA approaches firmware for the Jetson line is
philosophically quite different from Buildroot: it's made for building an
Ubuntu-based system with a package manager and all batteries included.
Fortunately, NVIDIA provides separate git repositories for many of the BSP
firmware components that can be selected and built conditionally, which I
think will allow an approach to firmware generation more compatible with
Buildroot. The biggest issue I see is that each piece of NVIDIA software is
not yet packaged individually (cuda-base, cuda, libargus, etc.), making
handling of the BSP in Buildroot problematic.

> This is a lot of extra effort that I don't think I would be willing to
> maintain (can't speak for the other developers).
>
> At the moment for practical purposes the flash.sh script works
> perfectly to flash the device over USB and/or produce the device
> images. Is there any reason why we shouldn't use that script?

I don't mean to suggest there's anything wrong with the flash.sh script —
there's not, and you should use it if it works for you — it just takes a
different approach to building firmware.

Personally, I quite like how Buildroot board configs are responsible for
generating image artifacts, leaving flashing to separate utilities. I'd
like to see any NVIDIA board configs follow this approach if they were
part of Buildroot. And yes, it is more work this way, but like I said
before, I'm happy to help if I can :).
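To make the idea concrete, a minimal genimage.cfg sketch (partition names,
offsets, and sizes here are hypothetical placeholders for illustration, not
NVIDIA's actual layout from the XML files) could look something like:

```
image sdcard.img {
	hdimage {
		partition-table-type = "gpt"
	}

	# hypothetical raw partition holding the bootloader blob
	partition boot {
		image = "u-boot-dtb.bin"
		size = 4M
	}

	# rootfs image produced by Buildroot
	partition rootfs {
		image = "rootfs.ext2"
		size = 2000M
	}
}
```

genimage then assembles the full disk image at build time, without needing
root permissions.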

> Well, it's of course possible to filter which files we copy based on
> these conditions, even without splitting this into separate packages.
> I wonder though if (for an initial version in the tree) there's any
> harm in installing udev rules when udev is not enabled? This won't
> have any effect, right? So it shouldn't be a showstopper for v0.

As far as I know, there wouldn't be any harm. I think it's just a matter
of preference for not installing things your system doesn't need. I
haven't gone through all of the configurations yet though, so I can't
speak to the rest.

> That patch, as far as I can see, just downloads l4t from a git source
> via package-generic?

> Which parts should I take from there for this series?

Yes, it's just a single patch there. I will probably resubmit it as a full
series of patches, as it makes little sense by itself. This branch includes
a squashed version of all kernel, bootloader, and firmware packages for the
4GB Nano SD (p3450-0000) that will be broken up for the patch series:
https://github.com/celaxodon/buildroot/tree/board/jetson-nano-squashed

You can build everything with `make jetson_nano_defconfig`. Note that it's
a little behind master, still uses GCC 7, and will require building
different NVIDIA hardware packages (see package/hardware-nvidia/Config.in)
based on the target board. I can point you to the patch series after I've
carved each one out. To combine this work with yours, you could probably
apply the single commit from my squashed branch and deactivate my BSP
package, "tegra210-linux", substituting your "linux4tegra" package.

Kind regards,
Graham Leva

On Tue, Nov 24, 2020 at 7:53 PM Christian Stewart <christian@paral.in>
wrote:

> Hi Graham,
>
> On Tue, Nov 24, 2020 at 8:30 AM Graham Leva <celaxodon@gmail.com> wrote:
> > I wanted to follow up here with some feedback. I work for NVIDIA and
> have been working on similar things as you, integrating the Jetson line of
> boards into Buildroot. Usual disclaimer, all opinions expressed here are my
> own and do not reflect those of my employer. I'm very excited to see others
> working on this though!
>
> Glad to hear there is interest in supporting this effort in Buildroot
> from NVIDIA.
>
> If you have a series you've written for L4T that provides LibGL
> properly, it'd be quite helpful if that could be merged into this
> and/or if we could use your series instead.
>
> My primary goal is to provide support for the Jetson TX2 and Nano that
> works as reliably as possible while not falling too far behind on
> compiler and/or glibc versions.
>
> > I don't want to derail this discussion in any way, but I thought I might
> be able to help some. A couple of issues I noticed:
> >
> > 1. Package name -- this should be "tegra210" or "tegra210-linux" -- this
> package is for the BSP (Board Support Package), not Linux4Tegra, the custom
> kernel NVIDIA provides for its Tegra line of chips. Other boards require
> different BSPs, and still use Linux4Tegra for the kernel.
>
> The download for this package comes from this page -
>
> https://developer.nvidia.com/embedded/linux-tegra-archive
>
> Which is named "linux-tegra-archive" - "L4T Archive" - "NVIDIA L4T is
> the board support package for Jetson."
>
> The branding is pretty explicit that this board support package is
> named "linux4tegra" which includes the kernel as part of the bundle.
> This is why I chose this package name for the series.
>
> The package supports fetching multiple variants, either the t210 or
> t186ref.
>
> Given the large quantity of license files and hashes in this package,
> it would be (IMO) best to maintain a single package for all of the
> board variants, if possible. The contents of the different versions of
> l4t are nearly identical and are processed identically.
>
> If it is indeed necessary / better to duplicate the package several
> times and make one per variant, I can make that change + the naming
> changes.
>
> > 2. Root permissions -- you can remove the root permissions requirement
> by not using the flash.sh script NVIDIA provides. It's really a high-level
> wrapper around other scripts and image signing tools that require root.
> This should eliminate the need for your custom patch
> (0001-Adjust-flash.sh-for-flashing-Buildroot-produced-disk.patch). Happy to
> work with you on this.
>
> Actually, the purpose of the patch is not to remove the requirement
> for root. It's to prevent the script from generating a new ext4 rootfs
> image. This step is unnecessary, Buildroot does it already.
>
> Instead what happens, with the patch applied, is that the script will
> flash the buildroot produced ext4 partition image directly to the
> target emmc device, rather than building a ext4 image from scratch
> unnecessarily.
>
> I use this to flash the emmc on the TX2 over USB (as mentioned in the
> readme.txt).
>
> > The way NVIDIA flashes boards and defines the partition layout through
> xml files and parsing can be difficult to translate to genimage or another
> Buildroot-compatible tool.
>
> Indeed but this is a separate issue than the flash.sh patch. I believe
> flash.sh can still produce rootless images anyway even with the patch
> applied.
>
> > The way I've gone here was to define a layout based on the output from
> NVIDIA's scripts, and then target different layouts based on the board
> configuration parameters.
> > This is more work up-front and requires some thinking about how each of
> the boards can be structured within Buildroot, but I think the flexibility
> (and not having root permissions) outweighs. I personally find having a
> genimage.cfg much clearer than the XML files for referencing partition
> layouts too.
>
> This is a lot of extra effort that I don't think I would be willing to
> maintain (can't speak for the other developers).
>
> At the moment for practical purposes the flash.sh script works
> perfectly to flash the device over USB and/or produce the device
> images. Is there any reason why we shouldn't use that script?
>
> If a custom partition layout is possible it would be quite good.
>
> > 3. BSP software (everything under nv_tegra directory) -- this is a tough
> issue. Ideally, I would like to see NVIDIA offer some static download URLs
> for each of these pieces of software so we could create them as individual
> Buildroot packages, rather than just installed altogether as part of the
> BSP. I think this would be more in line with Buildroot's approach towards
> building minimal firmware with only the packages you need. I understand if
> this works for your use case, but there's a lot of system setup also
> included in this directory (nv_tegra/config.tbz2) that has implications on
> the Buildroot port and currently assumes you're building an Ubuntu-based
> system. How to handle udev configuration, for example -- I would suggest
> copying configurations over should be opt-in based on whether the user has
> selected BR2_ROOTFS_DEVICE_CREATION_DYNAMIC_EUDEV=y.
>
> Well, it's of course possible to filter which files we copy based on
> these conditions, even without splitting this into separate packages.
> I wonder though if (for an initial version in the tree) there's any
> harm in installing udev rules when udev is not enabled? This won't
> have any effect, right? So it shouldn't be a showstopper for v0.
>
> > > Cuda libraries requires a specific gcc version, see the
> tegra-demo-distro layer
> > > [1]. I guess nivida only use gcc 7 (or maybe gcc 8) because they are
> using
> > > ubuntu on this platform.
> > >
> > > Also, the kernel you use come from github OE4T [2] where ~20 kernel
> patches have
> > > been backported to fix gcc >= 8 issues. But this is not really the
> kernel from
> > > Nvidia SDK.
>
> It works with GCC 8 but I am seeing some errors loading the tegra_xusb
> firmware.
>
> > Romain is correct about the Linux4Tegra kernel here. I have a patch
> (really a series) I started to submit to add this in to Buildroot (see:
> http://buildroot-busybox.2317881.n4.nabble.com/PATCH-0-1-package-linux-nvidia-for-Jetson-Nano-SD-td269064.html#a269065),
> and hopefully you can build on it. L4T should be able to work and compile
> fine with GCC 8 or 9, but the kernel compilation currently breaks with 10.x.
>
> That patch, as far as I can see, just downloads l4t from a git source
> via package-generic?
>
> Which parts should I take from there for this series?
>
> Thanks again for the support on this.
>
> Best regards,
> Christian Stewart
>
diff mbox series

Patch

diff --git a/board/jetson/tx2/readme.txt b/board/jetson/tx2/readme.txt
new file mode 100644
index 0000000000..45fdb50a56
--- /dev/null
+++ b/board/jetson/tx2/readme.txt
@@ -0,0 +1,83 @@ 
+NVIDIA Jetson TX2
+
+Intro
+=====
+
+This configuration supports the Jetson TX2 devkit.
+
+Building
+========
+
+Configure Buildroot
+-------------------
+
+For Jetson TX2:
+
+  $ make jetsontx2_defconfig
+
+Build the rootfs
+----------------
+
+You may now build your rootfs with:
+
+  $ make
+
+
+Flashing
+========
+
+Once the build process is finished you will have the target binaries in the
+output/images directory, with a copy of linux4tegra.
+
+Flashing to the internal eMMC is done by booting into the official recovery
+mode and flashing the system from there. A factory-flashed TX2 is suitable.
+
+In many cases the TX2 will not boot properly unless all of the peripherals
+are fully disconnected, power is disconnected, everything fully resets, and
+then power is reconnected.
+
+The Jetson's recovery mode is used for flashing. To enter recovery mode:
+
+ - Start with the machine powered off + fully unplugged.
+ - Plug in the device to power, and connect a HDMI display.
+ - Connect a micro-USB cable from the host PC to the target board.
+ - Power on the device by holding the start button until the red light is lit.
+ - Hold down the RST button and REC button simultaneously.
+ - Release the RST button while holding down the REC button.
+ - Wait a few seconds, then release the REC button.
+
+To flash over USB:
+
+```
+cd output/images/linux4tegra
+sudo bash ./flash.sh \
+     -I ../rootfs.ext2 \
+     -K ../Image \
+     -L ../u-boot-dtb.bin \
+     -C "ramdisk_size=100000 net.ifnames=0 elevator=deadline" \
+     -d ../tegra186-quill-p3310-1000-c03-00-base.dtb \
+     jetson-tx2-devkit mmcblk0p1
+```
+
+This will run the `flash.sh` script from L4T, setting up the kernel and
+u-boot and flashing the rootfs to partition mmcblk0p1. This may overwrite
+your existing work, so use it for initial setup only.
+
+Bootup Process
+==============
+
+The TX2 boots from the internal eMMC, at mmcblk0p1.
+
+A "secure boot" process is used, with multiple bootloaders:
+
+ - BootROM -> MB1 (TrustZone)
+ - MB2/BPMP -> (Non-Trustzone)
+ - Cboot (uses Little Kernel)
+ - Uboot
+ - Kernel
+ 
+Uboot is flashed to the mmcblk0p1 emmc partition.
+
+Cboot can be compiled from source, which is available from the official
+NVIDIA sources; however, we do not (yet) compile cboot.
+
diff --git a/board/jetsontx2 b/board/jetsontx2
new file mode 120000
index 0000000000..7404114cc3
--- /dev/null
+++ b/board/jetsontx2
@@ -0,0 +1 @@ 
+./jetson/tx2
\ No newline at end of file
diff --git a/configs/jetsontx2_defconfig b/configs/jetsontx2_defconfig
new file mode 100644
index 0000000000..5ca832524e
--- /dev/null
+++ b/configs/jetsontx2_defconfig
@@ -0,0 +1,64 @@ 
+BR2_aarch64=y
+BR2_cortex_a57=y
+BR2_ARM_FPU_FP_ARMV8=y
+
+# enable specific optimizations
+BR2_TARGET_OPTIMIZATION="-march=armv8-a+crypto -mcpu=cortex-a57+crypto"
+
+# Toolchain reference: docs.nvidia.com: "Jetson Linux Driver Package Toolchain"
+BR2_TOOLCHAIN_BUILDROOT=y
+BR2_TOOLCHAIN_BUILDROOT_CXX=y
+BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
+BR2_TOOLCHAIN_BUILDROOT_WCHAR=y
+BR2_TOOLCHAIN_BUILDROOT_LOCALE=y
+BR2_BINUTILS_VERSION_2_32_X=y
+BR2_GCC_VERSION_7_X=y
+# BR2_GCC_ENABLE_LTO is not set
+BR2_USE_MMU=y
+
+BR2_SYSTEM_DHCP="eth0"
+
+# Linux headers same as kernel, a 4.9 series
+BR2_PACKAGE_HOST_LINUX_HEADERS_CUSTOM_4_9=y
+BR2_KERNEL_HEADERS_AS_KERNEL=y
+
+BR2_LINUX_KERNEL=y
+BR2_LINUX_KERNEL_CUSTOM_TARBALL=y
+# patches-l4t-r32.4
+BR2_LINUX_KERNEL_CUSTOM_TARBALL_LOCATION="$(call github,madisongh,linux-tegra-4.9,0be1a57448010ae60505acf4e2153638455cee7c)/linux-tegra-4.9.140-r1.tar.gz"
+BR2_LINUX_KERNEL_DEFCONFIG="tegra"
+
+# Build the DTB from the kernel sources
+BR2_LINUX_KERNEL_DTS_SUPPORT=y
+BR2_LINUX_KERNEL_INTREE_DTS_NAME="_ddot_/_ddot_/_ddot_/_ddot_/nvidia/platform/t18x/quill/kernel-dts/tegra186-quill-p3310-1000-c03-00-base"
+
+BR2_LINUX_KERNEL_NEEDS_HOST_OPENSSL=y
+
+BR2_PACKAGE_LINUX4TEGRA=y
+BR2_PACKAGE_LINUX4TEGRA_PLATFORM_T186REF=y
+
+# TODO: NVIDIA_CONTAINER_TOOLKIT requires a go-module integration.
+# BR2_PACKAGE_NVIDIA_CONTAINER_TOOLKIT=y
+
+BR2_PACKAGE_LINUX_FIRMWARE=y
+BR2_PACKAGE_LINUX_FIRMWARE_RTL_88XX=y
+
+# Required tools to create the image
+BR2_PACKAGE_HOST_DOSFSTOOLS=y
+BR2_PACKAGE_HOST_JQ=y
+BR2_PACKAGE_HOST_PARTED=y
+
+# Filesystem / image
+BR2_TARGET_ROOTFS_EXT2=y
+BR2_TARGET_ROOTFS_EXT2_4=y
+BR2_TARGET_ROOTFS_EXT2_SIZE="2000M"
+# BR2_TARGET_ROOTFS_TAR is not set
+
+# Uboot
+BR2_TARGET_UBOOT=y
+BR2_TARGET_UBOOT_BOARD_DEFCONFIG="p2771-0000-500"
+BR2_TARGET_UBOOT_BUILD_SYSTEM_KCONFIG=y
+BR2_TARGET_UBOOT_CUSTOM_TARBALL=y
+BR2_TARGET_UBOOT_CUSTOM_TARBALL_LOCATION="$(call github,paralin,u-boot-tegra,e6da093be3cc593ef4294e1922b3391ede9c94da)/u-boot-tegra-l4t-r32.4-v2016.7.tar.gz"
+BR2_TARGET_UBOOT_FORMAT_DTB_BIN=y
+BR2_TARGET_UBOOT_NEEDS_DTC=y