[RFC,v8,00/20] Unifying LKL into UML

Message ID cover.1611103406.git.thehajime@gmail.com
State Not Applicable

Pull-request

git://github.com/thehajime/linux 043677211bb5562397c511911e7861c3e217611b

Message

Hajime Tazaki Jan. 20, 2021, 2:27 a.m. UTC
This is another spin of the unification of LKL into UML.  It updates
the series and addresses the review comments on our v7 patches.  The
summary is listed in the changelog below.

Note that the whole patchset requires the previously submitted patch
"um: ubd: fix command line handling of ubd" in order to be tested
correctly.


Changes in rfc v8:
- stop using the term "nommu" mode; call it "library" mode instead
- rewrite generic atomic64 to use gcc builtin atomics
- use kernel-doc format for comments
- stop using a collection of function pointers (struct lkl_host_operations);
  use weak functions instead
- use python3, not python
- use C99 designated initializers for strerror
- add comments/descriptions about the thread implementation
- remove redundant Kconfig entries (RAID6_PQ_BENCHMARK, STACKTRACE_SUPPORT,
  etc.)
- refine code comments
- stop using NPROC for the internal -j flag
- remove the "UMMODE=library" make argument
- preserve the vmlinux file as it is
- rework the commit history into commits with reasonable file groupings
- rework the test framework using kselftest (tools/testing/selftests/um/)
- add documentation and a MAINTAINERS entry


Changes in rfc v7:
- preserve `make ARCH=um` syntax to build UML
- introduce `make ARCH=um UMMODE=library` to build library mode
- fix undefined symbols issue during modpost
- clean up makefiles (arch/um, tools/um)

Changes in rfc v6:
- rebase with the current linus tree

Changes in rfc v5:
- rewrite whole patchset from scratch
- move arch-dependent code of arch/um and arch/x86/um to tools/um
 - mainly code under os-Linux/ involved
 - introduce 2-stage build (kernel, and host-dependent parts)
 - clean up vmlinux.lds.S
- put LKL-specific implementations as a SUBARCH under arch/um/nommu
- introduce !CONFIG_MMU in arch/um
- use struct arch_thread and arch_switch_to() for subarch-specific
  thread implementation
- integrate with the IRQ infrastructure of UML
- tested with block device drivers (ubd) for the proof

Changes in rfc v4: (https://lwn.net/Articles/816276/)
- Rebase on the current uml/master branch
- Fix IRQ handling (bug fix)
- drop a patch for CONFIG_GENERIC_ATOMIC64 (comment by Peter Zijlstra)
- implement vector net driver for UMMODE_LIB (comment by Anton)
- clean up uapi headers to avoid duplicates
- clean up IRQ handling code (comment by Anton)
- fix error handling in test code (comments by David Disseldorp)

Changes in rfc v3:
- use UML drivers (net, block) from LKL programs
- drop virtio device implementations
- drop mingw32 (Windows) host
- drop android (arm/aarch64) host
- drop FreeBSD (x86_64) host
- drop LD_PRELOAD (hijack) support
- update milestone

Changes in rfc v2:
- use UMMODE instead of SUBARCH to switch between UML and LKL
- the tools/lkl directory is still there. I confirmed we can move it under
  arch/um (e.g., arch/um/lkl/hosts).  I will move it IF this is preferable.
- drop several patches involving non-uml directories
- drop several patches which are not required
- refine commit logs
- update documentation




LKL (Linux Kernel Library) aims to allow reusing the Linux kernel code
as extensively as possible with minimal effort and reduced maintenance
overhead.

Examples of how LKL can be used include: creating userspace applications
(running on Linux and other operating systems) that can read or write Linux
filesystems or can use the Linux networking stack, creating kernel drivers
for other operating systems that can read Linux filesystems, bootloader
support for reading/writing Linux filesystems, etc.

With LKL, the kernel code is compiled into an object file that can be
directly linked by applications. The API offered by LKL is based on the
Linux system call interface.
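
For illustration, a minimal application linked against the library might
look roughly like the sketch below.  The function names follow the
application API installed in tools/um/include/lkl.h, but the exact
prototypes in this series may differ, so treat it as a sketch only:

    #include <stdio.h>
    #include <lkl.h>
    #include <lkl_host.h>

    int main(void)
    {
            long ret;

            /* boot the in-process kernel with a kernel command line */
            lkl_start_kernel("mem=32M loglevel=8");

            /* Linux system calls become plain function calls */
            ret = lkl_sys_mkdir("/mnt", 0700);
            if (ret < 0)
                    printf("mkdir failed: %s\n", lkl_strerror(ret));

            lkl_sys_halt();
            return 0;
    }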

LKL was originally implemented as an architecture port in arch/lkl, but this
series of commits integrates it into arch/um as one of the modes of
UML.  This was discussed in the original RFC email for LKL (*1).

The latest LKL version can be found at https://github.com/lkl/linux

Milestone
=========
This patchset is a first step toward upstreaming the *library mode* of
the Linux kernel, but we think we need several more steps toward our
goal, as described below.


Milestone 1: LKL lib on top of UML
 * Kernel - Host build split
 -  Build UML as a relocatable object using the UML kernel linker script.
 -  Move the ptrace and other well isolated os code out of arch/um to
    tools/um
 -  Use standard host toolchain to create a static library stripped of
    the ptrace code. Use standard host toolchain to build the main UML
    executable.
 -  Add library init API that creates the UML kernel process and starts
    UML.
 * System calls APIs
 -  Add new system call interface based on UML's irq facility.
 -  Use the LKL scripts to export the required headers to create system
    call APIs that use the UML system call infrastructure.
 -  Keep the underlying host and driver operations (threads, irqs, etc.)
    as they are now in UML.
 * Boot test
 -  Port the LKL boot test to verify that we are able to programmatically
    issue system calls.

Milestone 2: add virtio disk support
 * Export asm/io.h operations to host/os. Create IO access operations
   and redirect them to weak os_ variants that use the current UML
   implementation.
 * Add the LKL IO access layer including generic virtio handling and the
   virtio block device code.
 * Port LKL disk test and disk apps (lklfuse, fs2tar, cptofs)

Milestone 3: new arch ports
  * Abstract the system call / IRQ mode and move the implementation to the host
  * Abstract the thread model and move the implementation to the host
  * Add LKL thread model and LKL ports


Building LKL, the host library and LKL applications
===================================================

% make ARCH=um SUBARCH=lkl defconfig
% make ARCH=um SUBARCH=lkl

will build LKL as an object file, install it in tools/um/lib together
with the header files in tools/um/include, and then build the host
library, the tests and a few example applications:

* tools/testing/selftests/um/boot.c - a simple application that uses
  LKL and exercises the basic LKL APIs

* tools/testing/selftests/um/disk.c - a simple application that tests
  LKL and exercises the basic filesystem-related LKL APIs

These tests can be run with the following kselftest command:

    $ make ARCH=um SUBARCH=lkl TARGETS="um" kselftest

Supported hosts
===============

Currently LKL supports Linux userspace applications. New hosts can be added
relatively easily if the host supports gcc and GNU ld. Previous versions of
LKL supported Windows kernel and Haiku kernel hosts, and we also have WIP
patches for a rump-hypercall interface (used in UEFI) as well as for macOS
userspace (part of POSIX).

There is also a musl-libc port for LKL, which might be of interest to some
folks.


Further reading about LKL
=========================

- Discussion in a GitHub LKL issue
https://github.com/lkl/linux/issues/304

- LKL: The Linux Kernel Library (research paper)
https://www.researchgate.net/profile/Nicolae_Tapus2/publication/224164682_LKL_The_Linux_kernel_library/links/02bfe50fd921ab4f7c000000.pdf

*1 RFC email to LKML (back in 2015)
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1012277.html


Please review the following changes for suitability for inclusion. If you have
any objections or suggestions for improvement, please respond to the patches. If
you agree with the changes, please provide your Acked-by.

The following changes since commit 19c329f6808995b142b3966301f217c831e7cf31:

  Linux 5.11-rc4 (2021-01-17 16:37:05 -0800)

are available in the Git repository at:

  git://github.com/thehajime/linux 043677211bb5562397c511911e7861c3e217611b
  https://github.com/thehajime/linux/tree/uml-lkl-5.11rc4-v8

Hajime Tazaki (18):
  um: move arch/um/os-Linux dir to tools/um/uml
  um: move arch/x86/um/os-Linux to tools/um/uml/
  um: extend arch_switch_to for alternate SUBARCH
  um: add UML library mode
  um: lkl: host interface
  um: lkl: memory handling
  um: lkl: kernel thread support
  um: lkl: system call interface and application API
  um: lkl: basic console support
  um: lkl: initialization and cleanup
  um: lkl: integrate with irq infrastructure of UML
  um: lkl: plug in the build system
  um: host: add library mode build for ARCH=um
  um: host: add utilities functions
  um: host: posix host operations
  selftests/um: lkl: add test programs for library mode of UML
  um: lkl: add block device support of UML
  um: lkl: add documentation

Octavian Purdila (2):
  um: split build in kernel and host parts
  um: implement os_initcalls and os_exitcalls

 Documentation/virt/uml/lkl.txt                |  48 ++
 MAINTAINERS                                   |  10 +
 arch/um/Kconfig                               |  37 +-
 arch/um/Makefile                              |  41 +-
 arch/um/configs/lkl_defconfig                 |  73 +++
 arch/um/drivers/Makefile                      |  10 +-
 .../um/{os-Linux => }/drivers/ethertap_kern.c |   0
 arch/um/{os-Linux => }/drivers/tuntap_kern.c  |   0
 arch/um/include/asm/common.lds.S              |  99 ----
 arch/um/include/asm/host_ops.h                |   9 +
 arch/um/include/asm/mmu.h                     |   3 +
 arch/um/include/asm/mmu_context.h             |  10 +
 arch/um/include/asm/page.h                    |  15 +
 arch/um/include/asm/pgtable.h                 |  27 +
 arch/um/include/asm/thread_info.h             |  24 +
 arch/um/include/asm/uaccess.h                 |   6 +
 arch/um/include/asm/xor.h                     |   3 +-
 arch/um/include/shared/as-layout.h            |   1 +
 .../drivers => include/shared}/etap.h         |   0
 arch/um/include/shared/init.h                 |  19 +-
 arch/um/include/shared/os.h                   |   1 +
 .../drivers => include/shared}/tuntap.h       |   0
 arch/um/kernel/Makefile                       |  13 +-
 arch/um/kernel/dyn.lds.S                      | 171 -------
 arch/um/kernel/irq.c                          |  13 +
 arch/um/kernel/process.c                      |  14 +-
 arch/um/kernel/reboot.c                       |   5 +
 arch/um/kernel/time.c                         |   2 +
 arch/um/kernel/um_arch.c                      |  16 +
 arch/um/kernel/uml.lds.S                      | 115 -----
 arch/um/{os-Linux => kernel}/user_syms.c      |   0
 arch/um/kernel/vmlinux.lds.S                  |  92 +++-
 arch/um/lkl/Makefile                          |   2 +
 arch/um/lkl/Makefile.um                       |  18 +
 arch/um/lkl/include/asm/Kbuild                |   7 +
 arch/um/lkl/include/asm/archparam.h           |   1 +
 arch/um/lkl/include/asm/atomic.h              |  11 +
 arch/um/lkl/include/asm/atomic64.h            |  91 ++++
 arch/um/lkl/include/asm/cpu.h                 |  15 +
 arch/um/lkl/include/asm/elf.h                 |  19 +
 arch/um/lkl/include/asm/mm_context.h          |   8 +
 arch/um/lkl/include/asm/processor.h           |  48 ++
 arch/um/lkl/include/asm/ptrace.h              |  21 +
 arch/um/lkl/include/asm/sched.h               |  23 +
 arch/um/lkl/include/asm/segment.h             |   9 +
 arch/um/lkl/include/asm/syscall_wrapper.h     |  57 +++
 arch/um/lkl/include/asm/syscalls.h            |  15 +
 arch/um/lkl/include/uapi/asm/Kbuild           |   6 +
 arch/um/lkl/include/uapi/asm/bitsperlong.h    |  16 +
 arch/um/lkl/include/uapi/asm/byteorder.h      |  13 +
 arch/um/lkl/include/uapi/asm/host_ops.h       | 280 +++++++++++
 arch/um/lkl/include/uapi/asm/sigcontext.h     |  12 +
 arch/um/lkl/include/uapi/asm/syscalls.h       | 301 ++++++++++++
 arch/um/lkl/include/uapi/asm/unistd.h         |  17 +
 arch/um/lkl/um/Kconfig                        |  21 +
 arch/um/lkl/um/Makefile                       |   4 +
 arch/um/lkl/um/bootmem.c                      | 107 ++++
 arch/um/lkl/um/console.c                      |  41 ++
 arch/um/lkl/um/cpu.c                          | 269 ++++++++++
 arch/um/lkl/um/delay.c                        |  31 ++
 arch/um/lkl/um/setup.c                        | 179 +++++++
 arch/um/lkl/um/shared/sysdep/archsetjmp.h     |  13 +
 arch/um/lkl/um/shared/sysdep/faultinfo.h      |   8 +
 arch/um/lkl/um/shared/sysdep/kernel-offsets.h |  12 +
 arch/um/lkl/um/shared/sysdep/mcontext.h       |   9 +
 arch/um/lkl/um/shared/sysdep/ptrace.h         |  42 ++
 arch/um/lkl/um/shared/sysdep/ptrace_user.h    |   7 +
 arch/um/lkl/um/syscalls.c                     | 193 ++++++++
 arch/um/lkl/um/threads.c                      | 261 ++++++++++
 arch/um/lkl/um/unimplemented.c                |  70 +++
 arch/um/lkl/um/user_constants.h               |  13 +
 arch/um/os-Linux/Makefile                     |  21 -
 arch/um/os-Linux/drivers/Makefile             |  13 -
 arch/um/scripts/headers_install.py            | 200 ++++++++
 arch/x86/um/Makefile                          |   2 +-
 arch/x86/um/os-Linux/Makefile                 |  13 -
 arch/x86/um/ptrace_32.c                       |   2 +-
 arch/x86/um/syscalls_64.c                     |   2 +-
 scripts/headers_install.sh                    |   6 +-
 scripts/link-vmlinux.sh                       |  42 +-
 tools/testing/selftests/Makefile              |   3 +
 tools/testing/selftests/um/Makefile           |  16 +
 tools/testing/selftests/um/boot.c             | 376 ++++++++++++++
 tools/testing/selftests/um/cla.c              | 159 ++++++
 tools/testing/selftests/um/cla.h              |  33 ++
 tools/testing/selftests/um/disk-ext4.sh       |   6 +
 tools/testing/selftests/um/disk-vfat.sh       |   6 +
 tools/testing/selftests/um/disk.c             | 166 +++++++
 tools/testing/selftests/um/disk.sh            |  67 +++
 tools/testing/selftests/um/test.c             | 128 +++++
 tools/testing/selftests/um/test.h             |  72 +++
 tools/testing/selftests/um/test.sh            | 181 +++++++
 tools/um/.gitignore                           |   1 +
 tools/um/Makefile                             |  76 +++
 tools/um/Targets                              |   9 +
 tools/um/include/lkl.h                        | 364 ++++++++++++++
 tools/um/include/lkl_host.h                   |  19 +
 tools/um/lib/Build                            |   7 +
 tools/um/lib/fs.c                             | 461 ++++++++++++++++++
 tools/um/lib/jmp_buf.c                        |  14 +
 tools/um/lib/posix-host.c                     | 273 +++++++++++
 tools/um/lib/utils.c                          | 207 ++++++++
 tools/um/uml/Build                            |  59 +++
 tools/um/uml/drivers/Build                    |  10 +
 .../um/uml}/drivers/ethertap_user.c           |   0
 .../um/uml}/drivers/tuntap_user.c             |   0
 {arch/um/os-Linux => tools/um/uml}/elf_aux.c  |   0
 {arch/um/os-Linux => tools/um/uml}/execvp.c   |   4 -
 {arch/um/os-Linux => tools/um/uml}/file.c     |   0
 {arch/um/os-Linux => tools/um/uml}/helper.c   |   0
 {arch/um/os-Linux => tools/um/uml}/irq.c      |   0
 tools/um/uml/lkl/Build                        |   1 +
 tools/um/uml/lkl/registers.c                  |  21 +
 tools/um/uml/lkl/unimplemented.c              |  21 +
 {arch/um/os-Linux => tools/um/uml}/main.c     |   0
 {arch/um/os-Linux => tools/um/uml}/mem.c      |   0
 {arch/um/os-Linux => tools/um/uml}/process.c  |   2 +
 .../um/os-Linux => tools/um/uml}/registers.c  |   0
 {arch/um/os-Linux => tools/um/uml}/sigio.c    |   0
 {arch/um/os-Linux => tools/um/uml}/signal.c   |  12 +-
 .../skas/Makefile => tools/um/uml/skas/Build  |   6 +-
 {arch/um/os-Linux => tools/um/uml}/skas/mem.c |   0
 .../os-Linux => tools/um/uml}/skas/process.c  |   3 +-
 {arch/um/os-Linux => tools/um/uml}/start_up.c |   0
 {arch/um/os-Linux => tools/um/uml}/time.c     |   0
 {arch/um/os-Linux => tools/um/uml}/tty.c      |   0
 {arch/um/os-Linux => tools/um/uml}/umid.c     |   0
 {arch/um/os-Linux => tools/um/uml}/util.c     |  26 +
 tools/um/uml/x86/Build                        |  11 +
 .../os-Linux => tools/um/uml/x86}/mcontext.c  |   0
 .../um/os-Linux => tools/um/uml/x86}/prctl.c  |   0
 .../os-Linux => tools/um/uml/x86}/registers.c |   0
 .../os-Linux => tools/um/uml/x86}/task_size.c |   0
 .../um/os-Linux => tools/um/uml/x86}/tls.c    |   0
 134 files changed, 5743 insertions(+), 525 deletions(-)
 create mode 100644 Documentation/virt/uml/lkl.txt
 create mode 100644 arch/um/configs/lkl_defconfig
 rename arch/um/{os-Linux => }/drivers/ethertap_kern.c (100%)
 rename arch/um/{os-Linux => }/drivers/tuntap_kern.c (100%)
 delete mode 100644 arch/um/include/asm/common.lds.S
 create mode 100644 arch/um/include/asm/host_ops.h
 rename arch/um/{os-Linux/drivers => include/shared}/etap.h (100%)
 rename arch/um/{os-Linux/drivers => include/shared}/tuntap.h (100%)
 delete mode 100644 arch/um/kernel/dyn.lds.S
 delete mode 100644 arch/um/kernel/uml.lds.S
 rename arch/um/{os-Linux => kernel}/user_syms.c (100%)
 create mode 100644 arch/um/lkl/Makefile
 create mode 100644 arch/um/lkl/Makefile.um
 create mode 100644 arch/um/lkl/include/asm/Kbuild
 create mode 100644 arch/um/lkl/include/asm/archparam.h
 create mode 100644 arch/um/lkl/include/asm/atomic.h
 create mode 100644 arch/um/lkl/include/asm/atomic64.h
 create mode 100644 arch/um/lkl/include/asm/cpu.h
 create mode 100644 arch/um/lkl/include/asm/elf.h
 create mode 100644 arch/um/lkl/include/asm/mm_context.h
 create mode 100644 arch/um/lkl/include/asm/processor.h
 create mode 100644 arch/um/lkl/include/asm/ptrace.h
 create mode 100644 arch/um/lkl/include/asm/sched.h
 create mode 100644 arch/um/lkl/include/asm/segment.h
 create mode 100644 arch/um/lkl/include/asm/syscall_wrapper.h
 create mode 100644 arch/um/lkl/include/asm/syscalls.h
 create mode 100644 arch/um/lkl/include/uapi/asm/Kbuild
 create mode 100644 arch/um/lkl/include/uapi/asm/bitsperlong.h
 create mode 100644 arch/um/lkl/include/uapi/asm/byteorder.h
 create mode 100644 arch/um/lkl/include/uapi/asm/host_ops.h
 create mode 100644 arch/um/lkl/include/uapi/asm/sigcontext.h
 create mode 100644 arch/um/lkl/include/uapi/asm/syscalls.h
 create mode 100644 arch/um/lkl/include/uapi/asm/unistd.h
 create mode 100644 arch/um/lkl/um/Kconfig
 create mode 100644 arch/um/lkl/um/Makefile
 create mode 100644 arch/um/lkl/um/bootmem.c
 create mode 100644 arch/um/lkl/um/console.c
 create mode 100644 arch/um/lkl/um/cpu.c
 create mode 100644 arch/um/lkl/um/delay.c
 create mode 100644 arch/um/lkl/um/setup.c
 create mode 100644 arch/um/lkl/um/shared/sysdep/archsetjmp.h
 create mode 100644 arch/um/lkl/um/shared/sysdep/faultinfo.h
 create mode 100644 arch/um/lkl/um/shared/sysdep/kernel-offsets.h
 create mode 100644 arch/um/lkl/um/shared/sysdep/mcontext.h
 create mode 100644 arch/um/lkl/um/shared/sysdep/ptrace.h
 create mode 100644 arch/um/lkl/um/shared/sysdep/ptrace_user.h
 create mode 100644 arch/um/lkl/um/syscalls.c
 create mode 100644 arch/um/lkl/um/threads.c
 create mode 100644 arch/um/lkl/um/unimplemented.c
 create mode 100644 arch/um/lkl/um/user_constants.h
 delete mode 100644 arch/um/os-Linux/Makefile
 delete mode 100644 arch/um/os-Linux/drivers/Makefile
 create mode 100755 arch/um/scripts/headers_install.py
 delete mode 100644 arch/x86/um/os-Linux/Makefile
 create mode 100644 tools/testing/selftests/um/Makefile
 create mode 100644 tools/testing/selftests/um/boot.c
 create mode 100644 tools/testing/selftests/um/cla.c
 create mode 100644 tools/testing/selftests/um/cla.h
 create mode 100755 tools/testing/selftests/um/disk-ext4.sh
 create mode 100755 tools/testing/selftests/um/disk-vfat.sh
 create mode 100644 tools/testing/selftests/um/disk.c
 create mode 100755 tools/testing/selftests/um/disk.sh
 create mode 100644 tools/testing/selftests/um/test.c
 create mode 100644 tools/testing/selftests/um/test.h
 create mode 100644 tools/testing/selftests/um/test.sh
 create mode 100644 tools/um/.gitignore
 create mode 100644 tools/um/Makefile
 create mode 100644 tools/um/Targets
 create mode 100644 tools/um/include/lkl.h
 create mode 100644 tools/um/include/lkl_host.h
 create mode 100644 tools/um/lib/Build
 create mode 100644 tools/um/lib/fs.c
 create mode 100644 tools/um/lib/jmp_buf.c
 create mode 100644 tools/um/lib/posix-host.c
 create mode 100644 tools/um/lib/utils.c
 create mode 100644 tools/um/uml/Build
 create mode 100644 tools/um/uml/drivers/Build
 rename {arch/um/os-Linux => tools/um/uml}/drivers/ethertap_user.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/drivers/tuntap_user.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/elf_aux.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/execvp.c (98%)
 rename {arch/um/os-Linux => tools/um/uml}/file.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/helper.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/irq.c (100%)
 create mode 100644 tools/um/uml/lkl/Build
 create mode 100644 tools/um/uml/lkl/registers.c
 create mode 100644 tools/um/uml/lkl/unimplemented.c
 rename {arch/um/os-Linux => tools/um/uml}/main.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/mem.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/process.c (99%)
 rename {arch/um/os-Linux => tools/um/uml}/registers.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/sigio.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/signal.c (96%)
 rename arch/um/os-Linux/skas/Makefile => tools/um/uml/skas/Build (56%)
 rename {arch/um/os-Linux => tools/um/uml}/skas/mem.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/skas/process.c (99%)
 rename {arch/um/os-Linux => tools/um/uml}/start_up.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/time.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/tty.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/umid.c (100%)
 rename {arch/um/os-Linux => tools/um/uml}/util.c (89%)
 create mode 100644 tools/um/uml/x86/Build
 rename {arch/x86/um/os-Linux => tools/um/uml/x86}/mcontext.c (100%)
 rename {arch/x86/um/os-Linux => tools/um/uml/x86}/prctl.c (100%)
 rename {arch/x86/um/os-Linux => tools/um/uml/x86}/registers.c (100%)
 rename {arch/x86/um/os-Linux => tools/um/uml/x86}/task_size.c (100%)
 rename {arch/x86/um/os-Linux => tools/um/uml/x86}/tls.c (100%)

Comments

Johannes Berg March 14, 2021, 9:03 p.m. UTC | #1
Hi,

So I'm still a bit lost here with this, and what exactly you're doing in
places.

For example, you simulate a single CPU ("depends on !SMP", and anyway
UML only supports that right now), yet on the other hand do a *LOT* of
extra work with lkl_sem, lkl_thread, lkl_mutex, and all that. It's not
clear to me why? Are you trying to model kernel threads as actual
userspace pthreads, but then run only one at a time by way of exclusive
locking?

I think we probably need a bit more architecture introduction here in
the cover letter or the documentation patch. The doc patch basically
just explains what it does, but not how it does anything, or why it was
done in this way.

For example, I'm asking myself:
 * Why NOMMU? UML doesn't really do _much_ with memory protection unless
   you add userspace, which you don't have.
 * Why pthreads and all? You already require jump_buf, so UML's
   switch_threads() ought to be just fine for scheduling? It almost
   seems like you're doing this just so you can serialize against "other
   threads" (application threads), but wouldn't that trivially be
   handled by the application? You could let it hook into switch_to() or
   something, but why should a single "LKL" CPU ever require multiple
   threads? Seems to me that the userspace could be required to
   "lkl_run()" or so (vs. lkl_start()). Heck, you could even exit
   lkl_run() every time you switch tasks in the kernel, and leave
   scheduling the kernel vs. the application entirely up to the
   application? (A trivial application would be simply doing something
   like "while (1) { lkl_run(); pause(); }" mimicking the idle loop of
    UML.)

And - kind of the theme behind all these questions - why is this not
making UML actually be a binary that uses LKL? If the design were like
what I'm alluding to above, that should actually be possible? Why should
it not be possible? Why would it not be desirable? (I'm actually
thinking that might be really useful to some of the things I'm doing.)
Yes, if the application actually supports userspace running then it has
some limitations on what it can do (in particular wrt. signals etc.), but
that could be documented and would be OK?

johannes
Hajime Tazaki March 16, 2021, 1:17 a.m. UTC | #2
Hello,

First of all, thanks for all the comments on the patchset, which has
been a bit stale.  I'll reply to them.

On Mon, 15 Mar 2021 06:03:19 +0900,
Johannes Berg wrote:
> 
> Hi,
> 
> So I'm still a bit lost here with this, and what exactly you're doing in
> places.
> 
> For example, you simulate a single CPU ("depends on !SMP", and anyway
> UML only supports that right now), yet on the other hand do a *LOT* of
> extra work with lkl_sem, lkl_thread, lkl_mutex, and all that. It's not
> clear to me why? Are you trying to model kernel threads as actual
> userspace pthreads, but then run only one at a time by way of exclusive
> locking?
> 
> I think we probably need a bit more architecture introduction here in
> the cover letter or the documentation patch. The doc patch basically
> just explains what it does, but not how it does anything, or why it was
> done in this way.

We didn't write down the details, which are already described in the
LKL paper (*1).  But I think we can extract/summarize some of the
important information from the paper into the document so that the
design is more understandable.

*1 LKL's paper (pointer is also in the cover letter)
https://www.researchgate.net/profile/Nicolae_Tapus2/publication/224164682_LKL_The_Linux_kernel_library/links/02bfe50fd921ab4f7c000000.pdf

> For example, I'm asking myself:
>  * Why NOMMU? UML doesn't really do _much_ with memory protection unless
>    you add userspace, which you don't have.


My interpretation of MMU/NOMMU is like this:

With an (emulated) MMU architecture you get smoother integration with
other subsystems of the kernel tree, because some subsystems/features
are written with "#ifdef CONFIG_MMU".  NOMMU, on the other hand, brings
a simplified design with better portability.

LKL rather opts for the benefit of better portability.

>  * Why pthreads and all? You already require jump_buf, so UML's
>    switch_threads() ought to be just fine for scheduling? It almost
>    seems like you're doing this just so you can serialize against "other
>    threads" (application threads), but wouldn't that trivially be
>    handled by the application? You could let it hook into switch_to() or
>    something, but why should a single "LKL" CPU ever require multiple
>    threads? Seems to me that the userspace could be required to
>    "lkl_run()" or so (vs. lkl_start()). Heck, you could even exit
>    lkl_run() every time you switch tasks in the kernel, and leave
>    scheduling the kernel vs. the application entirely up to the
>    application? (A trivial application would be simply doing something
>    like "while (1) { lkl_run(); pause(); }" mimicking the idle loop of
>    UML.

There is a description about this design choice in the LKL paper (*1);

  "implementations based on setjmp - longjmp require usage of a single
  stack space partitioned between all threads. As the Linux kernel
  uses deep stacks (especially in the VFS layer), in an environment
  with small stack sizes (e.g. inside another operating system's
  kernel) this will place a very low limit on the number of possible
  threads."

(from page 2, Section II, 2) Thread Support)

This is one reason for using pthreads as the context primitive.

And instead of manually calling lkl_run() to schedule threads and
relying on the host scheduler, LKL associates each kernel thread with a
host-provided semaphore so that the Linux scheduler stays in control of
the host threads (the pthreads).

This is also described (and hasn't changed since then) in the paper *1
(from page 2, Section II, 3) Thread Switching).
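
Roughly, the idea is something like the sketch below (illustration only,
not the actual code in the patchset; the names here are made up):

    /*
     * Each kernel thread is backed by a host pthread plus a host
     * semaphore; the context switch wakes the next thread and puts the
     * current one to sleep, so only one pthread runs kernel code at a
     * time even though they are real host threads.
     */
    struct arch_thread {
            struct lkl_sem *sched_sem;      /* host-provided semaphore */
    };

    static void thread_switch(struct arch_thread *prev,
                              struct arch_thread *next)
    {
            lkl_sem_up(next->sched_sem);    /* let the next thread run */
            lkl_sem_down(prev->sched_sem);  /* sleep until scheduled again */
    }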

> And - kind of the theme behind all these questions - why is this not
> making UML actually be a binary that uses LKL? If the design were like
> what I'm alluding to above, that should actually be possible? Why should
> it not be possible? Why would it not be desirable? (I'm actually
> thinking that might be really useful to some of the things I'm doing.)
> Yes, if the application actually supports userspace running then it has
> som limitations on what it can do (in particular wrt. signals etc.), but
> that could be documented and would be OK?

Let me try to describe how I think about why we don't just generate
liblinux.so from the current UML.

Making UML build a library, which has been a long-wanted feature, can
be started as follows.


I think there are several functions which the library offers;

- applications can link the library and call functions in the library
- the library will be used as a replacement of libc.a for syscall operations

To design that with UML, what we need to do is the following;

1) change the Makefile to output liblinux.a
we faced a linker script issue, which is related to generating a
relocatable object in the middle.

2) make the linker script clean with a 2-stage build
this fixes the linker issues of (1)

3) expose syscalls as function calls
this causes name conflicts (link-time and compile-time conflicts)

4) header rename, object localization
this fixes issue (3)

This is the common set of modifications needed to turn UML into a library.

The other parts are design choices, I believe.
Because a library is, by its nature, more _reusable_ than an executable,
the choice of LKL is to be portable, which the current UML doesn't
pursue extensively (it focuses on Intel platforms).  Thus,

5) memory: NOMMU
6) schedule (of irq/thread): pthread-based rather than setjmp/longjmp


Implementing the alternate options for 5) and 6) (MMU, jmp_buf) would
diminish the strength of LKL, which we would like to avoid.  But as you
mentioned, nothing prevents us from implementing those alternate options
for 5) and 6), so we can share the common parts (1-4) if we start to
implement them.

I hope this makes it a bit clear, but let me know if you found
anything unclear.

-- Hajime
Johannes Berg March 16, 2021, 9:29 p.m. UTC | #3
Hi,

> First of all, thanks for all the comments to the patchset which has
> been a bit stale.  I'll reply them.

Yeah, sorry. I had it marked unread ("to look at") since you posted it.

> We didn't write down the details, which are already described in the
> LKL's paper (*1).  But I think we can extract/summarize some of
> important information from the paper to the document so that the
> design is more understandable.
> 
> *1 LKL's paper (pointer is also in the cover letter)
> https://www.researchgate.net/profile/Nicolae_Tapus2/publication/224164682_LKL_The_Linux_kernel_library/links/02bfe50fd921ab4f7c000000.pdf

OK, I guess I should take a look. Probably I never did, always thinking
that it was more of an overview than technical details and design
decisions.
> 
> My interpretation of MMU/NOMMU is like this;
> 
> With (emulated) MMU architecture you will have more smooth integration
> with other subsystems of kernel tree, because some subsystems/features
> are written with "#ifdef CONFIG_MMU".  While NOMMU doesn't, it will
> bring a simplified design with better portability.
> 
> LKL takes rather to benefit better portability.

I don't think it *matters* so much for portability? I mean, every system
under the sun is going to allow some kind of "mprotect", right? You
don't really want to port LKL to systems that don't have even that?

> >  * Why pthreads and all? You already require jump_buf, so UML's
> >    switch_threads() ought to be just fine for scheduling? It almost
> >    seems like you're doing this just so you can serialize against "other
> >    threads" (application threads), but wouldn't that trivially be
> >    handled by the application? You could let it hook into switch_to() or
> >    something, but why should a single "LKL" CPU ever require multiple
> >    threads? Seems to me that the userspace could be required to
> >    "lkl_run()" or so (vs. lkl_start()). Heck, you could even exit
> >    lkl_run() every time you switch tasks in the kernel, and leave
> >    scheduling the kernel vs. the application entirely up to the
> >    application? (A trivial application would be simply doing something
> >    like "while (1) { lkl_run(); pause(); }" mimicking the idle loop of
> >    UML.
> 
> There is a description about this design choice in the LKL paper (*1);
> 
>   "implementations based on setjmp - longjmp require usage of a single
>   stack space partitioned between all threads. As the Linux kernel
>   uses deep stacks (especially in the VFS layer), in an environment
>   with small stack sizes (e.g. inside another operating system's
>   kernel) this will place a very low limit on the number of possible
>   threads."
> 
> (from page 2, Section II, 2) Thread Support)
> 
> This is a reason of using pthread as a context primitive.

That implication (setjmp doesn't do stacks, so must use pthread) really
isn't true, you also have posix contexts or windows fibers. That would
probably be much easier to understand, since real threads imply that
you have actual concurrency, which _shouldn't_ be true in the case of
Linux emulated as being on a single CPU.

Perhaps that just means you chose the wrong abstraction.

In usfstl (something I've been working on) for example, we have an
abstraction called (execution) "contexts", and they can be implemented
using pthreads, fibers, or posix contexts, and you switch between them.

(see https://github.com/linux-test-project/usfstl/blob/main/src/ctx-common.c)

Using real pthreads implies that you have real threading, but then you
need access to real mutexes, etc.

If your abstraction was instead "switch context" then you could still
implement it using pthreads+mutexes, or you could implement it using
fibers on windows, or posix contexts - but you'd have a significantly
reduced API surface, since you'd only expose __switch_to() or similar,
and maybe a new stack allocation etc.
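
Just to illustrate the kind of reduced surface I mean - entirely
hypothetical names, a sketch rather than a proposal:

    /* opaque host execution context */
    struct lkl_ctx;

    /* create a context that will run entry(arg) on the given stack */
    struct lkl_ctx *lkl_ctx_alloc(void (*entry)(void *), void *arg,
                                  void *stack, unsigned long stack_size);
    void lkl_ctx_free(struct lkl_ctx *ctx);

    /* suspend 'from' and resume 'to'; implemented with pthreads, fibers
     * or ucontext underneath - the kernel side doesn't need to know */
    void lkl_ctx_switch(struct lkl_ctx *from, struct lkl_ctx *to);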

Additionally, I do wonder how UML does this now, it *does* use setjmp,
so are you saying it doesn't properly use the kernel stacks?

> And instead of manually doing lkl_run() to schedule threads and
> relying on host scheduler, LKL associates each kernel thread with a
> host-provided semaphore so that Linux scheduler has a control of host
> scheduler (prepared by pthread).

Right.

That's in line with what I did in my test framework in
https://github.com/linux-test-project/usfstl/blob/main/src/ctx-pthread.c

but like I said above, I think it's the wrong abstraction. Your
abstraction should be "switch context" (or "switch thread"), not dealing
with pthread, mutexes, etc.


> > And - kind of the theme behind all these questions - why is this not
> > making UML actually be a binary that uses LKL? If the design were like
> > what I'm alluding to above, that should actually be possible? Why should
> > it not be possible? Why would it not be desirable? (I'm actually
> > thinking that might be really useful to some of the things I'm doing.)
> > Yes, if the application actually supports userspace running then it has
> > som limitations on what it can do (in particular wrt. signals etc.), but
> > that could be documented and would be OK?
> 
> Let me try to describe how I think why not just generate liblinux.so
> from current UML.
> 
> Making UML to build a library, which has been a long wanted features,
> can be started;
> 
> 
> I think there are several functions which the library offers;
> 
> - applications can link the library and call functions in the library

Right.

> - the library will be used as a replacement of libc.a for syscall operations

Not sure I see this, is that really useful? I mean, most applications
don't live "standalone" in their own world? Dunno. Maybe it's useful.


> to design that with UML, what we need to do are;
> 
> 1) change Makefile to output liblinux.a

or liblinux.so, I guess, dynamic linking should be ok.

> we faced linker script issue, which is related with generating
> relocatable object in the middle.
> 
> 2) make the linker-script clean with 2-stage build
> we fix the linker issues of (1)
> 
> 3) expose syscall as a function call
> conflicts names (link-time and compile-time conflicts)
> 
> 4) header rename, object localization
> to fix the issue (3)
> 
> This is a common set of modifications to a library of UML.

All of this is just _build_ issues. It doesn't mean you couldn't take
some minimal code + liblinux.a and link it to get a "linux" equivalent
to the current UML?

TBH, I started thinking that it might be _really_ nice to be able to
write an application that's *not quite UML* but has all the properties
of UML built into it, i.e. can run userspace etc.

> Other parts are a choice of design, I believe.
> Because a library is more _reusable_ than an executable (by it means), the
> choice of LKL is to be portable, which the current UML doesn't pursue it
> extensibly (focus on intel platforms).
> 

I don't think this really conflicts.

You could have a liblinux.a/liblinux.so and some code that links it all
together to get "linux" (UML). Having userspace running inside the UML
(liblinux) might only be supported on x86 for now, MMU vs. NOMMU might
be something that's configurable at build time, and if you pick NOMMU
you cannot run userspace either, etc.

But conceptually, why wouldn't it be possible to have a liblinux.so that
*does* build with MMU and userspace support, and UML is a wrapper around
it?

> I hope this makes it a bit clear, but let me know if you found
> anything unclear.

See above, I guess :)

Thanks for all the discussion!

johannes
Octavian Purdila March 17, 2021, 2:03 p.m. UTC | #4
On Tue, Mar 16, 2021 at 11:29 PM Johannes Berg
<johannes@sipsolutions.net> wrote:
>
> Hi,

Hi Johannes,

> > My interpretation of MMU/NOMMU is like this;
> >
> > With (emulated) MMU architecture you will have more smooth integration
> > with other subsystems of kernel tree, because some subsystems/features
> > are written with "#ifdef CONFIG_MMU".  While NOMMU doesn't, it will
> > bring a simplified design with better portability.
> >
> > LKL takes rather to benefit better portability.
>
> I don't think it *matters* so much for portability? I mean, every system
> under the sun is going to allow some kind of "mprotect", right? You
> don't really want to port LKL to systems that don't have even that?
>

One use case where this matters is non-OS environments such as
bootloaders [1] running on bare-metal hardware, or kernel drivers [2,
3]. IMO it would be nice to keep these properties.

[1] https://www.freelists.org/post/linux-kernel-library/UEFI-LKL-port
[2] https://github.com/lkl/lkl-win-fsd
[3] https://www.haiku-os.org/tags/lkl-haiku-fsd/

> > >  * Why pthreads and all? You already require jump_buf, so UML's
> > >    switch_threads() ought to be just fine for scheduling? It almost
> > >    seems like you're doing this just so you can serialize against "other
> > >    threads" (application threads), but wouldn't that trivially be
> > >    handled by the application? You could let it hook into switch_to() or
> > >    something, but why should a single "LKL" CPU ever require multiple
> > >    threads? Seems to me that the userspace could be required to
> > >    "lkl_run()" or so (vs. lkl_start()). Heck, you could even exit
> > >    lkl_run() every time you switch tasks in the kernel, and leave
> > >    scheduling the kernel vs. the application entirely up to the
> > >    application? (A trivial application would be simply doing something
> > >    like "while (1) { lkl_run(); pause(); }" mimicking the idle loop of
> > >    UML.
> >
> > There is a description about this design choice in the LKL paper (*1);
> >
> >   "implementations based on setjmp - longjmp require usage of a single
> >   stack space partitioned between all threads. As the Linux kernel
> >   uses deep stacks (especially in the VFS layer), in an environment
> >   with small stack sizes (e.g. inside another operating system's
> >   kernel) this will place a very low limit on the number of possible
> >   threads."
> >
> > (from page 2, Section II, 2) Thread Support)
> >
> > This is a reason of using pthread as a context primitive.
>
> That impliciation (setjmp doesnt do stacks, so must use pthread) really
> isn't true, you also have posix contexts or windows fibers. That would
> probably be much easier to understands, since real threads imply that
> you have actual concurrency, which _shouldn't_ be true in the case of
> Linux emulated as being on a single CPU.
>
> Perhaps that just means you chose the wrong abstraction.
>
> In usfstl (something I've been working on) for example, we have an
> abstraction called (execution) "contexts", and they can be implemented
> using pthreads, fibers, or posix contexts, and you switch between them.
>
> (see https://github.com/linux-test-project/usfstl/blob/main/src/ctx-common.c)
>
> Using real pthreads implies that you have real threading, but then you
> need access to real mutexes, etc.
>
> If your abstraction was instead "switch context" then you could still
> implement it using pthreads+mutexes, or you could implement it using
> fibers on windows, or posix contexts - but you'd have a significantly
> reduced API surface, since you'd only expose __switch_to() or similar,
> and maybe a new stack allocation etc.
>

You are right. When I started the implementation for ucontext it was
obvious that it would be much simpler to have abstractions closer to
what Linux has (alloc, free and switch threads). But I never got to
finish that and then things went into a different direction.

> Additionally, I do wonder how UML does this now, it *does* use setjmp,
> so are you saying it doesn't properly use the kernel stacks?
>

To clarify a bit the statement in the paper, the context there was
that we should push the thread implementation to the
application/environment we run rather than providing "LKL" threads.
This was particularly important for running LKL in other OSes kernel
drivers. But you are right, we can use the switch abstraction and
implement it with threads and mutexes for those environments where it
helps.

> > to design that with UML, what we need to do are;
> >
> > 1) change Makefile to output liblinux.a
>
> or liblinux.so, I guess, dynamic linking should be ok.
>
> > we faced linker script issue, which is related with generating
> > relocatable object in the middle.
> >
> > 2) make the linker-script clean with 2-stage build
> > we fix the linker issues of (1)
> >
> > 3) expose syscall as a function call
> > conflicts names (link-time and compile-time conflicts)
> >
> > 4) header rename, object localization
> > to fix the issue (3)
> >
> > This is a common set of modifications to a library of UML.
>
> All of this is just _build_ issues. It doesn't mean you couldn't take
> some minimal code + liblinux.a and link it to get a "linux" equivalent
> to the current UML?
>
> TBH, I started thinking that it might be _really_ nice to be able to
> write an application that's *not quite UML* but has all the properties
> of UML built into it, i.e. can run userspace etc.
>
> > Other parts are a choice of design, I believe.
> > Because a library is more _reusable_ than an executable (by it means), the
> > choice of LKL is to be portable, which the current UML doesn't pursue it
> > extensibly (focus on intel platforms).
> >
>
> I don't think this really conflicts.
>
> You could have a liblinux.a/liblinux.so and some code that links it all
> together to get "linux" (UML). Having userspace running inside the UML
> (liblinux) might only be supported on x86 for now, MMU vs. NOMMU might
> be something that's configurable at build time, and if you pick NOMMU
> you cannot run userspace either, etc.
>
> But conceptually, why wouldn't it be possible to have a liblinux.so that
> *does* build with MMU and userspace support, and UML is a wrapper around
> it?
>

This is an interesting idea. Conceptually I think it is possible.
There are lots of details to be figured out before we do this. I think
that having a NOMMU version could be a good step in the right
direction, especially since I think a liblinux.so has more NOMMU
usecases than MMU usecases - but I haven't given too much thought to
the MMU usecases.
Johannes Berg March 17, 2021, 2:24 p.m. UTC | #5
Hi,

> One use case where this matters are non OS environments such as
> bootloaders [1], running on bare-bone hardware or kernel drivers [2,
> 3]. IMO it would be nice to keep these properties.

OK, that makes sense. Still, it seems it could be a compile-time
decision, and doesn't necessarily mean LKL has to be NOMMU, just that it
could support both?

I'm really trying to see if we can't get UML to be a user of LKL. IMHO
that would be good for the code, and even be good for LKL since then
it's maintained as part of UML as well, not "just" as its own use case.

> > If your abstraction was instead "switch context" then you could still
> > implement it using pthreads+mutexes, or you could implement it using
> > fibers on windows, or posix contexts - but you'd have a significantly
> > reduced API surface, since you'd only expose __switch_to() or similar,
> > and maybe a new stack allocation etc.
> 
> You are right. When I started the implementation for ucontext it was
> obvious that it would be much simpler to have abstractions closer to
> what Linux has (alloc, free and switch threads). But I never got to
> finish that and then things went into a different direction.

OK, sounds like you came to the same conclusion, more or less.

> > Additionally, I do wonder how UML does this now, it *does* use setjmp,
> > so are you saying it doesn't properly use the kernel stacks?
> > 
> 
> To clarify a bit the statement in the paper, the context there was
> that we should push the thread implementation to the
> application/environment we run rather than providing "LKL" threads.
> This was particularly important for running LKL in other OSes kernel
> drivers. But you are right, we can use the switch abstraction and
> implement it with threads and mutexes for those environments where it
> helps.

Right - like I pointed to USFSTL framework, you could have posix
ucontext, fiber and pthread at least, and obviously other things in
other environments (ThreadX anyone? ;-) )

> > But conceptually, why wouldn't it be possible to have a liblinux.so that
> > *does* build with MMU and userspace support, and UML is a wrapper around
> > it?
> > 
> 
> This is an interesting idea. Conceptually I think it is possible.
> There are lots of details to be figured out before we do this. I think
> that having a NOMMU version could be a good step in the right
> direction, especially since I think a liblinux.so has more NOMMU
> usecases than MMU usecases - but I haven't given too much thought to
> the MMU usecases.

Yeah, maybe UML would be the primary use case. I have been thinking that
there would be cases where you could combine kunit and having userspace
though, or unit-style testing but not with kunit which is "inside" the
kernel, but instead having the test code more "outside" the test kernel.
That's all kind of handwaving though and not really that crystallized in
my mind.

That said, I'm not entirely sure NOMMU would be the right path towards
this - if we do want to go this route it'll probably need changes in
both LKL and UML to converge to this point, and at least build it into
the abstractions.

For example the "idle" abstraction discussed elsewhere (is it part of
the app or part of the kernel?), or the thread discussion above (it is
part of the app but how is it implemented?) etc.

johannes
Hajime Tazaki March 18, 2021, 2:17 p.m. UTC | #6
Hello,

On Wed, 17 Mar 2021 23:24:14 +0900,
Johannes Berg wrote:
> 
> Hi,
> 
> > One use case where this matters are non OS environments such as
> > bootloaders [1], running on bare-bone hardware or kernel drivers [2,
> > 3]. IMO it would be nice to keep these properties.
> 
> OK, that makes sense. Still, it seems it could be a compile-time
> decision, and doesn't necessarily mean LKL has to be NOMMU, just that it
> could support both?
> 
> I'm really trying to see if we can't get UML to be a user of LKL. IMHO
> that would be good for the code, and even be good for LKL since then
> it's maintained as part of UML as well, not "just" as its own use case.
>
> > > If your abstraction was instead "switch context" then you could still
> > > implement it using pthreads+mutexes, or you could implement it using
> > > fibers on windows, or posix contexts - but you'd have a significantly
> > > reduced API surface, since you'd only expose __switch_to() or similar,
> > > and maybe a new stack allocation etc.
> > 
> > You are right. When I started the implementation for ucontext it was
> > obvious that it would be much simpler to have abstractions closer to
> > what Linux has (alloc, free and switch threads). But I never got to
> > finish that and then things went into a different direction.
> 
> OK, sounds like you came to the same conclusion, more or less.
>
> > > Additionally, I do wonder how UML does this now, it *does* use setjmp,
> > > so are you saying it doesn't properly use the kernel stacks?
> > > 
> > 
> > To clarify a bit the statement in the paper, the context there was
> > that we should push the thread implementation to the
> > application/environment we run rather than providing "LKL" threads.
> > This was particularly important for running LKL in other OSes kernel
> > drivers. But you are right, we can use the switch abstraction and
> > implement it with threads and mutexes for those environments where it
> > helps.
> 
> Right - like I pointed to USFSTL framework, you could have posix
> ucontext, fiber and pthread at least, and obviously other things in
> other environments (ThreadX anyone? ;-) )

I also have an idea for a ThreadX port in the future, which would also
implement the actual context on the application/environment/host side
(not on the kernel side, as the others do).  Though this environment may
not provide mprotect-like features, there is still value in letting the
application run Linux code (e.g., the network stack).

# This story is about our old work of network simulation.
  https://lwn.net/Articles/639333/

> > > But conceptually, why wouldn't it be possible to have a liblinux.so that
> > > *does* build with MMU and userspace support, and UML is a wrapper around
> > > it?
> > > 
> > 
> > This is an interesting idea. Conceptually I think it is possible.
> > There are lots of details to be figured out before we do this. I think
> > that having a NOMMU version could be a good step in the right
> > direction, especially since I think a liblinux.so has more NOMMU
> > usecases than MMU usecases - but I haven't given too much thought to
> > the MMU usecases.
> 
> Yeah, maybe UML would be the primary use case. I have been thinking that
> there would be cases where you could combine kunit and having userspace
> though, or unit-style testing but not with kunit which is "inside" the
> kernel, but instead having the test code more "outside" the test kernel.
> That's all kind of handwaving though and not really that crystallized in
> my mind.
> 
> That said, I'm not entirely sure NOMMU would be the right path towards
> this - if we do want to go this route it'll probably need changes in
> both LKL and UML to converge to this point, and at least build it into
> the abstractions.
> 
> For example the "idle" abstraction discussed elsewhere (is it part of
> the app or part of the kernel?), or the thread discussion above (it is
> part of the app but how is it implemented?) etc.

I agree that LKL (or the library mode) can conceptually offer both
NOMMU/MMU capabilities.

I also think that a NOMMU library could be the first step and a minimum
product, as an MMU implementation may involve a lot of refactoring which
needs more consideration of the current codebase.

We tried an MMU-mode library, by sharing the build system
(Kconfig/Makefile) and runtime facilities (thread/irq/memory).  But we
could only share the irq handling for this first step.

When we implement the MMU-mode library in the future, we may come up
with another abstraction/refactoring of the UML design, which could be
a good outcome.  But I think that is beyond the minimum, given the
(already) big changes in the current patchset.

-- Hajime
Johannes Berg March 18, 2021, 4:28 p.m. UTC | #7
Hi,

> I also have an idea for a ThreadX in future, which also implements
> actual context in the application/environment/host side (not in kernel
> side, as others do).  Though this environment may not provide
> mprotect-like features, there is still a value that the application
> can run Linux code (e.g., network stack) for instance.

Heh. Right.

> I agree that LKL (or the library mode) can conceptually offer both
> NOMMU/MMU capabilities.
> 
> I also think that NOMMU library could be the first step and a minimum
> product as MMU implementation may involve a lot of refactoring which
> may need more consideration to the current codebase.
> 
> We tried with MMU mode library, by sharing build system
> (Kconfig/Makefile) and runtime facilities (thread/irq/memory).  But,
> we could only do share irq handling for this first step.
> 
> When we implement the MMU mode library in future, we may come up with
> another abstraction/refactoring into the UML design, which could be a
> good outcome.  But I think it is beyond the minimum given (already)
> big changes with the current patchset.

Well, arguably that depends on how you look at it.

Understandably, you're looking at this from the POV of getting an "MVP"
(minimum viable product) into mainline as soon as possible.

I can understand why you would do that, and this patchset achieves it:
you get an LKL in mainline that's useful, even if it doesn't achieve the
best possible architecture and code sharing.

But look at it from the opposite side, from mainline's view (at least in
my opinion, others may disagree): getting an LKL (whether as an MVP or
not) isn't really that important! Getting the architecture and code
sharing right are likely the *primary* goals for mainline this
integration.

So from my POV it's *more important* to get the shared facilities,
proper abstraction and refactoring right, likely to the point where UML
is actually "small binary using the library" (in some fashion). Even if
that initially means there actually *won't* be NOMMU mode and a library
that's useful for the LKL use cases.

Yes, that's the longer road into mainline, but it also means that each
step along the way is actually useful to mainline.  I'm assuming here
that the necessary code refactoring, abstraction, etc. will by itself
provide some value to UML, but given the messy state it's in, I think
that's almost certainly going to be true.

So in a sense "getting LKL into UML" is at odds with "getting LKL working
quickly". However, doing it this way may ultimately get it into mainline
faster because it's a much easier incremental route. Say you want to get
all this thread stuff out of the way that we discussed - then if you
need to keep UML working but *using* the abstraction you're adding (in
order to work towards the goal of it using the library) then it becomes
fairly obvious that you cannot use the abstraction that you have with
pthreads, mutexes, and semaphores exposed via APIs, but need to build
the API on "thread switching" primitives instead. I would expect similar
things to be true for other places.


Now, are you/we up for that? I don't know. On the one hand, I know
you're persistent and interested in this, but on the other hand it's
somewhat at odds with your goals. I believe for mainline it'd be better
because the code is no worse off each step along the way.

Taking the thread example again, if we have a thread switching
abstraction and an implementation in UML, worst case (e.g. if you lose
interest) is that it's a somewhat pointless abstraction there, but it
doesn't really make the code significantly worse or more complex.

OTOH, having what we have now with pthreads/mutexes/semaphores *does*
make the code significantly more complex and harder to maintain (IMHO)
because it adds all kinds of special cases, and they're somewhat more
difficult to exercise (yes, there are examples, still).


In any case, I don't think I'm the one making the decisions here, so
take this with a grain of salt.

johannes