Conversation

Making a novel OS design is hard (Redox is trying) but if you are aiming to match the design of an established system and maximize compatibility then it's much, much easier.

Lucy [hiatus era] 𒌋𒁯

@drewdevault they're rust developers. their greatest motivation is telling people to use rust, they don't build anything really.

@drewdevault I'm inclined to believe you, but isn't there a case to be made that trying to stay in-tree is the "noble" course of action, because it avoids splintering development effort and forcing users into a tough choice about when to adopt the new solution?

@convexer it depends on the roadmap and target audience. Rust-in-Linux is an attractive target because everyone already uses Linux and they can get Rust into everything by making their play there. But it's an uphill battle and I think it's overall a bad idea; I blogged about this before and my opinions have been refined a bit since (more thought + watching how it's developed).

Making a new OS which is Linux-compatible would be much, much easier and faster.

I think there needs to be some time to heal but the Rust-in-Linux folks should really rally around a new Linux-compatible kernel project from scratch. It'd be wild. It would probably actually work, whereas Rust-in-Linux is still a huge gamble and a burnout machine.

Rust would be a really good fit for a big Linux-style monolithic kernel and they could realize a lot of gains very, very quickly IMO. Not a great fit when retrofitted onto the Linux establishment.

@drewdevault I wonder how long it would take for a rust kernel to compile when it's the same size as the linux kernel.

@drewdevault
Who would use it? What would the selling point be for users? Being written in Rust is, by itself, not one; nobody cares what language is used. And while the number of bugs would drop, the kernel is already fairly stable for its users.

They'd need to have some other, positive reason to go through the pain and uncertainty of switching.

@jannem this is where thinking about your target audience is important. It would make a lot more sense to appeal to, say, a datacenter use-case first than a desktop user. Or a mobile use-case, where the vendor can ensure that all of the drivers they need for their device are present. General-purpose comes later.

@drewdevault Linux-compatible as in exposing the same ABI to userspace? Because then wouldn't you still forfeit all the C drivers and with that the broad hardware compatibility? (without getting into distributions, etc.)

Anyone coming in to @ me with bad-faith insults targeting Rust developers is cruisin' for a blockin'

@drewdevault although if you RIIR Linux you could call it Rinux 🤔

@drewdevault would you have to RIIR glibc as well to keep compatible? that would be awesome

or has someone already RIIR'd glibc

@aeva RiiR glibc is a separate concern from implementing the Linux ABI
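To make that distinction concrete: the Linux ABI is the raw syscall surface, which a program can hit with no libc at all. A minimal sketch of what "implementing the Linux ABI" has to honor, assuming Linux on x86_64 (where write(2) is syscall number 1; the numbers differ per architecture):

```python
import ctypes
import os

# Invoke write(2) by raw syscall number via libc's syscall() trampoline,
# bypassing the normal libc wrapper -- this syscall surface is what a
# Linux-compatible kernel must provide. SYS_write = 1 on x86_64 only.
libc = ctypes.CDLL(None, use_errno=True)
SYS_write = 1

r, w = os.pipe()
msg = b"hello, kernel ABI\n"
written = libc.syscall(SYS_write, w, msg, len(msg))
os.close(w)

data = os.read(r, 64)  # read back the bytes the kernel delivered
os.close(r)
```

A kernel that gets this syscall surface right can run existing glibc or musl binaries unmodified, which is why rewriting glibc itself is an orthogonal question.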

@drewdevault Good point. But those use cases also need a positive reason to pick it.

I work in HPC - a datacenter field - and just the thought of having to deal with kernel level compatibility problems for user code is enough to dismiss it out of hand.

For mobile, perhaps a more permissive license might do it. But then you end up with the drawbacks of permissive licenses too...

@drewdevault I feel the driver situation could still kill such an effort, unless you can get a lot of important players on board to develop the Rust drivers too.

I wonder what would happen if Rust-in-Linux instead did a harder fork: stop playing LKML politics with their tree, keep pulling drivers from mainline, and rewrite various subsystems in Rust

@Sobex I think this is a fool's errand to be honest. The kernel has too much churn for this to be realistic.

@drewdevault Is there a platform where the driver work is manageable enough for a from-scratch Linux replacement? (Distros could then pick between the Rinux and Linux kernels on that platform, and the work could be extended to other modern platforms one at a time.)

Or are there a lot of drivers that are used on most platforms and that would need to be RIIR'd before such a kernel could replace Linux?

I'd like everyone to shut the hell up about driver compatibility. Seriously. It was an incredibly fucking annoying argument when Linux didn't have a competitive driver suite, too, and somehow we still managed to get to the point where Linux does have great driver support.

Fuck's sake. You write the drivers you need and then use the hardware that's supported, and if you want to support new hardware you write new drivers, you do not need to implement every driver overnight.

@feld not really. I reiterate my request for everyone to shut up about drivers already

@drewdevault @lucy @duncan_bayne In fact for a kernel I'm pretty sure AGPL "13. Remote Network Interaction" would mean everyone who made modifications would be violating it the moment there's network drivers, or would be rendered nil and so would just be GPLv3, albeit a confusing one.

Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version […]

@drewdevault especially for something niche, starting with basic platform drivers, NVMe (or an alternative like AHCI), and basic framebuffer (UEFI-based?) support should be plenty for a lot of interesting projects, and provided it's architected correctly, a pretty good base for further expansion

@deetwenty exactly. 100%

If you get a decent filesystem and a few network drivers (even virtio) and good POSIX support you're in the datacenter too

@lanodan @duncan_bayne @drewdevault > who made modifications would be violating it the moment there's network drivers
i think that's actually good and should be as intended

Case-study: Tilck

https://github.com/vvaltchev/tilck

Written mostly by one person over the course of about three years in their spare time, and it's ABI-compatible enough with Linux to run busybox, vim, tcc, lua, micropython, and fbdoom -- from unmodified Linux binaries, not source ports

Another one: Managarm

https://github.com/managarm/managarm

Does not aim for ABI compatibility with Linux, but has implemented a lot of Linux-specific APIs to great success, including DRM (Direct Rendering Manager), epoll, signalfd, etc., and is capable of running software like Sway

Small group of 4-5 principal contributors working slowly but surely since 2016
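For a sense of what those Linux-specific APIs look like in practice, here's a minimal epoll register/wait loop (Python's stdlib binding to the Linux API, so this sketch only runs on Linux or a kernel that reimplements epoll):

```python
import os
import select

# Create an epoll instance, register the read end of a pipe, and wait
# for it to become readable -- the core pattern behind most Linux event
# loops that software like Sway ultimately depends on.
r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)

os.write(w, b"ping")
events = ep.poll(timeout=1.0)        # list of (fd, event_mask) pairs
ready = [fd for fd, mask in events]

ep.close()
os.close(r)
os.close(w)
```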

"I'd like everyone to shut the hell up about the elephant in the room."

according to tokei, for linux 6.7-rc2 (what I have on hand), the entire repo is 28,033,438 lines of code (that's right, 28 million)
the drivers/ directory alone is 17,768,045 lines (17.8 million)
that's about 63.4%, or almost TWO THIRDS of the entire repo

the combination of kernel/ and mm/ doesn't even hit half a million lines
the network stack is over 900K, and the sound subsystem and fs/ are each around 1.2M
arch/ kisses 3M
drivers/gpu/drm/amd/ alone is 4.6M (over a quarter of the entire driver tree!!!)
the actual kernel is the easy part, and it is absolutely dwarfed by everything that makes it useful
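Those ratios can be sanity-checked from the figures themselves (line counts as reported in this post; exact tokei output varies by version and flags):

```python
# tokei line counts for linux 6.7-rc2, as quoted above
total_lines = 28_033_438     # entire repo
driver_lines = 17_768_045    # drivers/ alone
amd_drm_lines = 4_600_000    # drivers/gpu/drm/amd/, approximate

driver_share = driver_lines / total_lines   # ~0.63: almost two thirds
amd_share = amd_drm_lines / driver_lines    # ~0.26: over a quarter of drivers/
```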

take a look at the linked examples and check out the hardware they claim to support: pretty much all of it is hardware emulated by qemu; these are still very much toy OSes with no chance of running on real hardware

it took linux over 30 years to get where it is currently
granted, we've learned a lot in those 30 years, so a new kernel wouldn't take as long to catch up, but it would still take a considerable amount of time, and every minute spent catching up is a minute linux keeps moving forward
a huge portion of those drivers are not really useful anymore, but there is also substantially more hardware that needs to be supported now than there was 30 years ago when the PC was the only relevant platform for a project like linux

the point of rust in linux is to enable incremental improvement, rather than starting over from scratch
a new kernel would have to attract the contributors needed to reach escape velocity while it remains irrelevant for many years
that's a tough sell
@lucy @duncan_bayne @drewdevault Protestant/Orthodox/… TempleOS license?
@drewdevault Also Fiwix: https://www.fiwix.org/

Discovered it via the live-bootstrap project, as it can be built with a standard C compiler (most notably TCC) and is effectively Linux-compatible enough to build and run all the software you'd need to bootstrap your average Linux distro.
Also, it's Linux 2.x ABI compatible.
@novenary also, the driver support has been largely driven by growing commercial interest in linux over the years; getting that much corporate interest in a new kernel is going to be an uphill battle
@mikoto @novenary if we also want to keep the kernel free software, yeah, that would be tough.

Imagine what will happen when vendors just take a permissively licensed kernel, modify it to run on their hardware, and never release the source.
@novenary Well, the Linux DRM part also just gets imported by the BSDs, so that quarter of the code doesn't have to be rewritten (unless you'd want pure Rust, but IIRC not even Redox does that). As for network drivers, pretty much everyone (including Linux) took at least some of them from the BSDs.
@lanodan yeah definitely, but then you're back to cross-kernel collaboration and people will still want to write their drivers in rust so cirno_shrug
@novenary Which they then quite freely could? Whereas now they effectively need to negotiate with subsystem maintainers to get them to agree to review Rust code.
That said, part of the problem will be API coordination, but that's a lot less political than introducing a new language to maintainers.
@lanodan the API problem is pretty big though isn't it? like that's the whole issue with out of tree modules in general
@novenary That one I don't know; you'd really need to ask a BSD dev. But I wouldn't be surprised if the churn in the APIs between subsystems as a whole is very low (after all, there are multiple groups of maintainers to coordinate, and it's technically cross-repo), while within a subsystem there are a lot more modifications, and that's the problem you get with third-party modules.

That said, the problem I've had with third-party modules is them just not being maintained at all. OpenZFS, for example, hasn't been a problem for me, at least when running the latest longterm and occasionally stable.

@martijnbraam @drewdevault Even if it's 10x or 100x, it's really irrelevant. The reason to use Rust is memory safety at runtime, not compilation time

@drewdevault @martijnbraam If you had a C linux kernel with demonstrable bugs vs. a different kernel not suffering from those bugs, while running equally efficiently, I personally would not care one bit having to spend one or two orders of magnitude more time compiling this more stable kernel

@raulinbonn @drewdevault @martijnbraam Obviously in an 'all else equal' scenario you would choose the kernel that has a lower chance of runtime issues, but the thing is that one or two orders of magnitude (in terms of compile time) may affect development/iteration time to such a degree that a Linux-sized kernel written in Rust would move forward very, very slowly.

I don't know how bad/good the situation is in Rust nowadays in terms of compile times, though, just speaking hypothetically here.

@m I'm enthusiastic about Redox but it's a huge gamble and it will require a lot of highly specialized work and frequent revisions of fundamental design assumptions before it starts getting anywhere useful -- and once it gets there, it's not necessarily certain that anyone will use it for anything. It's a novel OS research project. Whereas using the Linux kernel design from the outset gives you a fixed scope and saves you all of the research work and we know it works when it's done.
