PaulOS is a low-latency, single-threaded embedded operating system for 16-, 32-, and 64-bit microprocessors. It is written to allow applications to be developed under GNU/Linux or FreeBSD and then recompiled for the target platform. It features POSIX file descriptors, a TCP/IP stack (LwIP) with a BSD socket API, an ANSI C library, and a DNS resolver library. A number of GNU/Linux network applications have already been ported.
David – that made no sense.
Re: the article – 6 months to develop… wow! You would have to be a serious little code monkey to get that much done in 6 months. Nice in-depth site for the build-your-own-embedded-OS types.
Sorry to be off topic, but being patriotic, especially now that I am studying in the US, it was pretty cool reading about this OS and how rapidly it was put together.
That’s the good thing about this site: the more obscure projects get mentioned. Otherwise I would never have heard about it.
I don’t get a word of it…
It’s too hardcore for me.
Same here, but hey it’s cool anyway
I wonder how much those embedded board things cost?
I want one of those. A little embedded device that plugs into a network and an external modem and serves the Internet to all your computers.
Right now, I use server software to accomplish the same thing. This looks like it would be much more stable, not to mention less vulnerable to attack from hackers.
Where do I get one?
I too like the fact that “simple” operations like web serving, routing, and data manipulation are being handed off to smaller and smaller machines or devices.
However.
It’s pretty easy to hijack the server once it fits in your shirt pocket. Are we moving towards more social engineering, when what used to run on your desk now fits next to the secretary’s phone at the office?
“Say dear… mind if I steal your web server while you get me a coffee? I’m waiting to see your boss…”
:)
Dunno… Smaller is better for -some- things. What about adding more capabilities to the system? How hard can you crank on a server this small before it breaks? Storage, network access across multiple subnets, database access, and so on…
It’s a good thing to compare expressions against NULL explicitly. It makes the code easier to read for someone less experienced, and the compiler will probably optimize it away anyway.
Thanks for making this open source; I will be forwarding it to friends who make robots for a hobby. Ideal!
Yeah, I agree, the NULL stuff on his page has more to do with style than anything else. Personally, I think it makes a lot more sense to use NULL for everything that’s not an int, long, double or short.
And I do sometimes do stuff like
if(foo() != NULL)
rather than
if(foo())
Sure, the shorter form takes up less space, but if there are a lot of other comparisons around, the explicit one can be easier to read. And no sane compiler would treat those statements differently.
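For what it’s worth, here is a tiny self-contained version of that (foo() is just a stand-in for any function that can return a null pointer, nothing from PaulOS); any conforming compiler generates the same code for both checks:

#include <cstdio>

// Stand-in for a lookup that can fail: returns a pointer, or a null pointer.
static const char *foo(int key)
{
    static const char value[] = "hit";
    return (key == 42) ? value : 0;
}

int main()
{
    if (foo(42) != NULL)     // explicit: reads like the comparisons around it
        std::puts("found (explicit)");

    if (foo(42))             // implicit: relies on the pointer-to-bool conversion
        std::puts("found (implicit)");

    return 0;
}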
I hardly understood any of that… But it seems safe to assume there are no desktop icons.
Server Software..?! All you need is a 486 Intel box without HD and run a Linux floppy router on it.
486, box that fits in the palm of my hand…
that’s a tough choice, there.
Personally I think it is a bad idea not to be explicit.
A lot of the time recursion is also far more elegant, but it is not as maintainable as regular iteration. My point is that just because something is elegant doesn’t always make it a good idea.
Most people tend to think that pointer comparisons are ok. I remember this once breaking so I always cast pointers to `unsigned long’ before making a comparison.
Anyone have any more info about pointer comparisons ‘breaking’? Or is Paul just being superstitious here?
Since pointer comparisons are a part of ANSI C, them breaking would mean whatever compiler you were using was borked. The only issue with pointers I’ve ever heard is this:
long function(long param);
long function(void * param);
If you call function(NULL) the first one gets called instead of the second, because NULL is usually defined to be (0) or (0L). Even then, this doesn’t happen in GCC, because NULL is defined as __null, which is always guaranteed to be a pointer type. Also, this only happens in C++ anyway, and this OS seems to be written in C.
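A quick test program makes the overload resolution visible; the function(NULL) call itself is left commented out because whether it compiles cleanly, warns, or even resolves at all depends on how the platform’s headers define NULL:

#include <cstdio>

long function(long param)  { std::puts("long overload");  return param; }
long function(void *param) { std::puts("void* overload"); return param != 0; }

int main()
{
    function(0L);           // exact match: the long overload
    function((void *) 0);   // explicit pointer: the void* overload
    // function(NULL);      // behaviour depends on whether NULL is (0), (0L) or __null
    return 0;
}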
> And no sane compiler would treat those statements differently.
They cannot. The precise behavior is spelled out in the Most Holy Standard. :)
> Personally I think it is a bad idea not to be explicit.
You like being explicit? Here is a much more explicit version of your statement:
Personally I who am posting this comment think with my brain connected to my eyes viewing this content that it which is what we are talking about is a bad (meaning: undesirable) idea which is something we think about not to be explicit.
Useless garbage. Or nearly useless, since it illustrates that explicitness is not always desirable. Explicitness is desirable when it serves a useful purpose.
> A lot of the time recursion is also far more elegant, but it is not as
> maintainable as using regular iteration.
Bad analogy.
> My point is that just because something is elegant doesn’t make it
> a good idea always.
That is good, but you have not really said why relying on the well-defined conversion rules is not a good thing. The following four examples all do the same thing in C++:
if (p == NULL) do_something ();
if (p == 0) do_something ();
if (!p) do_something ();
if (not p) do_something ();
Personally I prefer to use (p) and (not p) to check for the validity of my pointers because it indicates whether the memory being pointed to exists. I do not like comparing the pointers to magic constants such as the literal zero and the constant NULL. I like:
if (not buffer) buffer = new char [buffer_size];
or better yet in C++:
buffer.resize (new_buffer_size);
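As a rough sketch of the difference (buffer_size and new_buffer_size are just placeholders here, not anything from Paul’s code):

#include <cstddef>
#include <vector>

// Raw-pointer style: test the pointer itself before allocating.
void grow_raw(char *&buffer, std::size_t buffer_size)
{
    if (not buffer)
        buffer = new char[buffer_size];
}

// Container style: std::vector owns the storage and handles the sizing.
void grow_vector(std::vector<char> &buffer, std::size_t new_buffer_size)
{
    buffer.resize(new_buffer_size);
}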
A rather detailed debate has already occurred here:
http://www.gamedev.net/community/forums/topic.asp?whichpage=1&pages…
> Anyone have any more info about pointer comparisons ‘breaking’? Or is
> Paul just being superstitious here?
Paul is just being superstitious.
Someone should buy Paul K&R. He has developed a bunch of wacky behaviours that seem to stem from
1. Not understanding that C++ and C are only superficially similar languages, Stroustrup having deliberately broken many elegant C features in the name of OO. So Paul gets confused about NULL and 0 and about type size rules.
2. The old “all the world’s a Vax” syndrome where “reasonable assumption” means “it works on my machine”. The unsigned long type and the pointer type have nothing useful in common. Casting between them for arithmetic is perverse and will definitely break on some real architectures. Just because they’re both 32-bit on your test PC means nothing.
3. Not realising that pointers are typed. Paul’s “problems” with pointer comparisons are probably related to either not knowing that foo++; advances foo by sizeof(*foo), or not knowing that pointer comparisons are only defined within a single allocation: for arrays you can compare positions within the array plus its one “overflow” (one-past-the-end) element. All other results are undefined. (See the small example below.)
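A small illustration of points 2 and 3 (only a sketch; uintptr_t from <stdint.h> is the type actually meant to carry a pointer’s bits, where the platform provides it):

#include <cstdio>
#include <stdint.h>

int main()
{
    long values[4] = { 1, 2, 3, 4 };
    long *foo = values;

    foo++;   // advances by sizeof(*foo) bytes, i.e. to values[1], not by one byte

    // If an integer image of a pointer is really needed, uintptr_t is the type
    // intended for it; unsigned long merely happens to be wide enough on some machines.
    uintptr_t bits = reinterpret_cast<uintptr_t>(foo);
    std::printf("foo as integer: %lu\n", (unsigned long) bits);

    // Relational comparisons are only defined within one array,
    // plus its one-past-the-end position:
    if (foo < values + 4)
        std::puts("still inside the array");

    return 0;
}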
Still, he has built a cool toy.
It seems to be good for simple hardware, though.
I’m still looking for a free/open-source low-latency OS that supports the most common hardware (serial and USB) and a sound card, so I can make myself a sampler box!
Interesting. The C++ standard requires that NULL be an integer; it explicitly cannot be a pointer (18.1, 4.10). This allows NULL to be assigned to any pointer without a cast, and avoids cv-qualification issues as well.
Given the overloads void f(long); and void f(void*);, a call to f(NULL) is guaranteed to call the long version. I guess that gcc is buggy (that’s not surprising, is it?)
However, given void f(long); void f(int); there’s no guarantee about which one gets called with a call to f(NULL).
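A concrete (if trivial) example; the f(NULL) line stays commented out because which overload it picks is exactly the open question:

#include <cstdio>

void f(long) { std::puts("f(long)"); }
void f(int)  { std::puts("f(int)"); }

int main()
{
    f(0);      // exact match: f(int)
    f(0L);     // exact match: f(long)
    // f(NULL);  // depends entirely on whether the headers define NULL as (0) or (0L)
    return 0;
}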
JBQ