This project is a Mastodon client written in Visual Basic 6. It works on Windows 95 and higher (Windows 10/11 untested).
Yes. Belgian developer Maartje Eyskens made a Mastodon client for Windows 9x. Amazing.
My first thought is – does it or could it be made to run under Win32s on WFWG3.11 with the Microsoft TCP/IP-32 stack? That would be truly amazing to see…
It looks to be using the WinInet HTTP stack, not the raw TCP/IP stack. WinInet was never made available as a platform feature for 16 bit, and the 32 bit one appears to be multithreaded, so I don’t think it could work on Win32s.
As an aside, WinInet was introduced with IE 3 and was typically bundled with IE, and IE 3 required Windows 95 or NT 4. (The 16 bit versions didn’t include WinInet.) However, there was a strange ActiveX redistributable that included IE 3’s WinInet for NT 3.5x, which could be used to support those releases. Note also that IE 3’s WinInet is…early…and requires code to be written for it fairly explicitly.
malxau,
Of course there were lots of TCP stacks back in the day, before Microsoft added one to Windows. They should all still work, but I can’t imagine there’s any real audience for it, haha.
I did not like the way MS ported Berkeley sockets to Windows. On UNIX, sockets are real file descriptors and can be processed by standard syscalls, whereas on Windows it feels like a bit of a rushed hack. IMHO it feels very out of place next to the regular Windows APIs.
Note the relationship between these two points: a handle is a kernel level object, but Winsock is a usermode DLL that could be provided by anybody. Using a file descriptor would not have allowed this to be retrofitted to existing systems.
The BSD-style API on Winsock is “out of place”, because the original Winsock was WSAAsyncSelect et al. 16 bit programs couldn’t synchronously wait for IO across a dial up line, because they have no background threads and couldn’t pump UI messages, so doing this would hang the system. Winsock was an attempt to make sockets asynchronous using something compatible with the UI message pump. The synchronous BSD functions were layered on top, documented to be implemented by pumping UI messages, so they block the caller but don’t block the UI; see WSASetBlockingHook. Rushed hack or not, 16 bit Windows was not friendly to synchronous calls, so having an _lread or similar that doesn’t pump UI messages isn’t going to work.
So yeah, it turns out taking blocking APIs from a system with preemptive scheduling and putting them on a cooperative scheduling system means they’re out of place 🙂
It’s true that in 32 bit land, it could have been revisited, but the goal by that time was to maximize code compatibility between 16 and 32 bit versions. The documentation for WSASetBlockingHook now tells you to use threads, but when this was designed, threads didn’t exist.
malxau,
I’m well aware of that relationship. That’s what makes the Windows implementation feel like such a hack compared to UNIX.
The funny thing is I don’t mind the BSD API on UNIX, which integrates well with the system as a whole. I also don’t mind the native Win32 primitives and event loops, which work well on Windows. It’s the amalgamation of these two API models that bugs me every time. IMHO the results leave a sour taste for Windows socket development – at least in low level code 🙂
Alfman,
On UNIX, objects exist in a named hierarchy. If you want to put something where others can access it, you’d use creat(). Except for sockets, where you use bind() and listen().
If you want to access a resource somebody else put there, you’d use open(). Except for sockets, where you use connect().
If you want to list the things that exist, you’d use readdir(). Except for sockets, where you go run netstat.
If you want to change some option on your file descriptor, you’d use fcntl(). Except for sockets, where you use setsockopt().
It’s true that you can use file descriptor functions on open sockets – sometimes. If the socket supports a stream, maybe you could use read(). But not all sockets are streams, which is why we have recvfrom(), and read() didn’t have enough semantics for sockets anyway, so we have recv().
But don’t worry, you can call read() on a datagram. It’ll just silently truncate data. If it’s a stream and the file descriptor is nonblocking, it might return EAGAIN, but if it’s a socket, EWOULDBLOCK.
Pipes are kind of like sockets, being non-seekable, unnamed streams. Except in traditional unix, pipes are unidirectional – if you want to communicate back, get a second one. Sockets are bidirectional, even though each direction really needs very independent state, and obviously if you call write() then read() you won’t see what you wrote. recv() isn’t for pipes, although its options seem widely applicable there.
Sockets are integrated in UNIX in the sense that on any UNIX system the base API exists and you can use it, but there’s a very clear line between the POSIX file API and the sockets API, and they were clearly designed by different people. The sockets guys did not believe in “everything is a file” – the moment you call socket(), you’re operating in a namespace divorced from the rest of the system. But you might – maybe – be able to use file functions for a subset of sockets.
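The read()-on-a-datagram behavior mentioned above is easy to demonstrate. A minimal sketch in Python on a UNIX-like system, with os.read() standing in for a raw read(2) call (the socketpair is just a convenient way to get connected sockets):

```python
import os
import socket

# read(2) works fine on a connected stream socket (here a socketpair).
stream_a, stream_b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
stream_a.sendall(b"hello")
stream_data = os.read(stream_b.fileno(), 5)
print(stream_data)  # b'hello'

# On a datagram socket, read(2) into a short buffer silently
# discards the rest of the message -- no error, no indication.
dgram_a, dgram_b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
dgram_a.send(b"hello world")
truncated = os.read(dgram_b.fileno(), 5)
print(truncated)  # b'hello' -- " world" is gone

# The next read gets the next datagram, not the lost tail.
dgram_a.send(b"next")
print(os.read(dgram_b.fileno(), 16))  # b'next'
```

This is exactly the asymmetry being described: the file API "works" on sockets only for the cases where the semantics happen to line up.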
In hindsight I’m surprised that this model of sockets ended up in BSD at all. Today, people just learn the differences between sockets and files, use both, and don’t think too much about it. But the philosophical underpinnings are so different that it’s hard to imagine a group of architects being in favor of both.
malxau,
You create a socket with “socket()” and open a file with “open()”. Do you think they should be overloaded into one function? I think two functions make sense here, but either way I think the bigger issue with Windows is that sockets break from a clean unified event model, descriptors are incompatible, heck even the error handling is different. All these differences then propagate into userspace software and APIs that are forced to handle them differently. I find the UNIX approach much more elegant, and it gives us great flexibility to use the same polling mechanism for all IO. You can very efficiently pipe between files, block devices, pipes, process I/O, TCP and UNIX sockets, because the file descriptor tables and primitives are shared across them. Tons of real-world applications benefit from this, like ssh and high-performance nginx application servers. Huge amounts of data can be piped directly from one file descriptor to another, and it can all be done in the kernel without requiring special handling of sockets in userspace.
Yes, I am not denying there are special syscalls that don’t all apply to all file descriptors; there are many more like pread, lseek, fcntl, setsockopt, etc. But so what? The fact that a file doesn’t support setsockopt doesn’t really detract from the benefits of being able to pipe data from one to the other. It doesn’t change my opinion that Windows failed to capture the benefits of BSD sockets when they were ported over. Anyway, please know that I respect your experience even if we disagree about the quality of the Windows socket API compared with UNIX 🙂
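The “same polling mechanism for all IO” point can be sketched in a few lines of Python on a UNIX-like system, with poll(2) standing in for any of select/poll/epoll (the descriptors chosen here are just illustrative):

```python
import os
import select
import socket

# A pipe and a socket, watched by one poll loop with one primitive.
pipe_r, pipe_w = os.pipe()
sock_a, sock_b = socket.socketpair()

os.write(pipe_w, b"from pipe")
sock_a.sendall(b"from sock")

poller = select.poll()
poller.register(pipe_r, select.POLLIN)
poller.register(sock_b.fileno(), select.POLLIN)

# Both descriptor types report readiness through the same interface,
# because both are entries in the same file descriptor table.
ready = {fd for fd, _event in poller.poll(1000)}
print(pipe_r in ready, sock_b.fileno() in ready)  # True True
```

In a C program the same loop would hold int file descriptors for files, pipes, and sockets side by side in one pollfd array.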
Alfman,
The reason for all the earlier examples is to point out that the APIs and error handling are different on UNIX too, and userspace software will inevitably have to face it.
I think this whole thing is a misunderstanding. Winsock originally couldn’t do this, because it made no sense in cooperatively multitasked 16 bit Windows. Winsock 2.0 comes later, and allows you to do exactly this. Unfortunately it looks like they were still trying to support 16 bit (which I don’t think ever actually shipped?) and that leads to confusing documentation. The important part is the final paragraph in https://learn.microsoft.com/en-us/windows/win32/winsock/event-objects-2 – which is also visible in winsock2.h. Once you bind an event to a socket operation of interest, you have a wait primitive that can coexist with wait primitives from different object types, because WaitForMultipleObjects() takes any signal-able handle.
Up to a point, although I think the kernel ends up having to treat each differently and is putting on a show to pretend they’re interchangeable. Some of these need to use unaligned bytes, others need to use sectors, for example, and you can create conditions that force reading unaligned bytes and writing sectors, and the kernel is either going to fail or do some fancy buffer management. You might say this doesn’t matter, but what’s really happening is there are different types, and the kernel is managing a matrix for you.
But once it’s clear that there are different types, there’s no reason that data copies can’t be offloaded to kernel while keeping types distinct. See TransmitFile().
That said, I thought the rise of SSL/TLS killed the idea that offloading these copies to kernel was the right approach.
malxau,
When a socket primitive on Windows returns SOCKET_ERROR, you have to use WSAGetLastError(), whereas for normal Windows functions you use GetLastError(). You have to remember to use the right one, otherwise you’ll be reading the wrong error. But on UNIX all the kernel operations use the errno mechanism regardless of whether a file descriptor refers to a file/socket/terminal/whatever.
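To illustrate the UNIX side of this, here is a small Python sketch showing a would-block condition surfacing through the one errno channel for both a pipe and a socket (Python reports it as an OSError carrying the errno value):

```python
import errno
import os
import socket

# A nonblocking pipe with nothing to read...
pipe_r, pipe_w = os.pipe()
os.set_blocking(pipe_r, False)
try:
    os.read(pipe_r, 1)
except OSError as e:
    pipe_errno = e.errno  # EAGAIN

# ...and a nonblocking socket with nothing to read.
sock_a, sock_b = socket.socketpair()
sock_b.setblocking(False)
try:
    sock_b.recv(1)
except OSError as e:
    sock_errno = e.errno  # EAGAIN/EWOULDBLOCK

# Same reporting mechanism for both descriptor types.
print(pipe_errno == errno.EAGAIN,
      sock_errno in (errno.EAGAIN, errno.EWOULDBLOCK))
```

(EAGAIN and EWOULDBLOCK are the same value on Linux; POSIX permits them to differ, which is the wrinkle mentioned a few posts up.)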
I disagree with this premise. When an application blocks for IO, be it sockets or anything else, it can and should yield to the OS and other processes. There’s no fundamental incompatibility between BSD sockets and cooperative multitasking in general. Limitations may have arisen from Microsoft’s decision to implement sockets as a userspace DLL rather than an operating system primitive, but that’s on them. A native kernel implementation wouldn’t have had problems even with cooperative multitasking. A BSD application can do whatever work it needs to and then pass the baton while waiting for IO. This idea still works with cooperative multitasking with non-interruptible code on a single CPU.
Yes I know this and have done this, but it requires additional levels of indirection that aren’t required using file descriptors on unix.
Yes, they’re polymorphic and my point is that it’s very handy to have that interchangeable behavior in unix applications. I’d even say it is one of unix’s claims to fame: having unix tools that work across sockets, files, disks, pipes, etc. We don’t want separate tools and functions to do the exact same task on different types of resources. BSD sockets on unix make it easy and efficient to treat file descriptors generically. On unix this is considered advantageous over incompatible primitives that would require separate implementations.
On unix, it’s still quite practical to pipe plaintext into an openssl process to provide TLS functionality such that the application barely needs to be changed at all. Of course you might have more IPC overhead if you do it that way, but it’s still pretty neat.
I get we might not be able to agree, but can you understand why someone might prefer the way sockets are unified on unix even though you don’t like it?
Alfman,
Fair point. The other piece of context is that at the time Windows was layered on DOS, DOS provided file I/O, and DOS was synchronous. Ultimately, that means any file I/O prevents message pumping, which is bad, but at least file I/O completes in milliseconds where dial-up receives take seconds. I agree that there’s an alternate universe where the file layer was a lot more modern, and both files and sockets could complete asynchronously allowing a cooperative message pump to operate. But once we built a new kernel for async file IO, there was no reason to keep using cooperative scheduling.
Subjective obviously, but I really like what they did here, even though it implies more code. The original NT model said that many different handle types could be waited on, which is fine, but it assumes there’s exactly one condition for each object type that breaks the wait. select() realizes there are multiple conditions, so it creates exactly three arguments for exactly three conditions. The model in Winsock 2, if extended to other object types, would allow an application to specifically indicate the wait condition across different object types – as in, wait for this process to suspend, wait for that window to minimize, wait for that file to be deleted, etc.
I don’t think this part happened. Many tools on Unix take an argument, which is interpreted as a file and can’t refer to a socket (due to the open()/connect() thing). You could give an argument that’s a disk, but the tool needs special code to understand the sector requirements for that. If you don’t give an argument, the program might treat stdin like a pipe, and programs end up with isatty() etc. to check whether it actually got a pipe or not. In each of these three cases, it’s the tool that’s explicitly supporting them, not that the OS makes them transparent. The fourth, sockets, are generally not supported by tools directly.
I think what you’re implying is that it’s possible for one process to open a socket, set stdin/stdout, and spawn a child, and have that child operate on a socket as if it was a pipe with no socket awareness. That’s true enough, and it’s the origin of the original inetd, although today I’d be very skeptical that an inetd child doesn’t know it’s talking to a socket and use explicit socket functions. But even here, it looks like modern systems set up their own listening sockets, and don’t use inetd. (I checked on the system I’m typing this from – inetd has no services configured and isn’t running.)
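The inetd-style arrangement described above can be sketched like this (Python standing in for the classic fork()/dup2() C code; the child is deliberately written with no socket awareness at all):

```python
import socket
import subprocess
import sys

# The parent owns the socket; the child just reads stdin and writes
# stdout, exactly as an inetd-spawned service would.
parent_end, child_end = socket.socketpair()

child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.readline().upper())"],
    stdin=child_end,   # the socket becomes the child's fd 0
    stdout=child_end)  # ...and fd 1

child_end.close()  # parent no longer needs its copy of this end
parent_end.sendall(b"hello over a socket\n")
reply = parent_end.recv(1024)
child.wait()
print(reply)  # b'HELLO OVER A SOCKET\n'
```

This works precisely because on UNIX a socket is a real file descriptor that can be dup'd into the stdin/stdout slots, which is the kernel-level property Winsock 1.x couldn’t offer.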
Right, but that seems to negate the benefit of kernel copies, no? All the data has to be passed up to usermode to be translated, so the user program ends up calling write() on a different descriptor once it decrypts data, and there’s no role for kernel offload?
To be clear, I respect you and consider this a good conversation. I’m just not sure I agree with the fundamental premise that Unix ever made sockets transparent, and where it tried (inetd) it ended up moving away from that anyway. If it really was true that I could just run a BSD cat on a socket and not need a totally different nc instead, then I’d agree with the point.
The big unification, IMHO, is that select() could use non-socket descriptors, which is kind of cute. Windows later did WaitForMultipleObjects, and there’s a whole conversation about the strengths and weaknesses of each. But even the idea that read() and write() can be used on sockets seems questionable – I mean, they can, but a program is highly likely going to need a more specialized function sooner or later, so the benefit of having that support is limited. It’s more of a transition path for existing code than a unified long term approach.
My first idea was that’s for “Windows for Workgroups 3.11”.
3.11 was probably 1st “Windows” for guys growing up in MS-DOS days.
98SE was also a very nice release, with beautiful pixel-art hi-color icons (there was a bug)
Probably my favourite Windows ever. As an MS-DOS guy I hated ME and flirted with Mandrake Linux right then, and dual booted until Windows XP
“3.11 was probably 1st ‘Windows’ for guys growing up in MS-DOS days.”
My first computer capable of running MS Windows had MS Windows 3.0. I updated it to 3.1. I’ve never used 3.11 because I was never in a workgroup back in those days.
Kinda interesting, but even more interesting is the https://github.com/atauenis/webone HTTP proxy. I need to get one of those going for the same retro reasons. Modern TLS doesn’t work on old systems, and this is by far the best way of dealing with that problem.
https://github.com/classilla/cryanc
Might be viable in some instances also.
Was gonna say that this was invented to distract George RR Martin, but then discovered he worked on a DOS machine.