General Development Archive
To argue that Objective-C resembles a metaphysically divine language, or even a good language, is like saying Shakespeare is best appreciated in pig latin. Objective-C is, at best, polarizing. Ridiculed for its unrelenting verbosity and peculiar square brackets, it is used only for building Mac and iPhone apps and would have faded into obscurity in the early 1990s had it not been for an unlikely quirk of history. Nevertheless, in my time working as a software engineer in San Francisco in the early 2010s, I repeatedly found myself at dive bars in SoMa or in the comments of HackerNews defending its most cumbersome design choices. ↫ Gabriel Nicholas at Wired I’ll just step back and let y’all handle this one.
One thing I love about Python is how it comes with its very own built-in zen. In moments of tribulations, when I am wrestling with crooked code and tangled thoughts, I often find solace in its timeless wisdom. ↫ Susam Pal I can’t program and know nothing about Python, but this still made me laugh.
Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing? When a house is being built, tons of people are involved: architects, civil engineers, plumbers, electricians, bricklayers, interior designers, roofers, surveyors, pavers, you name it. You don’t expect a single person, or even a whole single company, to be able to do all of those. ↫ Vitor M. de Sousa Pereira I’ve always found that software development gets a ton of special treatment and leeway in quality expectations, and this has allowed the kind of stuff the linked article is writing about to become the norm. Corporations can demand so much from developers and programmers to the point where expecting quality is wholly unreasonable, because there are basically no consequences for delivering a shit product. Bugs, crashes, security issues, lack of documentation, horrid localisation – it’s all par for the course in software, yet we would not tolerate any of that in almost any other type of product. While I’m sure some of this can be attributed to developers themselves, most of it seems to stem from incompetent managers imposing impossible deadlines downwards and setting unrealistic expectations upwards – you know, kick down, lick up – creating a perfect storm of incompetence. We all know it, we all experience it every day, and we all hate it – but we’ve just accepted it. As consumers, as developers, as regulatory bodies. It’s too late to fix this now. Software development will forever exist as a sort of no man’s land of quality expectations, free from regulations, warranties, and consumer protections. Imposing them now, after the fact, will never be accepted by the industry, nor will it ever make it through any country’s lawmaking process, and we all suffer for it, both as users of software and as makers of it.
To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage. By porting the current codebase, we expect to be able to preview a native implementation of tsc capable of command-line typechecking by mid-2025, with a feature-complete solution for project builds and a language service by the end of the year. ↫ Anders Hejlsberg It seems Microsoft is porting TypeScript to Go, and will eventually offer both “TypeScript (JS)” and “TypeScript (native)” alongside one another during a transition period. TypeScript 6.x will be the JavaScript-based one and will continue to be developed until TypeScript 7.0, the Go one, is mature enough. During the 6.x release cycle, however, there will be breaking changes and deprecations in preparation for 7.0. Those are some serious performance improvements, but I’m sure quite a few projects are going to run into issues during the transition period. I hope for them that the 6.x branch remains maintained long enough to reasonably get everyone on board with the new Go version.
Bjarne Stroustrup, creator of C++, has issued a call for the C++ community to defend the programming language, which has been shunned by cybersecurity agencies and technical experts in recent years for its memory safety shortcomings. C and C++ are built around manual memory management, which can result in memory safety errors, such as out of bounds reads and writes, though both languages can be written and combined with tools and libraries to help minimize that risk. These sorts of bugs, when they do crop up, represent the majority of vulnerabilities in large codebases. ↫ Thomas Claburn at The Register I mean, it makes sense to me for those responsible for new code to use programming languages that more or less remove the most common class of vulnerabilities. With memory-safe languages like Rust having been around for quite a while now, it’s almost wilful negligence to write new code where security is a priority in anything but such memory-safe languages. Of course, this doesn’t mean you delete any and all existing code – it just means you really need to start writing any new code in safer languages. After all, research shows that even when you only write new code in memory-safe languages, the reduction in vulnerabilities is massive. This reminds me a lot of those old videos of people responding to then-new laws mandating the use of seat belts in cars. A lot of people didn’t want to put them on, saying things to the tune of “I don’t need one because I’m a good driver”. Even if you are a good driver – which statistically you aren’t – everyone else on the road isn’t. When we see those old videos now, they feel quaint, archaic, and dumb – of course you wear a seat belt, you’d be an irresponsible idiot not to! – but only a few decades ago, those arguments made perfect sense to people. It won’t be long before the same will apply to people doggedly refusing to use memory-safe languages or libraries/extensions that introduce such safety to existing languages, and Bjarne Stroustrup seems to understand that. Are you really smarter than Bjarne Stroustrup?
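To make the class of bug concrete, here’s a minimal illustration (mine, not from the article) of the kind of out-of-bounds write that C compiles without complaint and that memory-safe languages reject at compile time or catch at runtime:

```c
/* A classic C out-of-bounds write: it compiles cleanly, yet silently
 * corrupts adjacent memory. Illustration only. */
#include <string.h>

int main(void) {
    char buf[8];
    /* "fifteen chars!!" is 15 characters plus a terminating NUL:
     * 16 bytes copied into an 8-byte buffer. Undefined behavior,
     * and in real code a potential security vulnerability. */
    strcpy(buf, "fifteen chars!!");
    return buf[0];
}
```

A memory-safe language turns this into a compile-time error or a guaranteed runtime panic instead of silent corruption, which is exactly the class of vulnerability the cybersecurity agencies are worried about.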
I’m sure we can all have a calm, rational discussion about this, so here it goes: zlib-rs, the Rust re-implementation of the zlib library, is now faster than its C counterparts in both decompression and compression. We’ve released version 0.4.2 of zlib-rs, featuring a number of substantial performance improvements. We are now (to our knowledge) the fastest api-compatible zlib implementation for decompression, and beat the competition in the most important compression cases too. ↫ Folkert de Vries As someone who isn’t a programmer, looking at all the controversies and fallout around anything related to Rust is both fascinating and worrying. Fascinating because Rust clearly brings a whole slew of improvements over established and older languages, and worrying because the backlash from the establishment has been wildly irrational and bordering on the childish, complete with temper tantrums and people taking their ball and going home. It shouldn’t surprise me that people get attached to programming languages the same way people get attached to operating systems, but surprisingly, it still does. If Rust not only provides certain valuable benefits like memory safety, but can also be used to create implementations that are faster than those created in, say, C, it’s really only going to be a matter of time before it simply becomes an untenable position to block Rust from, say, the Linux kernel. Progress has a tendency to find a way, especially the more substantial the benefits get, and as studies show, even only writing new code in memory-safe languages provides substantial benefits. In other words, more and more projects will simply switch over to Rust for new code where it makes sense, whether Rust haters want it or not. There will be enough non-Rust code to write and maintain, though, so I don’t think people will be out of a job any time soon because they refuse to learn Rust, but to me as an outsider, the Rust hate seems to grow more and more irrational by the day.
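“Api-compatible” is doing a lot of work in that quote: existing C programs written against the familiar zlib calls shouldn’t need source changes, just a different library at link time. Here’s a rough sketch of that classic API (my example, written from the standard zlib interface and the project’s compatibility claim, not from zlib-rs’s own documentation):

```c
/* Round-trip through the classic zlib API (compress2/uncompress).
 * Against stock zlib this builds with: cc demo.c -lz
 * An api-compatible implementation such as zlib-rs should be able
 * to stand in behind the same calls. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void) {
    const char *msg = "hello, hello, hello, hello, hello!";
    uLong src_len = (uLong)strlen(msg) + 1;   /* include the NUL */

    Bytef packed[128];
    uLongf packed_len = sizeof(packed);
    if (compress2(packed, &packed_len, (const Bytef *)msg, src_len,
                  Z_BEST_COMPRESSION) != Z_OK)
        return 1;

    Bytef unpacked[128];
    uLongf unpacked_len = sizeof(unpacked);
    if (uncompress(unpacked, &unpacked_len, packed, packed_len) != Z_OK)
        return 1;

    printf("%lu -> %lu -> %lu bytes: %s\n",
           (unsigned long)src_len, (unsigned long)packed_len,
           (unsigned long)unpacked_len, (char *)unpacked);
    return 0;
}
```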
Cassette is a GUI application framework written in C11, with a UI inspired by the cassette-futurism aesthetic. Built for modern POSIX systems, it’s made out of three libraries: CGUI, CCFG and COBJ. Cassette is free and open-source software, licensed under the LGPL-3.0. ↫ Cassette GitHub page Upon first reading this description, you might wonder what a “cassette-futurism aesthetic” really is, but once you take a look at the screenshots of what Cassette can do, you immediately understand what it means. It’s still in the alpha stage and there’s a lot still to do, but what it has now is already something quite unique that I don’t think the major toolkits really cater to, or could even pull off. There’s an example application that’s focused on showing some system stats, and that’s exactly the kind of stuff this seems a great fit for: good-looking, small widget-like applications showing glanceable information.
Now, if you have been following the development of EndBASIC, this is not surprising. The defining characteristic of the EndBASIC console is that it’s hybrid as the video shows. What’s newsworthy, however, is that the EndBASIC console can now run directly on a framebuffer exposed by the kernel. No X11 nor Wayland in the picture (pun intended). But how? The answer lies in NetBSD’s flexible wscons framework, and this article dives into what it takes to render graphics on a standard Unix system. I’ve found this exercise exciting because, in the old days, graphics were trivial (mode 13h, anyone?) and, for many years now, computers have used framebuffer-backed textual consoles. The kernel is obviously rendering “graphics” by drawing individual letters; so why can’t you, a user of the system, do so too? ↫ Julio Merino This opens up a lot of interesting use cases and fun hacks for developers to implement in their CLI applications. All the code in the article is – as usual – way over my head, but will be trivial for quite a few of you. The mentioned EndBASIC project, created by the author, Julio Merino, is fascinating too: EndBASIC is an interpreter for a BASIC-like language and is inspired by Amstrad’s Locomotive BASIC 1.1 and Microsoft’s QuickBASIC 4.5. Like the former, EndBASIC intends to provide an interactive environment that seamlessly merges coding with immediate visual feedback. Like the latter, EndBASIC offers higher-level programming constructs and strong typing. EndBASIC’s primary goal is to offer a simplified and restricted DOS-like environment to learn the foundations of programming and computing, and focuses on features that quickly reward the learner. These include a built-in text editor, commands to manipulate the screen, commands to interact with shared files, and even commands to interact with the hardware of a Raspberry Pi. ↫ EndBASIC website Being able to run this on a machine without having to load either X or Wayland is a huge boon, and makes it fast and accessible on quite a lot of hardware on which a full X or Wayland setup would be cumbersome or slow.
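For the curious, the core of the technique the article walks through looks roughly like this: ask wscons to switch the display into dumb-framebuffer mode, then mmap the framebuffer and write pixels. This sketch uses the ioctl names from NetBSD’s wsconsio.h; the device node and the near-total lack of error handling are my simplifying assumptions, so treat it as orientation rather than a finished program:

```c
/* Rough sketch: map a NetBSD wscons framebuffer and fill the screen.
 * Assumes /dev/ttyE0 and skips most error handling for brevity. */
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <dev/wscons/wsconsio.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/ttyE0", O_RDWR);
    if (fd < 0) return 1;

    struct wsdisplay_fbinfo info;   /* width, height, depth, cmsize */
    u_int linebytes;                /* bytes per scanline */
    ioctl(fd, WSDISPLAYIO_GINFO, &info);
    ioctl(fd, WSDISPLAYIO_LINEBYTES, &linebytes);

    int mode = WSDISPLAYIO_MODE_DUMBFB;   /* leave text emulation */
    ioctl(fd, WSDISPLAYIO_SMODE, &mode);

    size_t size = (size_t)linebytes * info.height;
    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb != MAP_FAILED) {
        for (size_t i = 0; i < size; i++)  /* paint everything white */
            fb[i] = 0xff;
        sleep(2);
        munmap(fb, size);
    }

    mode = WSDISPLAYIO_MODE_EMUL;         /* restore the text console */
    ioctl(fd, WSDISPLAYIO_SMODE, &mode);
    close(fd);
    return 0;
}
```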
Hey there! In this book, we’re going to build a small operating system from scratch, step by step. You might get intimidated when you hear OS or kernel development, but the basic functions of an OS (especially the kernel) are surprisingly simple. Even Linux, which is often cited as a huge open-source project, was only 8,413 lines in version 0.01. Today’s Linux kernel is overwhelmingly large, but it started with a tiny codebase, just like your hobby project. We’ll implement basic context switching, paging, user mode, a command-line shell, a disk device driver, and file read/write operations in C. Sounds like a lot, but it’s only 1,000 lines of code! ↫ Seiya Nuta It’s exactly what it says on the tin.
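If “context switching” sounds like black magic, the underlying idea fits in a screenful. The book does it on bare metal; the sketch below is mine, not the book’s code, and shows the same concept in portable userspace C using POSIX ucontext, where the operating system hands us the save/restore primitives:

```c
/* Cooperative context switching with POSIX ucontext: a userspace
 * illustration of what a kernel does by saving and restoring CPU
 * registers and stacks itself. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];      /* the task's private stack */

static void task(void) {
    printf("task: running\n");
    swapcontext(&task_ctx, &main_ctx);  /* yield back to main */
    printf("task: resumed\n");
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof(task_stack);
    task_ctx.uc_link = &main_ctx;       /* where to go when task returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);  /* save main, run task */
    printf("main: back, resuming task\n");
    swapcontext(&main_ctx, &task_ctx);  /* resume task where it yielded */
    printf("main: done\n");
    return 0;
}
```

A kernel does the same dance, except it saves and restores the registers itself, usually in a few lines of assembly, instead of asking libc to do it.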
When we announced the security flaw CVE-2024-11053 on December 11, 2024 together with the release of curl 8.11.1 we fixed a security bug that was introduced in a curl release 9039 days ago. That is close to twenty-five years. The previous record holder was CVE-2022-35252 at 8729 days. ↫ Daniel Stenberg It’s really quite fascinating to see details like this about such a widespread and widely used tool like curl. The bug in question was a logic error, which led Stenberg to detail how a modern language like Rust, instead of C, would not have prevented this particular issue. Still, about 40% of all security issues in curl stem from not using a memory-safe language, or about 50% of all high/critical severity ones. I understand that jumping on every bandwagon and rewriting everything in a memory-safe language is a lot harder than it sounds, but I also feel like it’s getting harder and harder to keep justifying using old languages like C. I really don’t know why people get so incredibly upset at the cold, hard data about this. Anyway, the issue that sparked this post is fixed in curl 8.11.1.
Rejecting an ingrained practice of bullshitting does not come easily. Frameworkism preaches that the way to improve user experiences is to adopt more (or different) tooling from the framework’s ecosystem. This provides adherents with something to do that looks plausibly like engineering, except it isn’t. It can even become a totalising commitment; solutions to user problems outside the framework’s expanded cinematic universe are unavailable to the frameworkist. Non-idiomatic patterns that unlock significant wins for users are bugs to be squashed. And without data or evidence to counterbalance bullshit artists’ assertions, who’s to say they’re wrong? Orthodoxy unmoored from measurements of user outcomes predictably spins into abstruse absurdities. Heresy, eventually, is perceived to carry heavy sanctions. It’s all nonsense. ↫ Alex Russell I’m not a developer, but every application built on frameworks like React that I’ve ever used tends to be an absolute trainwreck when it comes to performance, usability, consistency, and platform integration. When someone claims to have an application available for a platform I use, but it’s using React or Electron or whatever, they’re lying in my eyes – what they really have is a website running in a window frame, which may or may not even be a native window frame. Developing using these tools indicates to me a lack of care, a lack of respect for the users of your product. I am militantly native. I’d rather use a less functional application than a Chrome web application cosplaying as a real application, and I will most likely not even consider using your service if all you have is a website-in-a-box. If you don’t respect me, I see no need to respect you. If you want an application on a specific platform, use that platform’s native tools and APIs to build it. Anything else tells me all I need to know about how much you truly care about the product you’re building.
This is the first post in what will hopefully become a series of posts about a virtual machine I’m developing as a hobby project called Bismuth. This post will touch on some of the design fundamentals and goals, with future posts going into more detail on each. But to explain how I got here I first have to tell you about Bismuth, the kernel. ↫ Eniko Fox It’s not every day that a developer of an awesome video game details a project they’re working on that also happens to be excellent material for OSNews. Eniko Fox, one of the developers of the recently released Kitsune Tails, has also been working on an operating system and virtual machine in her spare time, and has recently been detailing the experience in, well, more detail. This one here is the first article in the series, and a few days ago she published the second part, about memory safety in the VM. The first article goes into the origins of the project, as well as the design goals for the virtual machine. It started out as an operating systems development side project, but once it was time to develop things like the MMU and virtual memory mapping, Fox started wondering if programs couldn’t simply run inside a virtual machine atop the kernel instead. This is how the actual Bismuth virtual machine was conceived. Fox wants the virtual machine to care about memory safety, and that’s what the second article goes into. Since the VM is written in C, which is anything but memory-safe, she’s opting for implementing a form of sandboxing – which also happens to be the point in the development story where my limited knowledge starts to fail me and things get a little too complicated for me. I can’t even internalise how links work in Markdown, after all (square or regular brackets first? Also, Markdown sucks as a writing tool, but that’s a story for another time). For those of you more capable than me – so basically most of you – Fox’s series is a great one to follow along with as she further develops the Bismuth VM.
FLTK 1.4.0 has been released. This new version of the Fast Light Toolkit contains some major improvements, such as Wayland support on both Linux and FreeBSD. X11 and Wayland are both supported by default, and applications using FLTK will launch using Wayland if available, and otherwise fall back to starting with X11. This new release also brings HiDPI support on Linux and Windows, and improves said support on macOS. Those are the headline features, but there’s more changes here, of course, as well as the usual round of bugfixes. Right after the release of 1.4.0, a quick bugfix release, version 1.4.0-1, was released to address an issue in 1.4.0 – a build error on a single test program on Windows, when using Visual Studio. Not exactly a major bug, but great to see the team fix it so rapidly.
Update: that was quick! GitHub banned the “AI” company’s account. Only GitHub gets to spam AI on GitHub, thank you very much. Most of the time, products with “AI” features just elicit sighs, especially when the product category in question really doesn’t need to have anything to do with “AI” in any way, shape, or form. More often than not, though, such features are optional and easily ignorable, and we can always simply choose not to buy or use said products in the first place. I mean, over the last few days I’ve migrated my Pixel 8 Pro from stock Google Android to GrapheneOS as the final part of my platform transition away from big tech, and Google’s insistence on shoving “AI” into everything certainly helped in spurring this along. But what are you supposed to do if an “AI” product forces itself upon you? What if you can’t run away from it? What if, one day, you open your GitHub repository and see a bunch of useless PRs from an “AI” bot that claims to help you fix issues, without you asking it to do so? Well, that’s what’s happening to a bunch of GitHub users who were unpleasantly surprised to see garbage, useless pull requests from a random startup testing out some “AI” tool that attempts to automatically ‘fix’ open issues on GitHub. The proposed ‘fixes’ are accompanied by a disclaimer: Disclaimer: The commit was created by Latta AI and you should never copy paste this code before you check the correctness of generated code. Solution might not be complete, you should use this code as an inspiration only. This issue was tried to solve for free by Latta AI – https://latta.ai/ourmission If you no longer want Latta AI to attempt solving issues on your repository, you can block this account. ↫ Example of a public open issue with the “AI” spam Let me remind you: this tool, called “Latta AI”, is doing all of this unprompted, without consent, and the commits generally seem bogus and useless, too, in that they don’t actually fix any of the issues. To make matters worse, your GitHub repository will then automatically appear as part of its marketing – again, without any consent or permission from the owners of the GitHub projects in question. Clicking through to the GitHub repositories listed on the front page will reveal a lot about how developers are responding: they’re not amused. Every link I clicked on had Latta AI’s commit and comment marked as spam, abuse, or just outright deleted. We’re talking public open issues here, so it’s not like developers aren’t looking for input and possible fixes from third parties – they just want that input and those possible fixes to come from real humans, not some jank code generator that’s making us destroy the planet even faster. This is what the future of “AI” really looks like. It’s going to make spam even easier to make, even more pervasive, and even cheaper, and it’s going to infest everything. Nothing will be safe from these monkeys on typewriters, and considering what the spread of misinformation by human-powered troll farms can do, I don’t think we’re remotely ready for what “AI” is going to mean for our society. I can assure you lying about brown people eating cats and dogs will be remembered as quaint before this nonsense is over.
Some months ago, I got really fed up with C. Like, I don’t hate C. Hating programming languages is silly. But it was way too much effort to do simple things like lists/hashmaps and other simple data structures and such. I decided to try this language called Odin, which is one of these “Better C” languages. And I ended up liking it so much that I moved my game Artificial Rage from C to Odin. Since Odin has support for Raylib too (like everything, really), it was very easy to move things around. Here’s how it all went… or at least, what I remember of it. ↫ Akseli Lahtinen You programmers might’ve thought you escaped the wrath of Monday on OSNews, but after putting the IT administrators to work in my previous post, it’s now time for you to get to work. If you have a C codebase and want to move it to something else, in this case Odin, Lahtinen’s article will send you on your way. As someone who barely knows how to write HTML, it’s difficult for me to say anything meaningful about the technical details, but I feel like there’s a lot of useful, first-hand info here.
As of the previous release of POSIX, the Austin Group gained more control over the specification, having it be more working group oriented, and they got to work making the POSIX specification more modern. POSIX 2024 is the first release that bears the fruits of this labor, and as such, the changes made to it are particularly interesting, as they will define the direction of the specification going forwards. This is what this article is about! Well, mostly. POSIX is composed of a couple of sections. Notably XBD (Base Definitions, which talk about things like what a file is, how regular expressions work, etc), XSH (System Interfaces, the C API that defines POSIX’s internals), and XCU (which defines the shell command language, and the standard utilities available for the system). There’s also XRAT, which explains the rationale of the authors, but it’s less relevant for our purposes today. XBD and XRAT are both interesting as context for XSH and XCU, but those are the real meat of the specification. This article will focus on the XCU section, in particular the utilities part of that section. If you’re more interested in the XSH section, there’s an excellent summary page by sortix’s Jonas Termansen that you can read here. ↫ im tosti The weekend isn’t over yet, so here’s some more light reading.
I want to take advantage of Go’s concurrency and parallelism for some of my upcoming projects, allowing for some serious number crunching capabilities. But what if I wanted EVEN MORE POWER?!? Enter SIMD: Single Instruction, Multiple Data. SIMD instructions allow for parallel number crunching capabilities right down at the hardware level. Many programming languages either have compiler optimizations that use SIMD or libraries that offer SIMD support. However, (as far as I can tell) Go’s compiler does not utilize SIMD, and I could not find a general purpose SIMD package that I liked. I just want a package that offers a thin abstraction layer over arithmetic and bitwise SIMD operations. So like any good programmer I decided to slightly reinvent the wheel and write my very own SIMD package. How hard could it be? After doing some preliminary research I discovered that Go uses its own internal assembly language called Plan9. I consider it more of an assembly format than its own language. Plan9 uses the target platform’s instructions and registers with slight modifications to their names and usage. This means that x86 Plan9 is different than, say, ARM Plan9. Overall, pretty weird stuff. I am not sure why the Go team went down this route. Maybe it simplifies the compiler by having this bespoke assembly format? ↫ Jacob Ray Pehringer Another case of light reading for the weekend. Even as a non-programmer I learned some interesting things from this one, and it created some appreciation for Go, even if I don’t fully grasp things like this. On top of that, at least a few of you will think this has to do with Plan9 the operating system, which I find a mildly entertaining ruse to subject you to.
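To see why this is worth the trouble, here’s what a SIMD addition looks like one level down. This sketch uses C with SSE intrinsics rather than Go’s Plan9 assembly (which is the part the article actually covers), but the hardware idea is identical: one instruction operates on four floats at once:

```c
/* One SIMD instruction, four additions: SSE intrinsics in C.
 * The same addps-style hardware operation is what a Go SIMD
 * package would ultimately emit via Plan9 assembly. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[4]   = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4]   = {10.0f, 20.0f, 30.0f, 40.0f};
    float out[4];

    __m128 va   = _mm_loadu_ps(a);       /* load 4 floats at once */
    __m128 vb   = _mm_loadu_ps(b);
    __m128 vsum = _mm_add_ps(va, vb);    /* 4 adds in 1 instruction */
    _mm_storeu_ps(out, vsum);

    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

Scalar code would need four separate add instructions for the same result; on wider vector units (AVX, AVX-512) the ratio only grows.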
In today’s world, everything is turning digital: manufacturing, retail, and agriculture. The global digital transformation market is set to reach a worth of $1,009.8 billion by 2025, according to a report from Grand View Research, and this is one of the many reasons why technology has become the go-to method for streamlining operations, creating efficiency, and unlocking new possibilities. Development teams, specialized groups of tech talent, are at the heart of this transformation, moving material digitisation forward. Their influence is felt across many industries, redefining how firms approach innovation, sustainability, and customer interaction.

The Role of Dedicated Development Teams in Material Digitization

The consistency, expertise, and focus that dedicated development teams bring often provide the impetus needed for an in-depth tackle of the complexities of material digitisation. It is not all about coding; it is about teams made up of project managers, analysts, engineers, and designers who integrate digital technologies into material handling and processing.

Why a Dedicated Team?

Choosing a dedicated team model for digitisation projects offers several advantages.

Driving Innovation and Efficiency

Dedicated development teams have made revolutionary contributions to material digitisation. They digitise conventional materials and, in the process, create completely new avenues for innovation and efficiency in handling them.

Case Studies of Success

Navigating Challenges Together

Of course, material digitisation comes with its problems. Data security, integration with existing systems, and guaranteeing true-to-life digital representations of materials are the specific difficulties facing most dedicated development teams. Partnering with an IT outstaffing company can enhance their skill and teamwork, contributing to overcoming these setbacks.

Overcoming Data Security Concerns

Among the most critical issues in any digitisation project is data security. Dedicated teams respond with solid protection measures, including encryption and secure access controls for digital materials. Additionally, regular security audits and updates are needed to locate weaknesses that emerging threats could exploit. By prioritizing data security, organizations earn user trust and ensure their services comply with regulatory standards.

Seamless Integration With Existing Systems

Similarly, dedicated teams work on seamlessly integrating digital materials into existing systems so that they can be put to practical use. In most cases, this demands bespoke API development or middleware solutions that keep data flowing smoothly and unhindered across platforms. Rigorous testing and validation are thus required to establish that all systems communicate effectively and that data integrity is not compromised. Here, integration means increased productivity and an enhanced ability on the part of users to apply digital resources more usefully.

The Multifaceted Benefits of Material Digitization

The impact of dedicated development teams on material digitisation reaches well beyond operational efficiencies, driving it toward sustainability and personalisation.

Sustainability Through Digitization

By digitizing materials, companies can reduce waste and optimize resources. For example, digital inventory systems prevent overproduction and excess inventory through efficient demand forecasting. This helps not only the environment but also the company’s bottom line. Besides, real-time data analytics enable organizations to make more informed decisions and respond promptly to changes in markets and industries. Sustainable practices enable companies to remain competitive in their respective industries.

Enhancing Customer Engagement

Material digitisation also opens up several new opportunities related to customer experience. Immersive experiences offered by VR and AR enable customers to try out a product virtually before buying it. Not only does this improve the buying experience, it also helps develop a better brand relationship. Moreover, personalized experiences can be built around user preferences, which makes a customer feel genuinely unique and understood. Businesses can thus create customer loyalty and repeat purchases by offering memorable and unique interactions.

The Road Ahead: Collaborating for a Digitized Future

Material digitisation is an ongoing journey full of potential and challenges. As companies continue their exploration, the role of dedicated development teams will become much more important. Specialized teams are not simple service providers but strategic partners in innovation that help businesses navigate the complexities of the digital landscape.

A Collaborative Ecosystem

The digitisation of materials needs an ecosystem approach in which businesses, developers, and even end users work together. Encouraging open communication, feedback, and co-innovation leads to more practical digitisation solutions. Partnerships across different sectors allow stakeholders to draw on diversified experience and insight for continuous improvement. This collaborative approach accelerates the development of new technologies and ensures solutions that fit real user needs.

Staying Ahead of the Curve

Staying ahead is only possible with continuous learning and adaptation in a continuously changing digital world. Development teams should continually explore new technologies, methodologies, and practices to ensure that the digitisation of materials not only meets current needs but also addresses future trends and opportunities. This allows teams to be more proactive in introducing innovative solutions that maximize efficiency and improve the user experience. With a culture of continuous improvement, organizations will hold leadership positions in their industries and be prepared for whatever complications arise from the ever-changing digital landscape.

Conclusion

The influence of dedicated development teams on material digitization goes deep and wide. Committed to expertise, innovation, and a perspective on the future, they help industries down the value chain unlock new potential, efficiency, and sustainability while making the customer experience more engaging. No doubt, this collaboration between teams and businesses will form a cornerstone of the digital transformation journey as it reshapes the way we interact with materials in our everyday lives.
A YouTube channel has resurrected a programming language that hadn’t been seen since the 1980s — in a testament to both the enduring power of our technology and the communities that care about it. But best of all, Simpson uploaded the language to the Internet Archive, along with all his support materials, inviting his viewers to write their own programs (and saying he hoped his upstairs neighbor would’ve approved). And in our email interview, Simpson said since then it’s already been downloaded over 1,000 times — “which is pretty amazing for something so old.” ↫ David Cassel It’s great that this lost programming language, MicroText for the Commodore 64, was rediscovered, but I’m a bit confused as to how “lost” this language really was. I mean, it was “discovered” in a properly listed eBay listing, which feels like cheating to me. When I think of stories of discoveries of long-lost software, games, or media, it usually involves things like finding it in a shed after years of searching, or someone at a company going through that box of old hard drives discovering the game they worked on 32 years ago. I don’t know, something about this whole story feels off to me, and it’s ringing some alarm bells I can’t quite place. Regardless, it’s cool to have MicroText readily available on the web now, so that people can rediscover it and create awesome new things with it. Perhaps there are old ideas to be relearned here.
Tcl 9.0 and Tk 9.0 – usually lumped together as Tcl/Tk – have been released. Tcl 9.0 brings 64-bit compatibility so it can address data values larger than 2 GB, better Unicode support, support for mounting ZIP files as file systems, and much, much more. Tk 9.0 gets support for scalable vector graphics, much better platform integration with things like system trays, gestures, and so on, and much more.