For nearly 15 years, FreeBSD has been at the core of my personal infrastructure, and my passion for it has only grown over time. As a die-hard fan, I’ve stuck with BSD-based systems because they continue to deliver exactly what I need—storage, networking, and security—without missing a beat. The features I initially fell in love with, like ZFS, jails, and pf, are still rock-solid and irreplaceable. There’s no need to overhaul them, and in many ways, that reliability is what keeps me hooked. My scripts from 20 years ago still work, and that’s a rare kind of stability that few platforms can boast. It’s not just me, either—big names like Netflix, Microsoft, and NetApp, alongside companies like Tailscale and AMD, continue to support FreeBSD, further reinforcing my belief in its strength and longevity (you can find the donators and sponsors right here). Yet, while this familiarity is comforting, it’s becoming clear that FreeBSD must evolve to keep pace with the modern landscape of computing. ↫ gyptazy It’s good to read so many articles and comments from long-time FreeBSD users and contributors who seem to recognise that there’s a real opportunity for FreeBSD to become more than ‘just’ a solid server operating system. This aligns neatly with FreeBSD itself recognising this, too, and investing in improving the operating system’s support for what are now considered basic laptop features like touchpad gestures and advanced sleep states, among other things. I’ve long held the belief that the BSDs are far closer to attracting a wider, more general computing-focused audience than even they themselves sometimes seem to think. There’s a real, tangible benefit to the way BSDs are developed and structured – a base system developed by one team – compared to the Linux world, and there’s enough disgruntlement among especially longtime Linux users about things like Wayland and systemd that there’s a pool of potential users to attract that didn’t exist only a few years ago. If you’re a little unsure about the future of Linux – give one of the BSDs a try. There’s a real chance you’ll love it.
In case you missed it at the 2024 Samsung Developer Conference today, our partners at Samsung Visual Display discussed the work they have been doing to port the Tizen operating system to RISC-V. Tizen is an open-source operating system (OS) that is used in many Samsung smart T.V.s and it makes sense that they would look to the fast growing, global open-standard RISC-V to develop future systems. The presentation showed the results of efforts at both companies to expand the capabilities of the already robust Tizen approach. At the event they also demonstrated a T.V. running on RISC-V and using a SiFive Performance P470 based core. ↫ John Ronco The announcement is sparse on details, and there isn’t much more to add than this, but the reality is that of course Samsung was going to port Tizen to RISC-V. The growing architecture is bound to compete with the industry standard ARM in a variety of market segments, and it makes perfect sense to have your TV and other (what we used to call) embedded operating systems ready to go.
Hot on the heels of releasing Redox 0.9.0, the team is back with yet another monthly update. Understandably, it’s not as massive of an update as other months, but there’s still more than enough here. There are the usual bug fixes and small changes, but also more work on the port to RISC-V, the QEMU port (as in, running QEMU on Redox), a bunch of improvements to Relibc, and a lot more.
Windows 11 2024 Update, also known as version 24H2, is now publicly available. Microsoft announced the rollout alongside the new AI-powered features that are coming soon to Windows Insiders with Copilot+ PCs and Copilot upgrades. Unlike recent Windows 11 updates, version 24H2 is a “full operating system swap,” so updating to it will take more time than usual. What is going as usual is the way the update is being offered to users. Microsoft is gradually rolling out the update to “seekers” with Windows 11 versions 22H2 and 23H2. That means you need to go to the Settings app and manually request the update. ↫ Taras Buria at Neowin I’ve said it a few times before but I completely lost track of how Windows releases and updates work at this point. I thought this version and its features had been available for ages already, but apparently I was wrong, and it’s only being released now. For now, you can get it by opting in through Windows Update, while the update will be pushed to everyone later on. I really wish Microsoft would move to a simpler, more straightforward release model and cadence, but alas. Anyway, this version brings all the AI/ML Copilot stuff, WiFi 7 support, improvements to File Explorer and the system tray, the addition of the sudo command, and more. The changes to Explorer are kind of hilarious to me, as Microsoft seems to have finally figured out labels are a good thing – the weird copy/cut/paste buttons in the context menu have labels now – but this enhanced context menu still has its own context menu. Explorer now also comes with support for more compression formats, which is a welcome change in 2007. To gain access to the new sudo command, go to Settings > System > For developers and enable the option. For the rest, this isn’t a very impactful release, and will do little to convince the much larger Windows 10 userbase to switch to Windows 11, something that’s going to be a real problem for Microsoft in the coming year.
In 1999, some members from the MMC Association decided to split and create SD Association. But nobody seems to exactly know why. ↫ sdomi’s webpage I don’t even know how to summarise any of this research, because it’s not only a lot of information, it’s also deeply bureaucratic and boring – it takes a certain kind of person to enjoy this sort of stuff, and I happen to fit the bill. This is a great read.
Succeeding in trading usually depends on knowing market trends and understanding price changes. Data-driven decisions also play a big part. Exploring forex trading for beginners might be difficult, but technical analysis is a powerful tool to ease the work. It can significantly boost trading performance. This article explains how traders can make technical analyses to make better decisions. Traders can have a higher chance of success in the fast-paced forex market by looking at patterns and knowing key indicators.

Understanding the Basics of Technical Analysis

Technical analysis is a way to evaluate financial markets. It does this by using past price data and volume. Instead of considering economic or social influences, it focuses on chart patterns and indicators. These patterns help in predicting future price changes. This approach works well in forex trading since currency markets are highly liquid and often follow trends. For beginners, technical analysis can reduce the complexity of forex trading. Traders focus on chart patterns, candlestick shapes, and key indicators. This method offers a structured way to view the market. It helps new traders spot profitable opportunities with more ease.

Key Technical Indicators to Know

Indicators are essential in technical analysis. They give insights into price trends and market energy. A popular indicator is the Moving Average. This indicator smooths out price data over a period. Moving averages help traders spot trends and choose entry and exit points. They also filter out unnecessary price noise. Another important indicator is the Relative Strength Index (RSI). RSI measures how fast and intense price movements are. When RSI hits 70 or more, it may show a currency is overbought. But if it reads below 30, the currency could be oversold. This might be a good buying opportunity.

Using Candlestick Patterns to Understand Market Sentiment

Candlestick patterns are commonly used in technical analysis. They show market sentiment and possible price changes. A candlestick chart displays the open, close, high, and low prices within a set time. It simplifies the process of spotting price fluctuations. Patterns like Hammer, Engulfing, and Doji give hints about trends and reversals. For instance, a Hammer pattern has a long lower shadow and a small body. It often shows up at the end of a downtrend. This signals a possible reversal. The Engulfing pattern appears when a smaller candle is followed by a larger one. It indicates a strong shift in sentiment. Reading these patterns can help traders understand the emotions driving price moves. This knowledge leads to better trading choices.

The Importance of Support and Resistance Levels

Support and resistance levels are crucial in technical analysis. They mark price points where the market struggles to break through. Support levels act like a “floor,” where a currency’s price tends to stop falling and might bounce back up. Resistance levels, however, act as a “ceiling” where prices often can’t go any higher. Traders use these levels to decide entry and exit points. For example, when a currency pair’s price nears a support level, it could be a buying chance. Traders might expect a rebound. Knowing these levels helps traders manage risk and make smart decisions.

Using Moving Averages to Filter Market Noise

Moving averages are helpful indicators. They smooth out price data to reveal trends. The Simple Moving Average (SMA) and Exponential Moving Average (EMA) are widely used. SMAs calculate an average price over a certain period. EMAs give more weight to recent prices, so they react faster to new market data. Moving averages help beginners determine if a currency is moving higher or lower. If a short-term moving average goes over a long-term one, it could indicate a chance to buy. Using moving averages simplifies the decision-making process. They display patterns without being influenced by brief price fluctuations. Through the utilisation of moving averages, traders are able to minimise the impact of market fluctuations, resulting in improved trend recognition and decision-making capabilities.

How Technical Analysis Builds Trading Confidence

Technical analysis isn’t just a strategy; it boosts confidence. By using data and known indicators, traders avoid impulsive choices based on market emotions. A disciplined approach with technical analysis fosters consistency. It lets traders follow their plans, even when the market shifts. Technical analysis, if utilized correctly, decreases risks and enables traders to identify favorable trading chances. This structured method is advantageous for newcomers who could be daunted by the intricacies of the market. Over time, the trading performance gets better with the help of technical analysis. It further assists traders in obtaining a more profound comprehension of market behaviour. Technical analysis offers useful instruments for enhancing trading outcomes. Using indicators, candlestick patterns, trend lines, and support and resistance levels, traders have the ability to make educated decisions in the forex market. This methodical strategy helps newcomers identify patterns, manage potential dangers, and develop self-assurance. Through incorporating technical analysis into their trading approach, traders enhance their likelihood of success in the competitive forex market.
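Since the article leans on these indicators, here is a small, self-contained Python sketch of how an SMA, an EMA, and a basic RSI can be computed over a list of closing prices. The price data and periods are made-up examples, and the RSI uses the simple averaging form rather than Wilder's smoothing, so treat it as an illustration of the formulas rather than a trading tool.

```python
# Illustrative only: made-up closing prices, conventional default periods.

def sma(prices, period):
    """Simple Moving Average: plain average of each window of `period` closes."""
    return [sum(prices[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(prices))]

def ema(prices, period):
    """Exponential Moving Average: recent closes get more weight."""
    k = 2 / (period + 1)                      # standard smoothing factor
    values = [sum(prices[:period]) / period]  # seed with an SMA of the first window
    for price in prices[period:]:
        values.append(price * k + values[-1] * (1 - k))
    return values

def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes (simple form)."""
    changes = [b - a for a, b in zip(prices, prices[1:])][-period:]
    gains = sum(c for c in changes if c > 0)
    losses = -sum(c for c in changes if c < 0)
    if losses == 0:
        return 100.0                          # no losing periods: maximally overbought
    return 100 - 100 / (1 + gains / losses)

closes = [1.0840, 1.0862, 1.0855, 1.0871, 1.0890, 1.0884, 1.0902, 1.0918,
          1.0909, 1.0925, 1.0941, 1.0933, 1.0950, 1.0962, 1.0971]

print("SMA(5): ", round(sma(closes, 5)[-1], 4))
print("EMA(5): ", round(ema(closes, 5)[-1], 4))
print("RSI(14):", round(rsi(closes), 1))  # above 70 suggests overbought, below 30 oversold
```

The crossover signal mentioned above would then just compare the last elements of two such series, for example a 5-period SMA against a 20-period one.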
FreeBSD is going to take its desktop use quite a bit more seriously going forward. FreeBSD has long been a top choice for IT professionals and organizations focused on servers and networking, and it is known for its unmatched stability, performance, and security. However, as technology evolves, FreeBSD faces a significant challenge: supporting modern laptops. To address this, the FreeBSD Foundation and Quantum Leap Research has committed $750,000 to improve laptop support, a strategic investment that will be pivotal in FreeBSD’s future. ↫ FreeBSD Foundation blog So, what are they going to spend this big bag of money on? Well, exactly the kind of things you expect. They want to improve and broaden support for various wireless chipsets, add support for modern powersaving processor states, and make sure laptop-specific features like touchpad gestures, specialty buttons, and so on, work properly. On top of that, they want to invest in better graphics driver support for Intel and AMD, as well as make it more seamless to switch between various audio devices, which is especially crucial on laptops where people might reasonably be expected to use headphones. In addition, while not specifically related to laptops, FreeBSD also intends to invest in support for heterogeneous cores in its scheduler and improvements to the bhyve hypervisor. Virtualisation is, of course, not just something for large desktops and servers, but also something laptop users might turn to for certain tasks and workloads. The FreeBSD project will be working not just with Quantum Leap Research, but also various hardware makers to assist in bringing FreeBSD’s laptop support to a more modern, plug-and-play state. Additionally, the mentioned cash injection is not set in stone; additional contributions from both individuals and larger organisations are obviously welcome, and of course if you can contribute code, bug reports, documentation, and so on, you’re also more than welcome to jump in.
Recently I came across a minor mystery—the model numbers of the original IBM PC. For such a pivotal product, there is remarkably little detailed original information from the early days. ↫ Michal Necasek Count me surprised. When I think IBM, I think meticulously documented and detailed bureaucracy, where every screw, nut, and bolt is numbered, documented, and tracked, so much so in fact this all-American company even managed to impress the Germans. You’d expect IBM, of all companies, to have overly detailed lists of every IBM PC it ever designed, manufactured, and sold, but as it turns out, it’s actually quite hard to assemble a complete list of the early IBM PCs the company sold. The biggest problem is the models from before 1983, since before that year, the IBM PC does not appear in IBM’s detailed archive of announcements. As such, Michal Necasek had to dig into random bits of IBM documentation to assemble references to those earlier models, and while he certainly didn’t find every single one of them, it’s a great start, and others can surely pick up the search from here.
When Valve took its second major crack at making Steam machines happen, in the form of the Steam Deck, one of the big surprises was the company’s choice to base the Linux operating system the Steam Deck uses on Arch Linux, instead of the Debian base it was using before. It seems this choice is not only benefiting Valve, but also Arch. We are excited to announce that Arch Linux is entering into a direct collaboration with Valve. Valve is generously providing backing for two critical projects that will have a huge impact on our distribution: a build service infrastructure and a secure signing enclave. By supporting work on a freelance basis for these topics, Valve enables us to work on them without being limited solely by the free time of our volunteers. ↫ Levente Polyak This is great news for Arch, but of course, also for Linux in general. The work distributions do to improve their user experience tend to be picked up by other distributions, and it’s clear that Valve’s contributions have been vast. With these collaborations, Valve is also showing it’s in it for the long term, and not just interested in taking from the community, but also in giving, which is good news for the large number of people now using Linux for gaming. The Arch team highlights that these projects will follow the regular administrative and decision-making processes within the distribution, so we’re not looking at parallel efforts forced upon everyone else without a say.
California Governor Gavin Newsom has signed a law (AB 2426) to combat “disappearing” purchases of digital games, movies, music, and ebooks. The legislation will force digital storefronts to tell customers they’re just getting a license to use the digital media, rather than suggesting they actually own it. When the law comes into effect next year, it will ban digital storefronts from using terms like “buy” or “purchase,” unless they inform customers that they’re not getting unrestricted access to whatever they’re buying. Storefronts will have to tell customers they’re getting a license that can be revoked as well as provide a list of all the restrictions that come along with it. Companies that break the rule could be fined for false advertising. ↫ Emma Roth at The Verge A step in the right direction, but a lot more is definitely needed. This law in particular seems to leave a lot of wiggle room for companies to keep using the “purchase” term while hiding the disclosure somewhere in the very, very small fine print. I would much rather a law like this just straight up ban the use of the term “purchase” and similar terms when all you’re getting is a license. Why allow them to keep lying about the nature of the transaction in exchange for some fine print somewhere? The software industry in particular has been enjoying a free ride when it comes to consumer protection laws, and the kind of malpractice, lack of accountability, and laughable quality control would have any other industry shut down in weeks for severe negligence. We’re taking baby steps, but it seems we’re finally arriving at a point where basic consumer protection laws and rights are being applied to software, too. Several decades too late, but at least it’s something.
A cart and payment process is a critical yet often overlooked part of the user journey that can make or break an ecommerce app. From the outset, cart design, user experience and flexible payment options should be at the top of the agenda for digital brands wanting to drive conversions.

Understanding User Intent

When users add items to their carts, they have shown a clear intent to purchase. To complete that transaction, the following checkout process needs to be as easy and seamless for them as possible. Higher abandonment rates occur due to unnecessary friction caused by a confusing interface, complicated payment flows, and the absence of preferred payment methods, among other things. Studies show that 76% of online shopping carts are eventually abandoned, and clunky checkout design is a big part of this. The cart and payment section is the last step in persuading users to buy. An optimized experience directly correlates with higher conversion rates and more revenue.

Key Aspects to Optimize

There are three key aspects of the cart and checkout process that need to be optimized for conversion-focused brands:

1. Cart Design and User Experience

The cart should provide a simple, visual summary of items added for purchase along with quantity selected and total order value. Allowing users to easily edit item properties, apply discounts, and estimate shipping simplifies what can be an anxiety-inducing process, especially on mobile. Advanced features like saved carts for returning users further facilitate purchases. Offering guest checkout alongside account creation streamlines the process for first-time customers.

2. Flexible Payment Options

Research shows that cart abandonment is reduced on sites that offer preferred payment methods. The more payment modes you enable, the higher the chances that users will find an option they trust and feel comfortable with. Major credit cards, mobile wallets, Buy Now Pay Later schemes, and bank transfers are must-have options; popular local payment methods like Sofort and iDeal matter if you are selling across geographies. PCI-compliant integration with payment gateways such as Stripe and PayPal unlocks multiple payment methods while also keeping transactions and sensitive user data secure. Discover how to add a payment gateway in an app to enhance the payment capabilities of your mobile app (see the code sketch after this article).

3. Testing and Optimization

No cart experience is perfect out of the box. Running A/B tests by tweaking design elements, flows, payment options, etc., provides data-backed insights on what users respond best to. Tools like Hotjar record user sessions directly in your live cart, which surfaces pain points that can then be fixed. Analytics dashboards reveal drop-off rates at each step, average order values, and other trends that indicate scope for improvement if benchmarked periodically.

Examples of Brands with Great Cart Experiences

Some standout examples of brands that ace the cart and payment process:

1. Made.com

Made.com offers a clean, distraction-free cart with a focus on only relevant details like items added, shipping estimates, order total, discounts, and gift cards applied. Purchasing without account creation is possible through their guest checkout, with the option to save details for faster repeat orders. At checkout, multiple payment methods are clearly presented, along with clear messaging around security and returns policy – both essential to gain user trust for a furniture brand.

2. Bolt.com

Bolt presents users with a single-page visual cart that provides details of services (food, rides, etc.) along with associated quantities, pricing and taxes. Pre-added tips can be edited before seamlessly checking out via integrated payment partner Stripe. Discounts and promo codes can also be applied directly on this page. The cart is optimized for speed, which is in line with Bolt’s brand promise of efficient deliveries and payments.

3. Amazon

Amazon offers the gold standard for guided cart experiences, with persistent visibility into items added for purchase, alerts on discounts, and delivery estimates. Their patented one-click buying option removes friction, allowing power users to skip checkout. However, multiple payment methods, including COD and EMI schemes, make it accessible for first-time buyers, too. The entire purchase process is geared towards user convenience, distilled through decades of testing and user data.

Designing a Cart Experience from Scratch

Creating an effective cart experience requires understanding user psychology, buyer journeys, and an iterative design approach. Here is a step-by-step process to follow:

1. Define Goals and Outcomes

First, define what a successful cart and checkout flow needs to achieve from a business point of view. Typical goals include:

Tie these to overall revenue and growth goals so that design choices align with business impact.

2. Map the Existing User Journey

Analyze data around existing user behavior across the checkout process, e.g.:

The above can be gleaned from tools such as Google Analytics and Hotjar.

3. Competitor Benchmarking

Study how competitor brands within your industry handle cart experiences. Find out which flows or features appeal most to users. For example, guest checkout or Apple/Google Pay for mobility apps, or BNPL options for D2C brands. The right cart design combines learnings from data and real-world behavior.

4. Create and Test Hypotheses

Using what has been researched so far, imagine what cart element and flow changes could positively impact your goals, e.g.:

Once you’ve tested these hypotheses with real users through interviews or prototypes, roll them out globally. Tools like UserTesting.com can be used to run quick user studies and gather feedback.

Choosing the Right Payment Partner

To allow flexible payment options, you must interface with a payment service provider. Here are key aspects to evaluate when choosing a payment partner:

The partner should offer consistent checkout integration across several platforms, including web, mobile apps, POS systems, etc. For businesses selling cross-border, the partner must offer payment acceptance using local methods in 100+ markets, multi-currency processing and DCC. Payment partners guard transactions using 3D Secure, risk-based analysis, artificial intelligence, and similar tools, so you should choose a partner with advanced competence in this field. The partner should support integration with credit cards, the most popular mobile wallets, UPI, BNPL schemes and
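On the payment gateway point above, the server-side part of such an integration can be quite small. Below is a minimal sketch using Stripe's Python library as one example; the API key and amount are placeholders, and a real checkout would also confirm the payment on the client and listen for webhooks to learn the final payment status.

```python
import stripe

stripe.api_key = "sk_test_your_key_here"  # placeholder test key, not a real secret

def create_payment_intent(amount_cents: int, currency: str = "eur") -> str:
    """Create a PaymentIntent server-side and return the client secret
    the frontend needs in order to collect the actual payment details."""
    intent = stripe.PaymentIntent.create(
        amount=amount_cents,                          # amount in the smallest currency unit
        currency=currency,
        automatic_payment_methods={"enabled": True},  # let Stripe offer cards, wallets, etc.
    )
    return intent.client_secret

if __name__ == "__main__":
    # Hypothetical cart total of 49.99 EUR
    print(create_payment_intent(4999))
```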
System76, the premier Linux computer manufacturer and creator of the COSMIC desktop environment, has updated COSMIC’s Alpha release to Alpha 2. The latest release includes more Settings pages, the bulk of functionality for COSMIC Files, highly requested window management features, and considerable infrastructure work for screen reader support, as well as some notable bug fixes. ↫ system76’s blog The pace of development for COSMIC remains solid, even after the first alpha release. This second alpha keeps adding a lot of things considered basic for any desktop environment, such as settings panels for power and battery, sounds, displays, and many more. It also brings window management support for focus follows cursor and cursor follows focus, which will surely please the very specific, small slice of people who swear by those. Also, you can now disable the super key. A major new feature that I’m personally very happy about is the “adjust density” feature. COSMIC will allow you to adjust the spacing between the various user interface elements so you can choose to squeeze more information on your screen, which is one of the major complaints I have about modern UI design in macOS, Windows, and GNOME. Being able to adjust this to your liking is incredibly welcome, especially combined with COSMIC’s ability to change from ‘rounded’ UI elements to ‘square’ UI elements. The file manager has also been vastly, vastly improved, tons of bugs were fixed, and much, much more. It seems COSMIC is on the right path, and I can’t wait to try out the first final release once it lands.
Tcl 9.0 and Tk 9.0 – usually lumped together as Tcl/Tk – have been released. Tcl 9.0 brings 64-bit compatibility so it can address data values larger than 2 GB, better Unicode support, support for mounting ZIP files as file systems, and much, much more. Tk 9.0 gets support for scalable vector graphics, much better platform integration with things like system trays, gestures, and so on, and much more.
The world of software development is rapidly changing. More and more companies are adopting DevOps practices to improve collaboration, increase deployment frequency, and deliver higher-quality software. However, implementing DevOps can be challenging without the right people, processes, and tools. This is where DevOps managed services providers can help. Choosing the right DevOps partner is crucial to maximizing the benefits of DevOps at your organization. This comprehensive guide covers everything you need to know about selecting the best DevOps managed services provider for your needs.

What are DevOps Managed Services?

DevOps managed services provide ongoing management, support, and expertise to help organizations implement DevOps practices. A managed services provider (MSP) becomes an extension of your team, handling tasks like:

This removes the burden of building in-house DevOps competency. It lets your engineers focus on delivering business value instead of struggling with new tools and processes.

Benefits of Using DevOps Managed Services

Here are some of the main reasons to leverage an MSP to assist your DevOps transformation:

Accelerate Time-to-Market

A mature MSP has developed accelerators and blueprints based on years of project experience. This allows them to rapidly stand up CI/CD pipelines, infrastructure, and other solutions. You’ll be able to deploy code faster.

Increase Efficiency

MSPs scale across clients, allowing them to create reusable frameworks, scripts, and integrations for data warehouse services, for example. By leveraging this pooled knowledge, you avoid “reinventing the wheel,” which gets your team more done.

Augment Internal Capabilities

Most IT teams struggle to hire DevOps talent. Engaging an MSP gives you instant access to specialized skills like site reliability engineering (SRE), security hardening, and compliance automation.

Gain Expertise

Most companies are still learning DevOps. An MSP provides advisory services based on what works well across its broad client base, helping you adopt best practices instead of making mistakes.

Reduce Cost

While the exact savings will vary, research shows DevOps and managed services can reduce costs through fewer defects, improved efficiency, and optimized infrastructure usage.

Key Factors to Consider

Choosing the right MSP gives you the greatest chance of success. However, evaluating providers can seem overwhelming, given the diversity of services available. Here are the five criteria to focus on:

1. DevOps Experience and Maturity

Confirm that the provider has real-world expertise, specifically in DevOps engagements. Ask questions such as:

You want confidence that they can guide your organization on the DevOps journey. Also, examine their internal DevOps maturity. An MSP that “walks the talk” by using DevOps practices in their own operations is better positioned to help instill those disciplines in your teams.

2. People, Process, and Tools

A quality MSP considers all three pillars of DevOps success:

People – They have strong technical talent in place and provide training to address any skill gaps. Cultural change is considered part of any engagement.

Process – They enforce proven frameworks for infrastructure management, CI/CD, metrics gathering, etc., but also customize them to your environment rather than taking a one-size-fits-all approach.

Tools – They have preferred platforms and toolchains based on experience, but integrate well with your existing investments rather than demanding wholesale changes.

Aligning an MSP across people, processes, and tools ensures a smooth partnership.

3. Delivery Model and Location

Understand how the MSP prefers to deliver services:

If you have on-site personnel, also consider geographic proximity. An MSP with a delivery center nearby can rotate staff more easily. Most MSPs are flexible enough to align with what works best for a client. Be clear on communication and availability expectations upfront.

4. Security and Compliance Expertise

Today, DevOps and security should go hand-in-hand. Evaluate how much security knowledge the provider brings to the table. Relevant capabilities can include:

Not all clients require advanced security skills. However, given increasing regulatory demands, an MSP that offers broader experience can provide long-term value.

5. Cloud vs On-Premises Support

Many DevOps initiatives – particularly when starting – focus on the public cloud, given cloud platforms’ automation capabilities. However, most enterprises take a hybrid approach, leveraging both on-premises and public cloud. Be clear if you need an MSP able to support:

The required mix of cloud vs. on-prem support should factor into provider selection.

Engagement Models for DevOps Managed Services

MSPs offer varying ways clients can procure their DevOps expertise:

Staff Augmentation

Add skilled DevOps consultants to your team for a fixed time period (typically 3-6 months). This works well to fill immediate talent gaps.

Project Based

Engage an MSP for a specific initiative, such as building a CI/CD pipeline for a business-critical application, with a clear scope and deliverables.

Ongoing Managed Services

Retain an MSP to provide ongoing DevOps support under a longer-term (1+ year) contract. These are more strategic partnerships where MSP metrics and incentives align with client goals.

Hybrid Approaches

Blend staff augmentation, project work, and managed services. This provides flexibility to get quick wins while building long-term capabilities.

Evaluate which model (or combination) suits your requirements and budget.

Overview of Top Managed Service Providers

The market for DevOps managed services features a wide range of global systems integrators, niche specialists, regional firms, and digital transformation agencies. Here is a sampling of leading options across various categories: Langate, Accenture, Cognizant, Wipro, EPAM, Advanced Technology Consulting, and ClearScale. This sampling shows the diversity of options and demonstrates key commonalities, such as automation skills, CI/CD expertise, and experience driving cultural change. As you evaluate providers, develop a shortlist of 2-3 options that seem best aligned, then validate further through detailed discovery conversations and proposal walkthroughs.

A Framework for Comparing Providers

With so many aspects to examine, it helps to use a scorecard to track your assessment as you engage potential DevOps MSPs (a small scoring sketch follows this article):

Criteria                                Weight   Provider 1   Provider 2   Provider 3
Years of Experience                      10%
Client References/Case Studies           15%
Delivery Locations                       10%
Cultural Change Methodology              15%
Security and Compliance Capabilities     10%
Public Cloud Skills                      15%
On-Premises Infrastructure Expertise     15%
Budget Fit                               10%
Total Score                             100%

Customize categories and weighting based on your priorities. Scoring forces clearer decisions compared to general impressions. Share the framework with stakeholders to build consensus on the
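As a small illustration of the scorecard above, here is a Python sketch that tallies weighted scores. The weights mirror the framework's categories; the 1-5 ratings for the two providers are invented for the example and do not refer to any real vendor.

```python
# Weights follow the scorecard above; provider ratings (1-5) are made up.
WEIGHTS = {
    "Years of Experience": 0.10,
    "Client References/Case Studies": 0.15,
    "Delivery Locations": 0.10,
    "Cultural Change Methodology": 0.15,
    "Security and Compliance Capabilities": 0.10,
    "Public Cloud Skills": 0.15,
    "On-Premises Infrastructure Expertise": 0.15,
    "Budget Fit": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must add up to 100%

def weighted_score(ratings):
    """Collapse per-criterion ratings (1-5) into one weighted score out of 5."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

provider_1 = {c: 4 for c in WEIGHTS} | {"Budget Fit": 2}           # strong but pricey
provider_2 = {c: 3 for c in WEIGHTS} | {"Public Cloud Skills": 5}  # average, cloud-heavy

print(f"Provider 1: {weighted_score(provider_1):.2f} / 5")
print(f"Provider 2: {weighted_score(provider_2):.2f} / 5")
```

Adjusting the weights is then just a matter of editing the dictionary, which keeps the comparison explicit and repeatable as new providers are assessed.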
Just want to let y’all know that my family and I have been hit hard with bronchitis these past two weeks, and especially my recovery is going quite slowly (our kids are healthy again, and my wife is recovering quite well!). As such, I haven’t been able to do much OSNews work. I hope things will finally clear up a bit over the weekend so that I can resume normal service come Monday. Enjoy your weekend, y’all!
The push towards memory safe programming languages is strong, and for good reason. However, especially for bigger projects with a lot of code that potentially needs to be rewritten or replaced, you might question if all the effort is even worth it, particularly if all the main contributors would also need to be retrained. Well, it turns out that merely focusing on writing new code in a memory safe language will drastically reduce the number of memory safety issues in a project as a whole. Memory safety vulnerabilities remain a pervasive threat to software security. At Google, we believe the path to eliminating this class of vulnerabilities at scale and building high-assurance software lies in Safe Coding, a secure-by-design approach that prioritizes transitioning to memory-safe languages. This post demonstrates why focusing on Safe Coding for new code quickly and counterintuitively reduces the overall security risk of a codebase, finally breaking through the stubbornly high plateau of memory safety vulnerabilities and starting an exponential decline, all while being scalable and cost-effective. ↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog In this blog post, Google highlights that even if you only write new code in a memory-safe language, while only applying bug fixes to old code, the number of memory safety issues will decrease rapidly, even when the total amount of code written in unsafe languages increases. This is because vulnerabilities decay exponentially – in other words, the older the code, the fewer vulnerabilities it’ll have. In Android, for instance, using this approach, the percentage of memory safety vulnerabilities dropped from 76% to 24% over 6 years, which is a great result and something quite tangible. Despite the majority of code still being unsafe (but, crucially, getting progressively older), we’re seeing a large and continued decline in memory safety vulnerabilities. The results align with what we simulated above, and are even better, potentially as a result of our parallel efforts to improve the safety of our memory unsafe code. We first reported this decline in 2022, and we continue to see the total number of memory safety vulnerabilities dropping. ↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog What this shows is that a large project, like, say, the Linux kernel, for no particular reason whatsoever, doesn’t need to replace all of its code with, say, Rust, again, for no particular reason whatsoever, to reap the benefits of a modern, memory-safe language. Even by focusing on memory-safe languages only for new code, you will still exponentially reduce the number of memory safety vulnerabilities. This is not a new discovery, as it’s something observed and confirmed many times before, and it makes intuitive sense, too; older code has had more time to mature.
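To make the "exponential decay" argument concrete, here is a toy Python simulation of the dynamic described in the post. The decay rate and vulnerability counts are invented parameters for illustration; they are not figures from Google's data.

```python
# Toy model: each year of new unsafe code introduces a fixed number of latent
# vulnerabilities, and every cohort of code loses half of its remaining
# vulnerabilities per year as it matures. Both numbers are made up.
DECAY = 0.5
NEW_VULNS_PER_YEAR = 1000

def simulate(years, safe_from_year):
    """Total latent vulnerabilities per year; new unsafe code stops at safe_from_year."""
    cohorts = []   # remaining vulnerabilities contributed by each year's new code
    totals = []
    for year in range(years):
        cohorts = [v * (1 - DECAY) for v in cohorts]   # existing code matures
        if year < safe_from_year:
            cohorts.append(NEW_VULNS_PER_YEAR)         # still writing unsafe code
        totals.append(sum(cohorts))
    return totals

keep_unsafe = simulate(12, safe_from_year=12)   # never switch: plateaus near 2000
switch_new  = simulate(12, safe_from_year=6)    # new code memory-safe from year 6
for year, (a, b) in enumerate(zip(keep_unsafe, switch_new)):
    print(f"year {year:2d}: all-unsafe ~{a:6.0f}   safe-new-code ~{b:6.0f}")
```

Even in this crude model, the total stops plateauing and starts an exponential decline the moment new code no longer adds vulnerabilities, despite the old unsafe code never being rewritten, which is exactly the point the post makes.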
The other day a friend asked me a pretty interesting question: what happened to all those companies who made those Japanese computer platforms that were never released outside Japan? I thought it’d be worth expanding that answer into a full-size post. ↫ Misty De Meo Japan had a number of computer makers that sold platforms that looked and felt like western PCs, but were actually quite different hardware-wise, and incompatible with the IBM PC. None of these exist anymore today, and the reason is simple: Windows 95. The Japanese platforms compatible enough with the IBM PC that they could get a Windows 95 port turned into a commodity with little to distinguish them from regular IBM PCs, and the odd platform that didn’t use an x86 chip at all – like the X68000 – didn’t get a Windows port and thus just died off. The one platform mentioned in this article that I had never heard of was FM Towns, made by Fujitsu, which had its own graphical operating system called Towns OS. The FM Towns machines and the Towns OS were notable and unique at the time in that it was the first operating system to boot from CD-ROM, and it just so happens that Joe Groff published an article earlier this year detailing this boot process, including a custom bootable image he made. Here in the west we mostly tend to remember the PC-98 and X68000 platforms for their gaming catalogs and stunning designs, but that’s like only remembering the IBM PC for its own gaming catalog. These machines weren’t just glorified game consoles – they were full-fledged desktop computers used for the same boring work stuff we used the IBM PC for, and it truly makes me sad I don’t speak a single character of Japanese, so a unique operating system like Towns OS will always remain a curiosity for me.
Our favorite operating system is now changing the default shell (ksh) to enforce not allowing invalid NUL characters in input that will be parsed as parts of the script. ↫ Undeadly.org As someone who doesn’t deal with stuff like this – I rarely actively use shell scripts – it seems kind of insane to me that this wasn’t the norm since the beginning.
As part of our vision for simplified Windows management from the cloud, Microsoft has announced deprecation of Windows Server Update Services (WSUS). Specifically, this means that we are no longer investing in new capabilities, nor are we accepting new feature requests for WSUS. However, we are preserving current functionality and will continue to publish updates through the WSUS channel. We will also support any content already published through the WSUS channel. ↫ Nir Froimovici What an odd feature to deprecate. Anyone with a large enough fleet of machines probably makes use of Windows Server Update Services, as it adds some much-needed centralised control to the downloading and deployment of Windows updates, so you can do localised partial rollouts for testing, which, as the CrowdStrike debacle showed us once more, is quite important. WSUS also happens to be a local tool, that is set up and run locally, instead of in the cloud, and that’s where we get to the real reason WSUS is being deprecated. Microsoft is advising IT managers who use WSUS to switch to Microsoft’s alternatives, like Windows Autopatch, Microsoft Intune, and Azure Update Manager. These all happen to run in the cloud, giving up that control WSUS provided by running locally, and they’re not free either – they’re subscription services, of course. I mean, technically WSUS isn’t free either as it’s part of Windows Server, but these cloud services come on top of the cost of Windows Server itself. Nobody escapes the relentless march of subscription costs.
The widely-reported “foo is requesting to bypass the system private window picker and directly access your screen and audio” prompt in Sequoia (which Apple has moved from daily to weekly to now monthly) can be disabled by quitting the app, setting the system date far into the future, opening and using the affected app to trigger the nag, clicking “Allow For One Month”, then restoring the correct date. ↫ tinyapps.org blog Or, and this is a bit of a radical idea, you could use an operating system that doesn’t infantilise its users.