When we last heard from Perl, Perl 6 was going off on its own to become Raku, Perl 5 was going to continue until version 5.36, which would serve as the basis for Perl NG, and Perl NG would be known as Perl 7 because Raku had burned the Perl 6 namespace. No one saw the humor in “not that Perl 6, the other Perl 6”.
Anyway, the Perl Steering Committee (PSC) decided to write a blog post about the future of Perl and Perl 7.
The first PSC was elected in late 2020, and one of our first tasks was to create a plan for the future of Perl, and to put that in motion. A lot of discussion and iteration followed, but the strategy we agreed is:
1. Existing sensibly-written Perl 5 code should continue to run under future releases of Perl. Sometimes this won’t be possible, for example if a security bug requires a change that breaks backward compatibility.
[…]
2. We want to drive the language forwards, increasing the rate at which new features are introduced. This resulted in the introduction of the RFC process, which anyone can use to propose language changes.
3. We want to make it easy for people to use these new features, and want to do what we can to encourage their adoption.

At some point in the future, the PSC may decide that the set of features, taken together, represent a big enough step forward to justify a new baseline for Perl. If that happens, then the version will be bumped to 7.0.
So basically, nothing is going to change. Perl 5 will continue on into infinity adding features as it has been doing.
I have such a love/hate relationship with perl. I’d dig into it more, but it doesn’t have anything compelling to me anymore. Lots of baggage. Raku just took too long. This proposal is what maybe should have happened 20 years ago.
I still use perl in place of bash scripting, which I have a hate/hate relationship with, haha. As arcane as perl is, it does everything better than bash IMHO. There may be other scripting languages I’d consider over perl but the fact that it’s available by default on most distros is a plus.
I wouldn’t want to use perl for a complex project, but I appreciate that it is very capable when needed. No problems reading files/parsing strings/creating data structures/etc, it fits the bill. I do have gripes with it, such as having way too many libraries that try to do the same thing (like date & time libraries).
Yes, have to agree.
Perl remains very utilitarian even if it is a box of Lego at times, but so often I can find a ready-made solution in a Perl library, and you just won’t find a better solution in any other scripting language. Some might paint being stuck in 5.7.x as Perl hell, and maybe we don’t see many performance and capability increases, but its slow evolution has its advantages as well. Often I find Perl can be the quick fix you need until you have time for the more permanent solution.
I mostly have a love relationship with Perl; I know it inside out and manage to avoid the pitfalls these days. I still find it amazing that after all those years almost any programming paradigm is possible (but never forced!) in Perl. I can’t live without hashes out of the box anymore and dread having to code in a language that lacks all the string manipulation and functions like grep, map, split, pop, push, etc. Don’t forget the tens of thousands of awesome libraries too. Great community.
While Perl is missing some language robustness and is idiosyncratic at times (flat lists?), many people who complain about the readability of Perl simply aren’t fluent enough in it.
Hence replies like this on the Register https://forums.theregister.com/forum/all/2022/05/26/perl_v7/
(first comment) irritate the crap out of me.
The first person blames Perl for the unusual way localtime handles years, not knowing it has nothing to do with Perl and everything to do with the standard C POSIX functions.
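For anyone unfamiliar with the gripe: localtime in list context returns the year as years since 1900 and the month as 0–11, exactly mirroring C’s struct tm, so the fix is a couple of additions. A quick illustration:

    use strict;
    use warnings;
    use POSIX qw(strftime);

    # localtime mirrors C's struct tm: $year is years since 1900, $mon is 0-11.
    my ($sec, $min, $hour, $mday, $mon, $year) = localtime;
    printf "%04d-%02d-%02d\n", $year + 1900, $mon + 1, $mday;

    # Or let strftime do the arithmetic for you.
    print strftime("%Y-%m-%d\n", localtime);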
Raku is what Perl 7 should have been, and that language for me fixed all the ugly bits. But it’s probably too late and its implementation too slow for general use.
It seems the decision to make use of ‘use v5.xx’ makes complete sense.
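For readers who haven’t followed the mechanism: declaring a minimum version also opts that file into the release’s feature bundle, while code that doesn’t opt in keeps running as before. A minimal sketch (the greet sub is made up):

    use v5.36;   # enables strict, warnings, say, signatures, etc. for this file only

    sub greet ($name) {      # subroutine signatures are stable as of 5.36
        say "Hello, $name";
    }

    greet('world');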
I actually agree with the poster that it’s bad. You are right about the lineage of course, but even in C people hate doing that. It was always a bad API for both C and perl. We can’t fix it now without breaking everything, and the poster dismisses just how big a compatibility problem that would cause.
The truth is there were tons of short-sighted decisions from the early days of computing that still affect us now, but it is what it is. Personally I am critical of 32-bit IP addresses and 1500-byte MTUs, which of course were fine when they were invented, but it’s so disappointing that we continue to be held back by them so many decades later.
A lot of the compatibility problems are exaggerated by broken attempts at solving them. IPv4 is a good example. Instead of incrementally fixing the issue of the limited address space, we have a wholesale solution that will never(*) get adopted outside of tunnels or internal networks behind a NAT. IPv6 should have been IPv4-compatible from the beginning. Why change the header format at all? Just point it at one of the IPv4-to-IPv6 bridges and put all the new stuff at the beginning of the payload. Yes, that’s an ugly solution from the engineering point of view, but it is the only way of deploying such a change.
*) When prices of IP numbers become truly prohibitive, they will split the network into those who must have them and can afford them (shared servers, public IPs of network providers) and everyone else behind a NAT.
You can do what you suggested right now, but the network equipment to do it on a large scale didn’t exist. Nobody is going to pay for it. It’s not a feature.
40% of Google visitors are on IPv6 right now, and 5% is added every year. So another 10 years or less.
Lennie,
Of course the equipment doesn’t exist, there is no such standard. If there were, it would initially be enough to deploy NAT-like proxies at both ends and gradually add support for the new protocol to routers in between, to opportunistically remove the need for them.
The issue with IPv6 is that having IPv6-only services is not an option until the adoption rate reaches 90%+. This simply won’t happen any time soon, if ever. Typical chicken-and-egg problem. Current IPv6 adoption is driven mainly by mobile networks, and we won’t reach the critical mass required for switching off IPv4 services until the majority of legacy networks are converted as well.
ndrw,
Yeah they could have changed less in the protocol and that way the IPv6 protocol could be used to talk to IPv4 hosts. I agree this has merit. But you’d still end up with some kind of NAT requirement though due to the differing address space sizes, and NAT to me is the main problem. Breaking peer to peer connectivity is a bad outcome, although companies pushing centralized solutions probably benefited from it.
Alfman,
NAT is unavoidable; it is the only practical solution to the shrinking address space we have. I agree this is bad – it makes the Internet even more centralised. The point is in having a “NAT” that enables people on IPv4 to access IPv6 services.
The key problem with IPv6 is it doesn’t allow IPv4 services to be switched off until almost everyone is on the new standard. Until that happens every server still needs IPv4, even if it means going shared (using Facebook, Google or Microsoft platforms). IPv6 adoption may look impressive but it is mostly coming from mobile, where it forms a big NAT with only some large services available directly. Good luck converting millions of corporate IT systems, independent websites or mail servers. I still keep seeing installation guides starting with “step 1: disable IPv6”.
ndrw,
I assume you meant enabling people on IPv6 to access IPv4 services? People on IPv4 cannot access IPv6 even with NAT due to insufficient address information in IPv4 packets.
It may be possible to use proxy servers instead of NAT however. IPv4 users would use IPv4 connections to the proxy server, which could then originate a native IPv6 session. It would be expensive to scale since every connection would have to open two stateful sockets as well as perform DNS requests. Also many applications don’t support proxy servers. And a proxy server would share some of the same pitfalls as NAT for inbound connections.
Well, in a way that’s sort of the way things work today. The industry is mostly fine with this modern asymmetric design for the internet.
Alfman,
I meant an extension to IPv4, on clients only, that adds IPv6 addresses in the payload. Such packets would be routed through an existing IPv4 network, but instead of arriving at a destination they would end up in an IPv4/IPv4+/IPv6 proxy. If the network was later extended to support IPv4+ or IPv6 addressing, the packets would skip the proxy and go straight to the destination.
Accessing IPv6 hosts through IPv4 networks should have been a design requirement from the beginning. Yes, it would be a trade-off, and it would probably not allow some extra IPv6 features to be bundled on top of it, but we would have a working IPv6 network by now. Something IPv6 will never deliver, because as long as it is not fully deployed it requires all IPv4 services to be up and running.
ndrw,
Ok. I’m not really sure how well routers and firewalls deal with new IPv4 extensions. They should pass them through but given how restrictive some policies are I wouldn’t be surprised if even that doesn’t work. This would break a day-1 switchover, but presumably any such problems would get fixed to be compatible.
One problem is that just because your local and ISP routers support IPv6 doesn’t imply that all intermediary routers do, so it seems like your proposal could conceivably create multiple islands of IPv6 networks that are not directly connected to one another (except via IPv4). To mitigate this packets might have to flip between IPv4 and IPv6 modes several times traversing the network to reach the final destination. IPv6 routers would have to support this from day one to ensure traversal across IPv4 networks that haven’t been upgraded.
This would require IPv4 packets to be altered along the path by IPv4+6 routers, which, although possible, I think would have faced resistance in the standards process.
It’s an interesting idea though and in theory it could continue to be routable even after going through IPv4 carrier grade NAT as long as there was an IPv6 router connected behind the NAT. But I do wonder how BGP would even build the routing tables if IPv6 routers had IPv4 neighbors.
Maybe we choose not to solve the problem, but then a lot of cases wouldn’t work day 1 for a lot of users and it could result in another long term lack of support.
Google says 40% of their users are on IPv6. And still going up at the same rate. At the start of the pandemic it was less than 30%.
For MTU on IPv6, there is a solution if you want REALLY big ones: RFC 2675.
Lennie,
I’ll take those numbers for granted, but I’m going to guess that most of those with IPv6 are in new & developing markets, because there just aren’t any IPv4 addresses left to give out. My guess is mobile users are more likely to have IPv6 for this reason as well. But since an awful lot of the internet is still IPv4, carrier-grade NAT could be required for a long time.
Jumbo packets have existed for a very long time and most consumer equipment can handle them fine, but my gripe is that they’re not typically routable on the internet itself. This is not because the standards don’t allow it, but because so many ISPs and backbone providers won’t support it (and don’t even support path MTU detection). I haven’t found an ISP that supports them yet, not that it would matter because my area is served by a monopoly. Nothing I can do about either problem.
It sucks when you’re using a VPN and the MTU becomes even smaller than normal.
Alfman,
You might be onto something.
I queried “what is my ip” on Google on my cell, and it returned an ipv6 address.
On the laptop, though it was ipv4.
And that was not because my home connection does not have ipv6. If I use Comcast’s own router, that too brings an ipv6 address. However my regular wifi system (Google’s own Nest Wifi) uses ipv4 by default, and ipv6 needs to be enabled manually:
https://support.google.com/googlenest/answer/6361450?hl=en
Correction: It was enabled on my system, and the phone received an ipv6 address. But it still defaulted to ipv4 for some reason. Interesting…
sukru,
Yeah, this will all be ironed out by World IPv6 Launch Day #3. It seems we need one of those every decade.
Like most people in my position, I had to experiment with IPv6 through a tunnel provider.
https://ipv6.he.net/
It’s been a few years since I’ve tried it, but IPv6 was a no-go for me due to longstanding peering disputes between backbone providers. The result left the IPv6 internet fragmented, and I couldn’t connect to my own servers over IPv6 from HE’s tunnel.
http://www.agwa.name/blog/post/working_around_the_he_cogent_ipv6_peering_dispute
http://www.datacenterknowledge.com/archives/2009/10/22/peering-disputes-migrate-to-ipv6
Even android’s lack of dhcpv6 creates absolutely stupid subnetting limitations. I hate having to use NAT to fix it.
http://www.techrepublic.com/article/androids-lack-of-dhcpv6-support-poses-security-and-ipv6-deployment-issues/
I’ve given up on IPv6 until the industry is able to provide more consistent and reliable IPv6 service.
Here is a better link about android’s lack of DHCPv6, and the problems it causes for network admins.
https://www.nullzero.co.uk/android-does-not-support-dhcpv6-and-google-wont-fix-that/
IPv6 has so many addresses, and yet because Google won’t support the standard, administrators cannot subnet and manage their own IPv6 networks. Of all the vendors, Google is the only one to hold us back in this way. Ironically, Google’s forced policy pressures network admins to continue using IPv4 over IPv6 for local networking.
Edit: Bah, my other post about IPv6 is in moderation.
Alfman,
I am not sure about the DHCPv6 situation. However my case is very strange.
With another router, I get both ipv4 and ipv6, and google.com resolves to ipv6
With Google Wifi, I still get both ipv4 and ipv6, but google.com resolves to ipv4.
This is the same exact home connection. Nothing else changed. And that is more puzzling.
sukru,
I don’t know what OS you are using, but on Linux a simple “ping google.com” command performs two DNS requests (captured by tcpdump or wireshark).
And for me both requests successfully return appropriate addresses. The IPv6 version of ping (ping -6) fails immediately with “network is unreachable”, but IPv4 works.
For your case I’d be curious to see whether the IPv6 DNS request is attempted and responded to.
If it is not, then I’d be inclined to believe that your Google Wifi has an ipv4 resolver only, but you should be able to use a different DNS resolver.
My local DNS server…
Google’s public DNS server (should always work even if local resolver is broken)…
If your DNS resolves Google’s IPv6 address, then the next thing to check would be whether your machine got a valid IPv6 address and route; in my case this fails.
If IPv6 is not configured on the machine, then it would suggest an issue with Google Wifi’s DHCPv6 and/or SLAAC prevented your machine’s IPv6 stack from initializing.
If IPv6 is configured then the next thing I’d check is whether the ICMP packet from “ping -6 google.com” is getting sent and received on your LAN as well as the public facing WAN side if possible.
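If it helps, here is a rough Perl sketch of the first two checks (name resolution and basic reachability over IPv6). It only uses the core Socket module, and the hostname is just an example:

    use strict;
    use warnings;
    use Socket qw(getaddrinfo getnameinfo AF_INET6 SOCK_STREAM NI_NUMERICHOST);

    my $host = shift // 'google.com';

    # Check 1: does the name resolve to an AAAA record at all?
    my ($err, @addrs) = getaddrinfo($host, 'http',
        { family => AF_INET6, socktype => SOCK_STREAM });
    die "no IPv6 address for $host: $err\n" if $err;

    for my $ai (@addrs) {
        my ($nierr, $ip) = getnameinfo($ai->{addr}, NI_NUMERICHOST);
        print "resolved: $ip\n" unless $nierr;
    }

    # Check 2: can we actually open an IPv6 TCP connection?
    socket(my $sock, AF_INET6, SOCK_STREAM, 0) or die "socket: $!\n";
    if (connect($sock, $addrs[0]{addr})) {
        print "IPv6 connectivity looks fine\n";
    } else {
        print "resolves, but connect failed: $!\n";   # e.g. "Network is unreachable"
    }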
Alfman,
this was Android, Chrome and Google Search, over Google Wifi thru Comcast.
In theory it should work. In practice it works if I swap Google Wifi with Comcast xFi.
sukru,
Ok, but do you have a regular computer that you can diagnose the wifi network from? Maybe you can try some of those tests on android too, but I can tell you it wouldn’t be my first choice. Android being stripped down to the bare necessities makes it a lot harder to diagnose problems without the usual netadmin tools. I don’t imagine you’ll even be able to get wire traces without a computer.
I’ve come to appreciate Perl in the last few years. All the normal shell functions plus real data structures is nice.
I would like better error handling and consistent sigils though. I’m not a fan of handling problems with “eval” statements, and “@somearray[5]” instead of “$somearray[5]”.
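To make the eval gripe concrete, here is the classic idiom next to the newer core try/catch (experimental since 5.34; risky_call is a stand-in):

    use strict;
    use warnings;
    use feature 'try';
    no warnings 'experimental::try';

    sub risky_call { die "something broke\n" }

    # The classic idiom: block eval plus checking $@ afterwards.
    my $result = eval { risky_call() };
    warn "failed the old way: $@" if $@;

    # The newer core syntax, still marked experimental as of 5.36.
    try {
        risky_call();
    }
    catch ($e) {
        warn "failed the new way: $e";
    }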
How is Raku? I started looking at it the other day, mainly after jokingly suggesting web4 would replace JS with Raku. Some people seemed to think it was a good idea.
Only briefly tried Raku; it has a lot of cool features like grammars and infinite sequences. All the inconsistencies of Perl seem to be gone. No more ‘only perl can compile perl’ stuff, there’s multi-dispatch, and the list goes on and on.
Infinite sequences?
Cool. I’ll check it out and see where it fits in the toolbox.
What I mean is lazy lists (and evaluation)
See:
https://docs.raku.org/language/list#Lazy_lists
You can define arrays using formulas. Also cool is that you can safely compare rational numbers.
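Raku gets this out of the box (the link above shows the formula-style sequences); the nearest thing in Perl 5 is a hand-rolled lazy generator with a closure, just to give the flavour:

    use strict;
    use warnings;

    # Nothing is computed until you ask for the next element.
    sub fib_stream {
        my ($x, $y) = (0, 1);
        return sub {
            my $next = $x;
            ($x, $y) = ($y, $x + $y);
            return $next;
        };
    }

    my $next_fib = fib_stream();
    print join(" ", map { $next_fib->() } 1 .. 10), "\n";   # 0 1 1 2 3 5 8 13 21 34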
Wondercool,
Thanks!
Python happened…
Egh. Python is a different animal. It’s not as good at being a shell replacement as Perl is.
I wouldn’t call Perl a shell replacement.
Python can do just about everything Perl can do in terms of scripting and much much more. And with all the web frameworks for python, Perl became less and less entrenched in CGI.
Perl is not as lightweight as Bash (or any other shell) and not as full-featured as Python. So unless you’re already invested in it, there really is not much interest in it.
As far as I know, Perl can do more or less the same as Python, except Python is slower and is worse at string manipulation. The problem with Perl has never been features but the noisiness of the code and losing the web battle to PHP, Ruby and JS. Perl was never promoted as a shell replacement, far from it.
@Wondercool have to agree on all that.
I was actually watching some young people do Ruby stuff the other day; it’s another language that was full of promise to be like Perl, only faster. I was becoming hooked at first, but after a while drifted back to Perl, mostly because back then Ruby didn’t have Perl’s vast libraries. Without having looked at it, I suspect Ruby still doesn’t.
There is so much appeal for things that claim to be concise and fast, until you realise you might have to write your own libraries to add a full stop!
Perl’s string manipulation is pretty awesome.
PerlTimeLine: 1987
https://history.perl.org/PerlTimeline.html
Combining the best features of C, sed, awk, and sh certainly sounds like Larry intended Perl to be a shell script replacement.
Perl’s cruft was too much, the “noisiness of the code” as you put it. Python is nice because it forced readability. Fast enough, big enough library, secure enough, easy enough to learn.
Bill Shooter of Bul,
I personally don’t like whitespace being significant, but it’s entirely subjective like all coding standards are. I prefer tabs, and “if (…) {” to be on the same line but others specify styles differently.
IMHO ideally it would be best to commit all source code in a normalized form and then everyone’s editors could beautify source code to their personal tastes on screen. This isn’t possible when everyone is using different editors, but an alternative would be to automatically apply local beautification when the code gets checked out and then automatically normalize it when it is checked in. Maybe some companies are doing this already?
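For what it’s worth, git’s clean/smudge filters can already express the normalize-on-check-in idea. A hedged sketch using perltidy as the normalizer (assuming a perltidy that filters stdin to stdout with -st; the profile file names are made up):

    # .gitattributes, committed to the repo
    *.pl  filter=perltidy
    *.pm  filter=perltidy

    # per-clone setup:
    #   "clean" runs when files are staged  -> normalize to the shared style
    #   "smudge" runs on checkout           -> re-beautify to local taste
    git config filter.perltidy.clean  "perltidy -st -pro=.perltidyrc.repo"
    git config filter.perltidy.smudge "perltidy -st -pro=.perltidyrc.mine"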
I used python in a project that needed to monitor postgresql for asynchronous alerts. Maybe my expectations of python were too high for this project, but I found that python concurrency was flaky and buggy under certain edge cases. We didn’t have much time to fix things, so I turned to async notification + polling as a backup for when the async event would fail. The Python 2->3 transition caused us grief too when libraries turned out to be incompatible.
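That project was in Python, but for anyone curious what the listen-plus-poll-fallback pattern looks like on the Perl side, here is a rough DBD::Pg sketch (channel, table and credentials are all made up):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=appdb', 'app', 'secret',
                           { RaiseError => 1, AutoCommit => 1 });

    $dbh->do('LISTEN job_events');    # subscribe to server-side NOTIFY

    while (1) {
        # Drain any notifications that arrived since the last pass.
        while (my $note = $dbh->pg_notifies) {
            my ($channel, $pid, $payload) = @$note;
            handle_event($payload);
        }

        # Polling fallback for anything a NOTIFY might have missed.
        my $pending = $dbh->selectcol_arrayref(
            'SELECT id FROM jobs WHERE processed = false');
        handle_event($_) for @$pending;

        sleep 5;
    }

    sub handle_event { my ($what) = @_; print "processing $what\n" }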
Obviously many people are happy with python too, it’s just hard to please everyone. YMMV
No single tool is best at everything, which is ok as long as there are good choices!
Perl is a shell script interpreter replacement. That’s how I and many other users got introduced to it. It was so much better than shell: no need to worry about availability or compatibility of Unix utilities, rich data structures and functions available out of the box, and a newish perl interpreter installed on all systems. While I didn’t use CPAN much, it was there as well.
However, I do agree Python is a Perl replacement. It provides most of Perl’s functionality and more. Perhaps in a more verbose format, but that has its merits too. For a long time the problem with Python was its instability: base functionality was poorly defined and changed often, but this has settled down a bit, and installing a specific version of Python is usually not a deal breaker either.
ndrw,
I was actually introduced to perl for CGI web forms, but of course I don’t do that anymore. It remains useful for shell scripting, and string processing is a strong point. I’ll whip something up in perl whenever GNU file processing tools are too limited for a task. “cut”, “join”, “grep”, etc. are powerful when strung together, but man it gets frustrating. I find it to be just like using minecraft redstone to do anything useful, haha.
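A throwaway example of the kind of job I mean, summing one column per key, which in shell turns into a sort/awk/join dance but is one line of Perl (the file and column numbers are invented):

    # -l: strip/add newlines, -a: autosplit each line into @F, -n: loop over input
    perl -lane '$sum{$F[0]} += $F[2]; END { print "$_\t$sum{$_}" for sort keys %sum }' access.tsv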
I had the same issues with python (and PHP for that matter, but let’s not go there!).
Out of the box, Python doesn’t have the sysadmin capabilities of Perl. Python has a different focus than Perl does.
I would, and I do. Perl started as an awk replacement, and grew from there. The moment a shell script gets more complicated than running a few commands, I switch to Perl.
As others have pointed out, there are better options depending on what the problem is. For scripting and sysadmin tasks, Perl is really nice. Maybe even for CGI web stuff. Perl integrates into *nix-like systems better than others. Working with the OS is more transparent with Perl. Perl feels like it’s part of the OS, because it kind of is.
As for Python… As long as I don’t have to interact with the base OS, Python is great. It fits the model of a more formal programming language, and since it’s more formal, it doesn’t integrate into the OS as nicely. Python sits on top of the OS.
Perl and Python have different ergonomics. A lot of things I do in Perl I wouldn’t want to do in Python, and vice versa.
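A tiny example of that transparency, with nothing imported: directory handles, file tests and unlink are all core language (the path and the 30-day cutoff are made up):

    use strict;
    use warnings;

    # Remove plain files older than 30 days from a spool directory.
    my $dir = '/var/spool/myapp';
    opendir(my $dh, $dir) or die "can't open $dir: $!";

    for my $name (readdir $dh) {
        my $path = "$dir/$name";
        next unless -f $path;        # plain files only
        next unless -M $path > 30;   # -M: file age in days since last modification
        unlink $path or warn "couldn't remove $path: $!";
    }
    closedir $dh;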
And none are as light or fast as C.
Or C++.
Or Go.
Or Haskell.
Or Rust.
Or any number of languages.
@Flatland_Spider
That’s really the crux of the debate, and while Perl isn’t as lightweight as Rust or other similar derivatives, it still has its place.
Even though you can use it as a shell replacement I wouldn’t describe it as a shell replacement. Really many or perhaps even any of the languages listed above could be shell replacements with enough effort. But that seems to defeat the purpose of a shell. For example, on Windows platforms I could certainly use Powershell instead of Perl, but I wouldn’t because part of the Perl 5 utility is that it is almost as ubiquitous as you can get in terms of platform availability!
cpcf,
Are people getting confused when I say “Perl is a shell replacement”?
I find Perl is a good replacement for shell scripts. I’d rather write Perl than shell scripts.
Obviously, I’m not going to switch my default shell from zsh/ksh/tcsh to Perl. XD
I’m going to say they can’t be shell script replacements since that is not what they’re trying to do. They don’t integrate themselves into the OS as well as Perl or Shell does. They are languages for writing applications; Perl and Shell are more like automation tools.
I’ve tried many languages, and they require a lot more code to do things which Perl handles very concisely with builtin functions.
FreeBSD picked Lua as the scripting language in base to bridge the gap between C and Shell, mainly due to its size, but out of the box, it’s not a good replacement as far as builtin features go.
OpenWRT uses Lua instead of Perl, and it has kind of the same problem. Lua takes more code to drive the OS than Perl does. (I wasn’t going to install Perl or Python because OpenWRT upgrades plus packages is a trainwreck.)
I’m not faulting either project because it makes sense for both (I might fault OpenWRT a little), but Perl’s sysadmin capabilities have yet to be matched.
Flatland_Spider,
I know what you meant. I wouldn’t know how to use perl as a “shell replacement”, but it is so much better than shell scripting, hands down.
Hear, hear.
@Flatland_Spider
Sorry for the confusion, that’s just me taking what I’ve read too literally. Of course you are correct in meaning and intent, and I should have realised your comments were in relation to scripting and not globally applicable to the shell.
cpcf,
It’s cool. It occurred to me people in this chain were being too literal.
alfman,
People have tried using the Python REPL as a shell. If there is a Perl REPL out there, it might work.
Devel::REPL maybe? https://metacpan.org/pod/Devel::REPL
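For the record, Devel::REPL ships a re.pl command, and there is also the old zero-install trick of running the debugger over an empty program:

    # poor man's REPL: the debugger with a trivial program
    perl -de0

    # or install Devel::REPL (e.g. with cpanminus), which provides re.pl
    cpanm Devel::REPL
    re.pl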
Sometimes a benevolent dictator being too benevolent is a bad thing. This is what happened to both Perl 6 and Perl 7.
What Larry should have done is declare Perl 5 to be sunsetting, with all development other than critical safety issues directed towards the new RFCs and the reference implementation, and be done with it.
Instead they came up with this “transition” strategy where Perl 5 development would go on as usual until Perl 6 was “ready” (without a clearly defined finish line), fragmenting the community, creating lots of schisms between core developers (some of them with a vested interest in keeping Perl 5 going on forever as it is) and confusing the Perl users and module creators… all in a community that was already quickly shrinking back when the first Perl 6 RFCs were released.
Perl killed itself. It will never go away, as COBOL never did, but it will never be popular again, and nothing really novel will begin with it.
@CapEnt
Yet so much is still laced together with Perl; I find it quite bizarre how often I cross paths with Perl, and where.
I wouldn’t be predicting its demise just yet!
I suppose the most useful and productive language is the one you know and use, I suspect Perl has a lot of use left.
Perl is one of those languages, at least for me, that I might not look at for three years, but when I do it’s like an old familiar friend.
Actually, quite the opposite. Perl 5 was just fine (like Python 2). There was never any need for incompatible changes and deprecating it. Especially for Perl that was a big deal, as compatibility and stability were its killer feature. I have no idea why this was done; developers could have worked on Perl 5 till now. Why not put it on a VM with a JIT, or add modern features on the side? Looks like now they are trying to correct this exact mistake with Perl 7, but the damage has already been done and most users have moved on.
There are a few things Perl 5 could clean up: consistent sigils on arrays and hashes, better error handling. The error handling is a problem for me outside of a basic script. It’s fine when something can just die, but outside of a simple script, that’s generally not what I want to happen.
As it is, Perl 5 is fine for its niche, and the niche I use it in. It doesn’t need a JIT or OO, as I have other options I can use.
I do like Python 3 better than Python 2.
There are many things we would like to change (be it in Perl 5, Python 2, IPv4) but often such changes are not worth the cost or are simply impossible because of non-technical requirements, social dynamics etc. The project is likely to die during the process (Perl), suffer slower adoption for years (Python), or the change will simply never happen (IPv6).
Yeah, we all have personal preferences, and not everyone else agrees with us.
I think people were trying to stretch Perl into something it really wasn’t and better options came along.
IPv6 is kind of a bag of bad decisions. Its biggest problem seems to be that it’s centralized and makes the assumption that the people in charge of the network aren’t a**clowns.
Python at least stuck to semantic versioning rather than breaking everything in the middle of a major version.
I mean a lot of it was how perl 6 was developed by the community over a long timespan, with no pressure or deadline for even the specification to be finished. It just took too dang long, with too many side adventures like the Parrot VM.
A benevolent dictator being too benevolent, and not enough of a dictator, is a bad thing.