Hewlett-Packard will take a big step toward shaking up its own troubled business and the entire computing industry next year when it releases an operating system for an exotic new computer.
HP is the place where software systems go to die. It’s like an ICU for terminal cases. |-(
Now, seriously, I would love to see a “revolutionary” new OS coming from HP and be surprised by an unusual outcome from them. They used to put out good hardware some time ago (not talking about the consumer-related crap), but their software always sucked.
Not to be a “Debbie Downer” on your parade, but HP-UX is still a very nice OS and works like a charm in my experience. Also, HPLIP is about the best printer package in the world today.
Sadly, HP-UX is mostly irrelevant. They’ve been losing more and more market share to IBM on the Unix front, which isn’t saying a whole lot since the Unix market in general is shrinking. It is now a caretaker market: the majority of the workloads on Unix were designed long ago, and these Unix systems exist primarily to support them. New workloads are mostly going to other platforms (primarily x86), and when the older workloads get updated, they’re also mostly being migrated off Unix. Even old-school relational databases, long the last stand of Unix, are now majority-deployed on x86.
(Some figures: http://www.networkworld.com/article/2168940/servers/the-last-days-o…)
Stability is hardly a differentiator, and it hasn’t been for a while. There are plenty of more relevant, just as stable (if not more so) operating systems with much, much larger ecosystems, and innovations elsewhere show that HP-UX is still fairly stuck in the 90s.
The workloads that run on Unix tend to be very profitable, so we’ll still see them in the datacenter for years to come. But it’s a market that will continue to shrink.
tony,
Don’t conflate “Unix” with system architectures. For example, Solaris (a commercial Unix) has x86 ports.
http://en.wikipedia.org/wiki/Solaris_%28operating_system%29…
Fair point, however the fate is still the same. I would even say that Solaris on x86 is even more irrelevant at this point than Solaris on SPARC, which is saying something.
A revolutionary OS next year? Really? It’s not just an OS but also hardware — for data center servers — that requires them to perfect memristors and will use all fiber instead of copper… and it’s only being covered by MIT Tech Review? Sample code that needs to emulate hardware that won’t exist for, at best, another year, when HP claimed 5 years ago it would be on the market 2 years ago? And then it will just be an interim fix based on Linux that will have to be completely rebuilt when this hypothetical hardware exists? And they’re actually naming this revolutionary OS, the one that comes after the interim OS that requires emulation because the hardware doesn’t exist yet, after a 15-year-old legacy/transitional Mac API?
Methinks HP will not be releasing a revolutionary new OS in 2015, and that this is just a hype story to keep HP in the press so that someone may continue to believe HP is still doing something.
HP has been sampling memristors to its hardware partners for some time now.
I don’t know if you noticed, but HP is out-performing the market and just today got up-rated to “Buy” by various ratings houses; investors have gone bullish. So apparently HP are doing enough to keep the markets happy, even if you’re ignorant of what that is.
Don’t pretend that I lack information on the current state of memristor tech or HP’s financials, without any knowledge of what I do or do not know, just to prop up this press release. Suggesting that HP is in good shape from a financial or a products/services perspective, or that memristors are around the corner for mainstream data center usage, is laughable. Optimism is good, but claiming certainty is silly.
Are you for real? HP is out-performing the market. Its stock price has nearly doubled in a year.
These “hurr HP is dying what do they make anyway hurr printer ink” posts piss me off.
The same HP that was found guilty of bribery in several countries ?
Yes. It’s actually up about 40% this year (not doubled); however, yes, it’s up well over 100% over 2 years… But it’s still underwater from 4 years ago.
No, I’m not one who thinks HP is just printers and crap consumer PCs (although they certainly are that too), but no, HP regaining some ground doesn’t change my perspective that they’ve been lost, mismanaged, non-innovative, and largely incapable of delivering on their hype for nearly a decade.
They make pretty decent hardware. I’m quite happy with my ZBook, for example.
which has absolutely nothing to do with technology
If HP weren’t capable of selling technology that people want, they wouldn’t be worth the money they are now. HP server hardware really is great, for example, and they sell a hell of a lot of it. Ditto stuff like HP Helion; the OpenStack market is predicted to be >$3b next year, and HP is well positioned to take a large chunk of it.
So yes, it is about technology. HP might have fucked up in the past but they’ve certainly managed to turn it around.
You are partly correct that HP’s stock price has nothing to do with technology. The announcement, however, appears to be intended to drive the stock price higher, not just to state that the improbable ultra computer with super duper new OS is just around the corner.
It means they sell quite a lot of printer cartridges.
Methinks you should watch the tech news more closely: this is old news (google “the machine hp”). It first surfaced back in June, when it made headlines in a large number of publications. Now there are a few updates to the original story. See for example http://arstechnica.com/information-technology/2014/06/hp-labs-machi…
I’m unclear why you think I haven’t read these stories or that this disproves my point. Yes, HP has been working on this for years, promising it for years, saying it’s just around the corner… How does that negate my assertion that this is just PR for something that seems to always be just around the corner but never quite here yet?
It’s never been due next year before. In 2010, it was due 2013: http://www.technologyreview.com/news/418370/memristor-memory-readie…
Going back to 2008, it seems it was just discovered. No timeframes. It’s not like hard AI, which has always been 15 years into the future. You’re just being silly.
It was discussed in 2010 and was supposed to be available in 2013? Wow, you proved your point.
No, really. This could be cool.
Basically, it sounds like they’re designing a persistent computing environment. I’d expect it’s all one, big, persistent VM space. You don’t need to store things in “files”, there is no “slow storage”, per se. All of the storage is the same speed. “Slow storage” would simply be off machine resources (like, say, Amazon S3 or something similar).
Having this environment changes how things work, how they’re designed, etc.
Now, if they can’t provide the data densities and reliability of a modern system, then it’s all kind of moot.
Many systems today run “mostly” in RAM, but they have to jump through hoops to ensure that they can recover their data when the RAM contents are lost. This system should be able to forgo that element of the design (save for some kind of offline backup).
But while they run in RAM, they’re not designed to be restarted from RAM. You can’t take a generic Linux kernel, even on a machine with battery backed up RAM, and restart it without losing all of the RAM contents. Not out of the box, at least, so other changes need to be made to reflect that new reality as well.
At the same time, who knows what speed this persistent memory will be. It may make keeping things in CPU cache even more important to get the system to perform well. Which means we’ll still have the “fast storage/slow storage” dichotomy, but the fast part will be vastly smaller. It won’t be 128GB of RAM against a 5TB hard drive, it’ll be a few MB of cache against a faster storage system.
So, anyway, sounds exciting to me!
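For what it’s worth, the “no slow storage” idea can be sketched in a few lines of Python. This is purely an illustration, with an ordinary file-backed mmap standing in for persistent memory; the path and size here are made up:

```python
import mmap, os, struct

# Illustration only: a file-backed mmap stands in for persistent memory.
# On memristor hardware the region would simply be main memory, and
# "reopening" it after a power cycle would be a no-op.
PATH, SIZE = "/tmp/persistent_region.bin", 4096

def open_region():
    # Create the backing file once, then map it read/write.
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(b"\x00" * SIZE)
    f = open(PATH, "r+b")
    return f, mmap.mmap(f.fileno(), SIZE)

# First "boot": write a counter directly into the region -- no save step.
f, region = open_region()
struct.pack_into("<Q", region, 0, 41)
region.close(); f.close()

# Second "boot": the value is still there, exactly where it was left.
f, region = open_region()
(value,) = struct.unpack_from("<Q", region, 0)
print(value)  # -> 41
region.close(); f.close()
os.remove(PATH)
```

On actual hardware of the kind described above, the `open_region` step would disappear entirely: the data structure would simply still be there at its old address.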
Somehow, this characterization sounds familiar. The IBM AS/400 platform, even though implemented with hard disks and (more or less) traditional RAM, already had the concept of “single-level storage” and a “database filesystem”. Storage of programs and data was organized in one single “address space”, without the differentiation of “this is in RAM, that is on the disk” or even “on _that_ one disk”. I’m not sure HP has something like that in mind, but at least there are conceptual similarities.
Of course it does. It creates new requirements, but allows you to drop other traditional concepts that you had to care for from the viewpoint of the OS, the I/O processes and so on.
Exactly. And unlike the AS/400, where the data actually being contained in the “RAM parts” of the single level storage had to be written to the disks (synced) and re-read when the system came up again (restored), using non-volatile storage only could make things easier.
Also consider a “re-use” of RAM that’s already been loaded with a program. If the program needs to be run again, it’s usually loaded again, instead of executing what’s already present in RAM: Consider this in a multi-user environment (or multi-process environment) where several users run many instances of the same program. On the other hand, there are security considerations. Think about memory barriers (for reading and especially for writing) and proper protection and separation…
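That “load once, run many” idea is roughly what shared mappings already give us today. A toy sketch (everything here is invented for illustration; real program images involve the loader, write protection, ASLR and so on), where a second user attaches to an already-resident image instead of reloading it:

```python
from multiprocessing import shared_memory

# Pretend this bytestring is a program image that was loaded once.
code_image = b"pretend this is a loaded program image"

# First "user" loads the image into a shared region.
shm = shared_memory.SharedMemory(create=True, size=len(code_image))
shm.buf[:len(code_image)] = code_image

# Second "user" attaches to the same region by name -- no reload, no copy.
shm2 = shared_memory.SharedMemory(name=shm.name)
first = bytes(shm2.buf[:7])
print(first)  # -> b'pretend'

shm2.close()
shm.close()
shm.unlink()
```

The security considerations mentioned above are exactly why this is not free: the second user gets the bytes with no protection or separation unless the OS enforces it.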
You need to use core memory if you want your RAM to keep its content when the power goes down, and resume without re-IPL. 🙂
Yep, that is what they are trying to do.
Those kinds of things are in development, though.
Linux supports kexec (Linux kernel starts a new Linux kernel directly, no powercycle with BIOS):
http://en.wikipedia.org/wiki/Kexec
They are working on PRAM:
http://lwn.net/Articles/557046/
http://criu.org/Usage_scenarios
Which is probably a much more realistic route.
Having to create a completely new type of operating system and having to change the way applications are ‘run’ seems less realistic to me.
Lennie,
They talk specifically about it being used for tmpfs, which would certainly have its uses; however, I’m hoping it will be more generic than that. There are plenty of other applications, such as databases, that need a “PRAM disk” device rather than a tmpfs file system.
I have a very specific use case in mind, actually: I run lots of KVM virtual machines. It would be very cool if these could be persisted in RAM across a kexec call on the bare metal to load a new kernel. My current procedure is to shut the instances down and reload the entire system. While it’s possible to snapshot the VM (to disk) and then resume it on top of the new kernel, doing this uses even more slow disk I/O overall than a full restart.
RAM is compelling for amazingly good latency and high transfer speeds, but volatility is a huge weakness. I’ve researched hardware solutions that tried to provide the best of both worlds.
http://www.ddrdrive.com/menu4.html
http://techreport.com/review/16255/acard-ans-9010-serial-ata-ram-di…
http://techreport.com/review/9312/gigabyte-i-ram-storage-device
I think they would have been more popular if they didn’t cost so much, especially when new. They are better than battery-backed RAID cards, because those typically have a small RAM cache and many don’t have persistent storage in case the battery runs out. These are safer because the RAM disk is written to flash, so nothing is lost even in a prolonged outage. Of course, these would be obsoleted by memristors.
You misunderstood. This is exactly the kind of use-case what PRAM is for.
What they are doing is:
CRIU is checkpoint and restore: http://criu.org/Main_Page
They take the running processes and checkpoint them so they later can be restored.
You can use checkpoint and restore for live-migration or to restore a long running batch job because it crashed.
Checkpointing is a lot like snapshotting.
You freeze the process or set of processes. You copy all the memory used by the process and you describe all the open file descriptors, sockets, etc.
Then later on you can restore the process or processes to their original state.
In the case of PRAM, they store the checkpoint data in a special tmpfs. That tmpfs is meant to persist after running a kexec. After the kexec they can restore the processes from tmpfs, which is obviously a lot faster than from HDD or even SSD.
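As a very rough userspace analogy (real CRIU checkpoints whole processes through kernel APIs; this just freezes and revives a toy state dict):

```python
import pickle

# Toy sketch of checkpoint/restore. The state dict and its fields are
# invented for illustration; a real checkpoint covers memory pages, file
# descriptors, sockets, and so on.
def checkpoint(state):
    # "Freeze": serialize everything the task needs in order to resume.
    return pickle.dumps(state)

def restore(blob):
    # "Restore": rebuild the state exactly as it was at freeze time.
    return pickle.loads(blob)

task = {"pc": 1024, "open_files": ["/var/log/app.log"], "counter": 7}

blob = checkpoint(task)  # with PRAM, this blob would sit in the persistent
task = None              # tmpfs and survive the kexec
revived = restore(blob)
print(revived["counter"])  # -> 7
```

The interesting part is exactly what the toy version hides: serializing kernel-side state (sockets, descriptors) is the hard bit, not the memory copy.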
Lennie,
Actually, I’d specifically not want to checkpoint the QEMU/KVM userspace process. The whole point of a VM is that the state remains *inside* the virtual machine, independently of the container it’s running in. The container’s implementation (i.e. KVM’s binary/libraries) should not be checkpointed, just its contents/state. After all, QEMU/KVM already has its own VM checkpointing, which would be far more appropriate to use IMHO.
Yes, that’s what I understood; I just wondered if you could use the PRAM facility without using tmpfs. Consider something like a 5GB database: it could use a memory-mapped RAM disk for zero overhead. But if you have to checkpoint the process into a persistent tmpfs file, not only is that a lot of unnecessary copying, but the growth in tmpfs is necessarily going to cause lots of swap activity if the RAM isn’t at least twice as large as the database, which defeats the whole point of PRAM. Hence why I was wondering if the mechanism could be used more generically.
Edit:
I guess you could actually mount a loopback block device backed by a tmpfs file. I don’t know if this indirection matters for performance, but it still seems odd to me that a pure block device wouldn’t be supported. I also don’t know how stable the tmpfs structures are; changes to the structure between kernels might cause problems that didn’t matter before.
If you think a VM has all the state inside and there is no outside state, you are just fooling yourself. 🙂
What I think is a more interesting/better-phrased question is:
Does (or rather, will) the kexec/PRAM implementation keep the memory of the original process in place and only save the state in the PRAM tmpfs, or do you have to copy all the data?
I don’t think it does that yet.
Wouldn’t be surprised if there are technical reasons for that.
Which obviously starts with: how do you want to handle reserving a piece of memory that isn’t really in use if you don’t know if it will be used after a kexec-warmboot.
Another thing I’m thinking about is:
does it use some kind of streaming behaviour?
So when you copy the memory content of a large process, does it destroy parts of that memory while it is being copied to tmpfs?
Kind of like moving memory.
It probably doesn’t in this patch.
So in my mind mmap is last on the list. 😉
EDIT:
Then again, after reading the description again it does say:
“This patchset implements persistent over-kexec memory storage or PRAM, which is
intended to be used for saving memory pages of the currently executing kernel
and restoring them after a kexec in the newly booted one. This can be utilized
for speeding up reboot by leaving process memory and/or FS caches in-place.”
Notice the in-place part.
I believe mmap is a function of the filesystem cache.
So it might work better than we both thought. 🙂
Lennie,
Why would you say that? It works for me; are there virtual components that give you problems? Unless there are extenuating circumstances, I still think checkpointing the VM is more ideal than checkpointing the process containing the VM. After all, you are going to need to update libvirt/QEMU at some point; CRIU merely keeps the same old version running indefinitely.
I really haven’t used it, but it seems the work CRIU has to do will be more complex, because lots of external state needs to get roped in (i.e. libvirt and service monitors, etc.). I think there would be subtle problems with scripts that allocate resources dynamically for VMs (say, tap4 or whatever); these must not get out of sync or else routes, iptables rules, and cleanup code (tunctl -d) will break. It seems to me that a fundamental problem with CRIU is that it can’t guarantee that a resource (like tap4) will be the same on restore as when the process was snapshotted. It’s very late here; I hope this is making sense.
http://criu.org/What_cannot_be_checkpointed
So it sounds like the mechanism itself may be generic, I just wonder if there’ll be a way for userspace to tap in.
The virtual components talk to paravirtualized devices which also have state.
CRIU is useful when you want to update the kernel of the host ‘without’ downtime (similar in downtime as livemigration).
This is part of the external state of a VM I mentioned.
This problem exists for VMs as well as anything else you’d want to checkpoint.
For a VM this might be less. But if you are check-pointing a Linux container, it becomes a lot more predictable. Similar to a VM.
Anyway, I do know they want to do this with KVM (keep memory in-place during kexec). If they want to use CRIU I don’t know, but using PRAM would seem logical.
No worries, you are making sense. 🙂
The list is getting smaller and smaller every year.
Obviously that list will never reach zero. For example when you connect a process to a real device. But you can do that with a VM too. Just look up PCI passthrough.
CRIU is just a tool that works from userspace and talks to all the APIs of the kernel to get the file-descriptor numbers and everything else that is needed to record and later restore a (set of) process(es).
So, yes, that is actually how it works.
Lennie,
I honestly think CRIU is a bad fit. We can create containers around our VM toolchains, but with the state information spanning processes, namespaces, kernel handles, and flags, there’s a lot of “incidental” state that doesn’t need to be there. Even a copy of the bash shell used to launch the VM in the container contains state and therefore needs to be checkpointed. The big problem I can’t see an answer to with checkpointed processes is that they continue to propagate indefinitely, without updates, until they are restarted. Native VM snapshots don’t suffer from this and can be updated independently. Also, having self-contained VMs plus virtual devices means they can migrate across architectures and operating systems.
I never mentioned they would use CRIU for KVM.
I’m just saying:
I think they want to do kexec-persist with KVM too.
I don’t know how they will do it.
It would make sense to me if they only use the PRAM part.
Lennie,
QEMU could do it today by writing its snapshot to PRAM, but this would require RAM to hold two copies of the VM’s data.
It turns out that QEMU lets you specify where to map the guest memory:
http://wiki.qemu.org/KQemu/Doc
So we could easily map the VM to a PRAM file from the start, but I don’t think the existing ‘savevm’ command allows us to snapshot the VM minus the full memory dump. It should be a minor change. Without a full memory dump to save/load, the snapshot operations should be nearly instant.
http://qemu.weilnetz.de/qemu-doc.html#vm_005fsnapshots
PRAM seems to me more like a way to mark which sections should be considered reserved at kexec time.
So you don’t need to do things ‘from the start’.
I’m wondering if Linux++ is going to “enhance” Linux the same way C++ did C… That plus-plus stuff also sounds so very 90s.
BeOS was meant to revolutionize many aspects of computing. Without strong marketing, even a better solution is doomed to fail, because consumers are not bright enough to understand the benefits of new technology.
Now, ladies and gentlemen, remember that HP had Palm’s WebOS in their hands, another ‘revolutionary’ operating system. How many times are they going to fail before they succeed?
They should call their hardware Mist.IC and their software Vap.OR.
Let’s hope they will never acquire Android or Windows!
Those were consumer focused systems. This is big big business.
If HP can develop a machine/OS that requires even 50% less power than the competition with the same output (they hope better), then the big cloud players like Amazon, Google and Facebook will be climbing over each other to get a bit of the action.
The competition will take years to compete effectively (I assume R+D = patents). IF (big if) HP price it competitively, they will re-invent the market almost overnight.
And later Apple will release ObjectiveLinux because they won’t be happy to use Linux++ like everybody else.
And outsmart everybody else in the process.
Yeah, just like they did when they revolutionised university textbooks.
Does this mean 2015 is the year of the Linux++ desktop?
Maybe if you’re the lucky owner of “an exotic new computer”.
But probably not.
Maybe if you’re really rich. I’d bet HP is aiming these things at the enterprise market and the cheapest one will be $100,000.
I’d love to be wrong though. It would be amazing if HP would release an enthusiast model at a few thousand. If it was $3,000 and could run most regular Linux software I’d get one.
If they could get one down to a few tens of thousands I bet there is at least one Linux group or a maker group (often the same people) who would band together to get one for their hackerspace. Assuming that the hardware really is as awesome as HP thinks it will be.
<nelson>Ha-Haaaa!</nelson>
Because we don’t have enough operating systems?
We don’t have a modern one that’s designed to work efficiently with only high-speed non-volatile memory.
It’s a different way of thinking about things. Really, we’re just now starting to get systems that deal with non-rotational disks in a sane way. This is kind of like that, but bigger.
Bill Shooter of Bul,
It really doesn’t sound all that much different from an ordinary block device with better performance/energy characteristics. It would be a great candidate for replacing NAND-flash SSDs. If it’s fast & reliable enough, maybe DRAM too such that we can make RAM drives persistent. Of course anything that comes along with better capacity/performance/energy conservation is good news. But beyond this I’m not quite sure what HP is going for with this “revolutionary new os”, is there anything of substance here or is it just marketing speak?
I hadn’t watched the video, it’s more interesting than the article.
http://www.hpl.hp.com/research/systems-research/themachine/
I’d love to have one and see it gain acceptance beyond just a tiny niche. We need more competition in system architectures. I can’t help but think it’s going to be very difficult for HP to gain market share against much cheaper commodity platforms. Even Intel couldn’t compete with x86.
If/when such hardware/software designs come to fruition, it will most certainly be revolutionary. Not only will the new hardware capabilities require entirely new architectures for the operating system but also new programming paradigms as well — memory-management will be handled completely differently.
However, any skepticism I have for this “revolution” lies in HP’s ability to deliver it first, in a timely fashion, and in products/services that are competitive with traditional data center computing in anything close to the near term.
tf123,
I’m not really convinced that it would “require entirely new architectures for the operating system”. Consider that existing architectures with memory mapped block devices already exist and CPUs can already access data/execute code directly as though they were memory. This is common in embedded devices that have very little ram. The only reason we don’t use this technique more today is because DRAM is so much faster. Memristors have the potential to be a high performance memory mapped block device executing at native memory speeds. This is obviously great for performance, but I actually don’t see that much needing to change to support them.
I hope HP does bring us new innovative designs for architecture/OS (and I hope it’s open source). If the benefits of memristors were to be exclusive to HP’s architecture, then that could be a killer feature for it. However my prediction is that, should memristors prove viable, they would quickly get manufactured for x86. And HP will find itself competing with the x86 systems of tomorrow rather than those of today. Even if HP brings about real architectural improvements, it still might not be enough to drive businesses/consumers away from x86, especially if they don’t produce something compatible.
http://www.zdnet.com/pictures/intels-victims-eight-would-be-giant-k…
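To illustrate why the existing abstractions may already stretch to cover this: a memory-mapped file is addressed directly like RAM, with no explicit read() into a buffer. A crude stand-in for the access-in-place idea (the path and contents are made up for illustration):

```python
import mmap, os

# Crude stand-in for access-in-place: index straight into a mapping instead
# of read()-ing into a buffer. On a memristor device the mapping would just
# be the storage itself, at memory speed.
path = "/tmp/xip_demo.bin"
with open(path, "wb") as f:
    f.write(b"ABCDEFGH" * 512)  # 4 KiB of pretend "on-device" data

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first4 = m[0:4]  # the kernel pages the data in on demand; no copy loop
    print(first4)    # -> b'ABCD'
    m.close()

os.remove(path)
```

The difference on memristor hardware would be purely quantitative: no page-in latency penalty, so the technique becomes attractive everywhere instead of only in RAM-starved embedded systems.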
For a long time, operating systems have presupposed that we retrieve data from a very fast and a very slow storage tier (memory and disk, usually). Because of that, many mechanisms for caching, non-blocking I/O, and I/O scheduling were created. They are deeply embedded in our kernels, and memristors have the potential to shake these things very hard, making them almost redundant in some cases. Even considerations about processor cache and RAM access will be affected, since removing the need for RAM refresh may positively impact memory access.
This obviously has positive consequences for databases and desktop environments.
So, if it delivers on its premises and promises, it will start a probably slow revolution.
HP/UX == HPSucks
Tandem left to wither
VMS ditto
They had some real jewels but their top brass couldn’t see past their [redacted] and now they suddenly realise that they are beholden to their Redmond and Intel overlords.
With the Itanium hardware clearly going nowhere, even slowly, they are in dire straits if they want to present a credible alternative to the gazillion Chinese (and Texan) box shifters.
As a soon-to-be HP pensioner, I just don’t think they have a credible vision for the future of the company. Sure, the stock price may be good, but we all know how fickle Wall St is.
It is getting close to the time when it might be better to take my pension while there is still a company.
With that attitude, HP will take over the market share of Android and iOS within a week.
pica
PS Just kidding
For everyone that is mentioning WebOS…that was consumer/printer. This is Enterprise with a big capital E. These people have different mentalities, bosses, budgets, timeframes etc.
That said, I will take bets from anyone willing to bet that they are going to release this hardware + written-from-the-ground-up software in 2015 (and no, a kernel patch on Linux doesn’t count).
Is it Windows 10? I bet it’s Windows 10.
Seriously, there’s no way an OS research/development group at HP could have survived the company’s flailing around over the past decade or so.
Given HP’s history, it is perhaps fitting that ‘revolution’ originally meant ‘going round in circles’.
Kinda seems like a return to their roots, if that’s possible. They’ve at least “tried” to kick the Windows habit with WebOS and HP-UX, and does anyone remember the HP NewWave shell for Windows? It was kinda like IBM’s Presentation Manager object-oriented OS/GUI, but running on top of Windows. Like Presentation Manager, I found it too far out there for me, and even today I don’t buy into the full object-oriented paradigm; sure, some things make sense as objects, but not everything. Call me old-fashioned, maybe because of my DOS roots, but I’d still rather open my files after I’ve loaded the app I want to work with, rather than deal with “templates” at the OS level, etc. But maybe that’s because no system ever went far enough. Like Apple’s OpenDoc: we never did get to that promise of ripping tools off a palette, the Lotus 123 spreadsheet tool for the numbers, the Harvard Graphics tool palette for charts, WordPerfect tools for the text. I don’t have a lot of hope for HP’s domination in the OS area given their history, but here’s a fun look back at NewWave at least. And I applaud HP for at least trying… maybe there’s still some of that old HP innovation left at the company after all.
http://toastytech.com/guis/nw.html
http://www.guidebookgallery.org/articles/hpsnewwave
http://www.guidebookgallery.org/articles/newwave40
I don’t think HP will ever have a chance; I think some Chinese company will take over HP in the long run. But I hope HP-UX becomes open source eventually, because I have very positive memories of using it.
“Revolutionary”. There is no “revolutionary” operating system. They all do the same basic stuff.
“Low-jitter” though is good.
Do that well and at least you have a well-functioning operating system.
https://www.facebook.com/notes/ove-karlsen/are-you-not-tired-of-late…
I think it is great to see HP innovating. But the “new operating system” thing I don’t see happening.
The article/communique talks of using Linux++ and then introducing a whole new OS. While I can see using Linux++ for the initial implementation (i.e. faster time to implementation, fewer resources required to roll out), implementing a new OS when people have already invested in something that actually works seems thermodynamically prohibitive from an intuitive standpoint. History seems to show that there is a huge activation energy to creating and implementing a new OS, especially if there is already one entrenched. Thoughts?
I submitted this to “OSnews”. They didn’t post it, and a user voted it down:
https://www.facebook.com/notes/ove-karlsen/are-you-not-tired-of-late…
That is how it has always been on OSnews, all the way back to when it was BeOSNews, with the favorite quote being Jean-Louis Gassée talking about his stiff nipples.
That is a real view on operating systems. Far above the level of these fanbois.
What can one say, other than that the internet has become a sectarian place, with little regard for real facts, just like any sect.
And in one article, an acidhead is a lone programmer “of God”, and ridiculed, while Eugenia Loli of BeOSNews/OSNews is known for hallucinogenic art, just as schizophrenic. That is why communication is so difficult with them.
Looking for the next revolution, they will never win. Thors hammer, or the psychiatric pill will not help them. Or indeed a phallos-object.
Plain and dry facts are what this is about. Which they ignore and do not post.
“Watch the revolution of SchizoNerd, King of the talking hammer, doing the magic penis-pulling trick he learned by tripping!” probably rather should be the tagline here, and in plenty of places. And then we also know schizophrenia is a very, very common disorder, from what we see online. And the apparatus that claims to help it is of course just another “Thor’s Hammer”. Maybe the most correct part of it is “cognitive therapy”: learn to recognize what is, rather than fantasies.
For instance John Lennon said he thought an elevator light was a fire. That is an elevator light. Etc. As to true religious experience, that has nothing to do with hallucinogenic drugs.
Schizophrenia is closely related to belief in several gods, and that is what synesthesia must induce as well.
“God has no partner in others”.
PBWY.