A customer had a program that opened a very large spreadsheet in Excel. Very large, like over 300,000 rows. They then selected all of the rows in the very large spreadsheet, copied those rows to the clipboard, and then ran a program that tried to extract the data. The program used the GetClipboardData function to retrieve the data in Rich Text Format. What they found was that the call to GetClipboardData was returning NULL. Is there a maximum size for clipboard data?

No, there is no pre-set maximum size for clipboard data. You are limited only by available memory and address space. However, that's not the reason why the call to GetClipboardData is failing.
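For anyone curious what the failing call looks like in practice, here is a minimal sketch of the kind of code the customer's program was presumably running. The error reporting is my own addition, not from the article; RTF is a registered clipboard format rather than one of the built-in CF_* constants:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* "Rich Text Format" is the registered name of the RTF format. */
        UINT cfRtf = RegisterClipboardFormatW(L"Rich Text Format");

        if (!OpenClipboard(NULL)) {
            fprintf(stderr, "OpenClipboard failed: %lu\n", GetLastError());
            return 1;
        }

        HANDLE h = GetClipboardData(cfRtf);
        if (h == NULL) {
            /* The failure the customer saw: a null handle instead of data. */
            fprintf(stderr, "GetClipboardData failed: %lu\n", GetLastError());
        } else {
            /* The handle is owned by the clipboard; lock it, read it, unlock it. */
            const char *rtf = (const char *)GlobalLock(h);
            if (rtf != NULL) {
                printf("Got %lu bytes of RTF\n", (unsigned long)GlobalSize(h));
                GlobalUnlock(h);
            }
        }

        CloseClipboard();
        return 0;
    }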
Edge cases are so much fun to read about – they give so much insight into how certain things are done programmatically, even for a non-programmer such as myself.
To save anyone else from navigating the blog archive to find the follow-up articles (because they didn’t notice the next/previous links :-P):
– Part 1: https://devblogs.microsoft.com/oldnewthing/20220608-00/?p=106727 (i.e. what Thom linked)
– Part 2: https://devblogs.microsoft.com/oldnewthing/20220609-00/?p=106731
– Part 3: https://devblogs.microsoft.com/oldnewthing/20220610-00/?p=106737
Kuraegomon,
I’m looking for them and still don’t see them… oh wait now I see, they’re random links in the middle of the blog text. That’s a surprisingly bad way to do it!
It’s Microsoft. It should be expected!
They show up at the bottom as “Read Next”
Large, yes, but I’ve seen bigger, haha. Using the clipboard for that seems like a bad approach IMHO. I almost always convert to/from CSV and probably would not have thought of automating the clipboard. I’ll give him props for creativity 🙂
Go and ask any programmer the question: “If I asked you to write a program that copies a directory from one location to another, would you think it’s trivial?” Most programmers will answer “yes”. But here is a small selection of edge cases that could go wrong (see the sketch after this list):
– Not enough space on target drive
– Maximum entries per directory exceeded for some directory on target
– Maximum file name length exceeded for some file on target
– Maximum path length exceeded for some file on target
– Source or target folder is a network location and the network failed
– Source folder is an optical disc and a CRC error happened.
– Source folder is an optical disc and the user just ejected it.
– Source folder is a USB drive and the user just pulled the USB cord.
– And last, but certainly not least, the antivirus thinks one of the files is a virus and the file just disappeared from your hands. This leads to all kinds of “interesting” issues, up to and including filesystem corruption.
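To make that concrete, here is a minimal sketch of copying a single file with CopyFileW while actually checking why it failed. The error-code mapping is illustrative, not exhaustive; several of the edge cases above surface as distinct Win32 error codes:

    #include <windows.h>
    #include <stdio.h>

    /* Copy one file, reporting rather than ignoring failure. */
    BOOL CopyOneFile(const wchar_t *src, const wchar_t *dst)
    {
        if (!CopyFileW(src, dst, /*bFailIfExists=*/TRUE)) {
            DWORD err = GetLastError();
            switch (err) {
            case ERROR_DISK_FULL:            /* not enough space on target */
            case ERROR_HANDLE_DISK_FULL:
                fwprintf(stderr, L"%ls: target drive is full\n", dst);
                break;
            case ERROR_FILENAME_EXCED_RANGE: /* name or path too long for target */
                fwprintf(stderr, L"%ls: name/path too long\n", dst);
                break;
            case ERROR_CRC:                  /* e.g. unreadable optical media */
                fwprintf(stderr, L"%ls: read failed with a CRC error\n", src);
                break;
            case ERROR_FILE_NOT_FOUND:       /* source vanished: ejected media,
                                                pulled USB cord, antivirus... */
            case ERROR_NOT_READY:
                fwprintf(stderr, L"%ls: source disappeared or is not ready\n", src);
                break;
            default:
                fwprintf(stderr, L"copy failed with error %lu\n", err);
                break;
            }
            return FALSE;
        }
        return TRUE;
    }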
That last one happened to me while backing up to Google Drive. It somehow caused exFAT to revert to a previous version of its FAT (exFAT keeps two FATs), which lost some newly-added files. So now I’m verifying everything against the second copy of the directory I keep on the laptop, and then I’ll have to resync the entire folder (by unsyncing it and re-adding it to the list of folders to be backed up).
This highlights the need to have well-documented API calls and code samples that are meant to be used in production code, not just newbie examples. Unfortunately, no OS to date meets this fundamental requirement.
If you portray the problem as trivial, then the solution you are offered will also treat it as trivial. Nobody would waste time on any of those issues unless there was a particular reason to.
And let’s not bs ourselves: copying a file or even a directory from one place to another IS trivial. If something goes wrong, then it means the copy operation would fail regardless of whether or not you were prepared for the problem.
This does not mean, however, that you can just assume it will succeed and skip error checking before executing further actions.
And of course you forgot one of the most common issues: you’re copying a file from a modern filesystem to a FAT32 filesystem and the filename contains characters that are not allowed on FAT32.
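For illustration, a quick (and deliberately incomplete) pre-flight check for that case might look like the sketch below. It only screens individual characters and ignores things like reserved device names (CON, PRN, …) and trailing dots or spaces:

    #include <windows.h>
    #include <wchar.h>

    /* Characters rejected in FAT32 long file names
       (also disallowed by Win32 naming rules generally). */
    static BOOL IsValidFat32Char(wchar_t c)
    {
        return c >= 0x20 && wcschr(L"\"*/:<>?\\|", c) == NULL;
    }

    BOOL IsCopyableToFat32(const wchar_t *name)
    {
        for (; *name != L'\0'; ++name) {
            if (!IsValidFat32Char(*name)) {
                return FALSE; /* the copy would fail or the file needs renaming */
            }
        }
        return TRUE;
    }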
Personally, I’m a bit smitten with Raymond Chen’s work; he is a technology advocate of exceptional quality. Some time ago Chen did a series of blog posts on PowerShell and scripting that not only changed the way I work, it changed the way I think about tasks. At the time I was moving from the hardware side of things into more software development, and Chen’s insights gave me a new perspective on the software/firmware development process.
cpcf – a quick Google search for “Raymond Chen PowerShell blog” isn’t leading me anywhere useful, and he’s been posting on his blog for _NINETEEN_ years, so his archives are _huge_ 😀
Is there any chance you can provide a link to any of the blogs in the series, or even give me a year (or year/month) of publication to help me narrow my search?
I promise if I can find this stuff I’ll link it, but for now I can’t find them either and unfortunately I didn’t keep a copy.
I’ve a strong recollection they might not have been part of his regular blog, because I learnt about Chen’s own site via a circuitous route after reading the articles. They linked into the early-days version of the Doctor Scripto site, and then back to a site with more of Chen’s articles. I know a lot of this was reorganised a few years ago when roles changed, so I suppose a lot of redundant stuff was kiboshed or links have disappeared.
I wouldn’t be surprised to find a lot of it in his book if you can find a copy at a reasonable price, but I haven’t read that either, it’s a todo thing!
https://devblogs.microsoft.com/search?query=powershell&blogs=%2Foldnewthing%2F&sortby=relevance