Intel’s Chief Engineering Officer Murthy Renduchintala is departing, part of a move in which a key technology unit will be separated into five teams, the chipmaker said on Monday.
Intel said it is reorganizing its technology, systems architecture and client group. Its new leaders will report directly to Chief Executive Officer Bob Swan.
Ann Kelleher, a 24-year Intel veteran, will lead development of the 7-nanometer and 5-nanometer chip process technologies. Last week, the company said the smaller, faster 7-nanometer chipmaking technology was six months behind schedule and that it would have to rely more on outside chipmakers to keep its products competitive.
Heads were going to roll eventually after so many years of 10 nm and now 7 nm delays. Intel is in a very rough spot.
Ex-CEO Andy Grove, who is associated with at least two famous technology management books, has left the building.
https://www.barnesandnoble.com/w/high-output-management-andrew-s-grove/1000970775
https://www.barnesandnoble.com/w/measure-what-matters-john-doerr/1127681175
Is it true that Intel uses vitality curve management? (AKA fire the lowest 10% every year, or something similar). When you look at employee reviews you see certain patterns of management indicating rot from the top.
I hadn’t heard, but Google says you’re right. Stack ranking is barbaric. It was one of Jack Welch’s many “gifts” to the working world. Reengineering, managed earnings for the benefit of Wall Street, “#1 or #2 (in each line of business) or we’re out”, and PCBs in the Housatonic River were some of the others.
Never mind that Jack Welch wanted to trim an overstaffed semi-obsolete conglomerate in a way that seemed fair, not run the leader in chip fabrication and design (which is what Intel used to be).
So… Intel uses stack ranking. That explains a lot.
In an industry where talent is scarce and employee knowledge is valuable, kicking out or “rehabilitating” 10% of the workforce (why not 5% or 20%?) is beyond silly. It leads to a mass exodus of the most capable people, and it turns everyone else into paranoid psychos who try to game the ranking system and make others fail by withholding information and help. Since performance (and the resulting ranking) is relative, it’s easier to make others fail than to race ahead of them.
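To make the mechanics concrete, here is a toy sketch of forced distribution (all numbers are hypothetical; this is not any company’s actual review process). The point it illustrates: because the cut is purely relative, somebody gets flagged even when everyone’s absolute performance is strong.

```python
import random

# Toy model of forced-distribution ("stack") ranking.
# All numbers are hypothetical, purely for illustration.
random.seed(42)

TEAM_SIZE = 50
CUT_FRACTION = 0.10  # bottom 10% flagged every cycle, no matter what

# Imagine a strong team: everyone scores 80-100 on some absolute scale.
scores = {f"engineer_{i}": random.uniform(80, 100) for i in range(TEAM_SIZE)}

# The ranking is purely relative: sort by score, flag the bottom slice.
ranked = sorted(scores.items(), key=lambda kv: kv[1])
flagged = ranked[: int(TEAM_SIZE * CUT_FRACTION)]

print(f"Flagged {len(flagged)} of {TEAM_SIZE}, despite a team minimum of "
      f"{min(scores.values()):.1f}/100:")
for name, score in flagged:
    print(f"  {name}: {score:.1f}")
```

Nothing in the cut depends on the absolute scores, only on the ordering, so lowering a colleague’s score is exactly as effective as raising your own. That is the incentive problem described above.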
One of the reasons I stay in the company I am in is the fact it’s anti-ranking. We have 99 problems as a company (SAFe included) but ranking isn’t one.
CEOs tend to be sports fans, and they’re probably taken by the sports analogy: teams need to relentlessly cull their weakest performers to make room for new talent. The problem is that for most knowledge workers outside of sales, there aren’t many reliable metrics that can be used for objective evaluation, comparable to, say, batting average or on-base percentage. The ones they’ve come up with for techies, like LOC/day or closed scrum stories, are easy to game and therefore fatally flawed. So now they’re creating a more intensely political environment.
I can’t speak for now, but they didn’t when I was there. The bottom 10% would be put on a “Corrective Action Plan”, but that didn’t mean they would be fired. It would take multiple bottom-10% reviews before they would consider it, and they took a lot of mitigating factors into account. It wasn’t like MS, where the bottom X% would just be culled.
It’s been a while since I worked there, and I don’t have that many friends left who work at Intel (it’s like the army: you either do a couple of tours there, or you’re a lifer), but none of them have mentioned it to me.
In 2015-2016, Brian Krzanich removed the “below expectations” grade and expanded the “improvement required” grade to 5% of the employee base. So, while fewer employees are subject to the CAP, the bottom 5% are very likely to end up leaving (either by quitting or by failing the stricter CAP for IR; and whoever gets a second IR in a row, well…)
Regardless of correction plan vs. straight firing, this is forced-distribution ranking. It creates a culture of fear and low transparency. It absolutely would cause a business to make promises that management discovers, at the last minute, it can’t keep, and then someone gets fired and the cycle repeats. Alternatives are “Hire all A players” (Jobs) and “Get the right people in the right seats on the bus” (Jim Collins). Both aim to get it right with no bottom-10% anything. Of course, no position works out indefinitely. Regardless, these alternative philosophies create a striving-for-greatness culture where, in theory at least, everyone can win.
You may be on to something. How many books has the new leader written, or had ghostwritten?
Obviously this says more about a toxic corporate culture than it does about a firm’s capabilities.
Fundamentally, there will be some bonus-counting executive whose peak wage depends on cost cutting, and who has held real progress back so that he can buy a new boat at Christmas.
Focused too heavily on cost, rather than progress!
And now, we will also pay the price:
https://www.extremetech.com/computing/313222-intel-amd-reportedly-fighting-for-capacity-at-tsmc
How did you come to that conclusion?
The article you linked speculates that Intel will tap TSMC for its upcoming GPU chips, and that Intel will absorb the slack created when TSMC was forced to cancel its Huawei contracts.
Honestly, this looks like a golden parachute for TSMC, and it ensures fab diversity for all of the fabless chip designers.
Flatland_Spider,
I thought both would be outsourced to TSMC, although many sources aren’t clear about it.
I learned something new though:
https://wccftech.com/intel-ponte-vecchio-gpu-tsmc-6nm/
Previously I thought TSMC’s (and AMD’s) 7nm was better than Intel’s 10nm, but apparently they don’t use the same methodology for process size. I had no idea that “x-nm” and “y-nm” were not directly comparable… this makes things way more confusing. (For what it’s worth, the two are reportedly close in density: Intel’s 10nm is commonly cited at roughly 100 million transistors per mm², versus roughly 91 million for TSMC’s 7nm.)
I can see why it’s good for TSMC, but does it ensure fab diversity? I’d like to see less concentration in the semiconductor fab market.
I seem to remember something about Intel zigging and everyone else zagging, with zagging being the better road and the zig being part of Intel’s current fab problems (if I recall, Intel bet on very aggressive 10nm scaling without EUV while TSMC and Samsung moved to EUV). Or something like that.
There are three high-end fab companies: Intel, Samsung, and TSMC. TSMC struggling might lead to them exiting the high-end fab market like GlobalFoundries did in ~2018. Fabs are capital- and time-intensive ventures. Losing a step hurts; ask AMD.
Global Foundries is still around, but they aren’t competing to be on the cutting edge like the others.
TSMC struggling? They’re literally swamped with demand.
Don’t care; AMD chips are available at competitive prices despite Intel’s struggles. Thanks, AMD, for remaining fair even when the competition is down.
Just imagine what Intel would have done in the same situation… (clue, they’ve done it in the past already)
Kochise,
I won’t defend Intel’s negative behaviors. But still, logically it doesn’t make sense not to care about viable competition. Now that AMD and Intel (and ARM vendors, including Apple) will be relying on the same fab to manufacture chips, it’s likely to affect both prices and supply going forward.
There are some sellers I avoid because I don’t like them as much as others, yet I still recognize that if they were to shut down and shrink the overall supply, it would negatively affect me through larger crowds and higher prices at the sellers I do like. In other words, even people who have a preference benefit from healthy competition. For this reason, I do care what happens to Intel, even if my next build uses AMD.
For now. AMD pushed their prices up to reach parity with Intel the last time they had an advantage.
Pay companies to exclude AMD chips? Create the P4, or a next-gen architecture which fails miserably? Pick a DRAM technology which causes the industry to recoil violently in disgust? Create an architecture logjam with all of their generations hitting at the same time? Try to diversify into RAM and flash?
https://www.youtube.com/watch?v=Skry6cKyz50
Don’t assume that AMD’s prices are what they are because AMD execs have a sense of “fairness”. Every corporation prices its products according to what the market is willing to pay or what is needed to cover materials and R&D costs, whichever of the two is higher. There are edge cases related to product dumping (in consoles, for example) and delusion (HTC; I say that as an HTC fan), but in general the rule of thumb is the one I just mentioned.
AMD is pricing their products the way they do because they lag in single-core performance, because some older games have optimisation issues on AMD, and, most importantly, because these are the prices their brand recognition can command.
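For what it’s worth, that rule of thumb is easy to state as a toy model (the numbers below are made up, purely illustrative):

```python
# Toy model of the pricing rule of thumb above: charge whatever the market
# will bear, but never less than materials plus amortized R&D.
# All figures are invented for illustration.

def price(willingness_to_pay: float, unit_cost: float, amortized_rnd: float) -> float:
    """Charge the higher of the market price and the cost floor."""
    cost_floor = unit_cost + amortized_rnd
    return max(willingness_to_pay, cost_floor)

# A strong brand can charge well above its cost floor...
print(price(willingness_to_pay=500.0, unit_cost=120.0, amortized_rnd=80.0))  # 500.0
# ...while a weaker brand sits pinned near the floor, "fair"-looking or not.
print(price(willingness_to_pay=180.0, unit_cost=120.0, amortized_rnd=80.0))  # 200.0
```

On that model, “fair” prices are just what a brand with weaker pricing power is forced to charge.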
If only we didn’t have to worry about physics, or if only some silicon alternative would pay off.
Flatland_Spider,
Silicon became the material of choice because it was cheap and abundant, not necessarily because it was the best semiconductor. There are promising alternatives.
https://spectrum.ieee.org/semiconductors/materials/germanium-can-take-transistors-where-silicon-cant
It’s probably the case that it would require tons of investment before paying off. There’s risk, and we might just hit similar barriers at scale. Somewhere I’ve read that we couldn’t make transistors significantly smaller, but it’s possible that alternative substrates could be clocked higher.
Photonic transistors could be interesting as well.
https://phys.org/news/2014-04-photonic-transistor-electronic.html
To be honest, though, I think there are gains to be had just by going back to the mindset of early software engineering, when optimization wasn’t a luxury. We’ve been fortunate to have ultra-fast hardware to mask software inefficiencies, but that may not be sustainable as hardware improvements face diminishing returns. I see sequential CPUs continuing to become less relevant for high-performance applications in favor of architectures with more parallelism. I’d be on board with general-purpose CPUs with built-in FPGAs and open-source toolkits… somebody make it happen 🙂
Supposedly really old stuff is germanium, and geek folklore has it that’s what made the old electronics so good. 😉
I keep hearing about how graphene and carbon nanotubes are the future.
Definitely.
I’ve worked in HPC, and it’s not the hardware. It’s the people writing the programs.
There are some really smart CS people in HPC, but most of the stuff is written by scientists who specialize in some domain that isn’t CS. They aren’t going to write highly parallel code unless there are frameworks or languages which make it dead simple.
Fortran does a really good job of making it easy to write highly parallel code.
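I can’t show Fortran here, but as a sketch of what “dead simple” means in practice, here’s the same idea in Python: the hypothetical simulate() stands in for whatever the domain scientist actually computes, and the serial loop becomes a parallel map with essentially one changed line.

```python
from multiprocessing import Pool

def simulate(param: float) -> float:
    # Stand-in for an expensive, independent domain computation.
    return sum(param * i for i in range(100_000))

if __name__ == "__main__":
    params = [0.1 * i for i in range(32)]

    # Serial version: a plain loop over independent work items.
    serial = [simulate(p) for p in params]

    # Parallel version: same shape, one extra line.
    with Pool() as pool:
        parallel = pool.map(simulate, params)

    assert serial == parallel  # identical results, computed across cores
```

Fortran gets the same effect natively with do concurrent, or a single OpenMP directive on the loop, which is a big part of why it persists in HPC.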
Also, power density is a problem.
Supposedly Intel Xeons will show up with FPGAs.
https://www.eetasia.com/intel-doubles-down-on-ai-with-latest-xeon-scalable-fpga/
Yeah, I don’t know anyone in that industry, but what you are saying makes sense.
This is the first time I’ve heard of that; it sounds awesome. I can see myself using it, and I hope it’s actually affordable for people like me. There are some low-cost FPGA boards today, but they’re older and not particularly powerful. The FPGA market needs economies of scale to kick in; it’s still too niche.
Everybody loves to dream about all the cool things they can do with FPGAs, and they never do anything.
You can do plenty of cool stuff with those cheap FPGAs, and there have been FPGAs with ARM cores for a while now.
The thing is that, at the end of the day, there are very few, very specialized use cases where high-end FPGAs make financial and practical sense.
javiercero1,
I am not “everybody”; I really enjoy the low-level stuff 🙂 But I recognize that many software developers don’t extend their skillsets beyond high-level coding.
Yeah, cheap FPGAs are commonplace in cheap products, but they aren’t really breaking much ground. They lack capacity and performance, don’t have high-speed transceivers, and can’t even keep up with USB3. It’s the high-end FPGAs that are really interesting.
I’d like to have one of these boards with more serious power.
https://www.xilinx.com/products/boards-and-kits/device-family/nav-virtex-7.html
I think they make lots of sense for both low-power and low-latency AI applications. However, cutting-edge FPGAs are still expensive, and we need more FOSS tools to make them more accessible.
You can get a used high-end FPGA development system from a couple of generations ago fairly cheap; they have lots of capacity and plenty of high-speed interfaces.
javiercero1,
Sure, the high-end FPGAs from 10 years ago are cheap, in particular things like the Spartans. If you want to get your feet wet, you can do it, but the applications I’d like to target need more memory, bandwidth, and performance than what you get with those cheap FPGAs.
Think about what’s happened with SBCs. For many years we’ve had dirt-cheap microcontrollers, but they were never as powerful as a computer. Now, in the era of the Raspberry Pi, we have real, functional computers with gigs of RAM at $10-$20 price points… wow! This is nothing short of amazing, and I’d like to see the FPGA market undergo a similar transformation in both affordability and accessibility (in the same sense that the Raspberry Pi made SBCs more accessible).
https://www.youtube.com/watch?v=sfTXZP2DB20