This is only necessary ... (Score:1)

because MS Windows is so wedded to the Intel architecture that the concept of recompiling for a different architecture is alien to Microsoft developers. This is not an issue in the Unix/Linux world, where separate repositories for different architectures are the norm and developers do not bat an eyelid. Oh, well - maybe MS will catch up with *nix one day.
Re: (Score:3)

That's unfortunately not the case. In ye olden days of PowerPC, you'd regularly find things that couldn't port across due to various #ifdefs or daft macros or what have you. That was a while ago, but you can also see it from people recompiling stuff for the new macOS ARM machines (which are, after all, Unix) and finding the same issues.
Re:This is only necessary ... (Score:5, Insightful)

Yes: you do get people who write stuff without a thought for portability, but that is very much the exception rather than the rule. Don't get me wrong: writing code that is portable between CPU architectures takes discipline: heed compiler warnings; take care over int sizes; worry about byte ordering; ... but you get used to it and it becomes easier - I have been doing it for ~35 years. There are similar problems making code portable between different *nix and other operating systems - but people do it all the time.
Re: (Score:3)

But I'm working with a huge load of technical debt where the original authors had no concept of portability - not just porting to new machines, but portability of data across a network. And the bizarre thing is that they had already migrated some of the code from a different architecture. A lot of this is a mixture of self-taught programmers, never having grown up in a Unix world where portability is a concern even at the application layer, and the startup model of getting code to work now instead of worrying about tomorrow.
Re: (Score:2)

And we're not even talking about the hand-coded assembly that's written to accelerate certain portions of modern-day programs. Some things perform better if you write them in pure assembly rather than hoping that your compiler is good enough. That code, of course, has to be rewritten in ARM assembly, and that's not always easy to do.