C#’s Belt and Suspenders

While discussing my (fairly minor) annoyances with C# with a colleague, I finally figured out the nub of the problem: C# tries to protect you from certain bugs (overflowing bounds and buffers in particular) in so many different ways that the relevant warnings and fixes become irritation and noise, probably making code less reliable rather than more.

For example, why does assigning a double-precision value to a float stop the code from compiling? What problem does this solve?
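To make the complaint concrete, here's a minimal sketch of the error in question (the offending line is commented out so the rest compiles):

```csharp
// Narrowing a double to a float requires an explicit cast.
class NarrowingDemo
{
    static void Main()
    {
        double d = 0.1;
        // float f1 = d;        // error CS0266: cannot implicitly convert 'double' to 'float'
        float f2 = (float)d;    // compiles: the explicit cast acknowledges the precision loss
        float f3 = 0.1f;        // compiles: the 'f' suffix makes the literal a float
        System.Console.WriteLine(f2 + " " + f3);
    }
}
```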

Another example: why does moving values from unsigned to signed integer types stop the code from compiling? The main reason to worry about integer overflow, or about getting a negative number where you expect a non-negative one, is array indexing, but C# already does array bounds checking at runtime, so why do this at all?
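Again, a minimal sketch of the friction, alongside the runtime bounds check that exists either way:

```csharp
// Converting between uint and int requires an explicit cast,
// even though a bad array index is caught at runtime anyway.
class SignednessDemo
{
    static void Main()
    {
        uint count = 2;
        // int i = count;       // error CS0266: cannot implicitly convert 'uint' to 'int'
        int j = (int)count;     // compiles only with the explicit cast

        var items = new[] { "a", "b", "c" };
        System.Console.WriteLine(items[j]);   // prints "c"
        // items[3] would throw IndexOutOfRangeException at runtime,
        // so the bounds are checked regardless of the cast ceremony above.
    }
}
```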

It seems to me that C# ought to treat all numeric literals with decimal points or exponents as double precision by default and allow assignment to float with implicit casting. This emphasizes correctness (precision) over performance by default, which is as it should be. Similarly, assigning an int to a uint should simply be allowed. (If desired, the compiler could insert code that throws at runtime when the value is negative.)
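To be clear about what I'm suggesting, here's roughly the runtime check the compiler could insert for the int-to-uint case, expressed with C#'s existing checked/unchecked conversions (a sketch of the proposal, not current language behavior):

```csharp
// What the proposal amounts to: let 'uint u = n;' compile, with the compiler
// optionally emitting a checked conversion like the one below.
class CheckedAssignmentDemo
{
    static void Main()
    {
        int n = 42;
        uint u = checked((uint)n);      // fine: 42 fits in a uint
        System.Console.WriteLine(u);

        int m = -1;
        // uint v = checked((uint)m);   // would throw System.OverflowException at runtime
        uint w = unchecked((uint)m);    // what a plain cast does today: wraps to 4294967295
        System.Console.WriteLine(w);
    }
}
```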

Now, I should say I’m using Unity’s (i.e. Mono’s) C# compiler here, so perhaps Microsoft deals with these annoyances at the IDE level (e.g. by offering to automatically fix the relevant problems) in much the same way as Apple has leveraged compiler technology to provide automatic fixes, optimizations, and suggested improvements in Xcode.