Why take the trouble to initialize a static array's elements?

Unless initialized, an array, like a variable of any other type, contains junk: unpredictable values left in memory by whatever last used that location. Initializing the array ensures that its contents have a distinct and predictable initial state.

Would you need to initialize the elements in a dynamic array for the same reasons as mentioned in the first question?

Actually, no. A dynamic array such as std::vector is quite a smart array: each element is constructed from the value you supply when you insert it, so it never contains junk. You need to set specific initial values only when your application's logic requires them.

Given a choice, would you use C-style strings that need a null-terminator?

Yes, but only if someone places a gun to your head. C++ std::string is a lot safer and supplies features that should make any good programmer stay away from using C-style strings.

Does the length of the string include the null-terminator at the end of it?

No, it doesn't. The length of the string "Hello World" is 11, including the space but excluding the null character at the end.

Well, I still want to use C-style strings in char arrays defined by myself. What should be the size of the array I am using?

Here you run into one of the complications of using C-style strings. The size of the array should be one greater than the length of the largest string it will ever contain, so that there is room for the null character at the end. If "Hello World" were the largest string your char array would ever hold, the array would need to be 11 + 1 = 12 characters long.

If you need to allow the user to input strings, would you use C-style strings?

No. C-style strings are notoriously unsafe at handling user input, because nothing stops the user from entering a string longer than the array, overwriting memory beyond its bounds.

You forget to end your C-style string with a null-terminator. What happens when you use it?

That depends on how you use it. If you use it in a cout statement, for instance, the display logic reads successive characters looking for a terminating null, crosses the bounds of the array, and possibly causes your application to crash.

Why do some programs use unsigned int if unsigned short takes less memory and compiles, too?

unsigned short typically has a maximum value of 65,535 and, when incremented past it, wraps around to zero. To avoid this behavior, well-written applications choose unsigned int whenever it is not certain that the value will stay well below this limit.

My application divides two integer values 5 and 2:
int num1 = 5, num2 = 2;
int result = num1 / num2;
On execution, the result contains value 2. Isn't this wrong?

Not at all. Integers are not meant to contain decimal data. The result of this operation is hence 2 and not 2.5. If 2.5 is the result you expect, change all data types to float or double. These are meant to handle floating-point (decimal) operations.

I am writing an application to divide numbers. What's a better suited data type: int or float?

Integer types cannot hold the decimal part of a quotient, which is usually exactly what a user dividing two numbers cares about. So you would use float (or double).