5-10 KLoC is an important waterline. In a 500 LoC throwaway, it's easy to juggle strings back and forth between loosely structured dictionaries and barely organized arrays, keep mutable global state in randomly picked places, and still get away with it safely1. At 5-10 KLoC such soup becomes unmanageable, and that's when the author begins to shape it into crisp domain objects and clearly defined design patterns.
-- This is Haskell
data Person = Person {
    name :: String,
    age  :: Int
}

joe = Person "Joe Smith" 29
# This is Python
class Person(object):
    def __init__(self, name, age):
        self.name = name
        self.age  = age
 
joe = Person("Joe Smith", 29)
// And this is JavaScript
var joe = {
    name: "Joe Smith",
    age:  29
};
Take a look at the code snippets above and tell me what you see.
Well, there's no rocket science involved: it's simply a tiny value object created in three different languages. One thing you should have noticed is that the amount of structure is the same. In all three cases there is a data type with two attributes, one representing the person's name and the other the person's age.
The point I'm making here is that regardless of whether you program in JavaScript or PHP or Python or Ruby, your program doesn't magically become "untyped" or "typeless", as if you could leave all that typing burden behind you and be happy.
There is always a certain domain model of your problem, and it's there even before you write the first line of code2. There is also always a certain component model of your program, and it's there as soon as you have written the first line of code. These are assumptions about the presence and structure of things, and typically you expect them to be aligned across the codebase.
These assumptions are types, and you're not gonna avoid having them. The difference is whether or not you have proper tooling support to define them and to automatically validate their consistency.
For example, consider that you are writing a function that queries a database. In case of success it should, quite obviously, return a value object built from the database output. What do you think it should do in case of failure? One option is to throw an exception. Another is to return null. Yet another is to return an empty value object with (or without) a "not fully initialized" flag.
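The three options can be sketched roughly as follows. This is an illustration, not a recommendation: the function names, the dict standing in for a database, and the `initialized` flag are all hypothetical.

```python
# Hypothetical sketch of the three failure-signaling options.
# "db" is just a dict keyed by id; a real database is assumed away.
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

def fetch_person_throwing(db, person_id):
    """Option 1: raise an exception on failure."""
    row = db.get(person_id)
    if row is None:
        raise KeyError(person_id)
    return Person(*row)

def fetch_person_null(db, person_id):
    """Option 2: return None (the Python "null") on failure."""
    row = db.get(person_id)
    return Person(*row) if row is not None else None

def fetch_person_flagged(db, person_id):
    """Option 3: return an empty value object carrying an 'initialized' flag."""
    row = db.get(person_id)
    person = Person(*row) if row is not None else Person("", 0)
    person.initialized = row is not None
    return person
```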
This is not a rhetorical question about the one right way to signal a problem: there is no obviously wrong way to do it, and all three may be viable in certain situations.
But now consider a more sophisticated case where the function makes two queries, one to fetch "essential" data (e.g. content) and the other to get "optional" information (e.g. advertisements), and you decide it's a good idea to (a) return a complete value object when both succeed, (b) throw an exception when the first one fails, or (c) return a stripped-down value object when the "essential" fetch succeeds and the "optional" one fails.
The next question is: how is the user of this function supposed to figure out these three alternatives?
By the way, in my opinion this is the most important question of the whole domain of software engineering: how to communicate the technical design ideas between different people?3
This is a big theme that deserves a separate essay; for now let's focus on the specific question of how to inform the user of this function about its non-trivial codomain.
In a statically typed language like Haskell it's relatively easy: declare the return type as something along the lines of Either ErrorMessage (Essential, Maybe Optional). That would both force the function's caller to properly handle the different cases while traversing this data structure, and force the function's author to confine herself to these three options and not throw in some "smart hacks" for edge cases. In a statically typed language like Java this pattern is also implementable (though it might involve substantially more boilerplate than in Haskell).
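For contrast, an Either-style return type can be approximated in Python as a tagged union of two classes (the names `Failure`, `Success`, and the stub `fetch` are made up for this sketch), but the key difference remains: nothing forces the caller to check which case they got.

```python
# Rough Python analogue of an Either-style return type.
# Unlike Haskell, no tool here forces the caller to handle every case.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Failure:
    error_message: str

@dataclass
class Success:
    essential: str                  # placeholder for the "essential" payload
    optional: Optional[str] = None  # None plays the role of Haskell's Nothing

def fetch(essential_ok, optional_ok):
    # Hypothetical fetch: boolean flags stand in for the two real queries.
    if not essential_ok:
        return Failure("essential fetch failed")
    return Success("content", "ads" if optional_ok else None)

# The caller must remember, by convention only, to dispatch on the tag:
result = fetch(True, False)
if isinstance(result, Failure):
    outcome = result.error_message
else:
    outcome = (result.essential, result.optional)
```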
In a dynamically typed language like Ruby or JavaScript, pretty much the only available option is to explain these behavioral specifics in the API documentation and then hope for the best.
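In practice that means spelling out the three-outcome contract in prose, as in this hypothetical example (the function name and the injected query callables are invented for illustration), and trusting every caller to read it:

```python
def fetch_page(page_id, fetch_content, fetch_ads):
    """Fetch content plus advertisements for a page.

    Returns (content, ads) when both queries succeed;
    raises LookupError when the "essential" content query fails;
    returns (content, None) when only the "optional" ads query fails.

    Nothing but this docstring tells the caller about these three cases.
    """
    content = fetch_content(page_id)
    if content is None:
        raise LookupError(page_id)
    try:
        ads = fetch_ads(page_id)
    except Exception:
        ads = None
    return (content, ads)
```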
And API documentation (or whatever you call the thing that explains the code's internal life) is a quirky beast.
First, it always exists. Proof: there is something that you give a new starter on your project so they can figure out what's going on. It might be a full-fledged document, inline interface descriptions (e.g. Javadocs/Docstrings/Haddocks, plus hopefully meaningful naming of things), oral lore, unit tests or even the source code itself. This is your spec.
Second, it always follows a standard. Proof: unless you have enough imagination (and dedication) to code every function and write every piece of documentation in an entirely unique way, there are rules and principles that you gravitate towards. They are your standard.
Third, it's always better when the standard you use is formal and widely adopted. Personal sense of taste is okay; project-wide coding guidelines are better; the shared tradition of a whole community of a language's users is great; and principles embedded into the language itself are the best4.
Fourth, an API spec is virtually never entirely in sync with the source code unless it is the source code. Documentation may be outdated and silent on features introduced after the latest "we must write documentation!" rush. The test suite may be shallow and optimistic, not exploring every intricate edge case. Memories may fade. Only code never lies.
Given all that, it's only natural to want to express as much of a program's semantics as possible using the native constructs of the programming language, and this idea of expressiveness has nothing to do with static typing per se. It's just how our brains operate. It's more convenient for any of us to reason about a "list of transactions" than about a "variable with some stuff in it, maybe a list, hopefully of transactions," and this is what makes the function
def calculate_aggregate_statistics(list_of_transactions):
    statistics = Statistics()
    for transaction in list_of_transactions:
        ...
    return statistics
more readable than
def calculate(x):
    ret = {}
    for i in x:
        ...
    return ret
Again, remember: your program is inherently typeful. By and large, the only choice you have is whether you write down the types of variables and functions once, or re-infer them from scratch every time you revisit this piece of code.
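"Writing the types down once" can be done even in Python via its optional annotations; here is a sketch of the transaction example in that style (the `Transaction` and `Statistics` shapes are invented, since the essay elides them):

```python
# A possible annotated version of the transaction example; the attribute
# names and the aggregate computed are assumptions, not the essay's code.
from typing import List

class Transaction:
    def __init__(self, amount: float) -> None:
        self.amount = amount

class Statistics:
    def __init__(self, total: float, count: int) -> None:
        self.total = total
        self.count = count

def calculate_aggregate_statistics(transactions: List[Transaction]) -> Statistics:
    # The types are stated once; a reader (or a checker) need not re-infer them.
    total = sum(t.amount for t in transactions)
    return Statistics(total, len(transactions))
```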
But, once your program's type model is developed enough to enable you to instantly see during the cozy just-after-lunch code review that
    ...
    xml_content = parse_xml(text_data)
    statistics = calculate_aggregate_statistics(xml_content)
    ...
is wrong and that
def calculate_aggregate_statistics(list_of_transactions):
    statistics = Statistics()
    for transaction in list_of_transactions:
        ...
        if do_smart_workaround():
            return "Haha, didn't expect to see me?"
        ...
    return statistics
is wrong too, then it feels very logical to delegate this tedious work to the instrumental toolchain... Which brings you straight into the club of static typers, welcome!
No, seriously: if you write clean code, if your data model is well thought out, if the domains and codomains of your functions are clearly defined, then your code is effectively statically typed already. Using an appropriate language would merely give you an opportunity to express all of that formally. Then, as a birthday gift, you get automated consistency checking5 that allows you to spend less time carefully reviewing code and put the effort into more valuable things instead.
On the other hand, if your code is crap, then blaming the compiler for refusing to compile is... quite natural.
P.S. You might argue that having automated tests would also help to check consistency and correctness. That's another big theme that deserves separate essays.
1 Corollary: any success story about "how easy it was to produce a 500 LoC program in language X" is largely irrelevant unless it's targeted at non-developers (e.g. physicists coding their models in Python). A professional developer can successfully write a 500 LoC program in anything.
2 For example, if you develop a banking application, you are going to have things like "account", "client" and "transaction."
3 Also keep in mind that "yourself now" and "yourself in six months" are often "different enough" to make it a problem even for single-person projects.
4 Say, in Java camel-casing is a shared tradition and lack of multiple inheritance is an embedded principle.
5 Though, strictly speaking, not of correctness. I.e. it doesn't catch all the bugs, only a particular subset of them.