+/* INTERFACE:
+
+   Hash tables are a technique for implementing a mapping between
+   objects.  Provided a good hash function is used, they offer
+   constant-time (on average) storing and retrieval of information.
+   Duplicate keys are not allowed.
+
+ The basics are all covered. hash_table_new creates a hash table,
+ and hash_table_destroy deletes it. hash_table_put establishes a
+ mapping between a key and a value. hash_table_get retrieves the
+ value that corresponds to a key. hash_table_exists queries whether
+ a key is stored in a table at all. hash_table_remove removes a
+   mapping that corresponds to a key.  hash_table_map lets you
+   iterate over all the entries in a hash table.  hash_table_clear
+   removes all the entries from the hash table.
+
+ The number of mappings in a table is not limited, except by the
+ amount of memory. As you add new elements to a table, it regrows
+ as necessary. If you have an idea about how many elements you will
+ store, you can provide a hint to hash_table_new().
+
+ The hashing and equality functions are normally provided by the
+ user. For the special (and frequent) case of hashing strings, you
+ can use the pre-canned make_string_hash_table(), which provides the
+ string hashing function from the Dragon Book, and a string equality
+ wrapper around strcmp().
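For reference, the Dragon Book string hash (hashpjw, due to P. J. Weinberger) is commonly formulated as below.  Treat this as a sketch: the exact shifts and masks used by make_string_hash_table() may differ.

```c
#include <assert.h>

// hashpjw from the Dragon Book -- a sketch; the constants actually
// used by make_string_hash_table() may differ.
static unsigned long
string_hash_sketch (const char *s)
{
  unsigned long h = 0, g;
  for (; *s; s++)
    {
      // Mix in the next character, then fold the top four bits
      // back into the hash so they are not lost to overflow.
      h = (h << 4) + (unsigned char) *s;
      if ((g = h & 0xf0000000UL) != 0)
        {
          h ^= g >> 24;
          h &= ~g;
        }
    }
  return h;
}
```

Equal strings always produce equal hashes, satisfying the rule stated below for user-supplied hash functions.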
+
+ When specifying your own hash and test functions, make sure the
+ following holds true:
+
+ - The test function returns non-zero for keys that are considered
+ "equal", zero otherwise.
+
+ - The hash function returns a number that represents the
+ "distinctness" of the object. In more precise terms, it means
+ that for any two objects that test "equal" under the test
+ function, the hash function MUST produce the same result.
+
+ This does not mean that each distinct object must produce a
+ distinct value, only that non-distinct objects must produce the
+ same values! For instance, a hash function that returns 0 for
+ any given object is a perfectly valid (albeit extremely bad) hash
+ function.
+
+     The above rule is quite easy to satisfy.  For example, if your
+     test function compares strings case-insensitively, all your
+     hash function needs to do is lower-case the string characters
+     before calculating the hash.  That way you have guaranteed
+     that a change in case will not result in a different hash.
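As a concrete sketch of that example (the helper names are hypothetical, not part of this interface): a case-insensitive test function paired with a hash that lower-cases each character before mixing it in, so any two keys that test equal necessarily hash equal.

```c
#include <assert.h>
#include <ctype.h>

// Hypothetical test/hash pair obeying the rule above: both sides
// lower-case each character, so equal-testing keys hash equally.
static int
casefold_test (const char *a, const char *b)
{
  for (; *a && *b; a++, b++)
    if (tolower ((unsigned char) *a) != tolower ((unsigned char) *b))
      return 0;
  return *a == *b;              // non-zero iff both strings ended together
}

static unsigned long
casefold_hash (const char *s)
{
  unsigned long h = 0;
  for (; *s; s++)
    h = h * 31 + tolower ((unsigned char) *s);  // lower-case before mixing
  return h;
}
```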
+
+ - (optional) Choose the hash function to get as good "spreading" as
+ possible. A good hash function will react to even a small change
+ in its input with a completely different resulting hash.
+ Finally, don't make your hash function extremely slow, because
+ you're then defeating the purpose of hashing.
+
+ Note that neither keys nor values are copied when inserted into the
+ hash table, so they must exist for the lifetime of the table. This
+ means that e.g. the use of static strings is OK, but objects with a
+ shorter life-time need to be copied (with strdup() or the like in
+ the case of strings) before being inserted. */
+
+/* IMPLEMENTATION:
+
+ All the hash mappings (key-value pairs of pointers) are stored in a
+ contiguous array. The position of each mapping is determined by
+ applying the hash function to the key: location = hash(key) % size.
+ If two different keys end up on the same position, the collision is
+ resolved by placing the second mapping at the next empty place in
+ the array following the occupied place. This method of collision
+ resolution is called "linear probing".
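The probing scheme just described can be sketched on a toy fixed-size table.  This is illustrative only (keys compared by pointer identity, no growth, no deletion); the real layout defined below differs.

```c
#include <assert.h>
#include <stddef.h>

#define TOY_SIZE 8
static const char *toy_slots[TOY_SIZE];   // NULL means empty

// Return the slot where KEY lives or should go: start at
// hash % size and walk forward until we hit KEY or an empty slot.
static size_t
toy_probe (const char *key, unsigned long hash)
{
  size_t i = hash % TOY_SIZE;
  while (toy_slots[i] != NULL && toy_slots[i] != key)
    i = (i + 1) % TOY_SIZE;                // linear probing step
  return i;
}
```

Two keys with the same hash land in adjacent slots: the second one walks past the occupied slot to the next empty one.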
+
+ There are more advanced collision resolution mechanisms (quadratic
+ probing, double hashing), but we don't use them because they
+ involve more non-sequential access to the array, and therefore
+   worse cache behavior.  Linear probing works well as long as the
+   load factor (the ratio of stored entries to table size) is kept
+   below 75%.  We make sure to grow or rehash the hash table
+   whenever this threshold is exceeded.
+
+   Collisions make deletion tricky: lookup stops at the first empty
+   slot, so truly emptying a slot could break a probe chain and make
+   mappings displaced beyond it by earlier collisions unreachable.
+   That's why hash_table_remove only marks the spot as deleted
+   rather than really making it empty. */
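A minimal sketch of why that "deleted" marker (often called a tombstone) is needed: lookups must skip deleted slots instead of stopping there.  The sentinel value and names here are illustrative, not this file's actual ones.

```c
#include <assert.h>
#include <stddef.h>

#define TS_SIZE 8
#define TS_EMPTY   ((const char *) 0)
#define TS_DELETED ((const char *) -1)   // tombstone: slot once held a mapping
static const char *ts_slots[TS_SIZE];

// Walk the probe chain; tombstones are skipped, and only a truly
// empty slot terminates the search.
static int
ts_lookup (const char *key, unsigned long hash)
{
  size_t i = hash % TS_SIZE;
  while (ts_slots[i] != TS_EMPTY)
    {
      if (ts_slots[i] == key)
        return 1;
      i = (i + 1) % TS_SIZE;
    }
  return 0;
}
```

If removal stored TS_EMPTY instead of TS_DELETED, a key that had been displaced past the removed slot by an earlier collision would wrongly appear absent.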
+
+struct mapping {