<br>

* the difference between a hash set and a hash map is that the set can never have repeated elements.

* to implement a HashSet data structure, you need to implement (a minimal sketch follows this list):

    - a hash function (to assign an address to store a given value), and

    - collision handling (since the nature of a hash function is to map values from a space A to a corresponding smaller space B, distinct values can land on the same address).
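a minimal sketch of these two pieces, assuming a fixed number of buckets, a modulo hash function, and plain python lists as chaining buckets (the class and method names are illustrative, not from this repo):

```python
class MyHashSet:
    """hash set built from a modulo hash function plus chaining buckets."""

    def __init__(self, size=1000):
        self.size = size                          # K predefined buckets
        self.buckets = [[] for _ in range(size)]

    def _hash(self, key: int) -> int:
        # maps a value from a large space A to the smaller space B of indices
        return key % self.size

    def add(self, key: int) -> None:
        bucket = self.buckets[self._hash(key)]
        if key not in bucket:                     # a set never repeats elements
            bucket.append(key)

    def remove(self, key: int) -> None:
        bucket = self.buckets[self._hash(key)]
        if key in bucket:
            bucket.remove(key)

    def contains(self, key: int) -> bool:
        return key in self.buckets[self._hash(key)]
```

collisions are handled by chaining: values that hash to the same index simply share the same bucket list.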

<br>

* a good choice for buckets is linked lists, as their time complexity for insertion and deletion is constant (once the position to be updated is located). you just need to be sure you never insert repeated elements.
* time complexity for search is O(N/K), where N is the number of all possible values and K is the number of predefined buckets (the average size of a bucket is N/K).
* space complexity is O(K+M), where K is the number of predefined buckets, and M is the number of unique values that have been inserted in the HashSet.
* lastly, to optimize search, we could maintain the buckets as sorted lists and obtain O(logN) time complexity for the lookup operation via binary search (see the sketch below). however, insert and delete become linear time, as elements would need to be shifted.
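a sketch of that trade-off using python's bisect module (the two helper names are just for illustration):

```python
import bisect

def sorted_bucket_contains(bucket: list, key: int) -> bool:
    """O(logN) lookup: binary search inside a sorted bucket."""
    i = bisect.bisect_left(bucket, key)
    return i < len(bucket) and bucket[i] == key

def sorted_bucket_add(bucket: list, key: int) -> None:
    """O(N) insert: bisect finds the slot fast, but inserting shifts elements."""
    i = bisect.bisect_left(bucket, key)
    if i == len(bucket) or bucket[i] != key:      # keep elements unique
        bucket.insert(i, key)
```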

<br>

* another option for a bucket is a binary search tree, with O(logN) time complexity for search, insert, and delete. in addition, a bst cannot hold repeated elements, just like a set (a minimal node-based sketch follows these bullets).
* time complexity for search is O(log(N/K)), where N is the number of all possible values and K is the number of predefined buckets (each bucket's tree holds about N/K values).
* space complexity is O(K+M), where K is the number of predefined buckets, and M is the number of unique values in the HashSet.
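a minimal node-based sketch of a bst bucket; note that the logarithmic bounds assume the tree stays reasonably balanced, which this plain version does not enforce:

```python
class TreeNode:
    def __init__(self, val: int):
        self.val = val
        self.left = None
        self.right = None

def bst_insert(root, val):
    """insert val into the bucket's bst; duplicates are silently ignored."""
    if root is None:
        return TreeNode(val)
    if val < root.val:
        root.left = bst_insert(root.left, val)
    elif val > root.val:
        root.right = bst_insert(root.right, val)
    return root                                   # val == root.val: present

def bst_contains(root, val: int) -> bool:
    """O(logN) search when the tree is balanced."""
    while root:
        if val == root.val:
            return True
        root = root.left if val < root.val else root.right
    return False
```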

<br>

#### implementing a hash map

<br>

* same as before, we need to tackle two main issues: hash function design and collision handling.
* a good approach is using a modulo function as the hash, with an array or linked list as the bucket. this time, there is no constraint against repeated values: only the keys must be unique, and putting an existing key simply overwrites its value (a minimal sketch follows).
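a minimal sketch under those assumptions, storing (key, value) pairs in list buckets (class and method names are illustrative):

```python
class MyHashMap:
    """hash map with a modulo hash and buckets of (key, value) pairs."""

    def __init__(self, size=1000):
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def put(self, key: int, value) -> None:
        bucket = self.buckets[key % self.size]
        for i, (k, _) in enumerate(bucket):
            if k == key:                          # key exists: overwrite value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key: int):
        for k, v in self.buckets[key % self.size]:
            if k == key:
                return v
        return None                               # key is absent

    def remove(self, key: int) -> None:
        idx = key % self.size
        self.buckets[idx] = [(k, v) for (k, v) in self.buckets[idx] if k != key]
```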

<br>

---
### examples