This repository was archived by the owner on Dec 11, 2020. It is now read-only.
54 changes: 54 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/1.md


**Type:** _Text + image_

**Title:** Time complexity

**Note:** Code and graph on the right of the screen and text on the left.

**Content:**

**Time complexity** allows us to describe **how the time taken to run a function grows as the size of the input of the function grows.**

Consider our function `valueSum()`. The amount of time this function takes to run (its execution time) grows as the number of elements (n) in the array increases. If the array `value` contained 1,000,000 elements, the function would take much longer to compute than if it contained only 10.

```python
value = [1, 2, 3, 4, 5]

def valueSum(value):
    sum = 0            #c1 runtime
    for i in value:
        sum = sum + i  #c2 runtime
    return sum         #c3 runtime
```

Adding all the individual runtimes, we'll get `f(n) = c2*n + c1 + c3`. Look familiar? It's in the form of a linear equation `f(n) = a*n + b` where a and b are constants. We can predict that `valueSum()` grows in linear time.
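As a quick sketch of our own (not part of the original slides), we can instrument `valueSum()` to count its operations and confirm the count matches `f(n) = c2*n + c1 + c3` with each constant equal to 1:

```python
# Instrumented valueSum(): counts one operation for the assignment (c1),
# one per loop iteration (c2*n), and one for the return (c3).
def valueSum_counted(value):
    ops = 0
    sum = 0            # c1: one assignment
    ops += 1
    for i in value:
        sum = sum + i  # c2: one addition per element
        ops += 1
    ops += 1           # c3: one return
    return sum, ops

_, ops_5 = valueSum_counted([1, 2, 3, 4, 5])
_, ops_10 = valueSum_counted(list(range(10)))
print(ops_5, ops_10)  # → 7 12, i.e. n + 2 in both cases
```

With c1 = c2 = c3 = 1, the count is exactly `n + 2`: each extra element adds one operation, which is linear growth.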

Here we can see the linear growth:

[//]: # "insert 'timecomplexity' image"

<img src="https://projectbit.s3-us-west-1.amazonaws.com/darlene/labs/Screen+Shot+2020-02-21+at+5.27.32+PM.png">



---

_Explanation of code 1_

`sum = 0` runs in constant time as it is simply assigning a value to the variable `sum` and only occurs once. For this line, the runtime is **c1**.

The for loop statement expresses a variable `i` iterating over each element in `value`. This creates a loop that will iterate *n* times since `value` contains *n* elements.

`sum = sum + i` occurs in constant time as well. Let's say this line runs in *c2* time. Since the loop body repeats however many times the loop is iterated, we can multiply *c2* by *n*. Now we have **c2 \* n** for the runtime of the entire for loop.

Lastly, `return sum` is simply returning a number and only happens once. So, this line also runs in constant time which we'll note as **c3**.

Time complexity answers the question: "At what rate does a function's runtime increase as its input increases?" **However**, it does not answer the question, "How long does it take for a function to compute?", because that answer depends on hardware, language, etc.

For time complexity, functions can grow in **constant time**, **linear time**, **quadratic time**, and so on.





29 changes: 29 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/10.md
**Type:** _Text + img_

**Title:** Big Omega and Big Theta

### Big Omega

Similar to **Big O**, we also have **Big Omega**. **Big Omega** is just the opposite; it is the lower bound of our function.

Let's look at our chocolate example again. The same conditions hold true. We can establish the lower bound of how much chocolate you have to be 3/4. Why is this a valid lower bound? Because you will eventually exceed having 3/4 of the chocolate. **Remember when looking for bounds, we want one that holds true after a certain point and not necessarily from the beginning.**

In terms of Python, let's say we have a `functionC` that runs in **O(n)** time. Since `functionC` grows in **O(n)** time, it can serve as the **Big Omega** for our function, `functionA`.

[//]: # "insert 'functionC vs functionA' image"

`functionA`'s runtime grows faster than `functionC` after a certain point. After that certain point, we know with absolute certainty that the runtime of `functionA` will never be faster than `functionC`.

### Big Theta

Lastly, we have **Big Theta**. **Big Theta** is a *tight* bound: it applies when the same term serves as both an upper bound (**Big O**) and a lower bound (**Big Omega**) on a function's growth. Going back to when we were determining the **Big O** notation for a function, we would write each line in the function in **Big O notation**.

For example,

$$
Time(Input) = O(1) + O(n) + O(1).
$$


We then dropped every term except the fastest growing term (**O(n)**) and let that term define the function. Because **O(n)** bounds the runtime both above and below, it is also the function's **Big Theta**.
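Formally (a standard definition, not stated explicitly in these slides), **Big Theta** ties the two bounds together:

$$
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
$$

For `valueSum`, `f(n) = c2*n + c1 + c3` is both **O(n)** and **Ω(n)**, so its **Big Theta** is **Θ(n)**.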

28 changes: 28 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/2.md
**Type:** _Text + img_

**Title:** _In what time does a function grow?_

**Content:**

```python
value = range(6)

def three(value):
    sum = 0        #c1
    return(sum)    #c2
```

`sum = 0` only repeats once, so we know this line will take a constant amount of time **c1**. `return(sum)` also only repeats once, so we can infer that this line carries out in constant time **c2**.

Hence, we predict that `three()` grows in **constant time**.

![img](https://camo.githubusercontent.com/59efcc40cb28ba7c6680b89f555239d26fe4ee12/68747470733a2f2f70726f6a6563746269742e73332d75732d776573742d312e616d617a6f6e6177732e636f6d2f6461726c656e652f6c6162732f53637265656e2b53686f742b323032302d30322d32312b61742b352e33302e30312b504d2e706e67)



---

**_Code 1 explained_**

Our prediction is true. Since both lines take constant time, adding them up is also still constant time, and our graph looks like a straight line.
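As a small sketch of our own (not from the slides), we can count the operations `three()` performs and see that the total never depends on how large `value` is:

```python
# Instrumented three(): the operation count is the same for any input size,
# which is exactly what constant time means.
def three_counted(value):
    ops = 0
    sum = 0       # c1: one assignment
    ops += 1
    ops += 1      # c2: one return
    return sum, ops

_, ops_small = three_counted(range(6))
_, ops_large = three_counted(range(1_000_000))
print(ops_small == ops_large)  # → True
```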

38 changes: 38 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/3.md
**Type:** _Text+img_

**Title:** _In what time does the function grow?_

```python
keypad = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9],
          [0]]

def listInList(keypad):
    sum = 0
    for row in keypad:  # outer loop: n iterations
        for i in row:   # nested loop!
            sum += i    # inner body runs a*n^2 times in total
    return sum          # c
```

Since our function has a line that repeats itself n^2 times, we predict that this function grows in **quadratic time**. Looking at the graph, we see our prediction is indeed correct. Quadratic runtime starts growing quite quickly in comparison to a linear runtime. Notice that the equation for a quadratic function is **a\*n^2 + b\*n + c** where a, b, and c are constants.



[![img](https://camo.githubusercontent.com/204ae4fc13b58550585a953739c400953d8d7c92/68747470733a2f2f70726f6a6563746269742e73332d75732d776573742d312e616d617a6f6e6177732e636f6d2f6461726c656e652f6c6162732f53637265656e2b53686f742b323032302d30322d32312b61742b352e33302e30372b504d2e706e67)](https://camo.githubusercontent.com/204ae4fc13b58550585a953739c400953d8d7c92/68747470733a2f2f70726f6a6563746269742e73332d75732d776573742d312e616d617a6f6e6177732e636f6d2f6461726c656e652f6c6162732f53637265656e2b53686f742b323032302d30322d32312b61742b352e33302e30372b504d2e706e67)



---

Again, we will be trying to predict what time this function grows in.

If we divide the function into parts, we get the lines `sum = 0`, `sum += i`, and `return sum`. `sum = 0` repeats once, so the time for this line to process is constant.

`sum += i` is in a *for loop* so it might be intuitive to think that this line only repeats *n* times. However, this for loop is nested within another for loop, so the total number of iterations would be *n^2*.

Lastly `return sum` repeats once, so this line will be processed in constant time.

If we graph this out, it will look like the graph below:

Looking at the graph, we see our prediction is indeed correct. Quadratic runtime starts growing quite quickly in comparison to a linear runtime. Something else to note is that the equation for a quadratic function is **a\*n^2 + b\*n + c** where a, b, and c are constants.
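To check the n^2 claim empirically, here is a small sketch of our own (using a square n×n grid rather than the keypad above, which has a short last row) that counts how many times the inner-loop body runs:

```python
# Count inner-loop iterations for an n-by-n grid: the body runs n^2 times.
def nested_ops(n):
    grid = [[1] * n for _ in range(n)]
    ops = 0
    total = 0
    for row in grid:          # outer loop: n iterations
        for i in row:         # inner loop: n iterations per row
            total += i
            ops += 1          # one count per inner-body execution
    return ops

print(nested_ops(3), nested_ops(6))  # → 9 36
```

Doubling n quadruples the count, the signature of quadratic growth.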
29 changes: 29 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/4.md
**Type:** _code centered_

**Title:** How can we tell the time complexity just from the function?

**Content:**

To summarize briefly so far, an algorithm or function's total runtime is the sum of the time it takes each line of its code to run. This runtime growth can be written as a function.

We can use the fastest growing term to determine the behavior of the equation; in this case, it is the **c2 * n** term. If we simply look at `Time(Input) = c2*n`, we know easily that the function is linear and that the runtime of `valueSum()` is *linear*.

```python
value = [1, 2, 3, 4, 5]

def valueSum(value):
    sum = 0            #c1
    for i in value:    #n*c2
        sum = sum + i
    return sum         #c3
```



---

Remember that elementary operations such as `+`, `-`, `*`, `/`, and `=` always take a constant amount of time to run. **Thus, when we see these operations, we assign them a constant *c* amount of time.**

From before, we analyzed `valueSum()` line by line and got `f(n) = c2*n + c1 + c3` when we added everything. This equation is actually a function of time where *n* is the input size and *f(n)* is the runtime; we can treat *f(n)* as *Time(Input)*.


34 changes: 34 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/5.md
**Type:** _code left/right_

**Title:** Finding time complexity

_insert slide content here_

Example 2:

Let's look at the function `three` again. We can treat *c1 + c2* as one constant and call it **c3**. Now we have `T(I) = c3`. This makes it very obvious that the runtime of `three()` is *constant*.

```python
value = range(6)

def three(value):
    sum = 0        #c1
    return(sum)    #c2
```

Example 3:

Let's do the same for `listInList()`. Putting it into a function, we get `Time(Input) = n^2 + c1 + c2`. If we isolate the fastest growing term, we get `T(I) = n^2`, which reveals the runtime to be *quadratic*.

```python
keypad = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9],
          [0]]

def listInList(keypad):
    sum = 0            #c1
    for row in keypad:
        for i in row:  #n^2
            sum += i
    return sum         #c2
```

70 changes: 70 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/6.md
**Type:** _link+code_

**Title:** _Space Complexity_

_insert slide content here_

Similar to time complexity, as a function's input grows very large, it will take up an increasing amount of memory; this is **space complexity**. In fact, a function's memory usage grows in a similar fashion to its runtime, and we describe the growth in the same way: as *linear*, *quadratic*, and so on.

```python
value = [1, 2, 3, 4, 5]

def valueSum(value):
    sum = 0            #c1
    for i in value:    #c3*n
        sum = sum + i
    return sum         #c2
```

To describe the amount of memory this function will require, we will write an expression similar to what was done for *time complexity*. We will call our expression: *Space(Input)* as space is a function of the input.

Our Space(Input) equation should look like:

$$
Space(Input) = c1 + c2 + c3*n
$$
Our equation represented by the dominant (fastest growing) term would simply be **c3 * n**.

**Let's look at another familiar function, `listInList`:**

```python
keypad = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9],
          [0]]

def listInList(keypad):
    sum = 0
    for row in keypad:
        for i in row:
            sum += i
    return sum
```

We work with two numbers, `i` and `sum`. These require **c1** and **c2** amount of memory.

Each element in `keypad` will require **c3** amount of memory. We can conclude that the array `keypad` requires **c3 \* n^2** amount of memory.

The Space(Input) equation for `keypad` looks something like this.

$$
Space(Input) = c1 + c2 + c3*n^2
$$
Our equation represented by the dominant term would be **c3 \* n^2**.

**Note:** Prioritize optimizing a function's runtime over optimizing its memory usage.

---

Let's try to find the Space(Input) equation for this function. To start, let's break down the function line by line. We are working with the integers `sum` and `i`, with space values **c1** and **c2** respectively. We also have a complex data structure, `value = [1, 2, 3, 4, 5]`, so we know that the amount of memory it requires is **c3 \* n** where *n* is the length of the array `value`.

Similar to time complexity, the amount of memory `keypad` will need is **c3 \* n \* n**, in other words **c3 \* n^2**. Primitive values such as integers, floats, strings, etc. will always take up a constant amount of memory. Complex data structures such as arrays take up `k * c` amount of memory, where *c* is the memory per element and *k* is the number of elements in the array. Going back to our `valueSum` function...
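As a rough illustration of our own (exact byte counts vary by Python version, so we only compare sizes), `sys.getsizeof` shows that a list's memory footprint grows with the number of elements it holds:

```python
import sys

# A longer list occupies more memory than a shorter one: the footprint
# grows with k, the number of elements, just as k * c predicts.
small_list = list(range(10))
large_list = list(range(1000))
print(sys.getsizeof(small_list) < sys.getsizeof(large_list))  # → True
```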

## Optimize Time or Optimize Memory?

The greater the input of a function, the greater amount of time and memory that the function will require to operate. A common question is **"Is it better to have our function run faster, but require more space; or should we have our function require less space, but run slower?"**

We can either have our function run faster but require more memory, or run slower but require less memory.

The general answer is to have our function run faster. It is always possible to buy more space/memory. It's impossible to buy extra time. Hence, the general rule is to **prioritize** having our function/algorithm run faster.

52 changes: 52 additions & 0 deletions Leetcode Workshops/Week 1/Act1_TimeAndSpaceComplexity/7.md
**Type:** slide text

**Title:** Big-O Notation

<!--title={Big-O Notation}-->

Now that we have an understanding of *time complexity* and *space complexity* and how to express them as functions of time, we can elaborate on the way we express these functions. **Big-O Notation** is a common way to express these functions.

Instead of using terms like *linear time*, *quadratic time*, *logarithmic time*, etc., we can write these in **Big-O Notation**. Depending on the rate at which a function's runtime grows, we assign it a **Big-O** value. **Big-O** notation is incredibly important because it allows us to take a more mathematical and calculated approach to understanding the way functions and algorithms grow.

**Big-O notation** is also useful because it simplifies how we describe a function's time complexity and space complexity. So far, we have defined a function of time as a number of terms. With Big-O notation, we are able to use only the dominant, or fastest growing, term!

## How to Write in Big-O Notation

To write in **Big-O Notation**, we use a capital O followed by parentheses: **O()**.

What is inside the parentheses tells us what time the function grows in. Let's look at an example.

```python
value = [1, 2, 3, 4, 5]

def valueSum(value):
    sum = 0
    for i in value:
        sum = sum + i
    return sum
```

Going back to our `valueSum` function, we had previously expressed how this function's runtime grows as its input increases. We had written the expression *Time(Input)* and the components it's made of.

**Time(Input) = c1 + c2*n + c3**

`c1` comes from the line `sum = 0` . We know `sum = 0` will always take the same amount of time to run because we are assigning a value to a variable. We call this *constant time* because no matter what function this line is in, it will take the same amount of time to run.

Since **c1** runs in constant time, we would write it as **O(1)** in **Big-O Notation**. Notice that the line repeats only once.

`sum = sum + i` is responsible for **c2** in our equation. We multiply **c2** by *n*, however, because this line is repeated *n* times. The runtime of this line increases linearly with the number of elements added to `sum`.

In **Big-O Notation**, this line would be rewritten as **O(n)**, which tells us that it runs in *linear time*. We use the dominant term, which is **c2 \* n**. Since **c2 \* n** and **n** behave the same (linearly), we can drop the coefficient *c2*.

Lastly, `return sum` is written as **O(1)** because it runs in constant time.

Rewriting our `valueSum` function but in **Big-O notation**, we would have:

```
Time(Input) = O(1) + O(n) + O(1).
```

We've written the lines of the function in **Big-O Notation**, but now we need to write the function itself in **Big-O Notation**. To do that, we choose the term that grows the fastest (the dominant term) as *n* gets very large, disregard any coefficients of that term, and assign that term to the whole function.

For `valueSum`, the fastest growing term is **c2 * n** . If we ignore **c2** , we are left with just `n`. We now know that the time complexity of `valueSum` is **O(n)** and the runtime of `valueSum` grows in a linear fashion.
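To see why dropping the constants is safe, here is a quick sketch of our own (the constants c1, c2, c3 are arbitrary made-up values) showing that for large *n*, `f(n) = c2*n + c1 + c3` behaves just like its dominant term *n*:

```python
# With arbitrary constants, doubling n roughly doubles f(n) once n is large,
# which is exactly the behavior of the dominant term n alone.
def f(n, c1=3, c2=5, c3=2):
    return c2 * n + c1 + c3

ratio = f(2_000_000) / f(1_000_000)
print(round(ratio, 3))  # ≈ 2.0, regardless of c1, c2, c3
```

The constants shift and scale the curve, but they never change its linear shape, so **O(n)** captures everything that matters about the growth.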
