Added coloring #26

Open: wants to merge 36 commits into base: main
Commits (36)
71b9d5a
url configured
MeliAndryut Feb 28, 2023
1171ef0
feat: Add masking and visual illusions tasks
dmateusb Mar 15, 2023
f65758e
Merge pull request #1 from Andryut/feature/david
Andryut Mar 15, 2023
305bf6c
feat: Add images
dmateusb Mar 15, 2023
c922632
Merge pull request #2 from Andryut/feature/david
Andryut Mar 15, 2023
6985dba
New exercises pages
MeliAndryut Mar 15, 2023
9b24748
Add my latest exercises and Andres's
dmateusb Mar 15, 2023
f7ccd7d
Masking, mach bands and Depth perception added
MeliAndryut Mar 15, 2023
ebe3457
Merge branch 'main' into feature/david
Andryut Mar 15, 2023
130c60b
Merge pull request #3 from Andryut/feature/david
Andryut Mar 15, 2023
3960663
Images and other little things
MeliAndryut Mar 15, 2023
307c46f
Layout changes added
MeliAndryut Mar 27, 2023
83c7e06
Layout changes added
MeliAndryut Mar 27, 2023
6e0c9bc
Color modes pictures added
MeliAndryut Mar 27, 2023
863c066
Color modes research added
MeliAndryut Mar 27, 2023
d7cac9e
Mach Bands and Masking updated
MeliAndryut Mar 28, 2023
b150f84
Depth perception parallax updated
MeliAndryut Mar 28, 2023
1735a28
Make all prettier
dmateusb Mar 29, 2023
3634f51
Prompts added to every exercise file
MeliAndryut Mar 29, 2023
3a736f0
Team info added
MeliAndryut Mar 29, 2023
60c29ca
feat: add explanations and code snippets
dmateusb Mar 31, 2023
017fb6f
Merge branch 'main' of https://github.com/Andryut/showcase into featu…
dmateusb Mar 31, 2023
02b4f3d
Added coloring
aortegal1 Apr 11, 2023
aa0f6d8
Merge pull request #4 from Andryut/feature/david
Andryut Apr 12, 2023
99af275
Fixing merge problems with last fetch
aortegal1 Apr 13, 2023
896cfb0
Added temporal coherence
aortegal1 Apr 14, 2023
5f6d706
Merge branch 'main' of https://github.com/aortegal1/showcase
aortegal1 Apr 14, 2023
ecaafb8
fix merge issues
aortegal1 Apr 14, 2023
8eab285
fix
aortegal1 Apr 14, 2023
0e05545
ignore toml
aortegal1 Apr 14, 2023
f77142b
toml deleted
aortegal1 Apr 14, 2023
2778584
deleted
aortegal1 Apr 14, 2023
61729e5
deleted
aortegal1 Apr 14, 2023
d96688c
deleted
aortegal1 Apr 14, 2023
0126581
added index
aortegal1 Apr 14, 2023
376733c
Merge branch 'main' into Coloring-branch
aortegal1 Apr 14, 2023
1 change: 0 additions & 1 deletion .gitignore
@@ -1,5 +1,4 @@
/public/
/resources/_gen/
# Backup Files
*.md~
# Logs
13 changes: 10 additions & 3 deletions content/_index.md
@@ -3,13 +3,20 @@ title: Introduction
type: docs
---

# Showcase Template
# Visual Computing 2023-1
This is a workbook for the Visual Computing class at the National University of Colombia.

Welcome to the [gohugo](https://gohugo.io/) template to create rich content [academic reports](https://www.wordy.com/writers-workshop/writing-an-academic-report/) having [p5.js](https://p5js.org/) sketches.
# Team
* Andrés Ortega -
* David Mateus - [email protected]
* Andryut Huertas - [email protected]

## Hacking

Create a github [user account](https://docs.github.com/en/get-started/signing-up-for-github/signing-up-for-a-new-github-account) or [organization](https://docs.github.com/en/organizations/collaborating-with-groups-in-organizations/creating-a-new-organization-from-scratch), and install [git](https://git-scm.com/) and the [gohugo](https://gohugo.io/) [static site generator](https://jamstack.org/generators/) then:
Welcome to the [gohugo](https://gohugo.io/) template to create rich content [academic reports](https://www.wordy.com/writers-workshop/writing-an-academic-report/) having [p5.js](https://p5js.org/) sketches.

Install the [gohugo](https://gohugo.io/) [static site generator](https://jamstack.org/generators/) then:


1. Log in to GitHub and create a new repo [from this template](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-repository-from-a-template#creating-a-repository-from-a-template) into your user account or organization. **Don't rename the repo but leave it as 'showcase'**.
2. Grant [read and write permissions](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#configuring-the-default-github_token-permissions) to your newly created repo workflow. **Observation:** If you're deploying an organization site, this permission needs to be granted within the organization settings.
89 changes: 89 additions & 0 deletions content/docs/exercises/Coloring.md
@@ -0,0 +1,89 @@
# Coloring for colorblindness
>**Prompt:** Implement a color mapping application that helps people who are color blind see the colors around them.
>
To address this problem, we first need to understand colorblindness and then pick one of the many forms in which it can manifest.

## What is colorblindness?
Color blindness is classified according to the number of _color channels_ available to a person for conveying color information: [monochromacy](https://en.wikipedia.org/wiki/Monochromacy), [dichromacy](https://en.wikipedia.org/wiki/Dichromacy), and anomalous trichromacy. Refer to the [Ishihara test](https://en.wikipedia.org/wiki/Ishihara_test) for a color vision deficiency test.

For this particular p5.js implementation, I chose ***deuteranopia*** as my colorblindness type of interest. **Deuteranopia** is a kind of green color blindness that nullifies a person's ability to recognize the color green, so their color spectrum runs only from blue hues to yellow ones, as shown below.


![Rainbow as perceived with deuteranopia](https://upload.wikimedia.org/wikipedia/commons/5/5a/Rainbow_Deuteranopia.svg)

Now, diving into the p5.js implementation, I used a fairly simple mapping function as the backbone of the color mapping process. This mapping function receives an image as input; to process the image in the three color channels, we first need to load its pixels into an array, which is done with:

```
newImg.loadPixels();
```

Now that the pixels are loaded into an array, it is possible to separate them by color channel given the RGB order. After having the individual pixel values in their respective channels, we can finally map them into ***deuteranopia-friendly*** colors with the following transformation:

```
// deuteranopia-friendly color transformation
let rNew = 0.625 * r + 0.375 * g + 0.001 * b;
let gNew = 0.7 * r + 0.3 * g + 0 * b;
let bNew = 0 * r + 0.3 * g + 0.7 * b;
```

The specific values used in the color mapping function are derived from studies [1] on the spectral sensitivity of the deuteranopia visual system. By applying these values to the original RGB color, the resulting color is transformed to a deuteranopia-friendly color that is easier for individuals with deuteranopia to differentiate from other colors.

It's important to note that the color mapping function I provided assumes a moderate degree of deuteranopia and may not be appropriate for all cases. Additionally, color perception varies from person to person, so the resulting transformed color may not perfectly simulate deuteranopia for all individuals.

After this transformation takes place, the image is rebuilt with the new pixels and displayed beside the original one.
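The per-pixel transform above can be sketched as a standalone function (plain JavaScript outside p5.js; the function name and the clamping helper are our own additions, not part of the sketch):

```javascript
// Map an [r, g, b] pixel (0-255 each) to a deuteranopia-friendly color
// using the same coefficients as the transformation above.
function mapDeuteranopia([r, g, b]) {
  const clamp = (v) => Math.min(255, Math.max(0, Math.round(v)));
  return [
    clamp(0.625 * r + 0.375 * g + 0.001 * b),
    clamp(0.7 * r + 0.3 * g + 0 * b),
    clamp(0 * r + 0.3 * g + 0.7 * b),
  ];
}
```

For pure red `[255, 0, 0]` this yields `[159, 179, 0]`, shifting the red toward a hue a deuteranope can still distinguish.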

Finally here's the complete P5.js implementation:


{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/1.js" width="500" height="540" >}}

## [Color modes](https://ww2.lacan.upc.edu/doc/intel/ipp/ipp_manual/IPPI/ippi_ch6/ch6_color_models.htm)
>**Prompt:** Research other color models such as HSL, HSB, XYZ.
>

### HSB, and HSL Color Models

The HSL (hue, saturation, lightness) and HSB (hue, saturation, brightness) color models were developed to be more “intuitive” for manipulating color and were designed to approximate the way humans perceive and interpret color.

Hue defines the color itself. The values for the hue axis vary from 0 to 360 beginning and ending with red and running through green, blue and all intermediary colors.

Saturation indicates the degree to which the hue differs from a neutral gray. The values run from 0, which means no color saturation, to 1, which is the fullest saturation of a given hue at a given illumination.

The intensity component, lightness (HSL) or brightness (HSB), indicates the illumination level. Both vary from 0 (black, no light) to 1 (white, full illumination). The difference between the two is that the maximum saturation of a hue (S=1) occurs at brightness B=1 (full illumination) in the HSB color model, and at lightness L=0.5 in the HSL color model.

The HSB color space is essentially a cylinder, but usually it is represented as a cone or hexagonal cone (hexcone) as shown in the Figure "HSB Solid", because the hexcone defines the subset of the HSB space with valid RGB values. The brightness B is the vertical axis, and the vertex B=0 corresponds to black. Similarly, a color solid, or 3D representation, of the HSL model is a double hexcone (Figure "HSL Solid") with lightness as the axis, and the vertex of the second hexcone corresponding to white.

Both color models have the intensity component decoupled from the color information. The HSB color space yields a greater dynamic range of saturation.
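The decoupling of hue from intensity shows up directly in the standard HSB-to-RGB conversion, sketched here in plain JavaScript (p5.js exposes this natively via `colorMode(HSB)`; this helper is ours):

```javascript
// Convert h in [0, 360), s and b in [0, 1] to an [r, g, b] triple (0-255).
function hsbToRgb(h, s, b) {
  const c = b * s;                                    // chroma
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));   // secondary component
  const m = b - c;                                    // lifts the triple to match brightness
  const sector = Math.floor(h / 60) % 6;              // which 60-degree slice of the hue wheel
  const base = [
    [c, x, 0], [x, c, 0], [0, c, x],
    [0, x, c], [x, 0, c], [c, 0, x],
  ][sector];
  return base.map((v) => Math.round((v + m) * 255));
}
```

Note that setting S=0 collapses every hue to the same gray level, which is exactly the decoupling described above.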

#### HSB Solid
{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/HSV.js" width="500" height="540" >}}

#### HSL Solid
{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/HSL.js" width="500" height="700" >}}

### XYZ Color Model
The XYZ color space is an international standard developed by the CIE (Commission Internationale de l'Eclairage). This model is based on three hypothetical primaries, XYZ, and all visible colors can be represented by using only positive values of X, Y, and Z. The CIE XYZ primaries are hypothetical because they do not correspond to any real light wavelengths. The Y primary is intentionally defined to match closely to luminance, while X and Z primaries give color information. The main advantage of the CIE XYZ space (and any color space based on it) is that this space is completely device-independent. The chromaticity diagram in Figure ["CIE xyY Chromaticity Diagram and Color Gamut"](https://ww2.lacan.upc.edu/doc/intel/ipp/ipp_manual/IPPI/ippi_ch6/ch6_cie_diagram.htm#fig6-1) is in fact a two-dimensional projection of the CIE XYZ sub-space. Note that arbitrarily combining X, Y, and Z values within nominal ranges can easily lead to a "color" outside of the visible color spectrum.

The position of the block of RGB-representable colors in the XYZ space is shown in Figure "RGB Colors Cube in the XYZ Color Space".
#### RGB Colors Cube in the XYZ Color Space
{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/XYZ.js" width="500" height="400" >}}

Use the following basic equations [Rog85] to convert between the gamma-corrected R'G'B' and CIE XYZ models:

```
X = 0.412453*R' + 0.35758 *G' + 0.180423*B'
Y = 0.212671*R' + 0.71516 *G' + 0.072169*B'
Z = 0.019334*R' + 0.119193*G' + 0.950227*B'
```

The equations for X,Y,Z calculation are given on the assumption that R',G', and B' values are normalized to the range [0..1].

```
R' = 3.240479 * X - 1.53715 * Y - 0.498535 * Z
G' = -0.969256 * X + 1.875991 * Y + 0.041556 * Z
B' = 0.055648 * X - 0.204043 * Y + 1.057311 * Z
```

The equations for R',G', and B' calculation are given on the assumption that X,Y, and Z values are in the range [0..1].
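The two matrices can be checked against each other with a round trip (a plain JavaScript sketch; the function names are ours):

```javascript
// Gamma-corrected R'G'B' in [0, 1] to CIE XYZ, per the first matrix above.
function rgbToXyz(r, g, b) {
  return [
    0.412453 * r + 0.35758  * g + 0.180423 * b,
    0.212671 * r + 0.71516  * g + 0.072169 * b,
    0.019334 * r + 0.119193 * g + 0.950227 * b,
  ];
}

// Inverse transform, XYZ back to R'G'B', per the second matrix above.
function xyzToRgb(x, y, z) {
  return [
     3.240479 * x - 1.53715  * y - 0.498535 * z,
    -0.969256 * x + 1.875991 * y + 0.041556 * z,
     0.055648 * x - 0.204043 * y + 1.057311 * z,
  ];
}
```

White (R'=G'=B'=1) maps to Y=1 exactly, since the middle row sums to 1, reflecting that Y is defined to match luminance; the round trip recovers the input up to the rounding of the published coefficients.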

[1] Stockman, A., Sharpe, L. and Vries, R. (1999) _Color Vision: From Genes to Perception_. Cambridge University Press.

[Rog85] David Rogers. Procedural Elements for Computer Graphics. McGraw-Hill, 1985.
11 changes: 11 additions & 0 deletions content/docs/exercises/Depth Perception.md
@@ -0,0 +1,11 @@
## Parallax effect
>**Prompt:** Take advantage of monocular cues to implement a 2D sketch to trick the eye into perceiving a 3D scene.
>

Here we have the parallax effect, which tricks the eye into perceiving depth between objects in a 2D plane.

The moon follows you slowly, and when the ball moves, the perspective of the buildings changes.

In this sketch you can change the movement speed using the left and right arrow keys.
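The underlying idea can be sketched in a few lines (our own helper, not the code of the embedded sketch): each layer is shifted against the camera by an amount inversely proportional to its depth, so distant layers such as the moon barely move.

```javascript
// Horizontal screen offset of a layer at the given depth
// (depth >= 1; larger depth = farther away = smaller apparent shift).
function parallaxOffset(cameraX, depth) {
  return -cameraX / depth;
}
```

Buildings at depth 1 shift by the full `-cameraX`, while a moon at depth 50 shifts by only `-cameraX / 50`; the eye reads that difference in motion as depth.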

{{< p5-iframe sketch="/showcase/sketches/exercises/depth_perception/parallax.js" width="430" height="430" >}}
33 changes: 33 additions & 0 deletions content/docs/exercises/Mach Bands.md
@@ -0,0 +1,33 @@
## Mach Bands
>**Prompt:** Develop a terrain visualization application. Check out the 3D terrain generation with Perlin noise coding train tutorial.
>
### [Voronoi diagram](https://en.wikipedia.org/wiki/Voronoi_diagram#:~:text=In%20mathematics%2C%20a%20Voronoi%20diagram,%2C%20sites%2C%20or%20generators).)
In mathematics, a Voronoi diagram is a partition of a plane into regions close to each of a given set of objects.

### Implementation

In the following p5.js sketch we mix Perlin terrain generation, Mach bands, and Voronoi diagrams.

First we create a set of random points in a plane, then we compute the Voronoi diagram for this set of points using the [gorhill](https://github.com/gorhill/Javascript-Voronoi) JavaScript library.

- You can see the resulting Voronoi diagram using the toggle control "voronoi diagram".

Then we add Perlin noise over the plane; this noise is mixed with the inverse of the distance between each point and its respective polygon in the Voronoi diagram.

- You can see the terrain generated with the inverse of the distance to the center of each polygon using the toggle control "perlin".

- When the "perlin" checkbox is marked, the sketch shows the resulting Perlin noise over the Voronoi diagram.

Additionally, to improve visualization, we have two more toggle controls.
- "stroke" shows the stroke of the triangles that generate the surface.
- "band" applies a coloring based on Mach bands to each triangle.

The first slider changes the density or scale of the triangles, and also amplifies the terrain while keeping the Voronoi diagram information.

The second slider changes the speed of rotation.
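The height computation described above can be sketched without the gorhill library by finding the nearest site by brute force (our own simplification; the real sketch uses the Voronoi polygons and p5.js's `noise()`):

```javascript
// Height at (x, y): a noise term modulated by the inverse of the distance
// to the nearest Voronoi site, so each cell rises toward its site.
// sites: array of [x, y] pairs; noiseFn: e.g. p5's noise(x, y).
function terrainHeight(x, y, sites, noiseFn) {
  let nearest = Infinity;
  for (const [sx, sy] of sites) {
    const d = Math.hypot(x - sx, y - sy);
    if (d < nearest) nearest = d;
  }
  return noiseFn(x, y) * (1 / (1 + nearest));
}
```

With a constant `noiseFn`, the height peaks at each site and falls off toward the cell borders, producing the ridged shading where Mach bands become visible.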

{{< p5-iframe sketch="/showcase/sketches/exercises/mach_bands/terrain.js" width="1000" height="1000" >}}

This sketch is based on Daniel Shiffman's terrain generation with Perlin noise and on Mach bands highlighting by Jean Pierre Charalambos.

This work has [applications](https://www.redblobgames.com/x/2022-voronoi-maps-tutorial/) in the area of video game terrain generation.
87 changes: 87 additions & 0 deletions content/docs/exercises/Masking.md
@@ -0,0 +1,87 @@
# Coloring for colorblindness
>**Prompt:** Implement a color mapping application that helps people who are color blind see the colors around them.
>
To address this problem, we first need to understand colorblindness and then pick one of the many forms in which it can manifest.

## What is colorblindness?
Color blindness is classified according to the number of _color channels_ available to a person for conveying color information: [monochromacy](https://en.wikipedia.org/wiki/Monochromacy), [dichromacy](https://en.wikipedia.org/wiki/Dichromacy), and anomalous trichromacy. Refer to the [Ishihara test](https://en.wikipedia.org/wiki/Ishihara_test) for a color vision deficiency test.

For this particular p5.js implementation, I chose ***deuteranopia*** as my colorblindness type of interest. **Deuteranopia** is a kind of green color blindness that nullifies a person's ability to recognize the color green, so their color spectrum runs only from blue hues to yellow ones, as shown below.

![Rainbow as perceived with deuteranopia](https://upload.wikimedia.org/wikipedia/commons/5/5a/Rainbow_Deuteranopia.svg)

Now, diving into the p5.js implementation, I used a fairly simple mapping function as the backbone of the color mapping process. This mapping function receives an image as input; to process the image in the three color channels, we first need to load its pixels into an array, which is done with:

```
newImg.loadPixels();
```

Now that the pixels are loaded into an array, it is possible to separate them by color channel given the RGB order. After having the individual pixel values in their respective channels, we can finally map them into ***deuteranopia-friendly*** colors with the following transformation:

```
// deuteranopia-friendly color transformation
let rNew = 0.625 * r + 0.375 * g + 0.001 * b;
let gNew = 0.7 * r + 0.3 * g + 0 * b;
let bNew = 0 * r + 0.3 * g + 0.7 * b;
```

The specific values used in the color mapping function are derived from studies [1] on the spectral sensitivity of the deuteranopia visual system. By applying these values to the original RGB color, the resulting color is transformed to a deuteranopia-friendly color that is easier for individuals with deuteranopia to differentiate from other colors.

It's important to note that the color mapping function I provided assumes a moderate degree of deuteranopia and may not be appropriate for all cases. Additionally, color perception varies from person to person, so the resulting transformed color may not perfectly simulate deuteranopia for all individuals.

After this transformation takes place, the image is rebuilt with the new pixels and displayed beside the original one.

Finally here's the complete P5.js implementation:


{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/1.js" width="450" height="450" >}}

## [Color modes](https://ww2.lacan.upc.edu/doc/intel/ipp/ipp_manual/IPPI/ippi_ch6/ch6_color_models.htm)
>**Prompt:** Research other color models such as HSL, HSB, XYZ.
>
### HSB, and HSL Color Models

The HSL (hue, saturation, lightness) and HSB (hue, saturation, brightness) color models were developed to be more “intuitive” for manipulating color and were designed to approximate the way humans perceive and interpret color.

Hue defines the color itself. The values for the hue axis vary from 0 to 360 beginning and ending with red and running through green, blue and all intermediary colors.

Saturation indicates the degree to which the hue differs from a neutral gray. The values run from 0, which means no color saturation, to 1, which is the fullest saturation of a given hue at a given illumination.

The intensity component, lightness (HSL) or brightness (HSB), indicates the illumination level. Both vary from 0 (black, no light) to 1 (white, full illumination). The difference between the two is that the maximum saturation of a hue (S=1) occurs at brightness B=1 (full illumination) in the HSB color model, and at lightness L=0.5 in the HSL color model.

The HSB color space is essentially a cylinder, but usually it is represented as a cone or hexagonal cone (hexcone) as shown in the Figure "HSB Solid", because the hexcone defines the subset of the HSB space with valid RGB values. The brightness B is the vertical axis, and the vertex B=0 corresponds to black. Similarly, a color solid, or 3D representation, of the HSL model is a double hexcone (Figure "HSL Solid") with lightness as the axis, and the vertex of the second hexcone corresponding to white.

Both color models have the intensity component decoupled from the color information. The HSB color space yields a greater dynamic range of saturation.

#### HSB Solid
{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/HSV.js" width="500" height="540" >}}

#### HSL Solid
{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/HSL.js" width="500" height="700" >}}

### XYZ Color Model
The XYZ color space is an international standard developed by the CIE (Commission Internationale de l'Eclairage). This model is based on three hypothetical primaries, XYZ, and all visible colors can be represented by using only positive values of X, Y, and Z. The CIE XYZ primaries are hypothetical because they do not correspond to any real light wavelengths. The Y primary is intentionally defined to match closely to luminance, while X and Z primaries give color information. The main advantage of the CIE XYZ space (and any color space based on it) is that this space is completely device-independent. The chromaticity diagram in Figure ["CIE xyY Chromaticity Diagram and Color Gamut"](https://ww2.lacan.upc.edu/doc/intel/ipp/ipp_manual/IPPI/ippi_ch6/ch6_cie_diagram.htm#fig6-1) is in fact a two-dimensional projection of the CIE XYZ sub-space. Note that arbitrarily combining X, Y, and Z values within nominal ranges can easily lead to a "color" outside of the visible color spectrum.

The position of the block of RGB-representable colors in the XYZ space is shown in Figure "RGB Colors Cube in the XYZ Color Space".
#### RGB Colors Cube in the XYZ Color Space
{{< p5-iframe sketch="/showcase/sketches/exercises/coloring/XYZ.js" width="500" height="400" >}}

Use the following basic equations [Rog85] to convert between the gamma-corrected R'G'B' and CIE XYZ models:

```
X = 0.412453*R' + 0.35758 *G' + 0.180423*B'
Y = 0.212671*R' + 0.71516 *G' + 0.072169*B'
Z = 0.019334*R' + 0.119193*G' + 0.950227*B'
```

The equations for X,Y,Z calculation are given on the assumption that R',G', and B' values are normalized to the range [0..1].

```
R' = 3.240479 * X - 1.53715 * Y - 0.498535 * Z
G' = -0.969256 * X + 1.875991 * Y + 0.041556 * Z
B' = 0.055648 * X - 0.204043 * Y + 1.057311 * Z
```

The equations for R',G', and B' calculation are given on the assumption that X,Y, and Z values are in the range [0..1].

[1] Stockman, A., Sharpe, L. and Vries, R. (1999) _Color Vision: From Genes to Perception_. Cambridge University Press.

[Rog85] David Rogers. Procedural Elements for Computer Graphics. McGraw-Hill, 1985.