Kia ora!
Thanks for the series. I finally got the whole thing working and it feels fantastic, so I thought I'd leave you some tips in case you hit any of the struggles I did.
M1 MacBooks/Mac Mini
Anyone running on a new MacBook or Mac Mini with the M1 chip will find the published TensorFlow packages don't support M1 yet. Support is coming very soon (roughly a week away), but as a temporary workaround you can use:
(see tensorflow/tfjs#4514)
Only one encoded variable
I managed to get a 28x28 autoencoder reduced down to a single encoded value, expanding back out with barely noticeable loss.
I couldn't do this at larger sizes such as 64x64, which only worked well down to four variables.
(a single value makes sense because there is only one variable changing: the single x+y dimension)
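For anyone curious what that looks like in code, here is a minimal sketch of a dense autoencoder with a single-value bottleneck (the exact layer sizes and the @tensorflow/tfjs-node choice are my own assumptions, not necessarily what the series uses):

const tf = require("@tensorflow/tfjs-node");

// 28x28 images flattened to 784 values, squeezed down to a single number.
const autoencoder = tf.sequential();
// encoder
autoencoder.add(tf.layers.dense({ inputShape: [784], units: 128, activation: "relu" }));
autoencoder.add(tf.layers.dense({ units: 1, activation: "sigmoid" })); // the single encoded value
// decoder
autoencoder.add(tf.layers.dense({ units: 128, activation: "relu" }));
autoencoder.add(tf.layers.dense({ units: 784, activation: "sigmoid" }));
autoencoder.compile({ optimizer: "adam", loss: "meanSquaredError" });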
Splitting the autoencoder
You can access the individual layers with autoencoder.layers, so I made a little helper function to split the autoencoder by looking for the point where the number of units/nodes starts increasing again.
BUT you can't just loop over these layers and add them to another sequential model, because they don't carry their weights across. I solved this with another helper function that creates a new dense layer with the same setup and then manually copies the weights (rough sketch below).
(it is also important that you add the new layer to the model BEFORE setting the weights)
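Here's a rough sketch of what those helpers could look like. It assumes every layer is dense and treats the bottleneck as the last layer before the unit count grows again; this is my own reconstruction of the approach, not the exact code:

function splitAutoencoder(autoencoder) {
  const layers = autoencoder.layers;
  const configs = layers.map((l) => l.getConfig());

  // The bottleneck is the last layer before the unit count starts increasing again.
  let bottleneck = 0;
  for (let i = 1; i < configs.length; i++) {
    if (configs[i].units > configs[bottleneck].units) break;
    bottleneck = i;
  }

  const encoder = tf.sequential();
  const decoder = tf.sequential();

  layers.forEach((layer, i) => {
    const target = i <= bottleneck ? encoder : decoder;
    const options = { units: configs[i].units, activation: configs[i].activation };
    if (target.layers.length === 0) {
      // The first layer of each new model needs an explicit input shape:
      // the original input size for the encoder, the bottleneck size for the decoder.
      options.inputShape =
        i === 0 ? autoencoder.inputs[0].shape.slice(1) : [configs[bottleneck].units];
    }
    // Add the fresh layer to the model BEFORE copying the weights across.
    target.add(tf.layers.dense(options));
    target.layers[target.layers.length - 1].setWeights(layer.getWeights());
  });

  return { encoder, decoder };
}

// usage: each half can then be used on its own
const { encoder, decoder } = splitAutoencoder(autoencoder);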
Feeding random values
Make sure the decoder is fed values in the 0-1 range.
I found that my encoder half was returning values outside the 0-1 range when using relu, so I swapped to sigmoid on the final encoding layer:
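// ...encoding layers
// these two are the final encoding layers
autoencoder.add(tf.layers.dense({
  units: 128,
  activation: "relu",
}));
autoencoder.add(tf.layers.dense({
  units: 1,
  activation: "sigmoid",
  // this squeezes into range 0-1 for feeding the decoder after splitting.
  // there are probably better solutions for this but it worked for me!
}));
// ...decoding layers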
Generating a gif
I found some really interesting behaviour by creating a gif that sweeps across the possible single-dimension range (I didn't want to touch the browser, so I generated images and converted them to a gif).
The squares didn't start small at an input of 0 and steadily increase in size; instead the network learned a strange split where the size increased as expected from 0 to 0.5, then jumped to the largest size and shrank again from 0.5 to 1.
Generate the images:
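new Array(1000).fill(0).forEach(async (x, i, y) => {
  const input = [[i / y.length]]; // each image is fed a value evenly distributed from 0-1
  const decoderOutput = await decoder.predict(tf.tensor(input)).array();
  await generateImage(decoderOutput[0], `./test/decoder_${i}.png`, width, height);
});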
Generate a gif from the .png images in the current directory (e.g. run from inside the /test folder; note the scale value controls the output resolution and can upscale if desired):
ffmpeg -framerate 60 -pattern_type glob -i '*.png' -r 15 -vf scale=28:-1 out.gif
Finally, there's the generateImage helper I use to write each frame to disk.
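A minimal version of such a helper could look something like this (a sketch only, using the pngjs package and assuming the decoder output is a flat array of 0-1 grayscale values):

const fs = require("fs");
const { PNG } = require("pngjs");

// Writes a flat array of 0-1 grayscale values out as a width x height PNG.
function generateImage(pixels, path, width, height) {
  return new Promise((resolve, reject) => {
    const png = new PNG({ width, height });
    for (let i = 0; i < width * height; i++) {
      const value = Math.round(pixels[i] * 255); // 0-1 float -> 0-255 byte
      const idx = i * 4;
      png.data[idx] = value;     // R
      png.data[idx + 1] = value; // G
      png.data[idx + 2] = value; // B
      png.data[idx + 3] = 255;   // A (fully opaque)
    }
    png.pack()
      .pipe(fs.createWriteStream(path))
      .on("finish", resolve)
      .on("error", reject);
  });
}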