AI-generated non-stop stream of death metal


CJ Carr and Zack Zukowski recently launched a YouTube channel that streams a never-ending barrage of death metal generated by AI. Their Dadabots project uses a recurrent neural network to identify patterns in the music, predict the most common elements and reproduce them.

https://dadabots.com/

Source : https://www.engadget.com/2019/04/21/ai-generated-death-metal-stream/

Geeky technical detail from their paper:

We pre-process each audio dataset into 3,200 eight-second chunks of raw audio data (FLAC). The chunks are randomly shuffled and split into training, testing, and validation sets. The split is 88% training, 6% testing, 6% validation.
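The preprocessing they describe can be sketched roughly like this. This is a minimal illustration, not their code: the function names (`chunk_audio`, `split_dataset`) are made up, and the raw waveform is faked with zeros instead of being decoded from FLAC.

```python
import numpy as np

SAMPLE_RATE = 16_000          # 16 kHz, as in the paper
CHUNK_SECONDS = 8
CHUNK_LEN = SAMPLE_RATE * CHUNK_SECONDS  # 128,000 samples per chunk

def chunk_audio(waveform: np.ndarray) -> np.ndarray:
    """Drop the ragged tail and reshape into (n_chunks, CHUNK_LEN)."""
    n_chunks = len(waveform) // CHUNK_LEN
    return waveform[: n_chunks * CHUNK_LEN].reshape(n_chunks, CHUNK_LEN)

def split_dataset(chunks: np.ndarray, seed: int = 0):
    """Shuffle chunk order, then split 88% train / 6% test / 6% validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(chunks))
    n_train = int(0.88 * len(chunks))
    n_test = int(0.06 * len(chunks))
    train = chunks[idx[:n_train]]
    test = chunks[idx[n_train : n_train + n_test]]
    valid = chunks[idx[n_train + n_test :]]
    return train, test, valid

# Example: one hour of fake mono audio yields 450 eight-second chunks.
audio = np.zeros(SAMPLE_RATE * 3600, dtype=np.int16)
chunks = chunk_audio(audio)
train, test, valid = split_dataset(chunks)
```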
We use a 2-tier SampleRNN with 256 embedding size, 1024 dimensions, 5 to 9 layers, LSTM or GRU, 256 linear quantization levels, 16kHz sample rate, skip connections, and a 128 batch size, using weight normalization. The LSTM gated units have a forget gate bias initialized with a large positive value of 3. The initial state h0 is either learned or randomized. We train each model for about three days on an NVIDIA K80 GPU. Intermittently at checkpoints during training, audio clips are generated one sample at a time and converted to a WAV file. Originally SampleRNN used an argmax inference method. We modified it to sample from the softmax distribution. [The paper was presented at the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.]
At each checkpoint we generate 10x 30 second clips. Early checkpoints produce generalized textures.
Later checkpoints produce riffs with sectional transitions. If after a few epochs it only produces white noise, restart the training.
Sometimes a checkpoint generates clips which always get trapped in the same riff. Listen for traps before choosing a checkpoint for longer generations.
The number of simultaneously generated clips (n_seq) doesn't affect the processing time, because they are generated in parallel. The number is limited by GPU memory.
https://dadabots.com/nips2017/generating-black-metal-and-math-rock.pdf
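The argmax-vs-softmax change they mention is easy to illustrate. This is a hedged sketch, not their implementation: `logits` stands in for the real SampleRNN output at one timestep, over its 256 quantization levels. Always taking the argmax is deterministic and tends to collapse into loops, while drawing from the softmax distribution keeps the audio varied.

```python
import numpy as np

N_LEVELS = 256  # linear quantization levels, as in the paper

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the quantization levels."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def argmax_sample(logits: np.ndarray) -> int:
    """Original inference: always pick the most likely level (deterministic)."""
    return int(np.argmax(logits))

def softmax_sample(logits: np.ndarray, rng: np.random.Generator) -> int:
    """Dadabots' modification: draw the next sample from the full distribution."""
    probs = softmax(logits)
    return int(rng.choice(N_LEVELS, p=probs))

# Fake one timestep's logits and sample the next audio level from them.
rng = np.random.default_rng(0)
logits = rng.normal(size=N_LEVELS)
level = softmax_sample(logits, rng)
```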




Most expensive shit post I've ever seen.. Well done, I don't know if I could continue my existence without this content..

Posted using Partiko Android


Quite good when you take into account that it was generated by a machine. Though I noticed it was also trying to sing, which sounded like gibberish most of the time; if it could generate only the instruments and not the human voice, I suppose the model could improve from there.

It's similar to the case of AI-generated paintings, where the paintings are just representations of what data the machine learning model has already observed in the past and nothing else (in this particular case, the model can only observe the paintings of other artists). Hence you cannot compare this to what an artist is able to achieve.

I guess this case is very similar: the AI is only generating what it has learnt from music made by other artists.


Hi, could you support a post on an upcoming event, an AI conference where we will be first-time speakers on an AI and blockchain topic, 27 Apr:

Transparent and safe artificial intelligence https://steemit.com/blockchain/@gromozeka/graphgraila..
We will also cover federated learning, decentralized blockchain marketplaces, differential privacy, and direct data governance.
