I couldn't find anything posted on here about this, so I thought I should share.
For those of you who don't know, Google's DeepDream system is the image-recognition AI they're working on, based on trained neural networks. As such, it specialises in recognising what it's been trained on, although it could (if it were human) be called monomaniacal: it sees everything within the context of what it's been taught. Give it something it doesn't understand the shape of, and shit gets weird, fast.
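If you're curious how that "seeing everything as what it knows" effect comes about: DeepDream-style code runs a network on an image, then nudges the *image* (not the network) so whatever the network thinks it sees gets amplified. Here's a toy numpy sketch of that gradient-ascent idea; the single random "layer" `W` and the `dream_step` function are stand-ins I made up for illustration, not anything from Google's actual implementation.

```python
import numpy as np

# Toy sketch of DeepDream's core trick: instead of adjusting the network's
# weights to fit an image, adjust the *image* so a chosen layer's response
# grows stronger -- the network ends up "hallucinating" the patterns it was
# trained to detect. A single random linear layer stands in for a real
# trained network here.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # stand-in for learned filters

def activation(img):
    # Total squared response of the layer (ReLU nonlinearity).
    return np.sum(np.maximum(W @ img, 0) ** 2)

def dream_step(img, lr=0.01):
    # Analytic gradient of activation(img) w.r.t. the image for this toy layer.
    h = W @ img
    grad = 2 * W.T @ np.maximum(h, 0)
    return img + lr * grad  # gradient *ascent* on the input pixels

img = rng.standard_normal(8)
before = activation(img)
for _ in range(50):
    img = dream_step(img)
after = activation(img)
# After the loop, the layer responds far more strongly: the "image" has
# drifted toward whatever the network is tuned to recognise.
```

Real DeepDream does this with a deep convolutional network and backpropagation, which is why the hallucinated patterns look like dogs, eyes, and pagodas rather than noise.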
Anyway, someone's gone and released a web app that gives a decent approximation of what DeepDream does to images it's been fed but failed to understand.
Behold:
The app can be found here, and the filter that produces the most worthwhile results is the 'trippy' one.