Neural networks are among the most dominant AI algorithms in use today. They seem to be the right solution to a myriad of problems and are often assumed to provide objective answers to complex questions, but why? A neural network can be tuned to cope with a wide range of situations, which is great, but is it always correct to do so? We will present a range of manipulations to a predefined neural network and show their effects on the results it produces. Examples include the use of different training sets and variations in configuration, but also changing the neural network during runs, which can affect it in both predictable and unpredictable ways. We will explain why this is not necessarily a fault of the network, or even a bad thing in general, but that it does require careful thinking when working with neural networks. The definition of insanity is doing the same thing over and over again and expecting different results, but how about the opposite? All over the world, people working on neural networks are taking different approaches yet expecting the same results. Is that not insanity?
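To make this concrete, here is a minimal sketch of the kind of variation we mean (in Python with numpy, purely our choice for illustration; the talk itself is tool-agnostic). The same architecture, the same data, and the same training loop, differing only in the random seed used to initialise the weights, can yield noticeably different answers:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(seed, hidden=2, epochs=5000, lr=0.5):
    """Train a tiny 2-hidden-1 network on XOR; only the seed varies."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1 = rng.normal(size=(2, hidden))
    b1 = np.zeros((1, hidden))
    W2 = rng.normal(size=(hidden, 1))
    b2 = np.zeros((1, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)      # backprop, squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(0, keepdims=True)
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

# Same data, same architecture, same training loop -- only the seed
# differs, yet the trained networks answer (slightly or wildly) differently.
print(train_xor(seed=1).round(3).ravel())
print(train_xor(seed=2).round(3).ravel())

Depending on the seed, a network this small may even fail to learn XOR at all, a tiny illustration of the unpredictable ways mentioned above.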
Having turned his hobby into his job, Cédric is a driven developer at JCore. While currently focused on Java and JavaScript, he always keeps an eye out for other interesting technologies that cross his path.
Chiel is a passionate back-end developer with a background in Human Movement Sciences, where he developed a profound interest in neuroscience, data processing techniques and Big Data. He is curious about functional programming and enjoys learning its fundamentals in Scala and Kotlin.