As far as I know, this has not been done directly, because of the way such networks are structured. **Each input has its own set of weights connecting it to the next layer, so if you swap two inputs, the output will generally change too**.

However, you can build a network that approximates this behaviour. In your training set, use batch learning and, for each training sample, **feed every permutation of the inputs to the network so that it learns to be permutation invariant. The result will never be exactly invariant, but it may come close**.
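As a sketch of that augmentation step (the function name and array shapes are my own choices, not from any particular library):

```python
from itertools import permutations

import numpy as np

def augment_with_permutations(x, y):
    """Expand one training sample (x, y) into all input-order permutations.

    Feeding every permutation with the same target label nudges the
    network toward (approximate) permutation invariance.
    """
    xs = [np.array(p) for p in permutations(x)]
    ys = [y] * len(xs)  # every permuted copy keeps the original label
    return xs, ys

# 3 inputs -> 3! = 6 permuted copies, all sharing the same label
xs, ys = augment_with_permutations([0.1, 0.2, 0.3], 1.0)
```

Note that the number of permutations grows factorially with the input size, so for more than a handful of inputs you would sample random permutations per batch instead of enumerating all of them.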

Another way to do this is to share the same weight across all inputs. For example, let's assume you have 3 inputs (i0, i1, i2), the next hidden layer has 2 nodes (hl0, hl1) with activation function F, and the layer is fully connected. Instead of the usual 6 independent weights, you keep only 2, w0 and w1, one per hidden node. The hidden layer's nodes hl0 and hl1 are then given, respectively, by

hl0 = F(i0·w0 + i1·w0 + i2·w0) = F(w0·(i0 + i1 + i2))

hl1 = F(i0·w1 + i1·w1 + i2·w1) = F(w1·(i0 + i1 + i2))

The factored form makes the invariance explicit: each node only ever sees the sum of the inputs, which is unchanged by reordering them.

This gives you a hidden layer whose values are permutation invariant with respect to the inputs. From there, you can build and train the rest of the network as you see fit. The approach is derived from the weight sharing used in convolutional layers.
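The shared-weight layer above can be sketched in a few lines of numpy (the function name is mine; F is np.tanh here purely as a placeholder activation):

```python
import numpy as np

def shared_weight_layer(inputs, weights, activation=np.tanh):
    """Hidden layer where node j uses a single shared weight weights[j]
    for every input, i.e. hl[j] = F(weights[j] * sum(inputs)).

    Because the sum is symmetric in the inputs, permuting the input
    vector leaves every hidden activation exactly unchanged.
    """
    s = np.sum(inputs)  # symmetric combination of the inputs
    return activation(np.asarray(weights) * s)

w = np.array([0.5, -1.2])  # w0 and w1, one per hidden node
a = shared_weight_layer([1.0, 2.0, 3.0], w)
b = shared_weight_layer([3.0, 1.0, 2.0], w)  # same inputs, permuted
# a and b are identical
```

The price of this exact invariance is expressiveness: the layer can only react to the sum of the inputs, not to any individual one, so later layers have to do all the discriminative work.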

Off topic: this seems like a cool project. If you want to collaborate on a research project, contact me (check my profile).