Our architecture allows the networks to learn wide horizontally so as to explore the feature space, admitting the stochasticity of deep nets and yielding a mixture-of-experts-style field. Unlike conventional multicommittee systems, which extract trivial one-dimensional winner-take-all regions (that is, the top part of the hierarchical network architecture becomes a standard multilayer perceptron [9]), we feed the intermediate representations into a multispectral graph Laplacian to exploit their complementary properties, wherein the distribution of each view is sufficiently smooth.
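To make the multispectral graph Laplacian step concrete, the sketch below shows one plausible construction under stated assumptions: each horizontal branch supplies an intermediate representation (a "view"), a Gaussian k-NN affinity and symmetric normalized Laplacian are built per view, and the per-view Laplacians are fused by a convex combination whose trace penalty measures how smoothly a fused representation varies over the multiview graph. The helper names (`knn_affinity`, `multiview_laplacian`, `smoothness`) and the specific affinity and fusion choices are illustrative assumptions, not the paper's prescribed implementation.

```python
import numpy as np


def knn_affinity(X, k=10, sigma=1.0):
    """Gaussian k-NN affinity over one view's intermediate features (assumed choice)."""
    d2 = np.sum(X**2, axis=1, keepdims=True) + np.sum(X**2, axis=1) - 2.0 * X @ X.T
    W = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # keep only the k strongest neighbours per row, then symmetrise
    drop = np.argsort(W, axis=1)[:, :-k]
    np.put_along_axis(W, drop, 0.0, axis=1)
    return np.maximum(W, W.T)


def normalized_laplacian(W):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    return np.eye(W.shape[0]) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]


def multiview_laplacian(views, weights=None, k=10, sigma=1.0):
    """Fuse per-view Laplacians into one multispectral Laplacian (convex combination assumed)."""
    if weights is None:
        weights = np.full(len(views), 1.0 / len(views))
    return sum(w * normalized_laplacian(knn_affinity(X, k, sigma))
               for w, X in zip(weights, views))


def smoothness(L, Z):
    """Laplacian penalty tr(Z^T L Z): small when Z varies slowly over the multiview graph."""
    return float(np.trace(Z.T @ L @ Z))


# Toy usage: three "expert" branches producing intermediate representations.
rng = np.random.default_rng(0)
views = [rng.normal(size=(200, 32)) for _ in range(3)]
L = multiview_laplacian(views, k=10, sigma=1.0)
Z = rng.normal(size=(200, 8))  # candidate fused representation
print(smoothness(L, Z))
```

In this reading, the complementarity of the views enters through the fused Laplacian, and the smoothness term is what enforces that the distribution of each view varies gently over the shared graph; how the weights are chosen (fixed, learned, or attention-based) is left open by the text.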