We review the literature on randomized and sparsely connected neural networks and introduce a novel framework in which each neuron in the hidden layers receives a fixed number of random inputs drawn from any preceding layer, while the output layer remains fully connected. The proposed methodology is described in detail, including the construction of a directed acyclic graph (DAG) to represent the network, the training procedure on the MNIST dataset, and an extensive exploration of the parameter space. Our preliminary results indicate that increasing the number of inputs per neuron yields a higher return per added parameter than increasing the number of layers or the number of neurons per layer. Future work will examine these findings in more depth. The complete source code and CSV results are included as supplementary material.
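As an illustration of the connectivity pattern described above, the following is a minimal sketch of how such a DAG could be constructed. It is not the authors' implementation: the function name `build_random_dag`, the global neuron-id scheme, and the assumption that the fully connected output layer draws only from the final hidden layer are all hypothetical choices made for this example.

```python
import random


def build_random_dag(layer_sizes, inputs_per_neuron, seed=0):
    """Sketch of the sparse random connectivity described in the abstract.

    layer_sizes: e.g. [784, 128, 128, 10] for MNIST; the first entry is the
        input layer and the last entry is the output layer.
    inputs_per_neuron: fixed fan-in for every hidden neuron.
    Returns a dict mapping each neuron id to the ids of its source neurons.
    """
    rng = random.Random(seed)

    # Assign a global id to every neuron, layer by layer.
    ids_per_layer = []
    next_id = 0
    for size in layer_sizes:
        ids_per_layer.append(list(range(next_id, next_id + size)))
        next_id += size

    edges = {}  # neuron id -> list of source neuron ids

    # Hidden layers: each neuron samples a fixed number of sources
    # from *any* preceding layer (input layer included).
    for layer in range(1, len(layer_sizes) - 1):
        candidates = [i for earlier in ids_per_layer[:layer] for i in earlier]
        for neuron in ids_per_layer[layer]:
            k = min(inputs_per_neuron, len(candidates))
            edges[neuron] = rng.sample(candidates, k)

    # Output layer: fully connected (assumed here to the last hidden layer).
    for neuron in ids_per_layer[-1]:
        edges[neuron] = list(ids_per_layer[-2])

    return edges


# Example: an MNIST-sized network with 8 random inputs per hidden neuron.
dag = build_random_dag([784, 128, 128, 10], inputs_per_neuron=8)
```

Under these assumptions, the parameter count of the hidden layers grows linearly with the fixed fan-in, which is the quantity the abstract's comparison against layer and neuron counts refers to.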