Partial-Activation Of Neural Network Based On Heat-Map Of Neural Network Activity

A device, system, and method are provided for training or prediction of a neural network. A current value may be stored for each of a plurality of synapses or filters in the neural network. A historical metric of activity may be independently determined for each individual synapse or filter, or group of synapses or filters, over one or more past iterations. A plurality of partial activations of the neural network may be iteratively executed, each partial-activation iteration activating a subset of the plurality of synapses or filters. Each individual synapse or filter, or group thereof, may be activated in a portion of the total number of iterations proportional to its independently determined historical metric of activity. Training or prediction of the neural network may be performed based on the plurality of partial activations.
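
By way of illustration only, the following minimal sketch (all names, the exponential-moving-average activity metric, and the activation budget are assumptions, not taken from the abstract) samples a per-synapse activation mask so that, over many iterations, each weight is activated in a fraction of iterations roughly proportional to its historical activity:

```python
import numpy as np

class HeatMapPartialLayer:
    """Dense layer whose synapses are partially activated each iteration,
    with probability proportional to a historical activity metric."""

    def __init__(self, in_dim, out_dim, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.standard_normal((in_dim, out_dim)) * 0.1
        # Historical activity metric, one entry per synapse (weight).
        self.activity = np.ones((in_dim, out_dim))

    def forward(self, x, budget=0.25, decay=0.9):
        # Per-synapse activation probability, scaled so that on average
        # only `budget` of the synapses are computed this iteration.
        p = self.activity / self.activity.sum()
        p = np.minimum(p * budget * self.W.size, 1.0)
        mask = self.rng.random(self.W.shape) < p
        y = x @ (self.W * mask)
        # Update the historical metric (here an exponential moving average
        # of each synapse's mean absolute contribution -- an assumed choice).
        x_scale = np.abs(x).mean(axis=0)[:, None]
        self.activity = decay * self.activity + (1 - decay) * (x_scale * np.abs(self.W))
        return y

layer = HeatMapPartialLayer(64, 32)
y = layer.forward(np.random.default_rng(1).standard_normal((8, 64)))
print(y.shape)  # (8, 32)
```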

Partial Activation Of Multiple Pathways In Neural Networks

A device, system, and method are provided for approximating a neural network comprising N synapses or filters. The neural network may be partially activated by iteratively executing a plurality of M partial pathways of the neural network to generate M partial outputs, wherein the M partial pathways respectively comprise M different continuous sequences of synapses or filters linking an input layer to an output layer. The M partial pathways may cumulatively span only a subset of the N synapses or filters, such that a significant number of the remaining N synapses or filters are not computed. The M partial outputs of the M partial pathways may be aggregated to generate an aggregated output approximating the output generated by fully activating the neural network, i.e., by executing a single instance of all N synapses or filters. Training or prediction of the neural network may be performed based on the aggregated output.
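
As an illustration of summing partial-pathway outputs (a toy linear network and uniform pathway sampling are assumptions; none of the names come from the abstract), the sketch below aggregates M randomly chosen input-to-output chains, scaled so the aggregate approximates the full activation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear network: input(5) -> hidden(8) -> output(3).
W1 = rng.standard_normal((5, 8))
W2 = rng.standard_normal((8, 3))
x = rng.standard_normal(5)

def full_activation(x):
    # Executes every synapse once: N = 5*8 + 8*3 weights.
    return (x @ W1) @ W2

def partial_pathways(x, M):
    """Aggregate M partial outputs, each from one continuous pathway
    (input neuron -> hidden neuron -> output neuron)."""
    n_paths = 5 * 8 * 3  # number of distinct complete pathways
    out = np.zeros(3)
    for _ in range(M):
        i, h, o = rng.integers(5), rng.integers(8), rng.integers(3)
        # One pathway's partial output, scaled so the aggregate is an
        # unbiased estimate of the full activation in this linear toy.
        out[o] += x[i] * W1[i, h] * W2[h, o] * n_paths / M
    return out

print(np.round(full_activation(x), 2))
print(np.round(partial_pathways(x, M=20000), 2))  # close, not exact
```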

System And Method For Mimicking A Neural Network Without Access To The Original Training Dataset Or The Target Model

A device, system, and method are provided to mimic a pre-trained target model without access to the pre-trained target model or its original training dataset. A set of random or semi-random input data may be sent to randomly probe the pre-trained target model at a remote device. A set of corresponding output data may be received from the remote device, generated by applying the pre-trained target model to the set of random or semi-random input data. A random probe training dataset may be generated comprising the set of random or semi-random input data and the corresponding output data generated by randomly probing the pre-trained target model. A new model may be trained with the random probe training dataset so that the new model generates substantially the same corresponding output data in response to the same input data, thereby mimicking the pre-trained target model.
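
A deliberately simplified sketch of the probe-and-mimic loop (the target here is a linear map, and query_remote_target stands in for an opaque remote endpoint; both are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the remote pre-trained target model: we can only query it,
# not inspect its weights or original training data.
_secret_W = rng.standard_normal((10, 4))
def query_remote_target(x):
    return x @ _secret_W  # pretend this is an opaque network endpoint

# 1. Probe the target with random inputs.
X_probe = rng.standard_normal((2000, 10))
Y_probe = query_remote_target(X_probe)

# 2. The probe pairs form the "random probe training dataset".
# 3. Train a new model on it (least squares here, for simplicity).
W_student, *_ = np.linalg.lstsq(X_probe, Y_probe, rcond=None)

# The student now mimics the target on unseen inputs.
x_test = rng.standard_normal((5, 10))
print(np.allclose(query_remote_target(x_test), x_test @ W_student))  # True
```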

System And Method For Compact And Efficient Sparse Neural Networks

A device, system, and method are provided for storing a sparse neural network. A plurality of weights of the sparse neural network may be obtained. Each weight may represent a unique connection between a pair of a plurality of artificial neurons in different layers of a plurality of neuron layers. A minority of pairs of neurons in adjacent neuron layers are connected in the sparse neural network. Each of the plurality of weights of the sparse neural network may be stored with an association to a unique index. The unique index may uniquely identify the pair of artificial neurons whose connection is represented by the weight. Only non-zero weights, representing connections between pairs of neurons, may be stored (zero weights, representing the absence of connections, need not be stored).
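
A minimal sketch of this indexed storage scheme (the flat index convention index = row * n_out + col and all names are assumptions for illustration):

```python
import numpy as np

def to_sparse(dense_W):
    """Store only the non-zero weights of a layer, each keyed by a unique
    index identifying the (from_neuron, to_neuron) pair it connects."""
    n_in, n_out = dense_W.shape
    rows, cols = np.nonzero(dense_W)
    indices = rows * n_out + cols   # unique index per connection
    values = dense_W[rows, cols]    # zero weights are simply absent
    return indices, values, (n_in, n_out)

def sparse_matvec(indices, values, shape, x):
    """y = x @ W using only the stored non-zero connections."""
    n_in, n_out = shape
    y = np.zeros(n_out)
    np.add.at(y, indices % n_out, values * x[indices // n_out])
    return y

rng = np.random.default_rng(0)
W = rng.standard_normal((100, 50))
W[rng.random(W.shape) < 0.95] = 0.0   # 95% sparse: minority connected
idx, val, shape = to_sparse(W)
x = rng.standard_normal(100)
print(np.allclose(x @ W, sparse_matvec(idx, val, shape, x)))  # True
print(f"stored {val.size} of {W.size} weights")
```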

Cluster-connected Neural Network

A device, system, and method are provided for training or prediction using a cluster-connected neural network. The cluster-connected neural network may be divided into a plurality of clusters of artificial neurons connected by weights, or of convolutional channels connected by convolutional filters. Within each cluster is a locally dense sub-network, in which a majority of pairs of neurons or channels are connected by intra-cluster weights or filters; these are co-activated together as an activation block during training or prediction. Outside each cluster is a globally sparse network, in which only a minority of pairs of neurons or channels in different clusters are connected by inter-cluster weights or filters across the cluster border. Training or prediction is performed using the cluster-connected neural network.
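
One way to picture the connectivity (the mask construction, density numbers, and names below are illustrative assumptions): dense blocks on the diagonal for intra-cluster weights, with only a sparse scatter of inter-cluster weights elsewhere:

```python
import numpy as np

def cluster_connected_mask(n_neurons, cluster_size, inter_density, rng):
    """Boolean connectivity mask: locally dense within each cluster,
    globally sparse between clusters."""
    mask = rng.random((n_neurons, n_neurons)) < inter_density  # sparse everywhere
    for start in range(0, n_neurons, cluster_size):
        stop = start + cluster_size
        mask[start:stop, start:stop] = True  # dense intra-cluster block
    return mask

rng = np.random.default_rng(0)
mask = cluster_connected_mask(64, cluster_size=8, inter_density=0.02, rng=rng)
W = rng.standard_normal((64, 64)) * mask   # inter-cluster weights mostly absent

# Each diagonal block is fully connected; off-diagonal pairs rarely are.
block = np.kron(np.eye(8), np.ones((8, 8))).astype(bool)
print("intra-cluster density:", mask[block].mean())            # 1.0
print("inter-cluster density:", round(mask[~block].mean(), 3)) # ~0.02

# During training or prediction, each dense block can run as one
# activation block (a small dense matmul), plus a sparse remainder.
x = rng.standard_normal(64)
y = x @ W
```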

System and Method for Efficient Evolution of Deep Convolutional Neural Networks Using Filter-wise Recombination and Propagated Mutations

An efficient machine-learning technique is provided for training a plurality of convolutional neural networks (CNNs) with increased speed and accuracy using a genetic evolutionary model. A plurality of artificial chromosomes may be stored, each representing the weights of the artificial neuron connections of a respective CNN. A plurality of pairs of the chromosomes may be recombined to generate, for each pair, a new chromosome (with a different set of weights than either chromosome of the pair) by selecting entire filters, as inseparable groups of a plurality of weights, from each of the pair of chromosomes (e.g., “filter-by-filter” recombination). A plurality of weights of each of the new or original chromosomes may be mutated by propagating recursive error corrections incrementally throughout the CNN. A small random sampling of weights may optionally be further mutated to zero, to random values, or to a sum of the current and random values.
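
A compact sketch of the two operators (the filter shapes, the gradient placeholder, and every name here are assumptions; a real backpropagation gradient would supply the "propagated error corrections"):

```python
import numpy as np

rng = np.random.default_rng(0)

def recombine_filterwise(parent_a, parent_b):
    """Create a child by choosing entire filters (inseparable weight groups)
    from either parent -- "filter-by-filter" crossover."""
    choose_a = rng.random(parent_a.shape[0]) < 0.5  # one coin flip per filter
    return np.where(choose_a[:, None, None, None], parent_a, parent_b)

def mutate(child, grad, lr=0.01, p_random=0.001):
    """Mutation: apply a propagated error correction (a gradient step stands
    in for backprop here), then randomly perturb a tiny sample of weights."""
    child = child - lr * grad
    hit = rng.random(child.shape) < p_random
    child[hit] += rng.standard_normal(hit.sum())
    return child

# Two parent "chromosomes": banks of 16 conv filters of shape 3x3x3 each.
parent_a = rng.standard_normal((16, 3, 3, 3))
parent_b = rng.standard_normal((16, 3, 3, 3))

child = recombine_filterwise(parent_a, parent_b)
fake_grad = rng.standard_normal(child.shape)  # placeholder for a real gradient
child = mutate(child, fake_grad)
print(child.shape)  # (16, 3, 3, 3)
```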
