I'm now experimenting with sum-product networks, and I will show some of the results obtained.

### Predicting half of the image

The above picture shows:

- The original images in the first row.
- The visible part of the input in the second row.
- The expected pixel intensities predicted by the network in the third row.

The images shown come from a validation set; the sum-product network never saw them during training. The network was trained on 800 images from the notMNIST dataset.

The trained sum-product network is a probabilistic model: given some evidence, it can compute conditional probabilities over the remaining variables. Here, the network is asked to compute the expected intensity of each hidden pixel, with the right half of the image given as evidence.
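The conditional inference can be sketched on a toy network. This is a minimal illustration, not the network from the experiment: two binary pixels, Bernoulli leaves, and weights I made up. The key trick is that an unobserved pixel is marginalized out by setting its leaf to 1.

```python
# A minimal sketch of SPN conditional inference, assuming binary
# pixels with Bernoulli leaves. The structure and all numbers are
# illustrative, not taken from the trained network.

def bernoulli_leaf(p, value):
    """Leaf value: P(pixel = value). An unobserved pixel (None) is
    marginalized out, which makes the leaf evaluate to 1.0."""
    if value is None:
        return 1.0
    return p if value == 1 else 1.0 - p

def spn(x1, x2):
    """Tiny SPN over two pixels: a sum node over two product nodes,
    each product node pairing one leaf per pixel."""
    w1, w2 = 0.6, 0.4  # sum-node weights (sum to 1)
    comp1 = bernoulli_leaf(0.9, x1) * bernoulli_leaf(0.8, x2)
    comp2 = bernoulli_leaf(0.1, x1) * bernoulli_leaf(0.3, x2)
    return w1 * comp1 + w2 * comp2

# Evidence: pixel 2 is on, pixel 1 is hidden.
p_evidence = spn(None, 1)            # marginal P(x2=1)
p_joint = spn(1, 1)                  # joint P(x1=1, x2=1)
expected_x1 = p_joint / p_evidence   # E[x1 | x2=1], ~0.74 here
print(expected_x1)
```

Both quantities come from the same network pass; only the leaf values change, which is what makes this kind of inference cheap in SPNs.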

Some of the predicted intensities look good, others look bad. When multiple values are plausible for a pixel, the expected intensity is a weighted average of the possibilities, which shows up as a blur.
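The averaging effect can be shown with made-up numbers. The weights below stand for the posterior plausibility of mixture components after seeing the evidence; they are invented for illustration.

```python
# Hypothetical posterior weights over components that remain
# plausible given the evidence; the numbers are made up.

def expected_intensity(weights, intensities):
    """Posterior-weighted average of the per-component predictions."""
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, intensities)) / total

# Two components disagree about a pixel: the average is a gray blur.
print(expected_intensity([0.5, 0.5], [1.0, 0.0]))   # -> 0.5

# When one component dominates, the prediction is sharp.
print(expected_intensity([0.95, 0.05], [1.0, 0.0]))  # -> 0.95
```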

### Predicting every second column

The expected pixel intensities look better when every second column is given as evidence. The number of pixels to predict is still half of the image.

The better result can be explained in two ways.

Showing pixels from all parts of the image prunes the set of probable whole images more effectively. The expected intensity of each pixel is then an average over fewer possibilities.

In addition, the network structure used encodes pixel locality: nearby image regions are connected at the bottom layers of the network.
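One common way to build such a locality-aware structure is a recursive region decomposition, where larger regions are formed from adjacent sub-regions. The sketch below is my own illustration of that idea, not the decomposition actually used in the experiment.

```python
# Sketch of a locality-aware region decomposition: a rectangular
# region is recursively split into halves down to single pixels.
# Each yielded entry pairs a region with its two sub-regions, which
# is where product nodes over nearby regions would be placed.

def split_region(x0, y0, x1, y1):
    """Yield (region, (child_a, child_b)) for every region larger
    than one pixel, splitting along the longer axis."""
    w, h = x1 - x0, y1 - y0
    if w * h <= 1:
        return
    if w >= h:
        xm = x0 + w // 2
        a, b = (x0, y0, xm, y1), (xm, y0, x1, y1)
    else:
        ym = y0 + h // 2
        a, b = (x0, y0, x1, ym), (x0, ym, x1, y1)
    yield ((x0, y0, x1, y1), (a, b))
    yield from split_region(*a)
    yield from split_region(*b)

regions = list(split_region(0, 0, 4, 4))
print(len(regions))  # 15 internal regions for a 4x4 image
```

A 4x4 image yields 16 single-pixel leaves, so the binary decomposition has 15 internal regions; the smallest regions sit at the bottom of the network, which is how nearby pixels end up connected first.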

The network as a whole assigns high probability to some patterns and lower probability to others. The top sum node has multiple possible patterns as children. The children of a sum node are product nodes, and each product node splits a pattern into smaller sub-patterns.

If two nearby sub-patterns frequently occur together, it is easy to connect them with a product node. The new pattern means: sub_pattern1 AND sub_pattern2. The new pattern will occur often if the sub-patterns often occur together. Such patterns are discovered when training the network.
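The discovery step can be sketched as counting co-occurrences. The tiny binary "images" below, each split into a left and a right sub-pattern, are invented for illustration; real structure learning is more involved than this.

```python
# Toy sketch: a frequently co-occurring pair of sub-patterns is a
# candidate for a new product node meaning "left AND right".
# The dataset is made up for illustration.
from collections import Counter

# Each "image" is a (left_half, right_half) pair of binary pixels.
images = [
    ((1, 0), (0, 1)),
    ((1, 0), (0, 1)),
    ((1, 0), (0, 1)),
    ((0, 1), (1, 0)),
]

counts = Counter(images)
n = len(images)

# The most frequent pair is the best candidate product node.
(best_left, best_right), best_count = counts.most_common(1)[0]
print(best_left, best_right, best_count / n)  # (1, 0) (0, 1) 0.75
```

Here the pair `(1, 0)` AND `(0, 1)` co-occurs in 75% of the images, so a product node joining those two sub-patterns would itself fire often, exactly as the text describes.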