OpenCV & ML (Deep Learning) 04 – Classification and Multi-layer Perceptron (MLP)

Employing OpenCV’s Multi-layer Perceptron from the Artificial Neural Network module (ANN_MLP) for classifying several classes of samples drawn from multivariate normal distributions with different mean and covariance matrices. The 2-dimensional samples in this example are visualized in OpenCV, while higher dimensions are benchmarked against Keras (with TensorFlow backend) results.

In the previous tutorial we successfully modeled a simple 2D periodic function with an MLP (a regression problem); now it’s time to practice solving a classification one. The base class for ANN_MLP in OpenCV is called StatModel. The StatModel class contains a pure virtual function (a function that is not meant to be implemented in that class, marked by “= 0” in its declaration) named virtual bool isClassifier() const = 0;, which makes the whole class abstract (objects cannot be instantiated from abstract classes; such classes exist to be inherited from). This method is implemented in ANN_MLP and always returns false. You may ask: does that mean an ANN is not the right tool for classification? Of course not. The point of this method is that an ANN always returns real numbers with floating-point precision as its output. So, if you are using it for classification, it is your responsibility to assign the outputs to the appropriate classes, and during training you must decide how the outputs (as real numbers) relate to the classes.

For this example, we are going to classify n sets of simple 2D points randomly distributed over an area. Samples for each set will be generated using a normal distribution. The function randMVNormal(), which can generate multivariate normal random numbers for a given number of dimensions, is employed for this purpose. This function takes mean and covariance matrices as input; these matrices are themselves initialized with uniform random values. Another matrix named “colors”, initialized with values from 50 to 255, will be used later for coloring the points.

void generate_samples_MVN(int dimension, int nSamples, int nClasses,
                          vector<Mat>& vSamples,
                          Mat& colors) {
    // make sure there is one slot per class
    if(vSamples.size() != size_t(nClasses))
        vSamples.resize(size_t(nClasses));

    // Matrix holding mean values of normal distributions
    Mat1f mean(nClasses, dimension);
    theRNG().fill(mean, RNG::UNIFORM, -200, 200);

    // Matrix holding covariance values for normal distributions
    Mat1f cov(nClasses, dimension*dimension);
    theRNG().fill(cov, RNG::UNIFORM, -30, 30);

    for(int i = 0; i < nClasses; i++) {

        Mat1f tmean = mean.row(i);
        Mat1f tcov = cov.row(i).reshape(0, dimension);

        Mat samples;
        randMVNormal( tmean, tcov, nSamples, samples );
        vSamples[size_t(i)] = samples;
    }

    // if colors matrix does not match the samples set size reassign it
    if(colors.rows < nClasses) {
        colors = Mat::zeros(nClasses, 1, CV_64FC3);
        // Randomize with light colors on black background
        theRNG().fill(colors, RNG::UNIFORM, 50, 255);
    }
}

Now that the input training samples are generated, we need the corresponding outputs for them. We could assign an arbitrary integer to each class; since OpenCV scales the input and output samples before training, we can simply assign integers starting from zero to each set of samples. As you can see in the following example, I just initialize all rows of each set’s output matrix with its corresponding index in the vector of samples.

static Ptr<TrainData> prepare_train_data(const vector<Mat>& xSamples) {

    // Generate an output (label) matrix for each set of samples
    vector<Mat> ySamples(xSamples.size());
    for(size_t i = 0; i < xSamples.size(); i++)
        ySamples[i] = Mat(xSamples[i].rows, 1, xSamples[i].type(), Scalar(i));

    Mat x, y;
    vconcat(xSamples, x);
    vconcat(ySamples, y);

    return TrainData::create(x, ROW_SAMPLE, y);
}

Another two useful methods implemented here are used for storing and reloading the sample matrices. We are going to use these techniques for storing and loading huge matrices in upcoming tutorials, so this example is a good starting point to introduce them. Hierarchical Data Format (HDF) files are used to store and organize large amounts of key-points, matrices and tensors. Each file can contain one or multiple entities of different types, and the stored data can be compressed with lossless techniques to reduce disk usage. HDF5 is used in OpenCV for this purpose. The module supports simultaneous reads and writes from multiple instances as long as non-overlapping regions are involved. There are several applications available to open and edit HDF files, such as HDFView and HDF Compass. The following functions store and load the generated sets of samples to and from an HDF file, so that we can reuse them later.

void save_samples_hdf(const string& filename,
                      const vector<Mat>& vSamples) {

    Ptr<hdf::HDF5> fhdf = hdf::open( filename );
    int i = 0;
    for(const auto& ms : vSamples)
        fhdf->dswrite(ms, "dist"+to_string(i++));
    fhdf->close();

    cout << "All samples are saved to " << filename << endl;
}

void load_samples_hdf(const string& filename,
                      vector<Mat>& vSamples,
                      Mat& colors) {
    Ptr<hdf::HDF5> fhdf = hdf::open( filename );
    vSamples.clear();
    int i = 0;
    string label("dist"+to_string(i));
    while(fhdf->hlexists(label)) {
        Mat s;
        fhdf->dsread(s, label);
        vSamples.push_back(s);
        label = "dist"+to_string(++i);
    }
    fhdf->close();

    // if colors matrix does not match the samples set size reassign it
    if(colors.rows < int(vSamples.size())) {
        colors = Mat::zeros(int(vSamples.size()), 1, CV_64FC3);
        // Randomize with light colors on black background
        theRNG().fill(colors, RNG::UNIFORM, 50, 255);
    }

    cout << to_string(vSamples.size())
         << " sets of samples are loaded." << endl;
}

The plot_samples() function for this example is similar to the one discussed in the previous tutorial, except that it does not connect the points with lines to form the curve shape of a function.

Mat plot_samples(const vector<Mat>& vSamples,
                 const Mat& colors,
                 Vec4d& bounds) {

    // enough colors for every sample set
    assert(colors.rows >= int(vSamples.size()));

    vector<Ptr<plot::Plot2d>> plots;

    bounds[XMIN] = bounds[YMIN] = DBL_MAX;
    bounds[XMAX] = bounds[YMAX] = -DBL_MAX;

    for(size_t i = 0; i < vSamples.size(); i++) {

        // plot module only accepts double values
        Mat samplesd;
        vSamples[i].convertTo(samplesd, CV_64F);
        // only 2 dimensions could be plotted
        assert(samplesd.cols == 2);

        Mat xd = samplesd.col(0);
        Mat yd = samplesd.col(1);

        double xmin, xmax;
        minMaxIdx(xd, &xmin, &xmax);
        double ymin, ymax;
        minMaxIdx(yd, &ymin, &ymax);

        bounds[XMIN] = min(bounds[XMIN], xmin);
        bounds[XMAX] = max(bounds[XMAX], xmax);
        bounds[YMIN] = min(bounds[YMIN], ymin);
        bounds[YMAX] = max(bounds[YMAX], ymax);

        Ptr<plot::Plot2d> plot = plot::Plot2d::create(xd, yd);

        plot->setPlotLineColor(colors.at<Vec3d>(int(i), 0));
        plot->setPlotAxisColor(Scalar(0, 0, 0)); // Black (invisible)
        plot->setNeedPlotLine(false); // draw points, not a connected curve

        plots.push_back(plot);
    }

    // define a margin for better visualizing the sets
    double margin = 20;
    bounds[XMIN] -= margin;
    bounds[XMAX] += margin;
    bounds[YMIN] -= margin;
    bounds[YMAX] += margin;

    // adjust borders and margins of all the plots to match together
    Mat img;

    for(auto& plt : plots) {
        plt->setMinX(bounds[XMIN]);
        plt->setMaxX(bounds[XMAX]);
        plt->setMinY(bounds[YMIN]);
        plt->setMaxY(bounds[YMAX]);

        Mat img_plt;
        plt->render(img_plt);

        if(img.empty()) {
            img = img_plt.clone();
        } else {
            img += img_plt;
        }
    }

    return img;
}

The following figure shows the plotted samples in an OpenCV image.

The following function evaluates the entire window area as a new set of samples for prediction (similar to the OpenCV classification example). This visualizes the borders recognized by the MLP. The important part of the following code snippet is how the outputs are assigned to classes, simply by using the cvRound() function, which rounds a real number to its closest integer. This is usually not challenging once you know the concept.

void plot_responses(const Ptr<ANN_MLP>& net,
                    const Vec4d& bounds,
                    Mat& img,
                    const Mat& colors,
                    int step = 20) {
    double xf = (bounds[XMAX] - bounds[XMIN])/img.cols;
    double yf = (bounds[YMAX] - bounds[YMIN])/img.rows;
    for(int c = 0; c < img.cols; c+=step) {
        for(int r = 0; r < img.rows; r+=step) {

            Mat1f pt = (Mat_<float>(1, 2) <<
                        float(xf * c + bounds[XMIN]),
                        float(yf * r + bounds[YMIN]));

            Mat res;
            net->predict(pt, res);
            // round the real-valued response to the nearest class index
            int cat = cvRound(res.at<float>(0, 0));

            if(cat < 0 || cat >= colors.rows) {
                cerr << "out of classes range "
                     << pt << " => " << cat << endl;
                continue;
            }

            cv::drawMarker(img, Point(c, r),
                           colors.at<Vec3d>(cat, 0),
                           MarkerTypes::MARKER_TILTED_CROSS, step);
        }
    }
}

The following figure shows OpenCV’s plotted classification over the provided samples.


The following code contains the main function of the program. You can change the number of classes to any number, but if you increase the dimension value (the dim variable) the plotting functions will assert. When you start the program, five classes of normal distributions will be initialized. Pressing the ‘s’ key stores the samples to a default file named “samples.hdf5”, and pressing the ‘l’ key loads samples from the same file into the vSamples vector. Pressing the ‘c’ key recolors the sample sets with new random colors. The ‘r’ key runs our main objective: create a neural network model, train it on the samples, and finally predict the results. Pressing any other key generates a new set of random samples. If you peruse the code, almost all of this section is similar to what we had in the previous example.

int main(int /*argc*/, char **/*argv*/) {
    theRNG().state = static_cast<uint64>(time(NULL));

    const int nClasses = 5; // number of classes
    const int nSamples = 300; // number of samples used in each class
    const int dim = 2; // each sample dimension (plot would fail > 2)

    Mat colors; // matrix holding colors for classes
    vector<Mat> vSamples; // vector holding the sets of samples
    Vec4d bounds; // boundary values of the image in real numbers
    Mat img; // image used for drawing sets
    int sw = 0; // switch used for detecting key presses

    while(true) {
        if(sw == 27) { // ESC => exit the program
            break;
        } else if(sw == 's') { // save samples matrices for later use
            save_samples_hdf("samples.hdf5", vSamples);
        } else if(sw == 'l') { // load samples matrices from a file
            load_samples_hdf("samples.hdf5", vSamples, colors);
            img = plot_samples(vSamples, colors, bounds);
        } else if(sw == 'c') { // generate new set of random colors
            theRNG().fill(colors, RNG::UNIFORM, 50, 255);
            img = plot_samples(vSamples, colors, bounds);
        } else if(sw == 'r') { // run the neural network on these matrices

            Ptr<TrainData> tdata = prepare_train_data(vSamples);
            tdata->setTrainTestSplitRatio(0.95, true); // 95% train, 5% test
            //tdata->shuffleTrainTest(); // only shuffle the training set

            // Create the network model
            Ptr<ANN_MLP> net = ANN_MLP::create();

            Mat1i layerSizes = (Mat_<int>(5, 1) <<
                                tdata->getSamples().cols, // input layer
                                4, 6, 4, // hidden layers
                                tdata->getResponses().cols); // output layer
            net->setLayerSizes(layerSizes);
            net->setActivationFunction(ANN_MLP::SIGMOID_SYM, 1, 1);
            net->setTermCriteria(TermCriteria(
                                     TermCriteria::MAX_ITER+TermCriteria::EPS,
                                     1e4, DBL_EPSILON));
            net->setTrainMethod(ANN_MLP::RPROP, 0.001);

            cout << "Training ...";

            TickMeter t;
            t.start();
            net->train(tdata);
            t.stop();
            cout << " " << t.getTimeSec() << " (s)" << endl;

            float rms = net->calcError(tdata, true, noArray());
            cout << "RMS: " << rms << endl;

            t.reset();
            t.start();
            plot_responses(net, bounds, img, colors, 10);
            t.stop();
            cout << "Prediction: " << t.getTimeMilli() << " (ms)" << endl;

        } else { // generate random 2D sets of multivariate normal dists
            generate_samples_MVN(dim, nSamples, nClasses, vSamples, colors);
            img = plot_samples(vSamples, colors, bounds);
        }

        imshow("Samples", img);
        sw = waitKey();
    }

    return 0;
}

You can find the Makefile and source code example for this tutorial at the following link:

Source code of this tutorial on GitHub

Cite this article as: Amir Mehrafsa, "OpenCV & ML (Deep Learning) 04 – Classification and Mulit-layer Perceptron (MLP)," in MEXUAZ, October 11, 2017,
