Sometimes we need an efficient, low-latency environment to run our model. Also, since most embedded systems are written in C/C++, the C++ frontend lets C++ developers build their PyTorch models directly in C++ as well.

You can refer to the official installation guide to build the example; a minimal verification sketch is shown below.

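Assuming LibTorch is downloaded and linked correctly (the file name example-app.cpp is just an illustration, not from the original guide), a small program like the following can be used to check that the setup works:

    // example-app.cpp -- quick sanity check that the C++ frontend builds and runs
    #include <torch/torch.h>
    #include <iostream>

    int main() {
        // Create a 2x3 tensor filled with random values and print it
        torch::Tensor tensor = torch::rand({2, 3});
        std::cout << tensor << std::endl;
        return 0;
    }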

The design of the C++ frontend follows these principles:

  1. Closely model the Python frontend in its design:
    This means that most operations look and behave like their Python counterparts.

    For example:

    • Define the Module and layer in Python:
    import torch
    import torch.nn as nn

    class SimpleNet(nn.Module):
        def __init__(self, N, M):
            super(SimpleNet, self).__init__()
            # Register a single linear layer as a submodule
            self.linear = torch.nn.Linear(N, M)

        def forward(self, x):
            return self.linear(x)
    
    • Corresponding C++ implementation
    #include <torch/torch.h>

    struct SimpleNet : torch::nn::Module {
        SimpleNet(int64_t N, int64_t M)
            // register_module makes the submodule's parameters visible to parameters()
            : linear(register_module("linear", torch::nn::Linear(N, M))) {}

        torch::Tensor forward(torch::Tensor x) {
            return linear(x);
        }

        torch::nn::Linear linear;
    };
    
  2. Flexibility and user-friendliness over micro-optimization:

    C++ usually allows more aggressive optimization, but writing the program becomes more complex. PyTorch therefore tries to give C++ developers an API that is as convenient to use as possible.

    For example, accessing a module's parameters in C++ works much like it does in Python (a short training-loop sketch follows this list):

    ## python
    net = SimpleNet(4, 5)
    print(list(net.parameters()))
    
    // C++
    SimpleNet net(4, 5);
    for (const auto& p : net.parameters()) {
        std::cout << p << std::endl;
    }
    
    
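As a rough illustration of how the pieces fit together, here is a minimal training-loop sketch in C++. It reuses the SimpleNet struct defined above; the learning rate, data shapes, and loss function are illustrative assumptions, not part of the official example:

    #include <torch/torch.h>
    #include <iostream>

    int main() {
        // Assumes the SimpleNet struct defined above is in scope
        SimpleNet net(4, 5);

        // SGD over the module's parameters, mirroring torch.optim.SGD in Python
        torch::optim::SGD optimizer(net.parameters(), /*lr=*/0.01);

        for (int epoch = 0; epoch < 10; ++epoch) {
            // Dummy input and target just to exercise the API
            torch::Tensor input = torch::randn({8, 4});
            torch::Tensor target = torch::randn({8, 5});

            optimizer.zero_grad();
            torch::Tensor output = net.forward(input);
            torch::Tensor loss = torch::mse_loss(output, target);
            loss.backward();
            optimizer.step();

            std::cout << "epoch " << epoch
                      << " loss " << loss.item<float>() << std::endl;
        }
        return 0;
    }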

For more details, the official cpp_frontend documentation explains this well.


Errors when building the Torch C++ frontend

  1. find_package error

    The package name passed to `find_package_handle_standard_args` (torch) 
    does not match the name of the calling package (Torch).
    

    This seems to be a problem in the PyTorch C++ package itself; it has been reported with PyTorch 1.4 and 1.5. Downgrading to CMake <= 3.16 should make the warning disappear.

    Reference Link

  2. cannot find -lCUDA_cublas_LIBRARY-NOTFOUND

    I do not know why CMake cannot find cublas, even though I set LD_LIBRARY_PATH:

    LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH
    

    Solution: manually create a symlink so the linker can find the library:

    sudo ln -s /usr/local/cuda-10.1/lib64/libcublas.so.10 /usr/lib/libcublas.so