12/21/2015

Use Xtion on ROS indigo

Environment: ROS indigo and Ubuntu 14.04

Note:
If you are using USB 3.0 or above, disable xHCI in the BIOS (Advanced -> USB -> xHCI).

 Install dependencies:
$ sudo apt-get install ros-indigo-rgbd-launch ros-indigo-openni2-camera ros-indigo-openni2-launch

Install package rqt and useful plugins:
$ sudo apt-get install ros-indigo-rqt ros-indigo-rqt-common-plugins ros-indigo-rqt-robot-plugins

 Open Terminal 1
$ roscore

 Open Terminal 2: launch openni2.launch
$ roslaunch openni2_launch openni2.launch

 Open Terminal 3: Open RVIZ to visualize
$ rosrun rviz rviz
Show the depth image by adding its topic


Finally, show the IR, depth, and RGB images and the point cloud by adding all of their topics in RViz:



You can also try my Gist:
https://gist.github.com/tzutalin/175776fe02a9496a7778
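
If you prefer to consume the streams in code rather than in RViz, a minimal roscpp subscriber could look like the sketch below. This is only a sketch to be built inside a catkin package; the topic name /camera/depth/image_raw assumes the default "camera" namespace of openni2.launch, so check rostopic list for the exact names on your setup.

// depth_listener.cpp - minimal sketch of a depth image subscriber
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

// Print basic information about each incoming depth frame.
void depthCallback(const sensor_msgs::ImageConstPtr& msg) {
    ROS_INFO("Depth frame: %ux%u, encoding=%s",
             msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "depth_listener");
    ros::NodeHandle nh;
    // Assumed topic name; adjust it to match rostopic list.
    ros::Subscriber sub = nh.subscribe("/camera/depth/image_raw", 1, depthCallback);
    ros::spin();
    return 0;
}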

11/11/2015

Object detection and recognition on mobile device (Android)

   A few months ago, I implemented object detection algorithms (Fast R-CNN and Faster R-CNN) on the Android ARM and x86 platforms. However, there are still some performance issues to resolve: it takes a long time to recognize all the bounding boxes, and it uses too much memory on mobile platforms. I tried OpenCL on my Android phone, but it did not make things faster. So I am going to remove some layers of the convolutional neural network to improve speed and memory usage, although I am not sure how much the accuracy will drop. I am also going to try pruning methods and other BLAS implementations such as MKL to speed up the forward pass on mobile devices.



Image recognition:



Object detection on HTC Desire Eye 

Source:
https://github.com/tzutalin/Android-Object-Detection

10/22/2015

C++ Boost lib to scan directory

Libraries (-l): boost_system, boost_filesystem (plus glog for LOG(INFO), or swap it for std::cout)
Include: the Boost headers live under /usr/include/boost, which is already on the default include path
#include <boost/filesystem.hpp>
#include <glog/logging.h> // for LOG(INFO); link with -lglog or replace with std::cout

#include <ctime>
#include <map>
#include <string>

namespace fs = boost::filesystem;

int main() {
    std::string dirPath = "/home/darrenl/Pictures/person";
    fs::path someDir(dirPath);
    fs::directory_iterator end_iter;

    // Collect regular files keyed by their last write time (oldest first).
    typedef std::multimap<std::time_t, fs::path> result_set_t;
    result_set_t result_set;

    if (fs::exists(someDir) && fs::is_directory(someDir)) {
        for (fs::directory_iterator dir_iter(someDir); dir_iter != end_iter;
                ++dir_iter) {
            if (fs::is_regular_file(dir_iter->status())) {
                result_set.insert(result_set_t::value_type(
                        fs::last_write_time(dir_iter->path()), *dir_iter));
                LOG(INFO) << dir_iter->path().string();
            }
        }
    }
    return 0;
}
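
Assuming the example above is saved as scan_dir.cpp (the filename is just for illustration), it can be compiled with:
$ g++ scan_dir.cpp -o scan_dir -lboost_filesystem -lboost_system -lglog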

10/15/2015

Ubuntu Quick install Apache, PHP, and create public_html


Install Apache and MySQL

sudo apt-get install apache2 mysql-client mysql-server php5-mysql

Visit your server in your web browser (http://localhost/):

Create a public_html folder for your user

sudo a2enmod userdir
sudo service apache2 reload
You'll also need to make sure that the permissions on your public_html folder allow the www-data user to see the files in there -- 755 usually works well. To do this:
mkdir ~/public_html
chmod -R 755 ~/public_html
Apache can then serve the files in your home directory at http://localhost/~<username>/.

Install PHP 

sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt

Test PHP

<?php
phpinfo();
?>
Save the snippet above as info.php in Apache's document root (/var/www/html on Ubuntu 14.04). The address you want to visit will be:
http://localhost/info.php

Enable PHP in UserDir
sudo vim /etc/apache2/mods-enabled/php5.conf
Comment out the <IfModule mod_userdir.c> section that turns the PHP engine off for /home/*/public_html, then restart Apache:
sudo /etc/init.d/apache2 restart

9/18/2015

Develop a GUI tool to label and annotate image

labelImg Introduction

  In the past few months, I started a project on object detection. There is a great website called ImageNet from which I can download images for object recognition. However, ImageNet provides only a few images with bounding boxes (annotation files). Therefore, I spent a few days developing a GUI tool that can annotate images in the PASCAL VOC / ImageNet annotation format. The screenshot below shows my GUI tool, which is developed with PyQt and forked from labelMe.


A tutorial demonstrates how to use it. There are some hotkeys to annotate and save images faster.
Hotkeys:
'Ctrl + N': Create a bounding box
'n': Change to the next image
'Ctrl + S': Save the annotation file

Other ImageNet Utils
  I also created some tools that can easily download images, crop images with their bounding boxes, convert image paths and labels to a text file, and so on. Please feel free to knock yourself out.

For example, one of the tools creates train.txt containing 3,000 paths with label 1 for training, and test.txt containing 1,000 paths with label 1 for testing (here label 1 means 'chair'):
./labelcreator.py --size_of_train 3000 --size_of_test 1000 --label 1 --dir ./chair

The result is a text file with one image path and its label per line, as below:

Conclusion


  I hope these tools can help people who are working on vision algorithms or research.

If the tool helps you, please give me a star on GitHub.

Source code:




8/31/2015

Setup Eclipse for C++ 11


Eclipse version

Setting up the compiler is fairly straightforward: 
1. Right click your project and click Properties
2. Under C/C++ Build click Settings
3. Under GCC C++ Compiler, click Miscellaneous. In the Other Flags box, append "-std=c++11" to the list of tokens.


C++11 includes and C++ indexing: 
1. Right click your project and click Properties
2. Under C/C++ General click "Preprocessor Include Paths, Macros"
3. Select the Providers tab
4. Select "CDT GCC Built-in Compiler Settings"
5. Uncheck the "Use global provider shared between projects" option
6. Under the list there's a box that says "Command to get compiler specs." Append "-std=c++11" to this.


7. Move the "CDT GCC Built-in Compiler Settings" provider to the top of the list using the 'Move Up' button on the right. Click Apply and then OK.
8. Go to Properties -> C/C++ General -> Paths and Symbols -> Symbols -> GNU C++
9. Add __GXX_EXPERIMENTAL_CXX0X__ into "Name" and leave "Value" blank

10. Back in your Eclipse workspace, your project will start indexing. If not, select the Project menu, C/C++ Index, and click "Re-resolve unresolved includes."


Note:

If setting __GXX_EXPERIMENTAL_CXX0X__ alone does not help, fix the C++11 syntax highlighting by going to:
Project Properties -> C/C++ General -> Paths and Symbols -> Symbols -> GNU C++
and overwriting the symbol (i.e. add a new symbol):

__cplusplus
with value
201103L
Besides, you can try to enable the indexer to scan all files: Window -> Preferences -> C/C++ -> Indexer
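
A quick way to confirm that both the compiler flag and the indexer settings took effect is to build a small file that uses C++11 features, for example:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values{3, 1, 4, 1, 5};        // brace initialization (C++11)
    std::sort(values.begin(), values.end(),
              [](int a, int b) { return a < b; }); // lambda (C++11)
    for (auto v : values) {                        // auto + range-based for (C++11)
        std::cout << v << " ";
    }
    std::cout << std::endl;
    return 0;
}

If the editor still marks auto, the lambda, or the range-based for loop as errors, the settings above have not taken effect yet.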

8/09/2015

Android-NDK Eclipse setup


1. Download Eclipse for C/C++, or Eclipse for Java plus the CDT plugin.

Eclipse for C/C++


2. Install Android Developer Tools (ADT)

Choose Help->Install New Software from the main menu. 


3. Download Native Development Kit (NDK)

Download NDK and extract it.

Go to Window -> Preferences -> Android -> NDK and set your NDK path.

4. Try to import an NDK project that already contains Android.mk and Application.mk
$ git clone https://github.com/julienr/libpng-android.git

Open Eclipse and import the C/C++ project (select C/C++ -> Existing Code as Makefile Project)

Give the project a name, locate the code path, and select the Android GCC toolchain.

You can check your project's properties. Your build command should be 'ndk-build', and the IDE should include the android-ndk headers under C/C++ General -> Paths and Symbols.

5. Start to build it

Choose Project -> Build Project from the main menu.

If your project builds an executable, you can push it to your Android device and test it:

$ adb push [/local/path/binary] /data/local/tmp

$ adb shell ./data/local/tmp/[binary]
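
For reference, the binary you push can be as simple as the sketch below, assuming your Android.mk builds it as an executable target (BUILD_EXECUTABLE); the file name here is just a placeholder.

// hello.cpp - trivial native executable to sanity-check the NDK toolchain
#include <cstdio>

int main() {
    std::printf("Hello from the Android NDK\n");
    return 0;
}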

8/03/2015

Neural Network's common ways to improve generalization and reduce overfitting


1. Data augmentation

 It is the easiest and most common way to reduce overfitting in machine learning. For images, for example, you can generate more data by translating and flipping the images in the training set. As another example, you can augment your data with PCA components and feature selection.
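
As a small illustration of the transforms mentioned above, the sketch below flips and translates an image with OpenCV (assuming OpenCV is installed; the file names are placeholders):

// augment.cpp - flip and translate an image to generate extra training samples
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.jpg");   // placeholder input image
    if (img.empty()) return 1;

    cv::Mat flipped;
    cv::flip(img, flipped, 1);               // 1 = horizontal flip

    // Translate by (tx, ty) pixels with an affine warp.
    const double tx = 10, ty = 5;
    cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, tx, 0, 1, ty);
    cv::Mat shifted;
    cv::warpAffine(img, shifted, M, img.size());

    cv::imwrite("flipped.jpg", flipped);
    cv::imwrite("shifted.jpg", shifted);
    return 0;
}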

2. Regularization

 Add an L1 or L2 regularization term (weight decay) to the loss function in order to penalize certain parameter configurations.

3. Early stopping 

  It is a strategy to stop training before the learner begins to overfit. Simply stated, early stopping halts training when the error on the validation set starts increasing instead of decreasing.

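A minimal sketch of this rule, using made-up validation errors in place of real evaluations and a small "patience" window:

#include <cstdio>
#include <limits>

int main() {
    // Fake per-epoch validation errors standing in for real measurements.
    const double val_errors[] = {0.50, 0.42, 0.37, 0.35, 0.36, 0.38, 0.41};
    const int num_epochs = 7;
    const int patience = 2;   // tolerated epochs without improvement

    double best = std::numeric_limits<double>::max();
    int bad_epochs = 0;
    for (int epoch = 0; epoch < num_epochs; ++epoch) {
        const double err = val_errors[epoch];
        if (err < best) {
            best = err;
            bad_epochs = 0;   // improvement: keep training
        } else if (++bad_epochs >= patience) {
            std::printf("Early stopping at epoch %d (best validation error %.2f)\n",
                        epoch, best);
            break;
        }
    }
    return 0;
}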

4. Dropout 

  Dropout works at the level of the activations by randomly setting each neuron's output to 0 with a probability of 0.5 during training. In their research, the authors found that dropout helps prevent overfitting to a large extent in terms of long-term performance: the decrease in validation accuracy due to overfitting is much smaller than in networks without dropout.

Reference:
[Srivastava et al. 2014] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15: 1929-1958, 2014.
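
As a rough illustration of the mechanism (not the training code from the paper), an inverted-dropout pass over a layer's activations might look like this sketch; survivors are scaled by 1/(1 - p) so no rescaling is needed at test time:

#include <random>
#include <vector>

// Zero each activation with probability drop_prob and scale the survivors.
void dropout(std::vector<float>& activations, float drop_prob, std::mt19937& rng) {
    std::bernoulli_distribution drop(drop_prob);
    const float scale = 1.0f / (1.0f - drop_prob);
    for (float& a : activations) {
        a = drop(rng) ? 0.0f : a * scale;
    }
}

int main() {
    std::mt19937 rng(42);
    std::vector<float> acts = {0.5f, 1.2f, -0.3f, 0.8f};
    dropout(acts, 0.5f, rng);   // at test time, simply skip this call
    return 0;
}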

7/19/2015

Setup Jetson TK1

About Jetson TK1
  Jetson TK1 is NVIDIA's embedded Linux development platform featuring a Tegra K1 SOC (CPU+GPU+ISP in a single chip).


Flashing Jetson TK1 with the latest OS(Linux For Tegra) images

Step 1: Download the Jetson TK1 Development Pack (JetPack TK1) from the link below:

Step 2: Run the JetPack installer on a PC running Linux. It will flash the image and install some components.
$ chmod +x JetPackTK1-1.2-cuda6.5-linux-x64.run
$ ./JetPackTK1-1.2-cuda6.5-linux-x64.run

Step 3: When flashing the device, the Jetson must be in recovery mode. To enter recovery mode:

(1) Turn off the Jetson TK1 and connect the Micro-USB plug of the USB cable.

(2) Press the "Force Recovery" button, press the "POWER" button, then release the POWER button and release the Force Recovery button.

(3) In recovery mode, you cannot log in to the Jetson TK1. If the Jetson TK1 is in recovery mode and connected to the host PC, the lsusb command on the host lists it with ID 0955:7140.
$ lsusb
Bus 003 Device 006: ID 0955:7140 NVidia Corp

Step 4: After flashing the device successfully, enter the Jetson's IP address, username, and password.

IP Address: get the IP of the Jetson board via $ ifconfig
username: ubuntu
password: ubuntu

Run CUDA samples installed from Jetpack on Jetson

 After installing successfully, reboot the Jetson. If you know the IP of the Jetson board, you can also connect to it remotely.
$ /home/ubuntu/GameWorksOpenGLSamples/samples/bin/linux-arm32

6/30/2015

BLAS - ATLAS, OpenBLAS, and MKL installation on Ubuntu

BLAS (Basic Linear Algebra Subprograms) 

It is a specification that prescribes a set of low-level routines for performing common linear algebra operations.

There are many implementations of BLAS:

  • Accelerate : Apple's framework for Mac OS X and iOS, which includes tuned versions of BLAS
  • ACML : The AMD Core Math Library, supporting the AMD Athlon and Opteron CPUs under Linux and Windows.
  • ATLAS : Automatically Tuned Linear Algebra Software, an open source implementation of BLAS APIs for C and Fortran.
  • BLIS : BLAS-like Library Instantiation Software framework for rapid instantiation.
  • cuBLAS : Optimized BLAS for NVIDIA based GPU cards.
  • clBLAS : An OpenCL implementation of BLAS.
  • Intel MKL : The Intel Math Kernel Library, supporting x86 32-bits and 64-bits. Includes optimizations for Intel Pentium, Core and Intel   Xeon CPUs and Intel Xeon Phi; support for Linux, Windows and Mac OS X.
  • etc..

MKL installation 

Step 1: Go to the link below and register for Parallel Studio XE Cluster Edition. You will receive an email containing install and download instructions.
https://software.intel.com/en-us/intel-education-offerings

Step 2:

  $ tar zxvf parallel_studio_xe_2015_update3.tgz
  $ chmod a+x parallel_studio_xe_2015_update3 -R
  $ cd parallel_studio_xe_2015_update3
  $ sudo ./install_GUI.sh
Then enter the serial key as shown in the following figure.

Step 3:
Extend the default library search path for MKL on Ubuntu so that you do not need to export LD_LIBRARY_PATH.
Create intel_mkl.conf and edit it:
  $ cd /etc/ld.so.conf.d
  $ sudo vi intel_mkl.conf
Paste the following two lines:
  /opt/intel/lib/intel64
  /opt/intel/mkl/lib/intel64
Then refresh the linker cache:
  $ sudo ldconfig -v

A faster way to set up MKL:
Download the MKL library from
https://drive.google.com/file/d/0Bz2RXKLpvFgITU16YUs1TmNGcDA/view?usp=sharing
$ tar -xf intel_mkl.tar
$ sudo mv intel_mkl /opt/intel 
$ export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:/opt/intel/lib/intel64:$LD_LIBRARY_PATH

ATLAS installation

  $ sudo apt-get install libatlas-base-dev


OpenBLAS installation

  $ sudo apt-get install libopenblas-dev
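
To check that whichever BLAS you installed links correctly, a small cblas_dgemm call is enough. This is only a sketch: the header is <cblas.h> for ATLAS/OpenBLAS (link with -lopenblas, or -lcblas -latlas), while MKL uses <mkl.h> and its own link line.

// test_blas.cpp - multiply two 2x2 matrices: C = A * B
#include <cblas.h>
#include <cstdio>

int main() {
    const double A[4] = {1, 2, 3, 4};   // row-major 2x2 matrices
    const double B[4] = {5, 6, 7, 8};
    double C[4] = {0, 0, 0, 0};
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);
    std::printf("%f %f\n%f %f\n", C[0], C[1], C[2], C[3]);   // expect 19 22 / 43 50
    return 0;
}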





6/15/2015

C/C++ gflags and glog

Install gflags and glog


$ sudo apt-get install libgflags-dev libgoogle-glog-dev

gflag Source code on GitHub: https://github.com/gflags/gflags
glog Source code on GitHub: https://github.com/google/glog

gflag sample code

===============================================
// Test.cpp
#include <iostream>
#include <gflags/gflags.h> // #include <google/gflags.h>
using namespace std;

DEFINE_bool(big_menu, true, "Include 'advanced' options in the menu listing");
DEFINE_string(languages, "english,french,german", "comma-separated list of languages to offer in the 'lang' menu");


int main(int argc, char** argv)
{
    ::google::ParseCommandLineFlags(&argc, &argv, true);
    std::cout << "FLAGS_big_menu : " << FLAGS_big_menu << "\n";
    return 0;
}

$ g++ -o test Test.cpp -I /usr/include/gflags -L /usr/lib/x86_64-linux-gnu -lgflags
$ ./test --big_menu=false
Output:
FLAGS_big_menu : 0

gflag Defining Flags In Program

Defining a flag is easy: just use the appropriate macro for the type you want the flag to be
DEFINE_bool(big_menu, true, "Include 'advanced' options in the menu listing");
DEFINE_string(languages, "english,french,german",
                 "comma-separated list of languages to offer in the 'lang' menu");
DEFINE_bool defines a boolean flag. Here are the types supported:
DEFINE_bool: boolean
DEFINE_int32: 32-bit integer
DEFINE_int64: 64-bit integer
DEFINE_uint64: unsigned 64-bit integer
DEFINE_double: double
DEFINE_string: C++ string

gflag Setting Flags on the Command Line

app_containing_foo --languages="chinese,japanese,korean"
app_containing_foo -languages="chinese,japanese,korean"
app_containing_foo --languages "chinese,japanese,korean"
app_containing_foo -languages "chinese,japanese,korean"
For boolean flags, the possibilities are slightly different:
app_containing_foo --big_menu
app_containing_foo --nobig_menu
app_containing_foo --big_menu=true
app_containing_foo --big_menu=false

glog sample code

=============================================
#include <glog/logging.h>
int main(int argc, char** argv) {
    FLAGS_alsologtostderr = 1; // It will dump to console
    google::InitGoogleLogging("test");


    LOG(INFO) << "Dump log test";
    return 0;
}
Output:
I0825 14:45:06.432374 22528 objectdection.cpp:15] Dump log test
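
glog also provides higher severities and CHECK macros; a small sketch:

#include <glog/logging.h>

int main(int argc, char** argv) {
    FLAGS_alsologtostderr = 1;
    google::InitGoogleLogging(argv[0]);

    LOG(WARNING) << "A warning message";
    LOG(ERROR) << "An error message";
    CHECK_EQ(2 + 2, 4) << "Printed only if the check fails (the process then aborts)";
    return 0;
}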

Miniglog

If you would like to use glog on a cross-platform target such as Android, you can use miniglog (below), because glog does not support the Android NDK.

https://github.com/tzutalin/miniglog

Ref:

https://google-gflags.googlecode.com/svn/trunk/doc/gflags.html#intro

6/10/2015

Integrate Caffe into ROS

Purpose:

Integrate Caffe into ROS to do image classification.

Please go to my GitHub for more details:



6/03/2015

Setup Caffe from scratch

Introduction

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors.

Installation on Ubuntu 12.04 / 14.04

Use the script to install Caffe's requirements 
$ git clone https://gist.github.com/tzutalin/b24937905a2480da1723 
$ sh b24937905a2480da1723/installCaffeDep.sh

If you would like to install Caffe's requirements manually or install the CUDA dependencies, please refer to http://caffe.berkeleyvision.org/install_apt.html

Get Caffe source and compile it

Clone the source
$ git clone https://github.com/BVLC/caffe.git

After cloning, configure Makefile.config for the Makefile:
$ cp Makefile.config.example Makefile.config
$ vi Makefile.config
For CPU-only Caffe, uncomment CPU_ONLY := 1 in Makefile.config.

Start compiling
$ make -j 8 all ; make -j 8 test ; make -j 8 runtest ; make pycaffe ; make distribute 
My Makefile.config screenshot is below; I uncommented CPU_ONLY := 1.



Run and test Caffe

To test whether Caffe can run or not, use the benchmarking command below:
$ cd caffe 
$ build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt

Run Caffe's Python interface
The module search path can be manipulated from within a Python program via the variable sys.path; the simplest way is to add caffe/python to PYTHONPATH. Append the export to ~/.bashrc (use >>, not >, so you do not overwrite the file):
$ echo 'export PYTHONPATH=[/to/your/path]/caffe/python:$PYTHONPATH' >> ~/.bashrc

Try to run classify_test.py
$ cd [/to/your/path]/caffe/example 
$ git clone https://gist.github.com/912d1774d96266c4e76b.git 
$ python classify_test.py

If you are using the default Python (2.7.6), you might need the following dependencies:
$ sudo apt-get install python-scipy python-skimage libqt4-core libqt4-gui libqt4-dev libzmq-dev ; sudo pip install -U scikit-image; sudo pip install pyzmq; sudo pip install protobuf; sudo pip install pygments 


References

Install Intel MKL and other BLAS for Caffe
http://tzutalin.blogspot.tw/2015/06/blas-atlas-openblas-and-mkl.html
Using Caffe with Eclipse
http://tzutalin.blogspot.tw/2015/05/caffe-on-ubuntu-eclipse-cc.html
Manual Install Caffe
http://caffe.berkeleyvision.org/install_apt.html
Instant way to use Caffe
http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb

5/31/2015

Setup Opengrok and tomcat on Ubuntu

To install Tomcat 7, you need to run:

  $ sudo apt-get install default-jdk tomcat7
  $ cd /usr/share/tomcat7/bin/
  $ ./catalina.sh

Then edit your ~/.bashrc and add the following variables, where OPENGROK_TOMCAT_BASE points to the directory used as CATALINA_BASE:

  export CATALINA_HOME=/usr/share/tomcat7/
  export OPENGROK_TOMCAT_BASE=$CATALINA_HOME
Run Tomcat and check that it is working:

$ sudo /etc/init.d/tomcat7 start
Open web address: http://localhost:8080/
Result on your browser:

Install OpenGrok
First, you need to install Exuberant Ctags and a JDK:

$ sudo apt-get install exuberant-ctags openjdk-7-jdk
Download the OpenGrok binary from the link below:
http://opengrok.github.io/OpenGrok/
Then download and extract it (e.g. I put it under /home/darrenl/tools/opengrok-0.12.1.5):

$ tar -zxvf opengrok-0.12.1.5.tar.gz
Second, create a folder and soft links to your code in order to generate the index:
  • $ cd ~ ; mkdir local_src ; cd local_src/ ; mkdir data src ; cd src/
  • $ ln -s ~/code/base/mir/ mir
My case (create soft links from ~/development/android_core, ~/development/caffe, etc.):
  • $ ln -s ~/development/android_core ~/local_src/src/
  • $ ln -s ~/development/caffe ~/local_src/src/


Third, edit your ~/.bashrc again and paste the text below to add the OpenGrok binary path to the PATH environment variable.
  • export OPENGROK_INSTANCE_BASE=~/local_src
  • export PATH=$PATH:~/tools/opengrok-0.12.1.5/bin

In the end, use OpenGrok to deploy and index the source in ~/local_src/:
  • $ source ~/.bashrc
  • $ cd ~/tools/opengrok-0.12.1.5/bin; sudo ./OpenGrok deploy
  • $ OpenGrok index
You can ignore some files by setting IGNORE_PATTERNS:
  • $ IGNORE_PATTERNS="-i *.git -i *.so -i *.apk -i d:.git" OpenGrok index
Run it on browser:
http://localhost:8080/source/

Besides, co-workers can browse the code by accessing the following addresses: <IP>:<PORT>/source or <hostname>:<PORT>/source

Update index

Whenever you create a new link or add source files in ~/local_src, you need to run $ OpenGrok index again to update the index, and then refresh your browser (F5).

Run $ OpenGrok index on startup:

    $ cd ~/.config/autostart; vi opengrok.desktop
Paste the content below into opengrok.desktop and save it. Then the computer will execute $ OpenGrok index when booting up.

[Desktop Entry]
Name=OpenGrok
GenericName=A fast and usable source code search and cross reference engine
Comment=A fast and usable source code search and cross reference engine
Exec=OpenGrok index
Terminal=false
Type=Application
StartupNotify=false


Note: If your OS is Linux-based, I strongly recommend using OpenGrok + Docker.
Please refer to https://hub.docker.com/r/tzutalin/opengrok/