Gaussian Kernel Bandwidth Optimization with Matlab Code

In this article, I write about optimizing the Gaussian kernel bandwidth, with Matlab code.

First, I will briefly explain a methodology for optimizing the bandwidth of a Gaussian kernel for regression problems: the cross-validation method.

Then, I will share my Matlab code, which optimizes the bandwidths of the Gaussian kernel for Gaussian kernel regression. For the theory and source code of the regression itself, read my previous posts <link for 1D input> and <link for multidimensional input>. This Matlab code can optimize bandwidths for multidimensional inputs. If you already know the theory of cross validation, or if you don’t need to know how my program works, just download the zip file from the link below and execute the demo programs. You should be able to use the program without much difficulty.

1. Bandwidth optimization by a cross validation method

The most common way to optimize a regression parameter is cross validation. If you want to understand cross validation deeply, I recommend reading this article. Here I will briefly explain the cross-validation procedure that I use; it is just one variant of cross validation.

1. Randomly sample 75% of the data set and put it into the training set; put the remaining 25% into the test set.

2. Using the training set, build a regression model, and use it to predict the outputs of the test set.

3. Compare the predicted outputs with the actual outputs, and find the model (the bandwidth) that minimizes the gap (e.g., RMSE) between them.
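The three steps above can be sketched in code as follows. This is a Python/NumPy sketch, not the post's implementation (which is Matlab); the function names and toy data here are mine.

```python
import numpy as np

def gauss_kernel_predict(x_train, y_train, x_query, b):
    """Nadaraya-Watson prediction with a Gaussian kernel of bandwidth b."""
    # squared distances between every query point and every training point
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * b ** 2))
    return (w @ y_train) / w.sum(axis=1)

def cv_bandwidth(x, y, candidates, train_frac=0.75, seed=0):
    """Pick the bandwidth with the lowest test-set RMSE (steps 1-3 above)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))          # step 1: random 75/25 split
    n_train = int(train_frac * len(x))
    tr, te = idx[:n_train], idx[n_train:]
    rmse = []
    for b in candidates:
        pred = gauss_kernel_predict(x[tr], y[tr], x[te], b)   # step 2
        rmse.append(np.sqrt(np.mean((pred - y[te]) ** 2)))    # step 3
    return candidates[int(np.argmin(rmse))]

# toy 1D example: a noisy sine
rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(200)
best_b = cv_bandwidth(x, y, candidates=[0.01, 0.1, 0.3, 1.0, 3.0])
```

On data like this, a moderate bandwidth wins: a tiny bandwidth chases the noise, while a huge one flattens the sine toward its mean.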

2. Matlab code for the algorithm

This program is for multidimensional inputs (of course, 1D is also fine). The most important function is Opt_Hyp_Gauss_Ker_Reg( h0,x,y ), and it requires the Matlab Optimization Toolbox. I am attaching two demo programs and their results. I made these demo programs as simple as I could, so I believe everybody can understand them.

<Demo 2D>

-Mok-

—————————————————————————————————

I am Youngmok Yun, and I write about robotics theories and my research.

My main site is http://youngmok.com, and the Korean version is http://yunyoungmok.tistory.com.

—————————————————————————————————

Gaussian Kernel Regression for Multidimensional Feature with Matlab code (Gaussian Kernel or RBF Smoother)

I am sharing Matlab code for the Gaussian kernel regression algorithm with multidimensional input (features).

In the previous post (link), I presented the theory of Gaussian kernel regression and shared Matlab code for one-dimensional input. If you want to know the theory, read that post. There, many visitors asked me for a multidimensional-input version, so I have finally made a Gaussian kernel regression program for inputs of arbitrary dimension.

I wrote a demo program to show how to use the code as easily as possible.

Below are the demo program and a demo result plot. In this demo program, the input dimension is 2 for visualization, but the code extends to an arbitrary dimension.

For the optimization of kernel bandwidth, see my other article <Link>.

I hope this program can save you time and effort in your work.

—————————————————————————————————————————–

Monte Carlo Integration with a simple example

How can we do “integration”?

In many cases, integration is not easy to do analytically.

Monte Carlo integration is a numerical integration method.

Let’s consider the example below.

The goal of this integration is to find the area of the pink region.

The key idea of Monte Carlo integration is to find an estimate $\hat{f}$ that represents $f$. See below.

Then, how can we find $\hat{f}$? Monte Carlo integration uses the expectation (i.e., the average). See below.

With random sampling, we can get $\hat{f}$ by calculating the mean value.

This is a very useful approach, especially for computing Bayesian posteriors.
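Written out for an interval $[a,b]$, the idea is:

```latex
\int_a^b f(x)\,dx \;=\; (b-a)\,\mathbb{E}[f(X)]
\;\approx\; \frac{b-a}{N}\sum_{i=1}^{N} f(x_i),
\qquad x_i \sim \mathcal{U}(a,b)
```

so the integral is the interval length times the average value of $f$ over uniform random samples.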

The below is an example of Monte Carlo Integration.

I will solve this problem: $\int^2_{-1}x\,dx$ (the exact value is $1.5$).

>> N = 10000;                      % number of random samples
>> 3*sum(rand(N,1)*3-1)/N          % (b-a) * mean of x, with x ~ U(-1,2)

ans =

    1.5202

Here $\hat{f}$ is sum(rand(N,1)*3-1)/N, and the factor 3 is the length of the integration interval (from $-1$ to $2$).
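For readers without Matlab, the same estimator in Python (a sketch; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10000
x = rng.uniform(-1.0, 2.0, N)      # samples from U(-1, 2)
estimate = 3.0 * x.mean()          # (b - a) * mean of f(x), with f(x) = x
print(estimate)                    # close to the exact value 1.5
```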

Then, good luck

For more detail, I recommend reading the article below.

http://web.mit.edu/~wingated/www/introductions/mcmc-gibbs-intro.pdf

—————————————————————————————————————————–

Gaussian kernel regression with Matlab code

In this article, I will explain the Gaussian kernel regression algorithm (also called Gaussian kernel smoother, Gaussian kernel-based linear regression, or RBF kernel regression), and I will share my Matlab code for it.

You can see how to use this function below. It is super easy.

From here, I will explain the theory.

Basically, this algorithm is a kernel-based linear smoother whose kernel happens to be Gaussian. With this smoothing method, we can fit a nonlinear regression function.

The linear smoother is expressed by the equation below:

$y^* = \frac{\sum^N_{i=1}K(x^*,x_i)y_i}{\sum^N_{i=1}K(x^*,x_i)}$

Here $x_i$ is the $i$-th training input, $y_i$ is the $i$-th training output, and $K$ is a kernel function; $x^*$ is a query point, and $y^*$ is the predicted output.

In this algorithm, we use the Gaussian kernel, expressed by the equation below. This function is also called a radial basis function (RBF), because it is not exactly the same as the Gaussian probability density function (the normalization constant is dropped).

$K(x^*,x_i)=\exp\left(-\frac{(x^*-x_i)^2}{2b^2}\right)$

With these equations, we can smooth the training outputs and thus find a regression function.
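As a minimal sketch, the two equations above translate directly into code (Python here for brevity; the post's shared implementation is Matlab, and the toy data are mine):

```python
import math

def gauss_kernel(x_star, x_i, b):
    """The Gaussian kernel K(x*, x_i) with bandwidth b."""
    return math.exp(-(x_star - x_i) ** 2 / (2 * b ** 2))

def predict(x_star, x_train, y_train, b):
    """The linear smoother: kernel-weighted average of training outputs."""
    w = [gauss_kernel(x_star, xi, b) for xi in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# smoothing three noisy samples of y = x
x_train = [0.0, 1.0, 2.0]
y_train = [0.1, 1.0, 1.9]
print(predict(1.0, x_train, y_train, b=1.0))   # 1.0
```

Note how the denominator normalizes the weights, so the prediction is always a convex combination of the training outputs.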

For the optimization of kernel bandwidth, see my other article <Link>.

Then good luck.

-Mok-

—————————————————————————————————————————–

Hill Type Muscle Model with Matlab Code

In this post, I will write about the Hill-type muscle model, and then I will provide Matlab code for the model.

If you are familiar with biomechanics, I think the best source for studying a muscle-tendon model is Zajac’s paper [4].

OK let’s go to the main writing.

Generally, the muscle-tendon unit is modeled as in the figure below, with a CE (contractile element), a PE (parallel elastic element), and an SE (series elastic element). This is the Hill-type muscle-tendon model. The CE and PE model the muscle, and the SE models the tendon. The SE is sometimes ignored in modeling, depending on the tendon type, because its stiffness can be very high.

hill type muscle model [1]

The CE generates a contractile force on the tendon, and the magnitude of the force is a function of the velocity and length of the muscle, as shown in the equations below. This is a general model, but other models are sometimes used depending on the muscle type; if you want the details, refer to [3].

Force function of CE [2]

The PE element is modeled by the equation below. Again, the SE can be ignored, so it is ignored in my Matlab code.

Force of PE and SE [5]

Finally, we can build a model based on the above equations. In my code, I made a muscle-tendon model for finger muscles. If you want to model another muscle-tendon unit, you will have to look up the parameters for that muscle.
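To make the structure concrete, here is a generic Hill-type sketch in Python (the post's code is Matlab). The post's actual CE and PE equations come from the cited figures [2][5]; the curve shapes below (Gaussian force-length, hyperbolic force-velocity, exponential passive force) are common textbook choices, not necessarily the author's, and the parameters are purely illustrative.

```python
import numpy as np

def f_length(l_norm, w=0.45):
    """Active force-length curve; peaks at optimal fiber length (l_norm = 1)."""
    return np.exp(-((l_norm - 1.0) / w) ** 2)

def f_velocity(v_norm):
    """Force-velocity curve for shortening; 1 at isometric (v_norm = 0)."""
    v = np.clip(v_norm, -1.0, 0.0)   # v_norm = v / v_max; shortening is negative
    return (1.0 + v) / (1.0 - 4.0 * v)

def f_passive(l_norm, k=5.0):
    """Passive (PE) force; zero below slack length, exponential above."""
    if l_norm <= 1.0:
        return 0.0
    return (np.exp(k * (l_norm - 1.0)) - 1.0) / (np.exp(k * 0.5) - 1.0)

def muscle_force(a, l_norm, v_norm, F_max=300.0):
    """Total force: active CE (scaled by activation a in [0,1]) plus PE.
    The SE is ignored, as in the post's code."""
    return a * F_max * f_length(l_norm) * f_velocity(v_norm) + F_max * f_passive(l_norm)

# fully activated, at optimal length, isometric: F = F_max
print(muscle_force(1.0, 1.0, 0.0))   # 300.0
```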

[1] E. M. Arnold, S. R. Ward, R. L. Lieber, and S. L. Delp, “A Model of the Lower Limb for Analysis of Human Movement,” Ann Biomed Eng, vol. 38, no. 2, pp. 269–279, Feb. 2010.
[2] J. Rosen, M. B. Fuchs, and M. Arcan, “Performances of Hill-Type and Neural Network Muscle Models—Toward a Myosignal-Based Exoskeleton,” Computers and Biomedical Research, vol. 32, no. 5, pp. 415–439, Oct. 1999.
[3] J. L. Sancho-Bru, A. Pérez-González, M. C. Mora, B. E. León, M. Vergara, J. L. Iserte, P. J. Rodríguez-Cervantes, and A. Morales, “Towards a realistic and self-contained biomechanical model of the hand,” 2011.
[4] F. E. Zajac, “Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control,” Crit Rev Biomed Eng, vol. 17, no. 4, pp. 359–411, 1989.
[5] P.-H. Kuo and A. D. Deshpande, “Contribution of passive properties of muscle-tendon units to the metacarpophalangeal joint torque of the index finger,” in Biomedical Robotics and Biomechatronics (BioRob), 2010 3rd IEEE RAS and EMBS International Conference on, 2010, pp. 288–294.

—————————————————————————————————————————–

Sliding Mode Control (SMC), a robust control algorithm for nonlinear systems

The sliding mode control (SMC) algorithm is a robust controller for nonlinear systems.

Robust control means that the controller can still control the system even when the system model contains a certain amount of error.

SMC has two fundamental ideas.

1. Attract the system states to a sliding surface.

2. Make the states slide along the surface toward the origin.

To explain the above two ideas more easily, let’s consider a typical control problem.

$\dot x = f_1 (x,\dot x)$ — (1)

$\ddot x = f_2 (x, \dot x) + u$

$y=Cx$ — (2) (if the output map is nonlinear, you need to know about Lie derivatives)

To achieve the first idea, we need to define a surface as below. (Let’s assume that $y$ is a scalar and differentiable.)

$s(x)=a_0 e + a_1 \dot{e}$  where  $e=y-y_d$ — (3)

Here we need to select $a_0$ and $a_1$ so that $e \to 0$ as $t \to \infty$ whenever $s=0$. You will see the reason a few lines below.

Let’s take the derivative of $s(x)$ with respect to time. Then we get the equation below.

$\dot{s}(x)=a_0 \dot e + a_1 \ddot e$ — (4)

In addition, if we impose the reaching law $\dot{s} = -\eta\,\mathrm{sign}(s)$ with $\eta>0$, we obtain the equation below,

$\dot{s}(x)=a_0 \dot e + a_1 \ddot e=-\eta\,\mathrm{sign}(s)$ — (5)

Through a simple Lyapunov argument, we can easily prove that $s \to 0$ as $t \to \infty$.
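For completeness, the Lyapunov argument is one line: take $V=\frac{1}{2}s^2$; then along (5), with $\eta>0$,

```latex
\dot V = s\,\dot s = -\eta\, s\,\operatorname{sign}(s) = -\eta\,|s| \le 0
```

so $V$ decreases whenever $s \neq 0$; in fact $|s|$ decreases at rate $\eta$, so the surface is reached in finite time $t \le |s(0)|/\eta$.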

If $s=0$, then $e \to 0$ and $\dot{e} \to 0$, because we selected $a_0$ and $a_1$ so that the surface dynamics $a_0 e + a_1 \dot{e} = 0$ are stable.

Let’s expand (5):

$a_0 \frac{d}{dt}(y-y_d)+a_1 \frac{d^2}{dt^2}(y-y_d)=-\eta\,\mathrm{sign}(s)$

$\Rightarrow C(a_0 \dot x +a_1 \ddot x)=-\eta\,\mathrm{sign}(s)$ (assuming $y_d$ is constant)

$\Rightarrow C(a_0 f_1 +a_1 (f_2+u))=-\eta\,\mathrm{sign}(s)$

$\Rightarrow u=\left(\frac{-\eta\,\mathrm{sign}(s)}{C} - a_0 f_1\right)\Big/ a_1 -f_2$

With this control input, we can control a nonlinear system with SMC.
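A minimal simulation of the control law above can be sketched as follows. This is a Python sketch with my own illustrative choices: $C=1$ (so $y=x$), $f_1=\dot x$, a pendulum-like $f_2=-\sin x$, a constant target $y_d=0$, and plain Euler integration.

```python
import numpy as np

a0, a1, eta = 1.0, 1.0, 2.0             # surface gains and reaching gain
dt, T = 1e-3, 5.0                       # Euler time step and horizon
x, xdot = 1.0, 0.0                      # initial state

for _ in range(int(T / dt)):
    f1, f2 = xdot, -np.sin(x)           # plant terms from (1)
    e, edot = x, xdot                   # e = y - y_d with y_d = 0
    s = a0 * e + a1 * edot              # sliding surface (3)
    u = (-eta * np.sign(s) - a0 * f1) / a1 - f2   # control law, C = 1
    xddot = f2 + u                      # plant dynamics
    x += dt * xdot
    xdot += dt * xddot

print(abs(x))   # small: the state has been driven near the origin
```

On the surface, the closed loop behaves like $\dot e = -(a_0/a_1)e$, so after the reaching phase the error decays exponentially; the discontinuous $\mathrm{sign}(s)$ term causes the well-known chattering around $s=0$.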

It is difficult to explain everything in such a short write-up; if you need more explanation or have questions, please leave me a reply.

Phase plane analysis and Matlab code toolbox

Phase plane analysis is a useful visualization tool for understanding the characteristics of systems, not only linear but also nonlinear. For example, we can determine the stability of a system from phase plane analysis.

The attachment <here> is a Matlab toolbox for drawing phase planes. The attached file includes a simple demo, and the result is below. You can draw the phase plane and recursively magnify the region you are interested in. You can see how to use the Matlab code in the following YouTube video.

How to draw?

Given,

$\dot{x}_1=f_1(x_1,x_2)$

$\dot{x}_2=f_2(x_1,x_2)$

we can find the below equation

$\frac{dx_2}{dx_1}=\frac{f_2(x_1,x_2)}{f_1(x_1,x_2)}$

From $\frac{dx_2}{dx_1}$, we can find the direction of the phase change at the point $(x_1, x_2)$.
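The slope formula above can be sketched as follows. This is a Python/NumPy illustration, not the Matlab toolbox itself; the example system is my own choice.

```python
import numpy as np

# Example system: x1' = x2, x2' = -x1 (a harmonic oscillator;
# its phase-plane trajectories are circles around the origin).
f1 = lambda x1, x2: x2
f2 = lambda x1, x2: -x1

# evaluate the direction field on a grid of phase-plane points
X1, X2 = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
U, V = f1(X1, X2), f2(X1, X2)          # arrow components at each grid point
# (with matplotlib, plt.quiver(X1, X2, U, V) would draw the phase plane)

slope = f2(1.0, 1.0) / f1(1.0, 1.0)    # dx2/dx1 at the point (1, 1)
print(slope)                           # -1.0
```

At points where $f_1 = 0$ the slope is vertical, which is why toolboxes draw arrows from the pair $(f_1, f_2)$ rather than dividing.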