## Sampling from Dirichlet Distribution using Gamma distributed samples

There is an algorithm to generate Dirichlet samples using a sampler for the Gamma distribution $\text{gamma}(\alpha, \beta)$ with any $\alpha > 0$ and $\beta > 0$. We generate Gamma distributed variables $z_k \sim \text{gamma}(\alpha_k, 1)$, for $k \in \{1,\cdots,d\}$, and apply the variable transformation $x_k = \frac{z_k}{\sum_{j=1}^d z_j}$ to obtain Dirichlet samples. First we should demonstrate that this transformation results in Dirichlet distributed samples.

Consider the transformation $(x_1,\cdots,x_{d-1},v) = h(z_1,\cdots,z_d)$, where $x_k = \frac{z_k}{v}$ for $k \in \{1,\cdots,d-1\}$ and $v = \sum_{k=1}^d z_k$. (We drop $x_d$ so that $h$ maps $d$ variables to $d$ variables; $x_d$ is recovered from the simplex constraint below.) The inverse transformation $(z_1,\cdots,z_d) = h^{-1}(x_1,\cdots,x_{d-1},v)$ follows immediately: $z_k = v x_k$ for $k < d$. From the definition of the transformation we know that $\sum_{k=1}^d x_k = 1$, implying that $x_d = 1-\sum_{k=1}^{d-1} x_k$ and $z_d = v\left(1-\sum_{k=1}^{d-1} x_k\right)$.
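The algorithm itself is short; a minimal sketch in numpy (the function name `dirichlet_via_gamma` and the test values of $\alpha$ are my own choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_via_gamma(alpha, size, rng):
    """Sample from Dirichlet(alpha) by normalizing Gamma(alpha_k, 1) draws."""
    # z_k ~ gamma(alpha_k, 1), drawn independently for each component
    z = rng.gamma(shape=alpha, scale=1.0, size=(size, len(alpha)))
    # x_k = z_k / sum_j z_j lands on the probability simplex
    return z / z.sum(axis=1, keepdims=True)

alpha = np.array([2.0, 3.0, 5.0])
x = dirichlet_via_gamma(alpha, size=100_000, rng=rng)

# The empirical mean should approach alpha / alpha.sum() = [0.2, 0.3, 0.5]
print(x.mean(axis=0))
```

Each row of `x` sums to one, and the empirical component means converge to $\alpha_k / \sum_j \alpha_j$, the known mean of the Dirichlet distribution.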

## Exercise: gradient of soft-max error function

The soft-max regression model can be used in classification problems with $k$ classes. The model outputs a probability distribution over the $k$ classes. So, the activation function $h_\theta(x^{(i)})$ is given by:

$h_\theta(x^{(i)}) = \begin{bmatrix} p(y^{(i)} = 1 | x^{(i)}; \theta) \\ p(y^{(i)} = 2 | x^{(i)}; \theta) \\ \vdots \\ p(y^{(i)} = k | x^{(i)}; \theta) \end{bmatrix} = \frac{1}{ \sum_{j=1}^{k}{e^{ \theta_j^T x^{(i)} }} } \begin{bmatrix} e^{ \theta_1^T x^{(i)} } \\ e^{ \theta_2^T x^{(i)} } \\ \vdots \\ e^{ \theta_k^T x^{(i)} } \\ \end{bmatrix}$
and, for a single class $n$,
$h_{\theta_n}(x^{(i)}) = p(y^{(i)} = n | x^{(i)}; \theta) = \frac{ e^{ \theta_n^T x^{(i)} } }{ \sum_{j=1}^{k}{e^{ \theta_j^T x^{(i)} }} }$
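The activation above is just the soft-max of the $k$ scores $\theta_j^T x^{(i)}$. A minimal sketch (the function name and the toy values of $\theta$ and $x$ are illustrative assumptions; subtracting the maximum score before exponentiating is a standard trick to avoid overflow and does not change the result):

```python
import numpy as np

def softmax_h(theta, x):
    """Hypothesis h_theta(x): probability of each of the k classes.
    theta: (k, n) matrix whose rows are theta_j; x: (n,) feature vector."""
    logits = theta @ x
    logits = logits - logits.max()  # numerical stability; ratios unchanged
    e = np.exp(logits)
    return e / e.sum()

theta = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
x = np.array([0.2, -0.4])
p = softmax_h(theta, x)
print(p, p.sum())  # k probabilities that sum to 1
```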

The error function is given by (here $\delta_{y^{(i)}j}$ is the Kronecker delta, equal to $1$ when $y^{(i)} = j$ and $0$ otherwise):
$J(\theta) = - \frac{1}{m} \left[ \sum_{i=1}^{m} \sum_{j=1}^{k} \delta_{y^{(i)}j} \log \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^k e^{ \theta_l^T x^{(i)} }}\right]=- \frac{1}{m} \left[ \sum_{i=1}^{m} \sum_{j=1}^{k} \delta_{y^{(i)}j} \log( h_{\theta_j}(x^{(i)}) )\right]$
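The gradient of this error function with respect to the parameters of class $n$ is the standard soft-max result $\nabla_{\theta_n} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left(\delta_{y^{(i)}n} - h_{\theta_n}(x^{(i)})\right)x^{(i)}$. A sketch that computes $J(\theta)$ and this gradient, verified against a finite-difference approximation (all names and the random test data are my own assumptions, not from the text):

```python
import numpy as np

def softmax_probs(Theta, X):
    """X: (m, n) design matrix, Theta: (k, n); returns (m, k) of p(y=j|x)."""
    logits = X @ Theta.T
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def J(Theta, X, y):
    """Error function: mean negative log-probability of the true class."""
    p = softmax_probs(Theta, X)
    m = len(y)
    return -np.log(p[np.arange(m), y]).mean()

def grad_J(Theta, X, y, k):
    """Gradient rows: dJ/dtheta_j = -(1/m) sum_i (delta_{y^(i) j} - p_ij) x^(i)."""
    m = len(y)
    p = softmax_probs(Theta, X)
    delta = np.eye(k)[y]              # one-hot encoding of the labels
    return -(delta - p).T @ X / m     # shape (k, n), row j matches theta_j

rng = np.random.default_rng(1)
k, n, m = 3, 4, 50
Theta = rng.normal(size=(k, n))
X = rng.normal(size=(m, n))
y = rng.integers(0, k, size=m)

# Finite-difference check of one gradient entry
eps = 1e-6
G = grad_J(Theta, X, y, k)
Tp = Theta.copy(); Tp[1, 2] += eps
Tm = Theta.copy(); Tm[1, 2] -= eps
num = (J(Tp, X, y) - J(Tm, X, y)) / (2 * eps)
print(G[1, 2], num)  # analytic and numerical gradients should agree
```

The finite-difference check is a cheap way to confirm the analytic gradient before plugging it into a gradient-descent loop.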