1.INTRODUCTION
Value at Risk has received much attention in recent years because it links capital to the amount of risk that a financial institution can tolerate; this link does not derive from supervisors’ constraints alone (Basel Committee on Banking Supervision, BCBS) but lies at the centre of risk management. Value at Risk (VaR) is a standard tool used to evaluate market risk and to estimate future losses on a portfolio of financial assets (or a single financial asset) at a given confidence level and over a given period of time (Corkalo, 2011). Under this measure, the level of losses in exceptional periods is larger than in normal periods; moreover, it summarizes market risk in a single number that is easy to interpret (see Linsmeier and Pearson, 1996; Stavroyiannis and Zarganas, 2013).
Consequently, much research has focused on required economic capital as an internal estimate of the capital a financial institution needs, using different approaches. The main difficulty is to produce a correct and effective VaR implementation, and then to adopt internal, more sophisticated models, under certain constraints and after supervisory validation, with the expectation that valid approaches could propose, on a regular basis, a lower capital charge than basic ones. BCBS (2009) indicates that, to evaluate market risk using their internal models, banks should be able to operate sophisticated models in many areas such as risk supervision and audit control. Many studies suggest that a potential cause of increased risk of loss is inadequate systems (such as failed internal processes) or external events.
It appears that current regulations and standard approaches for estimating VaR, based mainly on the normality assumption, have been invalidated by many studies (see Berkowitz and O’Brien, 2002; Carol and Sheedy, 2008), as they strongly underestimate the extreme events observed in the market. The successive financial crises led to greater attention to modelling the tail behaviour of the induced return distributions, and to the use of the conditionally heteroskedastic latent factor model as a central concept in risk management.
The purpose of this research is to apply VaR methodologies using the classical Monte Carlo approach (CMC), the probabilistic factor analysis model (FA), the mixture of factor analyzers model (MFA), the univariate GQARCH(1,1) model and our proposed conditionally heteroskedastic latent factor model (CHFM), and to discuss their consequences for the Tunisian FX decision-making process. This article deals with risk, capital requirement, and the relationship between them, with the intent of linking VaR measurement methodologies with their impact on internal processes for the Tunisian FX market. Because these methodologies differ, the VaR numbers they generate also differ, and the selection of an appropriate method to estimate VaR is therefore difficult. Hence it becomes necessary to backtest these VaR methods for exceptions in order to judge their performance on the Tunisian FX market over a time period. This should give us some idea of which method best satisfies the needs of the Tunisian FX market.
The paper is structured as follows: after the introduction, we present the specificity of the Tunisian FX market. Section 3 deals with the empirical literature and discusses the five approaches, together with the backtesting models used to evaluate the performance of a VaR model. In Section 4, we introduce our dataset and discuss its statistical characteristics. Section 5 discusses the performance of the VaR models under backtesting and the conclusions we draw from the study for the FX market. Perspectives and conclusions are summarized in the last section.
2.THE CHARACTERISTICS OF THE TUNISIAN FOREIGN EXCHANGE MARKET
Nowadays, the fluctuation of a country’s exchange rates depends mainly on its macroeconomic indicators and on its financial stability (Aron et al., 2014; Samson, 2013). For that reason, an assessment of risks related to exchange rates is required. The first objective of currency risk management is to contain the negative effect of daily exchange rate volatility. In addition, managing foreign exchange risk presents one of the most significant and persistent problems for Tunisian financial institutions. The choice of an exchange rate regime is therefore of great importance; it calls into question the economic policy of a country, its room for maneuver and its mode of macroeconomic adjustment. The choice of FX regime is a characteristic behaviour of a system, maintained and adopted by mutually reinforcing processes; the absence of an adequate foreign exchange rate implementation is one of the fundamental factors that have led to major financial losses among institutions in many countries.
The Tunisian exchange rate policy is very active: from 1990 to 1999 Tunisia had a “crawling peg” exchange rate regime. Since 2000, following IMF recommendations, the Tunisian central bank has reduced its intervention in the FX market and permitted more flexibility in exchange rates by applying a managed float regime. Indeed, Tunisia adopted “managed floating with no predetermined path” for the exchange rate from 2000 to 2001, changed again to a “crawling peg” from 2002 to 2004, returned to managed floating with no predetermined path from 2005 to May 2007, and in 2008 adopted a “conventional pegged arrangement against a composite” (IMF). These different regimes were considered transitional steps toward a free-floating exchange rate regime. Indeed, from 2009 to 2016 the official exchange rate regime applied in Tunisia was a “managed floating exchange rate regime.” The Tunisian exchange rate is, according to the International Monetary Fund, more flexible, but not sustained. Many international institutions, including the IMF, support greater flexibility in the exchange rate to reduce tensions on reserves. However, the difficulty is that the Tunisian dinar has become more volatile despite the intervention of the Central Bank on the foreign exchange market to avoid a more pronounced depreciation. The exchange rates of the Tunisian dinar are determined on the interbank market, where commercial banks, including offshore banks, conduct transactions at freely negotiated rates for their resident and non-resident clients. No limit is set on the bid-ask spread. The Central Bank of Tunisia (BCT) intervenes in the market and publishes, at the latest on the next day, an indicative interbank foreign exchange rate.
In other words, the dinar exchange rate is supported by the central bank’s interventions in the foreign exchange market to avoid a more pronounced depreciation. The “managed floating exchange rate regime” has led to more volatility and persistence of shocks; to understand the implications of VaR risk modelling, it is therefore interesting to provide specific responses to the way FX risk is managed. Meeting these conditions requires a corporate structure that encourages precise assessments of foreign exchange risk exposure on the one hand and the successful conduct of foreign exchange trading activities on the other. As a result, in this paper we provide exchange risk management methodologies based on VaR by applying five different approaches (the CMC, the FA, the GQARCH, the MFA and the CHFM models), which can be adapted to the specificity of the Tunisian FX market. We then compare the performance of the VaR models in order to identify the best VaR, as practiced on a daily basis by major international banks, dealers and brokers.
3.EMPIRICAL LITERATURE AND METHODOLOGY
Many theoretical as well as empirical studies have investigated the application of VaR using different classical and recent approaches (parametric approaches, non-parametric approaches and the Monte Carlo approach) to evaluate FX rate risk (for instance, Akbar and Chauveau, 2009; Akhtekhane and Mohammadi, 2012; Batten et al., 2014; Fiksriyoso and Surya, 2013; Rejeb et al., 2012; Salhi et al., 2016; Tokmakçıoğlu, 2009). Classical VaR measures assume that the return distribution of the financial risk factor is normal, whereas skewness and heavy tails are two important characteristics of the observed fluctuations of the FX market. Motivated by these findings, and in order to shed more light on this issue and to address the significant heterogeneity observed across FX rates, we depart from the previous literature and employ the conditionally heteroskedastic latent factor model to model dependency for risk management. Factor models have been established for various reasons: first, these models are understandable and simple to manipulate (the fault times are independent conditionally on a random factor); second, they respect financial intuition (the dependence results from a non-mutualist systemic risk); and finally, they carefully take into consideration the dependency of the risk factor parameters. The market risk factors in this research are the most representative currencies of the Tunisian foreign debt, namely TND versus USD, TND versus EUR and TND versus JPY. The VaR is then estimated with five different approaches (CMC, FA, MFA, GQARCH and CHFM) and three confidence levels (1%, 2% and 5%).
3.1.VaR Models and Parameter Estimates
Over the last decade, as noted above, several empirical studies have compared different modelling methods to evaluate VaR. In all these studies, the basic step in producing a VaR measurement for a portfolio of assets is the reconstruction of the returns probability distribution over the holding period at a given confidence level. The first step in this study is therefore to calculate the returns of the Tunisian exchange rates as follows:
$${r}_{t}=\mathrm{ln}\left({p}_{t}/{p}_{t-1}\right)$$
where p_{t} is the daily closing exchange rate at time t. This quantity can be seen as the logarithm of the geometric growth and is known in finance as the continuously compounded return.
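As a minimal sketch (function and variable names are illustrative, not from the paper), the log-return transformation can be computed as:

```python
import numpy as np

def log_returns(prices):
    """Continuously compounded returns r_t = ln(p_t / p_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

# Example with an illustrative short price path (not real TND quotes)
r = log_returns([2.50, 2.52, 2.48])
```

A series of T prices yields T − 1 returns; the sample loses one observation at the start.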
According to Jorion (2007), VaR is defined as: “a method of assessing risk that uses standard statistical techniques used routinely in other technical fields. Loosely, VaR summarizes the worst loss over a target horizon that will not be exceeded with a given level of confidence.” The basic parameters of this measure are therefore the confidence level and the holding period. The most important advantage of VaR is that it prepares the institution for potential negative outcomes and so helps to avoid them.
Mathematically, Jorion (2007) expressed VaR as:
$$Va{R}_{\alpha}\left(L\right)=\mathrm{inf}\left\{l:P\left(L>l\right)\le \alpha \right\}$$
where L is the set of losses and α ∈ (0,1), so that 1−α is the confidence level.
To illustrate the dynamic nature of the VaR in the Tunisian context, we used our new simulation approach based on the conditionally heteroskedastic latent factor model and the following competing statistical models:
3.1.1.The Classical Monte Carlo Approach
The Monte Carlo simulation is recognized as the optimal quantitative methodology for measuring Value at Risk. Alexander (2008) showed that the Monte Carlo VaR approach is very flexible and that several assumptions can be attributed to the multivariate distribution of risk factor returns. This simulation method is able to detect and/or specify possible changes in the market risk factors through the employment of a statistical distribution.
Ben Rejeb et al. (2012) made a comparative analysis of four risk measurement models: historical simulation, variance-covariance, bootstrapping and Monte Carlo simulation, to assess foreign exchange risk in the Tunisian exchange market. In their empirical analysis, they found that the VaR estimates for the three currencies (USD, EUR, JPY) from the Monte Carlo simulation approach and the bootstrapping method were very similar.
In this approach, the excess return of the portfolio at present time t will be denoted by ${R}_{t}^{p}$. Let us assume that ${R}_{t}^{p}$ depends on q risk factors (foreign exchange rates). Then a Monte Carlo computation of the VaR would consist of the following steps:

Choose the level of confidence 1−α to which the VaR refers.

Simulate the evolution of the risk factors from time t to time t + 1 by generating q-tuples of pseudo-random numbers from the multivariate normal distribution defined by θ and Σ, where θ is the (q ×1) vector of average excess returns over the Tunisian exchange rates, and Σ is the (q × q) variance-covariance matrix of excess returns. Label these draws ${y}_{1t+1}^{s},\text{\hspace{0.17em}}{y}_{2t+1}^{s},\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}{y}_{qt+1}^{s}\left(s=1,\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}N\right)$.

Calculate the N different values of the portfolio at time t + 1 using the values of the simulated q-tuples of the risk factors ${R}_{t+1,1}^{p},\text{\hspace{0.17em}}{R}_{t+1,2}^{p},\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}{R}_{t+1,N}^{p}$ where
$${R}_{t+1,s}^{p}={\delta}_{1}{y}_{1t+1}^{s}+{\delta}_{2}{y}_{2t+1}^{s}+\cdots +{\delta}_{q}{y}_{qt+1}^{s}$$and ${\delta}_{1},\text{\hspace{0.17em}}{\delta}_{2},\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}{\delta}_{q}$ are the portfolio weights for the q assets.

Discard the fraction α of worst returns ${R}_{t+1,s}^{p}$. The minimum of the remaining ${R}_{t+1,s}^{p}$ is then the VaR of the portfolio at time t.
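The steps above can be sketched as follows, assuming the excess-return moments θ and Σ have already been estimated (names and defaults are illustrative, not the authors' code):

```python
import numpy as np

def monte_carlo_var(theta, sigma, weights, alpha=0.01, n_sims=100_000, seed=0):
    """Classical Monte Carlo VaR: simulate multivariate-normal risk-factor
    returns, form the portfolio return, and take the alpha-quantile loss."""
    rng = np.random.default_rng(seed)
    y = rng.multivariate_normal(theta, sigma, size=n_sims)  # (N, q) scenarios
    rp = y @ np.asarray(weights)                            # portfolio returns
    # VaR reported as a positive loss figure exceeded with probability alpha
    return -np.quantile(rp, alpha)
```

Discarding the α·N worst simulated returns and keeping the minimum of the rest is equivalent to taking the empirical α-quantile, which `np.quantile` does directly.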
3.1.2.Latent Factor Model
Factor analysis is a statistical method for modelling the covariance structure of high-dimensional data using a small number of unobservable or latent variables (McLachlan and Peel, 2000). It can be described by the following generative model:
$${y}_{t}=A{f}_{t}+{\epsilon}_{t},\text{\hspace{0.17em}\hspace{0.17em}}{f}_{t}\sim N\left(0,{I}_{k}\right),\text{\hspace{0.17em}\hspace{0.17em}}\forall t=1,\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}T$$
where f_{t} is the vector of k common latent factors (I_{k} denotes the (k × k) identity matrix) and y_{t} is a q-dimensional observation vector. The covariance structure is captured by the (q × k) factor loading matrix A. The mean of the observations is determined by the vector of specific or idiosyncratic factors ε_{t}, modelled as a multivariate normal with mean vector θ and a diagonal covariance matrix Ψ:
$${\epsilon}_{t}\sim N\left(\theta ,\Psi \right)$$
The observation process is expressed as a conditional likelihood, given by:
$${y}_{t}\mid {f}_{t}\sim N\left(\theta +A{f}_{t},\Psi \right)$$
In addition, the marginal likelihood of the observations is multivariate normal with mean vector θ and covariance matrix:
$$\Sigma =A{A}^{\prime}+\Psi $$
To obtain maximum likelihood estimates of the model parameters Θ = {θ, A, Ψ}, we use the iterative EM algorithm (see Dempster et al., 1977; McLachlan and Krishnan, 2008).
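A minimal sketch of these EM updates (a generic textbook implementation, not the authors' code; the initialisation and iteration count are arbitrary choices):

```python
import numpy as np

def fa_em(Y, k, n_iter=100):
    """EM for the factor model y_t = theta + A f_t + eps_t with
    f_t ~ N(0, I_k) and diagonal idiosyncratic covariance Psi.
    Y is (T, q); returns (theta, A, Psi) with Psi as a 1-D diagonal."""
    T, q = Y.shape
    theta = Y.mean(axis=0)                 # MLE of the location
    X = Y - theta
    S = X.T @ X / T                        # sample covariance
    A = np.linalg.cholesky(S + 1e-6 * np.eye(q))[:, :k]  # crude init
    Psi = np.diag(S).copy()
    for _ in range(n_iter):
        # E-step: posterior moments of the latent factors
        Sigma = A @ A.T + np.diag(Psi)
        B = A.T @ np.linalg.inv(Sigma)     # (k, q) regression of f on y
        Ef = X @ B.T                       # (T, k) E[f_t | y_t]
        Eff = T * (np.eye(k) - B @ A) + Ef.T @ Ef  # sum_t E[f_t f_t' | y_t]
        # M-step: closed-form updates of the loadings and noise variances
        A = (X.T @ Ef) @ np.linalg.inv(Eff)
        Psi = np.diag(S - A @ (Ef.T @ X) / T)
    return theta, A, Psi
```

At convergence the fitted covariance A A′ + Ψ approximates the sample covariance, which is the quantity the VaR computation needs.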
3.1.3.Mixture of Factor Analyzers
A mixture of factor analyzers represents a finite mixture of linear submodels for the distribution of the vector of observed data y_{t} given the latent factors f_{t}. According to this approach, the distribution of the observed data is modelled as follows:
$${y}_{t}={A}_{j}{f}_{jt}+{\epsilon}_{jt}\text{\hspace{0.17em}\hspace{0.17em}with probability\hspace{0.17em}\hspace{0.17em}}{\pi}_{j}\text{\hspace{0.17em}}\left(j=1,\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}M\right)$$
for t = 1, …, T, where f_{jt} is a k-dimensional (k < q) vector of common latent factors and A_{j} is the (q × k) matrix of factor loadings of the jth component. The vector of common latent factors f_{jt} is distributed N(0, I_{k}), independently of the vector of specific or idiosyncratic factors ε_{jt}, which is distributed N(θ_{j}, Ψ_{j}), where Ψ_{j} is a diagonal matrix (j = 1, …, M). Thus, the MFA density is given by
$$p\left({y}_{t}\right)={\displaystyle \sum _{j=1}^{M}{\pi}_{j}N\left({y}_{t};{\theta}_{j},{\Sigma}_{j}\right)}$$
where the jth component-covariance matrix Σ_{j} has the form
$${\Sigma}_{j}={A}_{j}{A}_{j}^{\prime}+{\Psi}_{j}$$
and N(y_{t}; θ_{j}, Σ_{j}) denotes the multivariate normal density function with mean θ_{j} and covariance matrix Σ_{j}. The parameter vector Θ now consists of the elements of the θ_{j}, the A_{j}, and the Ψ_{j}, along with the mixing proportions π_{j} (j = 1, ⋯, M −1), on putting ${\pi}_{M}=1-{\displaystyle {\sum}_{j=1}^{M-1}{\pi}_{j}}$.
To estimate the MFA parameters for the observed data y_{t}, it is possible to use the multicycle Alternating Expectation Conditional Maximization (AECM) algorithm (see Meng and Van Dyk, 1997; McLachlan and Krishnan, 2008).
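Not a full AECM implementation, but the E-step responsibilities implied by the MFA density above can be sketched as follows (the parameter containers are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def mfa_responsibilities(Y, pis, thetas, As, Psis):
    """Posterior component probabilities under a mixture of factor
    analyzers, p(y) = sum_j pi_j N(y; theta_j, A_j A_j' + Psi_j).
    Psis holds the diagonals of the Psi_j matrices."""
    dens = np.column_stack([
        pi_j * multivariate_normal.pdf(Y, mean=th, cov=A @ A.T + np.diag(psi))
        for pi_j, th, A, psi in zip(pis, thetas, As, Psis)
    ])
    return dens / dens.sum(axis=1, keepdims=True)  # rows sum to 1
```

These responsibilities are the quantities the AECM cycles then use when re-estimating the θ_j, A_j, Ψ_j and π_j.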
3.1.4.The GQARCH Model
Black (1976) and Christie (1982) found evidence that stock returns are negatively correlated with return volatility. This asymmetry (often referred to as financial leverage or volatility feedback) means that volatility tends to rise in response to a negative shock and to fall in response to a positive shock. The GARCH model does not account for this finding and assumes symmetric impacts of positive and negative shocks on future volatility. The quadratic version of the GARCH model (GQARCH) accounts for these asymmetries (see Engle et al., 1990; Sentana, 1995), producing a news impact curve that is a parabola whose minimum is shifted away from zero. The GQARCH(1,1) model for the portfolio return r_{t} is
$${r}_{t}=\sqrt{{h}_{t}}{\epsilon}_{t}$$
where ε_{t} ~ N(0,1) and the conditional volatility h_{t} of the portfolio is given by:
$${h}_{t}=\omega +\gamma {r}_{t-1}+\alpha {r}_{t-1}^{2}+\delta {h}_{t-1}$$
If γ is negative, negative shocks increase the conditional volatility h_{t} more than positive shocks. For all t and ∀l = 1, …, the anticipated volatility is given by the recursion
$$E\left({h}_{t+l}\mid {D}_{t}\right)=\omega +\left(\alpha +\delta \right)E\left({h}_{t+l-1}\mid {D}_{t}\right),\text{\hspace{0.17em}\hspace{0.17em}}l\ge 2,$$
starting from the known value ${h}_{t+1}=\omega +\gamma {r}_{t}+\alpha {r}_{t}^{2}+\delta {h}_{t}$.
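A GQARCH(1,1) path can be simulated as follows (a sketch under the parameterization above; the parameter values used in the test are only examples satisfying the positivity constraint γ² ≤ 4ωα):

```python
import numpy as np

def simulate_gqarch(omega, gamma, alpha, delta, T, seed=0):
    """Simulate a GQARCH(1,1) process:
    h_t = omega + gamma*x_{t-1} + alpha*x_{t-1}**2 + delta*h_{t-1},
    x_t = sqrt(h_t) * eps_t with eps_t ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    x = np.empty(T)
    h[0] = omega / (1 - alpha - delta)   # start at the unconditional variance
    x[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, T):
        h[t] = omega + gamma * x[t-1] + alpha * x[t-1]**2 + delta * h[t-1]
        x[t] = np.sqrt(h[t]) * rng.standard_normal()
    return x, h
```

With γ < 0 the recursion raises h_t more after negative shocks than after positive ones, which is the leverage effect the text describes.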
3.2.Conditionally Heteroskedastic Latent Factor Model
The model that we propose supposes that excess returns depend on unobservable factors. We allow the conditional variances of the underlying factors to vary over time and parameterize this in terms of GQARCH(1,1) processes.
3.2.1.Model Specification and Factor Structure
Consider the following multivariate model:
$${y}_{t}=A{f}_{t}+{\epsilon}_{t}\text{\hspace{0.17em}\hspace{0.17em}}(1)$$
where
$${f}_{t}={H}_{t}^{1/2}{f}_{t}^{*},\text{\hspace{0.17em}\hspace{0.17em}}{f}_{t}^{*}\sim N\left(0,{I}_{k}\right),\text{\hspace{0.17em}\hspace{0.17em}for\hspace{0.17em}\hspace{0.17em}}t=1,\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}T,\text{\hspace{0.17em}\hspace{0.17em}and}$$
where y_{t} is the (q×1) vector of observable random variables (financial returns in our case), f_{t} is a (k × 1) vector of unobserved common factors, A is the associated (q×k) matrix of constant factor loadings, with q ≥ k, ε_{t} is a (q×1) vector of idiosyncratic noises, which are conditionally orthogonal to f_{t}, θ and Ψ are, respectively, the (q×1) mean vector and the (q × q) diagonal, positive definite covariance matrix of constant idiosyncratic variances, and H_{t} is a (k × k) diagonal positive definite matrix of time-varying factor variances. In particular, it is assumed that the common factor variances follow GQARCH(1,1) processes, so that the ith element of H_{t} is
$${h}_{it}={\omega}_{i}+{\gamma}_{i}{f}_{it-1}+{\alpha}_{i}{f}_{it-1}^{2}+{\delta}_{i}{h}_{it-1}\text{\hspace{0.17em}\hspace{0.17em}}(2)$$
where the dynamic asymmetry parameter γ_{i} is usually different from 0, allowing for the possibility of a leverage effect (see Sentana, 1995). We can see from this model that if f_{it−1} > 0, its impact on h_{it} is greater than in the case of f_{it−1} < 0 (assuming that γ_{i} and α_{i} are positive). To guarantee the positivity of the conditional common variances and covariance stationarity, we impose the constraints ${\omega}_{i},\text{\hspace{0.17em}}{\alpha}_{i},\text{\hspace{0.17em}}{\delta}_{i}>0,\text{\hspace{0.17em}}{\gamma}_{i}^{2}\le 4{\omega}_{i}{\alpha}_{i}$ and $\text{\hspace{0.17em}}{\alpha}_{i}+\text{\hspace{0.17em}}{\delta}_{i}<1,\text{\hspace{0.17em}\hspace{0.17em}}\forall i=1,\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}k$. Extension to higher-order GQARCH is straightforward. As a matter of fact, there is nothing particular about GQARCH models from the EM algorithm point of view; the problem is that non-quadratic ARCH models are difficult to handle via the Kalman filter (see Harvey et al., 1992).
3.2.2.Maximum Likelihood Estimation
3.2.2.1.Dynamic State-Space Representation
The conditionally heteroskedastic latent factor model in (1)-(2) can be regarded as a random field with indices i = 1, …, q and t = 1, …, T. Therefore, it is not surprising that it has a time-series state-space representation, with f_{t} as the state variable. The measurement and transition equations are given by:
$${y}_{t}=A{f}_{t}+{\epsilon}_{t},\text{\hspace{0.17em}\hspace{0.17em}}{f}_{t}={H}_{t}^{1/2}{f}_{t}^{*}$$
where ${\epsilon}_{t}\mid {Y}_{t-1},\text{\hspace{0.17em}}{F}_{t-1}\sim N\left(\theta ,\text{\hspace{0.17em}}\Psi \right)$ and ${f}_{t}\mid {Y}_{t-1},\text{\hspace{0.17em}}{F}_{t-1}\sim N\left(0,\text{\hspace{0.17em}}{H}_{t}\right)$. Here ${Y}_{t-1}=\left\{{y}_{t-1},\text{\hspace{0.17em}\hspace{0.17em}}{y}_{t-2},\text{\hspace{0.17em}}\cdots \right\}$ and ${F}_{t-1}=\left\{{f}_{t-1},\text{\hspace{0.17em}\hspace{0.17em}}{f}_{t-2},\text{\hspace{0.17em}}\cdots \right\}$, i.e. the information set D_{t−1} that we would have at time t − 1. Hence the prediction equations are: $E\left({f}_{t}\mid {D}_{t-1}\right)={f}_{t\mid t-1}=0$, $E\left({y}_{t}\mid {D}_{t-1}\right)={y}_{t\mid t-1}=\theta $ and $Var\left({f}_{it}\mid {D}_{t-1}\right)={h}_{it\mid t-1}$, where
$${h}_{it\mid t-1}={\omega}_{i}+{\gamma}_{i}{f}_{it-1\mid t-1}+{\alpha}_{i}\left({f}_{it-1\mid t-1}^{2}+{h}_{it-1\mid t-1}\right)+{\delta}_{i}{h}_{it-1\mid t-2}$$
${f}_{it-1\mid t-1}=E\left({f}_{it-1}\mid {D}_{t-1}\right)$, ${h}_{it\mid t-1}$ is the ith diagonal element of ${H}_{t\mid t-1}$, and ${h}_{it-1\mid t-1}=Var\left({f}_{it-1}\mid {D}_{t-1}\right)$ is the ith diagonal element of ${H}_{t-1\mid t-1}$. Hence, the predicted variance is the conditional expectation of the predicted volatility $E\left({h}_{it}\mid {D}_{t-1}\right)$. The term ${h}_{it-1\mid t-1}$ comes from the fact that $E\left({f}_{it-1}^{2}\mid {D}_{t-1}\right)=Var\left({f}_{it-1}\mid {D}_{t-1}\right)+E{\left({f}_{it-1}\mid {D}_{t-1}\right)}^{2}={h}_{it-1\mid t-1}+{f}_{it-1\mid t-1}^{2}$. This specification can easily be evaluated via the Kalman filter. Note that the measurability of ${h}_{it}$ with respect to D_{t−1} is achieved in this model by replacing the unobserved factors by their best (in the conditional mean square sense) estimates, and including a correction in the standard ARCH terms which reflects the uncertainty in the factor estimates. The following are the updating equations:
$${f}_{t\mid t}={H}_{t\mid t-1}{A}^{\prime}{\Sigma}_{t\mid t-1}^{-1}\left({y}_{t}-\theta \right)$$
and
$${H}_{t\mid t}={H}_{t\mid t-1}-{H}_{t\mid t-1}{A}^{\prime}{\Sigma}_{t\mid t-1}^{-1}A{H}_{t\mid t-1}$$
where ${\Sigma}_{t\mid t-1}=A{H}_{t\mid t-1}{A}^{\prime}+\Psi $. Given the degenerate nature of the (time-series) transition equation, smoothing is unnecessary in this case, so that ${f}_{t\mid T}={f}_{t\mid t}$ and ${H}_{t\mid T}={H}_{t\mid t}$.
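The prediction and updating recursions above can be sketched as a single filtering pass (an illustrative, approximate implementation for small dimensions, not the authors' code; the initial variances are an assumption):

```python
import numpy as np

def chfm_kalman_filter(Y, theta, A, Psi, omega, gamma, alpha, delta):
    """Approximate Kalman filter for the CHFM.  Per period:
    predict h_{it|t-1} = omega_i + gamma_i f_{i,t-1|t-1}
                       + alpha_i (f_{i,t-1|t-1}^2 + h_{i,t-1|t-1})
                       + delta_i h_{i,t-1|t-2},
    then update f_{t|t} and H_{t|t} by Gaussian conditioning with
    Sigma_{t|t-1} = A H_{t|t-1} A' + Psi.  Psi, omega, gamma, alpha,
    delta are 1-D arrays (diagonals)."""
    T, q = Y.shape
    k = A.shape[1]
    f_tt = np.zeros(k)
    h_tt = omega / (1 - alpha - delta)        # unconditional start (assumed)
    h_pred_prev = h_tt.copy()
    F = np.empty((T, k)); H = np.empty((T, k))
    for t in range(T):
        # prediction step, with the (f^2 + h) uncertainty correction
        h_pred = omega + gamma * f_tt + alpha * (f_tt**2 + h_tt) + delta * h_pred_prev
        Ht = np.diag(h_pred)
        Sigma = A @ Ht @ A.T + np.diag(Psi)   # Sigma_{t|t-1}
        K = Ht @ A.T @ np.linalg.inv(Sigma)   # gain
        f_tt = K @ (Y[t] - theta)             # f_{t|t}
        h_tt = np.diag(Ht - K @ A @ Ht)       # diagonal of H_{t|t}
        h_pred_prev = h_pred
        F[t], H[t] = f_tt, h_tt
    return F, H
```

Because the transition equation is degenerate, these filtered moments are also the smoothed ones, as noted above.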
3.2.2.2.The EM algorithm
Actually, this algorithm could still be applied to our model if we knew the conditional variance parameters, even though they are not zero. To see why, assume temporarily that the f_{t}’s are observed. Under normality, we have from (1)-(2) that:
$${y}_{t}\mid {f}_{t},\text{\hspace{0.17em}}{D}_{t-1}\sim N\left(\Delta {\tilde{y}}_{t},\text{\hspace{0.17em}}\Psi \right)$$
where $\Delta =\left[\theta \text{\hspace{0.17em}}A\right]$ is the q × (k + 1) matrix of “regression” parameters, and ${\tilde{y}}_{t}{}^{\prime}=\left[1\text{\hspace{0.17em}}{{f}^{\prime}}_{t}\right]$. Therefore, the density function of the tth observation, conditional on the information “available” at time t, can be factorized as:
$$p\left({y}_{t},\text{\hspace{0.17em}}{f}_{t}\mid {D}_{t-1}\right)=p\left({y}_{t}\mid {f}_{t},\text{\hspace{0.17em}}{D}_{t-1}\right)p\left({f}_{t}\mid {D}_{t-1}\right)$$
Hence, ignoring initial conditions, and assuming that Ψ has full rank, the joint log-likelihood function would be given by:
$$L\left(\Theta \mid y,\text{\hspace{0.17em}}f\right)=-\frac{T\left(q+k\right)}{2}\mathrm{ln}2\pi -\frac{T}{2}\mathrm{ln}\left|\Psi \right|-\frac{1}{2}{\displaystyle \sum _{t=1}^{T}{\left({y}_{t}-\Delta {\tilde{y}}_{t}\right)}^{\prime}{\Psi}^{-1}\left({y}_{t}-\Delta {\tilde{y}}_{t}\right)}-\frac{1}{2}{\displaystyle \sum _{t=1}^{T}{\displaystyle \sum _{i=1}^{k}\left(\mathrm{ln}{h}_{it}+\frac{{f}_{it}^{2}}{{h}_{it}}\right)}}\text{\hspace{0.17em}\hspace{0.17em}}(3)$$
When the f_{t}’s are unobservable, we can apply the EM algorithm by taking the expected value of the above likelihood function with respect to Y_{t} and the current parameter estimates (E-step: $E\left(L\left(\Theta \mid y,\text{\hspace{0.17em}}f\right)\mid {\Theta}^{\left(i\right)}\right)=Q\left(\Theta \mid {\Theta}^{\left(i\right)}\right)$), and then maximizing by solving the first-order conditions (M-step).
EStep:
If we let ${D}_{Ti}=\left\{{Y}_{T},\text{\hspace{0.17em}}{\Theta}^{\left(i\right)}\right\}$, then the conditional expected value of the complete log-likelihood function is given by:
$$Q\left(\Theta \mid {\Theta}^{\left(i\right)}\right)=E\left(L\left(\Theta \mid y,\text{\hspace{0.17em}}f\right)\mid {D}_{Ti}\right)$$
MStep:
In this step, the maximization of the function $Q\left(\Theta \mid {\Theta}^{\left(i\right)}\right)$ with respect to Δ and Ψ can be done ignoring the last two terms. Provided that these parameters increase the whole expression, the generalized EM principle still applies. By annulling the first-order conditions we find:
$${\Delta}^{\left(i+1\right)}=\left({\displaystyle \sum _{t=1}^{T}{y}_{t}E\left({\tilde{y}}_{t}^{\prime}\mid {D}_{Ti}\right)}\right){\left({\displaystyle \sum _{t=1}^{T}E\left({\tilde{y}}_{t}{\tilde{y}}_{t}^{\prime}\mid {D}_{Ti}\right)}\right)}^{-1}\text{\hspace{0.17em}\hspace{0.17em}}(4)$$
and
$${\Psi}^{\left(i+1\right)}=\frac{1}{T}diag\left\{{\displaystyle \sum _{t=1}^{T}\left({y}_{t}{y}_{t}^{\prime}-{\Delta}^{\left(i+1\right)}E\left({\tilde{y}}_{t}\mid {D}_{Ti}\right){y}_{t}^{\prime}\right)}\right\}\text{\hspace{0.17em}\hspace{0.17em}}(5)$$
Hence, we just need to find the conditional expectations in (4) and (5). These conditional expectations can be derived using the Kalman filter. If ${f}_{t}={H}_{t\mid t-1}^{1/2}{f}_{t}^{*}$, the model would be conditionally Gaussian and the Kalman filter produces the exact conditional expectations. However, if H_{t} is a function of unobserved variables, as in (1)-(2), the filter only produces approximate values. In this case, if we let $E\left({\tilde{y}}_{t}^{\prime}\mid {D}_{t}\right)={\tilde{y}}_{t\mid t}^{\left(i\right)\prime}=\left(1,\text{\hspace{0.17em}}{f}_{t\mid t}^{\left(i\right)\prime}\right)$ and $E\left({\tilde{y}}_{t}{\tilde{y}}_{t}^{\prime}\mid {D}_{t}\right)={\Omega}_{t\mid t}^{\left(i\right)}$ we get:
$${\Delta}^{\left(i+1\right)}=\left({\displaystyle \sum _{t=1}^{T}{y}_{t}{\tilde{y}}_{t\mid t}^{\left(i\right)\prime}}\right){\left({\displaystyle \sum _{t=1}^{T}{\Omega}_{t\mid t}^{\left(i\right)}}\right)}^{-1}$$
and by using this equation, we determine Ψ^{(i+1)}:
$${\Psi}^{\left(i+1\right)}=\frac{1}{T}diag\left\{{\displaystyle \sum _{t=1}^{T}\left({y}_{t}{y}_{t}^{\prime}-{\Delta}^{\left(i+1\right)}{\tilde{y}}_{t\mid t}^{\left(i\right)}{y}_{t}^{\prime}\right)}\right\}$$
These equations reduce to the usual ones when the variance parameters are zero.
3.2.2.3.Implementation Details
Unfortunately, the conditional variance parameters ω, γ, α and δ are in practice unknown. The most obvious possibility is to apply the EM algorithm to estimate these as well. However, this is not easy. First, it follows from (3) that conditional expectations of nonlinear functions of f_{t} are needed, and these expectations are practically impossible to find exactly. Second, and more importantly, the first-order conditions for the GQARCH parameters result in a very complicated simultaneous equation system with no closed-form solution.
An alternative possibility is based on the following idea. We assume that the parameters in Δ and Ψ are known, or else that they are kept constant at their values from the previous iteration, so that effectively the first part of the above log-likelihood function is constant. Taking conditional expectations of (3), we have to evaluate:
$$E\left(-\frac{1}{2}{\displaystyle \sum _{t=1}^{T}{\displaystyle \sum _{j=1}^{k}\left(\mathrm{ln}{h}_{jt}+\frac{{f}_{jt}^{2}}{{h}_{jt}}\right)}}\mid {D}_{T}\right)$$
One problem with this expression is that in general there are no simple exact formulae for the expected value of nonlinear functions of f_{t}. However, one can approximate the above expectations by ignoring Jensen’s inequality effects as:
$$-\frac{1}{2}{\displaystyle \sum _{t=1}^{T}{\displaystyle \sum _{j=1}^{k}\left(\mathrm{ln}{h}_{jt\mid t-1}^{\left(i\right)}+\frac{{f}_{jt\mid t}^{\left(i\right)2}+{h}_{jt\mid t}^{\left(i\right)}}{{h}_{jt\mid t-1}^{\left(i\right)}}\right)}}$$
where ${h}_{jt\mid t-1}^{\left(i\right)}={\omega}_{j}+{\gamma}_{j}{f}_{jt-1\mid t-1}^{\left(i\right)}+{\alpha}_{j}\left({f}_{jt-1\mid t-1}^{\left(i\right)2}+{h}_{jt-1\mid t-1}^{\left(i\right)}\right)+{\delta}_{j}{h}_{jt-1\mid t-2}^{\left(i\right)}$, ${h}_{jt\mid t}^{\left(i\right)}$ is the jth diagonal element of ${H}_{t\mid t}$ and ${f}_{jt\mid t}^{\left(i\right)}$ is the jth element of ${f}_{t\mid t}$, both evaluated at the (i)th iteration. Then, one could find ${\omega}_{j}^{\left(i+1\right)},\text{\hspace{0.17em}}{\gamma}_{j}^{\left(i+1\right)},\text{\hspace{0.17em}}{\alpha}_{j}^{\left(i+1\right)}\text{\hspace{0.17em}and\hspace{0.17em}}{\delta}_{j}^{\left(i+1\right)}$ by iteratively maximizing this approximation to the expected log-likelihood function. However, since we are not dealing with the exact expression for the expected log-likelihood function, it is not clear that L(Θ∣Y) will be maximized in this way. In the case where the data generating process is ${f}_{t}={H}_{t\mid t-1}^{1/2}{f}_{t}^{*}$, the model is conditionally Gaussian and the log-likelihood function would be given by:
$$L\left(\Theta \mid Y\right)=-\frac{Tq}{2}\mathrm{ln}2\pi -\frac{1}{2}{\displaystyle \sum _{t=1}^{T}\left(\mathrm{ln}\left|{\Sigma}_{t\mid t-1}\right|+{\left({y}_{t}-\theta \right)}^{\prime}{\Sigma}_{t\mid t-1}^{-1}\left({y}_{t}-\theta \right)\right)}$$
Treating once more the parameters Δ and Ψ as known, and taking conditional expectations of (3), we obtain the expected log-likelihood as a function of the GQARCH parameters. Now ${\omega}_{j}^{\left(i+1\right)},\text{\hspace{0.17em}}{\gamma}_{j}^{\left(i+1\right)},\text{\hspace{0.17em}}{\alpha}_{j}^{\left(i+1\right)}\text{\hspace{0.17em}and\hspace{0.17em}}{\delta}_{j}^{\left(i+1\right)}$ can be obtained by numerical maximization of this exact expected log-likelihood function, but notice that ${f}_{jt-1\mid t-1}$ and ${h}_{jt-1\mid t-1}$ have to be re-evaluated once per parameter per iteration. Hence, the Kalman filter has to be used as often as if we were to maximize the log-likelihood function of the observables L(Θ∣Y) directly.
3.2.2.4.Monte Carlo Simulations
The basic problem of this study is determining the VaR of a portfolio of exchange rates via Monte Carlo simulations, which generate risk measures through a statistical model. Our contribution is that this simulation method uses the CHFM to generate different scenarios for the risk factors and combines these scenarios to generate correlated and heterogeneous future returns. The return of the portfolio at present time t will be denoted by ${R}_{t}^{p}$. Assuming that ${R}_{t}^{p}$ depends on q risk factors, the main steps mirror those of the classical Monte Carlo approach above, except that the scenarios for the risk factors are generated from the fitted CHFM.
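Under the model's assumptions, the simulation step can be sketched as follows (a hedged sketch: the one-step-ahead factor variances `h_pred` are taken as given from the filter, and all names are illustrative):

```python
import numpy as np

def chfm_var(theta, A, Psi, h_pred, weights, alpha=0.01, n_sims=50_000, seed=0):
    """Monte Carlo VaR with CHFM scenarios: draw factors from
    N(0, H_{t+1|t}) with diagonal GQARCH-predicted variances h_pred,
    add idiosyncratic noise, map to returns y = theta + A f + eps,
    and take the alpha-quantile loss of the portfolio."""
    rng = np.random.default_rng(seed)
    k = A.shape[1]
    f = rng.standard_normal((n_sims, k)) * np.sqrt(h_pred)      # factor scenarios
    eps = rng.standard_normal((n_sims, A.shape[0])) * np.sqrt(Psi)
    y = theta + f @ A.T + eps                                   # simulated returns
    rp = y @ np.asarray(weights)
    return -np.quantile(rp, alpha)
```

Because all q returns share the same factor draws, the simulated scenarios are correlated across currencies, which is the point of the factor structure.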
4.EVALUATION MODELS: BACKTESTING
Backtesting methods provide a basis for comparison between the different approaches (Nieppola, 2009) and are analysed and estimated here with special attention. We compare the out-of-sample VaR estimates using the Christoffersen tests (Kupiec test, independence test and joint test). The evaluation relies on a binary loss function that treats any loss larger than the VaR estimate as an ‘exception.’ It is then possible to see whether failure rates are in accordance with the selected confidence level (Miletic and Miletic, 2015) through the unconditional coverage test. In addition, an accurate VaR model produces exceptions that are independent over time, which is checked by the independence test (Evers and Rohde, 2014). Finally, we examine both coverage and the dynamics of the exceptions through the joint test, named the conditional coverage test (Jorion, 2007).
4.1.Kupiec Test or PoF Test
The Kupiec test, also named the proportion of failures (PoF) test, examines the unconditional coverage property (Kupiec, 1995). Its main inputs are n (the number of exceptions) and T (the total number of observations), from which we compute $\widehat{\alpha}$ (the observed proportion of failures: $\widehat{\alpha}=n/T$), to be compared with α (the expected proportion of failures). The null hypothesis is that the observed probability of an exception occurring equals the expected one.
The PoF test statistic is the likelihood ratio (LR_{PoF}):
$$L{R}_{PoF}=-2\mathrm{ln}\left(\frac{{\left(1-\alpha \right)}^{T-n}{\alpha}^{n}}{{\left(1-\widehat{\alpha}\right)}^{T-n}{\widehat{\alpha}}^{n}}\right)\text{\hspace{0.17em}\hspace{0.17em}}(6)$$
which is asymptotically chi-square distributed with one degree of freedom under the null.
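The standard PoF statistic and its chi-square p-value can be sketched as (a direct transcription of the likelihood ratio; for very large T a fully log-domain formulation would be numerically safer):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_exceptions, T, alpha):
    """Kupiec proportion-of-failures LR test.  Under H0 (observed
    failure rate equals alpha) the statistic is chi-square with 1 df."""
    n, a = n_exceptions, alpha
    a_hat = n / T
    lr = -2 * (np.log((1 - a) ** (T - n) * a ** n)
               - np.log((1 - a_hat) ** (T - n) * a_hat ** n))
    return lr, chi2.sf(lr, df=1)
```

When the observed failure rate equals α exactly, the statistic is zero; large values lead to rejection of the VaR model at the usual chi-square critical values (3.84 at 5%).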
4.2.Independence Test
The independence test takes into account the independence of exceptions. Its statistic is the likelihood ratio (LR_{ind}):
$$L{R}_{ind}=-2\mathrm{ln}\left(\frac{{\left(1-\pi \right)}^{{n}_{00}+{n}_{10}}{\pi}^{{n}_{01}+{n}_{11}}}{{\left(1-{\pi}_{0}\right)}^{{n}_{00}}{\pi}_{0}^{{n}_{01}}{\left(1-{\pi}_{1}\right)}^{{n}_{10}}{\pi}_{1}^{{n}_{11}}}\right)\text{\hspace{0.17em}\hspace{0.17em}}(7)$$
The null hypothesis is that the probability of an exception occurring is independent of whether an exception occurred on the previous day (π_{0} = π_{1}), where π_{i} is the probability of having a failure conditional on state i on the previous day, and n_{ij} (i = 0,1; j = 0,1) is the number of days in which state j is reached while the previous day was in state i.
4.3.Conditional Coverage Test (Joint Test)
According to this statistical test, an accurate VaR model satisfies both unconditional coverage and independence between exceptions (Christoffersen, 1998). The joint statistic is
$$L{R}_{cc}=L{R}_{PoF}+L{R}_{ind}\text{\hspace{0.17em}\hspace{0.17em}}(8)$$
which is asymptotically chi-square distributed with two degrees of freedom under the null.
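The independence and conditional coverage statistics can be sketched from a 0/1 hit sequence as follows (a generic implementation of the standard formulas; the small-count guards are our own choice, not from the paper):

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_tests(hits, alpha):
    """Independence and conditional-coverage LR tests on a 0/1 hit
    sequence (1 = VaR exception).  Returns (LR_ind, LR_cc, p_cc)."""
    hits = np.asarray(hits, dtype=int)
    prev, curr = hits[:-1], hits[1:]
    n00 = np.sum((prev == 0) & (curr == 0)); n01 = np.sum((prev == 0) & (curr == 1))
    n10 = np.sum((prev == 1) & (curr == 0)); n11 = np.sum((prev == 1) & (curr == 1))
    pi0 = n01 / max(n00 + n01, 1)            # P(hit | no hit yesterday)
    pi1 = n11 / max(n10 + n11, 1)            # P(hit | hit yesterday)
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)

    def ll(p, a, b):                         # a "hits" out of a+b transitions
        return a * np.log(max(p, 1e-300)) + b * np.log(max(1 - p, 1e-300))

    lr_ind = -2 * (ll(pi, n01 + n11, n00 + n10)
                   - ll(pi0, n01, n00) - ll(pi1, n11, n10))
    # Kupiec part, computed on the full hit series
    n, T = hits.sum(), len(hits)
    a_hat = n / T
    lr_pof = -2 * ((T - n) * np.log(1 - alpha) + n * np.log(alpha)
                   - (T - n) * np.log(max(1 - a_hat, 1e-300))
                   - n * np.log(max(a_hat, 1e-300)))
    lr_cc = lr_pof + lr_ind                  # chi-square(2) under H0
    return lr_ind, lr_cc, chi2.sf(lr_cc, df=2)
```

Clustered exceptions make π₁ much larger than π₀, which inflates LR_ind and causes the joint test to reject even when the overall failure rate equals α.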
5.THE DATASET: TUNISIAN FOREIGN EXCHANGE RATES
5.1.Data Description
The models presented in the empirical literature are applied to a portfolio composed of the most representative currencies; following the same approach for the Tunisian context, we have opted for the three foreign exchange rates most traded on the Tunisian FX market, namely TND/USD, TND/EUR and TND/JPY. The dataset contains 2,251 daily exchange rates from January 08, 2008 to December 30, 2016. Our sample consists of a long data span that includes periods of pronounced fluctuation and thus enables us to examine how the CHFM approach performs during such periods. We have opted for daily sampling frequencies. The exchange rate series were extracted from a historical exchange database provided by the Tunisian central bank and an FX database. In order to evaluate the VaR models, exchange rates are transformed into log-returns.
As can be seen, Figure 1 illustrates the movements of log-returns for the exchange rates. It becomes very clear that the volatility of the Tunisian currency returns changes over time: periods of high volatility tend to be followed by periods of high volatility and periods of low volatility by periods of low volatility, and we also detect the presence of a co-movement between the different FX rates. This relationship is relatively apparent during the observation period and is the central fact of our investigation into the Tunisian FX market. The behaviour of log-returns for the Tunisian currencies is highly volatile between 2008 and the beginning of 2009. As mentioned above, this period is considered a transitional step toward a managed floating exchange rate regime; such a period of transition is characterized by a series of significant changes that lead to significant volatility and repeated shocks.
5.2.Descriptive Statistics
In this part, we describe statistical features of logreturns related to exchange rates using descriptive statistics, which are presented in Table 1:
The table demonstrates that the log-return series of the exchange rates have positive mean daily returns. The returns of TND/EUR and TND/JPY are positively skewed, while those of TND/USD are negatively skewed. The null hypothesis that the skewness coefficients conform to a normal distribution’s value of zero is rejected at the 5% significance level; negative skewness indicates that the distribution has a long left tail, which implies a high probability of observing large negative values. In addition, the returns for all currencies also exhibit excess kurtosis, particularly for TND/USD and TND/JPY.
The null hypothesis that the kurtosis coefficients conform to the normal value of three is also rejected for all exchange rates; pronounced kurtosis is thus one of the salient features of the Tunisian FX market. According to the Jarque-Bera normality test, the null hypothesis of normality is rejected (at the 5% significance level, the χ²(2) critical value is 5.991).
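The rejection logic can be sketched as follows. Both samples are simulated (a Gaussian benchmark and a fat-tailed Student-t alternative), not the actual TND series, and the statistic is the standard JB = n/6 · (S² + (K − 3)²/4):

```python
import numpy as np

def jarque_bera(x):
    """Sample skewness, kurtosis and the Jarque-Bera statistic
    JB = n/6 * (S^2 + (K - 3)^2 / 4)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    s2 = np.mean(z**2)
    skew = np.mean(z**3) / s2**1.5
    kurt = np.mean(z**4) / s2**2
    jb = n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)
    return skew, kurt, jb

rng = np.random.default_rng(1)
normal_sample = rng.standard_normal(2250)        # thin-tailed benchmark
heavy_sample = rng.standard_t(df=3, size=2250)   # fat tails, as in FX returns

skew_n, kurt_n, jb_n = jarque_bera(normal_sample)
skew_h, kurt_h, jb_h = jarque_bera(heavy_sample)

# Normality is rejected when JB exceeds the chi-square(2) 5% critical
# value of 5.991.
CRIT = 5.991
```

The fat-tailed sample produces a kurtosis far above three and a JB statistic far beyond the critical value, which is exactly the pattern reported for the Tunisian log-returns.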
In Figure 2 we depict the histograms of the individual time series. On each histogram we superimpose the normal density function with the same mean and variance. Also plotted in Figure 2 are the normal probability plots for the three return series. The purpose of a normal probability plot is to assess graphically whether the data could come from a normal distribution: if the data are normal, the plot is linear, whereas other distribution types introduce curvature. It is clear from this figure that the returns within the given holding periods are not normally distributed. In particular, the tails of the return distributions are heavier than those of the normal distribution, which the normal probability plots highlight explicitly: the left tail (red points) lies above the blue line and the right tail (red points) lies below it.
5.3.Tests of Stationarity
We apply statistical tests to confirm the stationarity of our log-return series, namely the KPSS test (Kwiatkowski, Phillips, Schmidt and Shin, 1992), which tests the null hypothesis that the observed time series is stationary, and the PP test (Phillips and Perron, 1988), which tests the null hypothesis of a unit root (a non-stationary time series).
Table 2 reports the results of the KPSS test for the logarithmic returns, for which the null hypothesis cannot be rejected. Conversely, when the PP statistics in the same table are compared with the 5% critical value, the null hypothesis of a unit root is rejected. From these two statistical tests we conclude that the log-returns of the Tunisian currencies are stationary. All these results highlight the usefulness of the CHFM, which accommodates stationary but heteroskedastic and asymmetric return distributions for the Tunisian currencies.
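A simplified numpy sketch of the KPSS statistic (level-stationarity case, Bartlett-kernel long-run variance) illustrates the contrast the tests draw; the two series below are simulated, and 0.463 is the standard 5% critical value for the level case from Kwiatkowski et al. (1992):

```python
import numpy as np

def kpss_stat(x, lags=None):
    """KPSS level-stationarity statistic with a Bartlett-kernel long-run
    variance (simplified sketch of Kwiatkowski et al., 1992)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    e = x - x.mean()                      # residuals from a constant mean
    s = np.cumsum(e)                      # partial sums S_t
    if lags is None:
        lags = int(12 * (n / 100.0) ** 0.25)
    lrv = np.sum(e**2) / n                # long-run variance estimate
    for k in range(1, lags + 1):
        w = 1.0 - k / (lags + 1.0)        # Bartlett weights
        lrv += 2.0 * w * np.sum(e[k:] * e[:-k]) / n
    return np.sum(s**2) / (n**2 * lrv)

rng = np.random.default_rng(2)
noise = rng.standard_normal(2000)             # stationary series
walk = np.cumsum(rng.standard_normal(2000))   # unit-root (non-stationary)

stat_noise = kpss_stat(noise)
stat_walk = kpss_stat(walk)
# 5% critical value for the level case: 0.463 (reject stationarity above it).
```

A stationary series keeps the statistic small, while a unit-root series pushes it well past the critical value, mirroring the KPSS/PP verdict for the log-returns.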
5.4.Correlation Analysis
In order to explain the interdependence between the movements of the FX rates, we use the correlation coefficient. The results for 2008 to 2016 are reported in Table 3:
TND/USD and TND/EUR have a strong negative correlation (−0.5772); this coefficient demonstrates that the two FX rates do not behave similarly. In other words, the TND appreciates against the USD while depreciating against the EUR, and vice versa. Therefore, the gain obtained on one currency (TND/USD) recovers the loss on the other (TND/EUR). Likewise, the correlation coefficient between TND/JPY and TND/EUR (−0.4256) reveals opposite movements of the EUR and the JPY. Finally, TND/USD and TND/JPY have a fairly strong positive correlation (0.5239): the two FX rates tend to move together.
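This dependence structure can be reproduced in a small sketch; the target matrix below mimics the signs and rough magnitudes of Table 3, but the returns are simulated, not the paper's data:

```python
import numpy as np

# Target correlation matrix mimicking the signs reported in Table 3
# (illustrative values, not the paper's data).
target = np.array([
    [ 1.00, -0.58,  0.52],   # TND/USD
    [-0.58,  1.00, -0.43],   # TND/EUR
    [ 0.52, -0.43,  1.00],   # TND/JPY
])

# Draw correlated returns via a Cholesky factor of the target matrix.
rng = np.random.default_rng(3)
L = np.linalg.cholesky(target)
returns = rng.standard_normal((2250, 3)) @ L.T

corr = np.corrcoef(returns, rowvar=False)
```

The sample correlation matrix recovers the signs of the construction: negative between TND/USD and TND/EUR and between TND/EUR and TND/JPY, positive between TND/USD and TND/JPY.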
5.5.Empirical Results: Identification of the Best VaR Model
The objective of this last part is to identify the most appropriate approach for forecasting the VaR of the portfolio of Tunisian FX rates. We first examine the performance of the VaR models using a rolling-sample method. To estimate the VaR, we divide the dataset of Tunisian FX returns into two parts: the in-sample (estimation) period from 08/01/2008 to 05/01/2009 (250 observations), and the out-of-sample (test) period from 06/01/2009 to 30/12/2016 (2,000 observations). To establish the VaR number of the Tunisian public external debt portfolio for a given confidence level (1−α), we used the following portfolio weights: δ_{1} = 15% (TND/USD), δ_{2} = 75% (TND/EUR) and δ_{3} = 10% (TND/JPY).
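With these weights, the portfolio return and a simple historical VaR can be sketched as follows; the returns are simulated placeholders, and the linear aggregation of log-returns is the standard small-return approximation:

```python
import numpy as np

# Portfolio weights from the paper: 15% TND/USD, 75% TND/EUR, 10% TND/JPY.
weights = np.array([0.15, 0.75, 0.10])

# Illustrative daily log-returns for the three rates (simulated, not the data).
rng = np.random.default_rng(4)
returns = 0.005 * rng.standard_normal((2000, 3))

# For small daily returns, the portfolio return is well approximated by the
# weighted sum of the individual log-returns.
port = returns @ weights

def hist_var(r, alpha):
    """Historical VaR at risk level alpha: loss exceeded with probability alpha."""
    return float(-np.quantile(r, alpha))

var_99 = hist_var(port, 0.01)   # 1% risk level
var_95 = hist_var(port, 0.05)   # 5% risk level
```

As expected, a lower risk level (higher confidence) yields a larger VaR number.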
The main difficulty in using Monte Carlo simulations for VaR inference is the amount of time it takes to compute accurate estimates, especially when the portfolio consists of many risk factors and/or when the confidence level is high. To test the dependence of the numerical results on the number of Monte Carlo steps, we repeated the estimation for various numbers of steps ranging from 100 to 2,000. For the Tunisian public debt portfolio, Figure 3 presents the estimated losses obtained from a CMC simulation on the one hand and from Monte Carlo simulations using the FA, GQARCH, MFA and CHFM approaches on the other. The figure shows that once about 500 Monte Carlo steps are reached, the resulting VaR estimates no longer change significantly with any further increase in the number of steps. This behaviour is observed for the CHFM approach at every confidence level.
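The convergence check can be sketched with a simple Gaussian loss model standing in for the paper's factor models (for a unit-variance Gaussian, the exact 99% VaR is about 2.326):

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_var(n_steps, alpha=0.01, sigma=1.0):
    """Monte Carlo VaR of a Gaussian P&L: the (1 - alpha) quantile of losses."""
    losses = -sigma * rng.standard_normal(n_steps)
    return float(np.quantile(losses, 1.0 - alpha))

# Re-estimate the 99% VaR for an increasing number of Monte Carlo steps.
steps = [100, 250, 500, 1000, 2000]
estimates = {n: mc_var(n) for n in steps}
# The exact 99% VaR of a unit-variance Gaussian is about 2.326.
```

The sampling error of the estimated quantile shrinks with the number of steps, which is why the paper's VaR curves flatten out beyond roughly 500 simulations.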
Therefore, with a rolling sample, we estimated the parameters of the VaR models (CMC, FA, MFA, GQARCH (1, 1) and CHFM) using 500 simulations for the different risk levels. The next step checks the stability and reliability of the results over time through a backtesting procedure: we apply Kupiec's POF test (unconditional coverage test), the independence test and the conditional coverage test. The VaR numbers derived from the different approaches yield a wide range of outcomes. The backtesting results are presented in Table 4, Table 5, Table 6 and Table 7.
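The three backtests can be sketched as follows; the violation sequence at the end is a constructed example (20 evenly spaced violations over 2,000 days, i.e. a 1% failure rate), not the paper's hit series:

```python
import numpy as np

def loglik(T, x, p):
    """Bernoulli log-likelihood with the convention 0 * log(0) = 0."""
    out = 0.0
    if T - x > 0:
        out += (T - x) * np.log(1.0 - p)
    if x > 0:
        out += x * np.log(p)
    return out

def kupiec_pof(hits, alpha):
    """Kupiec unconditional coverage LR statistic (chi-square, 1 df)."""
    T, x = len(hits), int(np.sum(hits))
    return -2.0 * (loglik(T, x, alpha) - loglik(T, x, x / T))

def christoffersen_ind(hits):
    """Christoffersen independence LR statistic (chi-square, 1 df)."""
    h = np.asarray(hits, dtype=int)
    a, b = h[:-1], h[1:]
    n00 = int(np.sum((a == 0) & (b == 0)))
    n01 = int(np.sum((a == 0) & (b == 1)))
    n10 = int(np.sum((a == 1) & (b == 0)))
    n11 = int(np.sum((a == 1) & (b == 1)))
    p01 = n01 / (n00 + n01) if n00 + n01 else 0.0
    p11 = n11 / (n10 + n11) if n10 + n11 else 0.0
    p = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_null = loglik(n00 + n01 + n10 + n11, n01 + n11, p)
    ll_alt = loglik(n00 + n01, n01, p01) + loglik(n10 + n11, n11, p11)
    return -2.0 * (ll_null - ll_alt)

def conditional_coverage(hits, alpha):
    """Conditional coverage statistic: LR_cc = LR_uc + LR_ind (2 df)."""
    return kupiec_pof(hits, alpha) + christoffersen_ind(hits)

# Constructed example: 20 evenly spaced violations in 2,000 days -> a 1%
# failure rate that exactly matches a 1% VaR, so LR_uc is 0.
hits = np.zeros(2000, dtype=int)
hits[::100] = 1
lr_uc = kupiec_pof(hits, 0.01)
lr_cc = conditional_coverage(hits, 0.01)
```

Because the empirical failure rate matches the nominal 1% exactly and the violations never cluster, both statistics stay far below the χ² critical values, i.e. the model would not be rejected.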
Moreover, Figure 4 shows that the 99%, 98%, 95% and 90% VaR estimates from the CHFM and MFA models react faster to volatility than those from the CMC, GQARCH (1, 1) and FA models. In addition, the CMC, GQARCH and FA methods give lower VaR estimates than the CHFM and MFA approaches.
Evidently, VaR computed through the CHFM and MFA with Monte Carlo simulation shows an improvement in the management of exchange-risk exposure compared with the CMC, FA and GQARCH (1, 1) approaches over time. From 2009 to 2016, as expected, the CMC method performs poorly. Indeed, this approach underestimates risk: it gives poor results at the 1%, 2%, 5% and 10% risk levels under the Christoffersen test (the conditional coverage statistics are respectively 32.215, 24.418, 19.229 and 17.997).
The VaR obtained through the CMC has suffered from the fluctuation of the exchange rates of the three currencies. Consequently, the results lead us to reject this model's credibility given its poor significance level. It is noticeable from the FX series that the VaR estimates are affected by significant fluctuations of the volatility (see Figure 4). Thus, the main disadvantage of this method is its lack of reactivity, which suggests, in part, the use of additional modelling structures that incorporate the interaction between the three risk factors (TND/USD, TND/EUR, TND/JPY). Such dependence can be represented by more adequate approaches that can estimate the Value-at-Risk under the assumption of heteroskedastic risk factors. The FA method explains the correlation between the observed variables with a minimum number of factors, but it does not take into account the heterogeneity and conditional heteroskedasticity in the volatility of the different currencies. As noted before, the CHFM and MFA combined with Monte Carlo simulations are more suitable for constructing the joint multivariate distribution of losses and are more flexible and realistic in allowing a wide range of dependence structures.
This improvement in the management of FX risk is possibly due mainly to the changes in the FX regime over time. Evidently, we have verified, for the different risk levels, that the CHFM and MFA methods adjust rapidly compared with the competing methods (see Figure 4), so as to better estimate the fluctuation of the volatility over the out-of-sample period and to detect the fluctuation of the VaR sufficiently well. A first inspection of Table 4, Table 5 and Table 6, together with Figure 4, shows that the VaR violations fluctuate with the volatility of the respective period. As mentioned earlier, these models are able to capture adequately particular characteristics of the portfolio series such as changes in volatility, heterogeneity and dependency.
The results also show that the detection of the first violation is improved. In fact, at the 10% risk level both the CHFM and MFA models capture losses as early as the 4th day, compared with the 11th day for the CMC approach and the 9th day for both the FA and GQARCH (1, 1) models; at the 1% and 2% levels it is the 13th day. For a 1% coverage rate, the results are similar for the CHFM, GQARCH (1, 1) and MFA methods: the LRCC test gives the same statistic (1.1853) and an identical failure rate, estimated at 1.15%. More precisely, the two VaR sequences obtained from the CHFM and MFA methods are too similar for the backtests to discriminate between them.
From Table 5 and Table 6, the tests conclude in favour of the validity of both the 1% and 2% risk measures for the CHFM model as well as for the MFA method. However, the results vary considerably from one specification to another; more precisely, it is possible to conclude affirmatively that, for this portfolio, the CHFM gives the best results in terms of backtesting. Indeed, its unconditional and conditional coverage tests give better results than those of the MFA model.
Table 5 shows that the CHFM model performs best for the FX portfolio at the 2% level, with a failure rate of 2.05%. The CHFM model appears remarkably accurate in that case compared with the MFA approach; indeed, it provides accurate VaR forecasts for this portfolio at the 98% confidence level. Moreover, it passes both the unconditional coverage test (0.0923) and the independence test (0.0000), implying a Christoffersen test statistic (1.2047) below the χ² critical value (5.991).
Table 6 and Table 7 confirm the previous findings at the 2% risk level. In fact, the CHFM model forecasts the FX portfolio better at the 5% and 10% risk levels than the MFA model, since its LRCC statistics (1.1251; 1.626) are smaller than those of the other method (1.8083; 1.8307).
Finally, we can compare the backtesting results using the following quantity: $S=\sum_{i}\left|\widehat{\alpha}_{i}-\alpha_{i}\right|$, the sum running over the four risk levels,
where (${\widehat{\alpha}}_{i}$) are the failure rates for each method. The best model is the one with the smallest value of S; hence, using S we can rank our VaR models. Table 8 shows that the CHFM is the most accurate and consistent approach for forecasting the VaR of currency risk: its observed proportion of failures $\widehat{\alpha}$ is very close to the expected proportion α (1%, 2%, 5% and 10%). The MFA method ranks second, and the CMC approach has the largest S.
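Reading S as the sum over the four risk levels of the absolute gaps between observed and expected failure rates (an assumption about the formula's exact form), the ranking can be sketched as follows; only the CHFM rates 1.15% and 2.05% come from the text, the remaining figures being illustrative:

```python
import numpy as np

# Expected failure rates (the four risk levels of the paper).
targets = np.array([0.01, 0.02, 0.05, 0.10])

# Observed failure rates: CHFM's 1.15% and 2.05% come from the text;
# every other figure is illustrative, not from Table 8.
observed = {
    "CHFM": np.array([0.0115, 0.0205, 0.0510, 0.1005]),
    "MFA":  np.array([0.0115, 0.0230, 0.0560, 0.1080]),
    "CMC":  np.array([0.0250, 0.0400, 0.0800, 0.1400]),
}

# S = sum_i |observed failure rate - expected failure rate|.
scores = {m: float(np.abs(r - targets).sum()) for m, r in observed.items()}
ranking = sorted(scores, key=scores.get)  # smallest S first
```

With these inputs, the model whose failure rates hug the nominal levels most closely (CHFM) comes out first and the one that deviates most (CMC) comes out last, matching the ordering reported in Table 8.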
As a consequence, the comparison of the different models and specifications shows that, according to the different measures of failure-rate forecasting performance and the backtesting of the Value at Risk, the CHFM and MFA approaches provide the best out-of-sample estimates at the 1%, 2%, 5% and 10% risk levels for the Tunisian FX market. The CHFM ranks first, as it yields fewer exceptions and more significant tests across the different risk levels than the competing models; the MFA ranks second.
6.CONCLUSION
Modelling financial time series is a difficult but no less important part of financial risk management. The difficulties are caused by specific characteristics of financial time series, such as heterogeneity, fat tails, conditional heteroskedasticity, volatility clustering and dependency, which cannot be easily modelled. In this paper a new methodology based on the conditionally heteroskedastic latent factor model was proposed and backtested on the Tunisian public debt portfolio. Assuming risk levels of 1%, 2%, 5% and 10%, we calculated the VaRs of this portfolio on the basis of 250 banking days, using the classical CMC, FA, MFA, the GQARCH (1, 1) model and our proposed CHFM model. The computations and the corresponding backtesting were performed on historical foreign exchange rates spanning eight years, which means that our backtesting statistics are based on 2,000 measurements.
More precisely, to forecast the VaR through these approaches, the analysis is conducted on a test period from 06/01/2009 to 30/12/2016. We showed that the CHFM and MFA models give adequate VaR estimates and are the most accurate in assessing Tunisian currency risk. Our results rank the CHFM and MFA (particularly the CHFM) in the first position for the analysis of the Value at Risk at the 1%, 2%, 5% and 10% risk levels. These models outperform the other approaches as they yield an adequate capital allocation and a good estimation of exceptions. This finding is tested empirically using backtesting techniques under different tests and risk levels. We also notice that the CMC and FA models give statistically insignificant estimations, lacking the property of correct conditional coverage; thus, the results lead us to reject these models' credibility given their poor significance levels.
To decide whether one should prefer the conditionally heteroskedastic latent factor model to the traditional models, a supplementary investigation of the speed of the proposed algorithm would be useful. As we considered only three risk factors, a natural extension of this work would be to include four or more risk factors. Furthermore, our model can be generalized to one in which the specific factors are stochastic functions of time. By combining the conditionally heteroskedastic latent factor model with hidden Markov chain models, we can derive a dynamical local model for the segmentation and prediction of multivariate conditionally heteroskedastic financial time series (see, for instance, Saidane and Lavergne, 2007, 2008, 2009, 2013). The study of such models would provide a further step in the extension of hidden Markov models to mixed conditionally heteroskedastic latent factor models and allow further flexibility in market risk analysis and Value-at-Risk applications.