ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.16 No.3 pp.400-414
DOI : https://doi.org/10.7232/iems.2017.16.3.400

A Monte-Carlo-based Latent Factor Modeling Approach with Time-Varying Volatility for Value-at-Risk Estimation: Case of the Tunisian Foreign Exchange Market

Mohamed Saidane*
College of Business and Economics, Qassim University, Kingdom of Saudi Arabia
Corresponding Author, mohamed.saidane@gmail.com
May 3, 2017 July 14, 2017

ABSTRACT

The normal probability distribution assumption used to model price changes in finance is one of the largest imperfections in Value-at-Risk (VaR) estimation. In fact, financial returns are leptokurtic rather than normally distributed, and the empirical distributions are often skewed. In these cases, the normal distribution assumption results in over- or underestimation of the VaR, especially when the quantiles are very high or very low. It is therefore necessary to respect the leptokurtic and skewed nature of return distributions. In this paper, we propose a new approach for portfolio VaR estimation, which combines the standard latent factor model with the generalized quadratic autoregressive conditionally heteroskedastic (GQARCH) model. This new “hybrid” specification provides an alternative, compact model to handle co-movements, heteroskedasticity and intra-frame correlations in financial data. For maximum likelihood estimation we use an iterative approach based on an extended version of the Kalman filter algorithm combined with the Expectation Maximization (EM) algorithm. Using a set of historical data from the Tunisian foreign exchange market, the model parameters are estimated. The fitted model, combined with a modified Monte-Carlo simulation algorithm, is then used to predict the VaR of the Tunisian public debt portfolio. Through a backtesting analysis, we find that this new specification produces far more accurate VaR forecasts than the mixture of factor analyzers and other competing approaches.




    1.INTRODUCTION

Value at Risk (VaR) has received much attention in recent years because it links capital to the amount of risk that a financial institution can tolerate; this importance does not derive from supervisors’ constraints alone (Basel Committee on Banking Supervision, BCBS) but places VaR at the centre of risk management. In addition, VaR is a standard tool used to evaluate market risk and to estimate future losses on a portfolio of financial assets (or a single financial asset) at a given confidence level and over a given period of time (Corkalo, 2011). According to this measure, the level of losses in exceptional periods is larger than in normal periods; it also facilitates market risk measurement by providing a single number that is easy to interpret (see Linsmeier and Pearson, 1996; Stavroyiannis and Zarangas, 2013).

Consequently, much research has focused on required economic capital as an internal estimate of the capital a financial institution needs, using different approaches. The main difficulty is to produce a correct and effective “VaR implementation,” and then to adopt internal, more sophisticated models, under certain constraints and after supervisory validation, with the expectation that valid approaches could offer, on a regular basis, a lower capital charge than basic ones. BCBS (2009) indicates that, to evaluate market risk using their internal models, banks should be able to manipulate sophisticated models in many areas such as risk supervision, audit control, etc. Many studies suggest that a potential cause of increased risk of loss is inadequate systems (such as failed internal processes) or external events.

It appears that current regulations and standard approaches for estimating the VaR, based mainly on the normality assumption, have been invalidated by many studies (see Berkowitz and O’Brien, 2002; Carol and Sheedy, 2008), as they strongly underestimate the extreme events observed in the market. The successive financial crises have led to greater attention to modelling the tail behaviour of return distributions, and to the use of the conditionally heteroskedastic latent factor model as a central concept in risk management.

The purpose of this research is to apply VaR methodologies using the classical Monte-Carlo approach (CMC), the probabilistic factor analysis model (FA), the mixture of factor analyzers model (MFA), the univariate GQARCH(1, 1) model and our proposed conditionally heteroskedastic latent factor model (CHFM), and to discuss their consequences for the Tunisian FX decision-making process. This article deals with risk, capital requirements, and the relationship between them, with the intent to link VaR measurement methodologies to their impact on internal processes for the Tunisian FX market; since the methodologies differ, the VaR numbers they generate also differ. The selection of an appropriate method to estimate VaR is thus difficult. Hence it becomes necessary to backtest these VaR methods for exceptions in order to judge their performance for the Tunisian FX market over a time period. This should give us some idea of which method is best able to satisfy the Tunisian FX market’s needs.

The paper is structured as follows: after the introduction, we present the specificities of the Tunisian FX market. Section 3 reviews the empirical literature and presents the five approaches used to estimate the VaR. Section 4 describes the backtesting framework used to evaluate the performance of a VaR model. In Section 5, we introduce our dataset, discuss its statistical characteristics and report the empirical results for the FX market. Perspectives and conclusions are summarized in the last section.

    2.THE CHARACTERISTIC OF THE TUNISIAN FOREIGN EXCHANGE MARKET

Nowadays, the fluctuation of a country’s exchange rates depends mainly on its macro-economic indicators and on its financial stability (Aron et al., 2014; Samson, 2013). For that reason, an assessment of risks related to exchange rates is required. The first objective of currency risk management is to take into account the negative effect of daily exchange rate volatility. In addition, managing foreign exchange risks presents one of the most significant and persistent problems for Tunisian financial institutions. The choice of an exchange rate regime is therefore of great importance; it calls into question the economic policy of a country, its room for maneuver and its mode of macroeconomic adjustment. The choice of FX regime is a characteristic behaviour of a system, maintained and adopted by mutually reinforcing processes, and the absence of an adequate foreign-exchange rate framework is one of the fundamental factors that have led to major financial losses among institutions in many countries.

The Tunisian exchange rate policy is very active. Tunisia had, from 1990 till 1999, a “crawling peg” exchange rate regime. Since 2000, following IMF recommendations, the Tunisian central bank has reduced its intervention in the FX market and allowed more flexibility in exchange rates by applying a managed float regime. Indeed, Tunisia adopted “managed floating with no predetermined path” for the exchange rate from 2000 till 2001, changed again to a “crawling peg” from 2002 till 2004, returned to managed floating with no predetermined path from 2005 till May 2007, and in 2008 adopted a “conventional pegged arrangement against a composite” (IMF). These different regimes were considered transitional steps towards a free-floating exchange rate regime. Indeed, from 2009 till 2016 the official exchange rate regime applied in Tunisia was a “managed floating” exchange rate regime. The Tunisian exchange rate is, according to the International Monetary Fund, more flexible, but not sustained. Many international institutions, including the IMF, support greater flexibility in the exchange rate to reduce tensions on reserves. However, the difficulty is that the Tunisian dinar has become more volatile despite the intervention of the Central Bank on the foreign exchange market to avoid a more pronounced depreciation. It is true that the exchange rates of the Tunisian dinar were determined by the interbank market. In this market, commercial banks, including offshore banks, conducted their transactions using freely negotiated rates for their resident and non-resident clients. No limit is set on the bid-ask spread. The Central Bank of Tunisia (BCT) intervened in the market and published, at the latest on the next day, the indicative interbank foreign exchange rate. In other words, the dinar exchange rate is supported by the central bank’s interventions in the foreign exchange market to avoid a more pronounced depreciation. The managed floating exchange rate regime has led to more volatility and persistence of shocks; to understand the implications of VaR risk modelling, it is therefore interesting to provide specific responses to the way FX risk is managed. This requires a corporate structure that encourages precise assessments of foreign exchange risk exposure on the one hand and the application of successful foreign exchange trading activities on the other. As a result, in this paper we provide exchange risk management methodologies based on VaR, applying five different approaches (the CMC, FA, GQARCH, MFA and CHFM models), adapted to the specificities of the Tunisian FX market. We then compare the performance of these VaR models in order to identify the best one, as practiced on a daily basis by major international banks, dealers and brokers.

    3.EMPIRICAL LITERATURE AND METHODOLOGY

Many theoretical as well as empirical studies have investigated the application of VaR using different classical and recent approaches (parametric approaches, nonparametric approaches and the Monte Carlo approach) to evaluate FX rate risk (for instance, Akbar and Chauveau, 2009; Akhtekhane and Mohammadi, 2012; Batten et al., 2014; Fiksriyoso and Surya, 2013; Rejeb et al., 2012; Salhi et al., 2016; Tokmakçıoğlu, 2009). Classical VaR measures assume that the return distribution of the financial risk factors is normal, whereas skewness and heavy tails are two important characteristics of the observed fluctuations of the FX market. Motivated by these findings, and in order to shed more light on this issue and address the significant heterogeneity observed across FX rates, we depart from the previous literature and employ the conditionally heteroskedastic latent factor model to capture dependence for risk management. Factor models have been adopted for various reasons: first, they are understandable and simple to manipulate (the default times are conditionally independent given a random factor); second, they respect financial intuition (dependence results from a common systemic risk); and finally, they carefully take into consideration the dependence between the risk factors. The market risk factors in this research are the most representative currencies of the Tunisian foreign debt, namely TND versus USD, TND versus EUR and TND versus JPY. The VaR is then estimated with five different approaches (CMC, FA, MFA, GQARCH and CHFM) and three risk levels (1%, 2% and 5%).

    3.1.VaR Models and Parameter Estimates

Over the last decade, as noted above, several empirical studies have compared different modelling methods to determine how to evaluate the VaR. In all these studies, the basic step in computing a VaR measurement for a portfolio of assets is the reconstruction of the probability distribution of returns over the holding period at a given confidence level. The first step in this study is therefore to calculate the returns of the Tunisian exchange rates as follows:

$R_t = \ln p_t - \ln p_{t-1} \approx \dfrac{p_t - p_{t-1}}{p_{t-1}}$

where $p_t$ is the daily closing exchange rate at time t. This quantity can be seen as the logarithm of the geometric growth and is known in finance as the continuously compounded return.
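For illustration only, a minimal numpy sketch of this computation is given below; the price values are hypothetical and not taken from the dataset used in the paper.

```python
import numpy as np

# Hypothetical daily closing exchange rates p_t (for illustration only)
p = np.array([2.950, 2.957, 2.949, 2.961, 2.970])

# Continuously compounded (log) returns: R_t = ln(p_t) - ln(p_{t-1})
R = np.diff(np.log(p))
print(R)
```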

According to Jorion (2007), VaR is defined as “a method of assessing risk that uses standard statistical techniques used routinely in other technical fields. Loosely, VaR summarizes the worst loss over a target horizon that will not be exceeded with a given level of confidence.” The basic parameters of this measure are thus the confidence level and the holding period. The most important advantage of VaR is that it helps an institution prepare for potential negative outcomes and thus mitigate them.

    Mathematically, Jorion (2007) expressed VaR as:

$P(L > \mathrm{VaR}) \leq 1 - \alpha$

where L is the portfolio loss and α ∈ (0, 1) is the confidence level.

    To illustrate the dynamic nature of the VaR in the Tunisian context, we used our new simulation approach based on the conditionally heteroskedastic latent factor model and the following competing statistical models:

    3.1.1.The Classical Monte Carlo Approach

The Monte Carlo simulation is recognized as the optimal quantitative methodology for measuring Value at Risk. Alexander (2008) showed that the Monte Carlo VaR approach is very flexible and that several assumptions can be attributed to the multivariate distribution of risk factor returns. This simulation method is able to detect and/or specify possible changes in the market risk factors through the use of a statistical distribution.

Ben Rejeb et al. (2012) made a comparative analysis of four risk measurement models (historical simulation, variance-covariance, bootstrapping and Monte Carlo simulation) to assess foreign exchange risk in the Tunisian exchange market. In their empirical analysis, they found that the VaR estimates for the three currencies (USD, EUR, JPY) obtained from the Monte Carlo simulation approach and the bootstrapping method were very similar.

In this approach, the excess return of the portfolio at the present time t will be denoted by $R_t^p$. Let us assume that $R_t^p$ depends on q risk factors (foreign exchange rates). A Monte Carlo computation of the VaR then consists of the following steps (a code sketch is given after the list):

    • Choose the level of confidence 1−α to which the VaR refers.

• Simulate the evolution of the risk factors from time t to time t + 1 by generating q-tuples of pseudo-random numbers from the multivariate normal distribution defined by θ and Σ, where θ is the (q × 1) vector of average excess returns on the Tunisian exchange rates, and Σ is the (q × q) variance-covariance matrix of excess returns. Label these draws $y^s_{1,t+1}, y^s_{2,t+1}, \ldots, y^s_{q,t+1}$ $(s = 1, \ldots, N)$.

• Calculate the N different values of the portfolio at time t + 1 using the simulated q-tuples of the risk factors, $R^p_{t+1,1}, R^p_{t+1,2}, \ldots, R^p_{t+1,N}$, where

$R^p_{t+1,s} = \delta_1 y^s_{1,t+1} + \delta_2 y^s_{2,t+1} + \cdots + \delta_q y^s_{q,t+1}$

and $\delta_1, \delta_2, \ldots, \delta_q$ are the portfolio weights for the q assets.

• Discard the fraction α of worst returns $R^p_{t+1,s}$. The minimum of the remaining $R^p_{t+1,s}$ is then the VaR of the portfolio at time t.
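As announced above, a minimal numpy sketch of these steps follows. The function name cmc_var and the illustrative data are hypothetical; θ and Σ are simply estimated from a window of historical excess returns.

```python
import numpy as np

def cmc_var(returns, weights, alpha=0.01, n_sims=500, seed=0):
    """Classical Monte Carlo VaR sketch for a q-asset portfolio.

    returns : (T, q) array of historical excess returns on the risk factors
    weights : (q,) portfolio weights (delta_1, ..., delta_q)
    alpha   : risk level (1 - confidence level)
    """
    rng = np.random.default_rng(seed)
    theta = returns.mean(axis=0)              # (q,) vector of average excess returns
    sigma = np.cov(returns, rowvar=False)     # (q, q) variance-covariance matrix
    # Simulate N q-tuples of risk-factor returns for time t + 1
    sims = rng.multivariate_normal(theta, sigma, size=n_sims)
    # Portfolio return in each scenario
    port = sims @ weights
    # Discard the alpha fraction of worst returns; the minimum of the rest gives the VaR
    port_sorted = np.sort(port)
    cutoff = int(np.ceil(alpha * n_sims))
    return -port_sorted[cutoff]

# Example with simulated historical data (3 risk factors, as in the paper's portfolio)
hist = np.random.default_rng(1).normal(0, 0.004, size=(250, 3))
print(cmc_var(hist, np.array([0.15, 0.75, 0.10]), alpha=0.01))
```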

    3.1.2.Latent Factor Model

    Factor analysis is a statistical method for modelling the covariance structure of high dimensional data using a small number of unobservable or latent variables (McLachlan and Peel, 2000). It can be described by the following generative model:

$y_t = A f_t + \varepsilon_t$

where, for $t = 1, \ldots, T$,

$f_t \sim N(0, I_k)$

is the vector of k common latent factors ($I_k$ denotes the (k × k) identity matrix) and $y_t$ is a q-dimensional observation vector. The covariance structure is captured by the factor loading matrix A. The mean of the observations is determined by the vector of specific or idiosyncratic factors, modelled as multivariate normal with mean vector θ and a diagonal covariance matrix Ψ:

$\varepsilon_t \sim N(\theta, \Psi), \quad t = 1, \ldots, T$

    The observation process is expressed as a conditional likelihood, and is given by:

$p(y_t \mid f_t) = N(\theta + A f_t, \Psi)$

In addition, the marginal likelihood of the observations is multivariate normal with mean vector θ and covariance matrix:

$\Sigma = A A' + \Psi$

To obtain maximum likelihood estimates of the model parameters Θ = {θ, A, Ψ}, we use the iterative EM algorithm (see Dempster et al., 1977; McLachlan and Krishnan, 2008).
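As an illustration only, the following sketch fits this factor model by maximum likelihood using scikit-learn's FactorAnalysis estimator; the simulated data and the choice k = 1 are hypothetical and serve only to show how the loading matrix A, mean θ and diagonal Ψ can be recovered.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated stand-in for the (T, q) matrix of exchange-rate log-returns
rng = np.random.default_rng(2)
Y = rng.normal(0, 0.005, size=(250, 3))

fa = FactorAnalysis(n_components=1)      # k = 1 common latent factor
fa.fit(Y)

A_hat = fa.components_.T                 # (q, k) estimated factor loadings
theta_hat = fa.mean_                     # (q,) estimated mean vector
Psi_hat = np.diag(fa.noise_variance_)    # (q, q) diagonal idiosyncratic covariance

# Implied unconditional covariance: Sigma = A A' + Psi
Sigma_hat = A_hat @ A_hat.T + Psi_hat
print(Sigma_hat)
```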

    3.1.3.Mixture of Factor Analyzers

    Mixture of factor analyzers represents a finite mixture of linear submodels for the distribution of the vector of observed data yt given the latent factors ft. According to this approach, modelling the distribution of the observed data can be made as follows:

$y_t = A_j f_{jt} + \varepsilon_{jt}$ with probability $\pi_j$ $(j = 1, \ldots, M)$

for t = 1, …, T, where $f_{jt}$ is a k-dimensional (k < q) vector of common latent factors and $A_j$ is the corresponding (q × k) matrix of factor loadings. The vector of common latent factors $f_{jt}$ is distributed $N(0, I_k)$, independently of the vector of specific or idiosyncratic factors $\varepsilon_{jt}$, which is distributed $N(\theta_j, \Psi_j)$, where $\Psi_j$ is a diagonal matrix (j = 1, …, M). Thus, the MFA density is given by

$p(y_t) = \sum_{j=1}^{M} \pi_j \, N(y_t; \theta_j, \Sigma_j)$

    where the j-th component-covariance matrix Σj has the form

$\Sigma_j = A_j A_j' + \Psi_j \quad (j = 1, \ldots, M)$
    (11a)

and $N(y_t; \theta_j, \Sigma_j)$ denotes the multivariate normal density function with mean $\theta_j$ and covariance matrix $\Sigma_j$. The parameter vector Θ now consists of the elements of the $\theta_j$, the $A_j$ and the $\Psi_j$, along with the mixing proportions $\pi_j$ $(j = 1, \ldots, M-1)$, on putting $\pi_M = 1 - \sum_{j=1}^{M-1} \pi_j$.

To estimate the MFA parameters from the observed data $y_t$, it is possible to use the multicycle Alternating Expectation Conditional Maximization (AECM) algorithm (see Meng and Van Dyk, 1997; McLachlan and Krishnan, 2008).
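The AECM fitting itself is beyond a short example, but the following sketch evaluates the MFA mixture density above for given (hypothetical) component parameters, which is the building block used both in estimation and in simulation; all numerical values are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mfa_density(y, pis, thetas, As, Psis):
    """Evaluate the MFA density p(y) = sum_j pi_j N(y; theta_j, A_j A_j' + Psi_j)."""
    dens = 0.0
    for pi_j, theta_j, A_j, Psi_j in zip(pis, thetas, As, Psis):
        Sigma_j = A_j @ A_j.T + np.diag(Psi_j)   # component covariance, as in (11a)
        dens += pi_j * multivariate_normal.pdf(y, mean=theta_j, cov=Sigma_j)
    return dens

# Two-component toy example (q = 3 observed series, k = 1 factor per component)
pis = [0.6, 0.4]
thetas = [np.zeros(3), np.array([0.001, 0.0, -0.001])]
As = [np.array([[0.5], [0.4], [0.3]]), np.array([[0.2], [0.6], [0.1]])]
Psis = [np.full(3, 0.01), np.full(3, 0.02)]
print(mfa_density(np.array([0.002, -0.001, 0.0]), pis, thetas, As, Psis))
```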

    3.1.4.The GQARCH Model

Black (1976) and Christie (1982) found evidence that stock returns are negatively correlated with return volatility. This asymmetry (often referred to as financial leverage or volatility feedback) means that volatility tends to rise in response to a negative shock and to fall in response to a positive shock. The GARCH model does not account for this finding and assumes symmetric impacts of positive and negative shocks on future volatility. The quadratic version of the GARCH model (GQARCH) accounts for these asymmetries (see Engle et al., 1990; Sentana, 1995). This specification produces a news impact curve that is a parabola centred away from zero, so that shocks of the same size but opposite sign have different impacts on volatility:

$R_t^p = \theta + \sqrt{h_t}\, \varepsilon_t$

where $\varepsilon_t \sim N(0, 1)$ and the conditional variance $h_t$ of the portfolio is given by:

$h_t = \omega + \alpha y_{t-1} + \gamma y_{t-1}^2 + \delta h_{t-1}$
    (13a)

If α is negative, negative shocks increase the conditional variance $h_t$ more than positive shocks do. For all t and for all l = 1, 2, …, the anticipated volatility is given by:

$h_{t+l|t} = \omega + \theta(\alpha + \gamma\theta) + (\gamma + \delta)\, h_{t+l-1|t}$
    (14a)
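A minimal sketch of the variance recursion (13a) and the iterated forecast (14a) is given below; the parameter values are purely hypothetical and the code only illustrates the mechanics of the two equations as written above.

```python
import numpy as np

def gqarch_filter(y, omega, alpha, gamma, delta, h0):
    """Conditional variance recursion (13a):
    h_t = omega + alpha*y_{t-1} + gamma*y_{t-1}^2 + delta*h_{t-1}."""
    h = np.empty(len(y))
    h[0] = h0
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] + gamma * y[t - 1] ** 2 + delta * h[t - 1]
    return h

def gqarch_forecast(h_t, theta, omega, alpha, gamma, delta, horizon):
    """l-step-ahead anticipated volatility (14a), iterated forward from h_t."""
    h = h_t
    for _ in range(horizon):
        h = omega + theta * (alpha + gamma * theta) + (gamma + delta) * h
    return h

# Hypothetical parameter values for illustration only
y = np.random.default_rng(3).normal(0, 0.005, 250)
h = gqarch_filter(y, omega=1e-6, alpha=-2e-4, gamma=0.1, delta=0.85, h0=y.var())
print(gqarch_forecast(h[-1], theta=y.mean(), omega=1e-6, alpha=-2e-4,
                      gamma=0.1, delta=0.85, horizon=10))
```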

    3.2.Conditionally Heteroskedastic Latent Factor Model

    The model that we propose to study supposes that excess returns depend on unobservable factors. We allow the conditional variances of the underlying factors to vary over time and parameterize this in terms of GQARCH (1, 1) processes.

    3.2.1.Model Specification and Factor Structure

    Consider the following multivariate model:

$y_t = A f_t + \varepsilon_t$
    (1)

    where

$f_t = H_t^{1/2} f_t^*, \quad t = 1, \ldots, T, \quad$ and

$\begin{pmatrix} f_t^* \\ \varepsilon_t \end{pmatrix} \sim N\left[\begin{pmatrix} 0 \\ \theta \end{pmatrix}, \begin{pmatrix} I_k & 0 \\ 0 & \Psi \end{pmatrix}\right]$
    (2)

where $y_t$ is the (q × 1) vector of observable random variables (financial returns in our case), $f_t$ is a (k × 1) vector of unobserved common factors, A is the associated (q × k) matrix of constant factor loadings, with q ≥ k, $\varepsilon_t$ is a (q × 1) vector of idiosyncratic noises, which are conditionally orthogonal to $f_t$, θ and Ψ are, respectively, the (q × 1) mean vector and the (q × q) diagonal, positive definite covariance matrix of constant idiosyncratic variances, and $H_t$ is a (k × k) diagonal positive definite matrix of time-varying factor variances. In particular, it is assumed that the common factor variances follow GQARCH(1, 1) processes, so that the i-th element of $H_t$ is

$h_{it} = \omega_i + \gamma_i f_{i,t-1} + \alpha_i f_{i,t-1}^2 + \delta_i h_{i,t-1}$

where the dynamic asymmetry parameter $\gamma_i$ is usually different from 0, allowing for the possibility of a leverage effect (see Sentana, 1995). We can see from this model that if $f_{i,t-1} > 0$, its impact on $h_{it}$ is greater than in the case of $f_{i,t-1} < 0$ (assuming that $\gamma_i$ and $\alpha_i$ are positive). To guarantee the positivity of the conditional common variances and covariance stationarity, we impose the constraints $\omega_i, \alpha_i, \delta_i > 0$, $\gamma_i^2 \leq 4\omega_i\alpha_i$ and $\alpha_i + \delta_i < 1$, for $i = 1, \ldots, k$. The extension to higher-order GQARCH is straightforward. As a matter of fact, there is nothing particular about GQARCH models from the EM algorithm point of view. The problem is that non-quadratic ARCH models are difficult to handle via the Kalman filter (see Harvey et al., 1992).
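For illustration, the following sketch simulates data from the model in (1)-(2) with GQARCH(1, 1) factor variances. All parameter values are hypothetical; they merely satisfy the stated positivity and stationarity constraints.

```python
import numpy as np

def simulate_chfm(T, A, theta, Psi, omega, gamma, alpha, delta, seed=0):
    """Generate (y_t) from the CHFM in (1)-(2): common factors follow
    GQARCH(1, 1) variances and idiosyncratic noise is N(theta, Psi)."""
    rng = np.random.default_rng(seed)
    q, k = A.shape
    f = np.zeros((T, k))
    h = np.tile(omega / (1.0 - alpha - delta), (T, 1))   # start at the unconditional level
    y = np.zeros((T, q))
    for t in range(T):
        if t > 0:
            h[t] = omega + gamma * f[t - 1] + alpha * f[t - 1] ** 2 + delta * h[t - 1]
        f[t] = np.sqrt(h[t]) * rng.standard_normal(k)        # f_t = H_t^{1/2} f_t^*
        eps = rng.multivariate_normal(theta, np.diag(Psi))   # idiosyncratic noise
        y[t] = A @ f[t] + eps
    return y, f, h

# Hypothetical parameters (q = 3 series, k = 1 factor), for illustration only
A = np.array([[0.6], [0.8], [0.4]])
y, f, h = simulate_chfm(500, A, theta=np.zeros(3), Psi=np.full(3, 1e-5),
                        omega=np.array([1e-6]), gamma=np.array([-1e-4]),
                        alpha=np.array([0.10]), delta=np.array([0.85]))
print(y.shape, h.mean())
```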

3.2.2.Maximum Likelihood Estimation

3.2.2.1.Dynamic State-Space Representation

    The conditionally heteroskedastic latent factor model in (1)-(2) can be regarded as a random field with indices i = 1, …, q and t = 1, …, T. Therefore, it is not surprising that it has a time-series state-space representation, with ft as the state variable. The measurement and transition equations are given by:

$\begin{cases} y_t = A f_t + \varepsilon_t & \text{(measurement equation)} \\ f_t = 0 \cdot f_{t-1} + f_t & \text{(transition equation)} \end{cases}$

where $\varepsilon_t \mid Y_{t-1}, F_{t-1} \sim N(\theta, \Psi)$ and $f_t \mid Y_{t-1}, F_{t-1} \sim N(0, H_t)$. In this case $Y_{t-1} = \{y_{t-1}, y_{t-2}, \ldots\}$ and $F_{t-1} = \{f_{t-1}, f_{t-2}, \ldots\}$, i.e. the information set $D_{t-1}$ that we would have at time t − 1. Hence the prediction equations are $E(f_t \mid D_{t-1}) = f_{t|t-1} = 0$, $E(y_t \mid D_{t-1}) = y_{t|t-1} = \theta$ and $Var(f_{it} \mid D_{t-1}) = h_{it|t-1}$, where

$h_{it|t-1} = \omega_i + \gamma_i f_{i,t-1|t-1} + \alpha_i \left(f_{i,t-1|t-1}^2 + h_{i,t-1|t-1}\right) + \delta_i h_{i,t-1|t-2}$

Here $f_{i,t-1|t-1} = E(f_{i,t-1} \mid D_{t-1})$, $h_{it|t-1}$ is the i-th diagonal element of $H_{t|t-1}$, and $h_{i,t-1|t-1} = Var(f_{i,t-1} \mid D_{t-1})$ is the i-th diagonal element of $H_{t-1|t-1}$. Hence, the predicted variance is the conditional expectation of the predicted volatility, $E(h_{it} \mid D_{t-1})$. The term $h_{i,t-1|t-1}$ comes from the fact that $E(f_{i,t-1}^2 \mid D_{t-1}) = Var(f_{i,t-1} \mid D_{t-1}) + \left[E(f_{i,t-1} \mid D_{t-1})\right]^2 = h_{i,t-1|t-1} + f_{i,t-1|t-1}^2$. This specification can easily be evaluated via the Kalman filter. Note that the measurability of $h_{it}$ with respect to $D_{t-1}$ is achieved in this model by replacing the unobserved factors by their best (in the conditional mean square sense) estimates, and by including a correction in the standard ARCH terms which reflects the uncertainty in the factor estimates. The following are the updating equations:

$f_{t|t} = f_{t|t-1} + H_{t|t-1} A' \Sigma_{t|t-1}^{-1} \left(y_t - \theta - A f_{t|t-1}\right)$

    and

$H_{t|t} = H_{t|t-1} - H_{t|t-1} A' \Sigma_{t|t-1}^{-1} A H_{t|t-1}$

where $\Sigma_{t|t-1} = A H_{t|t-1} A' + \Psi$. Given the degenerate nature of the (time-series) transition equation, smoothing is unnecessary in this case, so that $f_{t|T} = f_{t|t}$ and $H_{t|T} = H_{t|t}$.
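A minimal sketch of one prediction/updating step of this (approximate) Kalman filter follows. The function name and all numerical values are hypothetical; the recursion mirrors the prediction and updating equations written above.

```python
import numpy as np

def chfm_kalman_step(y_t, A, theta, Psi, f_prev_filt, h_prev_filt, h_prev_pred,
                     omega, gamma, alpha, delta):
    """One step of the approximate Kalman filter for the CHFM.

    f_prev_filt, h_prev_filt : f_{t-1|t-1} and diag of H_{t-1|t-1}
    h_prev_pred              : diag of H_{t-1|t-2} (previous prediction)
    Returns f_{t|t}, diag of H_{t|t} and the predicted variances h_{t|t-1}.
    """
    # Prediction: f_{t|t-1} = 0; GQARCH-type variance with estimation-error correction
    h_pred = (omega + gamma * f_prev_filt +
              alpha * (f_prev_filt ** 2 + h_prev_filt) + delta * h_prev_pred)
    H_pred = np.diag(h_pred)
    # Innovation covariance: Sigma_{t|t-1} = A H_{t|t-1} A' + Psi
    S_inv = np.linalg.inv(A @ H_pred @ A.T + np.diag(Psi))
    # Updating equations (with f_{t|t-1} = 0)
    f_filt = H_pred @ A.T @ S_inv @ (y_t - theta)
    H_filt = H_pred - H_pred @ A.T @ S_inv @ A @ H_pred
    return f_filt, np.diag(H_filt), h_pred

# One step with hypothetical values (q = 3, k = 1)
A = np.array([[0.6], [0.8], [0.4]])
f, h, h_pred = chfm_kalman_step(np.array([0.002, -0.001, 0.001]), A, np.zeros(3),
                                np.full(3, 1e-5), np.zeros(1), np.full(1, 1e-4),
                                np.full(1, 1e-4), np.array([1e-6]), np.array([-1e-4]),
                                np.array([0.10]), np.array([0.85]))
print(f, h)
```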

3.2.2.2.The EM Algorithm

    Actually, this algorithm could still be applied to our model if we knew the conditional variance parameters, even though they are not zero. To see why, assume temporarily that the ft’s are observed. Under normality, we have from (1)-(2) that:

$y_t \mid f_t, F_{t-1} \sim N(\Delta \tilde{y}_t, \Psi)$

where $\Delta = [\theta \mid A]$ is the q × (k + 1) matrix of “regression” parameters and $\tilde{y}_t = (1, f_t')'$. Therefore, the density function of the t-th observation, conditional on the information “available” at time t, can be factorized as:

$p(y_t, f_t \mid Y_{t-1}, F_{t-1}) = p(y_t \mid f_t, Y_{t-1}, F_{t-1})\, p(f_t \mid Y_{t-1}, F_{t-1})$

Hence, ignoring initial conditions, and assuming that Ψ has full rank, the joint log-likelihood function would be given by:

$L(\Theta \mid y, f) = -\frac{Tq}{2}\log 2\pi - \frac{1}{2}\sum_{t=1}^{T}\log|\Psi| - \frac{1}{2}\sum_{t=1}^{T}\mathrm{tr}\left[\Psi^{-1}(y_t - \Delta\tilde{y}_t)(y_t - \Delta\tilde{y}_t)'\right] - \frac{1}{2}\sum_{i=1}^{k}\left[\sum_{t=1}^{T}\log h_{it} + \sum_{t=1}^{T}\frac{f_{it}^2}{h_{it}}\right]$

When the $f_t$'s are unobservable, we can apply the EM algorithm by taking the expected value of the above likelihood function conditional on the observed data and the current parameter estimates (E-step: $E\left(L(\Theta \mid y, f) \mid Y_T, \Theta^{(i)}\right) = Q(\Theta \mid \Theta^{(i)})$), and then maximizing it by solving the first order conditions (M-step).

    E-Step:

If we let $D_T^i = \{Y_T, \Theta^{(i)}\}$, then the conditional expected value of the complete log-likelihood function is given by:

$Q(\Theta \mid \Theta^{(i)}) = c - \frac{1}{2}\sum_{t=1}^{T}\log|\Psi| - \frac{1}{2}\sum_{t=1}^{T}\mathrm{tr}\left[\Psi^{-1} E\left((y_t - \Delta\tilde{y}_t)(y_t - \Delta\tilde{y}_t)' \mid D_T^i\right)\right] - \frac{1}{2}\sum_{j=1}^{k}\sum_{t=1}^{T}E\left[\log h_{jt} + \frac{f_{jt}^2}{h_{jt}} \,\Big|\, D_T^i\right]$
    (3)

    M-Step:

In this step, the maximization of the function $Q(\Theta \mid \Theta^{(i)})$ with respect to Δ and Ψ can be done ignoring the last two terms. Provided that these parameters increase the whole expression, the generalized EM principle still applies. Setting the first order conditions to zero, we find:

$\Delta^{(i+1)} = \left[\sum_{t=1}^{T} y_t\, E(\tilde{y}_t' \mid D_T^i)\right]\left[\sum_{t=1}^{T} E(\tilde{y}_t \tilde{y}_t' \mid D_T^i)\right]^{-1}$
    (4)

    and

$\Psi^{(i+1)} = \frac{1}{T}\sum_{t=1}^{T} E\left[(y_t - \Delta\tilde{y}_t)(y_t - \Delta\tilde{y}_t)' \mid D_T^i\right]$
    (5)

Hence, we just need to find the conditional expectations in (4) and (5). These conditional expectations can be derived using the Kalman filter. If $f_t = H_{t|t-1}^{1/2} f_t^*$, the model would be conditionally Gaussian and the Kalman filter would produce the exact conditional expectations. However, if $H_t$ is a function of unobserved variables, as in (1)-(2), the filter only produces approximate values. In this case, if we let $E(\tilde{y}_t \mid D_t) = \tilde{y}_{t|t}^{(i)} = (1, f_{t|t}^{(i)\prime})'$ and $E(\tilde{y}_t \tilde{y}_t' \mid D_t) = \Omega_{t|t}^{(i)}$, we get:

$\Omega_{t|t}^{(i)} = E\left[\begin{pmatrix} 1 & f_t' \\ f_t & f_t f_t' \end{pmatrix} \Bigg|\, D_t\right] = \begin{pmatrix} 1 & f_{t|t}^{(i)\prime} \\ f_{t|t}^{(i)} & H_{t|t}^{(i)} + f_{t|t}^{(i)} f_{t|t}^{(i)\prime} \end{pmatrix}$

    therefore

$\Delta^{(i+1)} = \left[\sum_{t=1}^{T} y_t\, \tilde{y}_{t|t}^{(i)\prime}\right]\left[\sum_{t=1}^{T} \Omega_{t|t}^{(i)}\right]^{-1}$

    and by using this equation, we determine Ψ(i+1),

$\Psi^{(i+1)} = \frac{1}{T}\sum_{t=1}^{T}\left[y_t y_t' - \Delta^{(i+1)} \tilde{y}_{t|t}^{(i)} y_t'\right]$

    These equations reduce to the usual ones when the variance parameters are zero.
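As a minimal sketch, the M-step updates for Δ and Ψ can be computed as follows from the filtered moments $f_{t|t}$ and $H_{t|t}$ delivered by the Kalman pass; all inputs here are simulated and purely illustrative, and Ψ is kept diagonal as in the model.

```python
import numpy as np

def m_step(Y, f_filt, H_filt):
    """M-step updates (4)-(5) for Delta = [theta | A] and (diagonal) Psi.

    Y      : (T, q) observations
    f_filt : (T, k) filtered factor means f_{t|t}
    H_filt : (T, k, k) filtered factor covariances H_{t|t}
    """
    T, q = Y.shape
    k = f_filt.shape[1]
    S_yy = np.zeros((q, k + 1))              # sum_t y_t E(y~_t' | D)
    S_omega = np.zeros((k + 1, k + 1))       # sum_t Omega_{t|t}
    for t in range(T):
        y_tilde = np.concatenate(([1.0], f_filt[t]))        # E(y~_t | D_t)
        S_yy += np.outer(Y[t], y_tilde)
        Omega = np.empty((k + 1, k + 1))                     # E(y~_t y~_t' | D_t)
        Omega[0, 0] = 1.0
        Omega[0, 1:] = f_filt[t]
        Omega[1:, 0] = f_filt[t]
        Omega[1:, 1:] = H_filt[t] + np.outer(f_filt[t], f_filt[t])
        S_omega += Omega
    Delta_new = S_yy @ np.linalg.inv(S_omega)
    Psi_new = np.zeros(q)
    for t in range(T):
        y_tilde = np.concatenate(([1.0], f_filt[t]))
        Psi_new += np.diag(np.outer(Y[t], Y[t]) - Delta_new @ np.outer(y_tilde, Y[t]))
    return Delta_new, Psi_new / T

# Toy example with k = 1 factor and simulated filtered moments
Y = np.random.default_rng(6).normal(0, 0.005, (200, 3))
f_filt = np.random.default_rng(7).normal(0, 0.01, (200, 1))
H_filt = np.full((200, 1, 1), 1e-4)
Delta_new, Psi_new = m_step(Y, f_filt, H_filt)
print(Delta_new, Psi_new)
```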

    3.2.2.3.Implementation Details

Unfortunately, the conditional variance parameters ω, γ, α and δ are in practice unknown. The most obvious possibility is to apply the EM algorithm to estimate these as well. However, this is not easy. First, it follows from (3) that conditional expectations of nonlinear functions of $f_t$ are needed. Unfortunately, these expectations are practically impossible to find exactly. Second, and more importantly, the first order conditions for the GQARCH parameters result in a very complicated simultaneous equation system with no closed form solution.

    An alternative possibility is based on the following idea. We assume that the parameters in Δ and Ψ are known, or else, that they are kept constant at their values from the previous iteration so that effectively the first part of the above log-likelihood function is constant. Taking conditional expectations of (3) we have:

$Q(\Theta \mid \Theta^{(i)}) = c^* - \frac{1}{2}\sum_{i=1}^{k}\sum_{t=1}^{T} E\left(\log h_{it} + \frac{f_{it}^2}{h_{it}} \,\Big|\, Y_T, \Theta^{(i)}\right)$

    One problem with this expression is that in general there are no simple exact formulae for the expected value of nonlinear functions of ft. However, one can approximate the above expectations by ignoring Jensen’s inequality effects as:

$Q(\Theta \mid \Theta^{(i)}) = c^* - \frac{1}{2}\sum_{i=1}^{k}\sum_{t=1}^{T}\left(\log\left[E(h_{it} \mid D_T^i)\right] + \frac{E(f_{it}^2 \mid D_T^i)}{E(h_{it} \mid D_T^i)}\right) = c^* - \frac{1}{2}\sum_{i=1}^{k}\sum_{t=1}^{T}\left(\log h_{it|t-1}^{(i)} + \frac{f_{it|t}^{(i)2} + h_{it|t}^{(i)}}{h_{it|t-1}^{(i)}}\right)$

where $h_{jt|t-1}^{(i)} = \omega_j + \gamma_j f_{j,t-1|t-1}^{(i)} + \alpha_j\left(f_{j,t-1|t-1}^{(i)2} + h_{j,t-1|t-1}^{(i)}\right) + \delta_j h_{j,t-1|t-2}^{(i)}$, $h_{jt|t}^{(i)}$ is the j-th diagonal element of $H_{t|t}$ and $f_{jt|t}^{(i)}$ is the j-th element of $f_{t|t}$, both evaluated at the i-th iteration. Then, one could find $\omega_j^{(i+1)}$, $\gamma_j^{(i+1)}$, $\alpha_j^{(i+1)}$ and $\delta_j^{(i+1)}$ by iteratively maximizing this approximation to the expected log-likelihood function. However, since we are not dealing with the exact expression for the expected log-likelihood function, it is not clear that $L(\Theta \mid Y)$ is going to be maximized in this way. In the case where the data generating process is $f_t = H_{t|t-1}^{1/2} f_t^*$, the log-likelihood function would be given by:

$L(\Theta \mid y, f) = c - \frac{1}{2}\sum_{t=1}^{T}\log|\Psi| - \frac{1}{2}\sum_{t=1}^{T}(y_t - \Delta\tilde{y}_t)'\Psi^{-1}(y_t - \Delta\tilde{y}_t) - \frac{1}{2}\sum_{i=1}^{k}\sum_{t=1}^{T}\left[\log h_{it|t-1} + \frac{f_{it}^2}{h_{it|t-1}}\right]$

    Treating once more the parameters Δ and Ψ as known, and taking conditional expectations of (3) we get:

$Q(\Theta \mid \Theta^{(i)}) = c^* - \frac{1}{2}\sum_{i=1}^{k}\sum_{t=1}^{T}\left(\log h_{it|t-1} + \frac{E(f_{it}^2 \mid D_T^i)}{h_{it|t-1}}\right) = c^* - \frac{1}{2}\sum_{i=1}^{k}\sum_{t=1}^{T}\left(\log h_{it|t-1} + \frac{f_{it|t}^{(i)2} + h_{it|t}^{(i)}}{h_{it|t-1}}\right)$

Now $\omega_j^{(i+1)}$, $\gamma_j^{(i+1)}$, $\alpha_j^{(i+1)}$ and $\delta_j^{(i+1)}$ can be obtained by numerical maximization of the above exact expected log-likelihood function, but notice that $f_{j,t-1|t-1}$ and $h_{j,t-1|t-1}$ have to be re-evaluated once per parameter per iteration. Hence, the Kalman filter has to be used as often as if we were to maximize the log-likelihood function of the observables, $L(\Theta \mid Y)$, directly.
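The sketch below illustrates, for a single factor, how the GQARCH parameters might be updated by numerical maximization of the approximate criterion above using scipy.optimize. It is a simplification in that the filtered moments are held fixed within the inner optimization (rather than being re-evaluated per parameter), and all inputs are simulated placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def neg_expected_loglik(params, f_filt, h_filt):
    """Negative of the factor part of the expected log-likelihood for one factor:
    sum_t [ log h_{t|t-1} + (f_{t|t}^2 + h_{t|t}) / h_{t|t-1} ],
    with h_{t|t-1} rebuilt from the candidate GQARCH parameters."""
    omega, gamma, alpha, delta = params
    T = len(f_filt)
    h_pred = np.empty(T)
    h_pred[0] = omega / max(1.0 - alpha - delta, 1e-6)
    val = 0.0
    for t in range(1, T):
        h_pred[t] = (omega + gamma * f_filt[t - 1] +
                     alpha * (f_filt[t - 1] ** 2 + h_filt[t - 1]) + delta * h_pred[t - 1])
        if h_pred[t] <= 0:
            return 1e10                       # reject non-positive variances
        val += np.log(h_pred[t]) + (f_filt[t] ** 2 + h_filt[t]) / h_pred[t]
    return val

# f_filt, h_filt would come from the Kalman pass at the current EM iteration
f_filt = np.random.default_rng(4).normal(0, 0.01, 500)
h_filt = np.full(500, 1e-4)
res = minimize(neg_expected_loglik, x0=[1e-6, 0.0, 0.1, 0.8],
               args=(f_filt, h_filt), method="Nelder-Mead")
print(res.x)   # omega, gamma, alpha, delta estimates
```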

    3.2.2.4.Monte Carlo Simulations

The basic problem of this study is determining the VaR for a portfolio of exchange rates via Monte Carlo simulations, which aim at generating risk measures through a statistical model. Our contribution is that this simulation method uses the CHFM to generate different scenarios for the risk factors and combines these scenarios to generate correlated and heterogeneous future returns. The return of the portfolio at the present time t will be denoted by $R_t^p$. Assuming that $R_t^p$ depends on q risk factors, the main steps of the VaR estimation parallel those of the classical approach in Section 3.1.1, with the risk-factor scenarios now simulated from the fitted CHFM.
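The following is only a hedged sketch of how such a CHFM-based simulation might be organized, not the authors' exact procedure: one-day-ahead scenarios for the factors are drawn from $N(0, H_{t+1|t})$ and for the idiosyncratic terms from $N(\theta, \Psi)$, and the VaR is read off the simulated portfolio returns exactly as in the classical approach. All parameter values are hypothetical.

```python
import numpy as np

def chfm_mc_var(A, theta, Psi, h_next, weights, alpha=0.01, n_sims=500, seed=0):
    """Sketch of a one-day-ahead Monte Carlo VaR with CHFM-generated scenarios.

    h_next  : (k,) one-step-ahead factor variances h_{t+1|t} from the fitted model
    weights : (q,) portfolio weights
    """
    rng = np.random.default_rng(seed)
    q, k = A.shape
    # Simulate N scenarios: f_{t+1} ~ N(0, H_{t+1|t}), eps_{t+1} ~ N(theta, Psi)
    f_sims = rng.standard_normal((n_sims, k)) * np.sqrt(h_next)
    eps_sims = rng.multivariate_normal(theta, np.diag(Psi), size=n_sims)
    y_sims = f_sims @ A.T + eps_sims           # simulated risk-factor returns
    port = y_sims @ weights                    # simulated portfolio returns
    port_sorted = np.sort(port)
    cutoff = int(np.ceil(alpha * n_sims))
    return -port_sorted[cutoff]

# Hypothetical fitted values for illustration only (q = 3, k = 1)
A = np.array([[0.6], [0.8], [0.4]])
print(chfm_mc_var(A, np.zeros(3), np.full(3, 1e-5), np.array([1.2e-4]),
                  np.array([0.15, 0.75, 0.10]), alpha=0.01))
```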

    4.EVALUATION MODELS: BACKTESTING

Backtesting methods provide a way to compare the different approaches (Nieppola, 2009), and they are analysed and applied here with special attention. We compare the Out-of-Sample VaR estimates using the Christoffersen tests (Kupiec test, Independence test and Joint test). These tests are based on a binary loss function that treats any loss larger than the VaR estimate as an “exception.” It is then possible to see whether failure rates are in accordance with the selected confidence level (Miletic and Miletic, 2015) through the Unconditional Coverage test. In addition, an accurate VaR model produces exceptions that are independent over time, which is assessed by the Independence test (Evers and Rohde, 2014). Finally, we examine both coverage and the independence of exceptions over time through the joint test, named the Conditional Coverage test (Jorion, 2007).

    4.1.Kupiec Test or PoF Test

The Kupiec test, also named the Proportion of Failures (PoF) test, examines the unconditional coverage property (Kupiec, 1995). In this test, the main quantities are n (the number of exceptions) and T (the total number of observations), which are used to compute $\hat{\alpha}$ (the observed proportion of failures, $\hat{\alpha} = n/T$), to be compared with the risk level α (the expected proportion of failures). The null hypothesis is that the observed probability of an exception occurring equals the expected one:

$H_0: \alpha = \hat{\alpha}$

The PoF test statistic is the likelihood ratio $LR_{PoF}$, which can be written as:

$LR_{PoF} = -2\ln\left[\dfrac{\alpha^{n}(1-\alpha)^{T-n}}{\hat{\alpha}^{n}(1-\hat{\alpha})^{T-n}}\right] \sim \chi^2_{(1)}$
    (6)
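A minimal sketch of the Kupiec statistic (6) follows; the numbers in the example call are illustrative and not results from the paper.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_exceptions, T, alpha):
    """Kupiec proportion-of-failures LR statistic (6) and its asymptotic p-value."""
    n, a = n_exceptions, alpha
    a_hat = n / T
    lr = -2.0 * (np.log(a**n * (1 - a)**(T - n)) -
                 np.log(a_hat**n * (1 - a_hat)**(T - n)))
    return lr, 1.0 - chi2.cdf(lr, df=1)

# e.g. 23 exceptions over 2,000 out-of-sample days at the 1% level (illustrative)
print(kupiec_pof(23, 2000, 0.01))
```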

    4.2.Independence Test

This test takes into account the independence of exceptions. The independence test statistic is the likelihood ratio $LR_{ind}$:

$LR_{ind} = -2\ln\left[\dfrac{(1-\pi)^{n_{00}+n_{10}}\,\pi^{n_{01}+n_{11}}}{(1-\pi_0)^{n_{00}}\,\pi_0^{n_{01}}\,(1-\pi_1)^{n_{10}}\,\pi_1^{n_{11}}}\right] \sim \chi^2_{(1)}$
    (7)

The null hypothesis is that the probability of an exception occurring does not depend on whether an exception occurred on the previous day:

$H_0: \pi_0 = \pi_1$

where $\pi_i$ is the probability of an exception conditional on state i on the previous day, and $n_{ij}$ (i = 0, 1; j = 0, 1) is the number of days on which state j occurred while state i occurred on the previous day.

$\pi_0 = \dfrac{n_{01}}{n_{00}+n_{01}}, \qquad \pi_1 = \dfrac{n_{11}}{n_{10}+n_{11}}, \qquad \pi = \dfrac{n_{01}+n_{11}}{(n_{00}+n_{01})+(n_{10}+n_{11})}$

    4.3.Conditional Coverage Test (Joint Test)

According to this statistical test, an accurate VaR model satisfies both unconditional coverage and independence between exceptions (Christoffersen, 1998):

$LR_{cc} = LR_{PoF} + LR_{ind} \sim \chi^2_{(2)}$
    (8)
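A sketch combining the independence statistic (7) with the joint statistic (8) is given below; it takes a 0/1 exception series as input, and the toy series in the example is random and purely illustrative.

```python
import numpy as np
from scipy.special import xlogy
from scipy.stats import chi2

def christoffersen_tests(exceptions, alpha):
    """Independence LR (7) and conditional coverage LR (8) from a 0/1 exception series."""
    x = np.asarray(exceptions, dtype=int)
    n, T = x.sum(), len(x)
    # Transition counts n_ij: state i on day t-1, state j on day t
    n00 = int(np.sum((x[:-1] == 0) & (x[1:] == 0)))
    n01 = int(np.sum((x[:-1] == 0) & (x[1:] == 1)))
    n10 = int(np.sum((x[:-1] == 1) & (x[1:] == 0)))
    n11 = int(np.sum((x[:-1] == 1) & (x[1:] == 1)))
    pi0 = n01 / (n00 + n01)
    pi1 = n11 / (n10 + n11) if (n10 + n11) > 0 else 0.0
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)
    # Restricted (pi0 = pi1 = pi) vs. unrestricted log-likelihoods
    ll0 = xlogy(n00 + n10, 1 - pi) + xlogy(n01 + n11, pi)
    ll1 = (xlogy(n00, 1 - pi0) + xlogy(n01, pi0) +
           xlogy(n10, 1 - pi1) + xlogy(n11, pi1))
    lr_ind = -2.0 * (ll0 - ll1)
    # Kupiec PoF statistic (see the sketch above), then the joint statistic (8)
    a_hat = n / T
    lr_pof = -2.0 * (xlogy(n, alpha) + xlogy(T - n, 1 - alpha)
                     - xlogy(n, a_hat) - xlogy(T - n, 1 - a_hat))
    lr_cc = lr_pof + lr_ind
    return lr_ind, lr_cc, 1.0 - chi2.cdf(lr_cc, df=2)

# Toy exception indicator series (1 = loss exceeded the VaR that day)
rng = np.random.default_rng(5)
print(christoffersen_tests(rng.random(2000) < 0.012, 0.01))
```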

    5.THE DATASET: TUNISIAN FOREIGN EXCHANGE RATES

    5.1.Data Description

The models presented in the empirical literature are applied to a portfolio composed of the most representative currencies. Following the same approach for the Tunisian context, we have opted for the three foreign exchange rates most traded on the Tunisian FX market, namely TND/USD, TND/EUR and TND/JPY. The data set contains 2,251 daily exchange rates from January 08, 2008 to December 30, 2016. Our sample covers a long span of data that includes periods of marked fluctuation and thus enables us to examine how the CHFM approach performs during such periods. We have opted for a daily sampling frequency. The exchange rate series were extracted from a historical exchange database provided by the Tunisian central bank and an FX database. In order to evaluate the VaR models, exchange rates are transformed into log-returns.

As can be seen, Figure 1 illustrates the movements of the log-returns of the exchange rates. It becomes very clear that Tunisian currency returns change as their volatility changes: periods of higher volatility are followed by periods of lower returns and vice versa, and we also detect the presence of co-movement between the different FX rates. This relationship is relatively apparent during the observation period and it is the central fact of our investigation into the Tunisian FX market. The behaviour of the log-returns of the Tunisian currencies is highly volatile between 2008 and the beginning of 2009. As mentioned above, this period is considered a transitional step towards a managed floating exchange rate regime. Such a period of transition is characterized by a series of significant changes that lead to substantial volatility and repeated shocks.

    5.2.Descriptive Statistics

In this part, we describe the statistical features of the log-returns of the exchange rates using descriptive statistics, which are presented in Table 1:

The table demonstrates that the log-return series of the exchange rates have positive mean daily returns. The returns of TND/EUR and TND/JPY are positively skewed, whereas they are negatively skewed for TND/USD. The null hypothesis that the skewness coefficients conform to a normal distribution’s value of zero is rejected at the 5% significance level; negative skewness indicates that the distribution has a long left tail, which implies a high probability of observing large negative values. In addition, the returns for all currencies also exhibit excess kurtosis, particularly for TND/USD and TND/JPY.

The null hypothesis that the kurtosis coefficients conform to the normal value of three is rejected for all exchange rates; pronounced kurtosis is thus one of the features of the Tunisian FX market. According to the Jarque-Bera normality test, the null hypothesis of normality is rejected (at the 5% significance level, the critical value is 5.9668).

In Figure 2 we depict the histograms of the individual time series. On each histogram, we also superimpose the normal density function with the same mean and variance. Also plotted in Figure 2 are the normal probability plots for the three return series. The purpose of a normal probability plot is to assess graphically whether the data could come from a normal distribution: if the data are normal, the plot will be linear, while other distribution types introduce curvature. It is clear from this figure that the returns over the given holding periods are not normally distributed. In particular, the tails of the return distributions are heavier than those of the normal distribution, which is highlighted in the normal probability plots: in the left tail the empirical quantiles (red points) lie above the reference blue line, and in the right tail they lie below it.

    5.3.Tests of Stationarity

We apply statistical tests to confirm the stationarity of our log-return series, namely the KPSS test (Kwiatkowski, Phillips, Schmidt, and Shin, 1992), which tests the null hypothesis that the observed time series is stationary, and the PP test (Phillips and Perron, 1988), which tests the null hypothesis of a unit root (a non-stationary time series).

Table 2 reports the results of the KPSS test for the logarithmic returns, for which the null hypothesis could not be rejected. Moreover, when comparing the PP statistics reported in the same table with the critical value at the 5% level, the null hypothesis of a unit root is rejected. From these two statistical tests, we conclude that the log-returns of the Tunisian currencies are stationary. All these results highlight the usefulness of the CHFM, which takes into account the stationarity, heteroskedasticity and asymmetric return distributions of the Tunisian currencies.

    5.4.Correlation Analysis

In order to explain the interdependence between the movements of the FX rates, we use correlation coefficients. The results from 2008 to 2016 can be seen in Table 3:

TND/USD and TND/EUR have a strong negative correlation (-0.5772); this coefficient demonstrates that these two FX rates do not behave similarly. In other words, the TND appreciates versus the USD while it depreciates versus the EUR, and vice versa. Therefore, the gain obtained on one currency (TND/USD) recovers the loss on the other (TND/EUR). Since the correlation coefficient between TND/JPY and TND/EUR is also negative (-0.4256), we likewise observe opposite movements of the EUR and JPY rates. Finally, TND/USD and TND/JPY have a fairly strong positive correlation (0.5239): one FX rate increases simultaneously with the other.

    5.5.Empirical Results: Identification of the Best VaR Model

The objective of this last part is to identify the most appropriate approach for forecasting the VaR of the portfolio of Tunisian FX rates. We first examine the performance of the VaR models by applying a rolling-sample method. To estimate the VaR, we divide the dataset of Tunisian FX rate returns into two parts: the In-Sample period, running from 08/01/2008 until 05/01/2009 (250 observations), and the Out-of-Sample period, also called the test period, which begins on 06/01/2009 and ends on 30/12/2016 (2,000 observations). To compute the VaR of the Tunisian public external debt portfolio for a given confidence level (1 − α), we used the following portfolio weights: δ1 = 15% (TND/USD), δ2 = 75% (TND/EUR) and δ3 = 10% (TND/JPY).

The main difficulty in using Monte Carlo simulations for VaR inference is the amount of time it takes to compute accurate estimates, especially when the portfolio consists of many risk factors and/or when the confidence level is high. To test the dependence of the numerical results on the number of Monte Carlo steps, we carried out this investigation for various numbers of Monte Carlo steps ranging from 100 to 2,000. For the Tunisian public debt portfolio, Figure 3 presents the estimated losses obtained from a CMC simulation on the one hand and from Monte Carlo simulations using the FA, GQARCH, MFA and CHFM approaches on the other. From this figure, it appears that once about 500 Monte Carlo steps are reached, the resulting VaR estimates do not change significantly with any additional increase in the number of Monte Carlo steps. This behaviour can be observed for the CHFM approach and for each confidence level.

Therefore, with a rolling sample, we estimated the parameters of the VaR models (CMC, FA, MFA, GQARCH(1, 1) and CHFM) using 500 simulations for the different risk levels. The next step is to check the stability and reliability of the results over time through the backtesting procedure. We apply Kupiec’s PoF test (Unconditional Coverage test), the Independence test and the Conditional Coverage test. The VaR numbers derived from the different approaches lead to a wide range of conclusions. The results of the backtesting are presented in Table 4, Table 5, Table 6 and Table 7.

Moreover, we can see in Figure 4 that the 99%, 98%, 95% and 90% VaR estimates vary more quickly for the CHFM and MFA models than for the CMC, GQARCH(1, 1) and FA models. In addition, the CMC, GQARCH and FA methods give VaR estimates lower than the CHFM and MFA approaches.

Evidently, VaR through the CHFM and MFA using Monte Carlo simulation shows an improvement in the management of exchange risk exposure compared to the CMC, FA and GQARCH(1, 1) approaches over time. From 2009 to 2016, as expected, the CMC method performs poorly. Indeed, this approach underestimated risk, presenting poor results for the risk levels of 1%, 2%, 5% and 10% under the Christoffersen test (the conditional coverage test statistics were respectively 32.215, 24.418, 19.229 and 17.997).

The VaR through the CMC has suffered due to the fluctuations of the exchange rates of the three currencies. Consequently, the results lead us to reject this model’s credibility given its poor significance level. It is noticeable from the FX series that the VaR estimates are affected by significant fluctuations of the volatility (see Figure 4). Thus, the main disadvantage of this method is its lack of reactivity, which suggests, in part, the use of additional modelling structures that incorporate the interactions between the three risk factors (TND/USD, TND/EUR, TND/JPY). Such dependence can be represented by more adequate approaches that can estimate the Value-at-Risk under the assumption of heteroskedasticity of the risk factors. We know that the FA method explains the correlation between the observed variables from a minimum number of factors, but it does not take into account the heterogeneity and conditional heteroskedasticity in the volatility of the different currencies. As noted before, the CHFM and MFA using Monte Carlo simulations are more suitable for constructing the joint multivariate distribution of losses and are more flexible and realistic in terms of allowing a wide range of dependence structures.

This improvement in the management of FX risk was possibly due mainly to the changes in the FX regimes over time. Evidently, we have verified, for the different risk levels, that the CHFM and MFA methods adjust rapidly compared with the other competing methods (see Figure 4), allowing them to better track the fluctuation of the volatility over the Out-of-Sample period and to detect the fluctuation of the VaR adequately. A first inspection of Table 4, Table 5 and Table 6, and of Figure 4, shows that the VaR violations fluctuated with the volatility of the respective period. As mentioned earlier, these models are able to capture adequately some particular characteristics of the portfolio series, such as changes in volatility, heterogeneity and dependence.

It is also noticeable from the results that the detection of the first violation is improved. In fact, for the 10% risk level, both the CHFM and MFA models capture the first loss on the 4th day, compared to the 11th day for the CMC approach, the 9th day for the FA model and the 9th day for the GQARCH(1, 1) model; for the 1% and 2% levels it is the 13th day. For a 1% coverage rate, the results are similar for the CHFM, GQARCH(1, 1) and MFA methods; more exactly, the LRcc test gives the same statistic (1.1853) and an identical failure rate, estimated at 1.15%. More precisely, the two VaR sequences (obtained from the CHFM and MFA methods) are too similar for the backtests to discriminate between them.

From Table 4 and Table 5, the tests conclude in favour of the validity of the risk measures at both the 1% and 2% levels for the CHFM model as well as for the MFA method. However, the results vary considerably from one specification to another; more precisely, it is possible to conclude affirmatively for this portfolio that the CHFM gives the best results in terms of backtesting. Indeed, its unconditional and conditional coverage tests give better results than those of the MFA model.

Table 5 shows that the CHFM model performs better for the FX portfolio at the 2% level, as its failure rate is 2.05%. The CHFM model appears to be remarkably accurate in that case compared to the MFA approach. Indeed, this approach provides accurate VaR forecasts for this portfolio at the 98% confidence level. Moreover, it passes the two tests for “unconditional coverage” and for “independence” (respectively 0.0923 and 0.0000), implying a Christoffersen test statistic (1.2047) below the χ2 critical value (5.991).

Table 6 and Table 7 confirm results in line with the previous findings at the 2% risk level. In fact, we found that the CHFM model forecasts better for the FX portfolio at the 5% and 10% risk levels than the MFA model, since its LRcc statistics (1.1251; 1.626) are more favourable than those of the competing method (1.8083; 1.8307).

    Finally, we can compare the Backtesting results using the following quantity:

$S = \sum_{i=1}^{4}\left(\dfrac{\hat{\alpha}_i - \alpha_i}{\alpha_i}\right)^2$

where the $\hat{\alpha}_i$ are the failure rates of each method at the four risk levels. In this case, the best model is the one with the smallest value of S. Hence, using the quantity S we can rank our VaR models. It appears from Table 8 that the CHFM is the most accurate and consistent approach for forecasting the VaR for currency risk, which means, according to Table 8, that its observed proportions of failure $\hat{\alpha}$ are very close to the expected proportions of failure α (1%, 2%, 5% and 10%). The MFA method comes second, and the CMC approach has the largest S.
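For illustration, the ranking quantity S can be computed as follows; the failure rates in the example call are hypothetical and do not reproduce Table 8.

```python
import numpy as np

def ranking_score(failure_rates, risk_levels=(0.01, 0.02, 0.05, 0.10)):
    """Ranking quantity S: sum of squared relative deviations of the observed
    failure rates from the nominal risk levels (smaller is better)."""
    a_hat = np.asarray(failure_rates)
    a = np.asarray(risk_levels)
    return float(np.sum(((a_hat - a) / a) ** 2))

# Hypothetical failure rates for one model at the 1%, 2%, 5% and 10% levels
print(ranking_score([0.0115, 0.0205, 0.052, 0.104]))
```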

As a consequence, the comparison between the different specifications shows that, according to the different measures of failure-rate forecasting performance and the backtesting of the Value at Risk, the CHFM and MFA approaches provide the best Out-of-Sample estimation for the risk levels 1%, 2%, 5% and 10% for the Tunisian FX market. The CHFM is ranked in first position, as the results demonstrate fewer exceptions and more significant tests for the different risk levels than the other competing models. The MFA is ranked in second position.

    6.CONCLUSION

Modeling financial time series is clearly a difficult but no less important part of financial risk management. The difficulties are caused by the specific characteristics of financial time series, such as heterogeneity, fat tails, conditional heteroskedasticity, volatility clustering and dependence, which cannot be easily modeled. In this paper, a new methodology based on the conditionally heteroskedastic latent factor model was proposed and backtested on the Tunisian public debt portfolio. Assuming risk levels of 1%, 2%, 5% and 10%, we calculated the VaRs for this portfolio on the basis of 250 banking days, using the classical CMC, FA, MFA, the GQARCH(1, 1) model and our proposed CHFM model. The computations and the corresponding backtesting of the results have been performed on the basis of historical foreign exchange rates spanning eight years, so that our backtesting statistics are based on 2,000 measurements.

More precisely, to forecast the VaR with these approaches, the analysis is conducted on a test period from 06/01/2009 to 30/12/2016. We showed that the CHFM and MFA models give adequate VaR estimates and were the most accurate in assessing Tunisian currency risk. Our results showed that the CHFM and MFA (particularly the CHFM) rank first in the Value at Risk analysis for the 1%, 2%, 5% and 10% risk levels. Such models perform better than the other approaches as they imply a sound capital allocation and a good estimation of exceptions. This finding is tested empirically using backtesting techniques with several tests and risk levels. We also notice that the CMC and FA models give statistically insignificant estimations, lacking the property of “correct conditional coverage”; the results thus lead us to reject these models’ credibility, given their poor significance levels.

To be able to decide whether or not one should prefer the conditionally heteroskedastic latent factor model to the traditional models, a supplementary investigation of the speed of the proposed algorithm would be useful. As we considered only three risk factors, a natural extension of studies in this field would be to include four or more risk factors. Furthermore, our model can be generalized to one in which the specific factors are allowed to be stochastic functions of time. By combining the conditionally heteroskedastic latent factor model with hidden Markov chain models, we can derive a dynamic local model for segmentation and prediction of multivariate conditionally heteroscedastic financial time series (see, for instance, Saidane and Lavergne, 2007, 2008, 2009, 2013). The study of such models would provide a further step in the extension of hidden Markov models to mixed conditionally heteroscedastic latent factor models and allow for further flexibility in market risk analysis and value-at-risk applications.

Figure

Figure 1. Real daily observed exchange rates and their returns from 08-01-2008 to 30-12-2016.

Figure 2. Histograms (top panels) and normal probability plots (bottom panels) of the daily log-return series from 08-01-2008 to 30-12-2016. The normal density with the same mean and variance is superimposed on the histogram plots.

Figure 3. Backtesting results of the Tunisian public debt portfolio, using the classical method, the GQARCH(1, 1) and the factorial approaches for different quantiles and various numbers of Monte Carlo steps.

Figure 4. Backtesting results of the Tunisian public debt portfolio, using the CMC (pink), FA (yellow), GQARCH(1, 1) (green), MFA (red) and CHFM (blue) methods for quantiles α = 1%, 2%, 5% and 10% and 500 Monte Carlo steps.

Table

Table 1. Descriptive statistics of daily log-returns of Tunisian exchange rates

Table 2. Tests for stationarity (the critical values at the 5% level are 0.1460 for the KPSS test and -1.9416 for the PP test)

Table 3. Correlation matrix for daily foreign exchange rate log-returns

Table 4. Model evaluation for VaR, α = 1%

Table 5. Model evaluation for VaR, α = 2%

Table 6. Model evaluation for VaR, α = 5%

Table 7. Model evaluation for VaR, α = 10%

Table 8. Failure rates for different confidence levels α

    REFERENCES

1. Akbar F. , Chauveau T. (2009) Exchange rate risk exposure related to public debt portfolio of Pakistan: Application of value-at-risk approaches , SBP Research Bulletin, Vol.5 (2) ; pp.15-33
    2. Akhtekhane S.S. , Mohammadi P. (2012) Measuring exchange rate fluctuations risk using the value at risk , Journal of Applied Finance & Banking, Vol.2 (3) ; pp.65-79
    3. Alexander C. (2008) Market Risk Analysis: Value at Risk Models, John Wiley & Sons Ltd, Vol.Vol.4
    4. Aron J. , Macdonald R. , Muellbauer J. (2014) Exchange rate pass-through in developing and emerging markets: A survey of conceptual, methodological and policy issues, and selected empirical findings , J. Dev. Stud, Vol.50 (1) ; pp.101-143
5. Basel Committee on Banking Supervision (2009) International framework for liquidity risk measurement, standards and monitoring: Consultative document, December 2009 , Available from: http://www.bis.org/publ/bcbs165.pdf
    6. Batten J.A. , Kinateder H. , Wagner N. (2014) Multifractality and value-at-risk forecasting of exchange rates , Physica A, Vol.401 ; pp.71-81
    7. Berkowitz J. , O’Brien J. (2002) How Accurate are Value-at-Risk Models at Commercial Banks? , Finance, Vol.57 (3) ; pp.1093-1111
    8. Black F. (1976) Studies of stock market volatility changes , Proceedings of the American Statistical Association,
    9. Carol A. , Sheedy E. (2008) Developing a stress testing framework based on market risk models , J. Bank. Finance, Vol.32 (10) ; pp.2220-2236
    10. Christie A.A. (1982) The stochastic behavior of common stock variances: Value, leverage and interest rate effects , J. Financ. Econ, Vol.10 (4) ; pp.407-432
    11. Christoffersen P. (1998) Evaluating interval forecasts , Int. Econ. Rev, Vol.39 (4) ; pp.841-862
    12. Corkalo S. (2011) Comparison of value at risk approaches on a stock portfolio , Croatian Operational Research Review, Vol.2 (1) ; pp.81-90
13. Dempster A.P. , Laird N.M. , Rubin D.B. (1977) Maximum likelihood from incomplete data via the EM algorithm , J. R. Stat. Soc. B, Vol.39 (1) ; pp.1-38
14. Engle R. , Ng V.K. , Rothschild M. (1990) Asset pricing with a factor-ARCH structure: Empirical estimates for treasury bills , J. Econom, Vol.45 (1-2) ; pp.213-237
    15. Evers C. , Rohde J. (2014) Model Risk in Backtesting Risk Measures , Working Paper,
    16. Fiksriyoso N. , Surya B.A. (2013) Application of value at risk for managing portfolio currencies of transaction exposure: A case study of trade Payables in PT. United Tractors, Tbk , Indonesian Journal of Business Administration, Vol.2 (8) ; pp.933-948
    17. Harvey A. , Ruiz E. , Sentana E. (1992) Unobserved component time series models with ARCH disturbances , J. Econom, Vol.52 (1-2) ; pp.129-157
    18. Jorion P. (2007) Financial Risk Manager Handbook, John Wiley & Sons, Inc.,
    19. Kupiec P. (1995) Techniques for verifying the accuracy of risk management models , J. Deriv, Vol.3 (2) ; pp.73-84
    20. Kwiatkowski D. , Phillips P. , Schmidt P. , Shin Y. (1992) Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? , J. Econom, Vol.54 (1-3) ; pp.159-178
    21. Linsmeier T.J. , Pearson N.D. (1996) Risk Measurement: An Introduction to Value at Risk , Office for Futures and Options Research Working Paper No. 96-04,
    22. McLachlan G. , Krishnan T. (2008) The EM Algorithm and Extensions, John Wiley & Sons, Inc.,
    23. McLachlan G. , Peel D. (2000) Finite Mixture Models, John Wiley & Sons, Inc.,
    24. Meng X.L. , van Dyk D.A. (1997) The EM algorithm an old folk song sung to a fast new tune , J. R. Stat. Soc. B, Vol.59 (3) ; pp.511-567
    25. Miletic M. , Miletic S. (2015) Performance of value at risk models in the midst of the global financial crisis in selected CEE emerging capital markets , Economic Research-Ekonomska Istraživanja, Vol.28 (1) ; pp.132-166
    26. Nieppola O. (2009) Backtesting Value-at-Risk Models, Unpublished Master’s Thesis, Helsinki School of Economics,
    27. Phillips P.C. , Perron P. (1988) Testing for a unit root in time series regression , Biometrika, Vol.75 (2) ; pp.335-346
28. Rejeb A.B. , Salha O.B. , Rejeb J.B. (2012) Value-at-risk analysis for the Tunisian currency market: A comparative study , International Journal of Economics and Financial Issues, Vol.2 (2) ; pp.110-125
    29. Saidane M. , Lavergne C. (2007) Conditionally heteroscedastic factorial HMMs for time series in finance , Appl. Stochastic Models Data Anal, Vol.23 (6) ; pp.503-529
30. Saidane M. , Lavergne C. (2008) An EM-based Viterbi approximation algorithm for mixed-state latent factor models , Commun. Stat. Theory Methods, Vol.37 (17) ; pp.2795-2814
31. Saidane M. , Lavergne C. (2009) Optimal prediction with conditionally heteroskedastic factor analysed hidden Markov models , Comput. Econ, Vol.34 (4) ; pp.323-364
32. Saidane M. , Lavergne C. (2013) Generalized linear factor models: A new local EM estimation algorithm , Commun. Stat. Theory Methods, Vol.42 (16) ; pp.2944-2958
    33. Salhi K. , Deaconu M. , Lejay A. , Champagnat N. , Navet N. (2016) Regime switching model for financial data: Empirical risk analysis , Physica A, Vol.461 ; pp.148-157
    34. Samson L. (2013) Asset prices and exchange risk: Empirical evidence from Canada , Res. Int. Bus. Finance, Vol.28 ; pp.35-44
    35. Sentana E. (1995) Quadratic ARCH models , Rev. Econ. Stud, Vol.62 (4) ; pp.639-661
36. Stavroyiannis S. , Zarangas L. (2013) Out of sample value-at-risk and backtesting with the standardized Pearson type-IV skewed distribution , Panoeconomicus, Vol.60 (2) ; pp.231-247
    37. Tokmakçıoğlu K. (2009) The Measurement of Currency Risk: Comparison of Two Turkish Firms in the Turkish Leather Industry ,