Friday, November 29, 2013

Testing AutoRegressive Conditional Heteroskedasticity (ARCH) Effect within R

Even though the title itself, "AutoRegressive Conditional Heteroskedasticity (ARCH)", sounds scary, the basic idea is simple: the conditional volatility (heteroskedasticity) of a time series is itself (auto) regressive, much as in an ARMA(p,q) model of the series itself; in other words, volatility is not constant over time:
$$ \sigma_t^2=\alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 = \alpha_0 + \sum_{i=1}^q \alpha_i \epsilon_{t-i}^2 $$
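
For example, in the simplest case, ARCH(1), this reduces to $ \sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 $: today's variance is high whenever yesterday's shock was large.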

In this post, I explain how to test whether a time series has an ARCH effect. In the formula above, if the null hypothesis is chosen as $ \alpha_1 = \cdots = \alpha_q = 0 $, we can conclude the series has an ARCH effect when that null hypothesis is rejected. The Box.test command in R can be used for this purpose; note that Box.test computes the Ljung–Box test statistic for examining the null hypothesis of independence in a given time series. Below we analyse the monthly log returns of Intel from 1973 to 2008.

> data=read.table("http://www.mif.vu.lt/~rlapinskas/DUOMENYS/Tsay_fts3/m-intc7308.txt",header=T)
> ret=log(data[,2]+1)
# let's find a suitable lag order first
> a=pacf(ret)
> summary(a)
       Length Class  Mode     
acf    26     -none- numeric  
type    1     -none- character
n.used  1     -none- numeric  
lag    26     -none- numeric  
series  1     -none- character
snames  0     -none- NULL  
# now test whether there is serial correlation
> Box.test(ret,lag=26,type='Ljung')

 Box-Ljung test

data:  ret
X-squared = 37.0152, df = 26, p-value = 0.07452

## As we cannot reject the null hypothesis (independence), we assume there is no serial correlation.
## So we can now test whether the variance is constant or not.
> var=(ret-mean(ret))^2
> Box.test(var,lag=26,type='Ljung')

 Box-Ljung test

data:  var
X-squared = 104.7286, df = 26, p-value = 2.073e-11
Box.test shows we can reject the null hypothesis (independence) for the squared de-meaned returns, so the variance has significant serial correlation; in other words, there is an ARCH effect.
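
An alternative, more direct check is Engle's Lagrange Multiplier test. Here is a minimal sketch, assuming the FinTS package (which provides ArchTest) is installed and ret is the return series loaded above:

## Engle's LM test regresses the squared (de-meaned) series on its own
## lags and tests whether the lag coefficients are jointly zero.
> library("FinTS")
> ArchTest(ret, lags=12)
## A small p-value rejects the null of no ARCH effect, consistent with
## the Ljung-Box result on the squared returns above.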

Friday, November 22, 2013

Java Garbage Collection Log File Scraper

'''
 
Created by Deniz Turan
 
Oracle Java (tested on Java 7) young-generation garbage collection activity scraper.
Extracts the following fields from a GC log file and saves them to a CSV file:

Count,LogTime,logGCOffsetTime,logGCOffsetTime2, 
YGPreSize,YGPostSize,YGTotalSize, YGElapsedTime,     # Young generation
OLDPreSize,OLDPostSize,OLDTotalSize,OLDElapsedTime   # Old generation 

 
'''
import glob
import os
 
logDir="C:\\temp\\gc\\"
finalResultFileName=logDir+"finalResults.csv"
filterExtension="*.log"
 
def getLogFileList(search_dir):
    files = filter(os.path.isfile, glob.glob(search_dir + filterExtension))
    files.sort(key=lambda x: os.path.getmtime(x))
    return files
 
def openResultFile():
    print "Creating result file : %s" % (finalResultFileName)
    # remove the previous result file, if any (portable, unlike shelling out to rm)
    if os.path.exists(finalResultFileName):
        os.remove(finalResultFileName)
    resultFileFD = open(finalResultFileName, "a")
    ## create header
    resultFileFD.write("Count,LogTime,logGCOffsetTime,logGCOffsetTime2,")
    resultFileFD.write("YGPreSize,YGPostSize,YGTotalSize, YGElapsedTime,")
    resultFileFD.write("OLDPreSize,OLDPostSize,OLDTotalSize,OLDElapsedTime\n")
    return resultFileFD
       
def closeResultFile(resultFileFD):
    print "Closing result file"
    resultFileFD.close()
 
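# Parses a size triple such as "19136K->2112K(19136K)," (assumed layout:
# pre-GC size, post-GC size and total size); the index arithmetic below
# depends on that exact "K->...K(...K)" pattern.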
def getFieldValue(strVal):
    index=strVal.index("K")
    index2=strVal.index("K", index+1)
    index3=strVal.index("K", index2+1)
    
    part1=strVal[:index]
    part2=strVal[index+3:index2]
    part3=strVal[index2+2:index3]
    return (part1,part2,part3)
 
#####################################################
# Main
#####################################################
if __name__ == '__main__':
    # prepare result file   
    resultFileFD=openResultFile()
    
    print "Started to process log files"
    logFileList=getLogFileList(logDir)
    count=0
    for f in logFileList:
        print "Processing GC Log file %s"%f
        logFD = open(f)
        line = logFD.readline()
        while (line != "" ):           
            if "ParNew" in line :
                    count=count+1                   
                    fields=line.split(" ")
                    logTime=fields[0]
                    logGCOffsetTime=fields[1]
                    logGCOffsetTime2=fields[3]
                    res=getFieldValue(fields[5])
                    YGPreSize,YGPostSize,YGTotalSize=res
                    YGElapsedTime=fields[6]
                    res=getFieldValue(fields[8])
                    OLDPreSize,OLDPostSize,OLDTotalSize=res
                    OLDElapsedTime=fields[9]
                    print line
                   
                    print "%d %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s\n"%(count,logTime,logGCOffsetTime,logGCOffsetTime2, \
                                                                   YGPreSize,YGPostSize,YGTotalSize, YGElapsedTime,\
                                                                   OLDPreSize,OLDPostSize,OLDTotalSize,OLDElapsedTime)
                    # print to file as CSV now
                    resultFileFD.write("%d,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s\n"%(count,logTime,logGCOffsetTime,logGCOffsetTime2, \
                                                                   YGPreSize,YGPostSize,YGTotalSize, YGElapsedTime,\
                                                                   OLDPreSize,OLDPostSize,OLDTotalSize,OLDElapsedTime))                   
                    
            line = logFD.readline()
        logFD.close()
    closeResultFile(resultFileFD)
    print "finished processing log files"

Monday, November 18, 2013

Simple Passive Momentum Trading with Bollinger Band

Below you can see a simple trading algorithm based on momentum and a Bollinger Band, running on Quantopian.com.

# Simple Passive Momentum Trading with Bollinger Band
import numpy as np

# globals for batch transform decorator
R_P = 1 # refresh period in days
W_L = 30 # window length in days
def initialize(context):
    context.stock = sid(24) # Apple (ignoring look-ahead bias)
    # notional limits for long and short positions
    context.max_notional = 1000000
    context.min_notional = -1000000.0
    # set a fixed slippage
    set_slippage(slippage.FixedSlippage(spread=0.01))
                
def handle_data(context, data):
    # get the rolling mean and standard deviation of the price
    rVal=getMeanStd(data)

    # let's not do anything if we don't have enough data yet
    if rVal is None:
        return    
    
    meanPrice,stdPrice = rVal
    price=data[context.stock].price
    notional = context.portfolio.positions[context.stock].amount * price
    
    # Passive momentum trading: the z-score of the price is the trading signal
    h=((price-meanPrice)/stdPrice)
    # Bollinger Band: if the price is more than 2 std away from the moving mean, then trade
    if h>2 and notional < context.max_notional  :
       # long
       order(context.stock,h*1000)
    if h<-2 and notional > context.min_notional:
       # short
       order(context.stock,h*1000)
     
@batch_transform(window_length=W_L, refresh_period=R_P) 
def getMeanStd(datapanel):
    prices = datapanel['price']
    meanPrice=prices.mean()
    stdPrice=prices.std()
    if meanPrice is not None and stdPrice is not None :
        return (meanPrice, stdPrice)
    else:
        return None

[Screenshot of the backtesting result]
You can run the algorithm on Quantopian.com.

Sunday, November 10, 2013

Cointegration Tests (ADF and Johansen) within R

In pair trading, in addition to correlation, cointegration can be a very useful tool for choosing which securities to pair. In this post, I demonstrate two approaches to testing the cointegration of two financial time series, the ETFs EWA and EWC, within R. As the chart below shows, there has been cointegration between EWA and EWC over the last 5 years.

[Chart: adjusted close prices of EWA and EWC over the last 5 years]

ADF Test

In this approach, we use linear regression to estimate the spread between the two securities and then the ADF test to check whether the spread is stationary, which in effect is also a test of cointegration for the two securities.
> library("quantmod")   # to get data for the symbols
> library("fUnitRoots") # unit root tests

## First, let's get the data for EWA and EWC from Yahoo Finance and extract the adjusted close prices
>getSymbols("EWA")
>getSymbols("EWC")
>ewaAdj=unclass(EWA$EWA.Adjusted)
>ewcAdj=unclass(EWC$EWC.Adjusted)

## Now let's do a linear regression where we assume the drift is zero. Since we are not sure which security is dependent and which is independent, we need to do the following for both cases.

## EWC is dependent here
> reg=lm(ewcAdj~ewaAdj+0)

## And now let's run the ADF test on the spread (which is just the residuals of the regression step above)
> adfTest(reg$residuals, type="nc")

Title:
 Augmented Dickey-Fuller Test

Test Results:
  PARAMETER:
    Lag Order: 1
  STATISTIC:
    Dickey-Fuller: -1.8082
  P VALUE:
    0.07148 

## This time EWA is the dependent variable
> reg=lm(ewaAdj~ewcAdj+0)
> adfTest(reg$residuals, type="nc")

Title:
 Augmented Dickey-Fuller Test

Test Results:
  PARAMETER:
    Lag Order: 1
  STATISTIC:
    Dickey-Fuller: -1.7656
  P VALUE:
    0.07793 

We use the most negative Dickey-Fuller statistic (-1.8082 vs -1.7656) to choose which regression to use; based on that, we pick EWC as the dependent variable. At the 90% confidence level (the p-value is about 7%), we can reject the null hypothesis (unit root), so we can assume the spread (residual) is stationary and therefore that there is cointegration. Below is a function coded for this purpose:

cointegrationTestLM_ADF <-function(A, B, startDate) {
  cat("Processing stock:",A ," and ", B, " start date:",startDate)
  
  aData=getSymbols(A,from=startDate,auto.assign = FALSE)
  aAdj=unclass(aData[,6])
  bData=getSymbols(B,from=startDate,auto.assign = FALSE)
  bAdj=unclass(bData[,6])
  lenA=length(aAdj)
  lenB=length(bAdj)
  N= min(lenA,lenB) 
  startA=1
  startB=1
  if (lenA!=N || lenB!=N){
    startA=lenA-N+1
    startB=lenB-N+1
  }
  cat("\nIndex start",A,":",startA," Length ",lenA )
  cat("\nIndex start",B,":",startB," Length ",lenB)
  aAdj=aAdj[startA:lenA,]
  bAdj=bAdj[startB:lenB,]
  
  regA=lm(aAdj~bAdj+0)
  
  summary(regA)
  regB=lm(bAdj~aAdj+0)
  summary(regB)
  
  coA=adfTest(regA$residuals, type="nc")
  coB=adfTest(regB$residuals, type="nc")
  
  
  cat("\n",A," p-value",coA@test$p.value," statistics:",coA@test$statistic)     
  cat("\n",B," p-value",coB@test$p.value," statistics:",coB@test$statistic)     
  
  
  # Let's choose the most negative test statistic
  if (coA@test$statistic < coB@test$statistic){
   cat("\nStock ",A, " is dependent on stock ",B)
    cat("\np-value",coA@test$p.value," statistics:",coA@test$statistic)     
    p=coA@test$p.value
    s=coA@test$statistic
  }else {
    cat("\n Stock ",B, " is dependent on stock:",A)
    cat("\n p-value",coB@test$p.value," statistics:",coB@test$statistic)     
    p=coB@test$p.value
    s=coB@test$statistic     
   }   
  return(c(s,p))
}
How to run it:
res=cointegrationTestLM_ADF("EWA","EWC",'2007-01-01')
Processing stock: EWA  and  EWC  start date: 2007-01-01
Index start EWA : 1  Length  1731
Index start EWC : 1  Length  1731
 EWA  p-value 0.0501857  statistics: -1.948774
 EWC  p-value 0.04719164  statistics: -1.981454
 Stock  EWC  is dependent on stock: EWA
 p-value 0.04719164  statistics: -1.981454

res
  -1.98145360    0.04719164 

Johansen Test

As you can see, the ADF approach above has some drawbacks:
- It is not clear which security should be the dependent and which the independent variable
- It cannot test more than two instruments at once

The Johansen test addresses these points.
> library("urca") # For cointegration 

> coRes=ca.jo(data.frame(ewaAdj,ewcAdj),type="trace",K=2,ecdet="none", spec="longrun")
> summary(coRes)

###################### 
# Johansen-Procedure # 
###################### 

Test type: trace statistic , with linear trend 

Eigenvalues (lambda):
[1] 0.004881986 0.001200577

Values of teststatistic and critical values of test:

          test 10pct  5pct  1pct
r <= 1 |  2.07  6.50  8.18 11.65
r = 0  | 10.51 15.66 17.95 23.52

Eigenvectors, normalised to first column:
(These are the cointegration relations)

                EWA.Adjusted.l2 EWC.Adjusted.l2
EWA.Adjusted.l2        1.000000       1.0000000
EWC.Adjusted.l2       -1.253545      -0.3702406

Weights W:
(This is the loading matrix)

               EWA.Adjusted.l2 EWC.Adjusted.l2
EWA.Adjusted.d     0.007172485    -0.003894786
EWC.Adjusted.d     0.011970316    -0.001504604

The Johansen test estimates the rank (r) of a given matrix of time series at a chosen confidence level. In our example we have two time series, so Johansen tests the null hypotheses r = 0 (no cointegration at all) and r <= 1 (and so on up to n-1, where n = 2 in our example). If, say, the r <= 1 test value (2.07) had been greater than a critical value (for example 6.50 at the 10% level), we would conclude there is cointegration of rank r (in this case r <= 1). But as you see, none of our test values are greater than the critical values at r = 0 or r <= 1, so there is no cointegration. This is the opposite of the ADF result we found above; from some research, I have found that the Johansen test can be misleading in some extreme cases. Once cointegration is established, the eigenvector (normalised first column) would be used as the weights of a portfolio.
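
To make that last point concrete, here is a minimal sketch of turning the first eigenvector into a spread, assuming cointegration had been found (coRes@V is the eigenvector matrix of the urca result above):

## Use the first (normalised) eigenvector as portfolio weights and form
## the spread; if the series were cointegrated, this spread should be stationary.
> w=coRes@V[,1]                     # c(1, -1.253545) in the output above
> spread=w[1]*ewaAdj + w[2]*ewcAdj
> plot(spread, type="l")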

In addition to the above methods, the KPSS (Kwiatkowski–Phillips–Schmidt–Shin) test can also be used to test stationarity.
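
Note that KPSS reverses the hypotheses: its null is stationarity, so now a small p-value argues against a stationary spread. A minimal sketch using kpss.test from the tseries package on the regression residuals from the ADF section above:

## KPSS on the ADF-regression residuals; here the null hypothesis IS
## stationarity, the opposite of the ADF null.
> library("tseries")
> kpss.test(reg$residuals, null="Level")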

Sunday, November 03, 2013

Stationarity Tests: Augmented Dickey–Fuller (ADF), Hurst Exponent, Variance Ratio (vrtest) of a Time Series within R

Several trading strategies (momentum, mean reverting, ...) depend on whether the data is stationary or not. In this post, I demonstrate how to test this statistically.
library("quantmod") # for downloading fx data
library("pracma") # for hurst exponent 
library("vrtest") # variance ratio test
library("tseries") # for adf test
library("fUnitRoots")  # for adf test

## First let's fetch USD/CAD data (getFX downloads daily rates from Oanda)
getFX("USD/CAD")
usdCad=unclass(USDCAD) # unwrap the price column
# take logs of the prices
n=length(usdCad)
usdcadLog=log(usdCad[1:n])

## First use the Augmented Dickey–Fuller test (adfTest from fUnitRoots) to check whether USD/CAD is stationary
> adfTest(usdCad, lags=1)

Title:
 Augmented Dickey-Fuller Test

Test Results:
  PARAMETER:
    Lag Order: 1
  STATISTIC:
    Dickey-Fuller: 0.2556
  P VALUE:
    0.6978 

Description:
 Sun Nov 03 16:47:27 2013 by user: deniz

## As you see above, the null hypothesis (unit root) cannot be rejected, with a p-value of ~70%.

## So we have demonstrated it is not stationary. The next question is whether it is trending or mean reverting; the Hurst exponent (H) can be used for this purpose. (Note the Hurst exponent relies on the fact that a random walk diffuses in proportion to the square root of time.)
# The value of H can be interpreted as follows:
# H=0.5: Brownian motion (random walk)
# H<0.5: mean reverting
# H>0.5: trending
> hurst(usdcadLog)
[1] 0.9976377

# So USD/CAD is in a trending phase.
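
As a quick sanity check of this interpretation, here is a minimal sketch on simulated series (an assumption: in recent pracma versions the estimator is called hurstexp and prints several H estimates):

## A random walk should give H ~ 0.5, a negatively autocorrelated AR(1)
## H < 0.5, and a drifting random walk H > 0.5.
> set.seed(42)
> hurstexp(cumsum(rnorm(1000)))              # random walk
> hurstexp(arima.sim(list(ar=-0.5), n=1000)) # mean reverting
> hurstexp(cumsum(rnorm(1000, mean=0.05)))   # trending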

## Another way to test stationarity is a variance ratio test (the Lo–MacKinlay M1/M2 statistics below are asymptotically standard normal, so values this large clearly reject the random-walk null):
> vrtest::Auto.VR(usdcadLog)
[1] 83.37723
> vrtest::Lo.Mac(usdcadLog,c(2,4,10))
$Stats
           M1       M2
k=2  22.10668 15.98633
k=4  35.03888 25.46031
k=10 56.21660 41.58861

## Another way to analyse stationarity is via linear regression, where we check whether the change in the series depends on its level (a Dickey-Fuller style regression; strictly, the level should be lagged by one period)

> deltaUsdcadLog=c(0,usdcadLog[2:n]-usdcadLog[1:(n-1)])
> r=lm(deltaUsdcadLog ~ usdcadLog)
> summary(r)

Call:
lm(formula = deltaUsdcadLog ~ usdcadLog)

Residuals:
       Min         1Q     Median         3Q        Max 
-0.0121267 -0.0013094 -0.0000982  0.0012327  0.0103982 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)  
(Intercept) -7.133e-05  1.306e-04  -0.546   0.5853  
usdcadLog    8.772e-03  5.234e-03   1.676   0.0944 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.002485 on 498 degrees of freedom
Multiple R-squared:  0.005608, Adjusted R-squared:  0.003611 
F-statistic: 2.808 on 1 and 498 DF,  p-value: 0.0944

> r$coefficients[2]
  usdcadLog 
0.008771754 

## The coefficient (beta) gives a clue about mean reversion: if it is negative, the series is mean reverting. As you can see above, it is positive, so, as we already concluded, the series is trending. If it were negative, we would use the following to find the half-life of mean reversion:

> -log(2)/r$coefficients[2]
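
Since USD/CAD is trending here, there is no half-life to compute, but here is a minimal sketch of the whole procedure on a simulated mean-reverting AR(1) series:

## For an AR(1) with coefficient 0.95, the regression slope should come out
## near 0.95 - 1 = -0.05, giving a half-life of about log(2)/0.05 ~ 14 periods.
> set.seed(1)
> y=as.numeric(arima.sim(list(ar=0.95), n=1000))
> dy=y[2:1000]-y[1:999]
> fit=lm(dy ~ y[1:999])
> -log(2)/fit$coefficients[2]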