11 Ensemble models
Let us be honest. When facing a prediction task, it is not obvious which ML tool is the best choice: penalized regressions, tree methods, neural networks, SVMs, etc. A natural and tempting alternative is to combine several algorithms (or the predictions that result from them) to try to extract value out of each engine (or learner). This intention is not new and contributions towards this goal go back at least to Bates and Granger (1969) (for the purpose of passenger flow forecasting).
Below, we outline a few books on the topic of ensembles. The latter have many names and synonyms, such as forecast aggregation, model averaging, mixture of experts or prediction combination. The first four references below are monographs, while the last two are compilations of contributions:
- Zhou (2012): a very didactic book that covers the main ideas of ensembles;
- Schapire and Freund (2012): the main reference for boosting (and hence, ensembling) with many theoretical results and thus strong mathematical groundings;
- Seni and Elder (2010): an introduction dedicated to tree methods mainly;
- Claeskens and Hjort (2008): an overview of model selection techniques with a few chapters focused on model averaging;
- C. Zhang and Ma (2012): a collection of thematic chapters on ensemble learning;
- Okun, Valentini, and Re (2011): examples of applications of ensembles.
In this chapter, we cover the basic ideas and concepts behind the notion of ensembles. We refer to the above books for deeper treatments of the topic. We underline that several ensemble methods have already been covered earlier, notably in Chapter 6: random forests and boosted trees are examples of ensembles. Other early articles on the combination of learners include Schapire (1990), R. A. Jacobs et al. (1991) (for neural networks particularly), and Freund and Schapire (1997). Ensembles can for instance be used to aggregate models that are built on different datasets (Pesaran and Pick (2011)) and can be made time-dependent (Sun et al. (2020)). For a theoretical view on ensembles, we refer to Peng and Yang (2021a) and Razin and Levy (2020) (Bayesian perspective). Forecast combinations for returns are investigated in Cheng and Zhao (2022) and widely reviewed by X. Wang et al. (2022) - see also Scholz (2022) for a contrarian take (ensembles do not always work well). Finally, perspectives linked to asset pricing and factor modelling are provided in Gospodinov and Maasoumi (2020) and De Nard, Hediger, and Leippold (2020) (subsampling and forecast aggregation).
11.1 Linear ensembles
11.1.1 Principles
In this chapter we adopt the following notation. We work with \(M\) models, where \(\tilde{y}_{i,m}\) is the prediction of model \(m\) for instance \(i\) and the errors \(\epsilon_{i,m}=y_i-\tilde{y}_{i,m}\) are stacked into an \((I\times M)\) matrix \(\textbf{E}\). A linear combination of models has sample errors equal to \(\textbf{Ew}\), where \(\textbf{w}=(w_m)_{1\le m \le M}\) gathers the weights assigned to each model and we assume \(\textbf{w}'\textbf{1}_M=1\). Minimizing the total (squared) error is thus a simple quadratic program with a single constraint. The Lagrange function is \(L(\textbf{w})=\textbf{w}'\textbf{E}'\textbf{E}\textbf{w}-\lambda (\textbf{w}'\textbf{1}_M-1)\) and hence \[\frac{\partial}{\partial \textbf{w}}L(\textbf{w})=2\textbf{E}'\textbf{E}\textbf{w}-\lambda \textbf{1}_M=0 \quad \Leftrightarrow \quad \textbf{w}=\frac{\lambda}{2}(\textbf{E}'\textbf{E})^{-1}\textbf{1}_M,\]
and the constraint imposes \(\textbf{w}^*=\frac{(\textbf{E}'\textbf{E})^{-1}\textbf{1}_M}{\textbf{1}_M'(\textbf{E}'\textbf{E})^{-1}\textbf{1}_M}\) (the constant \(\lambda\) vanishes in the normalization). This form is similar to that of minimum variance portfolios. If errors are unbiased (\(\textbf{1}_I'\textbf{E}=\textbf{0}_M'\)), then \(\textbf{E}'\textbf{E}\) is, up to a scaling factor, the covariance matrix of errors.
This expression shows an important feature of optimized linear ensembles: they can only add value if the models tell different stories. If two models are redundant, \(\textbf{E}'\textbf{E}\) will be close to singular and \(\textbf{w}^*\) will arbitrage one against the other in a spurious fashion. This is the exact same problem as when mean-variance portfolios are built with highly correlated assets: diversification fails because when things go wrong, all assets go down. Another problem arises when the number of observations is too small compared to the number of assets, so that the covariance matrix of returns is singular. This is not an issue for ensembles because the number of observations will usually be much larger than the number of models (\(I\gg M\)).
In the limit when correlations increase to one, the above formulation becomes highly unstable and ensembles cannot be trusted. One heuristic way to see this is when \(M=2\) and \[\textbf{E}'\textbf{E}=\left[ \begin{array}{cc} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \\ \end{array} \right] \quad \Leftrightarrow \quad (\textbf{E}'\textbf{E})^{-1}=\frac{1}{1-\rho^2}\left[ \begin{array}{cc} \sigma_1^{-2} & -\rho(\sigma_1\sigma_2)^{-1} \\ -\rho(\sigma_1\sigma_2)^{-1} & \sigma_2^{-2} \\ \end{array} \right]\]
so that when \(\rho \rightarrow 1\), the model with the smallest errors (minimum \(\sigma_i^2\)) sees its weight become very large while the other model receives a correspondingly large negative weight: the combination arbitrages between two highly correlated sets of errors. This seems like a very bad idea.
There is another illustration of the issues caused by correlations. Let us assume we face \(M\) correlated errors \(\epsilon_m\) with pairwise correlation \(\rho\), zero mean and variance \(\sigma^2\). The variance of the average error is \[\begin{align*} \mathbb{E}\left[\left(\frac{1}{M}\sum_{m=1}^M \epsilon_m\right)^2 \right]&=\frac{1}{M^2}\mathbb{E}\left[\sum_{m=1}^M\epsilon_m^2+\sum_{m\neq n}\epsilon_n\epsilon_m\right] \\ &=\frac{\sigma^2}{M}+\frac{1}{M^2}\sum_{n\neq m} \rho \sigma^2 \\ & =\rho \sigma^2 +\frac{\sigma^2(1-\rho)}{M}, \end{align*}\] where the second term converges to zero as \(M\) increases, but the first term remains and increases linearly with \(\rho\). In passing, because variances are always positive, this result implies that the common pairwise correlation between \(M\) variables is bounded below by \(-(M-1)^{-1}\). This result is interesting but rarely found in textbooks.
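A quick numerical check of this formula is straightforward. The sketch below (not part of the original text) simulates equicorrelated Gaussian errors with MASS::mvrnorm and compares the empirical variance of the average error to \(\rho\sigma^2+\sigma^2(1-\rho)/M\); the names (M_sim, Sigma_sim, etc.) are illustrative only.
M_sim <- 10; rho_sim <- 0.8; sigma_sim <- 0.05                          # Nb of models, common correlation, error vol
Sigma_sim <- sigma_sim^2 * (rho_sim + (1 - rho_sim) * diag(M_sim))      # Equicorrelated covariance matrix
eps_sim <- MASS::mvrnorm(10^5, mu = rep(0, M_sim), Sigma = Sigma_sim)   # Simulated error draws
var(rowMeans(eps_sim))                                                  # Empirical variance of the average error
rho_sim * sigma_sim^2 + sigma_sim^2 * (1 - rho_sim) / M_sim             # Theoretical value from the formula above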
One improvement proposed to circumvent the trouble caused by correlations, advocated in a seminal publication (Breiman (1996)), is to enforce positivity constraints on the weights and solve
\[\underset{\textbf{w}}{\text{argmin}} \ \textbf{w}'\textbf{E}'\textbf{E}\textbf{w} , \quad \text{s.t.} \quad \left\{ \begin{array}{l} \textbf{w}'\textbf{1}_M=1 \\ w_m \ge 0 \quad \forall m \end{array}\right. .\]
Mechanically, if several models are highly correlated, the constraint will impose that only one of them will have a nonzero weight. If there are many models, then just a few of them will be selected by the minimization program. In the context of portfolio optimization, Jagannathan and Ma (2003) have shown the counter-intuitive benefits of constraints in the construction of mean-variance allocations. In our setting, the constraint will similarly help discriminate wisely among the ‘best’ models.
In the literature, forecast combination and model averaging (which are synonyms of ensembles) have been tested on stock markets as early as Von Holstein (1972). Surprisingly, the articles were not published in Finance journals but rather in fields such as Management (Virtanen and Yli-Olli (1987), J.-J. Wang et al. (2012)), Economics and Econometrics (Donaldson and Kamstra (1996), Clark and McCracken (2009), Mascio, Fabozzi, and Zumwalt (2020)), Operations Research (W. Huang, Nakamori, and Wang (2005), Leung, Daouk, and Chen (2001), and Bonaccolto and Paterlini (2019)), and Computer Science (Harrald and Kamstra (1997), Hassan, Nath, and Kirley (2007)).
In the general forecasting literature, many alternative (refined) methods for combining forecasts have been studied. Trimmed opinion pools (Grushka-Cockayne, Jose, and Lichtendahl Jr (2016)) compute averages over the predictions that are not too extreme (or not too noisy, see Chiang, Liao, and Zhou (2021)). Ensembles with weights that depend on past errors are developed in Pike and Vazquez-Grande (2020). We refer to Gaba, Tsetlin, and Winkler (2017) for a more exhaustive list of combinations as well as for an empirical study of their respective efficiency. Finally, for a theoretical discussion on model averaging versus model selection, we point to Peng and Yang (2021b). Overall, findings are mixed and the heuristic simple average is, as usual, hard to beat (see, e.g., Genre et al. (2013)).
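As a simple illustration of the first of these schemes, a trimmed opinion pool can be computed in one line. The sketch below assumes a hypothetical matrix of predictions (one row per instance, one column per model), which is not defined in this chapter.
# 'predictions' is a hypothetical (I x M) matrix of model forecasts (one column per model)
trimmed_pool <- apply(predictions, 1, mean, trim = 0.2) # Row-wise average after dropping the 20% most extreme forecasts on each side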
11.1.2 Example
In order to build an ensemble, we must gather the predictions and the corresponding errors into the \(\textbf{E}\) matrix. We will work with 5 models that were trained in the previous chapters: penalized regression, simple tree, random forest, xgboost and feed-forward neural network. The training errors have zero means, hence \(\textbf{E}'\textbf{E}\) is, up to a scaling factor, the covariance matrix of errors between models.
err_pen_train <- predict(fit_pen_pred, x_penalized_train) - training_sample$R1M_Usd # Reg.
err_tree_train <- predict(fit_tree, training_sample) - training_sample$R1M_Usd # Tree
err_RF_train <- predict(fit_RF, training_sample) - training_sample$R1M_Usd # RF
err_XGB_train <- predict(fit_xgb, train_matrix_xgb) - training_sample$R1M_Usd # XGBoost
err_NN_train <- predict(model, NN_train_features) - training_sample$R1M_Usd # NN
E <- cbind(err_pen_train, err_tree_train, err_RF_train, err_XGB_train, err_NN_train) # E matrix
colnames(E) <- c("Pen_reg", "Tree", "RF", "XGB", "NN") # Names
cor(E) # Cor. mat.
## Pen_reg Tree RF XGB NN
## Pen_reg 1.0000000 0.9984394 0.9968224 0.9310186 0.9962147
## Tree 0.9984394 1.0000000 0.9974647 0.9296081 0.9969773
## RF 0.9968224 0.9974647 1.0000000 0.9281725 0.9970392
## XGB 0.9310186 0.9296081 0.9281725 1.0000000 0.9277433
## NN 0.9962147 0.9969773 0.9970392 0.9277433 1.0000000
As is shown by the correlation matrix, the models fail to generate heterogeneity in their predictions. The minimum correlations (still above 92%!) are those between the boosted tree and the other models. Below, we compare the training accuracy of the models by computing the average absolute value of errors.
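The figures below can be obtained with a column-wise average of absolute errors (a minimal sketch; the corresponding chunk is not shown in the text).
apply(abs(E), 2, mean) # Average absolute training error, model by model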
## Pen_reg Tree RF XGB NN
## 0.08345916 0.08362133 0.08327121 0.08986993 0.08368222
The best performing ML engine is the random forest. The boosted tree model is the worst, by far. Below, we compute the optimal (unconstrained) weights for the combination of models.
w_ensemble <- solve(t(E) %*% E) %*% rep(1,5) # Optimal weights
w_ensemble <- w_ensemble / sum(w_ensemble)
w_ensemble
## [,1]
## Pen_reg -0.642247634
## Tree -0.100397807
## RF 1.242080559
## XGB -0.002966771
## NN 0.503531653
Because of the high correlations, the optimal weights are not balanced and diversified: they load heavily on the random forest learner (the best in-sample model) and ‘short’ a few models in order to compensate. As one could expect, the model with the largest negative weight (Pen_reg) has a very high correlation with the random forest algorithm (0.997).
Note that the weights are of course computed with training errors. The optimal combination is then tested on the testing sample. Below, we compute out-of-sample (testing) errors and their average absolute value.
err_pen_test <- predict(fit_pen_pred, x_penalized_test) - testing_sample$R1M_Usd # Reg.
err_tree_test <- predict(fit_tree, testing_sample) - testing_sample$R1M_Usd # Tree
err_RF_test <- predict(fit_RF, testing_sample) - testing_sample$R1M_Usd # RF
err_XGB_test <- predict(fit_xgb, xgb_test) - testing_sample$R1M_Usd # XGBoost
err_NN_test <- predict(model, NN_test_features) - testing_sample$R1M_Usd # NN
E_test <- cbind(err_pen_test, err_tree_test, err_RF_test, err_XGB_test, err_NN_test) # E matrix
colnames(E_test) <- c("Pen_reg", "Tree", "RF", "XGB", "NN")
apply(abs(E_test), 2, mean) # Mean absolute error or columns of E
## Pen_reg Tree RF XGB NN
## 0.06618181 0.06653527 0.06710349 0.07170801 0.06772966
The boosted tree model is still the worst performing algorithm, while the simple models (penalized regression and simple tree) are the ones that fare best. The most naive combination is the simple (equally weighted) average of the models' predictions, computed below.
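Since the errors are linear in the predictions, the error of the equally weighted combination is simply the row-wise average of E_test. Below is a minimal sketch of the computation behind the reported number (the variable name err_EW_test is illustrative).
err_EW_test <- apply(E_test, 1, mean) # Errors of the equally weighted combination
mean(abs(err_EW_test))                # Corresponding mean absolute error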
## [1] 0.06700998
Because the errors are very correlated, the equally weighted combination of forecasts yields an average error which lies ‘in the middle’ of individual errors. The diversification benefits are too small. Let us now test the ‘optimal’ combination \(\textbf{w}^*=\frac{(\textbf{E}'\textbf{E})^{-1}\textbf{1}_M}{\textbf{1}_M'(\textbf{E}'\textbf{E})^{-1}\textbf{1}_M}\).
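A sketch of the corresponding computation, using the weights estimated on the training errors (the name err_opt_test is illustrative):
err_opt_test <- E_test %*% w_ensemble # Errors of the 'optimal' combination on the test sample
mean(abs(err_opt_test))               # Corresponding mean absolute error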
## [1] 0.06862346
Again, the result is disappointing because of the lack of diversification across models. The correlations between errors are high not only on the training sample, but also on the testing sample, as shown below.
cor(E_test)
## Pen_reg Tree RF XGB NN
## Pen_reg 1.0000000 0.9987069 0.9968882 0.9537914 0.9956064
## Tree 0.9987069 1.0000000 0.9978366 0.9583641 0.9968828
## RF 0.9968882 0.9978366 1.0000000 0.9606570 0.9973225
## XGB 0.9537914 0.9583641 0.9606570 1.0000000 0.9616208
## NN 0.9956064 0.9968828 0.9973225 0.9616208 1.0000000
The leverage from the optimal solution only exacerbates the problem and underperforms the heuristic uniform combination. We end this section with the constrained formulation of Breiman (1996), implemented with the quadprog package. If we write \(\mathbf{\Sigma}\) for the covariance matrix of errors, we seek \[\mathbf{w}^*=\underset{\mathbf{w}}{\text{argmin}} \ \mathbf{w}'\mathbf{\Sigma}\mathbf{w}, \quad \text{s.t.} \quad \mathbf{1}'\mathbf{w}=1, \quad w_i\ge 0.\] The constraints will be handled as:
\[\mathbf{A} \mathbf{w}= \begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \mathbf{w} \hspace{9mm} \text{ compared to} \hspace{9mm} \mathbf{b}=\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \]
where the first line is an equality constraint (weights sum to one) and the remaining lines are inequalities (weights are all positive). The matrices above are displayed for three models for the sake of brevity; the code below handles the five models of our example.
library(quadprog) # Package for quadratic programming
Sigma <- t(E) %*% E # Unscaled covariance matrix
nb_mods <- nrow(Sigma) # Number of models
w_const <- solve.QP(Dmat = Sigma, # D matrix = Sigma
dvec = rep(0, nb_mods), # Zero vector
Amat = rbind(rep(1, nb_mods), diag(nb_mods)) %>% t(), # A matrix for constraints
bvec = c(1,rep(0, nb_mods)), # b vector for constraints
meq = 1 # 1 line of equality constraints, others = inequalities
)
w_const$solution %>% round(3) # Solution
## [1] 0.000 0.000 0.745 0.000 0.255
Compared to the unconstrained solution, the weights are sparse and concentrated in one or two models, usually those with small training sample errors.
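For completeness, the out-of-sample error of this constrained combination can be computed in the same fashion as above (a sketch, using the objects already defined), which allows a direct comparison with the uniform and unconstrained combinations.
mean(abs(E_test %*% w_const$solution)) # Mean absolute test error of the constrained ensemble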
11.2 Stacked ensembles
11.2.1 Two-stage training
Stacked ensembles are a natural generalization of linear ensembles. The idea of generalizing linear ensembles goes back at least to Wolpert (1992b). In the general case, the training is performed in two stages. The first stage is the simple one, whereby the \(M\) models are trained independently, yielding the predictions \(\tilde{y}_{i,m}\) for instance \(i\) and model \(m\). The second step is to consider the output of the trained models as input for a new level of machine learning optimization. The second level predictions are \(\breve{y}_i=h(\tilde{y}_{i,1},\dots,\tilde{y}_{i,M})\), where \(h\) is a new learner (see Figure 11.1). Linear ensembles are of course stacked ensembles in which the second layer is a linear regression.
The same techniques are then applied to minimize the error between the true values \(y_i\) and the predicted ones \(\breve{y}_i\).
11.2.2 Code and results
Below, we create a low-dimensional neural network which takes in the individual predictions of each model and compiles them into a synthetic forecast.
model_stack <- keras_model_sequential()
model_stack %>% # This defines the structure of the network, i.e. how layers are organized
layer_dense(units = 8, activation = 'relu', input_shape = nb_mods) %>%
layer_dense(units = 4, activation = 'tanh') %>%
layer_dense(units = 1)
The configuration is very simple. We do not include any optional arguments and hence the model is likely to overfit. As we seek to predict returns, the loss function is the standard \(L^2\) norm.
model_stack %>% compile( # Model specification
loss = 'mean_squared_error', # Loss function
optimizer = optimizer_rmsprop(), # Optimisation method (weight updating)
metrics = c('mean_absolute_error') # Output metric
)
summary(model_stack) # Model architecture
## Model: "sequential_12"
## __________________________________________________________________________________________
## Layer (type) Output Shape Param #
## ==========================================================================================
## dense_30 (Dense) (None, 8) 48
## dense_29 (Dense) (None, 4) 36
## dense_28 (Dense) (None, 1) 5
## ==========================================================================================
## Total params: 89
## Trainable params: 89
## Non-trainable params: 0
## __________________________________________________________________________________________
y_tilde <- E + matrix(rep(training_sample$R1M_Usd, nb_mods), ncol = nb_mods) # Train preds
y_test <- E_test + matrix(rep(testing_sample$R1M_Usd, nb_mods), ncol = nb_mods) # Testing
fit_NN_stack <- model_stack %>% fit(y_tilde, # Train features
training_sample$R1M_Usd, # Train labels
epochs = 12, batch_size = 512, # Train parameters
validation_data = list(y_test, # Test features
testing_sample$R1M_Usd) # Test labels
)
plot(fit_NN_stack) # Plot, evidently!
The performance of the ensemble is again disappointing: the learning curve in Figure 11.2 is flat, hence the rounds of back-propagation are essentially useless. The training adds little value, which means that the new overarching layer of ML does not enhance the original predictions. Again, this is because all ML engines seem to be capturing the same patterns, and both linear and non-linear combinations fail to improve on the individual forecasts.
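To quantify this, the test error of the stacked model can be compared to those of the individual engines. Below is a minimal sketch, assuming the fitted model_stack and the y_test matrix defined above (the name y_stack_test is illustrative).
y_stack_test <- predict(model_stack, y_test)     # Second-level (stacked) predictions on the test sample
mean(abs(y_stack_test - testing_sample$R1M_Usd)) # Mean absolute error of the stacked ensemble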
11.3 Extensions
11.3.1 Exogenous variables
In a financial context, macro-economic indicators could add value to the process. It is possible that some models perform better under certain conditions and exogenous predictors can help introduce a flavor of economic-driven conditionality in the predictions.
Adding macro-variables to the set of predictors (here, the predictions \(\tilde{y}_{i,m}\)) could seem like one way to achieve this. However, this would amount to mixing predicted values with (possibly scaled) economic indicators, which would not make much sense.
One alternative outside the perimeter of ensembles is to train simple trees on a set of macro-economic indicators. If the labels are the (possibly absolute) errors stemming from the original predictions, then the trees will create clusters of homogeneous error values. This hints at which conditions lead to the best and worst forecasts. We test this idea below, using aggregate data from the Federal Reserve Bank of St. Louis (FRED). A simple downloading function is available in the quantmod package. We download and format the data in the next chunk. CPIAUCSL is the code for the consumer price index and T10Y2YM is the code for the term spread (10-year minus 2-year rates).
library(quantmod) # Package that extracts the data
library(lubridate) # Package for date management
getSymbols("CPIAUCSL", src = "FRED") # FRED is the Fed of St Louis
## [1] "CPIAUCSL"
getSymbols("T10Y2YM", src = "FRED")
## [1] "T10Y2YM"
cpi <- fortify(CPIAUCSL) %>%
mutate (inflation = CPIAUCSL / lag(CPIAUCSL) - 1) # Inflation via Consumer Price Index
ts <- fortify(T10Y2YM) # Term spread (10Y minus 2Y rates)
colnames(ts)[2] <- "termspread" # To make things clear
ens_data <- testing_sample %>% # Creating aggregate dataset
dplyr::select(date) %>%
cbind(err_NN_test) %>%
mutate(Index = make_date(year = lubridate::year(date), # Change date to first day of month
month = lubridate::month(date),
day = 1)) %>%
left_join(cpi) %>% # Add CPI to the dataset
left_join(ts) # Add termspread
head(ens_data) # Show first lines
## date err_NN_test Index CPIAUCSL inflation termspread
## 1 2014-01-31 -0.144565900 2014-01-01 235.288 0.002424175 2.47
## 2 2014-02-28 0.083275435 2014-02-01 235.547 0.001100779 2.38
## 3 2014-03-31 -0.006073933 2014-03-01 236.028 0.002042055 2.32
## 4 2014-04-30 -0.068780374 2014-04-01 236.468 0.001864186 2.29
## 5 2014-05-31 -0.079518815 2014-05-01 236.918 0.001903006 2.17
## 6 2014-06-30 0.049535311 2014-06-01 237.231 0.001321132 2.15
We can now build a tree that tries to explain the accuracy of models as a function of macro-variables.
library(rpart.plot) # Load package for tree plotting
fit_ens <- rpart(abs(err_NN_test) ~ inflation + termspread, # Tree model
data = ens_data,
cp = 0.001) # Complexity param (size of tree)
rpart.plot(fit_ens) # Plot tree
The tree creates clusters which have homogeneous values of absolute errors. One big cluster gathers 92% of predictions (the left one) and is the one with the smallest average. It corresponds to the periods when the term spread is above 0.29 (in percentage points). The other two groups (when the term spread is below 0.29%) are determined according to the level of inflation. If the latter is positive, then the average absolute error is 7%, if not, it is 12%. This last number, the highest of the three clusters, indicates that when the term spread is low and the inflation negative, the model’s predictions are not trustworthy because their errors have a magnitude twice as large as in other periods. Under these circumstances (which seem to be linked to a dire economic environment), it may be wiser not to use ML-based forecasts.
11.3.2 Shrinking inter-model correlations
As shown earlier in this chapter, one major problem with ensembles arises when the first layer of predictions is highly correlated. In this case, ensembles are pretty much useless. There are several tricks that can help reduce this correlation, but the simplest and best is probably to alter training samples. If algorithms do not see the same data, they will probably infer different patterns.
There are several ways to split the training data so as to build different subsets of training samples. The first dichotomy is between random versus deterministic splits. Random splits are easy and require only the target sample size to be fixed. Note that the training samples can overlap as long as the overlap is not too large. Hence, if the original training sample has \(I\) instances and the ensemble requires \(M\) models, then a subsample size of \(\lfloor I/M \rfloor\) may be too conservative, especially if the training sample is not very large. In this case, \(\lfloor I/\sqrt{M} \rfloor\) may be a better alternative (see the sketch below). Random forests are one example of ensembles built on random training samples.
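For instance, random (possibly overlapping) subsets of size \(\lfloor I/\sqrt{M} \rfloor\) can be generated as follows. This is a sketch under simple assumptions (uniform sampling without replacement, independently across models); the names M_ens, I_nb and sub_samples are illustrative.
M_ens <- 4                            # Number of models in the ensemble
I_nb <- nrow(training_sample)         # Number of training instances
sub_size <- floor(I_nb / sqrt(M_ens)) # Subsample size, larger than I/M
sub_samples <- lapply(1:M_ens,        # One random subset of rows per model
                      function(m) training_sample[sample(1:I_nb, sub_size), ])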
One advantage of deterministic splits is that they are easy to reproduce and their outcome does not depend on the random seed. By the nature of factor-based training samples, the second splitting dichotomy is between time and assets. A split along assets is straightforward: each model is trained on a different set of stocks. Note that the choice of sets can be random, or dictated by some factor-based criterion: size, momentum, book-to-market ratio, etc.
A split along dates requires other decisions: is the data split into large blocks (like years), with each model getting a block that may stand for one particular kind of market condition? Or are the training dates divided more regularly? For instance, if there are 12 models in the ensemble, each model can be trained on data from a given month (e.g., January for the first model, February for the second, etc.), as sketched below.
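A sketch of such a calendar-based split (one model per calendar month; dplyr and lubridate are assumed to be loaded, as they are elsewhere in this chapter):
training_sample_jan <- training_sample %>%  # Sample for the first model:
    filter(lubridate::month(date) == 1)     # keep January observations only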
Below, we train four models on four different years to see if this helps reduce the inter-model correlations. This process is a bit lengthy because the samples and models all need to be redefined. We start by creating the four training samples. The third model works on the short list of features, hence its sample has fewer columns.
training_sample_2007 <- training_sample %>%
filter(date > "2006-12-31", date < "2008-01-01")
training_sample_2009 <- training_sample %>%
filter(date > "2008-12-31", date < "2010-01-01")
training_sample_2011 <- training_sample %>%
dplyr::select(c("date",features_short, "R1M_Usd")) %>%
filter(date > "2010-12-31", date < "2012-01-01")
training_sample_2013 <- training_sample %>%
filter(date > "2012-12-31", date < "2014-01-01")
Then, we proceed to the training of the models. The syntaxes are those used in the previous chapters, nothing new here. We start with a penalized regression. In all predictions below, the original testing sample is used for all models.
y_ens_2007 <- training_sample_2007$R1M_Usd # Dep. var.
x_ens_2007 <- training_sample_2007 %>% # Predictors
dplyr::select(features) %>% as.matrix()
fit_ens_2007 <- glmnet(x_ens_2007, y_ens_2007, alpha = 0.1, lambda = 0.1) # Model
err_ens_2007 <- predict(fit_ens_2007, x_penalized_test) - testing_sample$R1M_Usd # Pred. errs
We continue with a random forest.
fit_ens_2009 <- randomForest(formula, # Same formula as for simple trees!
data = training_sample_2009, # Data source: 2009 training sample
sampsize = 4000, # Size of (random) sample for each tree
replace = FALSE, # Is the sampling done with replacement?
nodesize = 100, # Minimum size of terminal cluster
ntree = 40, # Nb of random trees
mtry = 30 # Nb of predictive variables for each tree
)
err_ens_2009 <- predict(fit_ens_2009, testing_sample) - testing_sample$R1M_Usd # Pred. errs
The third model is a boosted tree.
train_features_2011 <- training_sample_2011 %>%
dplyr::select(features_short) %>% as.matrix() # Independent variable
train_label_2011 <- training_sample_2011 %>%
dplyr::select(R1M_Usd) %>% as.matrix() # Dependent variable
train_matrix_2011 <- xgb.DMatrix(data = train_features_2011,
label = train_label_2011) # XGB format!
fit_ens_2011 <- xgb.train(data = train_matrix_2011, # Data source
eta = 0.4, # Learning rate
objective = "reg:linear", # Objective function
max_depth = 4, # Maximum depth of trees
nrounds = 18 # Number of trees used
)
err_ens_2011 <- predict(fit_ens_2011, xgb_test) - testing_sample$R1M_Usd # Prediction errors
Finally, the last model is a simple neural network.
NN_features_2013 <- dplyr::select(training_sample_2013, features) %>%
as.matrix() # Matrix format is important
NN_labels_2013 <- training_sample_2013$R1M_Usd
model_ens_2013 <- keras_model_sequential()
model_ens_2013 %>% # This defines the structure of the network, i.e. how layers are organized
layer_dense(units = 16, activation = 'relu', input_shape = ncol(NN_features_2013)) %>%
layer_dense(units = 8, activation = 'tanh') %>%
layer_dense(units = 1)
model_ens_2013 %>% compile( # Model specification
loss = 'mean_squared_error', # Loss function
optimizer = optimizer_rmsprop(), # Optimisation method (weight updating)
metrics = c('mean_absolute_error') # Output metric
)
model_ens_2013 %>% fit(NN_features_2013, # Training features
NN_labels_2013, # Training labels
epochs = 9, batch_size = 128 # Training parameters
)
err_ens_2013 <- predict(model_ens_2013, NN_test_features) - testing_sample$R1M_Usd
Endowed with the errors of the four models, we can compute their correlation matrix.
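The matrix below is obtained by binding the four error series (a minimal reconstruction; the original chunk is not shown, and the name E_years is illustrative).
E_years <- cbind(err_ens_2007, err_ens_2009, err_ens_2011, err_ens_2013)               # Stack the four error series
colnames(E_years) <- c("err_ens_2007", "err_ens_2009", "err_ens_2011", "err_ens_2013") # Explicit names
cor(E_years)                                                                           # Inter-model correlations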
## err_ens_2007 err_ens_2009 err_ens_2011 err_ens_2013
## err_ens_2007 1.0000000 0.9570006 0.6460091 0.9981763
## err_ens_2009 0.9570006 1.0000000 0.6290043 0.9616214
## err_ens_2011 0.6460091 0.6290043 1.0000000 0.6452839
## err_ens_2013 0.9981763 0.9616214 0.6452839 1.0000000
The results are overall disappointing. Only one model (the boosted tree trained on 2011 data with the short list of features) manages to extract patterns that are somewhat different from the others, with correlations of roughly 63%-65% with the remaining models. The neural network (trained on 2013 data) and the penalized regression (2007) remain highly correlated. One possible explanation could be that the models capture mainly noise and little signal. Working with long-term labels like annual returns could help improve diversification across models.
11.4 Exercise
Build an integrated ensemble on top of 3 neural networks trained entirely with Keras. Each network obtains one third of predictors as input. The three networks yield a classification (yes/no or buy/sell). The overarching network aggregates the three outputs into a final decision. Evaluate its performance on the testing sample. Use the functional API.