Noise reduction algorithms in current hearing devices lack information about the sound source a user attends to when multiple sources are present. To resolve this issue, they can be complemented with auditory attention decoding (AAD) algorithms, which decode auditory attention using electroencephalography (EEG) sensors. State-of-the-art AAD algorithms employ a stimulus reconstruction approach, in which the envelope of the attended source is reconstructed from the EEG and correlated with the envelopes of the individual sources. This approach, however, performs poorly on short signal segments, while longer segments yield impractically long detection delays when the user switches attention. Therefore, we propose decoding the directional focus of attention using filterbank common spatial pattern (FB-CSP) filters as an alternative AAD paradigm, one that does not require access to the clean source envelopes. On short signal segments, this approach outperforms both the stimulus reconstruction approach and a convolutional neural network approach. We achieve a high accuracy (80% for 1 s windows and 70% for quasi-instantaneous decisions), which is sufficient to reach minimal expected switch durations below 4 s. We also demonstrate that the decoder can adapt to unlabeled data from an unseen subject and works with only a subset of EEG channels located around the ear, emulating a wearable EEG setup. As the proposed FB-CSP method provides accurate, high-speed decoding of the directional focus of auditory attention, it is a major step towards practical neuro-steered hearing devices.
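To make the FB-CSP paradigm described above concrete, the sketch below illustrates a generic FB-CSP pipeline in Python: EEG windows are band-pass filtered into a filterbank, CSP spatial filters are computed per band from trials labeled by attended direction (left/right), log-variance features are extracted, and a linear classifier predicts the directional focus. This is a minimal sketch under illustrative assumptions; the sampling rate, band edges, channel count, window length, and synthetic data below are hypothetical stand-ins, not the paper's exact configuration.

```python
# Minimal FB-CSP sketch for left/right attention decoding.
# All shapes and parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X1, X2, n_pairs=3):
    """Compute CSP spatial filters from two classes of EEG trials.

    X1, X2: arrays of shape (n_trials, n_channels, n_samples).
    Returns W of shape (n_channels, 2 * n_pairs).
    """
    S1 = np.mean([np.cov(trial) for trial in X1], axis=0)
    S2 = np.mean([np.cov(trial) for trial in X2], axis=0)
    # Generalized eigenvalue problem: S1 w = lambda (S1 + S2) w.
    eigvals, eigvecs = eigh(S1, S1 + S2)
    order = np.argsort(eigvals)  # ascending
    # Keep the filters that maximize variance for one class
    # while minimizing it for the other (both extremes).
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks]

def log_var_features(X, W):
    """Project trials onto CSP filters and take normalized log-variance."""
    proj = np.einsum('ck,ncs->nks', W, X)  # (n_trials, n_filters, n_samples)
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

def bandpass(X, lo, hi, fs, order=4):
    """Zero-phase band-pass filter along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, X, axis=-1)

fs = 128                                      # assumed EEG sampling rate
bands = [(1, 4), (4, 8), (8, 12), (12, 30)]   # assumed filterbank bands

# Synthetic stand-in data: 100 windows per class, 24 channels, 1 s at fs Hz.
rng = np.random.default_rng(0)
X_left = rng.standard_normal((100, 24, fs))
X_right = rng.standard_normal((100, 24, fs))

# Fit CSP per band and stack the log-variance features across bands.
feats = []
for lo, hi in bands:
    Xl, Xr = bandpass(X_left, lo, hi, fs), bandpass(X_right, lo, hi, fs)
    W = csp_filters(Xl, Xr)
    feats.append(np.vstack([log_var_features(Xl, W),
                            log_var_features(Xr, W)]))

X_feat = np.hstack(feats)  # (n_windows_total, n_bands * 2 * n_pairs)
y = np.r_[np.zeros(len(X_left)), np.ones(len(X_right))]

# A linear classifier on the stacked features decides left vs. right.
# (Fitting and evaluating on the same data here is for illustration only;
# a real evaluation would use held-out windows.)
clf = LinearDiscriminantAnalysis().fit(X_feat, y)
```

Because the decision is based on the spatial distribution of band-power in the EEG rather than on reconstructing a stimulus envelope, such a pipeline needs no access to the clean source signals and can operate on very short windows, which is the property the abstract highlights.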