It appears to be a bug.
In context: filtfilt() calls filter() to do some of the work.
When a and b are scalars, filter() is going to be called with a fourth argument of initial conditions, xt, where xt has one row and the same number of columns as x. xt holds the initial conditions of the filter, and its values are calculated from the first two rows of the input x.
In this case of scalar a and scalar b, EMPTY is calculated as zeros(0,1) * xt. With xt being size 1 x ncol, the result of that matrix multiplication is zeros(0, ncol).
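The shape arithmetic can be checked directly in MATLAB; ncol = 3 here is just an arbitrary example width:

```matlab
xt = ones(1, 3);           % stand-in for the 1 x ncol initial-condition row
EMPTY = zeros(0, 1) * xt;  % (0 x 1) * (1 x 3) matrix multiply
size(EMPTY)                % returns [0 3] -- an empty 0 x ncol matrix
```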
- when you call filter() with scalar a, scalar b, array X, and EMPTY, then if EMPTY is 0 x ncol, filter will succeed for any 2D array X that is not a row vector (and has ncol columns)
- when you call filter() with scalar a, scalar b, array X, and EMPTY, then if EMPTY is 0 x 1, filter will succeed for any 2D array X, including a row vector X
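If this diagnosis is right, the asymmetry can be reproduced with filter() directly, outside of filtfilt() (the scalar coefficients and sizes below are arbitrary examples):

```matlab
X = rand(5, 3);                     % 2D input, not a row vector
y1 = filter(2, 3, X, zeros(0, 3));  % 0 x ncol initial conditions: succeeds

Xrow = rand(1, 3);                      % row-vector input
y2 = filter(2, 3, Xrow, zeros(0, 1));   % 0 x 1 initial conditions: succeeds
% filter(2, 3, Xrow, zeros(0, 3)) would error -- even though the error
% message asks for exactly this 0 x ncol size
```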
If you read this carefully, you will see that if X were not a row vector, then 0 x ncol initial conditions would work. The error message is telling you that your initial conditions must be that size, 0 x ncol. But the error message is wrong for the case where X is a row vector, in which case the initial conditions must be 0 x 1, not 0 x ncol.
If I were to guess, I would guess that row-vector X is being automatically transposed to a column vector, size ncol x 1, and filter() wants the empty initial conditions to match that number of columns (1).
If I am correct, then the problem would be fixed if filtfilt() were to pass in the dimension to filter along.
In the meantime, the workaround is to filtfilt() your signal one column at a time.
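A minimal sketch of that workaround, assuming your signal is a matrix sig with one channel per column (b, a, and sig here are placeholders for your own values):

```matlab
b = 2; a = 3;            % hypothetical scalar filter coefficients
sig = rand(100, 4);      % hypothetical multi-channel signal, channels in columns
out = zeros(size(sig));
for k = 1:size(sig, 2)
    % each column is passed as a column vector, so the
    % row-vector case that triggers the bug never arises
    out(:, k) = filtfilt(b, a, sig(:, k));
end
```

If your signal itself is a row vector, the same idea applies: transpose it to a column vector before calling filtfilt(), then transpose the result back.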