This is all about what we mean by a 'state' of a system, and over the years different experts
have had different views on it. I expect there will be different opinions in the replies here.
There is a good related question here: *Does entropy depend on the observer?*
It includes a link to a helpful paper by E. T. Jaynes:
https://bayes.wustl.edu/etj/articles/gibbs.vs.boltzmann.pdf
One way to deal with your question is to say that the claim is not that $S$ has to increase,
only that it can't decrease, and your observation is consistent with that. I feel that this
reply would be inadequate, however. It seems to me that $S$ does increase as a system comes
to thermal equilibrium. But there you have the hint of the answer to your question: we only
want to claim that $S$ increases if the system was not already in thermal equilibrium to
begin with. But how do we know that it was not in thermal equilibrium at the start? We can only know
that if we know something in addition to the energy and the other fixed external constraints.
So we are claiming to know that, initially, the system state lies somewhere within a set
of microstates which is a subset of the states counted by $\Omega(E)$. Thus the
initial entropy is $S_0 = k_B \ln ( \Omega_0(E) )$ where $\Omega_0(E) < \Omega(E)$, and hence $S_0 < S$.
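To make the subset idea concrete, here is a minimal sketch (a toy model I am inventing for illustration, with $k_B = 1$): take $N$ two-level spins, where fixing the number of excitations $m$ fixes the energy $E$. Knowing, say, that all the excitations sit in the left half of the system picks out a subset of the microstates counted by $\Omega(E)$, so the initial entropy is smaller.

```python
from math import comb, log

N, m = 20, 5               # toy model: 20 two-level spins, 5 excitations fixes E
omega = comb(N, m)         # all microstates with energy E
omega_0 = comb(N // 2, m)  # extra knowledge: excitations confined to the left half

S = log(omega)             # equilibrium entropy, in units of k_B
S0 = log(omega_0)          # initial entropy of the constrained state

print(omega_0 < omega, S0 < S)  # True True
```

The numbers are arbitrary; the point is only that any extra knowledge beyond $E$ shrinks the count, so $S_0 < S$, and relaxation to equilibrium then raises the entropy back to $k_B \ln \Omega(E)$.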
The above largely settles the matter. But I can't help feeling that a little more is needed.
The difficulty is that we want to make physics as far as possible about things which
can be agreed, and which are not dependent on subjective things such as the prior
knowledge that a given observer may have. That means we want to define entropy as far
as possible as a property of a system, not a statement about knowledge of an observer.
There is disagreement about how far this can be achieved, and, for myself, I have not
yet settled the matter completely in my own mind.
For anyone else who wishes to answer, I would like to encourage them to keep in mind
phenomena such as spin echo, in which,
owing to correlations in the initial state, the evolution of an isolated system
can appear to decrease entropy (there are many examples).
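As a toy illustration of the spin-echo point (a classical sketch with made-up parameters, not the real quantum dynamics): phases spread out under random precession frequencies, which looks like an irreversible loss of order, yet reversing the phases at time $T$ makes the transverse magnetization refocus completely at $2T$, because the correlations were present in the state all along.

```python
import cmath
import random

random.seed(1)
N = 2000
# each spin precesses at a random frequency (standing in for field inhomogeneity)
omegas = [random.gauss(0.0, 1.0) for _ in range(N)]

def magnetization(phases):
    # magnitude of the average transverse spin direction
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

T = 5.0
# evolve for time T: phases spread out, magnetization decays toward zero
phases = [w * T for w in omegas]
m_dephased = magnetization(phases)

# "pi pulse" at time T: negate every phase, then evolve for another T
phases = [-p + w * T for p, w in zip(phases, omegas)]
m_echo = magnetization(phases)

print(m_dephased, m_echo)  # small value, then exactly 1.0
```

If entropy were defined from the apparent disorder of the phases, it would seem to decrease between $T$ and $2T$; the point of the consensus definition is to avoid counting this as a violation of the 2nd law.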
It's ultimately up to us how we choose to define
technical terms in physics, but the consensus is that the 2nd law of thermodynamics is
so useful that it would be a mistake to define entropy in such a way that spin echo
is an example of a refutation of the 2nd law. In the article mentioned above, Jaynes offers the
following as a good statement of the 2nd law:
'The experimental entropy cannot decrease in a reproducible [adiathermal] process
that starts from a state of complete thermal equilibrium.'
I put [adiathermal] in brackets because he used the word adiabatic, but I think he
meant simply 'no heat transfer', not 'constant entropy' (the word's meaning has
shifted over the years).