The answer to this question is: yes, there is indeed a connection between entropy and complexity. However, there is no single answer. In fact, there are several formulations of entropy, and entropy alone is not sufficient to describe complexity. To be more precise, we can first distinguish between the Shannon entropy (usually denoted H, or h for differential entropy), which quantifies the unevenness of a probability distribution $p(x)$, and the entropy S of statistical mechanics, defined as a function of the density matrix (the von Neumann entropy). The Shannon entropy of a discrete random variable X is defined as:
\begin{equation}
H(X)=-\sum_{x \in X} p(x)\log_{2} p(x)
\end{equation}
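As a concrete illustration, here is a minimal sketch in Python (my own example, not taken from any reference) that evaluates $H(X)$ in bits for the two extreme cases discussed next, a fully localized distribution and a uniform one:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(X) in bits of a discrete distribution."""
    # Terms with p = 0 contribute nothing (p * log p -> 0 as p -> 0).
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Fully localized ("certain") distribution: H = 0 bits.
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0

# Uniform distribution over 4 outcomes: H = log2(4) = 2 bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```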
The base of the logarithm fixes the unit of entropy; with base 2, as above, $H$ is measured in bits. A fully localized probability distribution, with $p = 1$ on a single outcome, gives the minimum $H(X) = 0$: the random variable is effectively constant, i.e., its outcome is already known. Conversely, $H(X)$ is maximal for a uniform distribution. Relatedly, the more unexpected or improbable an event is, the greater its information content (its surprisal, $-\log_2 p$). Take, for example, the prediction "the sun will not rise tomorrow": should it come true, its information content would be extremely high, precisely because we expect the sun to rise every day, so the probability of it not rising is practically zero.

Information and complexity are related in the sense that a complex structure requires a more extensive description than a simple one (which is therefore characterized by less information). To describe a circle we need only one parameter, the radius, but more parameters are needed to characterize more complex geometric structures. This suggests a relationship between the amount of information needed to describe a structure and its complexity. On the other hand, consider an ideal, isolated gas of N molecules: its distribution in space follows no particular order, and the system can be in any of its possible states with the same probability. In this case the distribution of molecules in the gas is uniform and the entropy is maximal. This corresponds to the maximum amount of information required to describe a completely disordered system. Can we then say that maximum disorder corresponds to maximum complexity? The answer is no.
López-Ruiz, Mancini, and Calbet introduced a measure now known as LMC complexity, which defines complexity as the product of the entropy H and a "disequilibrium" D. The latter measures how far the system is from equilibrium, where equilibrium means the situation in which no state is more probable than any other. With disequilibrium included, the LMC complexity vanishes for both fully ordered and fully random systems: D vanishes in the latter case, while H = 0 in the former. As a result, the measure C = H·D reaches its maximum somewhere between order and randomness. This definition agrees with what Theodore Modis wrote. However, it is not unique: in the field of systems engineering, for example, Sinha defines complexity in a completely different way in his PhD thesis, although his definition also makes use of Shannon entropy.
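To make this concrete, here is a minimal sketch (assuming the common form C = H · D, with D taken as the squared distance of the distribution from the uniform one) showing that C vanishes at both extremes and is positive in between:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H in bits (as defined above)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

def disequilibrium(probs):
    """D: squared distance from the uniform (equiprobable) distribution."""
    n = len(probs)
    return sum((p - 1.0 / n) ** 2 for p in probs)

def lmc_complexity(probs):
    """C = H * D: vanishes for fully ordered and fully random systems."""
    return shannon_entropy(probs) * disequilibrium(probs)

print(lmc_complexity([1.0, 0.0, 0.0, 0.0]))  # 0.0  (fully ordered: H = 0)
print(lmc_complexity([0.25] * 4))            # 0.0  (fully random: D = 0)
print(lmc_complexity([0.6, 0.2, 0.1, 0.1]))  # > 0  (in between)
```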
Complexity can also be defined in the algorithmic sense of Kolmogorov. In that case complexity, although not computable, corresponds to the length of the shortest description one can give of a system (a rough, compression-based illustration is sketched below). In this brief explanation I have taken the liberty of sacrificing scientific rigor in the hope of giving a more intuitive account. The conclusion is that there is no single definition of complexity, and the various formulations do not allow us to establish a unique relationship between complexity and entropy.
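Since Kolmogorov complexity is not computable, a common though admittedly crude proxy is the length of a losslessly compressed representation. The sketch below (my own illustration, using Python's standard zlib module) simply compares a highly regular string with a random one of the same length:

```python
import random
import zlib

def compressed_length(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude, purely
    illustrative stand-in for the length of the shortest description."""
    return len(zlib.compress(s.encode("utf-8")))

random.seed(0)
ordered = "ab" * 5000                                            # highly regular string
disordered = "".join(random.choice("ab") for _ in range(10000))  # random string, same length

# The regular string compresses to far fewer bytes than the random one,
# reflecting the fact that it admits a much shorter description.
print(compressed_length(ordered), compressed_length(disordered))
```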