The RHS of this equation is some sort of rotation of a vector whose components are themselves the group generators (Lie algebra basis elements). But the LHS is a conjugation? I'm uncomfortable with the idea that these are equivalent!
Recall that a Lie algebra $\mathfrak g$ is in particular a vector space, which can be equipped with some basis $\{T_k\}$. If $[\mathrm{Ad}(g)](T_k)$ is indeed another element of $\mathfrak{g}$, then we must be able to expand it as $c^j_{\ \ k} T_j$ for some coefficients $c^j_{\ \ k}$. In that sense, we could always write
$$\big[\mathrm{Ad}(g)\big](T_k)= \big[\mathrm{Ad}(g)\big]^j_{\ \ k}T_j$$
where $\big[\mathrm{Ad}(g)\big]^j_{\ \ k}$ is the $(j,k)$-component of the linear map $\mathrm{Ad}(g)$. At this point it remains only to compute those components in some chosen basis.
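(As a sanity check that this notation is sensible: since $\mathrm{Ad}(gh) = \mathrm{Ad}(g)\circ\mathrm{Ad}(h)$, these components multiply like ordinary matrices,
$$\big[\mathrm{Ad}(gh)\big]^j_{\ \ k} = \big[\mathrm{Ad}(g)\big]^j_{\ \ m}\big[\mathrm{Ad}(h)\big]^m_{\ \ k}$$
so $g\mapsto \big[\mathrm{Ad}(g)\big]^j_{\ \ k}$ furnishes an honest matrix representation of the group on $\mathfrak g$.)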
In the specific case of $\mathrm{SO}(3)$, there's a nice isomorphism between the set of vectors in $\mathbb R^3$ and the antisymmetric $3\times 3$ matrices given by
$$A = \pmatrix{A_1\\A_2\\A_3} \leftrightarrow \pmatrix{0 &-A_3& A_2 \\ A_3&0&-A_1\\-A_2&A_1&0} \equiv A_\times$$
where the notation is chosen because for any vector $V\in \mathbb R^3$, $A_\times(V) = A \times V$. This is very useful here because if $R\in \mathrm{SO}(3)$, then for any vectors $V,W\in \mathbb R^3$
$$R(V \times W) = R(V) \times R(W)$$
This is ordinarily expressed as "the cross product of two vectors behaves like a vector under proper rotations." But this implies that
$$(RA)_\times V = (RA) \times V = (RA)\times (RR^{\mathrm T} V) = R\big(A \times (R^{\mathrm T} V)\big) = (RA_\times R^{\mathrm T}) V$$
$$\iff (RA)_\times = RA_\times R^{\mathrm T}$$
However, the right-hand side is precisely how $A_\times$ (understood as an element of $\mathfrak{so}(3)$) transforms under the adjoint action of $R$. As a result, we have that
$$\big[\mathrm{Ad}(R)\big](A_\times) = (RA)_\times$$
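For completeness, both ingredients used above can be checked quickly in index notation. The correspondence $A\leftrightarrow A_\times$ reads $(A_\times)_{ij} = -\epsilon_{ijk}A_k$, so
$$(A_\times V)_i = -\epsilon_{ijk}A_k V_j = \epsilon_{ijk}A_j V_k = (A\times V)_i$$
while the rotation identity follows from $\epsilon_{qjk}R_{qp}R_{jm}R_{kn} = \det(R)\,\epsilon_{pmn}$ together with $RR^{\mathrm T} = \mathbb 1$ and $\det(R) = 1$ for proper rotations:
$$(RV \times RW)_i = \epsilon_{ijk}R_{jm}R_{kn}V_m W_n = \det(R)\, R_{ip}\,\epsilon_{pmn}V_m W_n = \big(R(V\times W)\big)_i$$
(Note that an improper rotation would pick up a sign here, which is why the statement is restricted to proper rotations.)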
If we choose the standard basis $(L_k)_{\ell m} = -\epsilon_{k\ell m}$ for $\mathfrak{so}(3)$, then the vector $\tilde L_k$ corresponding to $L_k$ has components $(\tilde L_k)^i = \delta^i_{\ \ k}$, and so
$$\big[\mathrm{Ad}(R)\big](L_k) := R L_k R^{\mathrm T} = (R\tilde L_k)_\times \overset{\star}{=} R^\mu_{\ \ k} L_\mu$$
where the $\star$ denotes the omission of a few fairly straightforward lines of algebra.
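(The omitted algebra is brief: since $(\tilde L_k)^i = \delta^i_{\ \ k}$, we have $(R\tilde L_k)^\mu = R^\mu_{\ \ i}\,\delta^i_{\ \ k} = R^\mu_{\ \ k}$, and since $A\mapsto A_\times$ is linear,
$$(R\tilde L_k)_\times = \big(R^\mu_{\ \ k}\,\tilde L_\mu\big)_\times = R^\mu_{\ \ k}\,(\tilde L_\mu)_\times = R^\mu_{\ \ k}\, L_\mu$$
which is precisely the $\star$ step.)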
How general is this formula?
Not particularly general. $\mathrm{SO}(3)$ is a special case in which the components of $\mathrm{Ad}(R)$ work out to be numerically equal to the components of $R$ itself; this is not typical.
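For contrast, consider $\mathrm{SU}(3)$: its Lie algebra is $8$-dimensional, so $\mathrm{Ad}(g)$ is an $8\times 8$ matrix (this is the adjoint representation), which cannot possibly equal the $3\times 3$ matrix $g$ component-by-component. The coincidence for $\mathrm{SO}(3)$ reflects the fact that its adjoint representation is equivalent to its fundamental (vector) representation, which is exactly what the calculation above demonstrates.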
Perhaps more intuitive than the formal tricks employed above: the fact that the $L_k$'s transform in this way under the adjoint action of $\mathrm{SO}(3)$ is equivalent to the statement that $\vec L \equiv (L_1,L_2,L_3)$ is a vector operator, with the property that under rotations induced by the unitary operator $U(R)$, we should have
$$\langle L_j\rangle \equiv \langle \psi, L_j \psi \rangle \mapsto \langle U(R)^\dagger\psi, L_j U(R)^\dagger\psi\rangle = \langle U(R) L_j U(R)^\dagger\rangle \overset{!}{=} R^k_{\ \ j} \langle L_k\rangle$$
Similar expressions exist for tensor operators of higher rank.
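For example, in the same convention, a rank-2 tensor operator $T_{jk}$ satisfies
$$\langle U(R)\, T_{jk}\, U(R)^\dagger\rangle \overset{!}{=} R^\ell_{\ \ j}\, R^m_{\ \ k}\, \langle T_{\ell m}\rangle$$
with one factor of $R$ per index.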
As a final note, I spoke in this answer about the action of $\mathrm{SO}(3)$ on $\mathfrak{so}(3)$, not $\mathfrak{su}(2)$; however, it is not difficult to show that these two Lie algebras are isomorphic, via the linear isomorphism $L_i \leftrightarrow -\frac{i}{2}\sigma_i$ (the factor of $-i$ is needed because $\mathfrak{su}(2)$ consists of anti-Hermitian matrices, and it makes the brackets match). We ordinarily do not distinguish between them for this reason; $\mathrm{Ad}(R)$ can be understood as a linear map $\mathfrak{su}(2)\rightarrow\mathfrak{su}(2)$ with the same components as above, i.e.
$$\big[\mathrm{Ad}(R)\big](\sigma_k) = R^j_{\ \ k} \sigma_j$$
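One can check directly that the identification above respects the brackets: using $[\sigma_i,\sigma_j] = 2i\epsilon_{ijk}\sigma_k$,
$$\left[-\tfrac{i}{2}\sigma_i,\ -\tfrac{i}{2}\sigma_j\right] = -\tfrac{1}{4}\big(2i\,\epsilon_{ijk}\sigma_k\big) = \epsilon_{ijk}\left(-\tfrac{i}{2}\sigma_k\right)$$
in exact parallel with $[L_i,L_j] = \epsilon_{ijk}L_k$.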
Along similar lines, because $\mathrm{SO}(3)$ is compact and connected, we can write any $R$ as $e^A$ for some $A\in \mathfrak{so}(3)$. Mapping this $A = A^\mu L_\mu \mapsto -\frac{i}{2}A^\mu \sigma_\mu \in \mathfrak{su}(2)$ and writing $\tilde A \equiv \frac{1}{2}A^\mu \sigma_\mu$, we exponentiate to obtain the unitary matrices $U(R) = e^{i\tilde A}$ (with the convention above, in which $U(R)^\dagger = e^{-i\tilde A}$ is the operator implementing $R$ on states).
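As a concrete illustration, take a rotation by angle $\theta$ about the $z$-axis, i.e. $A = \theta L_3$. Then
$$R = e^{\theta L_3} = \pmatrix{\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1}, \qquad U(R) = e^{i\theta\sigma_3/2} = \pmatrix{e^{i\theta/2} & 0\\ 0 & e^{-i\theta/2}}$$
Setting $\theta = 2\pi$ gives $R = \mathbb 1$ but $U(R) = -\mathbb 1$: the familiar statement that $\mathrm{SU}(2)$ double-covers $\mathrm{SO}(3)$, so that $U(R)$ is only defined up to a sign.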