There is something I don't quite get about relativistic velocities, which I am hoping to clarify with this question.
Suppose you have an emitter and a receiver located far away from one another, and far away from any gravitational field. The emitter and receiver do not need to be synchronized in any way. They are both in a reference frame in which they appear to be at rest relative to one another. For clarity, all "speeds," "velocities," etc. are with respect to this reference frame.
The emitter then launches two electrons simultaneously toward the receiver. Each electron is given some precisely determined amount of kinetic energy, such that we can compute either its Newtonian or its relativistic speed just from that initial kinetic energy. The two kinetic energies are different, but both are high enough that relativistic effects become important. The receiver then records the time differential between the arrivals of the two electrons.
The basic idea is this: the Newtonian and relativistic equations for kinetic energy give different predictions of what this time differential is. In the relativistic framework, the same amount of kinetic energy leads to a lower resulting speed than in the Newtonian one, and this changes the time differential. The amount by which these two predictions differ depends only on the value of $c$, such that as $c \to \infty$, the relativistic prediction agrees with the Newtonian one.
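To make the comparison concrete, these are the speed-versus-kinetic-energy relations I have in mind (the standard formulas, with $m$ the electron mass and $T$ the kinetic energy):

$$v_{\text{Newt}} = \sqrt{\frac{2T}{m}}, \qquad v_{\text{rel}} = c\,\sqrt{1 - \frac{1}{\bigl(1 + T/(mc^2)\bigr)^{2}}}.$$

Expanding the relativistic expression for $T \ll mc^2$ recovers the Newtonian one, which is the sense in which the two predictions agree as $c \to \infty$.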
To put some numbers to this: suppose we send the first electron with 1 GeV of kinetic energy, and the second with 10 MeV. Then, with the receiver one light-second away, Newtonian mechanics predicts a time differential of about 144 ms, while SR predicts a time differential of only about 1 ms. This is because in Newtonian mechanics, the 1 GeV electron is traveling 10x as fast as the 10 MeV electron, and thus gets there much sooner. But in SR, both are traveling at approximately the same speed, asymptotically very close to $c$, so the time between the two arrivals is very small. Basically, we are measuring the extent to which increasing the kinetic energy of an electron beyond a certain point fails to increase its speed by an appreciable amount. If we were to imagine that $c$ were higher, the measured time between the two arrivals would increase and tend toward the Newtonian value as $c \to \infty$. (I used these calculators for this: relativistic, Newtonian).
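In case anyone wants to reproduce those figures, here is a rough sanity-check sketch using the standard formulas and the 0.511 MeV electron rest energy; the one-light-second baseline is the assumption that matches the numbers above:

```python
from math import sqrt

C = 299_792_458.0   # speed of light, m/s
MC2 = 0.511         # electron rest energy, MeV
D = C * 1.0         # assumed baseline: one light-second, in metres

def v_newton(T_mev):
    """Newtonian speed from T = (1/2) m v^2, i.e. v = sqrt(2T/m)."""
    return C * sqrt(2.0 * T_mev / MC2)

def v_rel(T_mev):
    """Relativistic speed from T = (gamma - 1) m c^2."""
    gamma = 1.0 + T_mev / MC2
    return C * sqrt(1.0 - 1.0 / gamma**2)

for label, v in (("Newtonian", v_newton), ("relativistic", v_rel)):
    dt = D / v(10.0) - D / v(1000.0)   # slower (10 MeV) minus faster (1 GeV)
    print(f"{label:>12s} time differential: {dt * 1e3:.2f} ms")
```

This prints roughly 144 ms for the Newtonian case and roughly 1.2 ms for the relativistic one.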
The problem: it would seem we can even use this to provide an estimate of $c$. Given the initial kinetic energies, as well as how far the receiver is from the emitter, we can see which value of $c$ leads to the measured time differential. Basically, we are looking at the relationship between change in kinetic energy and change in apparent velocity, and measuring "how non-Newtonian" it is. But here is the catch: we can rotate the setup and try again, and see whether things are isotropic or not. If we measure something different in different directions, we can see which value of $c$ gives the measured result in each direction. Now we've measured the one-way speed of light in each direction, which is supposed to be impossible.
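To illustrate what I mean by "seeing which value of $c$ leads to the measured time differential", here is a rough sketch of that inversion; the baseline, the "measured" differential, and the bisection bracket are just the illustrative numbers from above, not anything from a real experiment:

```python
from math import sqrt

M_E = 9.1093837e-31   # electron mass, kg
MEV = 1.602176634e-13 # joules per MeV
D = 2.99792458e8      # assumed baseline, m (one light-second)

def dt_predicted(c, T_slow=10.0, T_fast=1000.0):
    """Relativistic arrival-time differential (s) for a trial value of c."""
    def v(T_mev):
        gamma = 1.0 + T_mev * MEV / (M_E * c**2)
        return c * sqrt(1.0 - 1.0 / gamma**2)
    return D / v(T_slow) - D / v(T_fast)

def fit_c(dt_measured, lo=1e8, hi=1e10, iters=100):
    """Bisection: dt_predicted increases with c over this bracket."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dt_predicted(mid) < dt_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"inferred c ~ {fit_c(1.184e-3):.3e} m/s")
```

Feeding in the ~1.2 ms differential from above recovers something close to $3 \times 10^8$ m/s, which is the sense in which this setup seems to "measure" $c$ along a single direction.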
My question: What is going on? Is there something wrong with this experimental setup? How does this relate to the principle that there is no way to measure the "one-way speed of light," or differentiate between Lorentz's ether theory and SR?
(FWIW: I have searched a bunch of these "one-way speed of light" questions and none of them seem to address this, and frankly this seems much more subtle than most of those anyway.)
EDIT: I think this is related to the "Bertozzi experiments": https://en.wikipedia.org/wiki/Tests_of_relativistic_energy_and_momentum#Bertozzi_experiment
So it does look like the suggested experimental setup is feasible, and something like it has even been done to calibrate particle accelerators (e.g. to determine how much energy is needed to accelerate an electron to a given speed). The "Bertozzi experiments" basically measure this relationship between kinetic energy and velocity. I guess I'm just wondering what is wrong with the naive idea to do this kind of experiment, then rotate, and do it again to measure anisotropy.

Basically, what we are really measuring is the relationship between change in kinetic energy and change in apparent velocity for massive particles, and checking whether this relationship is the same in all directions. It doesn't seem like any clocks need to be synchronized, or any round trip needs to happen at all, since we are only very indirectly measuring the speed of light via this particular relativistic effect. We don't even need to assume anything about the exact model that lets us infer $c$ from the time differential, since it may not even be philosophically valid to use the relativistic KE equation with an anisotropic $c$. All we need to assume is that some relativistic effect exists that becomes relevant at higher kinetic energies and would affect the time differential in some unknown way depending on the value of $c$, and then check whether we measure the same time differential after rotating to different directions.
EDIT 2: this answer gives an interesting summary of Reichenbach's research on this. Basically, there is (what the author terms) a "conspiratorial anisotropy" effect, where all kinds of physical quantities distort in very strange ways as a result of changing the one-way speed of light, such that the experimental results always come out the same. That makes sense in principle; I'm just curious about the details of "how the magic happens" in this particular situation. I deliberately chose a very simple experimental setup that avoids having to synchronize multiple clocks (or so I think, anyway) and just looks at this KE-velocity relationship in different directions, and I am curious what kind of bizarre stuff has to happen to make this all come out looking isotropic in the end.