You really shouldn't rush decisions like this. Timekeeping is arguably the oldest branch of science, and changes to the way people keep time usually take a long time to be agreed upon and implemented. When the process is rushed, the results may be... undesirable. Bear in mind that timekeeping has legal ramifications, so it requires political consensus as well as scientific and engineering considerations.
It can be difficult to reverse bad time-related decisions, or to make substantial changes to timekeeping systems. E.g., we still use the base-60 subdivision of time that we inherited from the Babylonians. (The French did briefly try a metric version of time, but it was a disaster.)
The recent history of precision chronology has been a litany of committees failing to reach decisions, with poor communication and misunderstanding between the various parties involved, punctuated by hasty decision making, and with technical details hidden behind paywalls, particularly in regard to leap seconds. (This has led to the embarrassing situation that the POSIX standard is internally inconsistent regarding leap seconds: it defines timestamps in terms of UTC, yet its conversion formula assumes every day is exactly 86400 seconds long, which is not true of UTC days that contain a leap second.)
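Here's a minimal Python sketch of that collision, using the real leap second at the end of 2016. `calendar.timegm` performs the pure POSIX-style arithmetic, so it happily accepts a seconds field of 60:

```python
import calendar

# POSIX time assumes every UTC day is exactly 86400 seconds long, but UTC
# has leap seconds. 2016-12-31T23:59:60Z was a real leap second, yet pure
# POSIX arithmetic gives it the same timestamp as the second after it.
leap_second = calendar.timegm((2016, 12, 31, 23, 59, 60))
next_second = calendar.timegm((2017, 1, 1, 0, 0, 0))

print(leap_second)                 # 1483228800
print(next_second)                 # 1483228800
print(leap_second == next_second)  # True: two distinct UTC seconds, one timestamp
```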
For a summary of the sordid details, please see *A brief history of time scales* by Steve Allen of the Lick Observatory. He has more information about timekeeping issues (especially leap seconds) on his website.
As for creating a new second standard, we first need to decide which transition to use. It would be unfortunate to make a choice based on some current cutting-edge clock if there's a better design just around the corner. It's not just about having high precision: the design also needs to be reasonably robust, so that it's not too hard to reproduce. We need a global network of clocks; a single master time keeper is not sufficient. This gets tricky with extremely precise clocks because such clocks are highly sensitive to various relativistic effects. The current atomic clock network has to take clock altitude and latitude into account. Optical clocks are much more sensitive to altitude, and a global optical clock network would likely take the variations due to the Moon's gravitational potential into account explicitly.
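To put a number on that altitude sensitivity, here's a back-of-the-envelope Python sketch using the weak-field approximation, in which two clocks separated by a height h differ in rate by roughly gh/c²:

```python
# Gravitational time dilation near Earth's surface (weak-field approximation):
# fractional frequency shift between clocks separated by height h is ~ g*h/c^2.
g = 9.81           # m/s^2, surface gravity
c = 299_792_458.0  # m/s, speed of light

shift_per_metre = g * 1.0 / c**2
print(f"{shift_per_metre:.2e} per metre")  # ~1.1e-16

# So a clock with 1e-18 fractional uncertainty (see the table below) can
# resolve height differences of about a centimetre:
print(f"{1e-18 / shift_per_metre:.3f} m")  # ~0.009 m
```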
There's some talk of using a nuclear isomer transition for timekeeping: a nuclear clock. This is still a very experimental technique, but in theory a nuclear clock would be more robust, since nuclear transitions are much less sensitive to temperature and to stray electromagnetic fields than electronic transitions are.
FWIW, here's a table of atomic clock precision that I compiled for an earlier answer, with data courtesy of Wikipedia.
| Atom   | Type      | Fractional uncertainty |
|--------|-----------|------------------------|
| Cs-133 | Beam      | 1e-13   |
| Rb-87  | Beam      | 1e-12   |
| H-1    | Beam      | 1e-15   |
| Cs-133 | Fountain  | 1e-16   |
| Sr-87  | Lattice   | 1e-17   |
| Mg+Al  | Ion       | 8.6e-18 |
| Yb-171 | Lattice   | 1.6e-18 |
| Al+    | Ion       | 9.4e-19 |
| Sr-87  | Fermi gas | 2.5e-19 |
"Beam" refers to a standard off-the-shelf beam maser. "Fountain" is an atomic fountain, that value is for NIST-F2. "Lattice" is an optical lattice. "Fermi gas" is a 3D quantum gas optical lattice.
NIST-F1 (also an atomic fountain) has an uncertainty around 5e-16. Together, NIST-F1 and NIST-F2 form the primary time & frequency reference for the USA.
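For a feel for what these fractional uncertainties mean in practice, here's a quick Python sketch converting a few of the table's entries into worst-case drift per year of continuous operation:

```python
# Convert fractional frequency uncertainty into worst-case drift per year.
SECONDS_PER_YEAR = 365.25 * 86400  # ~3.16e7 s

clocks = [
    ("Cs-133 beam", 1e-13),
    ("Cs-133 fountain (NIST-F2)", 1e-16),
    ("Sr-87 Fermi gas", 2.5e-19),
]
for name, frac in clocks:
    print(f"{name}: ~{frac * SECONDS_PER_YEAR:.1e} s/year")
# Cs beam: ~3.2e-06 s/year; Cs fountain: ~3.2e-09; Sr Fermi gas: ~7.9e-12.
```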