Let $L$ be the distance from the double slit to the screen, $S$ the distance from the source slit to the double slit, and $s$ the width of the source slit.
The interference pattern produced by a single source point is like the cross section of a Fresnel diffraction pattern: the fringes get closer together with increasing distance from the center. The whole pattern scales up linearly with the ratio $L/S$. If the source point is moved laterally by a distance $x$, the interference pattern moves laterally (in the opposite direction) by a distance $X$, where $$X = x L/S.$$
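To make the scaling concrete with made-up numbers: if the source slit sits $S = 20\ \mathrm{cm}$ behind the double slit and the screen is $L = 1\ \mathrm{m}$ beyond it, then moving a source point by $x = 0.1\ \mathrm{mm}$ shifts the whole pattern (in the opposite direction) by $$X = 0.1\ \mathrm{mm} \times \frac{1\ \mathrm{m}}{0.2\ \mathrm{m}} = 0.5\ \mathrm{mm}.$$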
If two such interference patterns are superimposed but offset by a distance $X/2$, the result is a pattern in which fringes whose separation is $X$ are completely blurred. Fringes with larger separations are blurred only slightly and remain visible.
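To see why, idealize the two patterns as cosine fringes of spacing $p$ (ignoring the variation of spacing across the real pattern). Adding a copy shifted by $X/2$ gives
$$\tfrac{1}{2}\left[\cos\frac{2\pi y}{p}+\cos\frac{2\pi\,(y+X/2)}{p}\right]=\cos\!\left(\frac{\pi X}{2p}\right)\cos\!\left(\frac{2\pi\,(y+X/4)}{p}\right),$$
so the fringe amplitude is multiplied by $\cos(\pi X/2p)$: it vanishes when $p = X$, but is only reduced to $\cos(\pi/6)\approx 0.87$ when $p = 3X$.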
If the source is a slit of width $s$, it is effectively a continuum of point sources. It contains pairs of points with separations ranging from $0$ to $s$, so all features of the resulting pattern with fringe separations smaller than $X = sL/S$ (the shift corresponding to the full slit width) will be severely blurred.
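Here is a quick numerical sketch of this averaging, again with idealized cosine fringes of constant spacing $p$ and made-up distances (none of the numbers below come from the question):

```python
import numpy as np

# Average the fringe pattern over a continuum of source points spanning the
# slit width s.  Each point at offset x contributes cosine fringes of spacing p
# shifted by x*L/S on the screen; the Michelson visibility of the averaged
# pattern shows which fringe spacings survive.
def visibility(p, s, L, S, n_src=501):
    y = np.linspace(-10 * p, 10 * p, 20001)      # screen coordinate
    x_src = np.linspace(-s / 2, s / 2, n_src)    # source points across the slit
    I = np.zeros_like(y)
    for x in x_src:
        I += 1 + np.cos(2 * np.pi * (y + x * L / S) / p)
    I /= n_src
    return (I.max() - I.min()) / (I.max() + I.min())

L, S, s = 1.0, 0.2, 1e-4          # metres (made-up values)
X = s * L / S                     # blur scale: X = s*L/S = 0.5 mm
for p in [0.3 * X, X, 3 * X, 10 * X]:
    print(f"spacing {p / X:4.1f}*X -> visibility {visibility(p, s, L, S):.3f}"
          f"  (analytic |sinc(X/p)| = {abs(np.sinc(X / p)):.3f})")
```

Fringe spacings at or below $X$ come out with essentially zero contrast, while spacings a few times $X$ survive almost untouched, which is exactly the blurring described above.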
If $X/2$ is equal to or greater than the spacing between the central fringes, then even the central fringes are blurred. So, if the source slit is too wide, there is no fringe visibility at all.
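To put a number on "too wide", one extra assumption is needed that isn't stated above: take the double-slit separation to be $d$ and the wavelength $\lambda$, so the central fringe spacing is roughly $\lambda L/d$. Comparing this with the blur scale $X = sL/S$ gives, up to a factor of order one,
$$\frac{sL}{S}\gtrsim\frac{\lambda L}{d}\quad\Longrightarrow\quad s\gtrsim\frac{\lambda S}{d},$$
i.e. the familiar spatial-coherence condition: the source slit has to be narrower than about $\lambda S/d$ for any fringes to be visible.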
This is the basis of stellar interferometry: the angular width of a star can be measured by the width of the non-blurred portion of a diffraction pattern formed with light from the full width of the star.
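As a rough sketch of the stellar case (with made-up numbers, a uniform strip source of angular width $\theta$ standing in for the stellar disc, and the same idealized cosine-fringe model as above): the blur scale on the screen is $\theta L$, so the local fringe contrast follows the $|\mathrm{sinc}|$ factor from the previous snippet, and only the part of the pattern whose local fringe spacing exceeds about $\theta L$ survives.

```python
import numpy as np

# Made-up illustration: a uniform strip source of angular width theta
# (the numerical factor for a circular stellar disc is ignored).
# The blur scale on the screen is X = theta*L, so fringes of local spacing p
# are washed out by a factor |sinc(X/p)|, as in the previous sketch.
theta = 0.05 * np.pi / (180 * 3600)   # 0.05 arcsec in radians (invented value)
L = 10.0                              # double slit to screen, metres (invented)
X = theta * L                         # blur scale on the screen

def local_contrast(p):
    return abs(np.sinc(X / p))        # np.sinc(x) = sin(pi*x)/(pi*x)

for ratio in [0.7, 1.0, 2.0, 5.0]:
    print(f"local spacing {ratio:3.1f}*X -> contrast {local_contrast(ratio * X):.3f}")

# Reading the finest spacing that still shows fringes off the pattern,
# p_min ~ X, gives the angular width back as theta ~ p_min / L.
p_min = X
print("estimated angular width:", p_min / L * 180 * 3600 / np.pi, "arcsec")
```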