Introducing the dual space of linear maps allows you to work with co- and contra-variant indices even without a metric being defined. As Charles Francis answered earlier, in that case column and row vectors are a nice way to think about things.
On the other hand, as you seem to have noticed, in a vector space equipped with an inner product there is really no need to introduce the dual space. (This is essentially due to the canonical isomorphism between an inner product space and its dual.)
For instance, consider a vector space $V$ with a basis $e_i$, so an arbitrary vector in components is $a = a^i e_i$ with real scalar components $a^i$. Suppose there is a dot (inner) product on this space, written $a \cdot b$ for vectors $a,b$. The metric coefficients $$g_{ij} = e_i \cdot e_j$$ are the dot products of the basis elements. Because the inner product is non-degenerate, the matrix of coefficients $g_{ij}$ is invertible, with matrix inverse denoted $g^{ij}$. Expanding $a,b$ in components and using bilinearity of the inner product, one then has $$ a \cdot b = a^i \, b^j \, g_{ij}$$ as usual.
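If a numerical sanity check helps, here is a minimal NumPy sketch (the non-orthogonal basis and the components are made up purely for illustration):

```python
import numpy as np

# Illustrative non-orthogonal basis of R^2, one basis vector per row.
e = np.array([[1.0, 0.0],    # e_1
              [1.0, 1.0]])   # e_2

# Metric coefficients g_ij = e_i . e_j, collected into a matrix.
g = e @ e.T

# Made-up components a^i, b^i of two vectors in this basis.
a_up = np.array([2.0, -1.0])
b_up = np.array([0.5, 3.0])

# The vectors themselves: a = a^i e_i, b = b^i e_i.
a = a_up @ e
b = b_up @ e

# The direct dot product agrees with a^i b^j g_ij.
assert np.isclose(a @ b, a_up @ g @ b_up)
```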
Now, this is where I will differ from the standard by not introducing a dual space.
Theorem. There exists a basis $e^i$ (note the upper index distinguishes this from the old basis $e_i$) of vectors in $V$ such that
$$ e^i \cdot e_j = \delta^i_j . $$
Explicitly, $e^i = g^{ij}e_j$; indeed $e^i \cdot e_j = g^{ik} \, e_k \cdot e_j = g^{ik} g_{kj} = \delta^i_j$. We call $e_i$ and $e^i$ a pair of reciprocal vector bases.
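Here is a quick numerical check of the theorem, reusing the same made-up basis as before (the construction $e^i = g^{ij} e_j$ is the point; the numbers are arbitrary):

```python
import numpy as np

e = np.array([[1.0, 0.0],    # e_1
              [1.0, 1.0]])   # e_2
g = e @ e.T                  # g_ij = e_i . e_j

# Reciprocal basis: e^i = g^{ij} e_j, one reciprocal vector per row.
e_up = np.linalg.inv(g) @ e

# Defining property: e^i . e_j = delta^i_j.
assert np.allclose(e_up @ e.T, np.eye(2))
```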
Every basis has a reciprocal basis; note there is no such thing as a reciprocal vector to an individual vector. Which basis set carries the upper versus the lower index is unimportant: both are sets of regular old vectors in $V$.
Now the vector $a = a^i \, e_i = a_i \, e^i$ can be expanded equally well in components (given by $a^i = a \cdot e^i$) or reciprocal components (given by $a_i = a \cdot e_i$). Consequently, the inner product evaluates to
$$ a \cdot b = a^i \, b_j \, (e_i \cdot e^j) = a^i b_i .$$
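Extending the same sketch to components: projecting onto the reciprocal basis gives $a^i$, projecting onto the original basis gives $a_i$, and the contraction $a^i b_i$ reproduces the dot product (again with made-up numbers):

```python
import numpy as np

e = np.array([[1.0, 0.0],           # basis e_i (rows)
              [1.0, 1.0]])
e_up = np.linalg.inv(e @ e.T) @ e   # reciprocal basis e^i (rows)

a = np.array([3.0, 1.0])            # two arbitrary vectors in R^2
b = np.array([-2.0, 4.0])

a_up, a_dn = e_up @ a, e @ a        # a^i = a . e^i,  a_i = a . e_i
b_dn = e @ b

# Both expansions recover the same vectors, and a . b = a^i b_i.
assert np.allclose(a_up @ e, a) and np.allclose(b_dn @ e_up, b)
assert np.isclose(a @ b, a_up @ b_dn)
```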
By now hopefully you can see that this completely recreates the benefits of introducing the dual space, while working entirely with vectors. Personally I find this formalism very useful and intuitive, but unfortunately it is not standard in the literature. That's a pity, because there is always a metric in GR, so this way of doing things can provide a lot of simplifications.
One example of a fun fact when you translate this approach into GR: the reciprocal basis to the coordinate basis fields $\partial/\partial x^i$ is precisely the set of vector fields which are the gradients $\nabla x^i$ of the coordinate functions $x^i$; these gradients correspond to the one-forms usually called $dx^i$.
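To make this concrete, here is a small SymPy check in flat 2D polar coordinates (my example, not part of the claim above): the columns of the Jacobian are the coordinate basis vectors $\partial/\partial r$, $\partial/\partial\theta$ in Cartesian components, the rows of the inverse Jacobian are the gradients $\nabla r$, $\nabla\theta$, and these agree with $e^i = g^{ij} e_j$.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Cartesian coordinates as functions of the polar coordinates.
X = sp.Matrix([r*sp.cos(th), r*sp.sin(th)])

# Coordinate basis vectors d/dr, d/dtheta in Cartesian components:
# the columns of the Jacobian.
J = X.jacobian([r, th])

# Metric coefficients g_ij = e_i . e_j (Euclidean dot in Cartesian
# components): gives diag(1, r**2).
g = sp.simplify(J.T * J)

# Gradients grad(r), grad(theta) in Cartesian components: by the chain
# rule, the rows of the inverse Jacobian.
grads = J.inv()

# They coincide with the reciprocal basis e^i = g^{ij} e_j.
recip = sp.simplify(g.inv() * J.T)   # row i is g^{ij} e_j
assert sp.simplify(grads - recip) == sp.zeros(2, 2)
```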
In summary: if there is no inner product (aka metric), you can think in terms of column and row tuples. If there is a metric, you only need to think about vectors (as directed arrows), and you can regard the co- and contra-variant versions as two different basis representations of the same vector.