Chances are, you've already seen it happen in your past studies of math and just never heard it called that. In the following, I'm going to try to give an example to motivate the answer that it's not just possible, but quite ordinary, without going into rigorous detail.
The first thing to realize is that ordinary real- or complex-valued functions over the same domain already form a vector space. A vector space is simply a collection of objects that can be added together and scaled through multiplication by suitable numbers (a field). The full definition is somewhat more involved than that, and can be found e.g., here, but it's clear that one can add these functions and multiply them by scalars, and the other defining properties of "vector space" also hold (e.g., $\vec{0}$ is the function that maps everything in the domain to $0$).
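To see that structure concretely, here's a minimal Python sketch (the helper names `add`, `scale`, and `zero` are mine, purely for illustration) of pointwise addition and scaling, which is exactly the vector-space structure on functions:

```python
import math

# Functions over a common domain form a vector space under
# pointwise addition and scalar multiplication.

def add(f, g):
    """Vector addition: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Scalar multiplication: (c * f)(x) = c * f(x)."""
    return lambda x: c * f(x)

zero = lambda x: 0  # the zero vector: maps everything in the domain to 0

h = add(scale(2.0, math.sin), math.cos)  # the "vector" 2*sin + cos
print(h(1.0))  # 2*sin(1) + cos(1) ≈ 2.2232
```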
However, what we want is not just a vector space, but also to have an inner product generalizing the role of the dot product of vectors in $\mathbb{R}^n$. There are infinitely many choices here, but one of the more generally useful ones is
$$\langle f|g\rangle = \int f^*(x)g(x)\,\mathrm{d}x\text{,}$$
with the integral over the entire domain and $^*$ representing complex conjugation. This is an obvious generalization of the Euclidean dot product
$$\vec{u}\cdot\vec{v} = \sum_k u_k v_k\text{,}$$
adjusted to handle complex-valued functions to ensure that $\langle f|f\rangle\geq 0$, i.e., that vectors have non-negative norm-squared. Recall that in Euclidean space, for a unit vector $\hat{u}$, the dot product $\hat{u}\cdot\vec{v}$ is the component of $\vec{v}$ along $\hat{u}$, so that the projection of $\vec{v}$ onto $\hat{u}$ must be $\hat{u}(\hat{u}\cdot\vec{v})$. Thus for an arbitrary vector, not necessarily normalized,
$$\text{Projection of $\vec{v}$ onto $\vec{u}$} = \vec{u}\frac{\vec{u}\cdot\vec{v}}{\vec{u}\cdot\vec{u}}\text{,}$$
so given an orthogonal basis $\{\vec{e}_k\}$, we can write any vector in terms of components in that basis:
$$\vec{v} = \sum_k \vec{e}_k\frac{\vec{e}_k\cdot\vec{v}}{\vec{e}_k\cdot\vec{e}_k}\text{.}$$
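If you want to check that formula numerically, here's a quick sketch in plain $\mathbb{R}^3$, using an orthogonal (but deliberately not orthonormal) basis I picked arbitrarily:

```python
import numpy as np

# An orthogonal, non-normalized basis of R^3 (chosen for illustration).
e = [np.array([1.0,  1.0, 0.0]),
     np.array([1.0, -1.0, 0.0]),
     np.array([0.0,  0.0, 2.0])]

v = np.array([3.0, -1.0, 4.0])

# Reassemble v from its projections onto each basis vector.
reconstructed = sum(ek * (ek @ v) / (ek @ ek) for ek in e)
print(reconstructed)  # [ 3. -1.  4.]
```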
Ok, but what about functions? Does it make sense? Can we actually write something like:
$$f(x) = \sum g_k(x)\frac{\langle g_k|f\rangle}{\langle g_k|g_k\rangle}$$
for a "basis" of $\{g_k(x)\}$ using that integral as an inner product to replace the dot product?
As a simple example, suppose we're dealing with real functions on $[-\pi,\pi]$, and I define the functions
$$c_n(x) = \cos(nx),\;\;\;n\geq 0\\ s_n(x) = \sin(nx),\;\;\;n > 0$$
These vectors are orthogonal: for $n\neq m,$
$$\int_{-\pi}^\pi c_n c_m\,\mathrm{d}x = \int_{-\pi}^\pi s_n s_m \,\mathrm{d}x = \int_{-\pi}^\pi c_n s_m \,\mathrm{d}x = 0 = \int_{-\pi}^\pi c_n s_n\,\mathrm{d}x\text{.}$$
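You can verify these orthogonality integrals numerically as well; here's a sketch using scipy's `quad` (the helper `inner` is just my name for the inner product above, restricted to real functions):

```python
import numpy as np
from scipy.integrate import quad

c = lambda n: (lambda x: np.cos(n * x))  # c_n(x) = cos(nx)
s = lambda n: (lambda x: np.sin(n * x))  # s_n(x) = sin(nx)

def inner(f, g):
    """<f|g> = integral of f(x)*g(x) over [-pi, pi], for real f, g."""
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return val

print(inner(c(2), c(3)))  # ~0: distinct cosines are orthogonal
print(inner(s(1), s(4)))  # ~0: distinct sines are orthogonal
print(inner(c(2), s(2)))  # ~0: cosines and sines are orthogonal
print(inner(c(2), c(2)))  # ~pi: the norm-squared of c_n for n > 0
```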
Although it is not immediately obvious, they also form a basis: given $f:[-\pi,\pi]\rightarrow\mathbb{R}$, one can write
$$f(x) = \sum_{n = 0}^\infty c_n(x)\frac{\langle c_n|f\rangle}{\langle c_n|c_n\rangle} + \sum_{n=1}^\infty s_n(x)\frac{\langle s_n|f\rangle}{\langle s_n|s_n\rangle}$$
And all I've done is write the standard Fourier series in vector notation: if you actually do the integrals, then $\langle c_0|c_0\rangle = 2\pi$ and $\langle c_n|c_n\rangle = \langle s_n|s_n\rangle = \pi$ for $n>0$, while the numerators turn into the usual Fourier coefficients.
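To watch this projection formula actually reconstruct a function, here's a sketch that projects a test function (I picked $f(x)=x^2$) onto the first few basis vectors via numerical integration; all names here are mine, for illustration only:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """<f|g> over [-pi, pi] for real functions."""
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi, limit=200)
    return val

f = lambda x: x * x  # test function on [-pi, pi]

def partial_sum(f, N):
    """Projection of f onto span{1, cos(nx), sin(nx) : 1 <= n <= N}."""
    def approx(x):
        # c_0 term: <c_0|f> / <c_0|c_0> with c_0 = 1, <c_0|c_0> = 2*pi
        total = inner(f, lambda t: 1.0) / (2 * np.pi)
        for n in range(1, N + 1):
            cn = lambda t, n=n: np.cos(n * t)
            sn = lambda t, n=n: np.sin(n * t)
            # <c_n|c_n> = <s_n|s_n> = pi for n > 0
            total += np.cos(n * x) * inner(f, cn) / np.pi
            total += np.sin(n * x) * inner(f, sn) / np.pi
        return total
    return approx

g = partial_sum(f, 10)
print(f(1.0), g(1.0))  # the partial sum should already be close to f at x = 1
```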
In other words, the Fourier series writes a function over a finite interval in terms of a particular countably infinite orthogonal basis $\{1,\cos(nx),\sin(nx): n>0\}$. Similarly, a Fourier transform can be thought of as writing a function in terms of an uncountably infinite orthogonal basis, so sums are replaced with integrals.
However, this means that both $f(x)$ and the list of Fourier coefficients give you the same information: they're just different representations of the same mathematical object, the vector. Therefore, as nervxxx notes, we can consider $f(x)$ to simply be some vector written in the position basis (an uncountably infinite basis), rather than taking the function itself as the fundamental object.
(N.B. there's nothing particularly special about the Fourier basis; many other choices are possible. Also, Hilbert spaces require a bit more than the mere existence of an inner product, namely completeness, but that turns out to hold here as well.)