Sure, there are plenty of potential applications of higher category theory to machine learning and deep learning. Such applications are still in their infancy, so don't expect a new algorithm that gets better results in less time in the immediate future. If that's your goal, then I think Mirco's answer is great (simplicial neural networks) and it's the one I would have given. Since he's already done so, let me instead sketch some higher-level, conceptual connections between higher category theory and machine learning.
To understand such connections, it's helpful to zoom way out and focus on the forest instead of the trees. What is machine learning really about?
1. Data, since the model needs to train on data.
2. Iterating towards a good model.
3. Building up complexity from simple pieces.
Towards (1), category theory has been proposed as an engine for the study of databases. I learned this from David Spivak's work, and wrote about it previously in this MO answer (which has tons of applications of category theory in the sciences). The idea is that a category can be a model for a database, and you can use category theory to enforce constraints, making sure your database doesn't contain bad data that could break the machine learning algorithm. In my book on Data Systems, we had three chapters devoted to databases and three chapters devoted to constraints. Similarly, you can use category theory to enforce constraints on the types of solutions your machine learning algorithm will come up with. Spivak has also written papers about the uses of category theory in experimental design (also summarized in the MO answer linked above), which matters if you want to be sure you get good data on which to build your model. You probably know that Google is running experiments on users all the time; experimental design still matters. You might also be interested in Spivak's current work with the Topos Institute, which applies (higher) category theory to all sorts of modern, real-world issues, including artificial intelligence.
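To make the schema-as-category idea a bit more concrete, here is a minimal toy sketch (my own illustration, not Spivak's actual formalism, and all names are hypothetical): tables are objects, foreign-key columns are morphisms, a data instance assigns an actual function on rows to each morphism, and a constraint is a path equation, i.e., a commutative diagram the instance must satisfy.

```python
# Toy sketch: a database schema as a category. Objects are tables,
# morphisms are foreign-key columns, and a data instance assigns an
# actual function on rows (here, a dict) to each morphism. A constraint
# is a "path equation": two composite paths required to agree.

def follow(path, instance):
    """Compose the row-level functions along a list of morphism names."""
    def composite(row):
        for name in path:
            row = instance[name][row]
        return row
    return composite

def satisfies(lhs, rhs, instance, rows):
    """Check the path equation lhs = rhs on every row (commutativity)."""
    f, g = follow(lhs, instance), follow(rhs, instance)
    return all(f(r) == g(r) for r in rows)

# Hypothetical schema: Employee --worksIn--> Dept --locatedIn--> City,
# and Employee --livesIn--> City. The constraint "everyone lives in the
# city where their department is located" is the path equation
#     livesIn = locatedIn . worksIn
instance = {
    "worksIn":   {"alice": "math", "bob": "cs"},
    "locatedIn": {"math": "boston", "cs": "nyc"},
    "livesIn":   {"alice": "boston", "bob": "nyc"},
}
ok = satisfies(["livesIn"], ["worksIn", "locatedIn"], instance, ["alice", "bob"])
# ok is True; changing bob's livesIn to "boston" would violate the constraint.
```

A database that violates such an equation would be rejected before it ever reaches the learning algorithm, which is the sense in which the category enforces data quality.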
Towards (2), iterating towards success makes a machine learning algorithm an evolutive system, and Spivak has studied those, too; I wrote about them in the linked MO answer. This also brings in an area where $\infty$-categories can play a role. In any dynamical system, we care about the time axis. Often, we want to study aspects of the system that are invariant under reparameterizing time, e.g., the long-term tendency towards equilibrium. When you start considering all possible ways to reparameterize time, you're doing homotopy theory, and $\infty$-categories can appear. There have been many papers studying the homotopy theory of dynamical systems: check out work of Bubenik, Jardine, Gaucher, and Sanjeevi Krishnan, among others.
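A toy numerical illustration of the reparameterization point (my own example, not drawn from any of the cited papers): the gradient flow $x'(t) = -x$ has solution $x(t) = x_0 e^{-t}$. Reparameterizing time by any increasing, unbounded map changes the trajectory pointwise, but not the invariant we care about, convergence to the equilibrium $0$.

```python
import math

# The gradient flow x'(t) = -x(t) has solution x(t) = x0 * exp(-t),
# with equilibrium 0.
def flow(x0, t):
    return x0 * math.exp(-t)

# Two increasing, unbounded reparameterizations of time:
phi_fast = lambda t: 2.0 * t
phi_slow = lambda t: t ** 2 / 10.0

# Pointwise, the reparameterized trajectories disagree...
early = [flow(1.0, phi(1.0)) for phi in (phi_fast, phi_slow)]
# ...but the reparameterization-invariant feature -- convergence to the
# equilibrium 0 -- is the same for both.
limits = [flow(1.0, phi(50.0)) for phi in (phi_fast, phi_slow)]
```

Studying what survives *all* such reparameterizations at once is where homotopy theory enters.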
Towards (3), this is a fundamental use of category theory, via colimits. The mindset of category theory could potentially be useful to one of the fundamental issues of deep learning, which is to understand the inner workings of a neural network and why it comes up with the answer it does. You could even imagine viewing the neural network as a functor, from data to models, and break it down using functor calculus. There are SO MANY ways to apply category theory to these questions, and I expect to see plenty of such papers in the years to come.
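To make the "network as composite of simple pieces" picture slightly more concrete, here is a toy sketch (my own illustration, not a standard construction): layers are morphisms, and a deep network is simply their categorical composite; colimits would enter when gluing such pieces along shared structure.

```python
# Toy sketch: layers as morphisms, a network as their composite.

def linear(W, b):
    """A dense layer: an affine map between finite-dimensional spaces."""
    def f(x):
        return [sum(w * xi for w, xi in zip(row, x)) + bi
                for row, bi in zip(W, b)]
    return f

def relu(x):
    """Componentwise rectifier, itself a simple morphism."""
    return [max(0.0, t) for t in x]

def compose(*layers):
    """A deep network is nothing but the composite of its layers."""
    def network(x):
        for layer in layers:
            x = layer(x)
        return x
    return network

net = compose(linear([[1.0, -1.0]], [0.0]), relu)
# net([3.0, 5.0]): the linear map gives 3 - 5 = -2, then ReLU gives [0.0]
```

Viewing `compose` as composition in a category is exactly what lets tools like functor calculus get a grip on the network as a whole.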
Here are some other, more concrete, connections between category theory and deep learning:
- Uniform Manifold Approximation and Projection (UMAP)
- Causal-net condensation
- Topological Deep Learning, which uses topological data analysis. The papers that put this on a firm theoretical foundation often use model categories and $\infty$-categories. Again, see work of Bubenik, Krishnan, and others. See also the following papers:
A Survey of Topological Machine Learning Methods, Topological data analysis and machine learning, Topological Data Analysis and Machine Learning Theory. And, for the $\infty$-category connection, see: Abstract homotopy theory for topological data analysis, Homotopy Theory and TDA with a View Towards Category Theory, and work of Jardine.
I'll add more if I think of any. The take-away is that there are tons of potential connections and I encourage you to investigate any that seem interesting to you!