8

I'm a programmer. I code in C++, C#, HTML5, and PHP. There are many graphics engines at my disposal. The question is: Does there exist a graphics engine that is as true to our reality as possible, given our current understanding of physics? For instance, I can easily create macroscopic objects in a 3D space, but what about all of the elements of reality that make up these macroscopic objects? What if, for instance, I wanted to start from the bottom up, creating simulations at the Planck scale, then particles, atomic structures, cells, microbiology, etc.? What if I want to simulate quantum mechanics? Of course I can make a model of an atom, but it winds up not being exactly analogous to the real thing.

I would like to correctly simulate these structures and their behaviors. Assume also that I have access to an immense amount of parallel computing processing power (I do). Perhaps some non-gaming scientific graphics engines exist that I'm not aware of.

xendi
  • 237
  • 2
    It may not be quite what you're looking for, but there's a whole field called "physically based rendering" that tries to simulate the interaction between light and objects as accurately as possible. There are a couple of open-source renderers, called luxrender and mitsuba, plus some commercial offerings, though I haven't tried any of these myself. – N. Virgo Mar 23 '15 at 08:15
  • 5
    I have access to some very powerful supercomputers (including Oak Ridge's Jaguar). When performing quantum mechanical calculations, I cannot reasonably exceed $O[10^4]$ atoms. But even that involves employing myriad approximations: linear-scaling DFT, pseudopotentials, and so on. Furthermore, such large calculations are pretty much limited to single snapshot energy calculations rather than trajectories. – lemon Mar 23 '15 at 08:17
  • Related: http://physics.stackexchange.com/q/8895/2451 , http://physics.stackexchange.com/q/90127/2451 and links therein. – Qmechanic Mar 23 '15 at 14:50
  • @lemon as indicated in my answer, 1E4 is approximately the number of atoms in a single molecule of haemoglobin. But I assume your simulations involve simpler / more homogenous systems. It would be interesting to see an example of the kind of complexity you mean by 1E4 atoms. – Level River St Mar 23 '15 at 15:37
  • A scientific physics graphics engine? Assuming you render an atom to make it look the size of a blueberry on your computer screen, and assuming that atom is part of a larger cell, how big a computer screen would you need to "correctly simulate" graphically that entire cell? – Stephan Branczyk Mar 23 '15 at 18:59

3 Answers

28

Assume also that I have access to an immense amount of parallel computing processing power (I do).

Unless you are an important person in the Chinese computational science world (using Tianhe-2), or you have access to secret government computers us mere mortals don't know exist (so they don't appear in rankings of the best supercomputers in the world), I probably have access to more ;) And I can't even imagine tackling one billionth of one billionth of the problem you want to tackle. In fact, I'm certain the combined efforts of every computer on the planet, secret or not, could not begin to approach the problem you want to solve.

You very much overestimate the amount of computing power available on Earth. To connect atomic scales to macroscopic ones, a useful number to have in mind is Avogadro's number, about $6\times10^{23}$. That's how many atoms there are in a few grams of typical materials. Going back to that top computer in the world, it has about $1.5\times10^{15}$ bytes of memory. That is, even at one byte per atom, you couldn't store enough information in memory to represent a speck of dust. And at less than $10^{17}$ floating point operations per second, it would take eons to do any useful calculation on even that reduced dataset.
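
To make the mismatch concrete, here is a minimal back-of-envelope sketch in Python (the speck-of-dust mass, the timestep, and the cost per atom per step are my own illustrative assumptions):

```
# Rough sanity check using the figures above, plus a ~1 microgram "speck of dust"
# and a femtosecond-timestep molecular dynamics run as illustrative assumptions.
AVOGADRO = 6.022e23

atoms_in_speck = 1e-6 / 60 * AVOGADRO   # ~1 ug of silica at ~60 g/mol -> ~1e16 atoms
memory_bytes   = 1.5e15                 # the top machine's RAM, in bytes
print(atoms_in_speck / memory_bytes)    # ~7 atoms per byte: over budget even at 1 byte/atom

flops          = 1e17                   # sustained floating-point operations per second
steps          = 1e9                    # 1 microsecond of dynamics at 1 fs per step
flops_per_atom = 100                    # very optimistic cost per atom per step
years = atoms_in_speck * steps * flops_per_atom / flops / 3.15e7
print(years)                            # ~300 years for one microsecond of one speck of dust
```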

Add in the Planck scale being a good $20$ to $25$ orders of magnitude smaller than the atomic scale, and the problem becomes mind-bogglingly overwhelming. Note there are only about $10^{50}$ atoms on Earth, so turning the entire planet into a single computer, composed of science-fiction-level single-atom transistors, would still fall short of being able to render macroscopic things exactly from such first principles.
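
A quick way to see why even the planet-as-a-computer idea falls short (round numbers only, for illustration):

```
# How many Planck-scale cells fit inside the volume of a single atom?
planck_length = 1.6e-35                 # m
atomic_scale  = 1e-10                   # m, a typical atomic diameter
cells_per_atom = (atomic_scale / planck_length) ** 3
print(f"{cells_per_atom:.1e}")          # ~2e74 Planck-volume cells per atom

atoms_on_earth = 1e50                   # rough figure from the text
print(f"{atoms_on_earth / cells_per_atom:.1e}")  # ~4e-25 Earth-atoms available per cell
```

Even allotting one single-atom transistor per simulated cell, and doing no actual computation with it, you come up short by some 24 orders of magnitude for a single atom, never mind a macroscopic object.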

I would like to correctly simulate these structures and their behaviors.

A laudable goal, and one shared by many physicists. However, a key ingredient in physics is knowing what approximations to make so as to render a problem tractable. This applies to computation as much as to anything else. In particular, physics often divides large, complicated systems into components with simple rules, and those rules are themselves inferred by breaking examples of each component into still smaller pieces.

To make this concrete, consider a hypothetical example. If you want to model a person, you have components at the organ level, like blood and skin. Each one has reasonably simple, aggregate behavior that is an approximation to its "true" underlying, fundamental nature (but often a very good approximation!). You know the behavior of blood because you have done simulations of ideal, continuum fluids with similar viscosities. You know how viscosity works from empirical experiment, but if you wanted to simulate it you could model a small patch of approximate water molecules imbued with electrostatic interactions. Those electrostatic interactions can in turn come from molecular dynamics simulations, and so on.
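
A minimal sketch of what "simple aggregate rules" look like in code (Python; the vessel dimensions and pressure drop are made-up illustrative values, and the viscosity is treated as an input handed down from experiment or from a lower-level simulation):

```
from math import pi

# Continuum model of blood flow in a vessel (Hagen-Poiseuille). The viscosity is
# just a parameter here; in a multiscale workflow it would come from experiment or
# from a lower-level (e.g. molecular dynamics) simulation, not from simulating
# every molecule inside this model.
mu      = 3.5e-3   # Pa*s, approximate viscosity of blood (empirical input)
radius  = 2e-3     # m, vessel radius (illustrative)
length  = 0.1      # m, vessel segment length (illustrative)
delta_p = 400.0    # Pa, pressure drop along the segment (illustrative)

flow_rate = pi * radius**4 * delta_p / (8 * mu * length)   # m^3/s
print(f"volumetric flow rate: {flow_rate * 1e6:.2f} mL/s")
```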

Trying to solve everything from first principles is futile. Instead, we study reasonable models of systems and apply what we learn¹ to other models.


¹ In fact, I posit that if all you do is simulate nature as closely as possible, you have done nothing at all. You could have just let nature run its course. Simulation is only valuable as a tool, like experiment or theory, for gaining new insights into how nature works.

  • 2
    “a key ingredient in physics is knowing what approximations to make” – indeed you could go as far as saying this is what physics is all about: no physical theory is “true reality”, they're all just models of reality which happen to agree with certain experiments (but might always fail for some future experiment). – leftaroundabout Mar 23 '15 at 16:25
  • TL;DR version: Nature is the ultimate computer. Human built computers don't even come close to coming close. And simulating it exactly doesn't really help us solve any problems. – jpmc26 Mar 23 '15 at 21:54
  • This answer assumes I'm trying to start out simulating large structures and that I don't have much computing power when I have a giant parallel supercomputer at my disposal. The comments are by people who obviously don't know much about computer science and the gains that have been made simulating functions and structures found in nature. – xendi Mar 25 '15 at 21:34
1

As a chemist turned engineer, I think I am well placed to answer this question.

Does there exist a graphics engine that is as true to our reality as possible given our current understanding of physics?

Given appropriate constraints and simplifications, it is possible to build a useful model from simple elements. Whether you consider this "true to reality" is open to interpretation.

There are three main simulation areas that spring to mind in mechanical engineering: finite element analysis, structural analysis, and computational fluid dynamics (CFD).

Finite element analysis means taking a solid body, breaking it down into tetrahedral or cubic elements, and applying the laws of physics (normally just the stress-strain relationship) to each one. To get a really good result on a relatively small object, you're going to need about 1000 elements in each of 3 dimensions, so that's a gigabyte of memory assuming one byte per element (in reality each element will need at least ten times that to store, as a minimum, its 6 degrees of translational and rotational freedom). This is already looking like a memory issue for a regular PC. By increasing the mesh size a bit, we can run such a simulation on a PC, but it may take several hours for the effects of even a static load to propagate through the model to convergence. Modelling oscillation (time + 3 spatial dimensions) is pretty much impossible on a PC, both in terms of the time taken and the amount of data generated (several gigabytes per timestep). Reducing to time + 2 spatial dimensions helps a lot.
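
As a rough illustration of the memory scaling (assuming double precision, and ignoring stiffness matrices, connectivity, and solver workspace, which make matters worse):

```
# Memory just for the state of a naive 1000 x 1000 x 1000 finite element mesh.
n_elements       = 1000 ** 3
dofs_per_element = 6          # 3 translational + 3 rotational degrees of freedom
bytes_per_dof    = 8          # double precision

bytes_total = n_elements * dofs_per_element * bytes_per_dof
print(f"{bytes_total / 1e9:.0f} GB just for the state vector")   # ~48 GB
```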

In order to make the calculations reasonable, civil and structural engineers use a simplification to perform structural analysis. Programs like Staad Pro work with elements such as beams and columns, assuming they will bend according to known models. The engineer builds a Meccano-like model for the program input, specifying the nodes where the beams connect, indicating whether each joint is fixed or free to rotate, and so on. In this way, full four-dimensional (time + 3 space) analysis is possible.
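
Here is a toy version of that beams-and-nodes approach (a direct-stiffness sketch in Python/NumPy, not Staad Pro; the material properties and load are illustrative, and the result can be checked against the textbook cantilever formula):

```
import numpy as np

# A cantilever modelled with two Euler-Bernoulli beam elements and a tip load.
E, I = 210e9, 1.0e-6          # steel, second moment of area (m^4), illustrative
L    = 1.0                    # length of each element (m); total span 2 m
P    = 1000.0                 # tip load (N)

def beam_k(E, I, L):
    """4x4 stiffness matrix of one beam element; DOFs = [w_i, theta_i, w_j, theta_j]."""
    c = E * I / L**3
    return c * np.array([[ 12,    6*L,  -12,    6*L],
                         [ 6*L, 4*L*L, -6*L, 2*L*L],
                         [-12,   -6*L,   12,  -6*L],
                         [ 6*L, 2*L*L, -6*L, 4*L*L]])

K = np.zeros((6, 6))                     # 3 nodes x 2 DOFs (deflection, rotation)
for start in (0, 2):                     # element 1 covers DOFs 0-3, element 2 covers 2-5
    K[start:start+4, start:start+4] += beam_k(E, I, L)

F = np.zeros(6); F[4] = -P               # downward load on the tip-deflection DOF
free = [2, 3, 4, 5]                      # node 0 fully fixed: w0 = theta0 = 0
u = np.linalg.solve(K[np.ix_(free, free)], F[free])

print(f"tip deflection (FE):        {u[2] * 1000:.2f} mm")
print(f"tip deflection (P*L^3/3EI): {-P * (2 * L)**3 / (3 * E * I) * 1000:.2f} mm")
```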

Computational fluid dynamics is the equivalent of finite element analysis, but for fluids rather than solids. Again we use a mesh of cubes or tetrahedra to represent the volume, but there are different issues. This is the type of simulation I have personal experience of, using Floworks software, which uses a cubic mesh and very usefully allows you to reduce the mesh scale to a half or a quarter of the main mesh in critical areas. Nevertheless, the experience has led me to believe that you can "predict" just about anything you want with computational fluid dynamics software. I see it as a useful qualitative tool for identifying problem areas, rather than a means of quantitatively predicting pressure drop versus velocity.

Again we need about a billion elements for a really good simulation of a small object in 3 spatial dimensions, with at least pressure and three velocity components for each element. Again, predicting flow in three spatial dimensions plus time takes excessive computing power. But unfortunately, in the case of computational fluid dynamics, a system with stable inlet and outlet flows is very likely to have an oscillation somewhere inside the model, which is not the case for finite element analysis under a constant load. Sometimes we can simplify to 2 spatial dimensions. A 2-spatial-dimension + time analysis of the cross section of a chimney with wind blowing across it can be done, and it may reveal that the system oscillates due to vortex shedding, which, apart from the cyclic stress placed on the chimney, results in greater drag than would be seen with a time-averaged model. The equations used are the Navier-Stokes equations, which, though very simple in concept, can lead to surprisingly complex results if turbulence ensues (google the Reynolds number for more info). It isn't really possible to extend the calculation into 4 dimensions on a PC, so approximations have to be made to account for the effect of turbulence.
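
This is also where the dimensionless correlations come in. A quick sketch for the chimney case (the wind speed, diameter, and a typical circular-cylinder Strouhal number of about 0.2 are assumed values):

```
# Reynolds number and vortex-shedding frequency for wind across a chimney.
rho, mu = 1.2, 1.8e-5        # air density (kg/m^3) and dynamic viscosity (Pa*s)
v, d    = 10.0, 2.0          # wind speed (m/s) and chimney diameter (m), illustrative

reynolds   = rho * v * d / mu
strouhal   = 0.2             # typical value for a circular cylinder
f_shedding = strouhal * v / d

print(f"Re ~ {reynolds:.1e}")                       # ~1.3e6: well into the turbulent regime
print(f"shedding frequency ~ {f_shedding:.1f} Hz")
```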

In my field (combustion and heat transfer) the burner manufacturers introduce some simple combustion thermochemistry into their models. That adds another level of complexity which means a pretty powerful computer is needed.

So good luck: go ahead and perform a computational fluid dynamics simulation with a 1000x1000x1000 mesh for 1000 timesteps and you will generate several terabytes of data. Don't forget that each iteration will need to converge properly before you continue to the next timestep. Interpreting all that data is another issue. Do this every day for a year and you will have several petabytes. Do you have that much storage? You will quickly see why engineers prefer to use the relationship between the Reynolds number and the friction factor rather than use the Navier-Stokes equations to calculate everything from first principles.
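
The arithmetic behind that warning, assuming full double-precision output of every timestep (writing out only a fraction of the steps still leaves you in terabyte territory):

```
# Data volume for a 1000 x 1000 x 1000 CFD mesh run for 1000 timesteps.
cells     = 1000 ** 3
variables = 4                # pressure + 3 velocity components
bytes_per = 8                # double precision
timesteps = 1000

per_step = cells * variables * bytes_per
total    = per_step * timesteps
print(f"{per_step / 1e9:.0f} GB per timestep, {total / 1e12:.0f} TB per run")
print(f"{total * 365 / 1e15:.0f} PB after one run per day for a year")
```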

What if, for instance, I wanted to start from the bottom up, creating simulations at the Planck scale, then particles, atomic structures, cells, microbiology, etc.?

Whoa! That really is a lot of computing power. A molecule of haemoglobin weighs about 64000 daltons (about the same as 64000 hydrogen atoms). A dalton is 1.66E-24 g, the reciprocal of Avogadro's number. Haemoglobin is an interesting protein because it has four separate binding sites for oxygen, and binding of one oxygen causes a change in conformation that enhances the binding strength of the others; it's a kind of natural molecular machine (though much simpler than the rather more complex molecular machines like ribosomes and cell membranes).

I don't have the exact molecular formula for haemoglobin handy, but let's make some assumptions. Let's assume the numbers of protons and neutrons are equal. That means there are 32000 protons and 32000 electrons (I'm not really interested in the protons, and we'll stay away from nuclear physics, but it's the best way to get an idea of the number of electrons). The average atomic mass will be somewhat similar to that of glucose: about 7.5. In round figures, let's say there are 10000 atoms. In addition, proteins keep their shape due to being surrounded by solvent, so let's say we need to multiply both those numbers by 10: that's 320000 electrons and 100000 atoms.
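
Putting those assumptions into a few lines (same rough numbers as above, nothing more):

```
# Rough particle counts for one solvated haemoglobin molecule.
mass_daltons     = 64000
protons          = mass_daltons // 2      # protons ~ neutrons -> 32000 protons
electrons        = protons                # neutral molecule  -> 32000 electrons
mean_atomic_mass = 7.5                    # roughly glucose-like composition
atoms            = round(mass_daltons / mean_atomic_mass)   # ~8500, call it 1E4

solvent_factor = 10                       # crude allowance for surrounding water
print(atoms * solvent_factor, "atoms,", electrons * solvent_factor, "electrons")
# -> roughly 100000 atoms and 320000 electrons for one solvated protein
```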

Now you might hope to make some simplifications regarding the charge interactions and be able to predict the shape of the molecule, and maybe even the binding of O2 (though that is rather dependent on the iron atom where the O2 is bound, so you might prefer to rely on known data for that). This type of thing is indeed done, and it is one way of trying to find suitable drug molecules that bind to receptors. But when I was in the industry 15 years ago, it was far more fashionable to use automation to physically synthesize and screen vast numbers of potential drug substances.

Actually trying to make a quantum model of this from first principles would be vastly more complex, not least because quantum mechanics is statistical, so it would probably require a Monte Carlo computational approach: you might have to consider hundreds of molecules. Indeed, in the comments @lemon states that 1E4 atoms is about as far as they can go, and probably with substances a lot simpler than haemoglobin. I think it would be an achievement just to accurately model the binding site containing the iron with quantum mechanics.

To put this in perspective, let's see how many atoms there are in a single bit of memory. According to Wikipedia, 128 GByte is now possible, which is about 1 terabit (this is DRAM, so these are capacitors, not transistors). Let's assume the die weighs 0.028 g and therefore contains a millimole of silicon. So we have 6.022E23 atoms/mol * 0.001 mol ≈ 6E20 atoms to store 1E12 bits. That's roughly 6E8 atoms per bit. Once you start thinking about the number of bits you need to represent an electron, you realise you need megatonnes of silicon just to do the most basic simulations of a milligram of matter.
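
The same estimate in a couple of lines (die mass and capacity as assumed above):

```
# Atoms of silicon per stored bit for a 128 GB (~1E12 bit) DRAM die.
avogadro   = 6.022e23
die_mass_g = 0.028              # ~1 millimole of silicon (molar mass ~28 g/mol)
atoms      = avogadro * die_mass_g / 28.0
bits       = 1e12

print(f"{atoms:.1e} atoms for {bits:.0e} bits -> {atoms / bits:.0e} atoms per bit")  # ~6e8
```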

0

The scales and scopes of the models we do have are far larger than the cellular level. Further, while there are skeletal models for animal bodies, the models for their motion are very much top-down rather than bottom-up. That is, an actual human's motions will be recorded and interpolated into the model, or an animator will pose the model in keyframes and the software will interpolate how to move the body from keyframe to keyframe. There are some cases where some level of bottom-up modelling is done, such as ragdoll modelling of bodies falling and colliding with environments.
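
For a sense of how top-down that keyframe approach is, here is a minimal sketch (hypothetical joint names and poses, linear blending only; real engines use splines and quaternions, but the principle is the same):

```
# Top-down character animation in miniature: interpolating joint angles between keyframes.
keyframe_a = {"hip": 0.0, "knee": 10.0, "ankle": -5.0}    # degrees, at t = 0.0
keyframe_b = {"hip": 45.0, "knee": 60.0, "ankle": 15.0}   # degrees, at t = 1.0

def interpolate(pose_a, pose_b, t):
    """Linearly blend two poses; no muscles, nerves, or physics involved."""
    return {joint: (1 - t) * pose_a[joint] + t * pose_b[joint] for joint in pose_a}

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, interpolate(keyframe_a, keyframe_b, t))
```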

In a real animal body, every motion has some force input to more or less every part of the body. Even a full muscular model of a leg, let alone a hand, would be amazingly complex, and the number of possible nervous system inputs to it would be enormous; it would also be of limited use without the rest of the body, the circumstances, the mind... There is essentially no way to model animal motion truly from the bottom up.

Dronz
  • 159