A few weeks ago, I gave a talk on some fundamentals of stochastic geometry as part of the DRP at my university. Specifically, I talked about various spatial point processes and a particular model called the Boolean model, which is regarded as the bread and butter of the entire field. Here, I’ll try to incorporate as much as I was able to cover during my chalk talk, along with some nice visualization tools to hopefully make the material a bit easier to digest.

So what exactly is a point process? Formally, **a spatial point process (p.p.), Φ, is a random, finite or countably infinite collection of points in the space R^{d}, without accumulation points** (which just means the points don’t pile up anywhere: every bounded region contains only finitely many points of the process). We can think of Φ as a sum of Dirac measures (a measure being a generalization of physical concepts we’re familiar with, like length, area, and volume; it’s a way to size objects). The Dirac measure δ_{x} has total size 1, and it assigns size based solely on whether a set contains the fixed point *x* or not; it’s one way of formalizing the Dirac delta distribution. That is,

Φ = ∑_{i} δ_{x_{i}}, where δ_{x}(A) = 1 if *x* ∈ A and 0 otherwise.

I like to just think of a p.p. as simply a collection of points in a set, since Φ is a random counting measure: Φ(A) = *n*, where *n* is the number of points of the process that fall in the set A.

Now, one point process I focused on in particular is the Poisson p.p. It’s the simplest one there is, and we can construct it as follows: a specific Φ, with intensity measure Λ (which we can think of as how often/how fast these points will appear in the space; if you are familiar with 1D stochastic processes, consider Λ(d*x*) = λ d*x*, which gives a homogeneous Poisson p.p. with a constant rate of arrivals/events), where Λ is locally finite,

Λ(A) < ∞ for every bounded A ⊂ R^{d},

is characterized by its family of finite-dimensional distributions,

P(Φ(A_{1}) = n_{1}, …, Φ(A_{k}) = n_{k}) = ∏_{i=1}^{k} e^{−Λ(A_{i})} Λ(A_{i})^{n_{i}} / n_{i}!.

When these sets, {A_{k}}, are mutually disjoint, the counts on them are independent Poisson random variables, which gives another characterization: **independence**. We can see this on the RHS of the equation, where the individual probabilities are simply multiplied together. I like to think about this in a physical sense: say we have an ensemble of particles, all prepared in the same manner, and we want to look at some statistical quantity of interest. If we assume non-relativistic (and a few other simplifying) conditions, then the behavior of one particle is independent of another. In my previous post, I created a simulation of a Poisson p.p. on a unit square:
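The figure itself lives in that earlier post, but here’s a minimal sketch of how such a simulation can be written (the function name and the intensity value below are my own choices): draw the total count from a Poisson distribution, then scatter that many points uniformly.

```python
import numpy as np

def poisson_pp_unit_square(lam, rng):
    """Sample a homogeneous Poisson p.p. with intensity lam on [0, 1]^2.

    First draw the total count N ~ Poisson(lam * area) (here area = 1),
    then scatter those N points i.i.d. uniformly over the square.
    """
    n = rng.poisson(lam)
    return rng.uniform(size=(n, 2))

rng = np.random.default_rng(0)
pts = poisson_pp_unit_square(100.0, rng)
print(len(pts))  # a Poisson(100)-distributed number of points
```

Conditioning on the count and then placing points uniformly is exactly the binomial-process construction of the Poisson p.p. on a bounded window.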

Another method of characterizing the Poisson p.p. is by using some ideas taken from Palm theory, in particular, by making use of the reduced Palm distribution, P^{!}_{x}. **What this does is condition on a point being at a particular location (denoted by P_{x}), and then you remove this point from Φ** (this is what the exclamation point denotes), and then look at the resulting distribution. For a general p.p., which may have attractive or repulsive forces between the points, you can see how conditioning on a point being at a location would affect the overall behavior of points within some finite distance of it, and removing this point from the point process would then result in a different distribution. But because a Poisson p.p. has no interaction between its points, we have an explicit relationship between the reduced Palm distribution and the original distribution. In fact,

P^{!}_{x} = P.

This is **Slivnyak’s Theorem**, and I just think it’s really cool! It’s remarkable, because for a general p.p. there’s no reason to expect a relationship this clean between the two distributions; the Poisson p.p. is quite special.
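One testable consequence of this (via the closely related Mecke equation) is that, seen from one of its own points, a Poisson p.p. still looks like a plain Poisson p.p. — so a homogeneous Poisson p.p. of intensity λ on the unit torus has exactly λ²πr² ordered pairs of distinct points within distance r, on average. Here’s a small sanity-check sketch; the function name and parameter values are my own:

```python
import numpy as np

def mean_close_pairs(lam, r, n_rep=200, seed=0):
    """Average number of ordered pairs of distinct points within toroidal
    distance r, over n_rep samples of a Poisson p.p. on the unit torus."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_rep):
        pts = rng.uniform(size=(rng.poisson(lam), 2))
        diff = np.abs(pts[:, None, :] - pts[None, :, :])
        diff = np.minimum(diff, 1.0 - diff)     # wrap distances around the torus
        d2 = (diff ** 2).sum(axis=-1)
        np.fill_diagonal(d2, np.inf)            # exclude self-pairs
        total += (d2 <= r * r).sum()
    return total / n_rep

# Slivnyak (via the Mecke equation) predicts lam^2 * pi * r^2 on average.
print(mean_close_pairs(100, 0.05), 100**2 * np.pi * 0.05**2)
```

The torus is used purely to dodge edge effects; on a window with boundary you’d need an edge correction before comparing to λ²πr².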

Let’s leave the Poisson p.p. for a moment and talk about a possible property of any p.p.: stationarity. Say we have a p.p., Φ. Recall our previous definition of Φ as a sum of Dirac measures,

Φ = ∑_{i} δ_{x_{i}}.

Now, let’s translate the whole process by a vector *v* ∈ R^{d}:

Φ + *v* = ∑_{i} δ_{x_{i} + v}.

This p.p. is **stationary** iff:

Φ + *v* =_{d} Φ for every *v* ∈ R^{d},

where =_{d} denotes equality in distribution.

That is to say, the p.p. is invariant under translation. This idea will be helpful to us later on. For now, let’s move on to another type of p.p.: consider attaching some piece of information (in R^{l}) to each point (which lives in R^{d}) of the process Φ. We call this **a marked p.p., which is a locally finite, random set of points with some random vector attached to each point**, and it is denoted with a similar notation:

Φ̃ = ∑_{i} δ_{(x_{i}, m_{i})}.

Marked p.p. are important in their own right, but I’m just gonna be using them to talk about the basis of the Boolean model. In particular, it’s based on a Poisson p.p. whose points {x_{i}} in R^{d} are called germs, together with an independent sequence of i.i.d. compact sets {Ξ_{i}} called grains,

Φ̃ = ∑_{i} δ_{(x_{i}, Ξ_{i})}.

You can see how this underlying p.p. mirrors our description of a marked p.p. However, for the latter we only considered a vector, *m_{i}*, in R^{l}. To deal with more general mark spaces, we can think of the Ξ_{i} subsets as being picked from a family of closed sets (where *m_{i}* acts like a random radius),

Ξ_{i} = B_{0}(m_{i}),

the closed ball of radius m_{i} centered at the origin.

**The associated Boolean model is the union of all grains shifted to the germs**. That is, the set-theoretic union of all balls centered at the points generated by the underlying Poisson p.p.,

Ξ_{BM} = ⋃_{i} (x_{i} + Ξ_{i}).

Now, the easiest Boolean model to work with is a homogeneous one: we say that Ξ_{BM} is homogeneous if the underlying Poisson p.p. is stationary with Λ(d*x*) = λ d*x*. So stationarity is a built-in characteristic of the homogeneous BM.
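A nice consequence of homogeneity worth playing with: the volume fraction of the homogeneous BM (the probability that a fixed point is covered by some grain) has the closed form p = 1 − e^{−λ E[|Ξ|]}. Here’s a rough Monte Carlo sketch checking this for disk grains with uniform random radii; all names and parameter values are my own choices:

```python
import numpy as np

def boolean_volume_fraction(lam, rmax, n_test=20_000, seed=0):
    """Monte Carlo estimate of the volume fraction of a homogeneous
    Boolean model, evaluated on the unit square.

    Germs: homogeneous Poisson p.p. of intensity lam, sampled on a window
    padded by rmax so grains centered just outside [0, 1]^2 can still
    reach it. Grains: disks with i.i.d. Uniform(0, rmax) radii.
    """
    rng = np.random.default_rng(seed)
    area = (1 + 2 * rmax) ** 2
    germs = rng.uniform(-rmax, 1 + rmax, size=(rng.poisson(lam * area), 2))
    radii = rng.uniform(0, rmax, size=len(germs))

    pts = rng.uniform(size=(n_test, 2))        # test points in [0, 1]^2
    covered = np.zeros(n_test, dtype=bool)
    for germ, r in zip(germs, radii):
        covered |= ((pts - germ) ** 2).sum(axis=1) <= r * r
    return covered.mean()

# Exact volume fraction: p = 1 - exp(-lam * E[pi R^2]); for Uniform(0, rmax)
# radii, E[R^2] = rmax^2 / 3. The estimate should land close to p.
lam, rmax = 600, 0.05
print(boolean_volume_fraction(lam, rmax), 1 - np.exp(-lam * np.pi * rmax**2 / 3))
```

The padding matters: sampling germs only inside the unit square would bias the coverage near the boundary downward, since grains from just outside can spill in.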

The Boolean model (along with more general germ-grain models) is used in a variety of applications: from telecommunications, to galaxy clustering, and even DNA sequencing; it’s a widely applicable model of real-world pattern formation and is surprisingly accurate given how simple it is.

This is just meant as a simple introduction to the topic. In later posts, I hope to talk about some applications of the Boolean model (and other related concepts) to statistical physics. Anyway, I hope y’all find this stuff just as interesting as I do! And, like always, thanks for reading!