
Symmetries: Why Number Theory is Kind of Important

During my time as a physics student, I have often heard others in the department lament the fact that they have to take certain proof-based upper-division math courses for elective credit. Among those, number theory is a popular course that physics students take to fulfill this requirement.

However, many don’t find it particularly enjoyable; a lot of physics students are concerned with the utility of a mathematical idea, which makes sense given the discipline, & are convinced number theory has no use in application. I think this shortsightedness is unfortunate, but easily correctable: just shed some light on a common application of ideas from number theory that are prominently used in physics! That application? Group theory, which will be the content of this post.

What is a Group?

Group theory is a fancy way of describing the analysis of symmetries. A symmetry, in the mathematical sense, refers to a system which remains invariant under certain transformations. These could be geometric objects, functions, sets, or any other type of system – physical or otherwise.

Physicists typically don’t need to be told how group theory is useful; they already know this to be the case. But a lot of undergrads are not able to properly study the foundational aspects of groups because the degree plan doesn’t emphasize them, even though they underpin some of the most important physical laws – such as conservation principles – that we do, in fact, learn.

The easiest way to think of a group is as a set equipped with an operation. For this pair to indeed be a group, it must fulfill certain properties:

  • It must have a binary operation
  • The operation must be associative
  • It must contain an identity element
  • Every element must have an inverse

Binary Operation

A binary operation ∗ on a set A is a function which sends A × A (the set of all ordered pairs of elements of A) back to A. In less formal terms, it takes in two of the elements from A & returns a single element which also exists in A. Another way to look at this is that the operation in question has the property of closure.

A good example to think of is addition on the set of integers: is addition a binary operation?

[Image: some test cases for addition of integers]

It would certainly seem to be the case! It is, & we intuitively know this to be true (which is why I am going to skip the proof).
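
If you want to poke at this yourself, here’s a quick Mathematica check in the spirit of the screenshot above (the particular test cases are my own):

    IntegerQ /@ {2 + 3, -7 + 4, 0 + 12, (-5) + (-5)}
    (* {True, True, True, True}: adding integers always lands back in ℤ *)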

Associativity

Say we have a binary operation ∗ on the set A. This operation is said to be associative if, for all a, b, c ∈ A,

(a ∗ b) ∗ c = a ∗ (b ∗ c)

If we use our previous example about addition on the set of integers, we can easily see that addition on ℤ is associative. But what about subtraction?

[Image: using Mathematica, we can test whether two expressions are identical with the triple ‘=’ symbol; explicitly, the left-hand side evaluates to a - b - c, while the right-hand side evaluates to a - b + c]
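
For reference, a minimal sketch of that check (the variable names are arbitrary, but the idea matches the screenshot):

    (a + b) + c === a + (b + c)    (* True: both sides evaluate to a + b + c *)
    (a - b) - c === a - (b - c)    (* False: a - b - c vs. a - b + c *)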

So we can confidently say that subtraction on the set of integers is not associative. While it is a binary operation, <ℤ, -> still fails to be a group.

Identity Element

Suppose again that ∗ is a binary operation on A. An element e of A is said to be an identity for ∗ if, for all a ∈ A,

a ∗ e = e ∗ a = a

Using our handy example of the set of integers, ℤ, we know that <ℤ, +> does have an identity element: 0. Similarly, <ℤ, ×> also has an identity element: 1.

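A quick Mathematica-style sanity check of both claims (my own snippet, in the spirit of the earlier test cases):

    a + 0 === 0 + a === a    (* True: 0 is the identity for addition *)
    a*1 === 1*a === a        (* True: 1 is the identity for multiplication *)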

Inverses

Suppose, once more, that ∗ is a binary operation on A, with identity element e. Let a ∈ A. An element b of A is said to be an inverse of a with respect to ∗ if

a ∗ b = b ∗ a = e

The set of integers does indeed have inverses under addition as well. In fact, we are very familiar with these inverses: the inverse of any integer a is simply -a.

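Again, a tiny illustrative check (the particular integer is arbitrary):

    5 + (-5) === (-5) + 5 === 0    (* True: -5 is the additive inverse of 5 *)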

I will let you, the attentive reader, decide for yourself if you think that <ℤ, ×> also contains inverses. What about <ℤ, ->?

Subgroups

It might not seem like it, but subgroups are an important aspect of group theory. But why look at subgroups when you could just look at groups? After all, the term itself implies that it is not even a whole group…surely there must be more information contained in a regular ol’ group? That was a question I had asked myself, too; I just didn’t get the point.

Now, though, I realize that you can learn a lot about the structure of a group by analyzing its subgroups. More generally, if you want to understand any class of mathematical structures, it helps to understand how objects in that class relate to one another. With groups, this raises the question: can I build new groups from old ones? Subgroups help us answer this question.

Suppose G is a group. A subset H of G is called a subgroup of G if

  • H is non-empty
  • H is closed under the operation of G
  • H is closed under taking inverses

Non-empty

All this means is that there must be at least one element in H; it cannot be the empty set, ∅.

Closure

If H is a subgroup of G, we sometimes say that H inherits the operation of G.

Let’s look at this idea with some things we are familiar with. Particularly, let us use our handy dandy knowledge about the set of integers ℤ under addition. One subset of ℤ is the set of even integers, E = {…, -4, -2, 0, 2, 4, …}. If we add any two elements of E together, we will always end up with an element that is also in E.

That is pretty easy to see, so let’s look at a counter-example. Let us have a second subset of ℤ, T = {-3, -2, -1, 0, 1, 2, 3}. At first glance, it would seem like T is closed under addition on ℤ. However, if we add 2 and 3 together, that would result in 5. And we can see that 5 ∉ T.

Therefore, T does not inherit the operation of ℤ.
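
Here’s a tiny Mathematica check of both claims (the specific numbers are mine):

    EvenQ[12 + (-8)]               (* True: the sum of two even integers is always even *)
    MemberQ[Range[-3, 3], 2 + 3]   (* False: 5 escapes T = {-3, ..., 3} *)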

Inverses…Part Two

If H is a subset of G, then any element within H must have its respective inverse also be in H. We talked about what inverses were a bit earlier, so I am not going to re-type it.

You might be reading this and thinking, “But wait! Shouldn’t a subgroup also contain the identity element as well? Silly Jesse…you buffoon…” Have no fear! The existence of an identity element in H actually follows from the other two conditions: pick any a in H; its inverse a⁻¹ is also in H, and closure then forces a ∗ a⁻¹ = e to be in H.

Special Groups and Their Properties

There are certain groups in which interesting patterns crop up. This makes them stand out amongst other groups, thus they demand special attention be paid to them. One such group is called a cyclic group.

Let G be a group with a ∈ G. The subgroup <a> is called the cyclic subgroup generated by a. If this subgroup contains all the elements of G, then G is itself cyclic. You can see the usefulness of subgroups in full effect here: the ability to understand more about a group can come from the existence of certain subgroups.

But what exactly does it mean for a group to be generated by an element a? In short, it means that every element of the group can be written as a power (or, in additive notation, a multiple) of that single element.

More formally, G = {a^m : m ∈ ℤ}, or in additive notation, G = {ma : m ∈ ℤ}.

Let’s take a look at this idea in the context of an example: take the group <ℤ₈, +>, that is, {0, 1, 2, 3, 4, 5, 6, 7} under addition modulo 8.

[Image: addition table of the integers modulo 8 & the associated generators of the group. Each of the generators {1, 3, 5, 7}, when multiplied by the various m (mod 8), produces every element of the group.]

Think about it: if we took m × 2, we would only end up with multiples of 2 – {0, 2, 4, 6} – which does not account for every element in ℤ₈. Moreover, the same can be said about elements 4 and 6 – they do not generate every other element in the group.
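
If you’d like to verify which elements generate the group, a one-liner in Mathematica does the trick (this is my own snippet, written additively, and not the code behind the table above):

    Select[Range[0, 7], Union[Mod[# Range[0, 7], 8]] === Range[0, 7] &]
    (* {1, 3, 5, 7} *)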

Why are the generators the ones that they are? What is so special about the relationship between the generating elements of a group & the order of the group? Well, much to the dismay of physics students everywhere, the result is pretty interesting…

Relation to Number Theory

While there are multiple connections between algebraic structures – such as groups – and number theory, perhaps the most useful one comes from ideas related to the divisibility of numbers. In particular, the notion of relative primality is especially useful for understanding more about the behavior of cyclic groups.

Recall that m divides n (written m | n) if there exists an integer k such that n = mk. Also recall that an integer p > 1 is prime if p has exactly two positive divisors: 1 & p.

Now, if n & m are not both zero, then ∃ d ∈ ℤ+ which is the greatest common divisor of n & m:  gcd(n, m) = d.

When d is equal to 1, we say that n & m are relatively prime. Meaning, the only positive integer that divides both n & m is 1.

Going back to our previous example of cyclic groups, using <ℤ₈, +>, what do we notice about the generators?

[Image: the gcd between each generator of the integers modulo 8 & 8 itself is 1]

More generally, the gcd between the generators of any finite cyclic group & the order of the group will always be 1. If you did not know what the generators of a cyclic group were, you could find them using this concept.
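
This is easy to confirm in Mathematica (again my own snippet, written for ℤ₈):

    GCD[#, 8] & /@ {1, 3, 5, 7}              (* {1, 1, 1, 1} *)
    Select[Range[0, 7], CoprimeQ[#, 8] &]    (* {1, 3, 5, 7}: recovers the generators *)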

There is more to be said about the relation between number theory & group theory – such as the use of the division algorithm to prove existence of particular elements in cyclic subgroups – but I feel like I’ve already made a compelling enough case for the utility of number theory.

Like always, thanks for reading!


PostFix Notation and Pure Functions

Hey, y’all! Hope the summer is treating you well! It’s been a while since my last post, and the reason for that — as some of you may know — is that I have started working full time at an internship with Business Laboratory in Houston, TX. I will likely be talking about the projects I’ve been working on in later posts; I’m reluctant to at the moment because I’m trying not to count my chickens too early. I am going to wait until the projects are officially finished for that, but for now, I’d like to talk about a cool method of programming with the Wolfram Language that I am really fond of: the use of PostFix notation and pure functions.

I’m going to assume at least a basic familiarity with Wolfram language (likely some of y’all have stumbled upon Mathematica if you’re a science student, or have used Wolfram|Alpha at some point for a class). That said, pure functions are still kind of mysterious, so let’s talk about them.

 

What the heck is a pure function

Pure functions are a way to refer to functions when you don’t want to assign them a name; they are completely anonymous. What I mean by that is that these functions take their arguments through what we call slots, and a slot will accept any argument passed to it, whether it’s a value, a string, or an undefined symbol, and regardless of what kind of pattern the passed argument follows. To illustrate what I mean, consider:

[Image: the pure functions f and g evaluated on several kinds of arguments]

As you can see, the functions “f” and “g” don’t care whether what you’re passing through them is an undefined variable, a string, or a value. They’ll evaluate it regardless.
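
Since the screenshot isn’t reproduced here, a stand-in sketch of the same idea (the function f below is my own guess at the flavor of the example, not the exact code pictured):

    f = #^2 + 1 &;
    f[3]          (* 10 *)
    f[x]          (* 1 + x^2: happily accepts an undefined symbol *)
    f["cat"]      (* 1 + "cat"^2: or even a string, left symbolic *)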

If you’re seeing some similarities between Wolfram pure functions and, say, Python λ-functions, no need to get your glasses checked! They are essentially equivalent constructs: in practice, they both allow you to write quick single-use functions that you basically throw away as soon as you’re finished with them. It’s a staple of functional programming.

[Image: a traditionally defined function compared with an equivalent pure function]

A comparison of the traditional way of evaluating functions vs. evaluation with pure functions; the results, as you can see, are identical. Another thing to note is that traditionally, you must set the pattern in the function you are defining, hence the h[x_]: this underscore denotes a specific pattern to be passed as an argument in order for the function to evaluate. Pure functions are thus much more flexible.
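
A minimal stand-in for that comparison (the function h and the polynomial are my own choices):

    h[x_] := x^2 + 2 x      (* traditional definition: x_ is a named pattern *)
    h[4]                    (* 24 *)
    (#^2 + 2 # &)[4]        (* 24: the same computation as an anonymous pure function *)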

Last note about pure functions: it’s important to remember that when you are writing in the slots, you need to close them with the ampersand (&). This tells Mathematica that this is the end of the pure function you are working with.

 

Ok, but what about PostFix

We are familiar with applying functions to variables. We are taught this in math courses: apply this function, which acts as this verb, to this variable, which can take on this many values. In math terms, when you evaluate a function you are relating a set of inputs to a set of outputs via some specific kind of mapping. For example, if I have:

[Image: the function f(x) = x + 2 in standard mathematical notation]

What I am saying is my set of inputs is x, and when I apply the function, f(x) = x + 2, to this set of inputs, I get the corresponding set of outputs, f(x).

In Mathematica, this is typically expressed with brackets, [ ] (as you might have noticed). PostFix is fundamentally the same exact thing. It is a method of applying a function to some kind of argument. Except, with PostFix notation, expressed as //, you are now tacking the function on at the end of an evaluation. Like so:

[Image: the same function applied using PostFix notation]
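
A couple of stand-in examples of the notation (mine, not the original screenshot):

    Range[10] // Total    (* 55: Total applied postfix *)
    4 // (# + 2 &)        (* 6: the f(x) = x + 2 example, postfix style *)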

PostFix notation has its pros and cons, and like any good developer, you get to decide when it’s best to use this particular approach.

 

Putting them together

When these two methods are put together it can make for some rapid development on a project, particularly when you are importing data; you are usually importing something that has a raw, messy format (say, coming from an Excel spreadsheet or some other database). PostFix notation allows you to perform operations on this data immediately after import, and pure functions let you operate on the result of the previous step (just before PostFixing) through their slots, without rewriting it. Doing this makes debugging a breeze, because you can easily break the code apart if it fails to evaluate in order to see where the failure is occurring. Here’s an example of what I mean:

[Image: comparison of nested function calls and PostFix with pure functions for importing a CSV file]

A comparison between the function-nesting style typical of most Mathematica users and the PostFix-with-pure-functions style. You can see that I’ve included the use of two global variables, because there may be another point in development where you would like to use just specific pieces of information from previous code, and not the whole thing; e.g. the function Dataset was included solely for visualization purposes, and the pertinent information can be retrieved by just calling the variable csv if it needed to be analyzed separately at another time.
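
In lieu of the screenshot, here’s a rough sketch of that workflow; the file name “sales.csv” and the column choice are hypothetical:

    csv = Import["sales.csv", "CSV"];        (* raw rows straight from the file *)
    totals = csv // Rest // Map[#[[2]] &];   (* drop the header row, keep the 2nd column *)
    totals // Dataset                        (* Dataset is purely for visualization *)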

In conclusion

PostFix notation, coupled with the use of pure functions, accelerates your development and will help you get whatever you’re working on — whether it be an app, or some kind of homework problem — to the final stages much faster. It is not intended (by any means) to be a nice, polished piece of code; it is primarily for testing purposes, and I would advise to dress it up a bit before pushing it out for the world to see. Just one person’s opinion of course, you can ultimately do whatever you want. Anyway, I hope this has been somewhat helpful for y’all, and like always: thanks for sticking around!

Adventures with the Raspberry Pi

So a while ago, I purchased a Raspberry Pi (RPi) model B+. For those of y’all not familiar with what the RPi is, it’s essentially a microcomputer that acts as an educational tool designed to teach basic software and hardware to those of us who are a little bit new to tech. You can use it in a multitude of ways; the way in which I use it is for running small computational mathematics projects that I don’t want to run on my laptop. A lot of people also use it as a mini desktop. If you’re after more info about the capabilities of the RPi, check out the hyperlink up at the top of this post! I’ll mostly be talking about my experiences using it for independent projects. Anyway, to get started with the RPi, you need a few things. Specifically with the B+ model, it’s good to have these things:

  • HDMI cable or VGA/HDMI converter
  • ethernet cord
  • mouse
  • micro USB charger
  • monitor

You will also need a micro SD card so you can download the OS to it (you can also buy the pre-packaged OS micro SD card, but a lot of people have had problems with the OS not being packaged properly).

For me, it took a little while to get up and running. This was because I didn’t know beforehand that, when downloading the OS, you need to remove all the individual files from the downloaded folder before copying them to the empty micro SD card. I also ran into the issue of trying to use my old monitor, with which I was going the VGA/HDMI route; as it turns out, the default frequency setting for the RPi is pretty high, in fact it’s too high for the settings of my old monitor. It’s possible to overclock the RPi in the config.txt file (and in fact, here’s a good description as to what’s what when doing just that) but that totally defeats the purpose of trying to run efficient computational projects, since it requires more power when overclocking and the VGA/HDMI converter already takes up a significant amount of power.

I still think going the VGA/HDMI route is a good alternative, and I would like to be able to do that once I figure out how to get around the frequency issue. Until then, I started using my TV monitor for this purpose, since it only requires an HDMI cord. Here’s what the setup looked like:

[Images: the Raspberry Pi setup]

Please excuse the clutter, I am a simple college student trying to make do with a small space! As you can probably see, I was evaluating and simplifying some equations using the Wolfram Language (TWL) directly in the terminal; on all RPis, the full capabilities of Mathematica and the Wolfram Language are bundled with the OS. This is extremely cool, because the Wolfram Language is an incredibly powerful functional language that is based around symbolic computation. It’s used for a variety of purposes, and is thus pretty versatile! It seems Wolfram Research is really trying to get their technology out there for more people to get their hands on!

TWL in the terminal doesn’t let you view graphics, so I needed the Mathematica interface to run projects and view graphs/images. I’m mostly going to have to limit myself to very small projects (I am still testing the computational time of Mathematica on the RPi) because it seems that Mathematica runs pretty slowly. I know that on the RPi 2, Mathematica runs ~10x faster.

[Images: a Monte Carlo plot and the Mathematica interface on the RPi]

The first picture is a basic Monte Carlo random number generator with 10000 points. The second picture is just me opening up the Mathematica interface!
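
For the curious, here’s a sketch of the kind of Monte Carlo toy I was running; this particular π estimate is my own reconstruction, not necessarily the exact code in the picture:

    pts = RandomReal[{-1, 1}, {10000, 2}];                   (* 10000 random points in the square *)
    4. Count[pts, {x_, y_} /; x^2 + y^2 <= 1]/Length[pts]    (* fraction inside the unit circle, times 4, approximates Pi *)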

As you can see, I accessed Mathematica via the desktop; when you boot up the RPi, it defaults to a large terminal window. However, you can access the desktop with the command startx.

[Image: the Raspberry Pi desktop]

Here’s what the desktop looks like!

One of the other issues I ran into, when trying to write in TWL, is the configuration of the keyboard. The default is a standard British keyboard, I’m pretty sure, because the “#” symbol was replaced with the “£” symbol. This is easily fixed by accessing raspi-config and following the instructions for changing the keyboard there.

As I continue to explore the capabilities of the RPi for computational mathematics projects, I’ll be sure to keep y’all updated as to what I learn/observe! Thanks for sticking around!

Poisson Processes as a Limit of Shrinking Bernoulli Processes (aka: pt.2)

So what exactly are Poisson processes useful for? Consider buses arriving at a stop. Whenever a bus arrives, a person enters the bus. The amount of time a person waits for the bus (that is, the amount of time before and up until an event occurs) is a random variable with some probability distribution. We can model bus arrivals through a sequence of these random variables (which constitute the interarrival times). Specifically, we can approximate a Poisson process from this, provided our conditions fulfill various properties and definitions.

The previous post about Poisson processes more or less described how they behave, rather than giving a definitive grasp of what they are in terms of their defining properties. Here we go the other way: by starting from a set of definitions, one can construct a process that satisfies all of them, and then describe that process accordingly.

Definition: a Poisson counting process, N(t), is a counting process (that is, the aggregate number of arrivals by time t, where t > 0) that has the independent and stationary increment properties and fulfills the following properties:

1) P[N(t + δ) - N(t) = 0] = 1 - λδ + o(δ)
2) P[N(t + δ) - N(t) = 1] = λδ + o(δ)
3) P[N(t + δ) - N(t) ≥ 2] = o(δ)

1) says that the probability of no arrivals is 1-λδ = 1-λ2^-j

2) says that the probability of exactly one arrival is λδ = λ2^-j

3) says that the probability of 2 or more arrivals is the higher order term upon expansion, which is essentially negligible.

Basically, what these properties are telling us is that there can be no more than one arrival at a time! The term 2^-j is really important, and we’ll see why in a little bit.

So how does this relate to a Bernoulli process? Well, Poisson processes are continuous time processes, and thus have an uncountably infinite set of random variables. This is particularly difficult to work with, so we approximate a continuous time process with a discrete time process. In this case, we can use a Bernoulli process, with some extra parameters, to approximate a Poisson process (as it turns out, the Bernoulli process is the correct approximation to use, and we’ll see why in a bit).

A Bernoulli process is an IID sequence of binary random variables, i.e., {X_1, X_2, X_3, …, X_n, …} for which p_X(1) = p and p_X(0) = 1-p. So if a random variable has the value of 1, that is X_i = 1, there is a bus arrival at time i. If X_i = 0, there is no arrival at time i.
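
If you want to see one of these in action, a one-line Mathematica simulation does it (p = 0.3 and the length 20 are arbitrary choices of mine):

    RandomVariate[BernoulliDistribution[0.3], 20]    (* e.g. {0, 1, 0, 0, 1, ...}; a 1 marks an arrival in that slot *)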

To make this process useful for approximating a Poisson process, we “shrink” the time scale of the Bernoulli process s.t. there exists a j > 0, where X_i has either an arrival or no arrival at time i·2^-j.  This new time, i·2^-j, comes from the fact that we are splitting the length of the interarrival period i in half, and then half again, and so on, where each new slot has half the previous arrival probability.

To give a visual understanding of this, imagine that this is our original distribution timeline:

[Image: timeline of the original Bernoulli process]

For the first Bernoulli process (imagine there is a 1 as a superscript for each rv). Now, if we split the length of the interval in half,

[Image: the same timeline with each slot split in half]

we get the 2nd Bernoulli process. This can go on and on, j times. This is done in such a way that the rate of arrivals per unit time stays λ, thus X_i ~ Bernoulli(λ2^-j).

So for each j, the “j-th” Bernoulli process has an associated counting process,

N_j(t) = X_1 + X_2 + … + X_(t·2^j)  (the number of arrivals among the first t·2^j slots), with

p_(N_j(t))(n) = (t·2^j choose n) (λ2^-j)^n (1 - λ2^-j)^(t·2^j - n),

which is a binomial PMF.

Now we’re ready to tackle Poisson’s Theorem, which says:

Consider the sequence of shrinking Bernoulli processes with arrival probability λ2^-j and time-slot size 2^-j. Then for every fixed time t > 0 and fixed number of arrivals n, the counting PMF approaches the Poisson PMF (of the same λ) with increasing j.

That is to say,

lim (j → ∞)  p_(N_j(t))(n) = ((λt)^n / n!) e^(-λt)

Proof:

[Image: proof that the binomial PMF with p = λ2^-j converges to the Poisson PMF as j → ∞]

From the proof, we can see that a shrinking Bernoulli process is the appropriate approximation for a Poisson process, because the limit of the binomial PMF converges to the Poisson PMF.
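
As a quick numerical sanity check of the theorem (the values of λ, t, and n below are arbitrary choices of mine), the binomial probabilities do creep up on the Poisson value as j grows:

    With[{λ = 2, t = 3, n = 4},
      {Table[N@PDF[BinomialDistribution[t 2^j, λ 2^-j], n], {j, {2, 6, 10}}],
       N@PDF[PoissonDistribution[λ t], n]}]
    (* the three binomial values approach the single Poisson value *)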

Poisson Processes (pt. 1)

For the last few weeks I’ve been working to wrap my head around something called Poisson processes. A Poisson process is a simple and widely used stochastic process for modeling the times at which arrivals enter a system. If y’all have taken an intro probability course, y’all might’ve heard of something called the Poisson distribution — which gives the probability of seeing exactly n arrivals in a fixed interval. Like the Poisson distribution, a Poisson process is essentially thought of as the continuous-time version of the Bernoulli process (not trying to imply the Poisson dist. is continuous, but it is the limit of the binomial dist. as the number of trials is taken to infinity!)

For a Poisson process, arrivals may occur at arbitrary positive times, and the probability of an arrival at any particular instant is zero. This means that there’s no very clean way of describing a Poisson process in terms of the probability of an arrival at any given instant. Instead, it’s much easier to define a Poisson process in terms of the sequence of interarrival times, that is, the time between each successive arrival.

This chapter has been taking me forever to get through, primarily because it’s soooo large; this book really breaks it down into manageable (and all equally important) subsections. The chapter starts by giving you a Poisson process, then describing more generally what an arrival process is, and from there, talking about the important properties of Poisson processes which make it the most elegantly simple renewal process (which are also a special type of arrival process).

There really isn’t much to know about what exactly a Poisson process is; they are characterized, perhaps predictably, by having exponentially distributed interarrival times. Explicitly, we say that

  1.  a renewal process is an arrival process for which the sequence of interarrival times is a sequence of positive independent and identically distributed (IID) random variables.
  2. A Poisson process is a renewal process in which the interarrival times are exponentially distributed; i.e., for some real λ > 0, each X_i has a density specified as f_X(x) = λexp(-λx) for all x ≥ 0.

The parameter λ is the rate of the process, and it remains constant regardless of the size of the interval.
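
To make that concrete, here’s a tiny Mathematica simulation of arrival times built exactly this way (the rate and the number of samples are arbitrary choices of mine):

    λ = 2;
    interarrivals = RandomVariate[ExponentialDistribution[λ], 10];
    arrivalTimes = Accumulate[interarrivals]    (* successive arrival epochs of a rate-λ Poisson process *)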

Probably the coolest thing about a Poisson process is that, by nature of its exponential distribution, it has a memoryless property. This means that if X is an rv denoting the waiting time until some arrival, and we know that no arrival has occurred by time t, then the distribution of the remaining waiting time X - t is the same as the original distribution of X. That is to say, the distribution of the remaining time until an arrival is exactly the same as the distribution of the original waiting time. This is denoted as:

P[X > t + x | X > t] = P[X > x]  for all x, t ≥ 0
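
A quick Mathematica verification of that identity, with arbitrary numbers of my own plugged in for λ, t, and x:

    With[{λ = 2, t = 1, x = 3},
      {Probability[Conditioned[y > t + x, y > t], Distributed[y, ExponentialDistribution[λ]]],
       Probability[y > x, Distributed[y, ExponentialDistribution[λ]]]}]
    (* both come out to E^-6 *)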

We can use this memoryless property to extrapolate certain ideas about how the distribution of the first arrival in a Poisson process behaves, as well as how this first arrival (after time t) is independent of all arrivals up to and including time t.

Following this idea of a memoryless property that is attributed to exponential processes, we can show a lot more as well, like the stationary and independent increment properties (though I’m not going to go into detail just because it’d make this post very lengthy).

There are other ways to define a Poisson process which might be more intuitive than this, but I like this way of describing what exactly it is and how we can use it to model simplistic stochastic processes. I might write up a post later on how to build up what a Poisson process is from its properties (as opposed to being given a Poisson process, and describing how it behaves from there) because I think a lot of the mathematical nuance is lost in this definition, but it’s way more practical.

APS March Meeting 2015: Impressions

I spent two days attending the American Physical Society’s March meeting in San Antonio, and wow what a time. It was all pretty overwhelming: there were so many companies, so many talks, and so many…well, people!

Day 1:

I joined in on the shenanigans on Tuesday morning: overslept and missed my first class (whoops) but eventually made my way to my bff’s apartment, where we then headed off at 11:20am to San Antonio from Austin. First step upon arrival: parking. It’s always a hassle to park in the city, and this was no exception. We decided on a parking garage with a flat rate that was close to the convention center. Anyway, so, we got there. We walked in. And we took in the smell of B.O. and science…

Registration was a breeze: if you pre-register online, you’re good to go. Highly recommend doing that; otherwise it’ll take upwards of 20 minutes to be fully registered for the meeting as well as with APS.

So basically, every instrumentation company under the sun had exhibits at the meeting, and were itching to get you to listen to their product spiel. It’s worth it — the free stuff you get is awesome. The first exhibit I hit up was OriginLab, which focuses on complex data analysis and graphing software. I, being the loyal Mathematica user that I am, was skeptical of this product’s performance. Luckily, I received a 21 day trial of their product to test out and see if I like it (expect a review of that sometime in the near future; (x,t) is TBD).

Next, my bff and I went to check out the exhibit run by the International Atomic Energy Agency (IAEA); it was weird to see them in a setting like this — generally, APS meetings focus on condensed matter physics, electronics, and materials science/engineering physics. Nuclear physics is usually separate, and many nuclear organizations have their own conferences. But it was nice nonetheless! The women in charge of the exhibit talked to us about potential internships in Vienna, and had us sign up to receive notifications of when our interests coincided with an opening!

I then got a bit tired of talking to people, so I headed towards the poster session area, where every level of physicist (undergrad, grad, postdoc) was talking about their particular field of research. This was when I also ran into some peers I had met at a previous conference! It’s really cool to make friends through conferences and to see each other grow professionally.

My bff and I then decided to check out some of the talks. The first two talks we listened to were of my choosing, and were on the topic of quantum Monte Carlo simulations of fermions and bosons. The actual physics discussed during these talks was way over my head (it assumed a pretty advanced knowledge of quantum as it pertains to dots and wells), but the mathematics was pretty easy to follow. The next couple of talks were of my bff’s choosing and were over the physics of neural systems. The physics was very elementary, in that it discussed the mechanics of action potentials that result in a traveling wave solution (I know a good deal about waves and optics; it was nice to be able to follow the physics for once!).

For the last talk we listened to, a friend of mine that came with us was giving a talk over his research. I honestly don’t remember what it was about, because it was pretty advanced stuff (and I am also trying to remember it from two days ago), but he did a good job with presenting it!

Lastly, as we were just getting ready to leave, we spotted it…the one and only…the Wolfram exhibit! It was hiding in the corner. Unfortunately, it was past 5pm so the screens illustrating the newest technologies from the company were turned off. However, I did get to speak with a couple of the representatives, and learned that Wolfram will be doing some pretty cool showcases for the upcoming SXSWedu! We exchanged contact information, and then I had to leave the conference. But not before taking a selfie:

[Image: selfie at the Wolfram exhibit]

Day 2:

My bff couldn’t come on this day to the conference, so I tagged along with another friend and we headed over there today (Wednesday) at about 11:30am, arriving at about 1:30pm. Unfortunately, due to work constraints, we could only be there for a max of two hours. I really wanted to check out the conference today because it was Industry Day! So lots of companies would be actively recruiting physicists to come work for their company. And I’m all about that.

So, we spent the time mingling with the companies within the exhibits more in depth: first, I spotted another nuclear exhibit in the sea of hardware exhibits…Oak Ridge National Laboratory (ORNL) and the National Institute of Standards and Technology (NIST)! I spoke with the woman representing NIST for quite some time; she had a lot of information to give me about internship opportunities with NIST through something called their SURF program, which allows students to get hands-on experience with the latest technologies, including working at neutron research reactors (there are only a couple in the US).

Next, we visited TeachSpin which designs instruments for experiments in advanced physics lab courses. The lady representative gave us a catalog of products, and discussed some of them. Perhaps the most interesting one was the apparatus for the experiment over optical pumping of rubidium vapor — she went really in depth on the conceptual understanding (all of which is available online for free!). Apparently, this particular apparatus was considered so aesthetically beautiful that an artist decided to buy one, enclose it in glass, and submit it to a museum. Anyway, the physics she was discussing with us was extremely fascinating, and she was so engaging and interactive that I kinda wish she was one of my professors this semester!

Lastly, we visited the APS public outreach exhibit, where the rep there gave us some free science comics aimed at middle school aged children:

[Images: the free science comics]

Why didn’t I have these when I was younger?? I also mentioned that both my friend and I were officers of UT Austin’s Undergraduate Women in Physics, and he offered to send us a box of free supplies and stuff when we perform demos for young girls! That was really generous, and I’m really stoked at having all this cool stuff the next time we do outreach activities.

To sum it all up:

Overall, it was a really fun experience, and I’m glad I went! It wasn’t as much of a networking opportunity as I’d imagined it would be, likely because of the sheer number of people (literally thousands of physicists were there), but I received a handful of contacts to email in the near future (which I should start doing pretty soon). The talks were pretty decent (most were a bit inaccessible for me as an undergraduate), but the exhibits were fun, and the free stuff…you can’t beat free stuff, seriously. Though I think the best part about the whole conference was just being able to do something different! Classes are nice and fine, but being able to mingle with people (who work in your field, no less) in a different kind of setting is a nice change of pace, and getting out of town can sometimes be needed during a stressful semester!

Anyway, if you’ve made it through this long post, then I really admire your literary willpower. Thanks for sticking around!