# Helmhurts

A few posts back I was concerned with optimising the WiFi reception in my flat, and I chose a simple method for calculating the distribution of electromagnetic intensity. I casually mentioned that I really should be doing things more rigorously by solving the Helmholtz equation, but then didn’t. Well, spurred on by a shocking amount of spare time, I’ve given it a go here.

UPDATE: Android app now available, see this post for details.

The Helmholtz equation is used in the modelling of the propagation of electromagnetic waves. More precisely, if the time-dependence of an electromagnetic wave can be assumed to be of the form $\sin(\omega t)$ and the dispersion relation given by $\omega = ck/n(x)$ for some refractive index distribution $n(x)$, then the electric field $E$ solves

$\nabla^2E + \frac{k^2}{n^2}E = f(x)$

where $f(x)$ is some source function. Given a source of radiation and a geometry to propagate in, in principle the Helmholtz equation can be solved for the entire radiation field $E(x,y,z)$. In practice this may not be so simple. Here I chose to model a situation in 2D, and set up a computational grid of size $N \times M$ with grid cells labelled $(i,j)$ for $1 \le i \le N$, $1 \le j \le M$. Given this discretisation, the equation above becomes

$\frac{E(i+1,j) + E(i-1,j) - 2E(i,j)}{\Delta x^2} + \frac{E(i,j+1) + E(i,j-1) - 2E(i,j)}{\Delta y^2} + \frac{k^2}{n(i,j)^2}E(i,j) = f(i,j).$

This is a linear equation in the ‘current’ cell $E(i,j)$ as a function of its 4 neighbours. Each cell has an equation describing the relationship with its neighbours, so there are $NM$ equations in $NM$ unknowns. This motivates a linear algebraic approach to the problem – if all equations can be represented as one giant matrix equation, that matrix can be inverted and an exact solution for $E$ recovered. In particular we’ll have $\mathbf{M}\mathbf{E} = \mathbf{f}$ for some matrix $\mathbf{M}$, and we can compute $\mathbf{E} = \mathbf{M}^{-1}\mathbf{f}$.
This is slightly tricky because the 2D labelling system $(i,j)$ needs to be converted to a 1D labelling system $n$, as the 2D simulation domain has to be flattened into a 1D vector. I use the translation $n = M(i-1) + j$, so that

$(1,1) \rightarrow 1 \\ \vdots \\ (1,M) \rightarrow M \\ (2,1) \rightarrow M+1 \\ \vdots \\ (N,M) \rightarrow NM.$
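In code, this translation is a one-liner. The post's code is Matlab; as a sketch, here is the same 1-based mapping in Python (the helper name `flatten_index` is mine):

```python
def flatten_index(i, j, M):
    """Map the 1-based 2D grid label (i, j) to the 1-based 1D label n = M(i-1) + j."""
    return M * (i - 1) + j

# the examples from the mapping above, for an N = 4 by M = 7 grid
assert flatten_index(1, 1, M=7) == 1        # (1, 1) -> 1
assert flatten_index(1, 7, M=7) == 7        # (1, M) -> M
assert flatten_index(2, 1, M=7) == 8        # (2, 1) -> M + 1
assert flatten_index(4, 7, M=7) == 4 * 7    # (N, M) -> NM
```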

A pair of cells $(i,j)$ and $(i+1,j)$ are then separated by $M$ in this new labelling system, and a pair of cells $(i,j)$ and $(i,j+1)$ by 1.
The row in the matrix equation corresponding to the $(i,j)$th cell looks like

$\left(\begin{array}{ccccccccc} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \dots & \frac{1}{\Delta x^2} & \dots & \frac{1}{\Delta y^2} & -\frac{2}{\Delta x^2} - \frac{2}{\Delta y^2} + \frac{k^2}{n(i,j)^2} & \frac{1}{\Delta y^2} & \dots & \frac{1}{\Delta x^2} & \dots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{array}\right) \left(\begin{array}{c} \vdots \\ E(i-1,j) \\ \vdots \\ E(i,j-1) \\ E(i,j) \\ E(i,j+1) \\ \vdots\\ E(i+1,j)\\ \vdots \end{array}\right)= \left(\begin{array}{c} \vdots \\ f(i-1,j) \\ \vdots \\ f(i,j-1) \\ f(i,j) \\ f(i,j+1) \\ \vdots\\ f(i+1,j)\\ \vdots \end{array}\right)$

where there are $M$ blank cells between the $1/\Delta x^2$ and $1/\Delta y^2$ cells. In fact, the vast majority of the matrix is zero, which helps when considering its sheer size. For this problem a room is approximately 5 m across, and the wavelength to resolve is around 5 cm. We will require $N > 500$ or so then, which means the number of matrix elements is around $10^{10}$! Storing each as a single-precision number would require tens of gigabytes of RAM, and my poor battered laptop wouldn’t have a chance of inverting that matrix even if it would fit in memory.
Fortunately this problem has crept up on people cleverer than I, and they invented the concept of the sparse matrix, or a matrix filled mostly with zeros. We can visualise the structure of our matrix using the handy Matlab function spy – as plotted below it shows which elements of the matrix are nonzero.

Here $NM = 28$, so there are 784 elements in the matrix, but only 64 of those are nonzero: just 8% of the matrix is actually useful in computation! This drops to 4% if the resolution of our grid jumps by a factor of 10, and stays around 5% for another factor of 10. There is a special data structure in Matlab for sparse matrices, and using it speeds up calculation of inverses by orders of magnitude. In this case the code is literally a hundred times faster and uses a fraction of the memory.
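The post's solver is Matlab, but the same construction is easy to sketch with SciPy's sparse machinery. The grid size, spacing and wavenumber below are illustrative placeholders, not the values used for the flat:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def build_helmholtz(N, M, dx, dy, k, n):
    """Assemble the five-point Helmholtz stencil as a sparse NM x NM matrix.

    Neighbours outside the grid are omitted, which imposes E = 0 on the boundary.
    """
    A = sp.lil_matrix((N * M, N * M), dtype=complex)
    idx = lambda i, j: i * M + j  # 0-based row-major flattening
    for i in range(N):
        for j in range(M):
            p = idx(i, j)
            A[p, p] = -2 / dx**2 - 2 / dy**2 + k**2 / n[i, j]**2
            if i > 0:     A[p, idx(i - 1, j)] = 1 / dx**2
            if i < N - 1: A[p, idx(i + 1, j)] = 1 / dx**2
            if j > 0:     A[p, idx(i, j - 1)] = 1 / dy**2
            if j < M - 1: A[p, idx(i, j + 1)] = 1 / dy**2
    return A.tocsr()

# toy example: uniform medium, delta-function source in the centre
N = M = 40
n = np.ones((N, M))
A = build_helmholtz(N, M, dx=0.05, dy=0.05, k=2 * np.pi / 0.1, n=n)
f = np.zeros(N * M, dtype=complex)
f[(N // 2) * M + M // 2] = 1.0
E = spla.spsolve(A, f).reshape(N, M)

# only 5 diagonals are populated, so the matrix is well under 1% nonzero here
sparsity = A.nnz / (N * M) ** 2
```

Because only ~5 entries per row are nonzero, both the storage and the solve scale roughly with the number of cells rather than its square, which is what makes the flat-sized problem fit in laptop memory.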
Moving on to the physics then, what does a solution of the Helmholtz equation look like? On the unit square for large $k$, the quickest way to find out is actually to use a packet of rice – see e.g. here.
In this calculation I’ve set boundary conditions that $E = 0$ on the edges of the square, and set $f(x,y) = \delta(x,y)$. It turns out that the Helmholtz equation can also be applied to the modelling of the forced vibrations of a square plate, where the choice of conditions above equates to applying a force at the centre of the plate and clamping the edges. The rice/sand/etc. settles at the nodes of the plate, i.e. the positions which stay stationary in the oscillation.
Visualised below are a selected few pretty pictures where I’ve taken a logarithmic colour scale to highlight the positions of zero electric field – the ‘nodes’. As the animation proceeds $k$ starts high and gradually gets lower, corresponding to fast plate oscillations gradually getting slower.

Once again physics has turned out unexpectedly pretty and distracted me away from my goal, and this post becomes a microcosm of the entire concept of the blog…
Moving onwards, I’ll recap the layout of the flat where I’m hoping to improve the signal reaching my computer from my WiFi router:

I can use this image to act as a refractive index map – walls are very high refractive index, and empty space has a refractive index of 1. I then set up the WiFi antenna as a small radiation source hidden away somewhat uselessly in the corner. Starting with a radiation wavelength of 10 cm, I end up with an electromagnetic intensity map which looks like this:
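The first step – turning the floor-plan image into a refractive index map – can be sketched as a simple threshold. Here a toy array stands in for the scanned layout, and the wall index value is an assumption rather than the one used in the post:

```python
import numpy as np

# a toy stand-in for the floor-plan image: 0 = wall pixel (dark),
# 255 = empty space (light); in practice this would be loaded from the scan
plan = np.full((100, 160), 255, dtype=np.uint8)
plan[:2, :] = 0;  plan[-2:, :] = 0      # outer walls
plan[:, :2] = 0;  plan[:, -2:] = 0
plan[50:52, :80] = 0                    # an internal wall

# threshold into a refractive index map: dark pixels become walls with a
# high (assumed) refractive index, everything else is air with n = 1
n_wall = 2.4   # assumed value
n = np.where(plan < 128, n_wall, 1.0)
```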

This is, well, surprisingly great to look at, but very much unexpected. I would have expected to see some region of ‘brightness’ around the electromagnetic source, fading away into the distance, perhaps with some funky diffraction effects going on. Instead we get a wispy structure with filaments of strong field strength snaking their way around. There are noticeable black spots too, recognisable to anyone who’s shifted position in a chair and had their phone conversation dropped.

What if we stuck the router in the kitchen?

This seems to help the reception in all of the flat except the bedrooms, though it seems we would have to deal with lightning striking the cupboard under the stairs.

What about smack bang in the middle of the flat?

That’s more like it! Tendrils of internet goodness can get everywhere, even into the bathroom where no one at all occasionally reads BBC News with numb legs. Unfortunately this is probably not a viable option.

Actually, the distribution of field strength seems extremely sensitive to every parameter, be it the position of the router, the wavelength of the radiation, or the refractive index of the front door. This probably requires some averaging over parameters or a grid convergence scan, but given that it takes this laptop 10 minutes or so to conjure up each of the above images, that’s probably out of the question for now.

UPDATE

As suggested by a helpful commenter, I tried adding in an imaginary component of the refractive index for the walls, taken from here. This allows for some absorption in the concrete, and stops the perfect reflections forming a standing wave which almost perfectly cancels everything out. The results look like something I would actually expect:
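The mechanism is easy to quantify: a plane wave crossing a wall of complex index $n = n' + in''$ is attenuated in amplitude by $e^{-kn''x}$ after a distance $x$, where $k$ is the vacuum wavenumber. A quick sketch with an assumed (not the linked) index value:

```python
import numpy as np

k = 2 * np.pi / 0.1     # vacuum wavenumber for the 10 cm wavelength
n_wall = 2.4 + 0.5j     # assumed complex refractive index for concrete
x = 0.10                # 10 cm of wall

# amplitude attenuation factor e^{-k n'' x} for a plane wave e^{i k n x}
decay = np.exp(-k * n_wall.imag * x)
```

With these numbers only a few per cent of the amplitude survives a single pass through the wall, so the multiply-reflected waves that built the perfectly cancelling standing wave are strongly suppressed.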

END UPDATE

It’s quite surprising that the final results should be so sensitive, but given that we’re performing a matrix inversion in the solution, the field strength at every position depends on the field strength at every other position. This might seem to be invoking some sort of non-local interaction of the electromagnetic field, but actually it’s just due to the way we’ve decided to solve the problem.

The Helmholtz equation implicitly assumes a solution independent of time, other than a sinusoidal oscillation. What we end up with is then a system at equilibrium, oscillating back and forth as a trapped standing wave. In effect the antenna has been switched on for an infinite amount of time and all possible reflections, refractions, transmissions etc. have been allowed to happen. There is certainly enough time for all parts of the flat to affect every other part of the flat then, and ‘non-locality’ isn’t an issue. In a practical sense, after a second the electromagnetic waves have had time to zip around the place billions of times and so equilibrium happens extremely quickly.

Now it’s all very well and good to chat about this so glibly, because we’re all physicists and 90% of the time we’re rationalising away difficult-to-do things as ‘obvious’ or ‘trivial’ to avoid doing any work. Unfortunately for me the point of this blog is to do lots of work in my spare time while avoiding doing anything actually useful, so let’s see if we can’t have a go at the time-dependent problem. Once we start reintroducing $d/dt$‘s into the Helmholtz equation it’s actually not an approximation at all any more, and we’re back to solving the full set of Maxwell’s equations.

This is exactly what the FDTD technique achieves – Finite Difference Time Domain. The FD means we’re solving equations on a grid, and the TD just makes it all a bit harder. To be specific there is a simple algorithm to march Maxwell’s equations forwards in time, namely

$E_x(t + \Delta t) = E_x(t) + \frac{\Delta t}{\epsilon}\left(\frac{\partial H_z(t)}{\partial y} - \frac{\partial H_y(t)}{\partial z} - I_x(t)\right) \\ E_y(t + \Delta t) = E_y(t) + \frac{\Delta t}{\epsilon}\left(\frac{\partial H_x(t)}{\partial z} - \frac{\partial H_z(t)}{\partial x} - I_y(t)\right) \\ E_z(t + \Delta t) = E_z(t) + \frac{\Delta t}{\epsilon}\left(\frac{\partial H_y(t)}{\partial x} - \frac{\partial H_x(t)}{\partial y} - I_z(t)\right) \\ \\ H_x(t + \Delta t) = H_x(t) - \frac{\Delta t}{\mu}\left(\frac{\partial E_z(t)}{\partial y} - \frac{\partial E_y(t)}{\partial z}\right) \\ H_y(t + \Delta t) = H_y(t) - \frac{\Delta t}{\mu}\left(\frac{\partial E_x(t)}{\partial z} - \frac{\partial E_z(t)}{\partial x}\right) \\ H_z(t + \Delta t) = H_z(t) - \frac{\Delta t}{\mu}\left(\frac{\partial E_y(t)}{\partial x} - \frac{\partial E_x(t)}{\partial y}\right).$

I’ve introduced a current source $I$ and the relative permittivity/permeability $\epsilon$ and $\mu$ such that $c/n = 1/\sqrt{\epsilon\mu}$. We can use all of the machinery mentioned above to discretise the flat layout into a grid, but this time repeatedly apply the above equations. All 6 variables are stored on the grid and updated at every time step. The RAM requirements aren’t so strict in this case, which would allow me to model the full flat down to millimetre resolution, but this is just too slow. A full simulation might take all day, and even I can’t justify that. Running one at a longer wavelength $\lambda = 30$ cm allows me to drop the resolution enough to run a quick simulation before I go to bed.
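For a 2D slice this reduces to the TM-mode subset ($E_z$, $H_x$, $H_y$), and the update loop is only a few lines. A minimal Python sketch in units where $c = \epsilon = \mu = 1$, with illustrative grid and source parameters rather than the flat's geometry:

```python
import numpy as np

N = 100
dx = dy = 1.0
dt = 0.99 / np.sqrt(1 / dx**2 + 1 / dy**2)   # just under the CFL limit

Ez = np.zeros((N, N))        # field components on a staggered (Yee) grid
Hx = np.zeros((N, N - 1))
Hy = np.zeros((N - 1, N))

for step in range(200):
    # H updates from the curl of E
    Hx -= dt / dy * np.diff(Ez, axis=1)
    Hy += dt / dx * np.diff(Ez, axis=0)
    # E update from the curl of H, interior points only (Ez = 0 on the walls)
    Ez[1:-1, 1:-1] += dt * (np.diff(Hy, axis=0)[:, 1:-1] / dx
                            - np.diff(Hx, axis=1)[1:-1, :] / dy)
    # oscillating point-current source in the middle of the grid
    Ez[N // 2, N // 2] += np.sin(0.2 * step * dt)
```

A refractive index map enters by dividing the $E_z$ update by an $\epsilon(i,j)$ array, exactly as in the equations above.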

With the router in approximately the correct place, you can see the radiation initially stream out of the router. There are a few reflections at first, but once the flat has been filled with field it becomes static quite quickly, and settles into an oscillating standing wave. This looks very much like the ‘infinite time’ Helmholtz solution above, albeit at a longer wavelength, and justifies some of the hand-waving ‘intuition’ I tried to fool you with.

## 121 thoughts on “Helmhurts”

1. irnavash says:

Great simulations :) Thanks for sharing.

My only concern is the bandwidth of your signal. In most WLAN networks there is about 20 MHz of radio bandwidth available. You may simulate this using different subcarriers at different frequencies, as in OFDM systems. Standard WLAN OFDM supports 52 subcarriers with 312.5 kHz subcarrier spacing, i.e. from -26 to +26, avoiding zero, around the center frequency. Such a wideband simulation should have a big impact on the results.

Regards

1. Mark Browning says:

Would it be appropriate to directly model the subcarriers as 52 narrowband sources in the E-field? I don’t know much, but reading the specs and documentation about OFDM in 802.11, the subcarriers are IFFT’d into a 4 µs long time-domain signal that is actually modulated onto the carrier. Looking at a spectrogram for a WiFi frame, it looks pretty uniform over fc ± 9 MHz.

I’m trying to reproduce this myself (Helmholtz solution), but part of my problem is pinning down what to use as f(x) for the source, narrowband or otherwise. I’ve got a decent handle on the mathematics, but my EM physics is failing me…

One good thing: I doubt the refractive index of the materials involved is that intensely frequency dependent, so constant values (spatially varying) still seem ok.

1. Ooh, how are you introducing a finite bandwidth to the Helmholtz solution? For the FDTD method I added in some channels at +- 312.5 kHz as suggested, but over the length of the simulation (~ns) there wasn’t any difference – I guess there will be a modulation over timescales ~1/312.5kHz, but the simulation doesn’t get that far. If there’s a way of implementing it with the Helmholtz method then that’s the way to do it.

2. Mark Browning says:

So here is what I’ve come up with. It’s technically not the Helmholtz equation, but it is pretty close:

Assume a separable solution to the wave equation:

A(r,t) = A(r)A(t), as with Helmholtz.

Instead of assuming A(t) is of the form Re{e^(i*w*t)}, the new assumption is int(e^(i*v*t) dv, w-B, w+B), which is a perfectly reasonable linear combination of the general sinusoid shape. This is uniform in the passband region around angular frequency w, with width 2B. Probably not perfectly realistic for the modulation scheme, but at least wideband.

Solving the wave equation with A(r,t) as above produces laplace(A(r))/A(r) = (1/(c^2 A(t))) * d^2 A(t)/dt^2. Setting both sides equal to -k^2 (arbitrary choice, same as Helmholtz) yields the same spatial equation. However, the usual substitution of k = w/c doesn’t hold for A(t). Instead, we find that k^2 = (w^2 + s^2/3)/c^2. I have yet to solve the spatial Helmholtz with the modified k, but this is the direction I’m leaning.

I’ve also toyed around with various other linear recombinations of sinusoids, including sum( e^(i*v*t), v = w-26*s, w+26*s) to model 52 different subcarriers centered around w. This yields k^2 = ( w^2 + 384 s^2) / c^2 .

My intuition of the Helmholtz solution isn’t very good, but this doesn’t seem like it will appreciably change the results. 20MHz out of 2400MHz isn’t much.

It is possible that there isn’t a separable solution as I assumed in the first step above… but my intuition is that eventually a standing wave will be established which is time independent.

I’m planning on using FEniCS as my PDE solver of choice; any experience with it?

3. Mark Browning says:

Hmm… no way to edit comments, but I meant k^2 = ( w^2 + B^2/3 ) / c^2 for the uniform wideband frequency source.

4. irnavash says:

You may dig into FDTD literature for implementation ideas. With 52 narrowband channels, the summation of the EM field strength from all subcarriers should give you an upper bound for the field strength at each point. In fact, the actual field is frequency dependent.

About the OFDM spectrum: after the IFFT there are additional filters, hence the spectrum is much smoother. However, what you see in the documentation is the spectrum mask. In practice there is a deep notch in the center and the power at each subcarrier is adjusted adaptively. However, any wideband model, as you suggested, should suit this purpose.

And the outcome should be quite different between narrowband and wideband channels, depending on the environment. 20 MHz is substantially wider than the coherence bandwidth of most urban radio channels. In a small flat model with non-reflective objects and weak transmit power the difference can be small. A realistic model should incorporate some highly reflective metallic surfaces and the surrounding apartments as well.

2. I am genuinely grateful to the owner of this web site, who has shared this fantastic piece of writing here.

3. I like reading through a post that will make men and women think.
Also, many thanks for permitting me to comment!

1. That’s fantastic! I’m glad you enjoyed the blog post. How large did you manage to make a room? Memory constraints were why I avoided using a sparse Helmholtz solver in the app.

1. frederic says:

I tried bigger rooms on a desktop with more RAM today:

At 1000×1000 pixels: allocated mem ~2 GB, 45 sec for solving.

And 2000×2000 pixels: allocated ~9 GB, 330 sec for solving.

2. Stiven says:

In [23]:
f = fill(0. + 0.im, (dimX, dimy))
f[80:82, 160:162] = 1.0;

BoundsError()
 in checkbounds at abstractarray.jl:65
 in setindex! at multidimensional.jl:61

I posted that on https://groups.google.com/forum/#!topic/julia-dev/maBuEwR61NM, and I would appreciate it if you could help me with that problem.

Sorry for my English.

4. I’m not that much of an internet reader
to be honest, but your site’s really nice, keep it up!
All the best

5. Daniel says:

What effect do you think neglecting the third dimension has on the result?
Router and receiver might be at different heights, etc.

(Obviously computation time and memory are the main limitation)

1. Hassan Nas says:

There are quite strong reflections coming from the ceiling and floor. If the access point is very close to the ceiling or floor these reflections are very strong. In practice you always need a 3D model for realistic simulations. Moreover, the access point is not propagating uniformly in 3D. The antenna (combined with the device) has a pattern. If you keep the antenna vertical, propagation in the horizontal plane is much stronger than in the vertical plane, hence the walls play a bigger role than the ceiling or floor.

1. Hassan Nas says:

At the end, what is more important is the material of the walls. If your ceiling has a metallic surface or metallic structure, but the walls are made of wood and plaster, then the effect of the ceiling would be much stronger. For a precise simulation you need to model every wall, window, door and furniture (e.g. fridge) with their specific parameters.

2. Hassan Nas says:

About the height: assuming dipole antennas in the vertical direction, it is better to keep everything at the same height. The height of the device is very determining, especially with longer antennas that have a flat pattern. In close-distance line-of-sight scenarios, the effect of changing height could be much stronger than changing the location.

6. Reblogged this on The Cryptosphere and commented:
Math! Wifi! Optimization! Waaaaay more fun than it sounds! How to use math to calculate the best spot for your router, copiously illustrated and featuring the geekiest, prettiest gif you’ll see today!

8. Matthew Miller says:

Please consider posting your Matlab code on the Mathworks file exchange. Other Matlab users like myself would enjoy playing with your code.

10. Whatthefoxsay says:

Nice, looks cool. Just a few questions: why is your source this big? How did you model the antenna here, and is it made to resonate at 2.4 GHz?

How is the space discretised, i.e. how is the room gridded? You talked about gridding down to millimetre resolution but said you didn’t because of the computation time. Can you comment on how you set the time steps and space grids (dx and dy) versus wavelength?

Like

1. Thanks. The source is actually 1 grid cell in size, approximating a delta-function source. In the Helmholtz case, I specify the vacuum wavelength when building the solution matrix, where it enters as ‘k’. In the FDTD case I directly update the source function (here a current density) every time step to oscillate at 2.4 GHz. In the post I use 20 grid points per wavelength (I think…), which is really a lower limit if you want semi-decent accuracy. The time step is then set to be slightly lower than the upper bound given by the CFL condition dt = 1/(c*sqrt(1/dx^2 + 1/dy^2)). If you’re attempting this you’ll notice with an FDTD solver the phase velocity is highest for waves at 45 degrees, so if you use too low a resolution the wavefronts will look square rather than circular. For the Helmholtz case the resolution was limited by available memory on my (suffering) laptop.
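For anyone reproducing this, the numbers in that recipe work out as follows (a Python sketch; the 0.99 safety factor is an illustrative choice for "slightly lower than the upper bound"):

```python
import numpy as np

c = 3.0e8                     # speed of light, m/s
wavelength = c / 2.4e9        # 2.4 GHz carrier -> 12.5 cm in vacuum
dx = dy = wavelength / 20     # 20 grid points per wavelength
dt_max = 1 / (c * np.sqrt(1 / dx**2 + 1 / dy**2))   # CFL upper bound
dt = 0.99 * dt_max            # step slightly below the bound
```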
