Project guide
The Art of Scientific Computing:
Complex Systems Project
Subject Handbook code:
COMP-90072
Faculty of Science
The University of Melbourne
Semester 1, 2019
Subject co-ordinator: A/Prof. Roger Rassool

Contents
1 Project
1.1 Software
1.2 Computational elements
1.3 Progression plan
2 Introduction
2.1 Learning outcomes
3 Sandpiles and statistics
3.1 The Bak-Tang-Wiesenfeld model
3.2 The abelian property of sandpiles
3.3 Practical statistics
3.4 Characteristic functions
4 Get coding
4.1 Setting up the sandpile
4.2 Playing with the model
5 Extensions
5.1 Extension: earthquake models
5.2 Extension: lattice gas automata
6 Assessment tasks
6.1 Tasks for your mid-semester report
6.2 Tasks for your final report
7 Further reading
1 Project
This project will acquaint students with some of the computational elements at play in numerical
simulations. To flesh out some interesting statistics (here, interesting means non-Gaussian),
the project begins with a simple sandpile model, in which very large avalanches
can occur more often than our usual statistical models would predict. The project will combine
simulations like these, conducted on a physical grid, summarise the results as a series of
statistical averages, and then analyse those averages in kind. Students will then pursue their own
extension topic, applying lessons in programming, numerical statistics and critical behaviour
to answer a question that simply cannot be handled with pen-and-paper mathematics alone.
1.1 Software
The purpose of this computational project is to understand some of the coding architecture
behind statistics-based simulations. To that end, we don’t want to obfuscate the functional
details of our sandpile programs but we don’t want to write random number generators from
scratch either. We’d like to avoid some high-level languages like Matlab or Mathematica, but
calling libraries within other languages is fine. We advise that you use one of:
Python
C or C#
Fortran
1.2 Computational elements
To complete the project, the following numerical techniques are required:
Random number generation and Monte Carlo methods.
Data collection and organisation.
Numerical integration of probability density functions.
1.3 Progression plan
This project consists of 4 steps that should be completed in order. The last of these steps
should occupy about half of your semester’s work.
1. Using a computer language of your choice, write a program that produces an N × N array
which stores integer numbers, and a routine that can manipulate the sites in this array
given an address (i, j).
2. Investigate ways to generate (pseudo-)random numbers, and apply what you learn to a
sandpile model with grains added to random sites. Add an option to your code which
drops grains with a Gaussian probability distribution (co-centred on the sandpile). A
minimal sketch of these first two steps is given after this list.
3. Write a toppling routine that searches through your sandpile appropriately and initiates
avalanches when necessary. Include in this routine (or elsewhere in your program)
something that calculates the necessary statistics of an avalanche from start to finish
(avalanche lifetime, avalanche size and so on).
4. Based on your independent research, find an extension project of your own (or ask your
demonstrator for ideas!). It must address a problem to which no closed-form answer exists,
and you should follow your scientific curiosity down any avenues that open up. Try to
stick to problems that can be modelled with an array using random number generation, can
be analysed with (non-Gaussian) statistics, and exhibit some form of critical behaviour.
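As a rough illustration of steps 1 and 2, a minimal sketch in Python (one of the suggested languages) might look like the following; the function names `make_grid`, `drop_grain_uniform` and `drop_grain_gaussian` are illustrative choices, not part of the project specification.

```python
import numpy as np

def make_grid(n):
    """Step 1: an N x N array of integers, with every site initially empty."""
    return np.zeros((n, n), dtype=int)

def drop_grain_uniform(grid, rng):
    """Step 2: add a single grain to a uniformly random site (i, j)."""
    n = grid.shape[0]
    i, j = rng.integers(0, n, size=2)
    grid[i, j] += 1

def drop_grain_gaussian(grid, rng, sigma):
    """Step 2 (option): drop a grain with a Gaussian profile centred on the table."""
    n = grid.shape[0]
    centre = (n - 1) / 2
    while True:
        i, j = np.rint(rng.normal(centre, sigma, size=2)).astype(int)
        if 0 <= i < n and 0 <= j < n:      # redraw if the grain lands off the table
            grid[i, j] += 1
            return

rng = np.random.default_rng(seed=1)
grid = make_grid(50)
for _ in range(1000):
    drop_grain_uniform(grid, rng)
```

Whether grains that would land off the table are redrawn or simply discarded is itself a modelling choice worth stating in your report.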
2 Introduction
Intuitively, we like to think that catastrophic events have equally catastrophic causes. In physics
terms, we might postulate some unusual circumstances that perturb the energy of the system
proportionally to the resulting catastrophe. An easy example to think about is the extinction
of the dinosaurs – it was a very large and high-energy meteorite that impacted the Earth and
set off a catastrophic chain of events. Or in more recent times, we might look to the Great
Depression, traced back to the Wall Street crash in October 1929. There are many examples
of big events with big causes, but we should not be led to believe that this is the only way dramatic
change can occur. Often we will be interested in large dissipative systems, usually with
many interconnected components. In systems like this, frequently we will find that system-level
change hinges on the drop of a pin or, as we will see in this prac, on a single grain of sand.
The laws of physics are well defined and absolute. A naive physicist might end the story
there. Just write down the Hamiltonian or Lagrangian and you’re set! Unfortunately, for
systems with huge numbers of degrees of freedom and many interacting components, the equations
of motion become computationally intractable. Solving for the exact dynamics might
take longer than the human race has left on earth. As a result, we need heuristic theories that
allow us to study the dynamics of complex systems. One such theory is that of self-organising
criticality.
Self-organising criticality was first proposed in 1987 by Bak, Tang and Wiesenfeld. The
theory states that a large and complex system with short range interactions naturally evolves
into a critical state. What does this mean? Well, there are different definitions, but to put it
simply, a small perturbation to an element of a system in a critical state will result in a response
that can affect any number of elements in the system. To put it even more simply, the same mechanism
that leads to a minor response can also lead to a catastrophic response. For a system to
‘naturally evolve’ into this state, it must have some slow driving mechanism, and a means of
dissipation. For example, the Earth’s crust consists of tectonic plates, slowly moving against
each other. The microscopic components can be thought of as rigid particles, each subject to
straining force. In this case the shearing movement of tectonic plates drives the system. When
the strain on one particle exceeds a critical value, the particle ‘slips’ and transfers the strain to
its neighbouring particles. Depending on the state of the surrounding particles, this dissipation
of energy could result in further particles slipping. This process, as one might imagine, can
become extremely large (as in an earthquake) or may just involve a single event that slightly
lowers the energy of the system.
Since we cannot study exact system dynamics, we must rely on statistical approaches to get
an insight into what is happening. In driven, critical systems like the ones we will be studying,
catastrophic events are often not distributed, in a probabilistic sense, as we might expect. To
really get a handle on what is happening we will need to enter the world of non-equilibrium
statistical mechanics – goodbye nice, normal distributions, and hello power law behaviour and
scale-free networks. Non-equilibrium statistics underlie how we think about systems that display
self-organising criticality. In terms of the theory itself, one important thing to understand
is the sense in which it is holistic. When a system is critical, microscopic mechanisms (local increases
in stress, for example) are not responsible for the macroscopic observables of the system
(think earthquake size, duration, area etc). In particular the proportions of large and small
events do not depend on the exact microscopic details of what is happening (where exactly local
stress increases occur, for example). Consequently, one cannot analyse the ‘inner workings’
of a system at the smallest scale, and from there try to explain the effects occurring on a larger
scale. In plain language, it is vain to hope that detailed understanding of a part of the system
will translate into system-level understanding.
Self-organising criticality as a framework has been used to study an extraordinarily diverse
range of systems. It has been used to characterise, analyse and predict earthquakes, financial
markets, evolution and extinction events, pulsar glitches, neuronal behaviour and much more.
2.1 Learning outcomes
1. An introduction to phenomenological/heuristic modelling in the study of complex systems.
2. Build, from the bottom-up, a physical simulation. Part of this involves being technically
adept (being able to write a reasonably-optimised computer program and collect the data
that arises), and part of this involves being physically astute – recognising which physical
variables are important and which ones are not. In turn, this speaks to the ability to
decide what makes this kind of model ‘good’ or ‘bad’, or at least effective or ineffective.
3. Introduce (broadly) the study of non-equilibrium statistical mechanics and specifically the
non-Gaussian statistics that underlie much of it.
4. Introduce the idea of self-organised criticality and 1/f noise, and through this, the appeal
of universality in dynamical systems.
5. Invite students to study some specific areas of science, like granular flow or the mechanisms
that underlie earthquakes.
6. Introduce some practical sampling techniques such as Monte Carlo methods and ergodic
measurements.
7. Introduce students to interdisciplinary thinking. By this, we mean being able to start off
with a mathematically-abstracted model of sandpiles and understand that with a change
of perspective it can be used to think about something like earthquakes too.
8. Introduce some basic data science skills, like finding relevant data in the public domain,
scraping this data from the web and manipulating it into a form that can be read by a
computer program, and then analysing the modelled data with reference to the empirical
data.
9. Finally, although this prac can be completed in any coding language, it lends itself to a
scripting language, so it is a good chance for students to use Matlab or learn Python.
3 Sandpiles and statistics
3.1 The Bak-Tang-Wiesenfeld model
When modelling a complex system, it is often impossible to create mathematical formalism
that is both sufficiently realistic and computationally or theoretically tractable. As a result,
researchers must come up with simple models that capture the important elements of the physical
system in question. If a model is suitable, it is sometimes possible to extrapolate findings
to inform about various observables pertaining to the original physical system. For systems
exhibiting self-organising criticality, there exists a deceptively simple paradigm: the sandpile.
There are a number of different styles of sandpile, but all share the fact that their dynamics
are governed by a set of very simple rules. The model we will be focusing on is called the
Bak-Tang-Wiesenfeld model. It is composed of discrete time indices and finite-state cellular
automata.
Consider a table surface, discretised into an N × N grid. At each site (i, j) on the table, a
number z(i, j, t) is assigned, corresponding to the number of grains of sand present at any given
time. Imagine that initially the table is empty, such that z(i, j, 0) = 0. At each timestep, a grain of
sand is added to a random site δ(i, j), such that the state Z(t) of the table at any given time can
be described as

Z(t) = Σ_t Σ_(i,j) [z(i, j, 0) + δ(i, j)].

The sandpile will continue to grow on the
table until it reaches a critical value, marking the instability of a particular site. In this case,
if any site reaches or exceeds z(i, j, t) = 4, the pile will topple, losing four grains of sand and
distributing them to its nearest neighbours. This process is called an avalanche, the properties
of which will be the study of this lab. Because we are setting four grains as the instability
threshold, we say that the critical parameter is 4. The toppling operation is described below:

z(i, j, t) = z(i, j, t) − 4 (3.1)
z(i ± 1, j, t) = z(i ± 1, j, t) + 1 (3.2)
z(i, j ± 1, t) = z(i, j ± 1, t) + 1. (3.3)
It is possible that a toppling operation will occur on the boundaries of the table, i.e., when
i and/or j lies on the edge of the grid. In this case, the sand that would topple to a site off the table is
simply deleted from the system.
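A minimal sketch of the toppling rule in equations (3.1)–(3.3), assuming a critical parameter of 4 and open boundaries where off-table grains are simply discarded (the function name and loop structure are illustrative, not prescribed):

```python
import numpy as np

def topple(grid, i, j):
    """Apply equations (3.1)-(3.3) at site (i, j): remove four grains and give
    one to each nearest neighbour; grains pushed past the edge are discarded."""
    n = grid.shape[0]
    grid[i, j] -= 4
    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
        if 0 <= ni < n and 0 <= nj < n:
            grid[ni, nj] += 1
        # else: the grain leaves the system through the open boundary

grid = np.zeros((5, 5), dtype=int)
grid[0, 0] = 4          # an unstable corner site
topple(grid, 0, 0)      # two grains go to neighbours, two fall off the table
```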
Question 1: Why is it important for the table to have finite boundaries? If we were dropping
the sand in a non-random way, how would this answer change?
The system is allowed to evolve until the slow driving mechanism of adding sand is balanced
by the dissipation mechanism of avalanching and sand falling off the table. That is, we let the
system run until we reach a statistically stationary state.
Question 2: Explain the difference between a statistically stationary state, and an equilibrium
state. Is the sandpile in equilibrium? In terms of system observables, how might we characterise
this state?
Avalanches are then recorded and statistics are built up about the distributions of their
various observables. To quantify an avalanche event, there are four main observables:
1. The avalanche size S. This quantity is measured by how many grains of sand are displaced
in a single avalanche event.
2. The avalanche lifetime T. This is the number of timesteps it takes for an avalanche to
relax the system to a critical state.
3. Avalanche area A. The number of unique sites toppled in a given avalanche. (A ≠ S).
4. Avalanche radius R. This can be measured in a number of ways, but it is essentially a
measure of the distance from the original toppling site that the avalanche reaches. For
the sake of consistency, we will define this as the maximum number of sites away from the
initial site the avalanche reaches.
An example avalanche is shown in the figure below. The avalanche in question can be
characterised by the observables S = 16, T = 4, A = 4 and R = 2.
Figure 1: An avalanche in progress.
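One possible way to drive the pile back to stability while recording the four observables is sketched below. It is illustrative only: here the lifetime counts parallel update sweeps and the radius uses the Chebyshev distance, both of which are choices you could reasonably make differently.

```python
import numpy as np

def relax(grid, i0, j0, critical=4):
    """Topple until the pile is stable again, returning the avalanche size S,
    lifetime T, area A and radius R for a grain just dropped at (i0, j0)."""
    n = grid.shape[0]
    size, lifetime, touched = 0, 0, set()
    while True:
        unstable = np.argwhere(grid >= critical)
        if len(unstable) == 0:
            break
        lifetime += 1                                  # one sweep = one timestep
        for i, j in unstable:
            grid[i, j] -= 4
            size += 4                                  # four grains displaced per toppling
            touched.add((int(i), int(j)))
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < n and 0 <= nj < n:
                    grid[ni, nj] += 1                  # off-table grains are simply lost
    area = len(touched)                                # unique sites toppled
    radius = max((max(abs(i - i0), abs(j - j0)) for i, j in touched), default=0)
    return size, lifetime, area, radius
```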
Question 3: Is there a difference between sampling a system over time and sampling an ensemble
of systems? If so, under what circumstances? You might like to think about the ergodic
hypothesis here...
3.2 The abelian property of sandpiles
As you might expect, the patterns and states that result from the Bak-Tang-Wiesenfeld model
don't replicate the dynamics of a real sandpile. There is a great deal more physics going
on in the real-world analogue, and consequently the predictions of the model are quite different.
This is an important point – the mathematical sandpile is inherently different to the physical
one. One of the main properties that makes the mathematical sandpile model so convenient to
work with when studying self-organising criticality is its Abelian property. Consider a sandpile
in the configurations in Figure (2). The sandpile undergoes an avalanche, and at the next time
step, there are two unstable sites. The question of which order we should topple the sites in
is handled by the Abelian property: it doesn't matter. In this example, it is easy to see by
testing the two cases manually, that the resulting configuration is the same regardless of which
site is toppled first. However the situation can become non-trivial when toppling one site might
induce further instabilities in the system that wouldn’t occur for the other site were it toppled
first. This can be particularly challenging when trying to track the avalanche temporally as
well. Thus the aim of this section is to prove the Abelian property of the Bak-Tang-Wiesenfeld
sandpile model, and introduce some of the mathematical formalisms underlying it. From a
broader perspective, this should demonstrate an example of the way that mathematical formalism
can still be helpful in the study of complex systems, even if our usual differential equations
are too complicated.
Figure 2: An avalanche in progress which raises questions about abelian cascades.
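The order-independence claimed above can be checked directly on small examples. A sketch of such a check, assuming the 4-grain threshold and open boundaries used earlier (the starting configuration and helper names are invented for illustration):

```python
import numpy as np
from itertools import permutations

def topple_once(grid, i, j):
    """One toppling at (i, j) with open (dissipative) boundaries."""
    n = grid.shape[0]
    grid[i, j] -= 4
    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
        if 0 <= ni < n and 0 <= nj < n:
            grid[ni, nj] += 1

def fully_relax(grid):
    """Topple unstable sites (in scan order) until the grid is stable."""
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return grid
        for i, j in unstable:
            if grid[i, j] >= 4:        # a site may have changed earlier in this sweep
                topple_once(grid, i, j)

# A configuration with two unstable sites: topple them in either order,
# relax fully, and compare the final stable configurations.
start = np.array([[3, 4, 1],
                  [4, 3, 2],
                  [0, 1, 3]])
results = []
for order in permutations([(0, 1), (1, 0)]):
    g = start.copy()
    for i, j in order:
        topple_once(g, i, j)
    results.append(fully_relax(g))
print(np.array_equal(results[0], results[1]))   # expected: True
```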
To start, let's take some of the ideas presented in the previous section and represent them
more formally. Our 'table' is now represented by the object V, which is a finite subset of Z^d,
where d is the dimension of our model. For any site x ∈ V, we introduce a configuration function
η : V → N, i.e., η(x) extracts the number of grains of sand at the position x
on the table. The configuration η itself is therefore an element of N^V. Now, a configuration η
can be either stable or unstable, depending on the size of any given η(x) in V at a given time.
As you might expect, a stable configuration corresponds to a table on which every η(x) is below
a critical parameter k, and an unstable configuration corresponds to one with at least one
value η(x) ≥ k. To formally write this, we need to introduce the concept of the toppling matrix.
The toppling matrix V_{x,y} is an operator that stabilises an unstable site x by distributing
its grains to neighbouring sites. It takes two arguments, corresponding to two sites x, y ∈ V, and
updates the height configuration of V accordingly.
The toppling matrix must satisfy the following conditions:
Question 4: For each toppling matrix condition, explain its significance and justify its necessity.
The actual definition of the toppling matrix is given by:
1. If x ∈ V then V_{x,x} = 2d.
2. If x and y are nearest neighbours then V_{x,y} = 1.
3. Otherwise, V_{x,y} = 0.
Note that by this definition, our critical parameter k is equal to 2d, where d is the dimension
of the sandpile.
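As a concrete illustration, the toppling matrix for a small patch of Z^2 can be built directly from these three rules. This is a sketch only; the way sites are ordered to index the matrix is an arbitrary choice.

```python
import numpy as np
from itertools import product

def toppling_matrix(n, d=2):
    """Build the toppling matrix for an n-per-side patch of Z^d using the rules
    above: V_{x,x} = 2d, V_{x,y} = 1 for nearest neighbours, 0 otherwise."""
    sites = list(product(range(n), repeat=d))
    index = {site: a for a, site in enumerate(sites)}
    V = np.zeros((len(sites), len(sites)), dtype=int)
    for x in sites:
        V[index[x], index[x]] = 2 * d
        for axis in range(d):
            for step in (-1, 1):
                y = list(x)
                y[axis] += step
                y = tuple(y)
                if y in index:          # neighbours outside the patch are dissipative
                    V[index[x], index[y]] = 1
    return V

print(toppling_matrix(3))               # 9 x 9 matrix for a 3 x 3 two-dimensional table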
Question 5: Explain and justify each equation in the definition of the toppling matrix.
Now that we have a rigorous and general definition for the toppling matrix, we can define
the toppling operator T_x, which maps a configuration η ∈ N^V to a new configuration T_x η.
Essentially, it chooses a site x ∈ V and alters it and its surroundings based on the value η(x),
and the proximity of each site in V. Formally, this can be written as:
Question 6: Show that Tx commutes for unstable configurations.
The above exercise is the first step in showing that this sandpile model is in fact abelian,
and if avalanches were restricted to a single branching process, we would be done! However an
avalanche begins on an unstable configuration and ends on a stable one, with the possibility of
many topplings in between. For convenience, we introduce the set Ω_V to represent all stable
height configurations. Therefore, the toppling transformation T is the set of operations that
maps an unstable configuration to a stable one:
T : N^V → Ω_V. (3.9)
Naturally, this can take multiple iterations of topplings. For example, the toppling transformation
in Figure (1) would be given by
η_{t=4} = T η_{t=0} = T_{(2,3)} T_{(1,3)} T_{(1,2)} T_{(2,2)} η_{t=0}. (3.10)
The general toppling transformation can be represented as a product of individual toppling
operators,
T η = T_{x_N} · · · T_{x_2} T_{x_1} η,
where N is the number of instabilities throughout an avalanche. There are important points to
be made here. N must not be infinite, or the system can never again reach its self-organising
critical state. This indicates the importance of boundary conditions, namely that there must
exist dissipative sites, such as those on the edges, that remove sand from the system.
Now that we have the underlying mathematical formalisms and basic theoretical work down
pat, the proof that the sandpile is indeed abelian is left as an exercise and a test of understanding!
Question 7: Prove that no matter which order we choose to perform the toppling events in,
we will always topple the same sites the same number of times during an avalanche (and thus
obtain the same final configuration).
The above question should be approached in the following way. Suppose that a certain
configuration η has more than one unstable site. In that situation, the order of the topplings is
not fixed. Clearly, if we only topple site x and site y, the order of these two topplings doesn’t
matter and both orders yield the same result. (In the physics literature, this is often presented
as a proof that T is well defined.) But clearly, more is needed to guarantee this. The problem
is that toppling x first, say, could possibly lead to a new unstable site z, which would never
have become unstable if y had been toppled first.
3.3 Practical statistics
In this section we will give the physicist’s take on the important differences between different
probability distribution functions, and then we will focus on power law distributions to talk
about scale free behaviour, rare events and 1/f noise. The probability distribution function
(pdf) of a random variable quantifies the likelihood that a particular sample of the random
variable will return a value within a given range. If x is the random variable, then its pdf p(x)
is defined so that p(x) dx equals the probability that the returned x value lies between x and
x+dx. Normalisation guarantees that the integral of the pdf over the whole domain of x is unity.
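Numerical integration of probability density functions is one of the computational elements listed in Section 1.2. A minimal sketch of checking normalisation and computing a probability over a range, using a Gaussian pdf and the trapezoidal rule (both are just convenient choices):

```python
import numpy as np

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-10.0, 10.0, 100001)          # wide enough that the tails are negligible
p = gaussian_pdf(x)
dx = x[1] - x[0]

total = np.sum(0.5 * (p[:-1] + p[1:])) * dx   # trapezoidal rule over the whole domain
print(total)                                  # ~1.0: the pdf is normalised

mask = (x >= -1.0) & (x <= 1.0)
prob = np.sum(0.5 * (p[mask][:-1] + p[mask][1:])) * dx
print(prob)                                   # P(-1 <= x <= 1) ~ 0.683 for a unit Gaussian
```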
The following exercises will invite you to work with this definition, and give you an idea of
the use and abuse of probability in the world of finance, where practitioners often fail to account
properly for the chance of catastrophic events. The data in these exercises was obtained from
Wolfram Alpha (www.wolframalpha.com) and Index Fund Advisors (www.ifa.com).
Question 8: Let’s assume that you buy into an S&P 500 index fund for a certain amount of
money. In some circumstances it is appropriate to assume that your return on investment in
a month’s time will be normally distributed. Assume that the monthly returns for the last fifty
years are independently distributed and have followed a normal distribution with mean 0.87%
and standard deviation 4.33%. What is the pdf of expected percentage returns? What is the
variance? Assuming that I invested $10,000, what is the chance that I have more than $10,300
a month later? What’s the chance that I have between $9,000 and $10,000 a month later?
Assume that in January 2009 the S&P lost roughly 11%. Under these assumptions, what’s the
chance that we have another month that is as bad or worse than this? In February 2009 the
S&P lost another 11%. Given the assumed independence, multiply through the probabilities to
estimate the likelihood that we had two straight months as bad as they were. Convert this to
a ‘We expect this bad luck once every n month pairs’ frequency statement. When we think of
other stock market crashes that have also occurred, clearly something is lost when we assume
that losses are uncorrelated in the mathematical models.
Question 9: Using the same numbers as above, write a short computer program to work out
what happens to the monthly returns after x months, answering the following questions. You
should be thinking about histograms and Monte Carlo methods. What is the pdf of expected
return on a $10,000 investment after 2 years? Just the plot is fine here. Compare this with the
pdf for 1-month returns. Assuming that I invested $10,000, what is the chance that I have more
than $10,300 two years later? What’s the chance that I have between $9,000 and $10,000
two years later? From October 2007 through to February 2009 (so 17 months) the S&P
lost 52% in aggregate. Use your program to estimate the probability of this happening. Finally,
if you bought into the S&P fund in March 2009, the return on investment after 9 months would
have been a positive 53%. Assuming only Gaussian influences at work, what is the likelihood of
this? Convert your answers into rough (because the time periods are not exactly a year) ‘once
in a ... years’ statements. Note how the Monte Carlo approach is handy here – if we wanted to
answer these kind of questions analytically, we would need to be thinking about Markov chains
and Brownian motion, and end up having to solve Chapman-Kolmogorov-esque equations. NB:
You don’t need to include your code from this section – just explain what you did.
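The kind of Monte Carlo calculation Question 9 has in mind could be organised roughly as follows. This is a sketch only, not the required solution: the mean, standard deviation and horizon are the illustrative figures quoted in Question 8, and the binning is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
n_paths, n_months = 100_000, 24
# Assumed monthly mean and standard deviation from Question 8 (0.87% and 4.33%).
monthly_returns = rng.normal(0.0087, 0.0433, size=(n_paths, n_months))
final_value = 10_000 * np.prod(1.0 + monthly_returns, axis=1)   # compounded value

print("P(more than $10,300):", np.mean(final_value > 10_300))
print("P(between $9,000 and $10,000):",
      np.mean((final_value >= 9_000) & (final_value <= 10_000)))

plt.hist(final_value, bins=200, density=True)    # empirical pdf of the 2-year value
plt.xlabel("value of a $10,000 investment after 24 months ($)")
plt.ylabel("probability density")
plt.show()
```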
We find Gaussian or normal statistics intuitive because it’s pretty common for us to see
stuff in the world around us that is clustered symmetrically around a mean value, and that
is exponentially suppressed the further away we get from that mean value. Because of this
ubiquity, and because of their nice analytic properties, Gaussian distributions are a seductive
first-pass when we come to model many situations.
As you should have seen, with rare events, often this facile assumption doesn’t cut it. Where
things go pear-shaped is usually in the tails of the distribution – the extreme events are in
reality more common than the naive assumptions predict. One way to correct for this is to use
a distribution with fatter tails, maybe a Cauchy distribution. We could also assume that at
some point in the tail the Gaussian distribution changes to a power law. Both of these corrections
take into account the relatively increased likelihood of extreme events. To see this, let’s imagine
two different pdfs with mean 0 and variance a²: a normal distribution with standard deviation a,
and a power law p(x) with exponent μ.
Here we have to take μ > 3 to get a sensible variance.
Question 10: By writing another Monte Carlo program or by brute forcing the maths, plot the
pdfs and then answer the following questions. Let y be the number of times we have to draw
from the distribution before we get a result greater than or equal to Y. For different values
of Y, expressed in terms of a sensible a value, give the resulting pdf, p_Y(y). In terms of the
standard deviation a in the normal distribution, how far into the tail of the power law p(x) do
we need to sample before we know that the probability of more extreme x values is less than
5%? How does this compare to the 95% confidence level in the normal distribution? If you
assumed that power laws more accurately modelled rare events in financial markets, how would
this knowledge affect your assessment of risk?
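If you take the Monte Carlo route in Question 10, one standard way to draw from a power-law pdf is inverse-transform sampling. A sketch, with the exponent μ and lower cutoff x_min as illustrative assumptions:

```python
import numpy as np

def sample_power_law(rng, mu, x_min, size):
    """Inverse-transform sampling from p(x) proportional to x^(-mu) for x >= x_min."""
    u = rng.uniform(size=size)
    return x_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

rng = np.random.default_rng(seed=0)
power = sample_power_law(rng, mu=3.5, x_min=1.0, size=1_000_000)
normal = rng.normal(0.0, power.std(), size=1_000_000)   # Gaussian with the same spread

threshold = 5.0 * normal.std()
print("P(x > 5 sigma), power law:", np.mean(power > threshold))
print("P(x > 5 sigma), Gaussian :", np.mean(normal > threshold))
```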
As we can see, the power law has fatter tails, in the sense that you will sample very large
magnitude events much more frequently than you would if they were governed by a Gaussian
distribution. The main thing to understand about power laws is that they are the distribution
where you can say that smaller (larger) events are more likely than larger (smaller) events, in
a very straight-forward and specific sense, and that this relative likelihood is preserved as you
move through the spectrum, or as you ‘stretch’ or scale the statistics. Let’s take p(x) = ax^(−k),
where x is the number of sites toppled in our sandpile model. What happens if we take x to
cx, so we redefine an avalanche to occur over, say, c = 2 sites? Nothing really; the relative
statistics are just shifted by a constant factor:
p(cx) = a(cx)^(−k) = c^(−k) · ax^(−k) = c^(−k) p(x).
In fact, this is one way that we can get a feel for power law behaviour in the real world – if
we see something with scaling statistics (like earthquakes!) we know that there is a power law
hiding somewhere. Equally, if we’re looking at financial time series and we see similar behaviour
at different timescales, we should be wary of jumping straight into assumptions about normal
distributions. A lot of these ideas are formalised in mathematics, physics and computer science
in the systematic study of scale-free networks.
Finally, we come to the raison d’être of self-organising criticality – a unifying explanation of
so-called ‘1/f noise’. 1/f noise is a bit of an umbrella term for phenomena which are distributed
according to a power law with exponent −1. These phenomena typically depend heavily on
the past history of the system, and are therefore predictable in some sense, but are nonetheless
subject to chance and random fluctuations. From the 1960s onwards, 1/f noise has been studied
fairly intensively, first in the context of metals and material science, and then in different
biological and evolutionary contexts too. Because it was appearing in a lot of different places,
it was and is tempting to think that there was some underlying pattern of events that just had
different physical expressions – in a philosophical sense, that there was some unifying framework
that could be used to explain disparate events. Sometimes you will see this temptation
referred to in the study of dynamical systems as ‘universality’ – the idea being that there’s a
class of systems that have broadly similar properties, independent of the detailed dynamics.
Lest you think this is a wishy-washy proposition, it’s this kind of thinking that underlies and
is formalised in studies of renormalisation and phase change (where you can compare water
boiling to iron magnetising, for example).
The Bak-Tang-Wiesenfeld model was proposed to provide an easy-to-understand and universal
template for phenomena that could give rise to 1/f noise. Subsequent work has shown that
you can map, in a precise and mathematically formal sense, systems which demonstrate 1/f
noise onto the sandpile model. Depending on your opinion of mathematical pre-eminence in
the world, we could say that self-organising critical systems demonstrate 1/f noise, or we could
say that systems demonstrate 1/f noise because they are self-organising in some sense. As you
study more physics, you might start to notice that this kind of mapping-onto-what-we-know
approach is taken quite a bit – like for example with the Ising model in statistical physics.
Question 11: Research and briefly describe the appearance of 1/f noise in a particular context
– for example in nerve cells or oxide films.
3.4 Characteristic functions
• Describe the concept of this: f(k) = ∫_{−∞}^{+∞} x^k p(x) dx. Apply it to a Gaussian pdf and to
a power-law pdf (which perhaps is Gaussian for small-ish x). Talk about how to generate
any of these numerically from data.
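A sketch of how such quantities might be estimated numerically from simulated data, assuming the moment-style definition written above (the function name and the Gaussian stand-in data are illustrative):

```python
import numpy as np

def empirical_moment(samples, k):
    """Estimate f(k), the integral of x^k p(x) dx, as the sample average of x^k."""
    return np.mean(samples ** k)

rng = np.random.default_rng(seed=0)
data = rng.normal(0.0, 2.0, size=1_000_000)       # stand-in for simulated avalanche data
for k in (1, 2, 3, 4):
    print(k, empirical_moment(data, k))           # zero-mean Gaussian: 0, sigma^2, 0, 3 sigma^4
```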
4 Get coding
The project will occur in three parts (plus an extension if you’re interested/time permitting).
The first is setting up a computational model of the original Bak-Tang-Wiesenfeld sandpile,
which was initially used to explain the self-organised-criticality-exhibiting phenomenon ‘1/f
noise’. This initial part will essentially reproduce results from the paper P. Bak, C. Tang and K.
Wiesenfeld, ‘Self-organized criticality’, Phys. Rev. A 38, 364 (1988), while familiarising you with the model.
The second part will involve playing with certain components of the model in order to change
the parameters it predicts, primarily the exponents of the power law distributions. Some of the
steps we undertake will increase the correspondence between our toy model and a ‘real’ sandpile
– it will be important to analyse the key differences.
The final section is where you will cast your net more widely. You will obtain some real
data from trusted sources, and then make sure y