around the classroom: Each paper recorded one “trial” of their successful toss
of sixteen “raindrops”. After having at least thirty trials recorded and up around
the room (all on identically-sized recording paper grids), we then entered a
period of reflection: In particular, students were asked what they noticed, and
what they wondered about.
In this phase of generating new questions to pursue, the first thing many
students noticed was that none of the experimental results looked like the
typical “one raindrop per tile, perfectly centered in each square” which so
many had suggested beforehand. In fact, the observation soon arose that
most, if not all, of the grids on display lacked a “one raindrop per tile”
result (let alone one with drops perfectly centered). This led to the obvious
connection: If there wasn’t “one raindrop per tile” on a grid, then by necessity
there must be some empty squares on that grid. Students began to wonder
how many empty squares were among their displayed experimental results:
What were the greatest and fewest numbers of empty squares? What was the
largest number of raindrops in any single square?
As students tabulated the different aspects of the results they were
interested in, based on the questions they had raised, the notion of likelihood
came up: What would happen if the whole experiment were repeated on
another day? The language of a “batch” of results was used to describe how
many trials were on display: For example, if there were thirty grids of
experimental results, we just called it a batch of thirty “trials”. If, at another
time, we generated a new batch of thirty results, how did students think the
new batch would compare to the initial batch? As an example of a specific
observation, students saw in their initial batch a grid with five empty squares,
which seemed surprising to them: Would we expect to see such a grid in
another batch of thirty results?
During the next part of the intervention, occurring on a different day,
instead of generating more data using physical experimentation, the dynamic
software “Fathom” was used (Finzer, 2000). A simulation was created in
Fathom that randomly placed sixteen dots on a 4 x 4 grid; by toggling the
animation feature, a single “trial” would unfold slowly, with the dots
appearing one at a time. In showing students the animation of a single trial, it was vital for
students to question the veracity of the displayed result: How could they be
sure the computer was doing it correctly? More salient was the question: Did
the Fathom results look reasonable when compared to what the students had
just done physically? Figure 1 shows an example of the end result of a single
trial.
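For readers who want to mirror this simulation outside of Fathom, the following is a minimal sketch in Python (not the Fathom document itself; the function names and the uniform-placement assumption are for illustration only). It assumes each raindrop lands in one of the sixteen squares uniformly at random, generates a batch of thirty trials, and tallies the quantities the students asked about.

import random

def one_trial(num_drops=16, num_squares=16):
    # Assumed model: each "raindrop" falls into a square chosen uniformly at random.
    counts = [0] * num_squares
    for _ in range(num_drops):
        counts[random.randrange(num_squares)] += 1
    return counts

def empty_squares(counts):
    # Number of squares that received no raindrop in a trial.
    return sum(1 for c in counts if c == 0)

# A "batch" of thirty trials, echoing the thirty grids displayed in class.
batch = [one_trial() for _ in range(30)]
empties = [empty_squares(t) for t in batch]

print("empty squares per trial:", empties)
print("fewest / most empty squares in the batch:", min(empties), max(empties))
print("largest count in any single square:", max(max(t) for t in batch))
print("trials with five or more empty squares:", sum(e >= 5 for e in empties))

Under this assumed uniform-placement model, the expected number of empty squares in a single trial is 16 × (15/16)^16 ≈ 5.7, so such a sketch can help show how typical a grid with five empty squares actually is.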
After some discussion that led the class to accept Fathom as being just
as unpredictable as their physical models, we were then able to use Fathom to
look at many trials very quickly. In fact, whereas we had previously displayed