The science behind the experiment.
Nine articles covering the research foundations, physical theory, experimental methodology, and honest epistemology of consciousness and random number generator research. Written to be read critically.
What the Science Shows
An honest accounting of mind-matter research — its claims, its critics, and what remains genuinely open
The Scientific Consensus Is Unambiguous
The mainstream scientific position on consciousness influencing physical reality through intention is clear: there is no credible, independently replicated evidence that it occurs. This is not a matter of institutional closed-mindedness — it is the conclusion reached after decades of investigation across physics, psychology, and neuroscience. No peer-reviewed scientific body, no national academy of sciences, and no physics institution has endorsed the hypothesis that conscious intention can measurably influence random physical systems. That position deserves to be stated plainly before anything else in this reading room.
This app is built on a hypothesis that sits outside that consensus. Users deserve to know that from the outset. The purpose of this article is not to foreclose exploration but to establish an honest evidentiary baseline — so that whatever you find in your own sessions, you can interpret it with the critical tools it deserves rather than the credulity the subject too often attracts.
The PEAR Lab: Rigorous Work, Contested Conclusions
The Princeton Engineering Anomalies Research laboratory ran from 1979 to 2007 under the direction of Robert Jahn, Dean Emeritus of Princeton's School of Engineering. Over those 28 years, the lab accumulated one of the largest databases of mind-matter interaction experiments ever assembled — more than 2.5 million trials across 340 operators. It remains the most methodologically serious sustained investigation of the hypothesis this app explores. It is also the most thoroughly criticised.
The primary methodological objection concerned the statistical unit of analysis. PEAR counted experimental series as the unit and combined results across operators and conditions in ways critics argued inflated apparent effects. When stricter per-trial analyses were applied to the raw data, reported effect sizes diminished substantially. James Alcock's detailed critique identified further concerns: insufficient blinding between operators and experimenters, inadequate documentation of baseline stability, and analysis choices made after observing data distributions. These are exactly the practices that produce false positives in any scientific domain.
The most damaging evidence came from replication. When independent groups at the Freiburg Institute and the University of Amsterdam attempted to reproduce the core PEAR protocol under tighter controls, they found no significant effect. A 2006 meta-analysis by Bösch, Steinkamp and Boller is frequently cited as support — but it also found strong evidence of publication bias. When corrections for estimated file-drawer inflation were applied, the corrected effect size approached zero. The authors themselves described the evidence as weak.
The Global Consciousness Project: Pattern-Finding Under Scrutiny
The Global Consciousness Project (1998–present) extended the RNG methodology to a planetary scale: a network of hardware RNGs running continuously at sites worldwide, testing whether moments of collective global attention produce correlated deviations across the network. Its proponents report a cumulative z-score suggesting the network is more correlated during significant world events than during control periods.
The methodological problems are substantial. Event selection was not consistently pre-registered in the GCP's early years. When event boundaries and analysis windows are defined after data is observed, even partially, the false-positive rate rises substantially above its nominal value. The September 11 pre-event signal — a reported deviation beginning four hours before the attacks — has been examined critically by statisticians, who found it consistent with expected variation given the number of statistical tests run across the full dataset. The GCP data cannot currently be distinguished from what a well-designed null result would look like given its methodology.
Where Honest Uncertainty Actually Lives
After rigorous accounting, what remains? The effect sizes claimed by PEAR — approximately 0.1% mean shift from chance — fall below the threshold where most individual studies can reliably distinguish them from statistical artefacts. The independent replication record is poor. There is no theoretical framework in physics that predicts the magnitude, range, or mechanism of the claimed effect. What survives is a dataset that has not been fully explained away despite serious and sustained critical attention over four decades.
Two genuine scientific uncertainties are worth naming separately: the hard problem of consciousness remains unsolved, and the foundations of quantum mechanics are not fully resolved. These open questions do not make mind-matter interaction probable — but they mean the complete physical picture of how mind and matter relate is genuinely unfinished. Open is not the same as probable. It is not the same as impossible either.
Why This App Exists Given All of This
Personal longitudinal data, carefully and consistently collected, has value independent of whether the population-level hypothesis is true. If you accumulate 200 well-documented sessions and find that your z-scores reliably covary with your reported focus, sleep quality, or cardiac coherence — that is a genuine observation about your own data, whatever the underlying mechanism.
The app also provides practice in something rare: sustained engagement with irreducible uncertainty. The discipline of accumulating data that neither confirms nor refutes a hypothesis, weighting null results as honestly as positive ones, and resisting the human tendency to narrativise prematurely — these habits of mind have value that transcends whether any particular hypothesis is correct. If the PEAR-style effect is real, the only way to contribute to understanding it is through exactly the kind of careful, long-term, well-documented data collection this app supports. Null results reported honestly are as scientifically valuable as positive ones.
The Observer Effect
What quantum measurement actually means — and what it has never meant
The Observer Is Not a Mind
The most important thing to understand about the quantum mechanical observer effect is what "observer" actually means in physics. It does not mean a person. It does not mean consciousness. It does not mean awareness of any kind. In quantum mechanics, an observer is any physical system that interacts with a quantum system in a way that correlates its own state with the system's properties — a photon detector, a gas molecule, a magnetic field, a photographic plate. The moment physical information about a quantum system's state is encoded in any part of the environment, quantum behaviour is suppressed. No mind is required.
This distinction matters profoundly, because a significant portion of popular science writing has been built on a systematic misreading of the word "observer." When physicists say a quantum system is disturbed by observation, they mean it has physically interacted with another system that has effectively measured it. They do not mean anyone looked at it. These two meanings have been conflated in popular literature to produce claims that bear little resemblance to actual quantum physics — not a simplification, but a fundamental misrepresentation.
What the Double-Slit Experiment Actually Demonstrates
The double-slit experiment is the most cited example of quantum weirdness, and the most commonly misrepresented. When particles pass through two slits without any detector present, they produce an interference pattern — behaving as waves passing through both slits simultaneously. When a detector is placed to determine which slit each particle passes through, the interference pattern disappears.
The critical factor is not whether a conscious mind observes the detector output. It is whether the which-path information exists anywhere in the physical world — whether any physical record has been created, even if no human ever reads it. Experiments confirm this with careful controls. A detector that records which-path information but whose output is immediately destroyed before any human sees it still eliminates the interference pattern. The determining factor is physical correlation between systems, not awareness. Quantum eraser experiments make this concrete: "erasing" which-path information restores the interference pattern through physical manipulation of entangled photons, with no conscious decision as the causal mechanism.
The Consciousness-Collapse View: A Minority Position
The idea that consciousness causes wavefunction collapse was proposed by von Neumann in 1932 and developed by Wigner in the 1960s. Von Neumann's reasoning was real philosophy confronting a real problem: the measurement chain is all, in principle, quantum mechanical, and he could not find a principled physical location for the quantum-to-classical cut. He concluded it must be at the level of conscious experience. This was serious work by a great mathematician — not an experimental finding, and never empirically validated.
The subsequent development of quantum decoherence theory provided a more physically tractable explanation: quantum superpositions do not survive contact with macroscopic environments because quantum systems rapidly become entangled with enormous numbers of environmental degrees of freedom. The quantum-to-classical transition is driven by this environmental entanglement — not by any special act of observation or any property of minds. Today the consciousness-collapse view is a minority position even within quantum foundations: a 2019 survey at a foundations conference found roughly 10–15% of respondents endorsing consciousness-related interpretations.
Radin's Double-Slit Studies: Where They Stand
Physicist Dean Radin and colleagues published a series of studies from 2012 onward claiming that meditators directing focused attention toward a double-slit apparatus produced small reductions in interference fringe visibility — as if conscious observation were partially collapsing the quantum state. These studies are frequently cited as experimental evidence that consciousness affects quantum systems.
They do not meet the evidentiary standard for that conclusion. The studies have not been independently replicated under fully blinded conditions. An independent replication attempt by Guerrer (2019) found no significant effect. The original studies have methodological concerns including insufficient blinding, small sample sizes, and analysis flexibility. The evidentiary bar for "consciousness directly alters quantum interference patterns" is, necessarily, very high. Preliminary, unreplicated studies from a single lab do not reach it.
What the Honest Picture Looks Like
There is currently no scientific evidence that human minds operate at quantum mechanical scales, that consciousness influences quantum measurement outcomes, or that the observer effect has ever required a conscious observer. These are not contested empirical claims in active scientific dispute — they are claims for which supporting evidence is, at present, simply absent.
The measurement problem is a genuine unresolved issue in physics. The hard problem of consciousness is a genuine unresolved issue in philosophy. But neither "we don't fully understand wavefunction collapse" nor "we don't fully understand subjective experience" constitutes evidence that consciousness influences quantum systems. The hypothesis this app explores does not need quantum mysticism to be worth investigating seriously. If it is ever validated, it will require its own physics — not a misreading of existing physics.
Heart Fields & Coherence
HeartMath, HRV science, and the electromagnetic heart
The Heart as an Electromagnetic Organ
The heart generates the strongest electromagnetic field in the human body — roughly 100 times greater in electrical amplitude than the brain's electroencephalographic signal, and up to 5,000 times stronger in magnetic field strength. This is standard biophysics, confirmed by magnetocardiography (MCG), a clinical imaging modality that maps cardiac magnetic fields using superconducting quantum interference devices (SQUIDs). The cardiac magnetic field extends well beyond the body surface and, under laboratory conditions, has been detected at distances of one to two metres from the chest.
The electrical component is generated by the coordinated depolarisation of approximately two billion cardiomyocytes with each heartbeat, producing a dipole field that propagates through the body's conductive tissues. The heart is the body's dominant rhythmic electromagnetic oscillator, and its variability pattern is a sensitive real-time index of autonomic nervous system state, emotional regulation, and cognitive function. This is established clinical science, taught in cardiology and used in biofeedback medicine worldwide.
Heart Rate Variability: The Science
Heart rate variability (HRV) — the beat-to-beat variation in the interval between successive heartbeats — is one of the most information-dense physiological signals accessible from the body surface. A healthy resting heart does not beat with metronomic regularity; its interval fluctuates continuously in response to respiration, blood pressure regulation, thermoregulation, emotional state, and cognitive load. These fluctuations are not noise — they are the signature of a dynamically responsive autonomic nervous system.
HRV is analysed in several domains. Time-domain measures such as RMSSD (root mean square of successive differences between adjacent R-R intervals) capture short-term parasympathetic modulation. Frequency-domain analysis decomposes the HRV signal into spectral components: high-frequency power (0.15–0.40 Hz, corresponding to respiratory sinus arrhythmia), low-frequency power (0.04–0.15 Hz, reflecting sympathetic and parasympathetic activity), and very-low-frequency power (below 0.04 Hz, associated with thermoregulation and hormonal rhythms). Reduced HRV is an independent predictor of cardiac mortality, an early marker in diabetic autonomic neuropathy, and a sensitive indicator of psychological stress. High resting HRV is associated with better executive function, greater emotional flexibility, and improved immune function. This is the consensus of decades of clinical cardiology — not HeartMath-specific research.
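As a concrete illustration of the time-domain measure named above, here is a minimal Python sketch of the RMSSD calculation, assuming you have a list of R-R intervals in milliseconds. The variable names and example values are hypothetical.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences between adjacent R-R intervals (ms).

    A standard time-domain HRV measure of short-term parasympathetic modulation.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)  # differences between successive beat-to-beat intervals
    return float(np.sqrt(np.mean(diffs ** 2)))

# Illustrative values only: a short series of R-R intervals in milliseconds.
example_rr = [812, 795, 830, 801, 788, 824, 809]
print(f"RMSSD = {rmssd(example_rr):.1f} ms")
```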
Coherence: What HeartMath Actually Measured
The HeartMath Institute, founded in 1991, built a research programme around a specific feature of the HRV signal: coherence. In HeartMath's framework, cardiac coherence refers not to HRV magnitude per se but to its rhythmic orderliness — specifically, the degree to which a single dominant frequency organises the HRV power spectrum into a smooth, regular oscillation. When individuals breathe at approximately 0.1 Hz (five to six breaths per minute), heart rate naturally entrains to the respiratory rhythm, producing a prominent peak in the HRV power spectrum at 0.1 Hz. This resonance frequency is the natural oscillation frequency of the baroreflex loop — the feedback system that regulates blood pressure through heart rate adjustment.
When breathing and heart rate oscillate together at this frequency, the autonomic nervous system operates in a high-efficiency state characterised by parasympathetic dominance, reduced cortisol and adrenaline, and improved prefrontal cortical blood flow. HeartMath's published studies documented that the high-coherence state was associated with significant reductions in perceived stress, improvements in cognitive test performance, better emotional regulation on standardised assessments, and immunological changes including increased salivary IgA. These core HRV coherence findings have been replicated by independent groups and form the basis of clinical biofeedback practice used in hospital settings.
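To make the idea of spectral concentration concrete, the sketch below computes a rough coherence proxy: the fraction of HRV power in the 0.04 to 0.26 Hz band that sits within a narrow window around the dominant peak. The band limits, window half-width, and 4 Hz resampling rate are illustrative assumptions, not HeartMath's published algorithm.

```python
import numpy as np

def coherence_proxy(hr_signal, fs=4.0, band=(0.04, 0.26), peak_halfwidth=0.015):
    """Fraction of in-band HRV spectral power concentrated near the dominant peak.

    hr_signal: an evenly resampled heart-rate (or R-R interval) series
    fs:        resampling rate in Hz (4 Hz is a common choice for HRV analysis)
    Returns a value near 1.0 when a single frequency dominates the band.
    """
    x = np.asarray(hr_signal, dtype=float)
    x = x - x.mean()                                  # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_power = power[in_band].sum()
    if band_power == 0:
        return 0.0
    peak_freq = freqs[in_band][np.argmax(power[in_band])]
    near_peak = in_band & (np.abs(freqs - peak_freq) <= peak_halfwidth)
    return float(power[near_peak].sum() / band_power)
```

A smooth 0.1 Hz oscillation driven by slow paced breathing pushes this ratio toward 1; irregular, broadband variability pushes it back toward the band average.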
The Global Coherence Initiative
HeartMath's more speculative research concerns the hypothesis that collective human emotional states produce detectable changes in Earth's geomagnetic field, and that geomagnetic activity influences human physiological states. In 2003, HeartMath launched the Global Coherence Initiative, installing magnetometers at sites worldwide to continuously monitor geomagnetic field variations. The geomagnetic field fluctuates in response to solar wind, atmospheric electrical activity, and the Schumann resonances — global electromagnetic modes of the Earth-ionosphere cavity resonating at approximately 7.83 Hz and its harmonics.
HeartMath has also published research suggesting individual HRV coherence correlates with geomagnetic conditions — that high-solar-activity periods correspond to changes in population-level HRV distributions. Some correlations are statistically significant in the published analyses; independent replication under controlled conditions is limited. Critics point to event-selection flexibility and the difficulty of distinguishing genuine effects from expected statistical variation across a continuously running measurement network. The Global Coherence Initiative represents a serious attempt to test a genuinely interesting hypothesis; its findings are preliminary and require broader independent scrutiny.
Bioelectromagnetics and Distant Intention
HeartMath researchers conducted several studies in which pairs of participants generated feelings of appreciation directed toward a nearby partner, with the receiving partner's brain activity and HRV measured without their knowledge of when the sender was actively directing attention. The studies reported synchronisation of the receiving partner's HRV and brainwave patterns with the sender's during active intention periods — termed "cardioelectromagnetic communication." William Braud and Marilyn Schlitz conducted an earlier programme examining distant mental influence on the electrodermal activity of unaware subjects, finding small but reportedly replicable effects across hundreds of trials at multiple independent laboratories.
Criticisms focus on blinding procedures and publication bias. Supporters argue that the cumulative evidence across multiple independent research groups, while individually modest, is harder to dismiss than any single study. The proposed electromagnetic transmission mechanism also faces the physical challenge that known cardiac magnetic fields attenuate with distance far too rapidly to account for effects reported across metres. If such effects exist, their mechanism is not electromagnetic in any conventional sense. This same statistical logic applies to your own longitudinal data: single sessions are nearly uninformative. Patterns across dozens of sessions under consistent conditions are where interpretable signal, if any exists, will emerge.
What the Evidence Does and Doesn't Establish
The HRV coherence research is the most scientifically secure component of this field — taught in cardiology, replicated across dozens of independent laboratories, and clinically applied in hospital biofeedback programmes. That the heart's electrical output encodes information about autonomic state, emotional regulation, and cognitive function is not contested. That deliberately induced cardiac coherence through slow paced breathing produces measurable physiological benefits is well supported.
The Global Coherence Initiative's hypothesis — that collective emotional states produce geomagnetic effects — is biologically plausible in certain specific respects but remains unsupported by evidence meeting the necessary methodological standard. The distant-intention research is the most speculative component. None of this constitutes proof of mind-matter interaction. What this body of work demonstrates is that the heart is a far more complex and informationally rich organ than classical biology appreciated — and that the boundary between physiological self-regulation and environmental influence on that regulation is an active area of genuine scientific investigation.
Quantum Interpretations
What collapse means — and whether time flows one way
Why Interpretation Matters
Quantum mechanics is extraordinarily successful as a predictive tool. Every experiment it has been asked to predict, it has predicted correctly — often to ten or more decimal places. But the theory's mathematical formalism is silent on one question: what is actually happening between measurements? This silence has given rise to a family of competing interpretations — not competing predictions, since they all agree on what will be measured, but competing pictures of what quantum reality is like when no one is looking.
Copenhagen: Pragmatic Silence
The Copenhagen interpretation, developed by Niels Bohr and Werner Heisenberg in the late 1920s, is the oldest and most widely taught. Its core instruction is almost anti-philosophical: do not ask what the quantum system is doing between measurements, because that question has no physical meaning. Only measurement outcomes are real. The wavefunction is a calculational tool, not a description of a real physical object.
This works perfectly as a recipe for doing physics. It becomes uncomfortable when you ask where "measurement" begins and ends, whether a detector counts as an observer, whether a recording device counts, or whether human consciousness is required. Copenhagen's pragmatism leaves these questions unanswered — by design.
Many Worlds: Everything Happens
The Many Worlds interpretation, proposed by Hugh Everett in 1957, takes the opposite approach: the wavefunction never collapses. Every time a quantum event has multiple possible outcomes, the universe literally branches — all outcomes occur, in different branches of a universal wavefunction. There is no collapse, no special role for observers, no measurement problem.
Many Worlds is mathematically elegant and avoids the measurement problem entirely. It has no room for mind-matter interaction of the kind RNG research proposes: in Many Worlds, you cannot influence which outcome occurs because all outcomes already occur.
The Transactional Interpretation and Retrocausality
The Transactional Interpretation, developed by physicist John Cramer in 1986, offers the most direct theoretical support for the retrocausal effects reported in PEAR's retrocognitive trials. It proposes that quantum events involve two waves: a forward-in-time "offer wave" emitted by a quantum source, and a backward-in-time "confirmation wave" emitted by the absorber. A quantum event is the "handshake" between these two waves — a transaction that spans time in both directions.
This is not a fringe position. The mathematics is rigorous and consistent with all standard quantum predictions. Retrocausality — the influence of future states on past ones — is not forbidden by physics; it is forbidden by our intuitions about time. At the quantum level, the laws of physics are time-symmetric.
The Delayed-Choice Experiment
Wheeler's delayed-choice experiment makes retrocausality tangible. In the standard double-slit setup, the decision whether to observe which slit the photon passes through is made before the photon reaches the slits. Wheeler showed — and experimenters later confirmed — that this decision can be made after the photon has already passed through the slits. The photon responds to a measurement decision that had not yet been made at the time it "chose" its path.
In a cosmic variant, Wheeler proposed using gravitational lensing from a quasar to create two paths for photons that left the quasar billions of years ago. A measurement decision made today, in principle, determines whether those ancient photons travelled as waves or particles on their billion-year journey. This is not a metaphor. It is experimental physics. The past is not fully fixed until future observations are made — at the quantum level.
What This Means for the Three RNG Protocols
In Live Quantum mode, numbers are generated at the exact tick they are consumed. If intention affects the output, the most straightforward account is real-time mind-matter interaction — present consciousness influencing the outcome of quantum events occurring simultaneously.
In Buffered Quantum mode, numbers are generated before your session begins. If deviations appear, a real-time causal account is ruled out. The Transactional Interpretation offers the only mainstream theoretical framework that accommodates this: your present intention reaching backward via a quantum handshake.
In Local Entropy mode, numbers are seeded from physical device entropy. If deviations occur in local entropy sessions and correlate with your magnetometer and RF data, you have multiple independent physical systems deviating simultaneously — the kind of multi-stream convergence that is hardest to explain as artefact.
The RNG Hypothesis
Three protocols, three experimental questions
What Random Numbers Have to Do With Consciousness
The use of random number generators in consciousness research traces to a simple but profound observation: if mind can influence matter, the most detectable signature would appear in systems that are otherwise governed by pure chance. A deterministic system is hard to "move" in a statistically detectable way because its behaviour is expected and reproducible. A random system, by contrast, has no expected trajectory. Any systematic deviation from chance behaviour stands out against the statistical background.
Live Quantum: Direct Causation
In Live Quantum mode, each tick of the session requests a fresh batch of quantum random numbers from the Cisco Outshift server. These numbers are generated from genuine quantum physical processes — their values are undetermined until the moment of measurement, collapsing from superposition at the exact instant they are requested. If intention affects the output of this mode, the most straightforward account is real-time mind-matter interaction: present consciousness influencing the outcome of quantum events occurring simultaneously.
Buffered Quantum: Testing Retrocausality
In Buffered Quantum mode, a batch of quantum numbers is fetched once at the start of your session. The quantum events that produced those numbers have already occurred. You then spend your session applying intention to a data stream that physically predates it. This is not a limitation. It is a different experimental question. If deviations appear in buffered sessions, they cannot be explained by real-time mind-matter interaction. Retrocausality follows naturally from the time-symmetry of quantum mechanics and was reported in PEAR's retrocognitive trials.
Local Entropy: Environmental Coupling
Local entropy mode uses the device's cryptographic random number generator, seeded from hardware entropy: CPU timing jitter, thermal noise from silicon, micro-fluctuations from sensors, network timing events, and touch input. The entropy pool is sourced in part from the same physical environment you are measuring with your other sensors. If your intention is producing measurable changes in the physical environment — subtle changes in movement, electromagnetic field, or device thermal state — those changes could propagate into the entropy pool and produce deviations in the local entropy stream.
A local entropy deviation that coincides with a magnetometer deviation and a WiFi RSSI fluctuation is not three independent events pointing at the same cause. It may be one physical effect appearing across three measurement channels simultaneously — which is both more explicable than quantum effects and potentially more meaningful as evidence of environmental coupling.
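For readers who want to see what "deviation in the local entropy stream" means in the simplest terms, the sketch below draws bytes from the operating system's cryptographic random source, which is seeded from hardware entropy, and counts the fraction of one-bits. It illustrates the statistic being tracked, not the app's actual implementation.

```python
import os

def ones_fraction(n_bytes=100_000):
    """Draw bytes from the OS CSPRNG (seeded from hardware entropy) and count one-bits."""
    data = os.urandom(n_bytes)
    ones = sum(bin(b).count("1") for b in data)
    return ones / (n_bytes * 8)

# Under the null hypothesis this hovers very close to 0.5.
print(f"Fraction of one-bits: {ones_fraction():.5f} (chance expectation: 0.50000)")
```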
Stochastic Resonance and Why Noise Matters
There is a counterintuitive phenomenon in physics called stochastic resonance: the addition of noise to a system can actually improve its ability to detect weak signals. This has been documented in neuroscience, ecology, and electronic signal processing. If consciousness produces an extremely weak ordering effect on physical systems — too weak to detect reliably in any single measurement — then a system that couples many physical noise sources together might amplify that weak signal. Local entropy, by harvesting noise from many physical subsystems simultaneously, is precisely such a coupled system.
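The sketch below is a minimal stochastic resonance demonstration under simplified assumptions: a weak sinusoidal signal sits below a hard detection threshold, and the correlation between threshold crossings and the hidden signal peaks at an intermediate noise level rather than at the lowest one. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
signal = 0.4 * np.sin(2 * np.pi * 1.0 * t)   # weak signal, below the threshold of 1.0
threshold = 1.0

for noise_sd in (0.3, 0.6, 1.0, 3.0):
    correlations = []
    for _ in range(50):                        # average over independent noise realisations
        noisy = signal + rng.normal(0.0, noise_sd, t.size)
        crossings = (noisy > threshold).astype(float)
        correlations.append(np.corrcoef(crossings, signal)[0, 1])
    print(f"noise sd {noise_sd:3.1f}: mean correlation with hidden signal {np.mean(correlations):+.3f}")
```

Too little noise and the signal never clears the threshold; too much and the crossings are dominated by noise; the strongest correlation sits in between. That non-monotonic curve is the signature of stochastic resonance.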
The Science of Consciousness
What we know, what we don't, and why it matters
The Hard Problem
In 1995, philosopher David Chalmers drew a distinction that has framed the field ever since. The "easy problems" of consciousness — explaining how the brain processes information, integrates sensory data, controls behaviour, reports on its own states — are tractable scientific questions. The "hard problem" is different: why is there subjective experience at all? Why does information processing feel like something? No explanation of neural mechanisms, however complete, seems to close this gap.
If we do not understand what consciousness fundamentally is, we are in no position to rule out that it has physical effects we have not yet accounted for.
Integrated Information Theory
One of the most mathematically developed theories of consciousness is Integrated Information Theory (IIT), proposed by neuroscientist Giulio Tononi. IIT defines consciousness in terms of phi (Φ) — a measure of how much information is generated by a system as a whole, above and beyond the information generated by its parts independently. Crucially, IIT predicts that consciousness is a fundamental property of information-integrating systems — not something that requires biology specifically, and not an epiphenomenon.
Orchestrated Objective Reduction
Physicist Roger Penrose and anaesthesiologist Stuart Hameroff proposed the Orchestrated Objective Reduction (Orch-OR) theory, arguing that consciousness arises from quantum computations in microtubules inside neurons. Each quantum collapse event corresponds to a moment of conscious experience. The warm, wet environment of neurons is generally considered too noisy for quantum coherence — though recent findings on quantum coherence in photosynthesis and bird navigation suggest this objection may be weaker than assumed.
Global Workspace Theory
Global Workspace Theory (GWT), developed by Bernard Baars and formalised computationally by Stanislas Dehaene, proposes that consciousness arises when information is "broadcast" widely across the brain via a global workspace. GWT has substantial empirical support from neuroimaging and clinical studies. It does not leave obvious room for mind-matter interaction effects — it simply does not accommodate them. This is the tension at the centre of this research area: the most empirically supported theories of consciousness do not explain the most intriguing anomalous data.
Panpsychism and Broader Frameworks
A minority but growing position in philosophy of mind is panpsychism — the view that some form of experience or proto-experience is a fundamental feature of reality, present at all levels of organisation. Philosophers including David Chalmers, Galen Strawson, and Philip Goff have argued for versions of this position as the most coherent solution to the hard problem. If mind is a fundamental aspect of information and matter, the sharp boundary between "inner" conscious states and "outer" physical systems becomes less clear.
Where This Leaves the Research
No theory of consciousness currently offers a satisfying mechanistic account of how intention might influence a random number generator. This is the honest state of the field. What the science does establish: consciousness is not understood. Its relationship to physical processes is not understood. The hard problem has not been solved. The most mathematically rigorous theory (IIT) implies that consciousness is a fundamental physical property. The most experimentally confirmed quantum theory contains an unresolved measurement problem. And a substantial empirical database from PEAR, the GCP, and related research documents small but persistent anomalous effects. The Advanced Lab gives you instruments to ask these questions personally, with your own data.
Running a Good Session
Environment, intention, and experimental discipline
Why Protocol Matters
The quality of your data is determined almost entirely by the consistency and care of your experimental protocol. The PEAR lab's most important methodological contribution was not its statistical analysis — it was its rigorous standardisation of experimental conditions across 28 years and 340 operators. That consistency is what allowed meaningful patterns to emerge from the noise. Consumer devices introduce noise sources that laboratory equipment does not: electromagnetic interference, accelerometer coupling, varying thermal states, battery fluctuations, network instability. You cannot eliminate these, but you can minimise them through protocol.
Environment: Magnetic Stability
Choose a location away from speakers and headphones (the magnets in transducers are strong enough to measurably deflect a phone magnetometer from a metre away); computers and monitors; microwaves and induction cooktops; large metal furniture that shifts position; vehicles passing very close. Before beginning, hold the device still for 30 seconds. If the magnetometer reading is drifting continuously rather than holding approximately steady, move away from the source of that drift.
Environment: Acoustic and RF Stability
A session conducted in a room with consistent background ambient sound is preferable to one with unpredictable acoustic events. Your baseline phase calibrates to the ambient acoustic level; abrupt changes during the experiment phase register as deviations. WiFi and Bluetooth RSSI measurements are sensitive to other devices connecting or disconnecting. For RF channel integrity, conduct sessions in a location where the number and distance of active wireless devices is approximately constant.
Physical Stillness
Seated comfortably with the device resting on a flat surface in front of you is the standard posture. The device should not be held in your hands during recording — hand tremor, grip pressure variation, and body heat transfer from palms to device all introduce noise. If you observe that your high-stillness sessions consistently produce different z-score distributions from your low-stillness sessions, that itself is interpretable data.
Intention: Framing the Mental Posture
The PEAR lab documented a consistent finding: operators who produced the strongest effects typically described their mental state not as effortful will or forceful pushing, but as a kind of relaxed resonance with the device — a feeling of being in rapport with the system rather than commanding it. Articulate your intention clearly in the pre-session note before starting. Once the session begins, do not monitor the real-time RNG output obsessively — this tends to produce feedback loops that disrupt stable intention.
Session Duration and Statistical Power
Longer sessions generate more RNG samples, increasing statistical power. The relationship follows square-root scaling: doubling your session duration increases sensitivity by a factor of approximately 1.41 (the square root of two). A 10-minute session generates enough samples for a statistically stable z-score, though no single session approaches the sensitivity needed to detect PEAR-scale effects on its own. Sessions shorter than 8 minutes produce z-scores with wide confidence intervals. For most purposes, 15–25 minutes is the practical optimum: long enough for meaningful statistics, short enough to maintain focused intention throughout.
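The square-root scaling can be made concrete with a short calculation. The sketch below computes the smallest per-tick bias detectable at the conventional |z| ≥ 1.96 threshold for several session lengths; the assumed rate of 10 RNG ticks per second is a hypothetical figure used only for illustration, not the app's actual sampling rate.

```python
import math

def min_detectable_bias(n_ticks, z_crit=1.96):
    """Smallest shift in the ones-fraction detectable at |z| >= z_crit for n fair-bit ticks.

    The ones-count has standard deviation sqrt(n)/2, so a just-detectable count shift is
    z_crit * sqrt(n) / 2, which as a fraction of n ticks is z_crit / (2 * sqrt(n)).
    """
    return z_crit / (2 * math.sqrt(n_ticks))

for minutes in (10, 20, 40):
    n = minutes * 60 * 10   # hypothetical rate of 10 ticks per second
    print(f"{minutes:2d} min ≈ {n:6d} ticks → detectable bias ≈ {min_detectable_bias(n) * 100:.2f}%")
```

Each doubling of duration shrinks the detectable bias by the same factor of roughly 1.41 described above.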
Control Sessions and Baseline Comparison
The most rigorous personal experimental design includes control sessions: sessions run under identical environmental conditions, same duration, same device placement — but with a deliberate absence of intention. If your intention sessions consistently produce higher z-scores than your control sessions across a matched set, that is a within-person controlled comparison. Use the session intention field consistently ("no intention" vs your specific intention phrase) to enable this comparison in your own vault records.
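If you export per-session z-scores from your vault, a permutation test is one simple way to compare intention sessions against matched control sessions without distributional assumptions. A minimal sketch, with hypothetical numbers, follows.

```python
import numpy as np

def permutation_test(intention_z, control_z, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in mean z-score between conditions."""
    rng = np.random.default_rng(seed)
    a = np.asarray(intention_z, dtype=float)
    b = np.asarray(control_z, dtype=float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:a.size].mean() - pooled[a.size:].mean()
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical per-session z-scores, one entry per session.
intention = [0.8, -0.3, 1.4, 0.2, 0.9, -0.1, 1.1]
control = [0.1, -0.6, 0.4, -0.9, 0.3, -0.2, 0.0]
diff, p = permutation_test(intention, control)
print(f"mean difference = {diff:+.2f}, permutation p = {p:.3f}")
```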
The Value of Null Results
Why finding nothing is finding something
The Most Undervalued Outcome in Science
A null result is a session in which your RNG output stays close to theoretical chance — z-scores hovering near zero, no consistent directional bias. In everyday language, "nothing happened." In scientific language, a null result is a result. It is data. It carries exactly as much information as a positive result — sometimes more.
The historical record of parapsychology research has been severely distorted by the file drawer problem: positive results get submitted for publication; null results sit unpublished. This creates a systematic bias toward positive findings. In your own longitudinal data, the equivalent bias would be if you unconsciously gave more weight to sessions with striking z-scores and dismissed sessions where nothing unusual appeared. This is a natural cognitive tendency — it is also the most direct route to fooling yourself.
What a Null Result Actually Tells You
A consistent pattern of null results contains several possible interpretations: the effect does not exist for you (PEAR documented strong individual variation — some operators showed nothing across many trials); the effect exists but your current protocol is not sensitive to it; the effect is real but inconsistent (many biological phenomena show intra-individual variability); or the effect does not exist in the way the hypothesis frames it. A sustained series of well-conducted sessions with consistently null results is evidence against the strong version of the hypothesis.
The Replication Standard
Modern science has grappled seriously with replication since the 2010s replication crisis, in which a systematic attempt to replicate 100 published psychology studies found fewer than 40% reproduced their original findings. The standard that emerged: findings should be pre-registered, conducted with adequate statistical power, and independently replicated. For your personal data, the replication standard translates to: do not interpret a single striking session as confirmation. Build toward a personal dataset large enough that the pattern, if real, emerges reliably.
Bayes, Not Binary
The conventional p-value framework is frequently misunderstood. A p-value below 0.05 does not mean there is a 95% probability that the effect is real. It means that if pure chance were operating, you would observe a result this extreme about 5% of the time. Bayesian reasoning is more coherent for personal longitudinal data: given your prior belief about the probability of this effect, and given this new data, what should your updated belief be? Each session shifts your probability estimate. Accumulate evidence across many sessions, weight null results equally with positive results, and let your interpretation evolve as the data warrants.
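One minimal way to run this kind of Bayesian bookkeeping is to compare two simple hypotheses per session: pure chance (p = 0.5) against a small fixed bias. The sketch below updates the odds session by session; the bias value of 0.503, the prior odds, and the session counts are all illustrative assumptions, not estimates of any real effect.

```python
from math import exp, log

def log_likelihood_ratio(ones, n, p_bias=0.503, p_null=0.5):
    """Log likelihood ratio for one session: fixed-bias hypothesis vs pure chance.

    The binomial coefficient is identical under both hypotheses, so it cancels.
    """
    return ones * log(p_bias / p_null) + (n - ones) * log((1 - p_bias) / (1 - p_null))

log_odds = log(1 / 100)     # illustrative prior: 100-to-1 against the bias hypothesis

# Hypothetical session records: (one-bits observed, total ticks).
sessions = [(3045, 6000), (2988, 6000), (3021, 6000)]
for ones, n in sessions:
    log_odds += log_likelihood_ratio(ones, n)

print(f"posterior odds in favour of the bias hypothesis: {exp(log_odds):.4f}")
```

Null sessions pull the odds down just as positive sessions push them up, which is exactly the equal weighting described above.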
The Pragmatics of Sitting With Uncertainty
Running experiments with genuinely uncertain outcomes over an extended period requires cultivating comfort with ambiguity and active resistance to premature interpretation. The researchers at PEAR who produced the most scientifically credible work were those who maintained this posture across decades — accumulating data without narrativising prematurely, publishing null results alongside positive ones, and describing findings in terms of statistical tendencies rather than dramatic claims. A null result honestly recorded is more scientifically valuable than a striking result dishonestly framed.
Glossary of Key Terms
Precise definitions for concepts used throughout
Z-Score
A z-score expresses how far a given observation sits from the mean of a distribution, measured in standard deviations. For RNG data: z = (observed ones − n/2) / (√n / 2), where n is the total number of RNG ticks in the session. Under pure chance, approximately 68% of sessions produce a z-score between −1 and +1; 95% between −2 and +2; 99.7% between −3 and +3. A z-score of ±1.96 corresponds to p = 0.05. The z-score is sample-size-dependent — the same observed deviation percentage produces a larger z-score in a long session than a short one.
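A direct translation of the formula above into code, with an illustrative session:

```python
import math

def session_z(ones, n):
    """z-score for a session: observed one-bits against the chance expectation of n/2."""
    return (ones - n / 2) / (math.sqrt(n) / 2)

# Illustrative example: 3,090 one-bits across 6,000 ticks.
z = session_z(3090, 6000)
print(f"z = {z:+.2f}")   # about +2.32, i.e. roughly p = 0.02 two-sided
```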
Entropy and the Entropy Pool
In information theory, entropy (Shannon entropy) measures the average unpredictability in a source of data. In computing, an entropy pool is a buffer maintained by the operating system that accumulates unpredictable bits from physical hardware events: the precise timing of touchscreen events, interrupt latencies from hardware components, thermal noise sampled from CPU and sensor circuits, and network packet arrival timing. The entropy pool is continuously replenished as events occur. This app's local entropy mode draws from this pool — and because the pool is seeded from physical sensors, its content is not strictly independent of the device's physical environment.
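The Shannon entropy of a byte stream can be estimated directly from its byte frequencies. The sketch below does so for a sample drawn from the operating system's entropy-seeded random source; a uniform source approaches the maximum of 8 bits per byte.

```python
import math
import os
from collections import Counter

def shannon_entropy_bits(data: bytes) -> float:
    """Empirical Shannon entropy per byte, in bits (maximum 8.0 for a uniform source)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = os.urandom(65536)
print(f"entropy ≈ {shannon_entropy_bits(sample):.3f} bits/byte")
```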
Quantum Random Number Generator (QRNG)
A QRNG produces random numbers from a fundamentally quantum physical process — one where the outcome is genuinely undetermined prior to measurement, not merely practically unpredictable. The Cisco Outshift QRNG service used by this app measures vacuum fluctuations — the irreducible quantum noise present in the electromagnetic field, a direct physical consequence of the Heisenberg uncertainty principle. The key property distinguishing a QRNG: the quantum output is statistically independent of anything occurring on or near your device.
Wavefunction and Wavefunction Collapse
A wavefunction (ψ) is a complex-valued function encoding the complete quantum state of a system. Before measurement, a quantum system in a superposition state does not have a definite value for the measured quantity — the wavefunction represents the coexistence of multiple possibilities simultaneously, not merely uncertainty about a pre-existing definite value. Wavefunction collapse refers to the apparent transition from superposition to a single definite outcome upon measurement. What physically causes this collapse remains the measurement problem — unresolved in the foundations of physics.
Retrocausality
Retrocausality refers to the influence of a later event on an earlier one — causation running backward in time. The familiar forward-only direction of causation is not a fundamental law of physics — it is a statistical consequence of thermodynamics. At the level of individual particle interactions, the fundamental laws of physics are time-symmetric: they work equally well in both temporal directions. The transactional interpretation is built around retrocausality. The delayed-choice experiment demonstrates that future measurement decisions appear to influence the past behaviour of photons. These are confirmed, textbook quantum phenomena.
Coherence (HRV and Session)
HRV coherence (HeartMath) refers to a specific pattern in heart rate variability in which beat-to-beat intervals oscillate smoothly at around 0.1 Hz. High HRV coherence is associated with parasympathetic dominance, reduced stress hormones, and improved prefrontal function. It can be induced through slow breathing at approximately 5–6 breaths per minute. Session coherence score (this app) is a separate composite metric measuring experimental condition quality: physical stillness, environmental magnetic stability, RNG source quality, and sensor availability. A high session coherence score indicates the data from that session can be more reliably interpreted.
Statistical Significance and p-Value
A p-value is the probability of observing a result at least as extreme as the one obtained, assuming the null hypothesis is true. A p-value is not the probability that the null hypothesis is true, nor the probability that the effect is real. These are common misinterpretations. A trivially small effect can achieve p < 0.001 with a large enough sample. Conversely, a genuine large effect can fail to reach p < 0.05 in a small sample, not because the effect is absent but because the data is insufficient. For personal longitudinal data, the pattern of z-scores over time is more informative than whether any single session crosses a threshold.
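The sample-size dependence is easy to see numerically. The sketch below applies the same tiny bias (50.05% ones) to a small and a very large sample and reports the normal-approximation two-sided p-value for each; the numbers are illustrative.

```python
import math

def z_and_p(ones, n):
    """z-score and two-sided normal-approximation p-value for an observed ones-count."""
    z = (ones - n / 2) / (math.sqrt(n) / 2)
    return z, math.erfc(abs(z) / math.sqrt(2))

for n in (10_000, 10_000_000):
    ones = round(n * 0.5005)              # the same 0.05% bias in both cases
    z, p = z_and_p(ones, n)
    print(f"n = {n:>10,}: z = {z:+.2f}, p ≈ {p:.4f}")
```

The small sample gives p ≈ 0.92 and the large one p ≈ 0.002 for an identical underlying bias, which is the point the definition above is making.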
Decoherence
Quantum decoherence is the process by which quantum superpositions are destroyed through interaction with the environment. When a quantum system interacts with a large number of environmental degrees of freedom, information about its quantum phase relationships leaks into the environment. The system appears to behave classically. Decoherence is fast for macroscopic objects, but recent observations of quantum coherence in photosynthetic light harvesting, avian magnetic navigation, and enzyme catalysis suggest that biological systems may have mechanisms for maintaining coherence that are not yet fully understood.
Effect Size
Effect size is a quantitative measure of the magnitude of a phenomenon, independent of sample size. The PEAR laboratory documented a mean effect size of approximately 0.0003 — a shift of 0.03% from the theoretical mean. This is extraordinarily small: it required millions of trials accumulated over 28 years to achieve statistical significance. No single session of any practical duration will reliably detect an effect that small. If mind-matter interaction effects exist at the scale documented by PEAR, they are inherently a longitudinal, aggregate phenomenon. This is why the Vault and trend analysis exist in this app.
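To see why an effect of this size is inherently longitudinal, the sketch below uses the standard normal-approximation sample-size formula to estimate how many bits are needed to detect a 0.0003 shift with roughly 80% power at two-sided p < 0.05. The assumed rate of 10 bits per second is hypothetical and is used only to convert bits into time.

```python
import math

def bits_needed(shift, z_alpha=1.96, z_power=0.8416):
    """Approximate bits required to detect a shift in the ones-fraction of a fair bit stream
    with ~80% power at two-sided p < 0.05 (the variance of a fair bit is 0.25)."""
    return (z_alpha + z_power) ** 2 * 0.25 / shift ** 2

n = bits_needed(0.0003)
print(f"≈ {n:,.0f} bits")                                 # roughly 22 million bits
print(f"≈ {n / (10 * 60 * 60):,.1f} hours at 10 bits/s")  # hypothetical sampling rate
```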