11 results for Source Modeling
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The neurocognitive processes underlying the formation and maintenance of paranormal beliefs are important for understanding schizotypal ideation. Behavioral studies indicated that both schizotypal and paranormal ideation are based on an overreliance on the right hemisphere, whose coarse rather than focussed semantic processing may favor the emergence of 'loose' and 'uncommon' associations. To elucidate the electrophysiological basis of these behavioral observations, 35-channel resting EEG was recorded in pre-screened female strong believers and disbelievers during resting baseline. EEG data were subjected to FFT-Dipole-Approximation analysis, a reference-free frequency-domain dipole source modeling, and Regional (hemispheric) Omega Complexity analysis, a linear approach estimating the complexity of the trajectories of momentary EEG map series in state space. Compared to disbelievers, believers showed: more right-located sources of the beta2 band (18.5-21 Hz, excitatory activity); reduced interhemispheric differences in Omega complexity values; higher scores on the Magical Ideation scale; more general negative affect; and more hypnagogic-like reveries after a 4-min eyes-closed resting period. Thus, subjects differing in their declared paranormal belief displayed different active, cerebral neural populations during resting, task-free conditions. As hypothesized, believers showed relatively higher right hemispheric activation and reduced hemispheric asymmetry of functional complexity. These markers may constitute the neurophysiological basis for paranormal and schizotypal ideation.
Abstract:
Meditation is a self-induced and willfully initiated practice that alters the state of consciousness. The meditation practice of Zazen, like many other meditation practices, aims at disregarding intrusive thoughts while controlling body posture. It is an open monitoring meditation characterized by detached moment-to-moment awareness and reduced conceptual thinking and self-reference. Which brain areas differ in electric activity during Zazen compared to task-free resting? Since scalp electroencephalography (EEG) waveforms are reference-dependent, conclusions about the localization of active brain areas are ambiguous. Computing intracerebral source models from the scalp EEG data solves this problem. In the present study, we applied source modeling using low resolution brain electromagnetic tomography (LORETA) to 58-channel scalp EEG data recorded from 15 experienced Zen meditators during Zazen and no-task resting. Zazen compared to no-task resting showed increased alpha-1 and alpha-2 frequency activity in an exclusively right-lateralized cluster extending from prefrontal areas including the insula to parts of the somatosensory and motor cortices and temporal areas. Zazen also showed decreased alpha and beta-2 activity in the left angular gyrus and decreased beta-1 and beta-2 activity in a large bilateral posterior cluster comprising the visual cortex, the posterior cingulate cortex and the parietal cortex. The results include parts of the default mode network and suggest enhanced automatic memory and emotion processing, reduced conceptual thinking and self-reference on a less judgmental, i.e., more detached moment-to-moment basis during Zazen compared to no-task resting.
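The band-limited quantities that these EEG studies compare (alpha-1, alpha-2, beta activity) can be illustrated with a minimal band-power computation. This is only a generic sketch of extracting power in a frequency band from one channel via the FFT; the sampling rate, synthetic signal, and band edges are invented for illustration and are not part of the LORETA source analysis itself.

```python
import numpy as np

# Synthetic single-channel "EEG": a 10 Hz (alpha-band) oscillation plus noise.
fs = 256                          # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)       # 4 s of data
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the real FFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(lo, hi):
    """Sum spectral power over the frequency band [lo, hi)."""
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

alpha = band_power(8, 12)         # dominated by the injected 10 Hz component
beta2 = band_power(18.5, 21)      # the beta2 band used in the first abstract
```

Source modeling such as LORETA goes further by attributing such band activity to intracerebral generators, but the band decomposition itself starts from spectra like this one.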
Abstract:
Brain electric activity is viewed as sequences of momentary maps of potential distribution. Frequency-domain source modeling, estimation of the complexity of the trajectory of the mapped brain field distributions in state space, and microstate parsing were used as analysis tools. Input-presentation as well as task-free (spontaneous thought) data collection paradigms were employed. We found: Alpha EEG field strength is more affected by visualizing mentation than by abstract mentation, both input-driven as well as self-generated. There are different neuronal populations and brain locations of the electric generators for different temporal frequencies of the brain field. Different alpha frequencies execute different brain functions as revealed by canonical correlations with mentation profiles. Different modes of mentation engage the same temporal frequencies at different brain locations. The basic structure of alpha electric fields implies inhomogeneity over time — alpha consists of concatenated global microstates in the sub-second range, characterized by quasi-stable field topographies, and rapid transitions between the microstates. In general, brain activity is strongly discontinuous, indicating that parsing into field landscape-defined microstates is appropriate. Different modes of spontaneous and induced mentation are associated with different brain electric microstates; these are proposed as candidates for psychophysiological "atoms of thought".
Abstract:
A feature represents a functional requirement fulfilled by a system. Since many maintenance tasks are expressed in terms of features, it is important to establish the correspondence between a feature and its implementation in source code. Traditional approaches to establish this correspondence exercise features to generate a trace of runtime events, which is then processed by post-mortem analysis. These approaches typically generate large amounts of data to analyze. Due to their static nature, these approaches do not support incremental and interactive analysis of features. We propose a radically different approach called live feature analysis, which provides a runtime model of features. Our approach analyzes features on a running system, makes it possible to grow feature representations by exercising different scenarios of the same feature, and identifies execution elements down to the sub-method level. We describe how live feature analysis is implemented effectively by annotating structural representations of code based on abstract syntax trees. We illustrate our live analysis with a case study where we achieve a more complete feature representation by exercising and merging variants of feature behavior, and demonstrate the efficiency of our technique with benchmarks.
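The core idea of growing a feature representation by exercising scenarios can be sketched in a few lines. This is only an illustration of the mapping it describes, with plain string identifiers standing in for annotated AST nodes; all names here are hypothetical, not the paper's actual implementation.

```python
# Hypothetical sketch: a runtime feature model that tags executed code
# elements (stand-ins for AST nodes) with the features that exercised them.
class LiveFeatureModel:
    def __init__(self):
        self.tags = {}  # element id -> set of feature names

    def exercise(self, feature, executed_elements):
        """Record one scenario run: tag every executed element with the feature."""
        for element in executed_elements:
            self.tags.setdefault(element, set()).add(feature)

    def elements_of(self, feature):
        """All elements known (so far) to implement the feature."""
        return {e for e, fs in self.tags.items() if feature in fs}

model = LiveFeatureModel()
# Two scenarios of the same feature grow its representation incrementally:
model.exercise("login", ["Session.open", "User.check"])
model.exercise("login", ["Session.open", "User.lock"])  # a variant behavior
```

Exercising the second scenario merges the variant's elements into the existing representation instead of replacing it, which is the incremental, interactive quality the abstract emphasizes.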
Abstract:
Our knowledge about the lunar environment is based on a large volume of ground-based, remote, and in situ observations. These observations have been conducted at different times and sampled different pieces of such a complex system as the surface-bound exosphere of the Moon. Numerical modeling is the tool that can link results of these separate observations into a single picture. Once validated against previous measurements, models can be used for predictions and interpretation of future observation results. In this paper we present a kinetic model of the sodium exosphere of the Moon as well as results of its validation against a set of ground-based and remote observations. The unique characteristic of the model is that it takes the orbital motion of the Moon and the Earth into consideration and simulates both the exosphere and the sodium tail self-consistently. The extended computational domain covers the part of the Earth's orbit at new Moon, which allows us to study the effect of Earth's gravity on the lunar sodium tail. The model is fitted to a set of ground-based and remote observations by tuning the sodium source rate as well as the sticking and accommodation coefficients. The best agreement of the model results with the observations is reached when all sodium atoms returning from the exosphere stick to the surface and the net sodium escape rate is about 5.3 × 10²² s⁻¹.
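To give the fitted escape rate a physical scale, it can be converted into a mass-loss rate with back-of-the-envelope arithmetic. This is our illustration, not a result from the paper; only the 5.3 × 10²² s⁻¹ figure is taken from the abstract.

```python
# Convert the fitted net sodium escape rate into a mass-loss rate.
ESCAPE_RATE = 5.3e22                    # atoms per second (from the abstract)
AMU = 1.66053906660e-27                 # kg per atomic mass unit
M_NA = 22.98976928 * AMU                # mass of one sodium atom, kg

mass_loss = ESCAPE_RATE * M_NA          # kg/s of sodium escaping
# ~2.0e-3 kg/s, i.e. roughly two grams of sodium leave the Moon each second
```

The smallness of this number (grams per second for an entire planetary body) is what makes the sodium exosphere and tail observable yet tenuous.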
Abstract:
The spectacular images of Comet 103P/Hartley 2 recorded by the Medium Resolution Instrument (MRI) and High Resolution Instrument (HRI) on board the Extrasolar Planet Observation and Deep Impact Extended Investigation (EPOXI) spacecraft, the Deep Impact extended mission, revealed that its bi-lobed, very active nucleus outgasses volatiles heterogeneously. Indeed, CO2 is the primary driver of activity, dragging chunks of pure ice out of the nucleus from the sub-solar lobe; these appear to be the main source of water in Hartley 2's coma, sublimating slowly as they move away from the nucleus. However, water vapor is released by direct sublimation of the nucleus at the waist without any significant amount of either CO2 or icy grains. The coma structure for a comet with such areas of diverse chemistry differs from the usual models where gases are produced in a homogeneous way from the surface. We use the fully kinetic Direct Simulation Monte Carlo model of Tenishev et al. (Tenishev, V.M., Combi, M.R., Davidsson, B. [2008]. Astrophys. J. 685, 659-677; Tenishev, V.M., Combi, M.R., Rubin, M. [2011]. Astrophys. J. 732, 104-120) applied to Comet 103P/Hartley 2, including sublimating icy grains, to reproduce the observations made by EPOXI and ground-based measurements. A realistic bi-lobed nucleus with a succession of active areas of different chemistry was included in the model, enabling us to study the coma of Hartley 2 in detail. The different gas production rates from each area were found by fitting the spectra computed using a line-by-line non-LTE radiative transfer model to the HRI observations. The presence of icy grains with long lifetimes, which are pushed anti-sunward by radiation pressure, explains the observed OH asymmetry with enhancement on the night side of the coma.
Abstract:
In order to analyze software systems, it is necessary to model them. Static software models are commonly imported by parsing source code and related data. Unfortunately, building custom parsers for most programming languages is a non-trivial endeavour. This poses a major bottleneck for analyzing software systems programmed in languages for which importers do not already exist. Luckily, initial software models do not require detailed parsers, so it is possible to start analysis with a coarse-grained importer, which is then gradually refined. In this paper we propose an approach to "agile modeling" that exploits island grammars to extract initial coarse-grained models, parser combinators to enable gradual refinement of model importers, and various heuristics to recognize language structure, keywords and other language artifacts.
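The island-grammar idea above can be sketched compactly: recognize only the structural "islands" the model needs and skip everything else as unparsed "water". This is our toy illustration, not the paper's tool; a real importer would then refine this coarse model, for instance with parser combinators.

```python
import re

# Toy island-grammar importer: only class and method headers are "islands";
# all other source text is skipped as "water". Names here are hypothetical.
ISLAND = re.compile(r'^\s*(?:class\s+(\w+)|def\s+(\w+))', re.MULTILINE)

def coarse_model(source):
    """Extract a coarse-grained structural model from raw source text."""
    model = {"classes": [], "methods": []}
    for class_name, method_name in ISLAND.findall(source):
        if class_name:
            model["classes"].append(class_name)
        else:
            model["methods"].append(method_name)
    return model

code = """
class Account:
    def deposit(self): pass
    def withdraw(self): pass
"""
model = coarse_model(code)
```

Even this crude importer yields a usable initial model for analysis, which is exactly the point: start coarse, then gradually refine the grammar where the analysis demands detail.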
Abstract:
How do probabilistic models represent their targets and how do they allow us to learn about them? The answer to this question depends on a number of details, in particular on the meaning of the probabilities involved. To classify the options, a minimalist conception of representation (Suárez 2004) is adopted: Modelers devise substitutes ("sources") of their targets and investigate them to infer something about the target. Probabilistic models allow us to infer probabilities about the target from probabilities about the source. This leads to a framework in which we can systematically distinguish between different models of probabilistic modeling. I develop a fully Bayesian view of probabilistic modeling, but I argue that, as an alternative, Bayesian degrees of belief about the target may be derived from ontic probabilities about the source. Remarkably, some accounts of ontic probabilities can avoid problems if they are supposed to apply to sources only.
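The inferential pattern described here (probabilities computed on a substitute "source" carried over as degrees of belief about the target) can be made concrete with a toy example. This is our illustration of the pattern, not the paper's argument; the numbers are invented.

```python
from fractions import Fraction

# Source: a simple probabilistic model (a biased coin) standing in for some
# target process we cannot probe directly. The 3/5 is an ontic probability
# *within the source model*, chosen arbitrarily for illustration.
p_heads_source = Fraction(3, 5)

# Transfer step: the modeler adopts the source probability as a Bayesian
# degree of belief about the target.
belief_target_heads = p_heads_source

# Inference about the target, computed entirely via the source model:
# probability of observing two heads in a row.
p_two_heads = belief_target_heads ** 2
```

The philosophical work in the abstract concerns what licenses the transfer step; the computation itself is always of this source-side form.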
Abstract:
The exposed Glarus thrust displays midcrustal deformation with tens of kilometers of displacement on an ultrathin layer, the principal slip zone (PSZ). Geological observations indicate that this structure resulted from repeated stick-slip events in the presence of highly overpressured fluids. Here we show that the major characteristics of the Glarus thrust movement (localization, periodicity, and evidence of pressurized fluids) can be reconciled by the coupling of two processes, namely, shear heating and fluid release by carbonate decomposition. During this coupling, slow ductile creep deformation raises the temperature through shear heating and ultimately activates the chemical decomposition of carbonates. The subsequent release of highly overpressurized fluids forms and lubricates the PSZ, allowing a ductile fault to move tens of kilometers on millimeter-thick bands in episodic stick-slip events. This model identifies carbonate decomposition as a key process for motion on the Glarus thrust and explains the source of overpressured fluids accessing the PSZ.
Abstract:
Four different literature parameterizations for the formation and evolution of urban secondary organic aerosol (SOA) frequently used in 3-D models are evaluated using a 0-D box model representing the Los Angeles metropolitan region during the California Research at the Nexus of Air Quality and Climate Change (CalNex) 2010 campaign. We constrain the model predictions with measurements from several platforms and compare predictions with particle- and gas-phase observations from the CalNex Pasadena ground site. That site provides a unique opportunity to study aerosol formation close to anthropogenic emission sources with limited recirculation. The model SOA that formed only from the oxidation of VOCs (V-SOA) is insufficient to explain the observed SOA concentrations, even when using SOA parameterizations with multi-generation oxidation that produce much higher yields than have been observed in chamber experiments, or when increasing yields to their upper limit estimates accounting for recently reported losses of vapors to chamber walls. The Community Multiscale Air Quality (WRF-CMAQ) model (version 5.0.1) provides excellent predictions of secondary inorganic particle species but underestimates the observed SOA mass by a factor of 25 when an older VOC-only parameterization is used, which is consistent with many previous model–measurement comparisons for pre-2007 anthropogenic SOA modules in urban areas. Including SOA from primary semi-volatile and intermediate-volatility organic compounds (P-S/IVOCs) following the parameterizations of Robinson et al. (2007), Grieshop et al. (2009), or Pye and Seinfeld (2010) improves model–measurement agreement for mass concentration. The results from the three parameterizations show large differences (e.g., a factor of 3 in SOA mass) and are not well constrained, underscoring the current uncertainties in this area. 
Our results strongly suggest that other precursors besides VOCs, such as P-S/IVOCs, are needed to explain the observed SOA concentrations in Pasadena. All the recent parameterizations overpredict urban SOA formation at long photochemical ages (3 days) compared to observations from multiple sites, which can lead to problems in regional and especially global modeling. However, reducing IVOC emissions by one-half in the model to better match recent IVOC measurements improves SOA predictions at these long photochemical ages. Among the explicitly modeled VOCs, the precursor compounds that contribute the greatest SOA mass are methylbenzenes. Measured polycyclic aromatic hydrocarbons (naphthalenes) contribute 0.7% of the modeled SOA mass. The amounts of SOA mass from diesel vehicles, gasoline vehicles, and cooking emissions are estimated to be 16–27%, 35–61%, and 19–35%, respectively, depending on the parameterization used, which is consistent with the observed fossil fraction of urban SOA, 71 (±3)%. The relative contribution of each source is uncertain by almost a factor of 2 depending on the parameterization used. In-basin biogenic VOCs are predicted to contribute only a few percent to SOA. A regional SOA background of approximately 2.1 μg m⁻³ is also present due to the long-distance transport of highly aged OA, likely with a substantial contribution from regional biogenic SOA. The percentage of SOA from diesel vehicle emissions is the same, within the estimated uncertainty, as reported in previous work that analyzed the weekly cycles in OA concentrations (Bahreini et al., 2012; Hayes et al., 2013). However, the modeling work presented here suggests a strong anthropogenic source of modern carbon in SOA, due to cooking emissions, which was not accounted for in those previous studies and which is higher on weekends. Lastly, this work adapts a simple two-parameter model to predict SOA concentration and O/C from urban emissions.
This model successfully predicts SOA concentration, and the optimal parameter combination is very similar to that found for Mexico City. This approach provides a computationally inexpensive method for predicting urban SOA in global and climate models. We estimate pollution SOA to account for 26 Tg yr⁻¹ of SOA globally, or 17% of global SOA, one third of which is likely to be non-fossil.