57 results for Models and modeling
Abstract:
OBJECTIVE There is increasing evidence that epileptic activity involves widespread brain networks rather than single sources and that these networks contribute to interictal brain dysfunction. We investigated the fast-varying behavior of epileptic networks during interictal spikes in right and left temporal lobe epilepsy (RTLE and LTLE) at a whole-brain scale using directed connectivity. METHODS In 16 patients, 8 with LTLE and 8 with RTLE, we estimated the electrical source activity in 82 cortical regions of interest (ROIs) using high-density electroencephalography (EEG), individual head models, and a distributed linear inverse solution. A multivariate, time-varying, and frequency-resolved Granger-causal modeling (weighted Partial Directed Coherence) was applied to the source signals of all ROIs. A nonparametric statistical test assessed differences between spike and baseline epochs. Connectivity results were compared between RTLE and LTLE and related to neuropsychological impairments. RESULTS Ipsilateral anterior temporal structures were identified as key drivers for both groups, concordant with the epileptogenic zone estimated invasively. Outflow from the key driver increased even before the spike. There were also important temporal and extratemporal ipsilateral drivers in both conditions, and contralateral drivers only in RTLE. A different network pattern was found between LTLE and RTLE: in RTLE there was a much more prominent ipsilateral-to-contralateral pattern than in LTLE. Half of the RTLE patients, but none of the LTLE patients, had neuropsychological deficits consistent with contralateral temporal lobe dysfunction, suggesting a relationship between connectivity changes and cognitive deficits. SIGNIFICANCE The different patterns of time-varying connectivity in LTLE and RTLE suggest that they are not symmetrical entities, in line with our neuropsychological results.
The highest outflow region was concordant with invasive validation of the epileptogenic zone. This enhanced characterization of dynamic connectivity patterns could better explain cognitive deficits and help the management of epilepsy surgery candidates.
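As an illustration of the directed-connectivity measure named above, here is a minimal sketch of (unweighted) partial directed coherence computed from a fitted multivariate autoregressive (MVAR) model. The function names, model order, and signal layout are illustrative assumptions, not the authors' actual pipeline, which uses the weighted variant (wPDC) in a time-varying fashion.

```python
import numpy as np

def fit_mvar(X, p):
    """Least-squares fit of an order-p MVAR model X[t] = sum_k A_k X[t-k] + e.
    X has shape (n_signals, n_samples); returns lag matrices of shape (p, n, n)."""
    n, T = X.shape
    Y = X[:, p:]                                                 # targets
    Z = np.vstack([X[:, p - k:T - k] for k in range(1, p + 1)])  # stacked lags
    A = Y @ np.linalg.pinv(Z)                                    # (n, n*p)
    return A.reshape(n, p, n).transpose(1, 0, 2)

def pdc(A_lags, freqs, fs=1.0):
    """Partial directed coherence: PDC[f, i, j] quantifies the direct
    influence of signal j on signal i at frequency f (column-normalized)."""
    p, n, _ = A_lags.shape
    out = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        Abar = np.eye(n, dtype=complex)
        for k in range(p):
            Abar -= A_lags[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        out[fi] = np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0))
    return out
```

Summing PDC over target regions gives an "outflow" per region; doing this in sliding windows around the spike would yield the kind of time-varying driver estimate the abstract describes.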
Abstract:
Potential desiccation polygons (PDPs) are polygonal surface patterns that are a common feature in Noachian-to-Hesperian-aged phyllosilicate- and chloride-bearing terrains and have been observed at size scales ranging from centimeters wide (by current rovers) to tens of meters wide. The global distribution of PDPs shows that they share certain traits in terms of morphology and geologic setting that can aid identification and distinction from fracturing patterns caused by other processes. They are mostly associated with sedimentary deposits that display spectral evidence for the presence of Fe/Mg smectites, Al-rich smectites or, less commonly, kaolinites, carbonates, and sulfates. In addition, PDPs may indicate paleolacustrine environments, which are of high interest for planetary exploration, and their presence implies that the fractured units are rich in smectite minerals that may have been deposited in a standing body of water. A collective synthesis with new data, particularly from the HiRISE camera, suggests that desiccation cracks may be more common on the surface of Mars than previously thought. A review of terrestrial research on desiccation processes, with emphasis on theoretical background, field studies, and modeling constraints, is also presented and shown to be consistent with and relevant to certain polygonal patterns on Mars.
Abstract:
Efforts are ongoing to decrease the noise of the GRACE gravity field models and hence to arrive closer to the GRACE baseline. Among the most significant error sources are untreated errors in the observation data and imperfections in the background models. A recent study (Bandikova & Flury, 2014) revealed that the current release of the star camera attitude data (SCA1B RL02) contains noise systematically higher than expected, by about a factor of 3-4. This is due to an incorrect implementation of the algorithms for quaternion combination in the JPL processing routines. Generating improved SCA data requires that valid data from both star camera heads be available, which is not always the case because the Sun and Moon at times blind one camera. In gravity field modeling, the attitude data are needed for the KBR antenna offset correction and to orient the non-gravitational linear accelerations sensed by the accelerometer. Hence any improvement in the SCA data is expected to be reflected in the gravity field models. In order to quantify the effect on the gravity field, we processed one month of observation data using two different approaches: the celestial mechanics approach (AIUB) and the variational equations approach (ITSG). We show that the noise in the KBR observations and the linear accelerations has effectively decreased. However, the effect on the gravity field on a global scale is hardly evident. We conclude that, at the current level of accuracy, the errors seen in the temporal gravity fields are dominated by errors coming from sources other than the attitude data.
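Combining attitude quaternions from two star camera heads is, at its core, a weighted quaternion average. A standard way to do this that is insensitive to the q ~ -q sign ambiguity (a common pitfall in quaternion combination, though not necessarily the specific flaw in the JPL routines) is the eigenvector method; a sketch:

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Average unit quaternions (rows of `quats`) via the eigenvector method:
    the average is the eigenvector with the largest eigenvalue of
    M = sum_i w_i * q_i q_i^T, which is unaffected by the sign flip q -> -q."""
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
    vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    avg = vecs[:, -1]                    # principal eigenvector
    return avg / np.linalg.norm(avg)
```

The weights could encode the per-head accuracy (star cameras are less accurate about their boresight axis), which is one reason combining both heads matters for the KBR antenna offset correction.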
Abstract:
Argillaceous rocks are considered to be a suitable geological barrier for the long-term containment of wastes. Their efficiency at retarding contaminant migration is assessed using reactive-transport experiments and modeling, the latter requiring a sound understanding of pore-water chemistry. The building of a pore-water model, which is mandatory for laboratory experiments mimicking in situ conditions, requires a detailed knowledge of the rock mineralogy and of minerals at equilibrium with present-day pore waters. Using a combination of petrological, mineralogical, and isotopic studies, the present study focused on the reduced Opalinus Clay formation (Fm) of the Benken borehole (30 km north of Zurich), which is intended for nuclear-waste disposal in Switzerland. A diagenetic sequence is proposed, which serves as a basis for determining the minerals stable in the formation and their textural relationships. Early cementation of dominant calcite, rare dolomite, and pyrite formed by bacterial sulfate reduction, was followed by formation of iron-rich calcite, ankerite, siderite, glauconite, (Ba, Sr) sulfates, and traces of sphalerite and galena. The distribution and abundance of siderite depend heavily on the depositional environment (and consequently on the water column). Benken sediment deposition during Aalenian times corresponds to an offshore environment with the early formation of siderite concretions at the water/sediment interface at the fluctuating boundary between the suboxic iron reduction and the sulfate reduction zones. Diagenetic minerals (carbonates except dolomite, sulfates, silicates) remained stable from their formation to the present. Based on these mineralogical and geochemical data, the mineral assemblage previously used for the geochemical model of the pore waters at Mont Terri may be applied to Benken without significant changes.
These further investigations demonstrate the need for detailed mineralogical and geochemical study to refine the model of pore-water chemistry in a clay formation.
Abstract:
In this article, the realization of a global terrestrial reference system (TRS) based on a consistent combination of Global Navigation Satellite System (GNSS) and Satellite Laser Ranging (SLR) data is studied. Our input data consist of normal equation systems from 17 years (1994–2010) of homogeneously reprocessed GPS, GLONASS and SLR data. This effort used common state-of-the-art reduction models and the same processing software (Bernese GNSS Software) to ensure the highest consistency when combining GNSS and SLR. Residual surface load deformations are modeled with a spherical harmonic approach. The estimated degree-1 surface load coefficients have a strong annual signal, for which the GNSS- and SLR-only solutions show very similar results. A combination including these coefficients reduces systematic uncertainties in comparison to the single-technique solutions. In particular, uncertainties due to solar radiation pressure modeling in the coefficient time series can be reduced by up to 50% in the GNSS+SLR solution compared to the GNSS-only solution. In contrast to the ITRF2008 realization, no local ties are used to combine the different geodetic techniques. We combine the pole coordinates as global ties and apply minimum constraints to define the geodetic datum. We show that a common origin, scale and orientation can be reliably realized with our combination strategy, in comparison to the ITRF2008.
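Annual signals in geodetic coefficient time series, such as the degree-1 surface load coefficients above, are typically characterized by a least-squares fit of a bias, trend, and annual sinusoid. A generic sketch (the design matrix and function name are illustrative, not the paper's software):

```python
import numpy as np

def annual_fit(t_years, y):
    """Least-squares fit of y(t) = a + b*t + c*cos(2*pi*t) + s*sin(2*pi*t),
    with t in years; returns the annual amplitude and phase (radians)."""
    w = 2.0 * np.pi * np.asarray(t_years)
    G = np.column_stack([np.ones_like(t_years), t_years, np.cos(w), np.sin(w)])
    a, b, c, s = np.linalg.lstsq(G, y, rcond=None)[0]
    return np.hypot(c, s), np.arctan2(s, c)
```

Comparing the amplitude and phase recovered from GNSS-only, SLR-only, and combined coefficient series is one way to quantify the consistency the abstract reports.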
Abstract:
A detailed characterization of air quality in the megacity of Paris (France) during two 1-month intensive campaigns and from additional 1-year observations revealed that about 70% of the urban background fine particulate matter (PM) is, on average, transported into the megacity from upwind regions. This dominant influence of regional sources was confirmed by in situ measurements during short intensive and longer-term campaigns, aerosol optical depth (AOD) measurements from ENVISAT, and modeling results from the PMCAMx and CHIMERE chemistry transport models. While advection of sulfate is well documented for other megacities, there was a surprisingly high contribution from long-range transport for both nitrate and organic aerosol. The origin of organic PM was investigated by comprehensive analysis of aerosol mass spectrometer (AMS), radiocarbon, and tracer measurements during the two intensive campaigns. Primary fossil fuel combustion emissions constituted less than 20% in winter and 40% in summer of carbonaceous fine PM, unexpectedly small fractions for a megacity. Cooking activities and, during winter, residential wood burning are the major primary organic PM sources. This analysis suggests that the major part of secondary organic aerosol is of modern origin, i.e., from biogenic precursors and from wood burning. Black carbon concentrations are on the lower end of values encountered in megacities worldwide, but still represent an issue for air quality. These comparatively low air pollution levels are due to a combination of low emissions per inhabitant, flat terrain, and a meteorology that is in general not conducive to local pollution build-up. This revised picture of a megacity being only partially responsible for its own average and peak PM levels has important implications for air pollution regulation policies.
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time series, e.g., in the context of therapeutic brain stimulation. In this paper we present first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute values of the Pearson correlation coefficient (CC) matrix. Using various measures, the networks obtained in this way are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as previously reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modeling the temporal features of iEEG signals as well.
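For roughly Gaussian signals, the Chow–Liu tree can be computed directly from the correlation matrix, because the pairwise mutual information is then MI = -0.5 ln(1 - rho^2): the tree is the maximum spanning tree of the MI graph. A minimal sketch of this construction (illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_edges(X):
    """Chow-Liu tree of the signals in the rows of X: the maximum spanning
    tree of the pairwise mutual-information graph, using the Gaussian
    approximation MI_ij = -0.5 * ln(1 - rho_ij^2)."""
    rho = np.corrcoef(X)
    np.fill_diagonal(rho, 0.0)
    mi = -0.5 * np.log1p(-rho ** 2)
    mst = minimum_spanning_tree(-mi)   # min tree of -MI == max tree of MI
    return sorted(tuple(sorted(map(int, e))) for e in zip(*mst.nonzero()))
```

In a Gaussian tree model, the correlation implied between two non-adjacent channels is the product of the edge correlations along the tree path between them, which is what makes a thresholded functional network analytically derivable from the tree, and why small empirical correlations get pushed toward zero.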
Abstract:
Context. The Rosetta encounter with comet 67P/Churyumov-Gerasimenko provides a unique opportunity for an in situ, up-close investigation of ion-neutral chemistry in the coma of a weakly outgassing comet far from the Sun. Aims. Observations of primary and secondary ions and modeling are used to investigate the role of ion-neutral chemistry within the thin coma. Methods. Observations from late October through mid-December 2014 show the continuous presence of the solar wind 30 km from the comet nucleus. These and other observations indicate that there is no contact surface and the solar wind has direct access to the nucleus. On several occasions during this time period, the Rosetta/ROSINA Double Focusing Mass Spectrometer measured the low-energy ion composition in the coma. Organic volatiles, water-group ions and their breakup products (masses 14 through 19), CO+ and CO2+ (masses 28 and 44), and other mass peaks (at masses 26, 27, and possibly 30) were observed. Secondary ions include H3O+ and HCO+ (masses 19 and 29). These secondary ions indicate ion-neutral chemistry in the thin coma of the comet. A relatively simple model is constructed to account for the low H3O+/H2O+ and HCO+/CO+ ratios observed in a water-dominated coma. Results from this simple model are compared with results from models that include a more detailed chemical reaction network. Results. At low outgassing rates, predictions from the simple model agree with observations and with results from more complex models that include much more chemistry. At higher outgassing rates, the ion-neutral chemistry is still limited, and high HCO+/CO+ ratios are predicted and observed. However, at higher outgassing rates, the model predicts high H3O+/H2O+ ratios, whereas the observed ratios are often low. These low ratios may be the result of the highly heterogeneous nature of the coma, where CO and CO2 number densities can exceed that of water.
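The essence of such a simple model is that a primary ion picks up at most a few proton-transfer collisions with coma neutrals on its way out of a thin coma, so the secondary-to-primary ratio scales linearly with the outgassing rate Q. A back-of-the-envelope sketch of that scaling, in which every parameter value is an illustrative assumption (not taken from the paper):

```python
import numpy as np
from scipy.integrate import quad

def conversion_ratio(Q, r_born=3e3, r_obs=30e3, k=2e-15, v_n=700.0, v_i=2e4):
    """Crude estimate of a secondary/primary ion ratio (e.g. H3O+/H2O+):
    the number of proton-transfer collisions an ion born at radius r_born [m]
    suffers before reaching the spacecraft at r_obs [m].
    Assumed values: rate coefficient k [m^3/s] (~2e-9 cm^3/s for
    H2O+ + H2O -> H3O+ + OH), neutral outflow speed v_n [m/s], ion speed
    v_i [m/s], outgassing rate Q [molecules/s]."""
    n = lambda r: Q / (4.0 * np.pi * r**2 * v_n)   # free-streaming neutral coma
    collisions, _ = quad(lambda r: k * n(r) / v_i, r_born, r_obs)
    return collisions

low = conversion_ratio(1e25)    # weak outgassing: ratio well below unity
high = conversion_ratio(1e27)   # 100x outgassing: ratio 100x larger
```

Because the collision integral is linear in Q, a weakly outgassing coma yields low secondary-ion ratios, consistent with the limited chemistry described above.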
Abstract:
High-resolution, ground-based, independent observations from co-located wind radiometers, lidar stations, and infrasound instruments are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern hemisphere midlatitudes. Systematic comparisons between the observations, the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses including the recent Integrated Forecast System cycles 38r1 and 38r2, NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalyses, and the free-running Max Planck Institute Earth System Model - Low Resolution (MPI-ESM-LR) climate model are carried out in both the temporal and spectral domains. We find that ECMWF and MERRA are broadly consistent with lidar and wind radiometer measurements up to ~40 km. For both temperature and horizontal wind components, deviations increase with altitude as the assimilated observations become sparser. Between 40 and 60 km altitude, the standard deviation of the mean difference exceeds 5 K for temperature and 20 m/s for zonal wind. The largest deviations are observed in winter, when the variability from large-scale planetary waves dominates. Between the lidar data and MPI-ESM-LR, there is overall agreement in spectral amplitude down to periods of 15–20 days. At shorter time scales, variability is lacking in the model by ~10 dB. Infrasound observations indicate generally good agreement with ECMWF wind and temperature products. As such, this study demonstrates the potential of the infrastructure of the Atmospheric Dynamics Research Infrastructure in Europe project, which integrates various measurements and provides a quantitative understanding of stratosphere-troposphere dynamical coupling for numerical weather prediction applications.
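The bias and standard-deviation statistics quoted above are the standard way to compare model profiles against soundings altitude by altitude. A generic sketch of such altitude-binned model-minus-observation statistics (function name and bin layout are illustrative):

```python
import numpy as np

def binned_bias_std(alt_km, model_T, obs_T, edges):
    """Mean difference (bias) and standard deviation of model-minus-observation
    values in altitude bins defined by `edges` (km). Returns a list of
    (bin_lo, bin_hi, bias, std) tuples, one per bin."""
    d = np.asarray(model_T) - np.asarray(obs_T)
    idx = np.digitize(alt_km, edges) - 1          # bin index per sample
    out = []
    for b in range(len(edges) - 1):
        sel = d[idx == b]
        out.append((edges[b], edges[b + 1], sel.mean(), sel.std(ddof=1)))
    return out
```

Applied to co-located lidar temperature profiles and interpolated analysis fields, this yields exactly the kind of statement made above, e.g. that the standard deviation exceeds 5 K between 40 and 60 km.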
Abstract:
The ultimate goals of periodontal therapy remain the complete regeneration of those periodontal tissues lost to the destructive inflammatory-immune response, or to trauma, with tissues that possess the same structure and function, and the re-establishment of a sustainable, health-promoting biofilm from one characterized by dysbiosis. This volume of Periodontology 2000 discusses the multiple facets of a transition from the therapeutic empiricism of the late 1960s toward regenerative therapies founded on a clearer understanding of the biophysiology of normal structure and function. This introductory article provides an overview of the requirements of appropriate in vitro laboratory models (e.g. cell culture), of preclinical (i.e. animal) models, and of human studies for periodontal wound and bone repair. Laboratory studies may provide valuable fundamental insights into basic mechanisms involved in wound repair and regeneration, but they also suffer from a unidimensional and simplistic approach that does not account for the complexities of the in vivo situation, in which multiple cell types and interactions all contribute to definitive outcomes. Therefore, such laboratory studies require validatory research employing preclinical models specifically designed to demonstrate proof-of-concept efficacy, preliminary safety, and adaptation to human disease scenarios. Small animal models provide the most economical and logistically feasible preliminary approaches, but their outcomes do not necessarily translate to larger animal or human models. The advantages and limitations of all periodontal-regeneration models need to be carefully considered when planning investigations to ensure that the optimal design is adopted to answer the specific research question posed.
Future challenges lie in the areas of stem cell research, scaffold designs, cell delivery and choice of growth factors, along with research to ensure appropriate gingival coverage in order to prevent gingival recession during the healing phase.
Abstract:
Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the high number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis, in which synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that each address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as those describing the protoplanetary disc (of gas and solids), those describing one (proto)planet (its solid core, gaseous envelope, and atmosphere), and those describing the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address important results of planetary population synthesis, such as the planetary mass function and the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and the specialized models describing them can be put to the observational test.
Owing to their nature as meta-models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, it is likely that global models will undergo significant modifications in the future. Despite these limitations, global models can already yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.
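The population-synthesis workflow described above can be caricatured in a few lines: draw disc initial conditions from assumed distributions, push each disc through a formation model, and collect the statistics of the resulting planets. In this sketch the disc distributions and the mass scaling are deliberately trivial placeholders, not a real global model:

```python
import numpy as np

def synthesize_population(n=10000, seed=0):
    """Toy illustration of population synthesis: sample disc initial
    conditions, map each disc to a planet mass with a placeholder
    "global model", and return the synthetic population (Earth masses).
    All distributions and scalings here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # assumed lognormal disc gas masses [solar masses] and Gaussian metallicities [dex]
    m_disc = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)
    feh = rng.normal(0.0, 0.2, size=n)
    # placeholder formation model: mass grows with disc mass and metallicity,
    # with lognormal scatter standing in for the unmodeled physics
    return 300.0 * m_disc * 10.0 ** feh * rng.lognormal(0.0, 1.0, size=n)

masses = synthesize_population()
mass_function, bin_edges = np.histogram(np.log10(masses), bins=30)
```

In a real synthesis the placeholder line is replaced by the full global model (disc evolution, accretion, migration, N-body interactions), and the synthetic mass function is compared against the observed one.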
Abstract:
We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand, and all laboratories adhered to a stringent model-building protocol and used the same type of foil to cover the base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details of off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed, stringent model construction techniques, quantitative model results show variability, most notably in surface slope, thrust spacing, and the number of forward thrusts and backthrusts. One source of the variability in model results is slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand and will result in lateral and vertical differences in peak and boundary friction angles, as well as in cohesion values, once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role: even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or of comparing them with numerical models.
The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing, and pop-up width from model to nature.