933 results for Non-linear error correction models
Abstract:
The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and Digital Volume Correlation (DVC) techniques is a powerful new tool not only to examine the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also to fully quantify their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps to understand non-linear deformation processes. Optical, non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). X-ray computed tomography (XRCT) analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. The combination of XRCT sectional image data of analogue experiments with 2D DIC only allows quantification of 2D displacement and strain components in the section direction, which completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply digital volume correlation (DVC) techniques to XRCT scan data of “solid” analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully be used to quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure.
Furthermore, we discuss various options for optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable the analysis of the large-scale and small-scale strain history of geological structures.
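The subset-matching idea at the heart of DIC and DVC can be reduced to locating a correlation peak: a speckle pattern from the reference state is compared against shifted positions in the deformed state, and the best-matching offset is the local displacement. A minimal, purely illustrative 1D sketch (real DIC/DVC correlates 2D or 3D subsets and interpolates to sub-pixel/sub-voxel accuracy; all names and data here are made up):

```python
# Toy sketch of the core idea behind digital image/volume correlation:
# recover displacement by locating the cross-correlation peak between
# a reference speckle pattern and a shifted copy. Illustrative only.

def cross_correlate(ref, cur, max_shift):
    """Return the integer shift that best aligns cur with ref."""
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = 0.0
        for i, r in enumerate(ref):
            j = i + s
            if 0 <= j < len(cur):
                score += r * cur[j]
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Reference speckle pattern and a copy displaced by 3 samples.
ref = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 0]
cur = [0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
print(cross_correlate(ref, cur, 5))  # → 3
```

DVC extends the same search to 3D subvolumes of the XRCT data; differentiating the resulting displacement field then yields the strain tensor.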
Abstract:
Directly imaged exoplanets are unexplored laboratories for the application of the spectral and temperature retrieval method, where the chemistry and composition of their atmospheres are inferred from inverse modeling of the available data. As a pilot study, we focus on the extrasolar gas giant HR 8799b, for which more than 50 data points are available. We upgrade our non-linear optimal estimation retrieval method to include a phenomenological model of clouds that requires the cloud optical depth and monodisperse particle size to be specified. Previous studies have focused on forward models with assumed values of the exoplanetary properties; there is no consensus on the best-fit values of the radius, mass, surface gravity, and effective temperature of HR 8799b. We show that cloud-free models produce reasonable fits to the data if the atmosphere is of super-solar metallicity and non-solar elemental abundances. Intermediate cloudy models with moderate values of the cloud optical depth and micron-sized particles provide an equally reasonable fit to the data and require a lower mean molecular weight. We report our best-fit values for the radius, mass, surface gravity, and effective temperature of HR 8799b. The mean molecular weight is about 3.8, while the carbon-to-oxygen ratio is about unity due to the prevalence of carbon monoxide. Our study emphasizes the need for robust claims about the nature of an exoplanetary atmosphere to be based on analyses involving both photometry and spectroscopy and inferred from beyond a few photometric data points, such as are typically reported for hot Jupiters.
Abstract:
A quantum simulator of U(1) lattice gauge theories can be implemented with superconducting circuits. This allows the investigation of confined and deconfined phases in quantum link models, and of valence bond solid and spin liquid phases in quantum dimer models. Fractionalized confining strings and the real-time dynamics of quantum phase transitions are accessible as well. Here we show how state-of-the-art superconducting technology allows us to simulate these phenomena in relatively small circuit lattices. By exploiting the strong non-linear couplings between quantized excitations emerging when superconducting qubits are coupled, we show how to engineer gauge invariant Hamiltonians, including ring-exchange and four-body Ising interactions. We demonstrate that, despite decoherence and disorder effects, minimal circuit instances allow us to investigate properties such as the dynamics of electric flux strings, signaling confinement in gauge invariant field theories. The experimental realization of these models in larger superconducting circuits could address open questions beyond current computational capability.
Abstract:
We investigate parallel algorithms for the solution of the Navier–Stokes equations in space-time. For periodic solutions, the discretized problem can be written as a large non-linear system of equations. This system of equations is solved by a Newton iteration. The Newton correction is computed using a preconditioned GMRES solver. The parallel performance of the algorithm is illustrated.
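The solution strategy described above (a Newton outer iteration, with each correction obtained from a linear solve) can be sketched on a toy 2×2 non-linear system. The paper computes the Newton correction with a preconditioned GMRES solver; to keep this sketch self-contained, the tiny linear correction system is solved directly instead, and the system F is purely illustrative:

```python
# Minimal sketch of a Newton outer iteration. The paper solves the Newton
# correction with preconditioned GMRES; for a self-contained toy we solve
# the 2x2 correction system directly by Cramer's rule. The non-linear
# system F below is illustrative, with root x = y = sqrt(2).

def F(x, y):
    return (x * x + y * y - 4.0, x - y)

def jacobian(x, y):
    return ((2.0 * x, 2.0 * y), (1.0, -1.0))

def newton(x, y, tol=1e-12, max_iter=20):
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if max(abs(f1), abs(f2)) < tol:
            break
        (a, b), (c, d) = jacobian(x, y)
        det = a * d - b * c
        # Newton correction: solve J * (dx, dy) = -F.
        dx = (-f1 * d + b * f2) / det
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
    return x, y

x, y = newton(3.0, 1.0)
print(round(x, 6), round(y, 6))  # → 1.414214 1.414214
```

In the space-time setting, F is the discretized residual over all time steps at once, so the correction system is large and sparse, which is what motivates a preconditioned Krylov solver like GMRES.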
Abstract:
Osteoporotic proximal femur fractures are caused by low energy trauma, typically when falling on the hip from standing height. Finite element simulations, widely used to predict the fracture load of femora in fall, usually include neither mass-related inertial effects, nor the viscous part of bone's material behavior. The aim of this study was to elucidate if quasi-static non-linear homogenized finite element analyses can predict in vitro mechanical properties of proximal femora assessed in dynamic drop tower experiments. The case-specific numerical models of thirteen femora predicted the strength (R2=0.84, SEE=540 N, 16.2%), stiffness (R2=0.82, SEE=233 N/mm, 18.0%) and fracture energy (R2=0.72, SEE=3.85 J, 39.6%); and provided fair qualitative matches with the fracture patterns. The influence of material anisotropy was negligible for all predictions. These results suggest that quasi-static homogenized finite element analysis may be used to predict mechanical properties of proximal femora in the dynamic sideways fall situation.
Abstract:
We examine the time-series relationship between housing prices in eight Southern California metropolitan statistical areas (MSAs). First, we perform cointegration tests of the housing price indexes for the MSAs, finding seven cointegrating vectors. Thus, the evidence suggests that one common trend links the housing prices in these eight MSAs, a purchasing power parity finding for the housing prices in Southern California. Second, we perform temporal Granger causality tests revealing intertwined temporal relationships. The Santa Ana MSA leads the pack in temporally causing housing prices in six of the other seven MSAs, excluding only the San Luis Obispo MSA. The Oxnard MSA experienced the largest number of temporal effects from other MSAs, six of the seven, excluding only Los Angeles. The Santa Barbara MSA proved the most isolated in that it temporally caused housing prices in only two other MSAs (Los Angeles and Oxnard), and housing prices in the Santa Ana MSA temporally caused prices in Santa Barbara. Third, we calculate out-of-sample forecasts in each MSA, using various vector autoregressive (VAR) and vector error-correction (VEC) models, as well as Bayesian, spatial, and causality versions of these models with various priors. Different specifications provide superior forecasts in the different MSAs. Finally, we consider the ability of these time-series models to provide accurate out-of-sample predictions of turning points in housing prices that occurred in 2006:Q4. Recursive forecasts, where the sample is updated each quarter, provide reasonably good forecasts of turning points.
Abstract:
We examine the time-series relationship between housing prices in Los Angeles, Las Vegas, and Phoenix. First, temporal Granger causality tests reveal that Los Angeles housing prices cause housing prices in Las Vegas (directly) and Phoenix (indirectly). In addition, Las Vegas housing prices cause housing prices in Phoenix. Los Angeles housing prices prove exogenous in a temporal sense and Phoenix housing prices do not cause prices in the other two markets. Second, we calculate out-of-sample forecasts in each market, using various vector autoregressive (VAR) and vector error-correction (VEC) models, as well as Bayesian, spatial, and causality versions of these models with various priors. Different specifications provide superior forecasts in the different cities. Finally, we consider the ability of these time-series models to provide accurate out-of-sample predictions of turning points in housing prices that occurred in 2006:Q4. Recursive forecasts, where the sample is updated each quarter, provide reasonably good forecasts of turning points.
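The VAR forecasting step that both housing-price studies rely on can be illustrated with a hand-rolled first-order vector autoregression: fit y_t = A·y_{t-1} by least squares, then iterate the fitted A for out-of-sample forecasts. The two toy series below are illustrative, not actual housing-price indexes, and a real exercise would also include intercepts, more lags, and error-correction terms:

```python
# Hedged sketch of VAR(1) estimation and one-step-ahead forecasting.
# Toy data only; series b depends on the lag of series a, the kind of
# lead-lag structure a Granger causality test would pick up.

def fit_var1(series_a, series_b):
    """Estimate A in [a_t, b_t]' = A [a_{t-1}, b_{t-1}]' by least squares."""
    # Moment matrices S_yx = sum y_t x_t' and S_xx = sum x_t x_t'.
    syx = [[0.0, 0.0], [0.0, 0.0]]
    sxx = [[0.0, 0.0], [0.0, 0.0]]
    for t in range(1, len(series_a)):
        y = (series_a[t], series_b[t])
        x = (series_a[t - 1], series_b[t - 1])
        for i in range(2):
            for j in range(2):
                syx[i][j] += y[i] * x[j]
                sxx[i][j] += x[i] * x[j]
    det = sxx[0][0] * sxx[1][1] - sxx[0][1] * sxx[1][0]
    inv = [[sxx[1][1] / det, -sxx[0][1] / det],
           [-sxx[1][0] / det, sxx[0][0] / det]]
    # Normal equations: A = S_yx * S_xx^{-1}.
    return [[sum(syx[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def forecast_one_step(A, last_a, last_b):
    return (A[0][0] * last_a + A[0][1] * last_b,
            A[1][0] * last_a + A[1][1] * last_b)

# Toy data: b_t = 0.5 * a_{t-1}, so the fitted A should recover [0.5, 0]
# in its second row.
a = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0, 3.5, 5.0]
b = [0.5 * v for v in [0.0] + a[:-1]]
A = fit_var1(a, b)
print(forecast_one_step(A, a[-1], b[-1]))
```

Recursive forecasting, as used for the turning-point exercise, simply refits A each quarter on the sample extended by one observation before forecasting the next step.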
Abstract:
Next-generation sequencing (NGS) technology has become a prominent tool in biological and biomedical research. However, NGS data analysis, such as de novo assembly, mapping and variant detection, is far from mature, and the high sequencing error rate is one of the major problems. To minimize the impact of sequencing errors, we developed a highly robust and efficient method, MTM, to correct the errors in NGS reads. We demonstrated the effectiveness of MTM on both single-cell data with highly non-uniform coverage and normal data with uniformly high coverage, showing that MTM's performance does not rely on the coverage of the sequencing reads. MTM was also compared with Hammer and Quake, the best methods for correcting non-uniform and uniform data respectively. For non-uniform data, MTM outperformed both Hammer and Quake. For uniform data, MTM showed better performance than Quake and comparable results to Hammer. By making better error correction with MTM, the quality of downstream analysis, such as mapping and SNP detection, was improved. SNP calling is a major application of NGS technologies. However, the existence of sequencing errors complicates this process, especially for the low coverage (
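The abstract does not describe MTM's internals, so the sketch below illustrates the generic k-mer-spectrum idea used by many read correctors such as Quake: k-mers seen many times across the reads are trusted ("solid"), rare k-mers signal errors, and a base is corrected when a single substitution makes every k-mer covering it solid. The thresholds and reads are illustrative:

```python
# Generic k-mer-spectrum error correction sketch (not MTM's actual
# algorithm, which the abstract does not specify). Illustrative data.
from collections import Counter

K = 5
TRUSTED = 2  # minimum count for a "solid" k-mer; illustrative threshold

def kmer_counts(reads):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - K + 1):
            counts[read[i:i + K]] += 1
    return counts

def correct_read(read, counts):
    """Try single-base substitutions that make every covering k-mer solid."""
    for i, base in enumerate(read):
        span = read[max(0, i - K + 1):i + K]
        if all(counts[span[j:j + K]] >= TRUSTED
               for j in range(len(span) - K + 1)):
            continue  # position already covered by solid k-mers
        for alt in "ACGT":
            if alt == base:
                continue
            cand = read[:i] + alt + read[i + 1:]
            span2 = cand[max(0, i - K + 1):i + K]
            if all(counts[span2[j:j + K]] >= TRUSTED
                   for j in range(len(span2) - K + 1)):
                return cand
    return read

# Four clean copies of a read plus one copy with an A->T error at pos 4.
reads = ["ACGTACGTAC"] * 4 + ["ACGTTCGTAC"]
counts = kmer_counts(reads)
print(correct_read("ACGTTCGTAC", counts))  # → ACGTACGTAC
```

The single-cell case mentioned in the abstract is hard for exactly this reason: with non-uniform coverage, a fixed trust threshold misclassifies genuine k-mers from low-coverage regions as errors.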
Abstract:
The Byrd Glacier discontinuity is a major boundary crossing the Ross Orogen, with crystalline rocks to the north and primarily sedimentary rocks to the south. Most models for the tectonic development of the Ross Orogen in the central Transantarctic Mountains consist of two-dimensional transects across the belt, but do not address the major longitudinal contrast at Byrd Glacier. This paper presents a tectonic model centering on the Byrd Glacier discontinuity. Rifting in the Neoproterozoic produced a crustal promontory in the craton margin to the north of Byrd Glacier. Oblique convergence of the terrane (Beardmore microcontinent) during the latest Neoproterozoic and Early Cambrian was accompanied by subduction along the craton margin of East Antarctica. New data presented herein in support of this hypothesis are U-Pb dates of 545.7 ± 6.8 Ma and 531.0 ± 7.5 Ma on plutonic rocks from the Britannia Range and Byrd Glacier. After docking of the terrane, subduction stepped out, and the Byrd Group was deposited during the Atdabanian-Botomian across the inner margin of the terrane. Beginning in the upper Botomian, reactivation of the sutured boundaries of the terrane resulted in an outpouring of clastic sediment and folding and faulting of the Byrd Group.
Abstract:
We present Plio-Pleistocene records of sediment color, %CaCO3, foraminifer fragmentation, benthic carbon isotopes (δ13C) and radiogenic isotopes (Sr, Nd, Pb) of the terrigenous component from IODP Site U1313, a reoccupation of benchmark subtropical North Atlantic Ocean DSDP Site 607. We show that (inter)glacial cycles in sediment color and %CaCO3 pre-date major northern hemisphere glaciation and are unambiguously and consistently correlated to benthic oxygen isotopes back to 3.3 million years ago (Ma), and intermittently so probably back to the Miocene/Pliocene boundary. We show these lithological cycles to be driven by enhanced glacial fluxes of terrigenous material (eolian dust), not carbonate dissolution (the classic interpretation). Our radiogenic isotope data indicate a North American source for this dust (~3.3-2.4 Ma), in keeping with the interpreted source of terrestrial plant wax-derived biomarkers deposited at Site U1313. Yet our data indicate a mid-latitude provenance regardless of (inter)glacial state, a finding that is inconsistent with the biomarker-inferred importance of glaciogenic mechanisms of dust production and transport. Moreover, we find that the relation between the biomarker and lithogenic components of dust accumulation is distinctly non-linear. Both records show a jump in glacial rates of accumulation from Marine Isotope Stage (MIS) G6 (2.72 Ma) onwards, but the amplitude of this signal is about 3-8 times greater for biomarkers than for dust and particularly extreme during MIS 100 (2.52 Ma). We conclude that North America shifted abruptly to a distinctly more arid glacial regime from MIS G6, but major shifts in glacial North American vegetation biomes and regional wind fields (exacerbated by the growth of a large Laurentide Ice Sheet during MIS 100) likely explain the amplification of this signal in the biomarker records.
Our findings are consistent with wetter-than-modern reconstructions of North American continental climate under the warm high CO2 conditions of the Early Pliocene but contrast with most model predictions for the response of the hydrological cycle to anthropogenic warming over the coming 50 years (poleward expansion of the subtropical dry zones).
Abstract:
We conducted a six-week investigation of the sea ice inorganic carbon system during the winter-spring transition in the Canadian Arctic Archipelago. Samples for the determination of sea ice geochemistry were collected in conjunction with physical and biological parameters as part of the 2010 Arctic-ICE (Arctic - Ice-Covered Ecosystem in a Rapidly Changing Environment) program, a sea ice-based process study in Resolute Passage, Nunavut. The goal of Arctic-ICE was to determine the physical-biological processes controlling the timing of primary production in Arctic landfast sea ice and to better understand the influence of these processes on the drawdown and release of climatically active gases. The field study was conducted from 1 May to 21 June, 2010.
Abstract:
Inter-individual variation in diet within generalist animal populations is thought to be a widespread phenomenon, but its potential causes are poorly known. Inter-individual variation can be amplified by the availability and use of allochthonous resources, i.e., resources coming from spatially distinct ecosystems. Using a wild population of arctic fox as a study model, we tested hypotheses that could explain variation in both population and individual isotopic niches, used here as a proxy for the trophic niche. The arctic fox is an opportunistic forager, dwelling in terrestrial and marine environments characterized by strong spatial (arctic-nesting birds) and temporal (cyclic lemmings) fluctuations in resource abundance. First, we tested the hypothesis that generalist foraging habits, in association with temporal variation in prey accessibility, should induce temporal changes in isotopic niche width and diet. Second, we investigated whether within-population variation in the isotopic niche could be explained by individual characteristics (sex and breeding status) and environmental factors (spatiotemporal variation in prey availability). We addressed these questions using isotopic analysis and Bayesian mixing models in conjunction with linear mixed-effects models. We found that: i) arctic fox populations can simultaneously undergo short-term (i.e., within a few months) reduction in both isotopic niche width and inter-individual variability in isotopic ratios, ii) individual isotopic ratios were higher and more representative of a marine-based diet for non-breeding than breeding foxes early in spring, and iii) lemming population cycles did not appear to directly influence the diet of individual foxes after taking their breeding status into account. However, lemming abundance was correlated with the proportion of breeding foxes, and could thus indirectly affect the diet at the population scale.
Abstract:
Anthropogenic CO2 emissions have exacerbated two environmental stressors, global climate warming and ocean acidification (OA), that have serious implications for marine ecosystems. Coral reefs are vulnerable to climate change, yet few studies have explored the potential for interactive effects of warming temperature and OA on an important coral reef calcifier, crustose coralline algae (CCA). Coralline algae serve many important ecosystem functions on coral reefs and are one of the organisms most sensitive to ocean acidification. We investigated the effects of elevated pCO2 and temperature on calcification of Hydrolithon onkodes, an important species of reef-building coralline algae, and the subsequent effects on susceptibility to grazing by sea urchins. H. onkodes was exposed to a fully factorial combination of pCO2 (420, 530, 830 µatm) and temperature (26, 29 °C) treatments, and calcification was measured by the change in buoyant weight after 21 days of treatment exposure. Temperature and pCO2 had a significant interactive effect on net calcification of H. onkodes that was driven by the increased calcification response to moderately elevated pCO2. We demonstrate that the CCA calcification response was variable and non-linear, and that there was a trend for highest calcification at ambient temperature. H. onkodes was then exposed to grazing by the sea urchin Echinothrix diadema, and grazing was quantified by the change in CCA buoyant weight from grazing trials. E. diadema removed 60% more CaCO3 from H. onkodes grown at high temperature and high pCO2 than at ambient temperature and low pCO2. The increased susceptibility to grazing in the high pCO2 treatment is among the first evidence indicating the potential for cascading effects of OA and temperature on coral reef organisms and their ecological interactions.
Abstract:
Climate change threatens both the accretion and erosion processes that sustain coral reefs. Secondary calcification, bioerosion, and reef dissolution are integral to the structural complexity and long-term persistence of coral reefs, yet these processes have received less research attention than reef accretion by corals. In this study, we use climate scenarios from RCP 8.5 to examine the combined effects of rising ocean acidity and sea surface temperature (SST) on both secondary calcification and dissolution rates of a natural coral rubble community using a flow-through aquarium system. We found that secondary reef calcification and dissolution responded differently to the combined effect of pCO2 and temperature. Calcification had a non-linear response to the combined effect of pCO2 and temperature: the highest calcification rate occurred slightly above ambient conditions and the lowest calcification rate was in the highest temperature-pCO2 condition. In contrast, dissolution increased linearly with temperature-pCO2. The rubble community switched from net calcification to net dissolution at +271 µatm pCO2 and 0.75 °C above ambient conditions, suggesting that rubble reefs may shift from net calcification to net dissolution before the end of the century. Our results indicate that (i) dissolution may be more sensitive to climate change than calcification and (ii) that calcification and dissolution have different functional responses to climate stressors; this highlights the need to study the effects of climate stressors on both calcification and dissolution to predict future changes in coral reefs.
Abstract:
The aim is to obtain computationally more powerful, neurophysiologically founded artificial neurons and neural nets. Artificial Neural Nets (ANN) of the Perceptron type evolved from the original proposal in McCulloch and Pitts' classical paper [1]. Essentially, they keep the computing structure of a linear machine followed by a non-linear operation. The McCulloch-Pitts formal neuron (which was never considered by the authors to be a model of real neurons) consists of the simplest case: a linear computation on the inputs followed by a threshold. Networks of one layer cannot compute any logical function of the inputs, but only those which are linearly separable. Thus, even the simple exclusive-OR (contrast detector) function of two inputs requires two layers of formal neurons.
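The closing claim can be checked directly: XOR of two inputs is not linearly separable, so no single threshold unit computes it, but a two-layer network of McCulloch-Pitts units does. The weights and thresholds below are one standard hand-chosen solution, not taken from the paper:

```python
# Two-layer McCulloch-Pitts network computing XOR. A single threshold
# unit cannot do this (XOR is not linearly separable); two layers can.

def threshold_unit(inputs, weights, theta):
    """McCulloch-Pitts neuron: fires iff the weighted sum reaches theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def xor_two_layer(x1, x2):
    # Hidden layer: h1 = x1 AND NOT x2, h2 = x2 AND NOT x1.
    h1 = threshold_unit((x1, x2), (1, -1), 1)
    h2 = threshold_unit((x1, x2), (-1, 1), 1)
    # Output layer: OR of the two hidden units.
    return threshold_unit((h1, h2), (1, 1), 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_two_layer(a, b))  # prints the XOR truth table
```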