946 results for Time domain simulation tools


Relevance:

100.00%

Publisher:

Abstract:

Sound knowledge of the spatial and temporal patterns of rockfalls is fundamental for the management of this very common hazard in mountain environments. Process-based, three-dimensional simulation models are nowadays capable of reproducing the spatial distribution of rockfall occurrences with reasonable accuracy by simulating numerous individual trajectories on highly resolved digital terrain models. At the same time, however, simulation models typically fail to quantify the ‘real’ frequency of rockfalls (in terms of return intervals). The analysis of impact scars on trees, in contrast, yields real rockfall frequencies, but trees may not be present at the location of interest, and rare trajectories may not necessarily be captured due to the limited age of forest stands. In this article, we demonstrate that coupling modeling with tree-ring techniques may overcome the limitations inherent in both approaches. Based on the analysis of 64 cells (40 m × 40 m) of a rockfall slope located above a 1631-m-long road section in the Swiss Alps, we present results from 488 rockfalls detected in 1260 trees. We show that tree impact data can not only be used (i) to reconstruct the real frequency of rockfalls for individual cells, but also serve (ii) to calibrate the rockfall model Rockyfor3D and (iii) to transform simulated trajectories into real frequencies. Calibrated simulation results are in good agreement with real rockfall frequencies and exhibit significant differences in rockfall activity between the cells (zones) along the road section. Real frequencies, expressed as rock passages per meter of road section, also enable quantification and direct comparison of the hazard potential between the zones. The contribution provides an approach for hazard zoning procedures that complements traditional methods with a quantification of rockfall frequencies in terms of return intervals through the systematic inclusion of impact records in trees.
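The frequency bookkeeping described in the abstract (impact counts per cell, return intervals, and passages per meter of road) can be sketched in a few lines; all numbers below are hypothetical placeholders, not values from the study:

```python
# All numbers here are hypothetical placeholders, not values from the study.

def cell_frequency(n_impacts, record_years):
    """Rockfalls per year reconstructed for one cell from tree impacts."""
    return n_impacts / record_years

def return_interval(freq_per_year):
    """Mean return interval in years (inverse of the annual frequency)."""
    return 1.0 / freq_per_year

def passages_per_meter(freq_per_year, cell_width_m=40.0):
    """Rock passages per meter of road section per year, using the
    40 m cell width of the paper's grid."""
    return freq_per_year / cell_width_m

f = cell_frequency(n_impacts=12, record_years=120)   # 0.1 events per year
ri = return_interval(f)                              # 10-year return interval
ppm = passages_per_meter(f)                          # passages per meter per year
```

The per-meter rate is what allows direct comparison of hazard potential between zones of different width along the road.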

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Several parameters of heart rate variability (HRV) have been shown to predict the risk of sudden cardiac death (SCD) in cardiac patients. There is consensus that risk prediction is improved when HRV is measured during specific provocations such as an orthostatic challenge. For the first time, we provide data on the reproducibility of such a test in patients with a history of acute coronary syndrome. METHODS: Sixty male patients (65 ± 8 years) with a history of acute coronary syndrome on stable medication were included. HRV was measured in the supine (5 min) and standing (5 min) positions on two occasions separated by two weeks. The time-domain measures relevant for risk assessment [standard deviation of all R-R intervals (SDNN) and root mean square of successive differences between adjacent R-R intervals (RMSSD)], frequency-domain measures [low-frequency power (LF), high-frequency power (HF) and the LF/HF power ratio] and the short-term fractal scaling component (DF1) were computed. Absolute reproducibility was assessed with the standard errors of the mean (SEM) and 95% limits of random variation, and relative reproducibility with the intraclass correlation coefficient (ICC). RESULTS: We found comparable SEMs and ICCs in the supine position and after an orthostatic challenge test. All ICCs were good to excellent (between 0.636 and 0.869). CONCLUSIONS: Reproducibility of HRV parameters during orthostatic challenge is good and comparable with that in the supine position.
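The two time-domain measures named in the abstract can be computed directly from an R-R interval series; the short synthetic series below is illustrative only:

```python
import numpy as np

# Synthetic R-R interval series in milliseconds (illustrative, not study data).
rr = np.array([800, 810, 790, 805, 795, 815, 800], dtype=float)

# SDNN: standard deviation of all R-R intervals (sample SD).
sdnn = rr.std(ddof=1)

# RMSSD: root mean square of successive differences between adjacent intervals.
diffs = np.diff(rr)
rmssd = np.sqrt(np.mean(diffs**2))
```

In practice both measures are computed on 5-min segments, as in the supine and standing recordings of the study.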

Relevance:

100.00%

Publisher:

Abstract:

MRSI grids frequently show spectra of poor quality, mainly because of the high sensitivity of MRS to field inhomogeneities. These poor-quality spectra are prone to quantification and/or interpretation errors that can have a significant impact on the clinical use of spectroscopic data. Therefore, quality control of the spectra should always precede their clinical use. When performed manually, quality assessment of MRSI spectra is not only a tedious and time-consuming task, but is also affected by human subjectivity. Consequently, automatic, fast and reliable methods for spectral quality assessment are of utmost interest. In this article, we present a new random forest-based method for automatic quality assessment of ¹H MRSI brain spectra, which uses a new set of MRS signal features. The random forest classifier was trained on spectra from 40 MRSI grids that were classified as acceptable or non-acceptable by two expert spectroscopists. To account for the effects of intra-rater reliability, each spectrum was rated for quality three times by each rater. The automatic method classified these spectra with an area under the curve (AUC) of 0.976. Furthermore, on the subset of spectra containing only the cases that were classified the same way every time by the spectroscopists, an AUC of 0.998 was obtained. Feature importance for the classification was also evaluated. Frequency-domain skewness and kurtosis, as well as time-domain signal-to-noise ratios (SNRs) in the ranges 50-75 ms and 75-100 ms, were the most important features. Given that the method is able to assess a whole MRSI grid faster than a spectroscopist (approximately 3 s versus approximately 3 min) and without loss of accuracy (agreement between the classifier trained with just one session and any of the other labelling sessions, 89.88%; agreement between any two labelling sessions, 89.03%), the authors suggest its implementation in the clinical routine. The method presented in this article was implemented in jMRUI's SpectrIm plugin. Copyright © 2016 John Wiley & Sons, Ltd.
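As a rough illustration of the most important features reported above, the sketch below computes frequency-domain skewness and kurtosis and windowed time-domain SNRs on a synthetic free-induction decay; the decay constant, dwell time and noise model are assumptions, not the paper's data:

```python
import numpy as np

def skewness(x):
    """Third standardized moment of a feature vector."""
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

def kurtosis(x):
    """Fourth standardized moment (non-excess) of a feature vector."""
    d = x - x.mean()
    return (d**4).mean() / (d**2).mean() ** 2

# Synthetic FID: 50 ms decay, 60 Hz oscillation, small additive noise
# (all assumed values for illustration).
rng = np.random.default_rng(0)
dt = 1e-3                               # 1 ms dwell time (assumed)
t = np.arange(0.0, 0.2, dt)
fid = np.exp(-t / 0.05) * np.cos(2 * np.pi * 60 * t) \
      + 0.01 * rng.standard_normal(t.size)

# Frequency-domain shape features on the magnitude spectrum.
spec = np.abs(np.fft.rfft(fid))
feat_skew, feat_kurt = skewness(spec), kurtosis(spec)

def window_snr(sig, t, t0, t1, noise_sd=0.01):
    """Time-domain SNR in [t0, t1): RMS of the window over the noise SD."""
    w = sig[(t >= t0) & (t < t1)]
    return np.sqrt(np.mean(w**2)) / noise_sd

# The two windows highlighted by the paper's feature-importance analysis.
snr_50_75 = window_snr(fid, t, 0.050, 0.075)
snr_75_100 = window_snr(fid, t, 0.075, 0.100)
```

In the paper these features feed a random forest classifier; here only the feature extraction step is sketched.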

Relevance:

100.00%

Publisher:

Abstract:

For a reliable simulation of the time- and space-dependent CO2 redistribution between ocean and atmosphere, an appropriate time-dependent simulation of particle dynamics processes is essential, but had not been carried out so far. The major difficulties were the lack of suitable modules for particle dynamics and early diagenesis (needed in order to close the carbon and nutrient budgets) in ocean general circulation models, and the lack of understanding of biogeochemical processes such as the partial dissolution of calcareous particles in oversaturated water. The main target of ORFOIS was to fill this gap in our knowledge and prediction capability. This goal was achieved step by step. First, comprehensive databases of existing observations relevant to the three major types of biogenic particles, organic carbon (POC), calcium carbonate (CaCO3), and biogenic silica (BSi or opal), as well as to refractory particles of terrestrial origin, were collated and made publicly available.

Relevance:

100.00%

Publisher:

Abstract:

Detection of the climate response to orbital forcing during Cenozoic long-term global cooling is key to understanding the behavior of Earth's icehouse climate. Sedimentary rhythm, a rhythmic or cyclic variation in the sequence of sediments and sedimentary rocks, is useful for quantitative reconstruction of Earth's evolution through geological time. In this study, we attempt to (1) identify the sources of natural gamma ray (NGR) emissions of cores recovered during Ocean Drilling Program (ODP) Leg 186 by analyses of physical properties, major element concentrations, diatom abundances, and total organic carbon contents, (2) integrate the whole-core NGR intensity of the recovered cores with wireline logging NGR measurements in order to construct a continuous sedimentary sequence, and (3) discuss changes in the NGR signal in the time domain. This attempt provides preliminary information for discussing climate stability in relation to orbital forcing through geologic time. NGR values mainly reflect, indirectly, the amount of terrigenous minerals, including potassium and related elements, in the sediments. NGR intensity is also affected by high porosity, which in these sediments was related to the amount of diatom valves. NGR signals might thus be a proxy for the intensity of the East Asian monsoon off Sanriku. A continuous sedimentary record was constructed by integrating the whole-core NGR intensity measured in sediments from the drilled holes with that measured directly in the borehole by wireline logging, and then using a stratigraphic age model to convert it to a time series covering 1.3-9.7 Ma, with a short break at ~5 Ma. High sedimentation rate (H) stages were identified in the sequence, related to intervals of low-amplitude precession and eccentricity variations. The transition of the dominant periodicities through the four H stages may correlate with major shifts in the climate system, including the onset of major Northern Hemisphere glaciation, the initial stage of East Asian monsoon intensification, and the onset of the East Asian monsoon accompanying the uplift of the Himalayas and the Tibetan Plateau.
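The depth-to-time conversion and periodicity analysis described above can be sketched with a linear age model and a periodogram; the age-model tie points and the synthetic 41-kyr (obliquity-like) NGR cycle below are assumptions for illustration:

```python
import numpy as np

# Hypothetical age-depth control points (not the Leg 186 age model).
depth = np.linspace(0.0, 400.0, 2000)                  # meters below seafloor
age_ctrl_depth = np.array([0.0, 100.0, 250.0, 400.0])  # tie-point depths (m)
age_ctrl_ma = np.array([1.3, 3.0, 6.5, 9.7])           # tie-point ages (Ma)
age = np.interp(depth, age_ctrl_depth, age_ctrl_ma)    # depth -> age (Ma)

# Synthetic NGR series with a 41-kyr (0.041 Myr) cycle.
ngr = 30.0 + 5.0 * np.sin(2 * np.pi * age / 0.041)

# Resample to a uniform 5-kyr time step, then take a periodogram.
t_uniform = np.arange(age[0], age[-1], 0.005)
ngr_t = np.interp(t_uniform, age, ngr)
power = np.abs(np.fft.rfft(ngr_t - ngr_t.mean())) ** 2
freqs = np.fft.rfftfreq(t_uniform.size, d=0.005)       # cycles per Myr
dominant_period_myr = 1.0 / freqs[np.argmax(power)]    # should recover ~0.041
```

The same two steps, interpolation onto a uniform time axis followed by spectral analysis, are what allow dominant periodicities to be compared against orbital frequencies.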

Relevance:

100.00%

Publisher:

Abstract:

The climate during the Cenozoic era changed in several steps from ice-free poles and warm conditions to ice-covered poles and cold conditions. Since the 1950s, a body of information on ice volume and temperature changes has been built up, predominantly on the basis of measurements of the oxygen isotopic composition of shells of benthic foraminifera collected from marine sediment cores. The statistical methodology of time series analysis has also evolved, allowing more information to be extracted from these records. Here we provide a comprehensive view of Cenozoic climate evolution by means of a coherent and systematic application of time series analytical tools to each record from a compilation spanning the interval from 4 to 61 Myr ago. We quantitatively describe several prominent features of the oxygen isotope record, taking into account the various sources of uncertainty (including measurement error, proxy noise, and dating errors). The estimated transition times and amplitudes allow us to assess causal climatological-tectonic influences on the following known features of the Cenozoic oxygen isotopic record: the Paleocene-Eocene Thermal Maximum, the Eocene-Oligocene Transition, the Oligocene-Miocene Boundary, and the Middle Miocene Climate Optimum. We further describe and causally interpret the following features: the Paleocene-Eocene warming trend, the two-step long-term Eocene cooling, and the changes within the most recent interval (Miocene-Pliocene). We review the scope and methods of constructing Cenozoic stacks of benthic oxygen isotope records and present two new latitudinal stacks, which capture, besides global ice volume, bottom-water temperatures at low (less than 30°) and high latitudes. This review concludes with an identification of future directions for data collection, statistical method development, and climate modeling.
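One elementary version of estimating a transition time and amplitude from a noisy record is a least-squares change-point fit; the synthetic isotope-like series and the simple step model below are assumptions for illustration, not the paper's full methodology:

```python
import numpy as np

# Synthetic d18O-like record: a step of amplitude 1.0 at 34 Ma
# (an EOT-like transition) plus noise. All values are illustrative.
rng = np.random.default_rng(1)
t = np.linspace(61.0, 4.0, 400)            # age in Myr ago, old -> young
true_change, true_amp = 34.0, 1.0
signal = np.where(t > true_change, 1.5, 1.5 + true_amp)
d18o = signal + 0.1 * rng.standard_normal(t.size)

# Brute-force least-squares step fit: try each candidate change point,
# model each side by its mean, keep the split with the lowest SSE.
best_sse, best_tc, best_amp = np.inf, None, None
for i in range(10, t.size - 10):
    before, after = d18o[:i], d18o[i:]
    sse = ((before - before.mean()) ** 2).sum() \
        + ((after - after.mean()) ** 2).sum()
    if sse < best_sse:
        best_sse, best_tc = sse, t[i]
        best_amp = after.mean() - before.mean()
```

The fitted change point and mean offset recover the assumed transition time and amplitude to within the noise level, which is the basic idea behind the quantitative transition estimates in the review.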


Relevance:

100.00%

Publisher:

Abstract:

A particle accelerator is any device that uses electromagnetic fields to impart energy to charged particles (typically electrons or ionized atoms), accelerating them up to the level required for its purpose. The applications of particle accelerators are countless, from the common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the smallest details of matter. Engineering applications include ion implantation devices for producing better semiconductors and materials with remarkable properties; materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. Many devices are required for the correct operation of a particle accelerator. The most important are the particle sources; the guiding, focusing and correcting magnets; the radiofrequency accelerating cavities; the fast deflection devices; the beam diagnostic mechanisms; and the particle detectors. Historically, most fast particle deflection devices were built with copper coils and ferrite cores, which could perform a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, giving a response in the microseconds range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanoseconds range). This can only be achieved with an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of a particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short pulses of opposite phase. The deflection of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases, which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. The following concepts, among others, are analyzed: scattering parameters, resonant higher-order modes, and wakefields. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of electrodes are also detailed, with some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analyzed. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs). Solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with good flat tops.
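As a back-of-the-envelope companion to the deflection principle described above, the sketch below estimates the transverse kick of a strip-line kicker on an ultrarelativistic beam; all parameter values are illustrative, not CTF3 specifications:

```python
# All parameter values below are illustrative, not CTF3 specifications.

def kick_angle(voltage_v, length_m, gap_m, beam_energy_ev):
    """Transverse deflection angle (rad) for an ultrarelativistic beam,
    theta ~ 2*V*L / (d*E): voltage V across a gap d over electrode
    length L, with the electric and magnetic kicks of a matched
    strip-line assumed to add equally (hence the factor 2). The
    elementary charge cancels when E is given in eV."""
    return 2.0 * voltage_v * length_m / (gap_m * beam_energy_ev)

# 5 kV pulse, 1 m electrodes, 20 mm gap, 150 MeV beam: a few mrad.
theta = kick_angle(5e3, 1.0, 0.02, 150e6)
```

The estimate shows why kicker voltage, electrode length and gap trade off directly against beam energy when specifying the device.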

Relevance:

100.00%

Publisher:

Abstract:

This paper shows that today's modelling of electrical noise as coming from noisy resistances is nonsensical, as it contradicts their nature as systems bearing electrical noise. We present a new model for electrical noise that encompasses the work of Johnson and Nyquist and also agrees with the quantum-mechanical description of noisy systems given by Callen and Welton, in which electrical energy fluctuates and is dissipated over time. Through the two currents that the admittance function links in the frequency domain with their common voltage, this new model shows the cause-effect connection that exists between fluctuation and dissipation of energy in the time domain. In spite of its radical departure from today's beliefs about electrical noise in resistors, this complex model for electrical noise is obtained from Nyquist's result through basic concepts of circuit theory and thermodynamics that also apply to capacitors and inductors.
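For reference, the Nyquist result that the paper builds on gives the familiar thermal noise voltage v_rms = sqrt(4*k*T*R*Δf); the component values below are illustrative:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def thermal_noise_vrms(r_ohm, t_kelvin, bandwidth_hz):
    """RMS Johnson-Nyquist thermal noise voltage of a resistor,
    v_rms = sqrt(4*k*T*R*df)."""
    return math.sqrt(4.0 * k_B * t_kelvin * r_ohm * bandwidth_hz)

# 1 kOhm resistor at 300 K over a 1 MHz bandwidth: about 4 microvolts RMS.
v_rms = thermal_noise_vrms(1e3, 300.0, 1e6)
```

This is the 1-D (dissipation-only) quantity that the paper's complex, admittance-based model sets out to generalize.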

Relevance:

100.00%

Publisher:

Abstract:

Application of the spectrum analyzer for illustrating several concepts associated with mobile communications is discussed. Specifically, two groups of observable features are described. First, time variation and frequency selectivity of multipath propagation can be revealed by carrying out simple measurements on commercial-network GSM and UMTS signals. Second, the main time-domain and frequency-domain features of GSM and UMTS radio signals can be observed. This constitutes a valuable tool for teaching mobile communication courses.
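The frequency selectivity of multipath propagation mentioned above can be illustrated with the simplest two-ray channel, whose transfer function shows the periodic notches one would observe on a spectrum analyzer; the echo gain and delay below are illustrative:

```python
import numpy as np

# Two-ray multipath channel: direct path plus one echo of gain a and
# excess delay tau (illustrative values, not measured GSM/UMTS data).
a, tau = 0.6, 1e-6                      # echo gain, 1 us excess delay
f = np.linspace(0.0, 5e6, 5001)         # 0-5 MHz on a 1 kHz grid

# H(f) = 1 + a*exp(-j*2*pi*f*tau): magnitude ripples between 1-a and 1+a.
H = 1 + a * np.exp(-2j * np.pi * f * tau)
mag = np.abs(H)

notch_spacing_hz = 1.0 / tau            # notches repeat every 1 MHz
```

The notch spacing 1/tau is what makes a wideband UMTS signal visibly frequency-selective on the analyzer while a narrowband GSM carrier may fall entirely inside one fade.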

Relevance:

100.00%

Publisher:

Abstract:

The consideration of real operating conditions in the design and optimization of a multijunction solar cell receiver-concentrator assembly is indispensable. Such a requirement involves the need for suitable modeling and simulation tools in order to complement the experimental work and circumvent its well-known burdens and restrictions. Three-dimensional distributed models have been demonstrated in the past to be a powerful choice for the analysis of distributed phenomena in single- and dual-junction solar cells, as well as for the design of strategies to minimize solar cell losses when operating under high concentrations. In this paper, we present the application of these models to the analysis of triple-junction solar cells under real operating conditions. The impact of different chromatic aberration profiles on the short-circuit current of triple-junction solar cells is analyzed in detail using the developed distributed model. Current spreading determines the impact of a given chromatic aberration profile on the solar cell I-V curve. The focus is on determining the role of current spreading in the connection between the photocurrent profile, subcell voltage and current, and the sheet resistance of the semiconductor layers.
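The role of the photocurrent profile in a series-connected stack can be illustrated with a toy calculation: without lateral current spreading, each point of the cell is limited by its locally weakest subcell. The radial profiles below are hypothetical, not outputs of the distributed model:

```python
import numpy as np

# Hypothetical radial photocurrent profiles (arbitrary units) for the
# top/middle/bottom subcells under a chromatically aberrated spot.
r = np.linspace(0.0, 1.0, 11)
j_top = 14.0 * np.exp(-3.0 * r**2)    # short wavelengths focused tightly
j_mid = 14.0 * np.exp(-1.0 * r**2)    # mid wavelengths spread wider
j_bot = 13.0 * np.ones_like(r)        # long wavelengths nearly uniform

# No lateral spreading: series connection limits each point locally.
j_local = np.minimum(np.minimum(j_top, j_mid), j_bot)

# Perfect spreading: only the area-averaged subcell currents matter.
j_ideal = min(j_top.mean(), j_mid.mean(), j_bot.mean())
```

Averaging the local limit always gives less than or equal to the ideal limit, which is why current spreading can recover part of the short-circuit current lost to chromatic aberration.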

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a physically cogent model for electrical noise in resistors, obtained from thermodynamic reasoning. This new model, derived from the works of Johnson and Nyquist, also agrees with the quantum model for noisy systems handled by Callen and Welton in 1951, thus unifying these two physical viewpoints. It is a complex, or 2-D, noise model based on an admittance that considers both fluctuation and dissipation of electrical energy, surpassing the real, 1-D, model in use, which considers only dissipation. The new model is shown in the frequency domain through the two orthogonal currents linked with a common noise voltage by an admittance function. Its use in the time domain reveals the pitfall behind a paradox of statistical mechanics, whereby systems considered energy-conserving and deterministic on the microscale are dissipative and unpredictable on the macroscale, and also shows how to use the Fluctuation-Dissipation Theorem properly.
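A textbook check of the fluctuation-dissipation connection discussed above is the RC case: integrating the Nyquist density 4kTR, filtered by the RC admittance, over all frequencies yields the capacitor's fluctuation energy kT/C, independent of R. The component values below are illustrative:

```python
import numpy as np

k_B = 1.380649e-23                    # Boltzmann constant, J/K
T, R, C = 300.0, 1e3, 1e-12           # illustrative values: 300 K, 1 kOhm, 1 pF

f_c = 1.0 / (2 * np.pi * R * C)       # RC cutoff frequency (Hz)
f = np.linspace(0.0, 1000.0 * f_c, 2_000_001)

# One-sided voltage noise PSD on the capacitor: Nyquist density of the
# resistor shaped by the first-order RC low-pass response.
s_v = 4 * k_B * T * R / (1 + (f / f_c) ** 2)

# Trapezoidal integration over frequency gives the mean-square voltage.
v2 = float(np.sum(0.5 * (s_v[1:] + s_v[:-1]) * np.diff(f)))

kT_over_C = k_B * T / C               # the expected kT/C result
```

The numerical integral matches kT/C to well under a percent, showing that R sets only the bandwidth of the fluctuation, not its total energy.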

Relevance:

100.00%

Publisher:

Abstract:

We have recently demonstrated a biosensor based on a lattice of SU8 pillars on a 1 μm SiO2/Si wafer, interrogated by measuring vertical reflectivity as a function of wavelength. Biodetection has been proven with the combination of Bovine Serum Albumin (BSA) protein and its antibody (antiBSA). A BSA layer is attached to the pillars; the biorecognition of antiBSA produces a shift in the reflectivity curve related to the concentration of antiBSA. A detection limit on the order of 2 ng/ml is achieved for a rhombic lattice of pillars with a lattice parameter (a) of 800 nm, a height (h) of 420 nm and a diameter (d) of 200 nm. These results correlate with calculations using the 3D finite-difference time-domain method. A simplified 2D model is proposed, consisting of a multilayer stack in which the pillars are replaced by a 420 nm layer with an effective refractive index obtained using a Beam Propagation Method (BPM) algorithm. Results provided by this model are in good correlation with experimental data, while reducing the computation time from one day to 15 minutes, giving a fast but accurate tool to optimize the design, maximize sensitivity, and analyze the influence of different variables (diameter, height and lattice parameter). Sensitivity is obtained for a variety of configurations, reaching a limit of detection under 1 ng/ml. The optimum design is chosen not only for its sensitivity but also for its feasibility, both from the fabrication (limited by the aspect ratio and proximity of the pillars) and the fluidic points of view. (© 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
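The simplified multilayer idea can be sketched with the standard normal-incidence transfer-matrix method; the effective index used below is an assumed placeholder, not the BPM-derived value:

```python
import numpy as np

def layer_matrix(n, d_nm, wavelength_nm):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    phi = 2 * np.pi * n * d_nm / wavelength_nm
    return np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                     [1j * n * np.sin(phi), np.cos(phi)]], dtype=complex)

def reflectivity(layers, n_in, n_sub, wavelength_nm):
    """Normal-incidence reflectivity of a stack of (index, thickness_nm)
    layers between an incidence medium n_in and a substrate n_sub."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, wavelength_nm)
    num = n_in * m[0, 0] + n_in * n_sub * m[0, 1] - m[1, 0] - n_sub * m[1, 1]
    den = n_in * m[0, 0] + n_in * n_sub * m[0, 1] + m[1, 0] + n_sub * m[1, 1]
    return abs(num / den) ** 2

# Effective 420 nm pillar layer (n_eff = 1.2, assumed) over 1000 nm SiO2
# (n = 1.45) on a Si substrate (n = 3.8), probed at 633 nm.
stack = [(1.2, 420.0), (1.45, 1000.0)]
R = reflectivity(stack, n_in=1.0, n_sub=3.8, wavelength_nm=633.0)
```

Sweeping the wavelength (or the effective index, to mimic antiBSA binding) with this function reproduces the kind of reflectivity-curve shift that the sensor measures, at a tiny fraction of the cost of a 3D FDTD run.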

Relevance:

100.00%

Publisher:

Abstract:

Current nanometer technologies suffer from within-die parameter uncertainties, varying workload conditions, aging, and temperature effects that cause a serious reduction in yield and performance. In this scenario, monitoring, calibration, and dynamic adaptation become essential, demanding systems with a collection of multipurpose monitors and exposing the need for lightweight monitoring networks. This paper presents a new monitoring network paradigm able to perform an early prioritization of the information. This is achieved by the introduction of a new hierarchy level, the threshing level. Targeting it, we propose a time-domain signaling scheme over a single wire that minimizes the network switching activity as well as the routing requirements. To validate our approach, we make a thorough analysis of the architectural trade-offs and present two complete monitoring systems that achieve a 40% area improvement and a power reduction of three orders of magnitude compared with previous works.
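As a purely hypothetical illustration of time-domain signaling on a single wire (the paper's actual threshing-level protocol is not reproduced here), a monitor could encode a value as a pulse width and the receiver decode it by counting clock ticks:

```python
# Hypothetical pulse-width scheme; names and parameters are inventions
# for illustration, not the paper's protocol.

def encode(value, ticks_per_unit=4):
    """Line samples (one per clock tick): the value maps to a
    proportional pulse width, with a trailing low tick ending the pulse."""
    return [1] * (value * ticks_per_unit) + [0]

def decode(samples, ticks_per_unit=4):
    """Count the ticks during which the line is high and rescale."""
    return sum(1 for s in samples if s == 1) // ticks_per_unit

value_sent = 7
value_received = decode(encode(value_sent))
```

A scheme of this kind needs only one wire and no parallel data bus, which is the property that lets such a network minimize switching activity and routing overhead.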