868 results for Part-time


Relevance:

30.00%

Publisher:

Abstract:

This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake as a single case study. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate, and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation of the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested: the first is a threshold-based method which uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
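
The abstract does not name the two Early Warning parameters it measures; in the EEW literature these are commonly the predominant period τc and the peak displacement Pd of the first few seconds of the P wave. A minimal sketch under that assumption (window length and any pre-filtering are illustrative, not the thesis's choices):

```python
import numpy as np

def tau_c_pd(displacement, fs, window_s=3.0):
    """Predominant period tau_c and peak displacement Pd from the first
    seconds of P-wave displacement. Hypothetical sketch: window length
    and pre-processing are assumptions, not values from the thesis."""
    n = int(window_s * fs)
    u = np.asarray(displacement[:n], dtype=float)
    v = np.gradient(u) * fs                        # velocity by numerical differentiation
    r = np.trapz(v**2, dx=1/fs) / np.trapz(u**2, dx=1/fs)
    tau_c = 2.0 * np.pi / np.sqrt(r)               # Kanamori-style predominant period
    pd = float(np.max(np.abs(u)))                  # peak displacement
    return tau_c, pd
```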

Relevance:

30.00%

Publisher:

Abstract:

Schroeder's backward integration method is the most widely used method for extracting the decay curve of an acoustic impulse response and calculating the reverberation time from this curve. The limits of this method, and possible improvements to it, are widely discussed in the literature. In this work a new method is proposed for the evaluation of the energy decay curve. The new method has been implemented in a Matlab toolbox, and its performance has been tested against the most established method in the literature. The values of EDT and reverberation time extracted from the energy decay curves calculated with both methods have been compared, both in terms of the values themselves and in terms of their statistical representativeness. The main case study consists of nine Italian historical theatres in which acoustical measurements were performed. The comparison of the two extraction methods has also been applied to a critical case, i.e., the structural impulse responses of some building elements. The comparison shows that both methods return comparable values of T30. As the evaluation range decreases, increasing differences appear; in particular, the main differences lie in the first part of the decay, where the EDT is evaluated. This is a consequence of the fact that the new method returns a “locally” defined energy decay curve, whereas Schroeder's method accumulates energy from the tail to the beginning of the impulse response. Another characteristic of the new energy-decay-curve extraction method is its independence from the background noise estimation. Finally, a statistical analysis is performed on the T30 and EDT values calculated from the impulse response measurements in the Italian historical theatres. The aim of this evaluation is to determine whether a subset of measurements can be considered representative for a complete characterization of these opera houses.
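
For reference, the classical Schroeder curve against which the new method is compared integrates the squared impulse response backward from the tail, and T30 is read from a line fitted between −5 and −35 dB. A minimal sketch of this standard textbook procedure (not of the thesis's new method):

```python
import numpy as np

def schroeder_decay_db(ir):
    """Schroeder backward integration: energy decay curve in dB."""
    edc = np.cumsum(ir.astype(float)[::-1] ** 2)[::-1]   # accumulate from the tail
    return 10.0 * np.log10(edc / edc[0])

def t30(edc_db, fs):
    """Reverberation time from the -5 to -35 dB fit, extrapolated to -60 dB."""
    i5 = int(np.argmax(edc_db <= -5.0))
    i35 = int(np.argmax(edc_db <= -35.0))
    t = np.arange(len(edc_db)) / fs
    slope, _ = np.polyfit(t[i5:i35], edc_db[i5:i35], 1)  # decay rate in dB per second
    return -60.0 / slope
```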

Relevance:

30.00%

Publisher:

Abstract:

Magnetic Resonance Spectroscopy (MRS) is an advanced clinical and research application which provides a specific biochemical and metabolic characterization of tissues through the detection and quantification of key metabolites for diagnosis and disease staging. The Associazione Italiana di Fisica Medica (AIFM) has promoted the activity of the "Interconfronto di spettroscopia in RM" working group. The purpose of the study is to compare and analyze results obtained by performing MRS on scanners from different manufacturers in order to compile a robust protocol for spectroscopic examinations in clinical routine. This thesis contributes to this project using the GE Signa HDxt 1.5 T scanner at Pavilion no. 11 of the S.Orsola-Malpighi hospital in Bologna. The spectral analyses have been performed with the jMRUI package, which includes a wide range of preprocessing and quantification algorithms for signal analysis in the time domain. After quality assurance on the scanner with standard and innovative methods, spectra both with and without suppression of the water peak have been acquired on the GE test phantom. The comparison of the ratios of the metabolite amplitudes over Creatine computed by the workstation software, which operates in the frequency domain, and by jMRUI shows good agreement, suggesting that quantifications in both domains may lead to consistent results. The characterization of an in-house phantom provided by the working group has achieved its goal of assessing the solution content and the metabolite concentrations with good accuracy. The soundness of the experimental procedure and data analysis has been demonstrated by the correct estimation of the T2 of water, the observed biexponential relaxation curve of Creatine, and the correct TE value at which modulation by J-coupling causes the Lactate doublet to be inverted in the spectrum. The work of this thesis has demonstrated that it is possible to perform measurements and establish protocols for data analysis, based on the physical principles of NMR, which are able to provide robust values for the spectral parameters of clinical use.
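
One of the checks mentioned above, estimating the T2 of water from signals acquired at several echo times, amounts to fitting a mono-exponential decay S(TE) = S0·exp(−TE/T2). A minimal sketch with illustrative numbers (not values from the thesis):

```python
import numpy as np
from scipy.optimize import curve_fit

te = np.array([30.0, 60.0, 120.0, 240.0, 480.0])        # echo times in ms, hypothetical
sig = 1000.0 * np.exp(-te / 400.0) + np.random.normal(0, 5, te.size)

model = lambda te, s0, t2: s0 * np.exp(-te / t2)        # mono-exponential T2 decay
(s0, t2), _ = curve_fit(model, te, sig, p0=(sig.max(), 200.0))
print(f"estimated T2 ≈ {t2:.0f} ms")
```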

Relevance:

30.00%

Publisher:

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One application is checking the safety clearances of individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a prescribed safety distance to the surrounding components, both in their rest position and during a motion. If components fall below the safety distance, their shape or position must be changed. To do this, it is important to know exactly which regions of the components violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g., triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call them the tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that specialized tolerance tests perform considerably better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Beyond that, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects consisting of many hundreds of thousands of primitives each.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used above, which we call Shrubs. Previous approaches to reducing the memory footprint of uniform grids rely mainly on hashing methods, but these do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach exploits this redundancy to losslessly compress the cell contents of a uniform grid to one fifth of their original size and to decompress them at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to various path-planning problems.
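
To make the notion of a tolerance test concrete: given two triangles and a safety distance d, cheap conservative tests can often settle the question without an exact distance computation. A minimal sketch of such pre-tests (the dual-space test developed in the thesis is not reproduced here):

```python
import numpy as np

def tolerance_pretest(tri_a, tri_b, d):
    """Conservative triangle-triangle tolerance pre-tests; tri_a and tri_b
    are (3, 3) vertex arrays. Returns True/False when cheaply decidable,
    or None when an exact triangle-triangle distance test is still needed."""
    ca, cb = tri_a.mean(axis=0), tri_b.mean(axis=0)
    ra = np.max(np.linalg.norm(tri_a - ca, axis=1))
    rb = np.max(np.linalg.norm(tri_b - cb, axis=1))
    if np.linalg.norm(ca - cb) - ra - rb > d:
        return False                       # bounding spheres already farther apart than d
    pairwise = np.linalg.norm(tri_a[:, None, :] - tri_b[None, :, :], axis=2)
    if pairwise.min() <= d:
        return True                        # some vertex pair is already within tolerance
    return None                            # undecided: fall back to an exact test
```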

Relevance:

30.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution across cylinders has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
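
The transport-delay correction mentioned above is commonly done by locating the peak of the cross-correlation between a reference signal and the lagged one; the thesis's actual procedure is not given in the abstract. A minimal sketch under that assumption:

```python
import numpy as np

def align_by_xcorr(reference, delayed, max_lag):
    """Estimate a transport delay (in samples) from the cross-correlation
    peak and shift the delayed signal to undo it. Hypothetical sketch."""
    ref = np.asarray(reference, float) - np.mean(reference)
    sig = np.asarray(delayed, float) - np.mean(delayed)
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.sum(ref[max(0, -l): len(ref) - max(0, l)] *
                 sig[max(0, l): len(sig) - max(0, -l)]) for l in lags]
    lag = int(lags[int(np.argmax(xc))])
    return np.roll(delayed, -lag), lag
```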

Relevance:

30.00%

Publisher:

Abstract:

Reactive transport modelling was used to simulate solute transport, thermodynamic reactions, ion exchange, and biodegradation in the Porewater Chemistry (PC) experiment at the Mont Terri Rock Laboratory. Simulations show that the most important chemical processes controlling the fluid composition within the borehole and the surrounding formation during the experiment are ion exchange, biodegradation, and dissolution/precipitation reactions involving pyrite and carbonate minerals. In contrast, thermodynamic mineral dissolution/precipitation reactions involving alumosilicate minerals have little impact on the fluid composition on the time scale of the experiment. With an accurate description of the initial chemical conditions in the formation, in combination with kinetic formulations describing the different stages of bacterial activity, it has been possible to reproduce the evolution of important system parameters, such as pH, redox potential, total organic C, dissolved inorganic C, and SO4 concentrations. Leaching of glycerol from the pH electrode may be the primary source of the organic material that initiated bacterial growth, which caused the chemical perturbation in the borehole. Results from these simulations are consistent with data from the over-coring and demonstrate that the Opalinus Clay has a high buffering capacity with respect to chemical perturbations caused by bacterial activity. This buffering capacity can be attributed to the carbonate system as well as to the reactivity of clay surfaces.
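
The kinetic formulations themselves are not detailed in the abstract; the simplest form such a rate law can take is first-order decay of the degradable substrate, dc/dt = −kc. An illustration with hypothetical numbers (not values from the study):

```python
import numpy as np

k = 0.05                          # 1/day, hypothetical first-order rate constant
c0 = 1.0                          # mmol/L, hypothetical initial substrate concentration
t = np.linspace(0.0, 100.0, 11)   # days
c = c0 * np.exp(-k * t)           # analytic solution of dc/dt = -k*c
print(np.round(c, 3))
```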

Relevance:

30.00%

Publisher:

Abstract:

The dentist enjoys a high degree of professional independence and is seen as reliable and productive at work while carrying great responsibility. The dentist's foremost social responsibility is to treat patients suffering from toothache and to promote preventive oral health care for all people, regardless of their social status. At the same time, the dentist is prestigious, respected, and regarded as honest. Like other professions, however, dentistry is under public pressure: the media often associate the dental profession with negative attributes such as sadism, immorality, or madness. Does the image of the dental profession suffer in this context? Our first article discusses the environmental factors that can be identified as influencing both the individual dentist and, ultimately, the image of dentistry as a whole.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to evaluate the difference between the effects of a 5-day and a 1-day course of antibiotics on the incidence of postoperative infection after displaced fractures of the orbit. A total of 62 patients with orbital blow-out fractures were randomly assigned to two groups, both of which were given amoxicillin/clavulanic acid 1.2 g intravenously every 8 h from the time of admission until 24 h postoperatively. The 5-day group was then given amoxicillin/clavulanic acid 625 mg orally every 8 h for 4 further days; the 1-day group was given placebo orally at the same time intervals. Follow-up appointments were at 1, 2, 4, 6, and 12 weeks, and 6 months, postoperatively. An infection in the orbital region was the primary end point. Sixty of the 62 patients completed the study. Two of the 29 patients in the 5-day group (6.9%) and 1 of the 31 patients in the 1-day group (3.2%) developed local infections. In the 5-day group one patient developed diarrhoea; in the 1-day group one patient developed a rash on the trunk. There were no significant differences in the incidence of infection or side effects between the groups. We conclude that in displaced orbital fractures a postoperative 1-day course of antibiotics is as effective in preventing infective complications as a 5-day regimen.
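
The abstract does not state which statistical test was used; for counts this sparse, Fisher's exact test is a natural check. A quick sketch reproducing the non-significance claim from the reported 2×2 table:

```python
from scipy.stats import fisher_exact

table = [[2, 27],   # 5-day group: 2 of 29 infected
         [1, 30]]   # 1-day group: 1 of 31 infected
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"p = {p:.2f}")  # well above 0.05, consistent with no significant difference
```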

Relevance:

30.00%

Publisher:

Abstract:

Neural dynamic processes correlated over several time scales are found in vivo, in stimulus-evoked as well as spontaneous activity, and are thought to affect the way sensory stimulation is processed. Despite their potential computational consequences, a systematic description of the presence of multiple time scales in single cortical neurons is lacking. In this study, we injected fast-spiking and pyramidal (PYR) neurons in vitro with long-lasting episodes of step-like and noisy, in-vivo-like current. Several processes shaped the time course of the instantaneous spike frequency, which could be reduced to a small number (1-4) of phenomenological mechanisms, either reducing (adapting) or increasing (facilitating) the neuron's firing rate over time. The different adaptation/facilitation processes cover a wide range of time scales, ranging from initial adaptation (<10 ms, PYR neurons only), to fast adaptation (<300 ms), early facilitation (0.5-1 s, PYR only), and slow (or late) adaptation (on the order of seconds). These processes are characterized by broad distributions of their magnitudes and time constants across cells, showing that multiple time scales are at play in cortical neurons, even in response to stationary stimuli and in the presence of input fluctuations. These processes might be part of a cascade of processes responsible for the power-law behavior of adaptation observed in several preparations, and may have far-reaching computational consequences that have recently been described.
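
Extracting such time constants from an instantaneous-firing-rate trace is typically a multi-exponential fit. A minimal sketch on synthetic data (all values illustrative, not data from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 10.0, 400)                              # seconds
rate = 20 + 30 * np.exp(-t / 0.2) + 10 * np.exp(-t / 3.0)    # fast + slow adaptation
rate += np.random.normal(0.0, 0.5, t.size)

def two_exp(t, r_inf, a1, tau1, a2, tau2):
    return r_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

p, _ = curve_fit(two_exp, t, rate, p0=(15, 20, 0.1, 5, 2.0))
print("tau_fast ≈ %.2f s, tau_slow ≈ %.2f s" % (p[2], p[4]))
```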

Relevance:

30.00%

Publisher:

Abstract:

Visualization and exploratory analysis are an important part of any data analysis and are made more challenging when the data are voluminous and high-dimensional. One such example is environmental monitoring data, which are often collected over time and at multiple locations, resulting in a geographically indexed multivariate time series. Financial data, although not necessarily containing a geographic component, present another source of high-volume multivariate time series data. We present the mvtsplot function, which provides a method for visualizing multivariate time series data. We outline the basic design concepts and provide some examples of its usage by applying it to a database of ambient air pollution measurements in the United States and to a hypothetical portfolio of stocks.
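
mvtsplot itself is an R function; as a rough Python analogue of its central idea, each series can be categorized within itself (e.g., into terciles) and the whole collection shown as one image, one row per series:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=(12, 200)).cumsum(axis=1)   # 12 hypothetical time series
# Within each series, map values to terciles (0 = low, 1 = medium, 2 = high).
tercile = np.array([np.digitize(x, np.quantile(x, [1/3, 2/3])) for x in data])

plt.imshow(tercile, aspect="auto", cmap="RdYlGn_r", interpolation="nearest")
plt.xlabel("time"); plt.ylabel("series")
plt.show()
```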

Relevance:

30.00%

Publisher:

Abstract:

Partial or full life-cycle tests are needed to assess the potential of endocrine-disrupting compounds (EDCs) to adversely affect development and reproduction of fish. Small fish species such as zebrafish, Danio rerio, are under consideration as model organisms for appropriate test protocols. The present study examines how reproductive effects resulting from exposure of zebrafish to the synthetic estrogen 17alpha-ethinylestradiol (EE2) vary with concentration (0.05 to 10 ng EE2 L⁻¹, nominal), and with timing/duration of exposure (partial life-cycle, full life-cycle, and two-generation exposure). Partial life-cycle exposure of the parental (F1) generation until completion of gonad differentiation (0-75 d postfertilization, dpf) impaired juvenile growth, time to sexual maturity, adult fecundity (egg production/female/day), and adult fertilization success at 1.1 ng EE2 L⁻¹ and higher. Lifelong exposure of the F1 generation until 177 dpf resulted in lowest observed effect concentrations (LOECs) for time to sexual maturity, fecundity, and fertilization success identical to those of the developmental test (0-75 dpf), but the slope of the concentration-response curve was steeper. Reproduction of zebrafish was completely inhibited at 9.3 ng EE2 L⁻¹, and this was essentially irreversible, as a 3-month depuration restored fertilization success to only a very low rate. Accordingly, elevated endogenous vitellogenin (VTG) synthesis and degenerative changes in gonad morphology persisted in depurated zebrafish. Full life-cycle exposure of the filial (F2) generation until 162 dpf impaired growth, delayed onset of spawning, and reduced fecundity and fertilization success at 2.0 ng EE2 L⁻¹. In conclusion, the results show that the impact of estrogenic agents on zebrafish sexual development and reproductive functions, as well as the reversibility of effects, varies with exposure concentration (reversibility at ≤1.1 ng EE2 L⁻¹ and irreversibility at 9.3 ng EE2 L⁻¹) and between partial and full life-cycle exposure (exposure to 10 ng EE2 L⁻¹ during the critical period exerted no permanent effect on sexual differentiation, but life-cycle exposure did).

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: The objective of this systematic review was to assess the survival rate of implants placed in sites with transalveolar sinus floor elevation. MATERIAL AND METHODS: An electronic search was conducted to identify prospective and retrospective cohort studies on transalveolar sinus floor elevation with a mean follow-up time of at least 1 year after functional loading. Failure and complication rates were analyzed using random-effects Poisson regression models to obtain summary estimates of annual rates. RESULTS: The search provided 849 titles. Full-text analysis was performed for 176 articles, resulting in 19 studies that met the inclusion criteria. Meta-analysis of these studies indicated an estimated annual failure rate of 2.48% (95% confidence interval (95% CI): 1.37-4.49%), translating to an estimated survival rate of 92.8% (95% CI: 87.4-96.0%) for implants placed in transalveolarly augmented sinuses after 3 years in function. Furthermore, subject-based analysis revealed an estimated annual failure rate of 3.71% (95% CI: 1.21-11.38%), translating to 10.5% (95% CI: 3.6-28.9%) of subjects experiencing implant loss over 3 years. CONCLUSION: Survival rates of implants placed in transalveolar sinus floor augmentation sites are comparable to those in non-augmented sites. The technique is predictable, with a low incidence of complications both intra- and postoperatively.
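
The 3-year figures follow from the annual rates under the constant-rate (Poisson) assumption, S(t) = exp(−rate·t). A quick arithmetic check:

```python
import math

implant_rate = 0.0248   # 2.48% failures per implant-year
subject_rate = 0.0371   # 3.71% per subject-year
print(f"implant survival over 3 years: {math.exp(-3 * implant_rate):.1%}")     # ≈ 92.8%
print(f"subjects with implant loss:    {1 - math.exp(-3 * subject_rate):.1%}") # ≈ 10.5%
```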

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Endobronchial biopsies are an important tool for the study of airway remodeling in children. We aimed to evaluate the impact of performing endobronchial biopsies as part of fiberoptic bronchoscopy on the length of the procedure. METHODS: Clinically indicated fiberoptic bronchoscopy, at which endobronchial biopsy was attempted as part of a research protocol, was performed in 40 children (median age 6 years, range 2 months-16 years). The time needed for airway inspection, bronchoalveolar lavage (BAL) with three aliquots of 1 ml/kg of 0.9% saline, sampling of three macroscopically adequate biopsies, teaching, and other interventions (e.g., removal of plugs) was recorded. The bronchoscopist was not aware that the procedure was being timed. RESULTS: The median (range) duration (min) was 2.5 (1.0-8.2) for airway inspection, 2.8 (1.7-9.4) for BAL, 5.3 (2.5-16.6) for biopsy sampling, 2.4 (1.5-6.6) for teaching, and 4.1 (0.8-18.5) for other interventions. Three adequate biopsies were obtained in 33 (83%) children. Use of 2.0 mm biopsy forceps (via 4.0 and 4.9 mm bronchoscopes) rather than 1.0 mm forceps (via 2.8 and 3.6 mm bronchoscopes) significantly reduced biopsy time (4.6 min vs. 8.4 min, P < 0.001). CONCLUSIONS: It takes a median of just over 5 min to obtain three endobronchial biopsies in children, which we consider an acceptable increase in the duration of fiberoptic bronchoscopy for the purpose of research.

Relevance:

30.00%

Publisher:

Abstract:

Users of cochlear implant systems, that is, of auditory prostheses which electrically stimulate the auditory nerve at the cochlea, often complain about poor speech understanding in noisy environments. Despite the proven advantages of multimicrophone directional noise reduction systems for conventional hearing aids, only one major manufacturer has so far implemented such a system in a product, presumably because of the added power consumption and size. We present a physically small (intermicrophone distance 7 mm) and computationally inexpensive adaptive noise reduction system suitable for behind-the-ear cochlear implant speech processors. Supporting algorithms, which allow adjustment of the opening angle and of the maximum noise suppression, are proposed and evaluated. A portable real-time device for testing in real acoustic environments is presented.
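
The abstract does not spell out the algorithm; a common building block for such closely spaced two-microphone systems is an adaptive first-order differential array, in which a single coefficient steers a rear-facing null toward the dominant noise source. A minimal sketch under that assumption (all names and parameters illustrative, not the paper's implementation):

```python
import numpy as np

def adaptive_differential_array(front, rear, fs, mic_dist=0.007, c=343.0,
                                mu=0.01, eps=1e-8):
    """Elko-style adaptive first-order differential microphone array:
    forward/backward cardioid-like signals, NLMS-adapted null coefficient."""
    n = max(1, int(round(fs * mic_dist / c)))   # inter-mic delay, ~1 sample at 48 kHz
    f, r = np.asarray(front, float), np.asarray(rear, float)
    cf = f[n:] - r[:-n]                         # forward-facing cardioid
    cb = r[n:] - f[:-n]                         # backward-facing cardioid
    y = np.zeros_like(cf)
    beta, power = 0.0, eps
    for i in range(len(cf)):
        y[i] = cf[i] - beta * cb[i]
        power = 0.99 * power + 0.01 * cb[i] ** 2
        beta += mu * y[i] * cb[i] / (power + eps)   # NLMS update
        beta = min(max(beta, 0.0), 1.0)             # keep the null in the rear half-plane
    return y
```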

Relevance:

30.00%

Publisher:

Abstract:

One of the original ocean-bottom time-lapse seismic studies was performed at the Teal South oil field in the Gulf of Mexico during the late 1990s. This work reexamines some aspects of previous work using modern analysis techniques to provide improved quantitative interpretations. Using three-dimensional volume visualization of the legacy data and the two phases of post-production time-lapse data, I provide additional insight into the fluid migration pathways and the pressure communication between different reservoirs separated by faults. This work supports a conclusion from previous studies that production from one reservoir caused a regional pressure decline, which in turn liberated gas from multiple surrounding unproduced reservoirs. I also provide an explanation for unusual time-lapse changes in amplitude-versus-offset (AVO) data related to the compaction of the producing reservoir, which changed an isotropic medium into an anisotropic one.

In the first part of this work, I examine regional changes in seismic response due to the production of oil and gas from one reservoir. The previous studies primarily used the two post-production ocean-bottom surveys (Phase I and Phase II), and not the legacy streamer data, owing to the unavailability of legacy prestack data and very different acquisition parameters. In order to incorporate the legacy data in the present study, all three poststack data sets were cross-equalized and examined using instantaneous amplitude and energy volumes. This approach appears quite effective and helps to suppress changes unrelated to production while emphasizing the large-amplitude changes that are related to production in this noisy (by current standards) suite of data. I examine the multiple data sets first by using the instantaneous amplitude and energy attributes, and then examine specific apparent time-lapse changes through direct comparisons of seismic traces. In so doing, I identify time delays that, when corrected for, indicate water encroachment at the base of the producing reservoir. I also identify specific sites of leakage from various unproduced reservoirs, the result of the regional pressure blowdown explained in previous studies; those earlier studies, however, were unable to identify direct evidence of fluid movement. Of particular interest is the identification of one site where oil apparently leaked from one reservoir into a "new" reservoir that did not originally contain oil but was ideally suited as a trap for fluids leaking from the neighboring spill point. With the continued pressure drop, the volume of oil in the new reservoir increased as more oil entered and expanded, liberating gas from solution. Because of the limited volume available for oil and gas in that temporary trap, oil and gas also escaped from it into the surrounding formation. I also note that some of the reservoirs demonstrate time-lapse changes only in the "gas cap" and not in the oil zone, even though gas must be coming out of solution everywhere in the reservoir. This is explained by the interplay between the reduction of the pore-fluid modulus as gas saturation increases and the increase of the dry-frame modulus by frame stiffening.

In the second part of this work, I examine various rock-physics models in an attempt to quantitatively account for the frame stiffening that results from reduced pore-fluid pressure in the producing reservoir, searching for a model that would predict the unusual AVO features observed in the time-lapse prestack and stacked data at Teal South. While several rock-physics models succeed in predicting the time-lapse response for initial production, most fail to match the observations for continued production between Phase I and Phase II. Because the reservoir was initially overpressured and unconsolidated, reservoir compaction was likely significant and was probably accomplished largely by uniaxial strain in the vertical direction; this implies that an anisotropic model may be required. Using Walton's model for anisotropic unconsolidated sand, I successfully model the time-lapse changes for all phases of production. This observation may be of interest for application to other unconsolidated, overpressured reservoirs under production.
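
The interplay described above between pore-fluid softening and frame stiffening is conventionally quantified with Gassmann's equation for the saturated bulk modulus; the anisotropic Walton model actually used in the thesis is more involved. A minimal isotropic sketch with illustrative moduli in GPa (not values from the study):

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Gassmann fluid substitution: saturated bulk modulus from the dry frame."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Gas in the pores softens the rock far more than brine does:
print(gassmann_ksat(k_dry=3.0, k_min=36.0, k_fl=2.7, phi=0.30))   # brine -> ~9.5 GPa
print(gassmann_ksat(k_dry=3.0, k_min=36.0, k_fl=0.04, phi=0.30))  # gas   -> ~3.1 GPa
```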