979 results for Time-shift estimation


Relevance:

80.00%

Abstract:

Background

Grass pollen allergens are the most important cause of hay fever and allergic asthma during summer in cool temperate climates. Pollen counts provide a guide to hay fever sufferers. However, grass pollen, because of its size, has a low probability of entering the lower airways to trigger asthma. Yet, grass pollen allergens are known to be associated with atmospheric respirable particles.
Objective

We aimed (1) to determine the concentration of group 5 major allergens in (a) pollen grains of clinically important grass species and (b) atmospheric particles (respirable and nonrespirable) and (2) to compare the atmospheric allergen load with clinical data to assess different risk factors for asthma and hay fever.
Methods

We performed continuous 24 h sampling of atmospheric particles larger and smaller than 7.2 μm in diameter during the 1996–1997 grass pollen season (17 October 1996 to 16 January 1997) by means of a high-volume cascade impactor at a height of about 15 m above ground in Melbourne. Using Western blot analysis, we assessed the reactivity of a monoclonal antibody (MoAb) specific for the major timothy grass allergen Phl p 5 against selected pollen extracts. A MoAb-based ELISA was then employed to quantify Phl p 5 and cross-reactive allergens in pollen extracts and in atmospheric particles larger and smaller than 7.2 μm.
Results

Phl p 5-specific MoAb detected group 5 allergens in the tested grass pollen extracts, indicating that the ELISA employed here determines total group 5 allergen concentrations. On average, 0.05 ng of group 5 allergen was detectable per grass pollen grain. Atmospheric group 5 allergen concentrations in particles > 7.2 μm were significantly correlated with grass pollen counts (rs = 0.842, P < 0.001). On dry days, 37% of the total group 5 allergen load was detected in respirable particles, whereas on days with rainfall this fraction rose to 57%. After rainfall, the number of starch granule equivalents increased up to 10-fold; the starch granule equivalent is defined as the hypothetical number of airborne starch granules inferred from the pollen count data. This indicates that rainfall tended to wash out large particles while increasing the number of respirable particles containing group 5 allergens through the bursting of pollen grains. Four-day running means of group 5 allergens in respirable particles and of asthma attendances (delayed by 2 days) were significantly correlated (P < 0.001).
Conclusion

Here we present, for the first time, an estimation of the total group 5 allergen content in respirable and nonrespirable particles in the atmosphere of Melbourne. These results highlight the different environmental risk factors for hay fever and allergic asthma: on days of rainfall following a high grass pollen count, the risk for asthma sufferers is far greater than on days of high pollen count with no associated rainfall. Moreover, rainfall may also contribute to the release of allergens from fungal spores, and the free allergen molecules released from pollen grains may interact with other particles, such as pollutants (e.g. diesel exhaust carbon particles), to trigger allergic asthma.
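
As a minimal illustration of the lagged-correlation analysis described above (not code from the study itself), the sketch below computes the Spearman correlation between four-day running means of a respirable allergen series and an asthma attendance series delayed by two days; both series are synthetic placeholders.

```python
# Sketch: correlate 4-day running means of respirable group 5 allergen
# levels with asthma attendances lagged by 2 days, as in the abstract.
# The two input series here are synthetic placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
days = 90
allergen = rng.gamma(2.0, 1.5, size=days)                # ng/m^3, hypothetical
attendances = np.roll(allergen, 2) * 3 + rng.normal(0, 1, days)  # toy signal

def running_mean(x, w=4):
    """Trailing w-day running mean (valid part only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

lag = 2  # asthma attendances delayed by 2 days relative to allergen load
a = running_mean(allergen)[:-lag]
b = running_mean(attendances)[lag:]

rho, p = spearmanr(a, b)
print(f"Spearman rho = {rho:.3f}, P = {p:.3g}")
```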

Relevance:

80.00%

Abstract:

This paper describes the integration of missing observation data with hidden Markov models to create a framework that is able to segment and classify individual actions from a stream of human motion using incomplete 3D human pose estimates. Based on this framework, a model is trained to automatically segment and classify an activity sequence into its constituent subactions during inference. This is achieved by introducing action labels into the observation vector and setting these labels as missing data during inference, thus forcing the system to infer the probability of each action label. Additionally, the missing-data mechanism provides recognition-level support for occlusions and imperfect silhouette segmentation, permitting the use of a fast (real-time) pose estimator that delegates the burden of handling undetected limbs to the action recognition system. Findings show that the use of missing data to segment activities is an accurate and elegant approach. Furthermore, action recognition can remain accurate even when almost half of the pose feature data is missing due to occlusions, since not all of the pose data is important all of the time.
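
To make the missing-observation mechanism concrete, here is a minimal sketch, not the paper's implementation: a forward (filtering) pass of an HMM with diagonal-Gaussian emissions in which NaN-marked pose features are marginalized out by omitting their likelihood factors. All model parameters below are invented for illustration.

```python
# Minimal sketch of HMM filtering with missing observation dimensions:
# features marked NaN are marginalized out of a diagonal-Gaussian
# emission model by simply omitting their likelihood factors.
import numpy as np

def gaussian_loglik(x, mean, var):
    """Log-likelihood of observed dims only; NaN dims are marginalized."""
    obs = ~np.isnan(x)
    d = x[obs] - mean[obs]
    return -0.5 * np.sum(np.log(2 * np.pi * var[obs]) + d**2 / var[obs])

def forward(obs_seq, pi, A, means, vars_):
    """Forward algorithm returning per-step filtered state posteriors."""
    n_states = len(pi)
    alphas = []
    alpha = pi.copy()
    for x in obs_seq:
        lik = np.array([np.exp(gaussian_loglik(x, means[s], vars_[s]))
                        for s in range(n_states)])
        alpha = lik * (alpha @ A) if alphas else lik * pi
        alpha /= alpha.sum()           # normalize to avoid underflow
        alphas.append(alpha)
    return np.array(alphas)

# Two hidden states ("action A" / "action B"), 3-D pose feature (made up).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
means = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
vars_ = np.ones((2, 3))

seq = np.array([[0.1, np.nan, -0.2],    # occluded second feature
                [np.nan, np.nan, 1.8],  # almost half the pose missing
                [2.1, 1.9, np.nan]])
print(forward(seq, pi, A, means, vars_))
```

The same trick drives the segmentation idea: append the action label to the observation vector during training, mark it NaN at test time, and the filter returns a posterior over labels at each frame.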

Relevance:

80.00%

Abstract:

Planned burning is a preventative strategy aimed at decreasing fuel loads to reduce the severity of future wildfire events. During planned burn operations, firefighters can work long shifts, and remote burning locations may require them to sleep away from home between shifts. The existing evidence on firefighters' sleep during such operations is exclusively anecdotal. The aims of the study were to describe firefighters' sleep during planned burn operations and to evaluate the impact of the key operational factors (shift start time, shift length and sleeping location) that may contribute to inadequate sleep. Thirty-three salaried firefighters were recruited from Australian fire agencies and their sleep was measured objectively using wrist actigraphy for four weeks. All variables were examined under two conditions: (1) burn days and (2) non-burn days. Time in bed, total sleep time, sleep latency and sleep efficiency were evaluated objectively. Subjective reports of pre- and post-sleep fatigue, sleep location, sleep quality, sleep quantity, number of times woken and sleep timing were also recorded. Analyses revealed no differences in measures of sleep quantity and quality between non-burn and burn days, although total sleep time was lower when planned burn shifts exceeded 12 h. On burn days, neither shift start time nor sleeping location affected firefighters' sleep quantity. Self-reported levels of pre- and post-sleep fatigue were greater on burn days than on non-burn days. These findings indicate that sleep quantity and quality are not compromised during planned burn operations shorter than 12 h in duration.

Relevance:

80.00%

Abstract:

In this thesis, we present the development of a dynamic model of a multirotor unmanned aerial vehicle with vertical take-off and landing characteristics, considering input nonlinearities, together with a full-state robust backstepping controller. The dynamic model is expressed using the Newton-Euler laws, aiming at a better mathematical representation of the mechanical system for analysis and control design, not only when hovering but also during take-off, landing, and flight to perform a task. The input nonlinearities are dead zone and saturation, through which the gravitational effect and the inherent physical constraints of the rotors are addressed. The experimental multirotor aerial vehicle is equipped with an inertial measurement unit and a sonar sensor, which provide measurements of attitude and altitude, respectively. A real-time attitude estimation scheme based on the extended Kalman filter using quaternions was developed. For the robustness analysis, the sensors were modeled as the ideal value plus an unknown bias and unknown white noise. The bounded robust attitude/altitude controllers were derived based on the notion of global uniform practical asymptotic stability for real systems, which remain globally uniformly asymptotically stable if and only if their solutions are globally uniformly bounded; this addresses convergence and stability into a ball of the state space with non-null radius, under some assumptions. Lyapunov analysis was used to prove the stability of the closed-loop system, to compute bounds on the control gains, and to guarantee desired bounds on the attitude-dynamics tracking errors in the presence of measurement disturbances. The control laws were tested in numerical simulations and on an experimental hexarotor developed at the UFRN Robotics Laboratory.
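
As a rough illustration of the attitude-estimation building block, the sketch below implements a generic quaternion-based EKF step with gyro-driven prediction and an accelerometer gravity-direction update. It is a textbook-style sketch, not the thesis implementation; the sensor values, noise covariances and the numerical-Jacobian shortcut are all assumptions.

```python
# Sketch of a quaternion-based extended Kalman filter for attitude:
# gyro rates propagate the quaternion, and the accelerometer's gravity
# direction corrects it. Generic illustration; values are made up.
import numpy as np

def omega_mat(w):
    """4x4 quaternion-rate matrix for body rates w = [p, q, r]."""
    p, q, r = w
    return np.array([[0, -p, -q, -r],
                     [p,  0,  r, -q],
                     [q, -r,  0,  p],
                     [r,  q, -p,  0]])

def rotate_to_body(q, v):
    """Rotate world vector v into the body frame of q = [w, x, y, z]."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y + w*z),     2*(x*z - w*y)],
        [2*(x*y - w*z),     1 - 2*(x*x + z*z), 2*(y*z + w*x)],
        [2*(x*z + w*y),     2*(y*z - w*x),     1 - 2*(x*x + y*y)]])
    return R @ v

def ekf_step(q, P, gyro, acc, dt, Q, Rm):
    # --- predict: first-order quaternion integration with gyro rates ---
    F = np.eye(4) + 0.5 * dt * omega_mat(gyro)
    q = F @ q
    q /= np.linalg.norm(q)
    P = F @ P @ F.T + Q
    # --- update: accelerometer should see gravity in the body frame ---
    g = np.array([0.0, 0.0, -1.0])             # normalized gravity, world
    h = lambda qq: rotate_to_body(qq / np.linalg.norm(qq), g)
    H = np.zeros((3, 4))                       # numerical Jacobian of h
    eps = 1e-6
    for i in range(4):
        dq = np.zeros(4); dq[i] = eps
        H[:, i] = (h(q + dq) - h(q - dq)) / (2 * eps)
    y = acc / np.linalg.norm(acc) - h(q)       # innovation
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    q = q + K @ y
    q /= np.linalg.norm(q)
    P = (np.eye(4) - K @ H) @ P
    return q, P

q = np.array([1.0, 0, 0, 0]); P = 0.01 * np.eye(4)
Q, Rm = 1e-6 * np.eye(4), 1e-2 * np.eye(3)
gyro = np.array([0.01, -0.02, 0.0])            # rad/s, hypothetical
acc = np.array([0.05, -0.02, -9.78])           # m/s^2, hypothetical
q, P = ekf_step(q, P, gyro, acc, dt=0.01, Q=Q, Rm=Rm)
print("attitude quaternion:", np.round(q, 4))
```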

Relevance:

80.00%

Abstract:

This paper presents a new methodology for the adjustment of fuzzy inference systems. A novel approach based on unconstrained optimization techniques is developed to adjust the free parameters of the fuzzy inference system, such as the intrinsic parameters of the membership functions and the weights of the inference rules. The methodology is interesting not only for the results obtained in computer simulations, but also for its generality with respect to the kind of fuzzy inference system used; it is therefore applicable both to the Mamdani architecture and to that suggested by Takagi-Sugeno. The methodology is validated through time series estimation; more specifically, estimation of the chaotic Mackey-Glass time series is used to validate the proposed approach.
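
A minimal sketch of the idea, under assumptions of our own (a zero-order Takagi-Sugeno model with four rules, Gaussian membership functions, and a classic lag embedding), is shown below: the free parameters are adjusted by an unconstrained optimizer (BFGS) on the Mackey-Glass series.

```python
# Sketch: tune a small zero-order Takagi-Sugeno fuzzy model on the
# Mackey-Glass series with an unconstrained optimizer (BFGS), in the
# spirit of the paper. Rule count, lags and settings are illustrative.
import numpy as np
from scipy.optimize import minimize

def mackey_glass(n=1200, tau=17, dt=1.0):
    """Euler integration of dx/dt = 0.2 x(t-tau)/(1+x(t-tau)^10) - 0.1 x(t)."""
    x = np.zeros(n + tau)
    x[:tau] = 1.2
    for t in range(tau, n + tau - 1):
        x[t+1] = x[t] + dt * (0.2 * x[t-tau] / (1 + x[t-tau]**10) - 0.1 * x[t])
    return x[tau:]

series = mackey_glass()
idx = np.arange(18, len(series) - 6)           # predict x(t+6) from 4 lags
X = np.stack([series[idx - 18], series[idx - 12],
              series[idx - 6], series[idx]], axis=1)
y = series[idx + 6]

R, D = 4, 4                                    # rules, input dimensions

def unpack(p):
    c = p[:R*D].reshape(R, D)                  # membership centers
    s = np.exp(p[R*D:2*R*D].reshape(R, D))     # widths via exp -> unconstrained
    theta = p[2*R*D:]                          # rule consequents (weights)
    return c, s, theta

def predict(p, X):
    c, s, theta = unpack(p)
    # Firing strength of each rule: product of Gaussian memberships.
    w = np.exp(-0.5 * np.sum(((X[:, None, :] - c) / s)**2, axis=2))
    return (w @ theta) / (w.sum(axis=1) + 1e-12)

def mse(p):
    e = predict(p, X) - y
    return np.mean(e**2)

rng = np.random.default_rng(1)
p0 = np.concatenate([rng.uniform(0.4, 1.4, R*D),   # centers
                     np.zeros(R*D),                # log-widths (sigma = 1)
                     rng.normal(0.9, 0.1, R)])     # consequents
res = minimize(mse, p0, method="BFGS")             # unconstrained adjustment
print(f"MSE before: {mse(p0):.5f}  after: {mse(res.x):.5f}")
```

Parametrizing the widths through an exponential keeps the optimization genuinely unconstrained while guaranteeing positive membership spreads.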

Relevance:

80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Abstract:

For seven years now, the permanent GPS station at Baia Terranova has been acquiring daily data which, suitably processed, contribute to the understanding of Antarctic dynamics and help to verify whether global geophysical models fit the area of interest of the permanent GPS station. A literature survey showed that a GPS series is subject to multiple possible perturbations, mainly due to errors in the modelling of some of the ancillary data needed for processing. Moreover, some analyses revealed that such time series derived from geodetic surveys are affected by different types of noise which, if not properly taken into account, can alter the parameters of interest for the geophysical interpretation of the data. This thesis aims to understand to what extent these errors can affect the dynamic parameters characterising the motion of the permanent station, with particular reference to the velocity of the point on which the station is installed and to any periodic signals that can be identified.
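
As a simple illustration of the kind of analysis involved (synthetic data with white noise only, so it deliberately ignores the coloured-noise issue the thesis addresses), the station velocity and periodic signals can be estimated by least squares:

```python
# Sketch: estimate station velocity and annual/semiannual signals from a
# daily GPS coordinate series by least squares. The series is synthetic;
# real series also require proper noise models (white + coloured noise),
# which is precisely the thesis' concern.
import numpy as np

days = 7 * 365
t = np.arange(days) / 365.25                      # time in years
rng = np.random.default_rng(2)
true_vel = 12.0                                   # mm/yr, hypothetical
up = (true_vel * t + 3.0 * np.sin(2 * np.pi * t + 0.4)  # annual term
      + rng.normal(0, 2.0, days))                 # white noise only, here

# Design matrix: intercept, trend, annual and semiannual sin/cos terms.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2*np.pi*t), np.cos(2*np.pi*t),
                     np.sin(4*np.pi*t), np.cos(4*np.pi*t)])
coef, *_ = np.linalg.lstsq(A, up, rcond=None)

vel = coef[1]
annual_amp = np.hypot(coef[2], coef[3])
print(f"velocity = {vel:.2f} mm/yr, annual amplitude = {annual_amp:.2f} mm")
```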

Relevance:

80.00%

Abstract:

Combustion control is one of the key factors in obtaining better performance and lower pollutant emissions for diesel, spark ignition and HCCI engines. An algorithm that allows estimating, for example, the mean indicated torque for each cylinder could easily be used in control strategies to carry out cylinder trade-off, control cycle-to-cycle variation, or detect misfires. A tool that evaluates the 50% mass fraction burned angle (MFB50), the net cumulative heat release (CHRNET), or the peak value of the rate of heat release (ROHR) could be used to optimize spark advance or detect knock in gasoline engines, and to optimize the injection pattern in diesel engines. Modern management systems are based on the control of the mean indicated torque produced by the engine: they need a real or virtual sensor in order to compare the measured value with the target one. Many studies have been performed in order to obtain a torque estimation that is accurate and reliable over time. The aim of this PhD activity was to develop two different algorithms. The first is based on the measurement of instantaneous engine speed fluctuations; the speed signal is picked up directly from the sensor facing the toothed wheel mounted on the engine for other control purposes, and the fluctuation amplitudes depend on the combustion and on the amount of torque delivered by each cylinder. The second algorithm processes in-cylinder pressure signals in the angular domain; in this case a crankshaft encoder is not necessary, because the angular reference can be obtained using a standard sensor wheel. The results obtained with the two methodologies are compared in order to evaluate which one is suitable for on-board applications, depending on the accuracy required.
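
For the second approach, a minimal sketch of the underlying computation is given below: the net rate of heat release and MFB50 obtained from an in-cylinder pressure trace in the crank-angle domain, via the standard single-zone relation. The engine geometry, ratio of specific heats and synthetic pressure trace are illustrative assumptions, not values from this work.

```python
# Sketch: net rate of heat release (ROHR) and MFB50 from an in-cylinder
# pressure trace in the crank-angle domain, via the single-zone relation
# dQ/dtheta = gamma/(gamma-1) p dV/dtheta + 1/(gamma-1) V dp/dtheta.
import numpy as np

theta = np.deg2rad(np.arange(-180.0, 180.0, 0.5))     # crank angle, rad
bore, stroke, conrod, cr = 0.084, 0.090, 0.145, 10.5  # m / -, hypothetical
a = stroke / 2
Vd = np.pi * bore**2 / 4 * stroke                     # displaced volume
Vc = Vd / (cr - 1)                                    # clearance volume
s = a * (1 - np.cos(theta)) + conrod - np.sqrt(conrod**2 - (a*np.sin(theta))**2)
V = Vc + np.pi * bore**2 / 4 * s                      # slider-crank volume

gamma = 1.32
p_mot = 1.0e5 * (V[0] / V)**gamma                     # polytropic "motored" trace
burn = 1 / (1 + np.exp(-(np.rad2deg(theta) - 8) / 6)) # toy burn profile
p = p_mot * (1 + 0.9 * burn)                          # synthetic fired pressure

dV = np.gradient(V, theta)
dp = np.gradient(p, theta)
rohr = gamma/(gamma-1) * p * dV + 1/(gamma-1) * V * dp  # J/rad (net)
chr_net = np.cumsum(rohr) * (theta[1] - theta[0])       # cumulative heat release

# MFB50: crank angle where the cumulative heat release over the
# combustion window reaches 50% of its final value.
win = slice(np.searchsorted(theta, np.deg2rad(-30)), None)
q = chr_net[win] - chr_net[win][0]
mfb50 = np.rad2deg(theta[win][np.searchsorted(q, 0.5 * q[-1])])
print(f"ROHR peak at {np.rad2deg(theta[np.argmax(rohr)]):.1f} deg, "
      f"MFB50 = {mfb50:.1f} deg aTDC")
```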

Relevance:

80.00%

Abstract:

This thesis investigates context-aware wireless networks, capable of adapting their behavior to the context and the application thanks to their ability to combine communication, sensing and localization. Problems of signal demodulation, parameter estimation and localization are addressed by exploiting analytical methods, simulations and experimentation for the derivation of fundamental limits, the performance characterization of the proposed schemes, and their experimental validation. Ultra-wideband (UWB) signals are considered in certain cases, and non-coherent receivers, which allow the multipath channel diversity to be exploited without adopting complex architectures, are investigated. Closed-form expressions for the achievable bit error probability of the novel proposed architectures are derived. The problem of time delay estimation (TDE), which enables network localization through ranging measurements, is addressed from a theoretical point of view. New fundamental bounds on TDE are derived for the case in which the received signal is partially known or unknown at the receiver side, as often occurs due to propagation or to the adoption of low-complexity estimators. Practical estimators, such as energy-based estimators, are reviewed and their performance compared with the new bounds. The localization issue is addressed experimentally for the characterization of cooperative networks, and practical algorithms able to improve the accuracy in non-line-of-sight (NLOS) channel conditions are evaluated on measured data. With the purpose of enhancing the localization coverage in NLOS conditions, non-regenerative relaying techniques for localization are introduced and ad hoc position estimators are devised. An example of a context-aware network is given with the study of a UWB-RFID system for detecting and locating semi-passive tags; in particular, a deep investigation of low-complexity receivers capable of dealing with multi-tag interference, synchronization mismatches and clock drift is presented. Finally, theoretical bounds on the localization accuracy of this and other passive localization networks (e.g., radar) are derived, also accounting for different configurations such as monostatic and multistatic networks.
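
As an illustration of the energy-based estimators mentioned above (a generic sketch, not the thesis' receivers), a TOA estimate can be obtained by integrating the squared received samples over short windows and picking the first window that exceeds a noise-calibrated threshold:

```python
# Sketch of an energy-based TOA (time-of-arrival) estimator: square the
# received samples, integrate the energy over short windows, and declare
# the first window exceeding a noise threshold as the arrival.
# Sampling rate, SNR and pulse shape are made up for illustration.
import numpy as np

fs = 2e9                                  # sampling rate, 2 GHz (hypothetical)
n = 4000
rng = np.random.default_rng(3)
noise = rng.normal(0, 1.0, n)
true_toa = 1.23e-6                        # s
k0 = int(true_toa * fs)
pulse = np.exp(-np.arange(200) / 40.0) * rng.normal(0, 4.0, 200)  # toy multipath
r = noise.copy()
r[k0:k0 + 200] += pulse                   # received waveform

win = 50                                  # integration window, samples
energy = np.add.reduceat(r**2, np.arange(0, n, win))

# Threshold from the noise-only level (first windows assumed signal-free).
noise_level = energy[:10].mean()
thr = noise_level + 4.0 * energy[:10].std()
first = np.argmax(energy > thr)           # index of first crossing
toa_hat = first * win / fs
print(f"estimated TOA = {toa_hat*1e6:.3f} us (true {true_toa*1e6:.3f} us)")
```

The window length trades time resolution against noise averaging, which is one reason such estimators fall short of the fundamental TDE bounds derived in the thesis.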

Relevance:

80.00%

Abstract:

Verification evaluates the quality of quantitative precipitation forecasts (QPF) against observations and provides indications of systematic model errors. Using the feature-based technique SAL, simulated precipitation distributions are analysed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years now, numerical weather prediction models have been run with grid spacings that allow deep convection to be simulated without parameterisation, and the question arises whether these models deliver better forecasts. The high-resolution hourly observation dataset used in this work is a combination of radar and station measurements. On the one hand, using the German COSMO models as an example, it is shown that the latest-generation models simulate the mean diurnal cycle better, albeit with too weak a maximum that occurs somewhat too late; in contrast, the old-generation models produce too strong a maximum that occurs considerably too early. On the other hand, the new model achieves a better simulation of the spatial distribution of precipitation by clearly reducing the windward/lee problem. To quantify these subjective assessments, daily QPFs from four models for Germany over an eight-year period were examined with SAL as well as with classical measures. The higher-resolution models simulate more realistic precipitation distributions (better in S), but hardly any difference appears in the other components. A further aspect is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which illustrates the 'double penalty' problem. Combining the three SAL components yields the result that, especially in summer, the most finely resolved model (COSMO-DE) performs best, mainly owing to a more realistic structure; SAL thus provides helpful information and confirms the subjective assessment.

In 2007, the COPS and MAP D-PHASE projects took place and offered the opportunity to compare 19 models from three model categories with respect to their forecast performance in southwestern Germany for accumulation periods of 6 and 12 hours. Notable results are that (i) the smaller the grid spacing of the models, the more realistic the simulated precipitation distributions; (ii) regarding precipitation amount, the high-resolution models simulate less precipitation, i.e. usually too little; and (iii) the location component is simulated worst by all models. The analysis of the forecast performance of these model types for convective situations shows clear differences. In high-pressure situations, the models without convection parameterisation are unable to simulate the convection, whereas the models with convection parameterisation produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the high-resolution models providing more realistic fields. This weather-regime-based investigation is carried out even more systematically using the convective time scale; a climatology compiled for Germany for the first time shows that the frequency of this time scale decays towards larger values following a power law. The SAL results are dramatically different for the two regimes: for small values of the convective time scale they are good, whereas for large values structure and amplitude are clearly overestimated.

For precipitation forecasts with very high temporal resolution, the influence of timing errors becomes increasingly important. By optimising/minimising the L component of SAL within a time window (±3 h) centred on the observation time, this timing error can be determined. It is shown that, at the optimal time shift, the structure and amplitude of the QPFs for COSMO-DE improve, which better reveals the model's fundamental ability to simulate the precipitation distribution realistically.
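
The timing-error idea can be sketched as follows, in a heavily simplified form: shift the forecast sequence within a ±3 h window and select the time offset that minimises a location error. Here the L component is reduced to the distance between precipitation centroids and the fields are synthetic; SAL's full L component also includes a scatter term.

```python
# Sketch: determine the optimal time shift of a forecast by minimising a
# location error within a +/-3 h window around the observation time, in
# the spirit of optimising SAL's L component (simplified here to the
# distance between precipitation centroids; fields are synthetic).
import numpy as np

def centroid(field):
    """Precipitation-weighted center of mass of a 2-D field."""
    total = field.sum()
    iy, ix = np.indices(field.shape)
    return np.array([(iy * field).sum(), (ix * field).sum()]) / total

def blob(ny, nx, cy, cx, sigma=6.0):
    iy, ix = np.indices((ny, nx))
    return np.exp(-((iy - cy)**2 + (ix - cx)**2) / (2 * sigma**2))

ny = nx = 100
obs = blob(ny, nx, 50, 50)                       # observed field at time t0
# Hourly forecast fields: the rain feature drifts, best match at +2 h.
forecasts = {dt: blob(ny, nx, 50 + 5*(dt - 2), 50 + 5*(dt - 2))
             for dt in range(-3, 4)}

errors = {dt: np.linalg.norm(centroid(f) - centroid(obs))
          for dt, f in forecasts.items()}
best = min(errors, key=errors.get)
print(f"optimal time shift: {best:+d} h, residual location error "
      f"{errors[best]:.2f} grid points")
```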

Relevance:

80.00%

Abstract:

This study examines the effect of the Great Moderation on the relationship between U.S. output growth and its volatility over the period 1947 to 2006. First, we consider the possible effects of structural change in the volatility process. In so doing, we employ GARCH-M and ARCH-M specifications of the process describing the output growth rate and its volatility, with and without a one-time structural break in volatility. Second, our data analyses and empirical results suggest no significant relationship between the output growth rate and its volatility, favoring the traditional macroeconomic wisdom of dichotomy. Moreover, the evidence shows that the time-varying variance falls sharply, or even disappears, once we incorporate a one-time structural break in the unconditional variance of output starting in 1982 or 1984; that is, the integrated GARCH effect proves spurious. Finally, a joint test of a trend change and a one-time shift in the volatility process finds that the one-time shift dominates.
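
As a simplified, self-contained illustration of locating a one-time variance break (an ICSS-type CUSUM-of-squares statistic rather than the paper's GARCH-M/ARCH-M estimation), consider:

```python
# Sketch: locate a one-time break in the unconditional variance of a
# growth-rate series with a CUSUM-of-squares (ICSS-type) statistic.
# This illustrates the "one-time structural break in volatility" idea;
# it is not the paper's GARCH-M/ARCH-M estimation. Data are simulated.
import numpy as np

rng = np.random.default_rng(4)
# Quarterly growth 1947-2006: high volatility, then a break (e.g. 1984).
pre = rng.normal(0.8, 1.2, 148)     # 1947Q1-1983Q4, sd = 1.2
post = rng.normal(0.8, 0.5, 92)     # 1984Q1-2006Q4, sd = 0.5
g = np.concatenate([pre, post])

e = g - g.mean()
C = np.cumsum(e**2)                 # cumulative sum of squares
T = len(e)
k = np.arange(1, T + 1)
D = C / C[-1] - k / T               # centered CUSUM-of-squares
k_hat = np.argmax(np.abs(D)) + 1
stat = np.sqrt(T / 2) * np.abs(D).max()  # compare with critical value ~1.358

print(f"break located at observation {k_hat} (true: 148), "
      f"statistic = {stat:.2f}")
```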

Relevance:

80.00%

Abstract:

We compare the ocean temperature evolution of the Holocene as simulated by climate models and reconstructed from marine temperature proxies; this dataset provides information about the Holocene temperature trends as simulated by the models. We use transient simulations from a coupled atmosphere-ocean general circulation model, as well as an ensemble of time slice simulations from the Paleoclimate Modelling Intercomparison Project. The general pattern of sea surface temperature (SST) in the models shows a high-latitude cooling and a low-latitude warming. The proxy dataset comprises a global compilation of marine alkenone- and Mg/Ca-derived SST estimates. Independently of the choice of climate model, we observe significant mismatches between modelled and estimated SST amplitudes in the trends for the last 6000 years. Alkenone-based SST records show a similar pattern to the simulated annual mean SSTs, but the simulated SST trends underestimate the alkenone-based SST trends by a factor of two to five. For Mg/Ca, no significant relationship between model simulations and proxy reconstructions can be detected. We tested whether such discrepancies can be caused by too simplistic interpretations of the proxy data. Comparing the proxy trends against different seasons and depths in the model reconciles only part of the mismatches on a regional scale. We therefore considered additional environmental factors, namely changes in the planktonic organisms' habitat depth and a time shift in the recording season, to diagnose whether invoking them can help reconcile the proxy records and the model simulations. We find that invoking shifts in the living season and habitat depth can remove some of the model-data discrepancies in the SST trends. Regardless of whether such adjustments in the environmental parameters during the Holocene are realistic, they indicate that even when modelled temperature trends are allowed to reflect drastic shifts in the ecological behavior of planktonic organisms, they do not capture the full range of reconstructed SST trends. Our findings indicate that climate-model and reconstructed temperature trends are to a large degree only qualitatively comparable, which poses a challenge for the interpretation of proxy data as well as for the models' sensitivity to orbital forcing.
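
A toy sketch of the trend comparison (synthetic numbers throughout, with hypothetical season and depth factors) could look like this: compute the least-squares SST trend of a proxy record over the last 6000 years, then scan model seasons and habitat depths for the best-matching trend.

```python
# Sketch: compute an SST trend over the last 6000 years by least squares
# and scan model seasons/depths for the combination whose trend best
# matches a proxy record, mirroring the paper's reconciliation test.
# All numbers are synthetic placeholders.
import numpy as np

age = np.linspace(6000, 0, 61)                 # years BP, 100-yr steps
rng = np.random.default_rng(5)
proxy_sst = 0.0003 * (6000 - age) + rng.normal(0, 0.15, age.size)

def trend_per_kyr(t_bp, sst):
    """Linear SST trend in K per 1000 years (positive = warming)."""
    slope = np.polyfit(-t_bp, sst, 1)[0]       # regress on calendar time
    return slope * 1000

proxy_trend = trend_per_kyr(age, proxy_sst)

# Hypothetical model SSTs by season and habitat depth.
seasons, depths = ["DJF", "MAM", "JJA", "SON"], [0, 30, 50, 100]
model = {(s, d): 0.0003 * (6000 - age) * f + rng.normal(0, 0.05, age.size)
         for s, f in zip(seasons, [0.2, 0.5, 1.1, 0.7])
         for d in depths}

best = min(model, key=lambda k: abs(trend_per_kyr(age, model[k]) - proxy_trend))
print(f"proxy trend: {proxy_trend:.2f} K/kyr; best match: season {best[0]}, "
      f"depth {best[1]} m")
```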

Relevance:

80.00%

Abstract:

A study that examines the use of aircraft as wind sensors in a terminal area for real-time wind estimation, in order to improve aircraft trajectory prediction, is presented in this paper. We describe not only the different sources in the aircraft systems that provide the variables needed to derive the wind velocity, but also the capabilities that allow this information to be presented for ATM applications. Based on wind speed samples from aircraft landing at Madrid-Barajas airport, a real-time wind field is estimated using a data-processing approach based on a minimum variance method. Finally, the accuracy of this procedure is evaluated so that the information can be useful to air traffic control.
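
A minimal sketch of the two ingredients, wind derivation from aircraft state data and a minimum-variance combination of samples, is given below; all aircraft measurements and variances are invented for illustration.

```python
# Sketch: derive wind from aircraft data as the difference between the
# ground velocity vector and the true airspeed vector, then combine
# samples from several aircraft with a minimum-variance (inverse-variance
# weighted) estimator. All measurements below are invented.
import numpy as np

def wind_from_aircraft(gs, track_deg, tas, heading_deg):
    """Wind vector (east, north) = ground velocity - air velocity."""
    trk, hdg = np.deg2rad(track_deg), np.deg2rad(heading_deg)
    v_ground = gs * np.array([np.sin(trk), np.cos(trk)])
    v_air = tas * np.array([np.sin(hdg), np.cos(hdg)])
    return v_ground - v_air

# (ground speed kt, track deg, TAS kt, heading deg, sample variance kt^2)
samples = [(140, 185, 150, 190, 4.0),
           (138, 183, 149, 189, 9.0),
           (142, 186, 151, 191, 2.5)]

winds = np.array([wind_from_aircraft(*s[:4]) for s in samples])
var = np.array([s[4] for s in samples])

# Minimum-variance combination: weights proportional to 1/variance.
w = (1 / var) / (1 / var).sum()
wind = (winds * w[:, None]).sum(axis=0)
speed = np.linalg.norm(wind)
direction = (np.degrees(np.arctan2(wind[0], wind[1])) + 180) % 360  # "from"
print(f"wind: {speed:.1f} kt from {direction:.0f} deg")
```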