944 results for "Non-linear multiple regression"


Relevance:

100.00%

Publisher:

Abstract:

[EN] The purpose of this investigation was to determine the contribution of muscle O2 consumption (mVO2) to pulmonary O2 uptake (pVO2) during both low-intensity (LI) and high-intensity (HI) knee-extension exercise, and during subsequent recovery, in humans. Seven healthy male subjects (age 20-25 years) completed a series of LI and HI square-wave exercise tests in which mVO2 (direct Fick technique) and pVO2 (indirect calorimetry) were measured simultaneously. The mean blood transit time from the muscle capillaries to the lung (MTTc-l) was also estimated (based on measured blood transit times from femoral artery to vein and vein to artery). The kinetics of mVO2 and pVO2 were modelled using non-linear regression. The time constant (tau) describing the phase II pVO2 kinetics following the onset of exercise was not significantly different from the mean response time (initial time delay + tau) for mVO2 kinetics for LI (30 ± 3 vs 30 ± 3 s) but was slightly higher (P < 0.05) for HI (32 ± 3 vs 29 ± 4 s); the responses were closely correlated (r = 0.95 for both intensities; P < 0.01). In recovery, agreement between the responses was more limited both for LI (36 ± 4 vs 18 ± 4 s, P < 0.05; r = -0.01) and for HI (33 ± 3 vs 27 ± 3 s, P > 0.05; r = -0.40). MTTc-l was approximately 17 s just before exercise and decreased to 12 and 10 s after 5 s of exercise for LI and HI, respectively. These data indicate that the phase II pVO2 kinetics reflect mVO2 kinetics during exercise but not during recovery, where caution in data interpretation is advised. Increased mVO2 probably makes a small contribution to pVO2 during the first 15-20 s of exercise.
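The phase II kinetics described above are conventionally modelled as a mono-exponential rise after an initial time delay and fitted by non-linear regression. A minimal sketch of such a fit on synthetic breath-by-breath data (all parameter values below are illustrative, not taken from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def phase2_vo2(t, baseline, amplitude, delay, tau):
    """Mono-exponential phase II VO2 response with an onset time delay."""
    resp = baseline + amplitude * (1.0 - np.exp(-(t - delay) / tau))
    return np.where(t < delay, baseline, resp)

# Synthetic data with hypothetical parameter values (L/min, s)
rng = np.random.default_rng(0)
t = np.arange(0, 180, 2.0)
true = dict(baseline=0.5, amplitude=1.2, delay=15.0, tau=30.0)
y = phase2_vo2(t, **true) + rng.normal(0, 0.03, t.size)

popt, _ = curve_fit(phase2_vo2, t, y, p0=[0.4, 1.0, 10.0, 20.0])
baseline, amplitude, delay, tau = popt
mrt = delay + tau   # mean response time = initial time delay + tau
```

The fitted tau corresponds to the phase II time constant, while the mean response time (delay + tau) is the quantity compared against the mVO2 response in the abstract.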


This work is structured as follows. In Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols, and we present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and envisage future work and possible clinical applications of our results. Ancillary studies carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix. -------- Ventricular mechanical dyssynchrony plays a regulating role already in normal physiology but is especially important in pathological conditions, such as hypertrophy, ischemia, infarction, or heart failure (Chapters 1-2). Several prospective randomized controlled trials supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony.
Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapters 3-4). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes was extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4). Our analyses were carried out on patients with indications for electrophysiologic study or device implantation (Chapter 5). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such values. By performing a frequency-domain analysis and by applying an optimized coherent averaging procedure (Chapter 6.a), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony that seemed more frequent in patients with ventricular dysfunction and appeared to play a role in discriminating patients. These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances.
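The idea behind coherent averaging can be sketched as follows: beat-aligned windows of a segmental volume signal are averaged so that periodic components add coherently while non-recurrent beat-to-beat variations cancel. A minimal illustration on a synthetic signal (beat length, trigger placement and noise level are arbitrary assumptions):

```python
import numpy as np

def coherent_average(signal, triggers, beat_len):
    """Average beat-aligned windows of a signal. Beats are aligned on
    trigger indices (e.g. ECG R-waves), so the periodic component survives
    averaging while non-recurrent variations are suppressed."""
    beats = np.array([signal[i:i + beat_len] for i in triggers
                      if i + beat_len <= len(signal)])
    mean_beat = beats.mean(axis=0)
    # The residual quantifies the non-periodic component of each beat
    residual = beats - mean_beat
    return mean_beat, residual

# Synthetic segmental volume signal: a periodic beat template plus noise
rng = np.random.default_rng(1)
beat_len, n_beats = 200, 50
template = np.sin(np.linspace(0, 2 * np.pi, beat_len))
signal = np.tile(template, n_beats) + rng.normal(0, 0.3, beat_len * n_beats)
triggers = np.arange(0, beat_len * n_beats, beat_len)

mean_beat, residual = coherent_average(signal, triggers, beat_len)
```

The power of the residual matrix is one plausible way to express the non-periodic content that the new indexes in this work quantify.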
We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes which quantify the non-periodic components of the volume signals. Using the new set of indexes, we studied the acute effects of CRT and of right ventricular pacing in patients with heart failure and in patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by pacing and the indexes of dyssynchrony, which may have practical implications for hemodynamic-guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial, and the majority of these patients do not meet current clinical indications for CRT. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction despite no clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d).
Similarly to the previous analysis, using novel parameters describing local synchrony and efficiency, we tested the hypothesis, and demonstrated, that biventricular pacing with alternative right ventricular pacing sites produces acute improvement of ventricular systolic function and improves mechanical synchrony when compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site that produced the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization periods, whose applied duration varies considerably across cardiac pacing studies. With an ad hoc protocol (Chapter 6.e) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the use of stabilization periods during the evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main dyssynchrony and ventricular performance changes occur within a 10 s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It had already been demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance together with heart rate, contractile state, preload and afterload. The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients.
To make available an independent system for cardiac output estimation, in order to confirm the results obtained with the conductance volume method, we developed and validated a novel technique to apply the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model; Wesseling et al. 1993) to the left ventricular pressure signal, instead of the arterial pressure used in the classical approach (Chapter 7). The results confirmed that in patients without valve abnormalities undergoing conductance catheter evaluations, continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable. Thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty of CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implant for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, the selection of candidates for CRT with echocardiographic criteria and the prediction of response to the therapy.


Running economy (RE), i.e. the oxygen consumption at a given submaximal speed, is an important determinant of endurance running performance. So far, investigators have widely attempted to identify the factors affecting RE in competitive athletes, focusing mainly on the relationships between RE and running biomechanics. However, the current results are inconsistent and a clear mechanical profile of an economical runner has not yet been established. The present work aimed to better understand how running technique influences RE in sub-elite middle-distance runners by investigating the biomechanical parameters acting on RE and the underlying mechanisms. Special emphasis was given to accounting for intra-individual variability in RE at different speeds and to assessing track running rather than treadmill running. In Study One, a factor analysis was used to reduce the 30 considered mechanical parameters to a few global descriptors of the running mechanics. Then, a biomechanical comparison between economical and non-economical runners and a multiple regression analysis (with RE as the criterion variable and mechanical indices as independent variables) were performed. It was found that a better RE was associated with greater knee and ankle flexion in the support phase, and that the combination of seven identified mechanical measures explains ∼72% of the variability in RE. In Study Two, a mathematical model predicting RE a priori from the rate of force production, originally developed and used in the field of comparative biology, was adapted and tested in competitive athletes. The model showed a very good fit (R2 = 0.86). In conclusion, the results of this dissertation suggest that the very complex interrelationships among the mechanical parameters affecting RE may be successfully dealt with through multivariate statistical analyses and the application of theoretical mathematical models.
Thanks to these results, coaches are provided with useful tools to assess the biomechanical profile of their athletes. Thus, individual weaknesses in running technique may be identified and removed, with the ultimate goal of improving RE.
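The multiple regression step of Study One (RE as criterion variable, mechanical indices as predictors) can be sketched on synthetic data. The indices, coefficients and sample size below are invented for illustration; the ~72% explained variance reported above is a property of the real data, not of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n_athletes, n_indices = 40, 7     # seven mechanical measures, as in Study One

# Hypothetical standardized mechanical indices and a running-economy score
X = rng.normal(size=(n_athletes, n_indices))
beta_true = np.array([0.8, -0.5, 0.4, 0.3, -0.2, 0.2, 0.1])
re = X @ beta_true + rng.normal(0, 0.5, n_athletes)

# Multiple regression via ordinary least squares, with an intercept column
A = np.column_stack([np.ones(n_athletes), X])
coef, *_ = np.linalg.lstsq(A, re, rcond=None)
pred = A @ coef
# Coefficient of determination: share of RE variability explained
r2 = 1 - np.sum((re - pred) ** 2) / np.sum((re - re.mean()) ** 2)
```

In the real analysis the predictors would be the factor scores or mechanical measures extracted beforehand, and R² is the figure reported as the explained variability in RE.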


The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities with repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses the fact that a large part of the literature explains households' debt holdings as a function, among others, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to a pooling of cross-sections of the SHIW. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which render the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the expected patterns of interdependence suggested by theoretical considerations.
Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems to be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of using dynamic panel data models lies in the fact that they allow one to simultaneously account for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulties are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. Results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, less sophisticated models, namely the pooled and exogenous models, over-estimate such persistence.
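The simplest of the four estimators listed, a pooled probit with the lagged state as a regressor, can be sketched on simulated panel data (all parameters below are hypothetical). Note that this simulation deliberately omits unobserved heterogeneity, so the pooled estimate is close to the true state-dependence parameter; with individual effects present it would be biased upward, which is exactly the over-estimation noted above:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n_households, n_waves = 500, 4

# Simulate persistent financial distress: the lagged state raises current risk
gamma_true, x_coef = 0.8, 0.5          # hypothetical parameters
x = rng.normal(size=(n_households, n_waves))
y = np.zeros((n_households, n_waves), dtype=int)
y[:, 0] = x[:, 0] * x_coef + rng.normal(size=n_households) > 0
for t in range(1, n_waves):
    latent = gamma_true * y[:, t - 1] + x_coef * x[:, t] \
             + rng.normal(size=n_households)
    y[:, t] = latent > 0

# Pooled probit on waves 2..T with the lagged state as a regressor
y_cur = y[:, 1:].ravel()
X = np.column_stack([np.ones(y_cur.size),
                     y[:, :-1].ravel(),            # lagged dependent variable
                     x[:, 1:].ravel()])

def neg_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y_cur * np.log(p) + (1 - y_cur) * np.log(1 - p))

res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
state_dependence = res.x[1]            # coefficient on the lagged state
```

The Heckman and Wooldridge estimators additionally model the initial condition and the individual effects; this sketch only illustrates the pooled benchmark they are compared against.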


With increasing attention to the assessment of safety in existing Dutch bridges and viaducts, the aim of the present thesis is to study, through Finite Element modeling and continuous comparison with experimental results, the real response of elements that compose these infrastructures, i.e. reinforced concrete slabs subjected to concentrated loads. These elements are characterized by shear behavior and shear failure, whose modeling is, from a computational point of view, a hard challenge, due to their brittle behavior combined with various 3D effects. The thesis is focused on the use of Sequentially Linear Analysis (SLA), an alternative solution technique to classical non-linear Finite Element analyses, which are based on incremental and iterative approaches. The advantage of SLA is to avoid the well-known convergence problems of non-linear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of a load or displacement increment. The comparison between the results of two laboratory tests on reinforced concrete slabs and those obtained by SLA has shown in both cases the robustness of the method, in terms of accuracy of the load-displacement diagrams, of the distribution of stress and strain, and of the representation of the cracking pattern and of the shear failure mechanisms. Different variations of the most important model parameters have been performed, pointing out the strong influence on the solutions of the fracture energy and of the chosen shear retention model. Finally, a comparison between SLA and the non-linear Newton-Raphson method has been carried out, showing the better reliability of SLA in the evaluation of ultimate loads and displacements, together with a significant reduction of computational times.
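The core SLA loop, a sequence of purely linear analyses in which the load is scaled so that exactly one element reaches its current strength and that element's stiffness and strength are then reduced, can be sketched on a toy system of parallel bars. The bar properties and the reduction factor below are invented; real SLA applies a multi-point sawtooth softening law to continuum finite elements:

```python
import numpy as np

def sla_parallel_bars(k, f, reduction=0.5, steps=30):
    """Sequentially Linear Analysis on parallel elastic-brittle bars.
    Each step: solve a LINEAR problem under a unit load, scale the load so
    exactly one element reaches its current strength, record the point on
    the load-displacement curve, then reduce that element's stiffness and
    strength (a crude sawtooth softening law)."""
    k, f = k.astype(float).copy(), f.astype(float).copy()
    curve = []                                   # (load, displacement) points
    for _ in range(steps):
        K = k.sum()
        if K <= 1e-12:
            break
        share = np.maximum(k / K, 1e-12)         # element force per unit load
        lam = np.min(f / share)                  # critical load multiplier
        crit = np.argmin(f / share)
        curve.append((lam, lam / K))
        k[crit] *= reduction                     # damage increment instead of
        f[crit] *= reduction                     # a load/displacement increment
    return np.array(curve)

# Hypothetical bar properties (not calibrated to the slab experiments)
k0 = np.array([10.0, 8.0, 6.0, 4.0])   # stiffnesses
f0 = np.array([5.0, 6.0, 7.0, 8.0])    # strengths
curve = sla_parallel_bars(k0, f0)
peak_load = curve[:, 0].max()
```

Because every step is linear, the procedure traces the softening branch past the peak load without any of the convergence difficulties of an incremental-iterative Newton-Raphson solution.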


The upgrade of the CERN accelerator complex has been planned in order to further increase the LHC performance in exploring new physics frontiers. One of the main limitations to the upgrade is represented by collective instabilities. These are intensity-dependent phenomena triggered by electromagnetic fields excited by the interaction of the beam with its surroundings. These fields are represented via wake fields in the time domain or impedances in the frequency domain. Impedances are usually studied assuming ultrarelativistic bunches, while we mainly explored low and medium energy regimes in the LHC injector chain. In a non-ultrarelativistic framework we carried out a complete study of the impedance structure of the PSB, which accelerates proton bunches up to 1.4 GeV. We measured the imaginary part of the impedance, which creates betatron tune shift. We introduced a parabolic bunch model which, together with dedicated measurements, allowed us to point to the resistive wall impedance as the source of one of the main PSB instabilities. These results are particularly useful for the design of efficient transverse instability dampers. We developed a macroparticle code to study the effect of space charge on intensity-dependent instabilities. Carrying out the analysis of the bunch modes, we proved that the damping effects caused by the space charge, which has been modelled with semi-analytical methods and using symplectic high-order schemes, can increase the bunch intensity threshold. Numerical libraries have also been developed in order to study, via numerical simulations of the bunches, the impedance of the whole CERN accelerator complex. On a different note, the CNGS experiment at CERN requires high-intensity beams. We calculated the interpolating Hamiltonian of the beam for highly non-linear lattices. These calculations provide the ground for theoretical and numerical studies aiming to improve the CNGS beam extraction from the PS to the SPS.
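The symplectic schemes mentioned above can be illustrated with the simplest member of the family, the leapfrog (Störmer-Verlet) kick-drift-kick step, here applied to a linear oscillator standing in for betatron motion (frequency, step size and duration are arbitrary choices):

```python
import numpy as np

def leapfrog(x0, p0, omega, dt, steps):
    """Symplectic (leapfrog/Stormer-Verlet) integration of x'' = -omega^2 x.
    The energy error stays bounded instead of drifting, which is why
    symplectic schemes suit long particle-tracking runs."""
    x, p = x0, p0
    xs = np.empty(steps)
    for i in range(steps):
        p -= 0.5 * dt * omega**2 * x   # half kick
        x += dt * p                    # drift
        p -= 0.5 * dt * omega**2 * x   # half kick
        xs[i] = x
    energy = 0.5 * p**2 + 0.5 * omega**2 * x**2
    return xs, energy

xs, energy = leapfrog(x0=1.0, p0=0.0, omega=1.0, dt=0.1, steps=10000)
e0 = 0.5   # initial energy of the oscillator
```

A non-symplectic integrator (e.g. explicit Euler) would show a secular energy drift over the same 10,000 steps; the bounded error here is the property exploited by high-order symplectic tracking codes.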


This thesis deals with the design of advanced OFDM systems. Both waveform and receiver design have been treated. The main scope of the thesis is to study, create, and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been assessed. A novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping, where the symbols belonging to the modulation alphabet are not anchored, but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process in order to minimize the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severe non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors. Indeed, in OFDM systems an accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented. The proposed algorithm is characterized by a very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels.
Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This makes it possible to replace conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh fading multipath and is characterized by low complexity. The performance of this approach has been analytically and numerically evaluated. Comparing the proposed approach with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the broadband satellite return channel has been assessed. Notably, SC-FDMA is able to improve the physical layer spectral efficiency with respect to single carrier systems, which have been used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and it is also sensitive to the phase noise of local radio frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA have been proposed.
As shown by numerical results, the proposed scheme manages to satisfy the strict synchronization requirements and to guarantee proper demodulation of the received signal.
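The PAPR figure that RISM minimizes is straightforward to compute from the oversampled time-domain envelope of an OFDM symbol. A minimal sketch for a plain random QPSK mapping (64 subcarriers and 4x oversampling are arbitrary choices; this fixed mapping is the baseline that RISM improves upon):

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """Peak-to-average power ratio of one OFDM symbol in dB.
    The time-domain envelope is obtained with an oversampled IFFT
    (zero-padding in the middle of the spectrum)."""
    n = len(freq_symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = freq_symbols[:n // 2]
    padded[-(n // 2):] = freq_symbols[n // 2:]
    x = np.fft.ifft(padded)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Hypothetical 64-subcarrier QPSK symbol (random data, not the RISM mapping)
rng = np.random.default_rng(4)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
papr = papr_db(qpsk)
```

A fixed random mapping like this typically lands well above the roughly 1 dB envelope reported for RISM; a single active subcarrier, by contrast, gives a constant envelope and hence 0 dB.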


The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types due to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves versus time is a very simple yet operator-dependent procedure; therefore more objective approaches have been developed in order to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time-series data is used for tissue classification. The main issue concerning these schemes is that they have no direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions.
The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine-learning techniques for the segmentation and classification of breast lesions.
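A typical building block for the model-based route described above is the standard Tofts model, Ct(t) = Ktrans · (Cp ⊛ exp(-kep·t)), fitted pixel-wise by non-linear regression. A minimal sketch on synthetic data (the arterial input function, parameter values and noise level are all invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 300, 2.0)                        # time grid (s)
cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)         # hypothetical AIF (mM)
dt = t[1] - t[0]

def tofts(t, ktrans, kep):
    """Standard Tofts model: tissue concentration as the convolution of the
    plasma concentration Cp with an exponential impulse response."""
    irf = np.exp(-kep * t)
    return ktrans * np.convolve(cp, irf)[:t.size] * dt

rng = np.random.default_rng(5)
true_ktrans, true_kep = 0.005, 0.01               # 1/s, hypothetical tissue
ct = tofts(t, true_ktrans, true_kep) + rng.normal(0, 0.002, t.size)

# Pixel-wise non-linear regression; as noted above, the result is sensitive
# to noise and to the initial solution p0
popt, _ = curve_fit(tofts, t, ct, p0=[0.002, 0.02], bounds=(0, [1.0, 1.0]))
ktrans_hat, kep_hat = popt
```

Repeating this fit for every pixel in a region of interest yields the parameter maps (e.g. Ktrans, related to vessel permeability and flow) that the thesis uses for segmentation and classification.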


The development and the growth of plants is strongly affected by the interactions between roots, root-associated organisms and rhizosphere communities. Methods to assess such interactions are hard to develop, particularly in perennial and woody plants, due to their complex root system structure and the temporal changes in their physiology patterns. In this respect, grape root systems are not investigated very well. The aim of the present work was the development of a method to assess and predict interactions at the root system of rootstocks (Vitis berlandieri x Vitis riparia) in the field. To achieve this aim, grape phylloxera (Daktulosphaira vitifoliae Fitch, Hemiptera, Aphidoidea) was used as a grape-root-parasitizing model. To develop the methodical approach, a long-term trial (2006-2009) was arranged on a commercially used vineyard in Geisenheim/Rheingau. Every 2 to 8 weeks the topmost 20 cm of soil under the foliage wall were investigated and root material was extracted (n = 8-10). To include temporal, spatial and cultivar-specific root system dynamics, the extracted root material was analyzed digitally for its morphological properties. The grape phylloxera population was quantified and characterized visually on the basis of larval stages (oviparous, non-oviparous and winged preliminary stages). Infection patches (nodosities) were characterized visually as well, partly supported by digital root color analyses. Due to the known effects of fungal endophytes on the vitality of grape phylloxera-infested grapevines, fungal endophytes were isolated from nodosity and root tissue and characterized (morphotypes) afterwards. Further abiotic and biotic soil conditions of the vineyards were assessed.
The temporal, spatial and cultivar-specific sensitivity of single parameters was analyzed by omnibus tests (ANOVAs) and subsequent post-hoc tests. The relations between different parameters were analyzed by multiple regression models. Quantitative parameters were developed to assess the degeneration of nodosities and the development of nodosity-attached roots, and to differentiate between nodosities and other root swellings in the field. Significant differences were shown between parameters that include root dynamics and parameters that ignore them. Regarding the description of grape phylloxera population and root system dynamics, the method showed a high temporal, spatial and cultivar-specific sensitivity. Further, specific differences could be shown in the frequency of endophyte morphotypes between root and nodosity tissue as well as between cultivars. The degeneration of nodosities as well as nodosity occupation rates could be related to the calculated abundances of the grape phylloxera population. Further ecological questions concerning grape root development (e.g. the relation between moisture and root development) and grape phylloxera population development (e.g. the relation between temperature and population structure) could be answered for field conditions. Generally, the presented work provides an approach to evaluate the vitality of grape root systems. This approach can be useful for the development of control strategies against soilborne pests in viticulture (e.g. grape phylloxera, Sorosphaera viticola, Roesleria subterranea (Weinm.) Redhead) as well as for the evaluation of integrated management systems in viticulture.
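The omnibus testing step described above can be sketched with a one-way ANOVA across cultivars; the group means, spread and sample size below are invented for illustration:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(6)
# Hypothetical root-length measurements (cm per soil sample, n=20 patches)
# for three cultivars; values are illustrative only
cultivar_a = rng.normal(12.0, 2.0, 20)
cultivar_b = rng.normal(15.0, 2.0, 20)
cultivar_c = rng.normal(12.5, 2.0, 20)

# Omnibus one-way ANOVA: do the cultivar means differ at all?
f_stat, p_value = f_oneway(cultivar_a, cultivar_b, cultivar_c)
significant = p_value < 0.05   # only then proceed to post-hoc pairwise tests
```

As in the study, a significant omnibus result would be followed by post-hoc pairwise comparisons to locate which cultivars actually differ.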

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Aerosol particles influence climate by scattering and absorbing radiation and by acting as nuclei for cloud droplets and ice crystals. In addition, aerosols strongly affect air pollution and public health. Gas-particle interactions are important processes because they influence the physical and chemical properties of aerosols, such as toxicity, reactivity, hygroscopicity and optical properties. Owing to a lack of experimental data and universal model formalisms, however, the mechanisms and kinetics of gas uptake and of the chemical transformation of organic aerosol particles are insufficiently characterized. Both the chemical transformation and the adverse health effects of toxic and allergenic aerosol particles, such as soot, polycyclic aromatic hydrocarbons (PAHs) and proteins, are so far not well understood.
Kinetic flux models for aerosol surface and particle bulk chemistry were developed on the basis of the Pöschl-Rudich-Ammann formalism for gas-particle interactions. First, the kinetic double-layer surface model K2-SURF was developed, which describes the degradation of PAHs on aerosol particles in the presence of ozone, nitrogen dioxide, water vapour, and hydroxyl and nitrate radicals. Competitive adsorption and chemical transformation of the surface lead to a strongly non-linear dependence of ozone uptake on gas composition. Under atmospheric conditions, the chemical lifetime of PAHs ranges from a few minutes on soot, through several hours on organic and inorganic solids, up to days on liquid particles.
Subsequently, the kinetic multi-layer model KM-SUB was developed to describe the chemical transformation of organic aerosol particles. KM-SUB is able to explicitly resolve transport processes and chemical reactions at the surface and in the bulk of aerosol particles. In contrast to earlier models, it requires no simplifying assumptions about steady-state conditions or radial mixing. In combination with literature data and new experimental results, KM-SUB was used to elucidate the effects of interfacial and bulk transport processes on the ozonolysis and nitration of protein macromolecules, oleic acid, and related organic compounds. The kinetic models developed in this study are intended to serve as a basis for developing a detailed mechanism of aerosol chemistry, and for deriving simplified yet realistic parametrizations for large-scale global atmospheric and climate models.
The experiments and model calculations performed in this study provide evidence for the formation of long-lived reactive oxygen intermediates (ROIs) in the heterogeneous reaction of ozone with aerosol particles. The chemical lifetime of these intermediates exceeds 100 s, much longer than the surface residence time of molecular O3 (~10^-9 s). The ROIs resolve apparent discrepancies between earlier quantum-mechanical calculations and kinetic experiments. They play a key role in the chemical transformation as well as in the adverse health effects of toxic and allergenic fine-particulate-matter components such as soot, PAHs and proteins. ROIs are presumably also involved in the decomposition of ozone on mineral dust and in the formation and growth of secondary organic aerosols. Moreover, ROIs form a link between atmospheric and biospheric multiphase processes (chemical and biological ageing).
Organic compounds can occur as amorphous solids or in a semi-solid state, which influences the rate of heterogeneous reactions and multiphase processes in aerosols. Flow-tube experiments show that ozone uptake and the oxidative ageing of amorphous proteins are kinetically limited by bulk diffusion. The reactive gas uptake increases markedly with increasing humidity, which can be explained by a decrease in viscosity caused by a phase transition of the amorphous organic matrix from a glassy to a semi-solid state (moisture-induced phase transition). The chemical lifetime of reactive compounds in organic particles can increase from seconds to days, as the diffusion rate in the semi-solid phase can drop by orders of magnitude at low temperature or low humidity. The results of this study show how semi-solid phases can influence the effects of organic aerosols on air quality, health and climate.
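A kinetic multi-layer model of the KM-SUB type resolves surface and bulk processes as a set of coupled ordinary differential equations. The following toy sketch (not the actual KM-SUB implementation; all rate coefficients are illustrative placeholders in normalized units) shows the basic structure: surface uptake of an oxidant, layer-to-layer exchange, and reaction with a condensed-phase species in each layer:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy multi-layer sketch (NOT the actual KM-SUB code): an oxidant X adsorbs
# on the particle surface, exchanges with n bulk layers, and reacts with a
# condensed-phase species Y in each layer.
n_layers = 5
k_ads, k_des = 1.0, 0.1   # adsorption from the gas phase / desorption
k_exch = 0.5              # surface-bulk and layer-to-layer exchange
k_rxn = 0.2               # bimolecular X + Y reaction in the bulk
x_gas = 1.0               # constant normalized gas-phase concentration

def rhs(t, state):
    xs = state[0]                     # X on the surface
    xb = state[1:1 + n_layers]        # X in each bulk layer
    yb = state[1 + n_layers:]         # Y in each bulk layer
    dxs = k_ads * x_gas - k_des * xs - k_exch * (xs - xb[0])
    dxb = np.zeros(n_layers)
    for i in range(n_layers):
        flux_in = k_exch * ((xs if i == 0 else xb[i - 1]) - xb[i])
        flux_out = k_exch * (xb[i] - xb[i + 1]) if i < n_layers - 1 else 0.0
        dxb[i] = flux_in - flux_out - k_rxn * xb[i] * yb[i]
    dyb = -k_rxn * xb * yb            # Y is consumed by the reaction
    return np.concatenate(([dxs], dxb, dyb))

state0 = np.concatenate(([0.0], np.zeros(n_layers), np.ones(n_layers)))
sol = solve_ivp(rhs, (0.0, 5.0), state0, rtol=1e-8)
y_final = sol.y[1 + n_layers:, -1]
print("remaining Y per layer (surface to core):", np.round(y_final, 3))
```

Because the oxidant reaches the outer layers first, the reactant near the surface is depleted faster than in the particle core, the bulk-diffusion limitation discussed above.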

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The first part of this dissertation investigates perturbative unitarity in the complex-mass renormalization scheme (CMS). For this purpose, a method for calculating the imaginary parts of one-loop integrals with complex mass parameters is presented which, in the limit of stable particles, reduces to the conventional Cutkosky rules. Using a model Lagrangian for the interaction of a heavy vector boson with a light fermion, it is demonstrated that applying the CMS preserves the unitarity of the underlying S-matrix in the perturbative sense, provided that the renormalized coupling constant is chosen to be real. The second part of the thesis deals with various applications of the CMS in chiral effective field theory (EFT). In particular, the mass and width of the Delta resonance, the elastic electromagnetic form factors of the Roper resonance, the electromagnetic form factors of the nucleon-to-Roper transition, as well as pion-nucleon scattering and photo- and electroproduction of pions at centre-of-mass energies in the region of the Roper resonance are calculated. Choosing suitable renormalization conditions makes it possible to set up a consistent chiral power counting for EFT in the presence of various resonant degrees of freedom, so that the processes listed above can be studied in a systematic expansion in small parameters. The results obtained here can be used for extrapolations of corresponding lattice QCD simulations to the physical value of the pion mass. Therefore, in addition to the dependence of the form factors on the squared momentum transfer, the pion-mass dependence of the magnetic moment and of the electromagnetic radii of the Roper resonance is investigated.
Within pion-nucleon scattering and photo- and electroproduction of pions, a partial-wave analysis and a multipole decomposition are performed, with the P11 partial wave and the multipoles M1- and S1- fitted to empirical data by means of non-linear regression.
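The fitting step rests on standard non-linear regression. As a hedged illustration of the procedure only (not the chiral EFT amplitudes themselves), one can fit a simple Breit-Wigner line shape with Roper-like mass and width to synthetic pseudo-data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustration only: a relativistic Breit-Wigner line shape is fitted to
# synthetic pseudo-data. The actual analysis fits chiral EFT amplitudes,
# not this simple parametrization.
def breit_wigner(w, m, gamma, a):
    return a * m**2 * gamma**2 / ((w**2 - m**2)**2 + m**2 * gamma**2)

rng = np.random.default_rng(1)
w = np.linspace(1.2, 1.7, 60)                  # centre-of-mass energy (GeV)
pseudo = breit_wigner(w, 1.44, 0.35, 1.0)      # Roper-like "true" curve
data = pseudo + rng.normal(0.0, 0.01, w.size)  # add measurement noise

popt, pcov = curve_fit(breit_wigner, w, data, p0=[1.4, 0.3, 0.9])
m_fit, gamma_fit, a_fit = popt
print(f"fitted mass: {m_fit:.3f} GeV, width: {abs(gamma_fit):.3f} GeV")
```

The diagonal of `pcov` provides the usual parameter variance estimates, the analogue of the uncertainties quoted for fitted resonance parameters.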

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Cardiotocography (CTG) is a widespread foetal diagnostic method. However, it lacks objectivity and reproducibility because of its dependence on the observer's expertise. To overcome these limitations, more objective methods for CTG interpretation have been proposed. In particular, many of the developed techniques aim to assess foetal heart rate variability (FHRV). Among them, some methodologies from nonlinear systems theory have been applied to the study of FHRV. All of these techniques have proved helpful in specific cases; nevertheless, none of them is more reliable than the others, so an in-depth study is necessary. The aim of this work is to deepen FHRV analysis through Symbolic Dynamics Analysis (SDA), a nonlinear technique already successfully employed for HRV analysis. Thanks to its simplicity of interpretation, it could be a useful tool for clinicians. We performed a literature study involving about 200 references on HRV and FHRV analysis; approximately 100 works were focused on non-linear techniques. Then, in order to compare linear and non-linear methods, we carried out a multiparametric study. 580 antepartum recordings of healthy foetuses were examined. Signals were processed using updated software for CTG analysis and newly developed software for generating simulated CTG traces. Finally, statistical tests and regression analyses were carried out to estimate relationships among the extracted indexes and other clinical information. The results confirm that none of the employed techniques is more reliable than the others. Moreover, in agreement with the literature, each analysis should take into account two relevant parameters: the foetal status and the week of gestation. Regarding SDA, the results show its promising capabilities in FHRV analysis. It allows recognition of foetal status, gestation week and the global variability of FHR signals, even better than other methods.
Nevertheless, further studies, which should also involve pathological cases, are necessary to establish its reliability.
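The core of a symbolic dynamics analysis can be sketched in a few lines: quantize the signal into a small alphabet, group consecutive symbols into words, and summarize the word distribution by its Shannon entropy. The thresholds, alphabet size and word length below are illustrative choices, not the exact parameters of the study:

```python
import numpy as np
from collections import Counter

# Minimal SDA sketch on a heart-rate-like series: quantize into n_symbols
# bins spanning mean +/- 2*std, build words of word_len consecutive symbols,
# and return the Shannon entropy of the word distribution (in bits).
def sda_entropy(x, n_symbols=4, word_len=3):
    mu, sd = np.mean(x), np.std(x)
    edges = np.linspace(mu - 2 * sd, mu + 2 * sd, n_symbols - 1)
    symbols = np.digitize(x, edges)
    words = [tuple(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    counts = Counter(words)
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

# Synthetic FHR-like trace: baseline 140 bpm, slow oscillation, noise.
rng = np.random.default_rng(2)
fhr = 140 + 5 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 2, 600)
print(f"word entropy: {sda_entropy(fhr):.2f} bits")
```

Lower entropy corresponds to a more regular symbol sequence, which is one way SDA-type indexes separate foetal states.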

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The study was structured to meet its objectives as follows, opening with an introduction. Particular attention was paid in the second part to the physical setting of the study area, together with an attempt to show the climatic characteristics of Libya. In the third part, observed temporal and spatial climate change in Libya was investigated through the trends of temperature, precipitation, relative humidity and cloud amount over the periods 1946-2000, 1946-1975 and 1976-2000, comparing the results with global scales. The fourth part examined the natural and human causes of climate change, concentrating on the greenhouse effect. The potential impacts of climate change on Libya were examined in the fifth chapter. As a case study, desertification of the Jifara Plain was studied in the sixth part. In the seventh chapter, projections and mitigations of climate change and desertification were discussed. Finally, the main results and recommendations of the study were summarized.
In order to accomplish the objectives outlined above, the following methods and approaches were used: simple linear regression analysis was computed to detect the trends of climatic parameters over time; a trend test based on a trend-to-noise ratio was applied to detect linear or non-linear trends; the non-parametric Mann-Kendall trend test was used to reveal the behaviour of the trends and their significance; PCA was applied to construct the all-Libya trends of the climatic parameters; the aridity index after Walter-Lieth was used to compute humid and arid months in Libya; the correlation coefficient after Pearson was used to detect teleconnections between sunspot numbers, the NAOI, the SOI, GHGs and global warming on the one hand and climate changes in Libya on the other; the aridity index after De Martonne was used to elaborate the trends of aridity in the Jifara Plain; and Geographical Information System and Remote Sensing techniques were applied to clarify the illustrations and to monitor desertification of the Jifara Plain using the available satellite images (MSS, TM, ETM+) and the Shuttle Radar Topography Mission (SRTM). The results are presented in 88 tables, 96 figures and 10 photos.
Temporal and spatial temperature changes in Libya showed remarkably different annual and seasonal trends over the long observation period 1946-2000 and the short observation periods 1946-1975 and 1976-2000. Trends of mean annual temperature were positive at all study stations except one from 1946-2000; negative trends prevailed at most stations from 1946-1975, while strongly positive trends were computed at all study stations from 1976-2000, corresponding with the global warming trend. Positive trends of mean minimum temperature were observed at all reference stations from 1946-2000 and 1976-2000, while negative trends prevailed at most stations over the period 1946-1975.
For mean maximum temperature, positive trends were shown from 1946-2000 and from 1976-2000 at most stations, while most trends were negative from 1946-1975. Minimum temperatures increased at roughly twice the rate of maximum temperatures at most stations. In terms of seasonal temperature, warming mostly occurred in summer and autumn in both study periods, in contrast to global observations identifying warming mostly in winter and spring. Precipitation across Libya is characterized by scanty and sporadic totals, as well as high intensities and very high spatial and temporal variability. From 1946-2000, large inter-annual and intra-annual variability was observed. Positive trends of annual precipitation totals were observed from 1946-2000, and negative trends from 1976-2000, at most stations. Variability of seasonal precipitation over Libya was more strikingly experienced from 1976-2000 than from 1951-1975, indicating a growing magnitude of climate change in more recent times. Negative trends of mean annual relative humidity were computed at eight stations, while positive trends prevailed at seven stations from 1946-2000. For the short observation period 1976-2000, positive trends were computed at most stations. Annual cloud amount totals decreased at most study stations in Libya over both the long and the short periods. Remarkably large spatial variations of climate change were observed from north to south over Libya. Causes of climate change were discussed, showing a high correlation between temperature increase over Libya and CO2 emissions; a weakly positive correlation between precipitation and the North Atlantic Oscillation index; a negative correlation between temperature and sunspot numbers; and a negative correlation between precipitation over Libya and the Southern Oscillation Index. The years 1992 and 1993 were shown to be the coldest in the 1990s, resulting from the eruption of Mount Pinatubo in 1991.
Libya is affected by climate change in many ways, in particular regarding crop production and food security, water resources, human health, population settlement and biodiversity. But the effects of climate change depend on its magnitude and the rate at which it occurs. The Jifara Plain, located in northwestern Libya, has been seriously exposed to desertification as a result of climate change, landforms, overgrazing, over-cultivation and population growth. Soils have been degraded, vegetation cover has disappeared and groundwater wells have run dry in many parts. The effect of desertification on the Jifara Plain appears through reduced soil fertility and crop productivity, leading to long-term declines in agricultural yields, livestock yields, plant standing biomass and plant biodiversity. Desertification also has significant implications for the livestock industry and the national economy. Desertification accelerates migration from rural and nomadic areas to urban areas, as the land can no longer support the original inhabitants. In the absence of major shifts in policy, economic growth, energy prices and consumer trends, climate change in Libya and desertification of the Jifara Plain are expected to continue in the future. Libya has cooperated with the United Nations and other international organizations. It has signed and ratified a number of international and regional agreements which effectively establish a policy framework for actions to mitigate climate change and combat desertification. Libya has implemented several laws and legislative acts, with a number of ancillary and supplementary rules for regulation. Despite the current efforts and ongoing projects being undertaken in Libya in the field of climate change and desertification, urgent actions and projects are needed to mitigate climate change and combat desertification in the near future.
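The non-parametric Mann-Kendall trend test used in the temperature and precipitation analyses above can be sketched as follows (normal approximation without tie correction; the warming series below is synthetic, not the Libyan station data):

```python
import numpy as np
from scipy import stats

# Mann-Kendall test: S counts concordant minus discordant pairs; a normal
# approximation with continuity correction yields a two-sided p-value.
def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S without ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))   # two-sided p-value
    return z, p

# Synthetic annual mean temperature series, 1946-2000, with a +0.02 °C/yr trend.
rng = np.random.default_rng(3)
years = np.arange(1946, 2001)
temp = 0.02 * (years - 1946) + rng.normal(0.0, 0.3, years.size)
z, p = mann_kendall(temp)
print(f"Mann-Kendall z = {z:.2f}, p = {p:.4f}")
```

A positive z indicates an upward trend; |z| > 1.96 is significant at the 5% level, the criterion typically applied to the station series.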

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Climate monitoring requires an operational, spatio-temporal analysis of climate variability. With the aim of producing operational maps on a regular basis, it is helpful to display at a glance the spatial variability of the climate elements and their temporal changes. For current and recent years, the German Meteorological Service (Deutscher Wetterdienst) developed a standard procedure for producing such maps. The method of producing these maps varies for the different climate elements, depending on the underlying data, the natural variability and the availability of in-situ data.
Within the analysis of spatio-temporal variability in this dissertation, various interpolation procedures are applied to the mean temperature of the five decades of the years 1951-2000 for a relatively large area, Region VI of the World Meteorological Organization (Europe and the Middle East). The region covers a climatologically rather heterogeneous study area, from Greenland in the northwest to Syria in the southeast.
The central aim of the dissertation is to develop a method for the spatial interpolation of mean decadal temperature values for Region VI. This method is intended to be suitable in future for operational monthly climate map production. This unified method should be transferable to other climate elements and, with the appropriate software, applicable anywhere. Two central databases are used in this dissertation: so-called CLIMAT data over land and ship data over the sea.
In essence, the transfer of the point values of temperature to the area by spatial interpolation is accomplished in three steps. The first step comprises a multiple regression that reduces the station values to a common level using the four explanatory variables of geographical latitude, elevation above sea level, annual temperature amplitude and thermal continentality. In the second step, the reduced temperature values, the so-called residuals, are interpolated with the radial basis function method from the group of neural network models (NNM). In the last step, the interpolated temperature grids are scaled back to their original level by inverting the multiple regression from step one with the help of the four explanatory variables.
For all station values, the difference between the value estimated by the interpolation and the true measured value is calculated and expressed by the geostatistical measure of the root mean square error (RMSE). The central advantages are the faithful reproduction of the values, the absence of generalization and the avoidance of interpolation islands. The developed procedure is transferable to other climate elements such as precipitation, snow depth or sunshine duration.
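The three-step scheme (regression reduction, radial-basis-function interpolation of the residuals, inverse regression) can be sketched as follows; stations, predictors and coefficients are synthetic stand-ins for the CLIMAT/ship data, and only latitude and elevation are used as explanatory variables for brevity:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
n = 80
lon = rng.uniform(-10.0, 40.0, n)
lat = rng.uniform(35.0, 70.0, n)
elev = rng.uniform(0.0, 2000.0, n)
# Synthetic "observed" temperature: latitude and lapse-rate terms plus noise.
temp = 30.0 - 0.5 * lat - 0.0065 * elev + rng.normal(0.0, 0.5, n)

# Step 1: multiple regression reduces station values to a common level.
X = np.column_stack([np.ones(n), lat, elev])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
residuals = temp - X @ beta

# Step 2: interpolate the residuals in (lon, lat) with radial basis functions.
rbf = RBFInterpolator(np.column_stack([lon, lat]), residuals, smoothing=0.1)

# Step 3: evaluate at a target point and invert the regression reduction.
pt = dict(lon=10.0, lat=50.0, elev=300.0)
pred = (np.array([1.0, pt["lat"], pt["elev"]]) @ beta
        + rbf(np.array([[pt["lon"], pt["lat"]]]))[0])

# Cross-check against the stations via the RMSE, as in the thesis.
fitted = X @ beta + rbf(np.column_stack([lon, lat]))
rmse = np.sqrt(np.mean((fitted - temp) ** 2))
print(f"predicted temperature: {pred:.1f} °C, in-sample RMSE: {rmse:.2f} K")
```

Because the regression removes the systematic latitude and elevation dependence first, the RBF step only has to model the residual spatial structure, which is what keeps the interpolation free of artificial "islands".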

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Far from being static transmission units, synapses are highly dynamical elements that change over multiple time scales depending on the history of the neural activity of both the pre- and postsynaptic neuron. Moreover, synaptic changes on different time scales interact: long-term plasticity (LTP) can modify the properties of short-term plasticity (STP) in the same synapse. Most existing theories of synaptic plasticity focus on only one of these time scales (either STP or LTP or late-LTP) and the theoretical principles underlying their interactions are thus largely unknown. Here we develop a normative model of synaptic plasticity that combines both STP and LTP and predicts specific patterns for their interactions. Recently, it has been proposed that STP arranges for the local postsynaptic membrane potential at a synapse to behave as an optimal estimator of the presynaptic membrane potential based on the incoming spikes. Here we generalize this approach by considering an optimal estimator of a non-linear function of the membrane potential and the long-term synaptic efficacy—which itself may be subject to change on a slower time scale. We find that an increase in the long-term synaptic efficacy necessitates changes in the dynamics of STP. More precisely, for a realistic non-linear function to be estimated, our model predicts that after the induction of LTP, causing long-term synaptic efficacy to increase, a depressing synapse should become even more depressing. That is, in a protocol using trains of presynaptic stimuli, as the initial EPSP becomes stronger due to LTP, subsequent EPSPs should become weakened and this weakening should be more pronounced with LTP. This form of redistribution of synaptic efficacies agrees well with electrophysiological data on synapses connecting layer 5 pyramidal neurons.
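The predicted redistribution of synaptic efficacy can be illustrated with the standard Tsodyks-Markram depression model (an illustration of the phenomenon, not the normative estimator model developed here); as a simplifying assumption, LTP is mimicked by an increase in the utilization parameter U:

```python
import numpy as np

# Tsodyks-Markram depression model: resources r recover with time constant
# tau_rec; each spike uses a fraction U of them; the EPSP is A * U * r.
def epsp_train(A, U, n_spikes=5, isi=0.05, tau_rec=0.5):
    r, epsps = 1.0, []
    for _ in range(n_spikes):
        epsps.append(A * U * r)
        r *= 1.0 - U                                  # resources used by spike
        r = 1.0 - (1.0 - r) * np.exp(-isi / tau_rec)  # recovery until next one
    return np.array(epsps)

# Assumption for illustration: LTP raises utilization U (A held fixed).
before = epsp_train(A=1.0, U=0.40)
after = epsp_train(A=1.0, U=0.55)
print("EPSPs before LTP:", np.round(before, 3))
print("EPSPs after LTP: ", np.round(after, 3))
# The first EPSP grows while paired-pulse depression becomes steeper:
print("EPSP2/EPSP1 before:", round(before[1] / before[0], 3),
      "after:", round(after[1] / after[0], 3))
```

This reproduces the qualitative pattern described above: after LTP the initial EPSP in a train is stronger, while subsequent EPSPs are relatively more depressed.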