Abstract:
Four castrated crossbred horses were used in a randomized block design to study the use of the indigestible internal markers iNDF and iADF obtained in situ (from bovines) or in vivo (from equines). Treatments consisted of determining digestibility by the direct method, comprising total feces collection (TC), and by the indirect method, using the internal markers iNDF and iADF obtained by in situ incubation in the bovine rumen or in vivo by the mobile nylon bag (MNB) technique with horses. iNDF-IV and iADF-IV gave the better marker recovery rate (RR) (91.50%), similar to TC. The in situ technique resulted in lower RR values for the two indigestible markers, averaging 86.50% (p < 0.05). Nutrient digestibility coefficients (CD) were adequately predicted by iADF-IV for horses fed exclusively on hay, with values of 46.41, 48.16, 47.92 and 45.51% for dry matter (DM), organic matter (OM), NDF and gross energy, respectively. Results show that the MNB technique may be used to obtain iADF in horses fed exclusively on coast-cross hay, whereas iNDF and iADF were selected to predict nutrient digestibility coefficients in horses fed mixed diets.
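For reference, the indirect (marker) method estimates digestibility from marker and nutrient concentrations without total collection. A minimal sketch of the standard indicator-method formulas (function names and the example figures are illustrative, not taken from the study):

```python
def digestibility_from_marker(marker_feed, marker_feces, nutrient_feed, nutrient_feces):
    """Apparent digestibility coefficient (%) by the internal-marker method:
    CD = 100 - 100 * (marker in feed / marker in feces)
               * (nutrient in feces / nutrient in feed).
    All arguments are concentrations on a dry-matter basis (e.g. % DM)."""
    return 100.0 - 100.0 * (marker_feed / marker_feces) * (nutrient_feces / nutrient_feed)


def marker_recovery(marker_intake_g, marker_excreted_g):
    """Marker recovery rate (%): marker excreted in feces over marker ingested."""
    return 100.0 * marker_excreted_g / marker_intake_g
```

A recovery rate near 100% indicates the marker is truly indigestible and fully recovered, which is the condition under which the CD formula is unbiased.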
Abstract:
Abstract Background Effective malaria control relies on accurate identification of those Anopheles mosquitoes responsible for the transmission of Plasmodium parasites. Anopheles oswaldoi s.l. has been incriminated as a malaria vector in Colombia and some localities in Brazil, but not ubiquitously throughout its Neotropical range. This evidence, together with variable morphological characters and genetic differences, supports the view that An. oswaldoi s.l. comprises a species complex. The recent fully integrated redescription of An. oswaldoi s.s. provides a solid taxonomic foundation from which to molecularly determine other members of the complex. Methods DNA sequences of the second internal transcribed spacer (ITS2 - rDNA) (n = 192) and the barcoding region of the cytochrome oxidase I gene (COI - mtDNA) (n = 110) were generated from 255 specimens of An. oswaldoi s.l. from 33 localities: Brazil (8 localities, including the lectotype series of An. oswaldoi), Ecuador (4), Colombia (17), Trinidad and Tobago (1), and Peru (3). COI sequences were analyzed employing the Kimura two-parameter (K2P) model, Bayesian analysis (MrBayes), the mixed Yule-coalescent model (MYC, for delimitation of clusters) and TCS genealogies. Results Separate and combined analyses of the COI and ITS2 data sets unequivocally supported four separate species: two previously determined (An. oswaldoi s.s. and An. oswaldoi B) and two newly designated species in the Oswaldoi Complex (An. oswaldoi A and An. sp. nr. konderi). The COI intra- and inter-specific genetic distances for the four taxa were non-overlapping, averaging 0.012 (0.007 to 0.020) and 0.052 (0.038 to 0.064), respectively. The concurring four clusters delineated by MrBayes and MYC, and four independent TCS networks, strongly confirmed their separate species status. In addition, An. konderi of Sallum should be regarded as unique with respect to the above.
Despite initially being included as an outgroup taxon, this species falls well within the examined taxa, suggesting a combined analysis of these taxa would be most appropriate. Conclusions: Through novel data and retrospective comparison of available COI and ITS2 DNA sequences, evidence is shown to support the separate species status of An. oswaldoi s.s., An. oswaldoi A and An. oswaldoi B, and at least two species in the closely related An. konderi complex (An. sp. nr. konderi, An. konderi of Sallum). Although An. oswaldoi s.s. has never been implicated in malaria transmission, An. oswaldoi B is a confirmed vector and the new species An. oswaldoi A and An. sp. nr. konderi are circumstantially implicated, most likely acting as secondary vectors.
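The K2P model used for the COI distances has a closed form in the transition proportion P and transversion proportion Q: d = -(1/2) ln((1 - 2P - Q) sqrt(1 - 2Q)). A minimal sketch of this computation (function and variable names are illustrative, not from the study):

```python
import math

PURINES = {"A", "G"}  # transitions are purine<->purine or pyrimidine<->pyrimidine


def k2p_distance(seq1, seq2):
    """Kimura two-parameter (K2P) distance between two aligned DNA sequences:
    d = -(1/2) * ln((1 - 2P - Q) * sqrt(1 - 2Q)),
    with P = transition proportion, Q = transversion proportion."""
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    n = transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps and ambiguity codes
        n += 1
        if a == b:
            continue
        if (a in PURINES) == (b in PURINES):
            transitions += 1   # A<->G or C<->T
        else:
            transversions += 1
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))
```

Because intra- and inter-specific K2P distances were non-overlapping here (0.012 vs 0.052 on average), a simple distance threshold already separates the four taxa.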
Abstract:
The Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is a long-term (20-year) research effort aimed at understanding the functioning of the Amazonian ecosystem. In particular, the strong biosphere-atmosphere interaction is a key component, studied through the exchange processes between vegetation and the atmosphere, with a focus on aerosol particles. Two aerosol components are the most visible: the natural biogenic emissions of aerosols and VOCs, and the biomass burning emissions. A large effort was devoted to characterizing natural biogenic aerosols, yielding detailed organic characterization and optical properties. The biomass burning component in Amazonia is important in terms of aerosol and trace gas emissions, with deforestation rates decreasing from 27,000 km² in 2004 to about 5,000 km² in 2011. Biomass burning emissions in Amazonia increase concentrations of aerosol particles, CO, ozone and other species, and also change the surface radiation balance in a significant way. Long-term monitoring of aerosols and trace gases was performed at two sites: a background site in Central Amazonia, 55 km north of Manaus (the ZF2 ecological reservation), and a monitoring station in Porto Velho, Rondonia state, a site heavily impacted by biomass burning smoke. Several instruments were operated to measure aerosol size distribution, optical properties (absorption and scattering at several wavelengths), and the composition of organic (OC/EC) and inorganic components, among other measurements. AERONET and MODIS measurements from 5 long-term sites show a large year-to-year variability due to climatic and socio-economic factors. Aerosol optical depths of more than 4 at 550 nm were frequently observed over biomass burning areas. In the pristine Amazonian atmosphere, aerosol scattering coefficients ranged between 1 and 200 Mm⁻¹ at 450 nm, while absorption ranged between 1 and 20 Mm⁻¹ at 637 nm.
A strong seasonal behavior was observed, with greater aerosol loadings during the dry season (Jul-Nov) than during the wet season (Dec-Jun). During the wet season in Manaus, aerosol scattering (450 nm) and absorption (637 nm) coefficients averaged 14 and 0.9 Mm⁻¹, respectively. Angstrom exponents for scattering were lower during the wet season (1.6) than in the dry season (1.9), consistent with the shift from biomass burning aerosols, predominant in the fine mode, to biogenic aerosols, predominant in the coarse mode. The single scattering albedo, calculated at 637 nm, did not show a significant seasonal variation, averaging 0.86. In Porto Velho, an impact from anthropogenic aerosol was observed even in the wet season. Black carbon reached concentrations as high as 20 µg/m³ in the dry season, indicating strong aerosol absorption. This work presents a general description of the aerosol optical properties in Amazonia during both the wet and dry seasons.
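The two derived optical quantities above follow directly from the measured scattering and absorption coefficients. A minimal sketch (function names and the two-wavelength form of the Angstrom exponent are illustrative, not taken from the study; both coefficients must be at the same wavelength for the single scattering albedo):

```python
import math


def single_scattering_albedo(scattering, absorption):
    """SSA = sigma_scattering / (sigma_scattering + sigma_absorption),
    both coefficients at the same wavelength (e.g. in Mm^-1)."""
    return scattering / (scattering + absorption)


def angstrom_exponent(sigma1, lam1, sigma2, lam2):
    """Angstrom exponent from scattering coefficients at two wavelengths:
    alpha = -ln(sigma1 / sigma2) / ln(lam1 / lam2).
    Larger alpha indicates smaller (fine-mode) particles."""
    return -math.log(sigma1 / sigma2) / math.log(lam1 / lam2)
```

With the seasonal shift reported above, the higher dry-season exponent (1.9 vs 1.6) is exactly the fine-mode signature of biomass burning aerosol.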
Abstract:
This work is structured as follows: in Section 1 we discuss the clinical problem of heart failure. In particular, we present the phenomenon known as ventricular mechanical dyssynchrony: its impact on cardiac function, the therapy for its treatment and the methods for its quantification. Specifically, we describe the conductance catheter and its use for the measurement of dyssynchrony. At the end of Section 1, we propose a new set of indexes to quantify dyssynchrony, which are studied and validated thereafter. In Section 2 we describe the studies carried out in this work: we report the experimental protocols and present and discuss the results obtained. Finally, we report the overall conclusions drawn from this work and envisage future work and possible clinical applications of our results. Ancillary studies carried out during this work, mainly to investigate several aspects of cardiac resynchronization therapy (CRT), are mentioned in the Appendix. -------- Ventricular mechanical dyssynchrony plays a regulating role already in normal physiology but is especially important in pathological conditions such as hypertrophy, ischemia, infarction, or heart failure (Chapters 1, 2). Several prospective randomized controlled trials supported the clinical efficacy and safety of cardiac resynchronization therapy (CRT) in patients with moderate or severe heart failure and ventricular dyssynchrony. CRT resynchronizes ventricular contraction by simultaneous pacing of both the left and right ventricle (biventricular pacing) (Chapter 1). The conductance catheter method has been used extensively to assess global systolic and diastolic ventricular function and, more recently, the ability of this instrument to pick up multiple segmental volume signals has been used to quantify mechanical ventricular dyssynchrony.
Specifically, novel indexes based on volume signals acquired with the conductance catheter were introduced to quantify dyssynchrony (Chapters 3, 4). The present work aimed to describe the characteristics of the conductance-volume signals, to investigate the performance of the indexes of ventricular dyssynchrony described in the literature, and to introduce and validate improved dyssynchrony indexes. Moreover, using the conductance catheter method and the new indexes, the clinical problem of ventricular pacing site optimization was addressed and the measurement protocol to adopt for hemodynamic tests on cardiac pacing was investigated. In accordance with the aims of the work, in addition to the classical time-domain parameters, a new set of indexes was extracted, based on a coherent averaging procedure and on spectral and cross-spectral analysis (Chapter 4). Our analyses were carried out on patients with indications for electrophysiologic study or device implantation (Chapter 5). For the first time, besides patients with heart failure, indexes of mechanical dyssynchrony based on the conductance catheter were extracted and studied in a population of patients with preserved ventricular function, providing information on the normal range of such indexes. By performing a frequency-domain analysis and by applying an optimized coherent averaging procedure (Chapter 6.a.), we were able to describe some characteristics of the conductance-volume signals (Chapter 6.b.). We unmasked the presence of considerable beat-to-beat variations in dyssynchrony, which seemed more frequent in patients with ventricular dysfunction and appeared to play a role in discriminating patients. These non-recurrent mechanical ventricular non-uniformities are probably the expression of the substantial beat-to-beat hemodynamic variations often associated with heart failure and due to cardiopulmonary interaction and conduction disturbances.
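Coherent averaging of this kind aligns each beat on a common trigger and averages sample-by-sample, so that periodic components are retained while non-periodic beat-to-beat variation is attenuated. A minimal sketch, assuming trigger detection and beat segmentation are done elsewhere (names are illustrative, not the thesis code):

```python
def coherent_average(signal, onsets, beat_len):
    """Coherent (ensemble) averaging: cut `signal` into segments of
    `beat_len` samples starting at each trigger index in `onsets`
    (e.g. R-wave times) and average them sample-by-sample.
    Beat-synchronous components add constructively; uncorrelated
    beat-to-beat variation is reduced roughly by 1/sqrt(N beats)."""
    beats = [signal[i:i + beat_len] for i in onsets if i + beat_len <= len(signal)]
    return [sum(samples) / len(beats) for samples in zip(*beats)]
```

The residual obtained by subtracting this average from each beat isolates the non-periodic component, which is the kind of quantity the non-periodicity indexes mentioned above are built from.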
We investigated how the coherent averaging procedure may affect or refine the conductance-based indexes; in addition, we proposed and tested a new set of indexes quantifying the non-periodic components of the volume signals. Using the new set of indexes, we studied the acute effects of CRT and of right ventricular pacing in patients with heart failure and in patients with preserved ventricular function. In the overall population we observed a correlation between the hemodynamic changes induced by pacing and the indexes of dyssynchrony, which may have practical implications for hemodynamic-guided device implantation. The optimal ventricular pacing site for patients with conventional indications for pacing remains controversial. The majority of them do not meet current clinical indications for CRT pacing. Thus, we carried out an analysis to compare the impact of several ventricular pacing sites on global and regional ventricular function and dyssynchrony (Chapter 6.c.). We observed that right ventricular pacing worsens cardiac function in patients with and without ventricular dysfunction unless the pacing site is optimized. CRT preserves left ventricular function in patients with normal ejection fraction and improves function in patients with poor ejection fraction despite no clinical indication for CRT. Moreover, the analysis of the results obtained using the new indexes of regional dyssynchrony suggests that the pacing site may influence overall global ventricular function depending on its relative effects on regional function and synchrony. Another clinical problem investigated in this work is the optimal right ventricular lead location for CRT (Chapter 6.d.).
As in the previous analysis, using novel parameters describing local synchrony and efficiency, we tested and confirmed the hypothesis that biventricular pacing with alternative right ventricular pacing sites produces acute improvement of ventricular systolic function and improves mechanical synchrony when compared to standard right ventricular pacing. Although no specific right ventricular location was shown to be superior during CRT, the right ventricular pacing site that produced the optimal acute hemodynamic response varied between patients. Acute hemodynamic effects of cardiac pacing are conventionally evaluated after stabilization periods, whose applied duration varies considerably across cardiac pacing studies. With an ad hoc protocol (Chapter 6.e.) and indexes of mechanical dyssynchrony derived from the conductance catheter, we demonstrated that the use of stabilization periods during the evaluation of cardiac pacing may mask early changes in systolic and diastolic intra-ventricular dyssynchrony. In fact, at the onset of ventricular pacing, the main dyssynchrony and ventricular performance changes occur within a 10 s time span, initiated by the changes in ventricular mechanical dyssynchrony induced by aberrant conduction and followed by a partial or even complete recovery. It had already been demonstrated in normal animals that ventricular mechanical dyssynchrony may act as a physiologic modulator of cardiac performance together with heart rate, contractile state, preload and afterload. The present observation, which shows the compensatory mechanism of mechanical dyssynchrony, suggests that ventricular dyssynchrony may be regarded as an intrinsic cardiac property, with baseline dyssynchrony at an increased level in heart failure patients.
To make available an independent system for cardiac output estimation, in order to confirm the results obtained with the conductance volume method, we developed and validated a novel technique that applies the Modelflow method (a method that derives an aortic flow waveform from arterial pressure by simulation of a non-linear three-element aortic input impedance model, Wesseling et al. 1993) to the left ventricular pressure signal instead of the arterial pressure used in the classical approach (Chapter 7). The results confirmed that, in patients without valve abnormalities undergoing conductance catheter evaluations, continuous monitoring of cardiac output using the intra-ventricular pressure signal is reliable. Thus, cardiac output can be monitored quantitatively and continuously with a simple and low-cost method. During this work, additional studies were carried out to investigate several areas of uncertainty of CRT. The results of these studies are briefly presented in the Appendix: the long-term survival of patients treated with CRT in clinical practice, the effects of CRT in patients with mild symptoms of heart failure and in very old patients, limited thoracotomy as a second-choice alternative to transvenous implant for CRT delivery, the evolution and prognostic significance of the diastolic filling pattern in CRT, the selection of candidates for CRT with echocardiographic criteria, and the prediction of response to the therapy.
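The Modelflow idea of simulating a three-element input-impedance model driven by a measured pressure can be illustrated as below. This is a deliberately simplified, linear, forward-Euler toy: the actual Modelflow method of Wesseling et al. uses nonlinear, pressure-dependent aortic parameters, and all names and parameter values here are illustrative:

```python
def flow_from_pressure(pressure, dt, Zc, Rp, C):
    """Toy three-element Windkessel driven by a pressure waveform:
    q = (P - Pw) / Zc               (flow through characteristic impedance Zc)
    dPw/dt = (q - Pw / Rp) / C      (Windkessel pressure Pw: compliance C,
                                     peripheral resistance Rp)
    Integrated with forward Euler; the valve clamp forbids backflow.
    Returns the modeled flow sample-by-sample."""
    pw = pressure[0]          # start the Windkessel at the initial pressure
    flow = []
    for p in pressure:
        q = max((p - pw) / Zc, 0.0)   # closed valve: no negative aortic flow
        pw += dt * (q - pw / Rp) / C  # Euler step for the Windkessel pressure
        flow.append(q)
    return flow
```

Integrating the resulting flow over a beat gives stroke volume, and hence a beat-by-beat cardiac output estimate of the kind used here as an independent check on the conductance volume method.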
Abstract:
Machines with moving parts give rise to vibrations and, consequently, noise. The set-up and condition of each machine yield a distinctive vibration signature. Therefore a change in the vibration signature, due to a change in the machine's state, can be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. A large number of signal processing techniques can be used to extract diagnostic information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques including synchronous time averaging, Hilbert transform-based demodulation, continuous wavelet transform, Wigner-Ville distribution and the spectral correlation density function. The detection and diagnostic capabilities of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. a fatigue crack at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
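Of the listed techniques, synchronous time averaging is the simplest to sketch: the vibration signal is averaged over whole shaft revolutions, so shaft-synchronous content (gear meshing and its fault-induced modulation) is retained while non-synchronous components and noise cancel. A minimal sketch, assuming the signal has already been resampled so each revolution is exactly `samples_per_rev` samples (names are illustrative):

```python
def synchronous_time_average(signal, samples_per_rev):
    """Synchronous time averaging (STA): average the signal over whole
    shaft revolutions. Assumes angular resampling has been done so each
    revolution spans exactly `samples_per_rev` samples. Components locked
    to the shaft survive; non-synchronous vibration and noise average out."""
    n_revs = len(signal) // samples_per_rev
    return [
        sum(signal[r * samples_per_rev + k] for r in range(n_revs)) / n_revs
        for k in range(samples_per_rev)
    ]
```

A localized gear fault (e.g. a root crack) then shows up as a distortion at a fixed angular position of the averaged revolution, which is why STA is a standard first step before demodulation-based diagnostics.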
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and from expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature according to both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control without leaving the preliminary-study perspective that an analysis of SMR indicators is required to keep. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-area issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities p̂_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated for any set of areas declared high-risk (where the null hypothesis is rejected) by averaging the corresponding p̂_i's. FDR-hat can then be used to provide a simple decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié (1991) model.
A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in the sets formed by all areas whose p̂_i falls below a threshold t. We show graphs of FDR-hat and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both with our model and with the classic Besag, York and Mollié model. All these summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but specificity is high. In these scenarios the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested.
In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and FDR-hat = 0.15 based decision rules gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except at very small values (much lower than 0.05); this results in a loss of specificity for an FDR-hat = 0.05 based decision rule. In such scenarios, FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason our model is interesting for its ability to perform both the estimation of the relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. Finally, a case study is presented to show how the method can be used in epidemiology.
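The FDR-hat based selection rule described in this abstract reduces to a simple greedy procedure on the posterior null probabilities: rank areas by p̂_i and keep adding areas while the running average (the estimated FDR of the rejected set) stays within the prefixed level. A minimal sketch (names are illustrative, not the thesis implementation):

```python
def fdr_select(posterior_null_probs, alpha):
    """FDR-hat based selection rule: rank areas by the posterior probability
    of the null hypothesis (absence of risk) and flag the largest set whose
    average posterior null probability -- the estimated FDR of the rejected
    set -- does not exceed `alpha`. Returns indices of selected areas."""
    order = sorted(range(len(posterior_null_probs)),
                   key=lambda i: posterior_null_probs[i])
    selected, running_sum = [], 0.0
    for i in order:
        running_sum += posterior_null_probs[i]
        if running_sum / (len(selected) + 1) > alpha:
            break  # adding this area would push the estimated FDR past alpha
        selected.append(i)
    return selected
```

For example, with posterior null probabilities (0.01, 0.02, 0.5, 0.9) and alpha = 0.05, only the first two areas are flagged: including the third would raise the set's average to about 0.18.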
Abstract:
This dissertation comprises applications of quantum chemistry and methodological developments in coupled-cluster theory on the following topics: 1.) The determination of geometry parameters in hydrogen-bonded complexes with picometer accuracy, by coupling NMR experiments with quantum-chemical calculations, is demonstrated for two examples. 2.) The discrepancies between theory and experiment arising in this context are discussed. For this purpose, the vibrational averaging of the dipolar coupling tensor was implemented in order to account for zero-point effects. 3.) A further aspect of this work concerns structure elucidation of discotic liquid crystals. The quantum-chemical model building and its interplay with experimental methods, above all solid-state NMR, is presented. 4.) Within this work, the parallelization of the quantum-chemistry package ACES II was begun. The basic strategy and first results are presented. 5.) To reduce the scaling of the CCSD(T) method through factorization, various decompositions of the energy denominator were tested. A resulting scheme for computing the CCSD(T) energy was implemented. 6.) The elucidation of the reaction forming HSOH from di-tert-butyl sulfoxide is presented. For this purpose, the thermodynamics of the reaction steps was computed with quantum-chemical methods.
Abstract:
This dissertation presents for the first time a survey of bird-pollinated (ornithophilous) Salvia species. Of the approximately 1000 species of this worldwide distributed genus, roughly 20% (186 spp.) are bird pollinated. Except for four species in the Old World (South Africa and Madagascar), ornithophilous species are restricted to the New World, where they represent about one third of the species. They occur mainly at higher altitudes (1500-3000 m) and usually grow as shrubs or perennial herbs (97%). The bilabiate to tubular flowers are often red (at least 49%), average 35 mm (7-130 mm) in length, and produce a large to medium volume of nectar with a rather low sugar concentration. Pollination by sunbirds and white-eyes is documented for one South African species, and pollination by hummingbirds for 16 species of the New World (USA, Mexico, Guatemala and Bolivia). Besides pollinator observations, the functionality of the staminal levers, the process of pollen transfer and the fit between flowers and birds were tested by inserting museum skins and metal rods into fresh flowers. The most surprising result is the finding of two different main pollen transfer mechanisms. In at least 54% of the species an active staminal lever mechanism enables pollen deposition on the bird's body. This is illustrated in detail for the South African S. lanceolata, in which birds were observed to release the lever mechanism and become dusted with pollen. In contrast, the lever mechanism is reduced in different ways in about 35% of the New World species. Pollen transfer by inactive ‘levers’ is demonstrated in detail for S. haenkei in Bolivia, in which four pollinating hummingbird species were observed. The tubular corolla forces the birds into a specific position, thereby causing pollen transfer from the exserted pollen sacs to the bird's body. With respect to the floral diversity and systematic affiliation of the species, parallel evolution of ornithophily and lever reduction is likely.
Considering that bird-pollinated species might have derived from bee-pollinated species and that the staminal levers have become secondarily inactive, it is concluded that the shift in pollinators induced phenotypic changes that even disabled such a sophisticated structure as the staminal lever mechanism.
Abstract:
Coupled-cluster theory provides one of the most successful concepts in electronic-structure theory. This work covers the parallelization of coupled-cluster energies, gradients, and second derivatives and its application to selected large-scale chemical problems, besides more practical aspects such as the publication and support of the quantum-chemistry package ACES II MAB and the design and development of a computational environment optimized for coupled-cluster calculations. The main objective of this thesis was to extend the range of applicability of coupled-cluster models to larger molecular systems and their properties, and thereby to bring large-scale coupled-cluster calculations into the day-to-day routine of computational chemistry. A straightforward strategy for the parallelization of CCSD and CCSD(T) energies, gradients, and second derivatives has been outlined and implemented for closed-shell and open-shell references. Starting from the highly efficient serial implementation of the ACES II MAB computer code, an adaptation for affordable workstation clusters has been obtained by parallelizing the most time-consuming steps of the algorithms. Benchmark calculations for systems with up to 1300 basis functions and the presented applications show that the resulting algorithm for energies, gradients and second derivatives at the CCSD and CCSD(T) levels of theory exhibits good scaling with the number of processors and substantially extends the range of applicability. Within the framework of the 'High accuracy Extrapolated Ab initio Thermochemistry' (HEAT) protocols, the effects of increased basis-set size and of higher excitations in the coupled-cluster expansion were investigated. The HEAT scheme was generalized to molecules containing second-row atoms in the case of vinyl chloride. This allowed the different experimentally reported values to be discriminated.
In the case of the benzene molecule it was shown that even for molecules of this size chemical accuracy can be achieved. Near-quantitative agreement with experiment (about 2 ppm deviation) for the prediction of fluorine-19 nuclear magnetic shielding constants can be achieved by employing the CCSD(T) model together with large basis sets at accurate equilibrium geometries if vibrational averaging and temperature corrections via second-order vibrational perturbation theory are considered. Applying a very similar level of theory for the calculation of the carbon-13 NMR chemical shifts of benzene resulted in quantitative agreement with experimental gas-phase data. The NMR chemical shift study for the bridgehead 1-adamantyl cation at the CCSD(T) level resolved earlier discrepancies of lower-level theoretical treatment. The equilibrium structure of diacetylene has been determined based on the combination of experimental rotational constants of thirteen isotopic species and zero-point vibrational corrections calculated at various quantum-chemical levels. These empirical equilibrium structures agree to within 0.1 pm irrespective of the theoretical level employed. High-level quantum-chemical calculations on the hyperfine structure parameters of the cyanopolyynes were found to be in excellent agreement with experiment. Finally, the theoretically most accurate determination of the molecular equilibrium structure of ferrocene to date is presented.
Abstract:
This thesis presents a comparative developmental study of inflorescences and focuses on the production of the terminal flower (TF). Morphometric attributes of inflorescence meristems (IM) were obtained throughout the ontogeny of inflorescence buds with the aim of describing possible spatial constraints that could explain the failure to develop the TF. The study covers the inflorescence ontogeny of 20 species from five families of the eudicots (Berberidaceae, Papaveraceae-Fumarioideae, Rosaceae, Campanulaceae and Apiaceae), in which 745 buds of open (i.e. without TF) and closed (i.e. with TF) inflorescences were observed under the scanning electron microscope. The study shows that TFs appear on IMs that are 2.75 (se = 0.38) times larger than the youngest lateral reproductive primordium. The shape of these IMs is characterized by a leaf arc (phyllotactic attribute) of 91.84° (se = 7.32) and a meristematic elevation of 27.93° (se = 5.42). IMs of open inflorescences show a significantly lower relative surface, averaging 1.09 (se = 0.26) times the size of the youngest primordium, which suggests their incapacity to produce TFs. The relatively lower size of open IMs is either a condition throughout the complete ontogeny (‘open I’) or the result of a drastic reduction of the meristematic surface after flower segregation (‘open II’). It is concluded that a suitable bulge configuration of the IM is a prerequisite for TF formation. Observations in the TF-facultative species Daucus carota support this view, as the absence of the TF in certain umbellets is correlated with a reduction of their IM dimensions. A review of the literature regarding the histological development of IMs and the genetic regulation of inflorescences suggests that in ‘open I’ inflorescences the histological composition and molecular activity at the tip of the IM could impede TF differentiation.
On the other side, in ‘open II’ inflorescences, the small final IM bulge could represent a spatial constraint that hinders the differentiation of the TF. The existence of two distinct kinds of ontogenies of open inflorescences suggests two ways in which the loss of the TF could have occurred in the course of evolution.rn
Resumo:
The dissertation contains five parts: an introduction, three major chapters, and a short conclusion. The first chapter starts from a survey and discussion of the literature on corporate law and financial development. The commonly used methods in these cross-sectional analyses are biased because legal origins are no longer valid instruments; hence model uncertainty becomes a salient problem. A Bayesian Model Averaging algorithm is applied to test the robustness of the empirical results in Djankov et al. (2008). The analysis finds that their constructed legal index is not robustly correlated with most of the various stock market outcome variables. The second chapter looks into the effects of minority shareholder protection, as part of the corporate governance regime, on entrepreneurs' ex ante incentives to undertake an IPO. Most of the current literature focuses on the benefits of minority shareholder protection for valuation, while overlooking its private costs in terms of the entrepreneur's control. As a result, when minority shareholder protection improves, the entrepreneur trades off the costs of monitoring against the benefits of cheaper sources of finance. The theoretical predictions are tested empirically using panel data and the GMM-sys estimator. The third chapter investigates corporate law and corporate governance reform in China. Chinese corporate law regards shareholder control as the means to the end of pursuing the interests of stakeholders, which is inefficient. The chapter combines recent developments in the theory of the firm, namely the team production theory and the property rights theory, to address this problem. Enlightened shareholder value, which emphasizes the long-term value of the firm, should be adopted as the objective of listed firms. In addition, a move from the mandatory division of power between the shareholder meeting and the board to a default regime is proposed.
Resumo:
To better understand the processes taking place in the atmosphere, it is important to be able to characterize the particles present there, which includes, among other things, determining their chemical composition. For the analysis of organic particles in particular, the Aerosol Ion Trap Mass Spectrometer (AIMS) was developed in an earlier doctoral project. Within this work, development efforts were undertaken to improve the characteristics of the prototype and to make it suitable for field deployment. The modifications concern mechanical and electrical components as well as the LabView control program. For example, the ion source was modified so that ions are no longer generated continuously but only during the time window in which they can actually be stored in the ion trap; this modification significantly improved the signal-to-noise ratio. After completion of the modifications, the individual instrument parameters were characterized in detail in extensive laboratory studies. In addition to the voltages used to focus the ions and to store them in the ion trap, the different modes of resonant excitation, by which the ions in the trap can be selectively driven into oscillation, were investigated very thoroughly. By deliberately combining these different modes of resonant excitation it is possible to perform MSn studies. After successful characterization, the MSn capability of the AIMS was demonstrated in further laboratory studies. For tryptophan (C11H12N2O2), a possible fragmentation pathway starting from m/z 130 was identified by means of MS4 studies, and detection limits were estimated for the individual stages of these studies.

The field suitability of the AIMS was demonstrated during the PARADE (PArticles and RAdicals: Diel observations of the impact of urban and biogenic Emissions) measurement campaign in August/September 2011 on the Kleiner Feldberg near Frankfurt am Main. For an averaging time of 60 minutes, the detection limits are 1.4 µg m⁻³ for organics, 0.5 µg m⁻³ for nitrate and 0.7 µg m⁻³ for sulfate, which is sufficient to measure atmospheric aerosol. This is a significant advance over the prototype, which was not yet field-ready owing to poor reproducibility and robustness. A comparison with the HR-ToF-AMS, a standard aerosol mass spectrometer, showed that both instruments measure comparable trends for the species nitrate, sulfate and organics.
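The quoted detection limits depend on the 60-minute averaging time in the usual way: assuming independent noise, the standard deviation of the mean shrinks with the square root of the number of averaged samples. A minimal sketch of a k-sigma detection-limit estimate under that assumption (illustrative only, not the AIMS calibration procedure):

```python
import numpy as np

def detection_limit(blank_signal, n_avg, k=3.0):
    """k-sigma detection limit of an averaged measurement.
    blank_signal: noise-only samples; n_avg: number of samples averaged.
    Assumes independent noise, so sigma of the mean scales as 1/sqrt(n_avg)."""
    sigma = np.std(blank_signal, ddof=1)
    return k * sigma / np.sqrt(n_avg)
```

Under this scaling, averaging 60 one-minute samples lowers the limit by a factor of sqrt(60) ≈ 7.7 relative to a single sample.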
Resumo:
In this work I report recent results in equilibrium statistical mechanics, in particular on spin glass models and monomer-dimer models. We start by giving the mathematical background and the general formalism for (disordered) spin models, with some of their applications to physical and mathematical problems. Next we move on to general aspects of the theory of spin glasses, in particular the Sherrington-Kirkpatrick model, which is of fundamental interest for this work. In Chapter 3 we introduce the multi-species Sherrington-Kirkpatrick model (MSK), prove the existence of the thermodynamic limit and Guerra's bound for the quenched pressure, and give a detailed analysis of the annealed and replica symmetric regimes. The result is a multidimensional generalization of Parisi's theory. Finally, we briefly illustrate the strategy of Panchenko's proof of the lower bound. In Chapter 4 we discuss the Aizenman-Contucci and Ghirlanda-Guerra identities for a wide class of spin glass models; as an example of application, we discuss the role of these identities in the proof of the lower bound. In Chapter 5 we introduce the basic mathematical formalism of monomer-dimer models, together with a Gaussian representation of the partition function that will be fundamental in the rest of the work. In Chapter 6 we introduce an interacting monomer-dimer model; its exact solution is derived and a detailed study of its analytical properties and related physical quantities is performed. In Chapter 7 we introduce quenched randomness in the monomer-dimer model and show that, under suitable conditions, the pressure is a self-averaging quantity. The main result is that, if we consider randomness only in the monomer activity, the model is exactly solvable.
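For reference, the Sherrington-Kirkpatrick model around which Chapters 3 and 4 revolve is defined, in its standard form, by the Hamiltonian

```latex
H_N(\sigma) = -\frac{1}{\sqrt{N}} \sum_{1 \le i < j \le N} J_{ij}\,\sigma_i \sigma_j,
\qquad \sigma_i \in \{-1,+1\}, \quad J_{ij} \sim \mathcal{N}(0,1) \ \text{i.i.d.},
```

and the quenched pressure, whose multi-species analogue is studied in Chapter 3, is $p_N(\beta) = \frac{1}{N}\,\mathbb{E}\log\sum_{\sigma} e^{-\beta H_N(\sigma)}$, where $\mathbb{E}$ denotes the average over the disorder $J$.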
Resumo:
The application of dexterous robotic hands outside research laboratories has been limited by the intrinsic complexity of these devices, which translates directly into economically unreasonable cost and low overall reliability. The research reported in this thesis shows how the problem of complexity in the design of robotic hands can be tackled by taking advantage of modern technologies (i.e. rapid prototyping), leading to innovative concepts for the design of the mechanical structure and the actuation and sensory systems. The solutions adopted drastically reduce prototyping and production costs and increase reliability by reducing the number of parts required and averaging out their individual reliability factors. To derive guidelines for the design process, the problem of robotic grasp and manipulation by a dual arm/hand system has been reviewed, highlighting the requirements that must be fulfilled at the hardware level to guarantee successful execution of the task. The contribution of this research on the manipulation planning side focuses on the redundancy resolution that arises in the execution of a task with a dexterous arm/hand system. In the literature, the coordination of arm and hand during object manipulation has been widely analyzed in theory but often demonstrated experimentally only in simplified robotic setups. Our aim is to fill this gap and evaluate the topic experimentally in a complex system such as an anthropomorphic arm-hand system.
Antarctic cloud spectral emission from ground-based measurements: a focus on far-infrared signatures
Resumo:
The present work belongs to the PRANA project, the first extensive field campaign of observation of atmospheric emission spectra covering the far-infrared (FIR) spectral region for more than two years. The principal deployed instrument is REFIR-PAD, a Fourier transform spectrometer that we use to study Antarctic cloud properties. A dataset covering the whole of 2013 has been analyzed. First, good-quality spectra are selected using radiance values in a few chosen spectral regions as thresholds. These spectra are then described synthetically by averaging radiances over selected intervals, converting the averages into brightness temperatures (BTs) and finally taking the differences between each pair of them. A supervised feature selection algorithm is implemented to select the features that are truly informative about the presence, phase and type of cloud. Training and test sets are collected by means of lidar quick-looks, and the supervised classification of the overall monthly datasets is performed using an SVM. On the basis of this classification, and with the help of lidar observations, 29 non-precipitating ice cloud case studies are selected. A single spectrum, or at most an average over two or three spectra, is processed with the retrieval algorithm RT-RET, exploiting some main IR window channels, in order to extract cloud properties. The retrieved effective radii and optical depths are analyzed, both to compare them with literature studies and to evaluate possible seasonal trends. Finally, the atmospheric profiles output by the retrieval are used as inputs for simulations, assuming two different crystal habits, with the aim of examining our ability to reproduce radiances in the FIR. Substantial mis-estimations are found for the FIR micro-windows: a high variability is observed in the spectral pattern of the deviations of the simulations from the measured spectra, and an effort has been made to link these deviations to cloud parameters.
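The radiance-to-brightness-temperature conversion applied to the averaged intervals is the usual inversion of the Planck function. A generic sketch in wavenumber units (the function names and the unit convention, W/(m² sr cm⁻¹), are assumptions for illustration, not taken from the REFIR-PAD processing chain):

```python
import numpy as np

C1 = 1.191042e-8   # 2*h*c^2 in W / (m^2 sr (cm^-1)^4)
C2 = 1.4387769     # h*c/k_B in K cm (second radiation constant)

def planck_radiance(wavenumber, temperature):
    """Planck spectral radiance in W/(m^2 sr cm^-1).
    wavenumber in cm^-1, temperature in K."""
    return C1 * wavenumber**3 / np.expm1(C2 * wavenumber / temperature)

def brightness_temperature(radiance, wavenumber):
    """Invert the Planck function: the temperature of a blackbody that
    would emit the given radiance at the given wavenumber."""
    return C2 * wavenumber / np.log1p(C1 * wavenumber**3 / radiance)
```

Averaging radiances over a spectral interval first and converting the mean to a single BT, as described above, is not the same as averaging per-channel BTs, because the Planck function is nonlinear in temperature.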