933 results for Statistical factor analysis
Abstract:
Objectives: Cardiac surgery (CS) determines systemic and pulmonary changes that require special care. Awareness of the importance of respiratory muscle dysfunction in the development of respiratory failure motivated several studies assessing muscle strength in healthy subjects, using maximal inspiratory pressure (MIP) and maximal expiratory pressure (MEP) values. This study examined the concordance between the values predicted by the equations proposed by Black & Hyatt and by Neder and the values measured in cardiac surgery patients. Methods: Data were collected from preoperative evaluation forms. The Lin coefficient and Bland-Altman plots were used for the statistical concordance analysis. Multiple linear regression and analysis of variance (ANOVA) were used to produce new formulas. Results: There were weak correlations of 0.22 and 0.19 in the MIP analysis and of 0.10 and 0.32 in the MEP analysis, for the formulas of Black & Hyatt and Neder, respectively. The ANOVA for both MIP and MEP was significant (P<0.0001), and the following formulas were developed: MIP = 88.82 - (0.51 x age) + (19.86 x gender), and MEP = 91.36 - (0.30 x age) + (29.92 x gender). Conclusions: The Black & Hyatt and Neder formulas predict highly discrepant values of MIP and MEP and should not be used to identify muscle weakness in CS patients.
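The regression equations reported in the Results above can be applied directly. A minimal sketch follows; note that the gender coding (1 = male, 0 = female) is an assumption made for illustration, since the abstract does not state the coding used.

```python
# Sketch of the prediction equations reported in the abstract.
# Gender coding (1 = male, 0 = female) is an ASSUMPTION, not stated in the source.

def predicted_mip(age, gender):
    """Predicted maximal inspiratory pressure (cmH2O)."""
    return 88.82 - 0.51 * age + 19.86 * gender

def predicted_mep(age, gender):
    """Predicted maximal expiratory pressure (cmH2O)."""
    return 91.36 - 0.30 * age + 29.92 * gender

# Example: a 60-year-old male patient
print(round(predicted_mip(60, 1), 2))  # 78.08
print(round(predicted_mep(60, 1), 2))  # 103.28
```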
Abstract:
Produced water in oil fields is one of the main sources of wastewater generated by the industry. It contains several organic compounds, such as benzene, toluene, ethylbenzene and xylene (BTEX), whose disposal is regulated by law. The aim of this study is to investigate a treatment of produced water integrating two processes: induced air flotation (IAF) and photo-Fenton. The experiments were conducted in a flotation column and in an annular lamp reactor for the flotation and photodegradation steps, respectively. The first-order kinetic constant of IAF for the wastewater studied was determined to be 0.1765 min⁻¹ for the surfactant EO 7. Degradation efficiencies of the organic load were assessed using factorial design. Statistical analysis of the data shows that the H2O2 concentration is a determining factor in process efficiency. Degradations above 90% were reached in all cases after 90 min of reaction, attaining 100% mineralization at the optimized concentrations of Fenton reagents. Process integration was adequate, with 100% organic load removal in 20 min. The results of integrating IAF with photo-Fenton met the effluent limits established by Brazilian legislation for disposal.
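The reported IAF rate constant implies a removal curve, a minimal sketch assuming the usual first-order decay form C/C0 = exp(-k*t) (the abstract states only the constant, not the functional form):

```python
import math

K_IAF = 0.1765  # reported first-order rate constant for IAF with surfactant EO 7 (min^-1)

def removal_fraction(t_min, k=K_IAF):
    # Assumed first-order kinetics: C/C0 = exp(-k*t), removal = 1 - C/C0
    return 1.0 - math.exp(-k * t_min)

# Flotation alone predicts roughly 97% removal after 20 min under this assumption
print(round(removal_fraction(20.0), 3))
```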
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power and cost of integrated circuits while increasing their computational performance. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage becomes more uncertain as transistor sizes scale down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of the threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness; combined with high temperatures and frequent thermal cycles, this speeds up wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic- and system-level techniques able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the thermal behavior of the system and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; 
iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. These new analysis tools must be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase device reliability and lifetime by acting on some tunable parameter, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first design stages of embedded NoC design. Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor, and the results confirmed the capability of self-timed logic to increase manufacturability and reliability. 
Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, together with their good response to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
Abstract:
Highly charged, semiflexible cationic and anionic polyelectrolytes were reacted with low-molecular-weight surfactants to form polyelectrolyte-surfactant complexes (PETK) and characterized in organic solvents by scattering methods and atomic force microscopy. The synthesized PETK were subsequently used, with the aim of structure-controlled complex formation, to form interpolyelectrolyte complexes (IPEK) in organic solvents, and their complexation behavior was compared with that of aqueous systems. The reaction of cylindrical polymer brushes bearing poly(styrene sulfonate) or poly(2-vinylpyridinium) side chains with oppositely charged surfactants proceeded quantitatively, despite a grafting density of one. Scattering methods showed that the PETK formed exist in solution as molecular cylinders. The synthesis of pUC19 DNA-surfactant complexes (DNA-TK) that dissolve well in alcohols succeeded only in strongly basic solution. During the characterization of the DNA-TK by scattering methods, the radius of gyration showed a strong dependence on the DNA-/salt+ ratio. The formation of IPEK from highly charged polyelectrolyte brushes and PETK brushes was carried out for various example systems in water and DMF and followed by scattering methods. All systems showed complexation behavior analogous to IPEK formation from linear polyelectrolytes. In the complexation of poly(styrene sulfonate) brush-surfactant complexes with commercial polyamidoamine G5 dendrimers (PAMAM) or poly(ethylene oxide)-modified poly(ethylene imine) brushes, however, cylindrical aggregates matching the dimensions of the poly(styrene sulfonate) brush-surfactant complexes were found over the entire weight-fraction range by scattering methods and AFM. Statistical height analysis of the AFM images revealed a linear relationship between complex height and the weight fraction of PAMAM or 
PEI-PEO, indicating an increase in the molar mass of the complexes through growth along the cylinder diameter. The formation of aggregates containing more than one polyanion was not observed.
Abstract:
This work is about the role that the environment plays in the production of evolutionarily significant variation. It starts with a historical introduction on the concept of variation and the role of the environment in its production. I then show how a lack of attention to these topics may lead to serious mistakes in data interpretation: a statistical re-analysis of published data on the effects of malnutrition on dental eruption shows that what had been interpreted as an increase in the mean value is actually linked to an increase in variability. In Chapter 3 I present development as a link between variability and environmental influence, reviewing the possible mechanisms by which development influences evolutionary dynamics. Chapter 4 is the core chapter of the thesis: I investigated the role of the environment in the development of dental morphology, using dental hypoplasia as a marker of stress to characterize two groups. Comparing the morphology of upper molars in the two groups yielded three major results: (i) there is a significant effect of environmental stressors on the overall morphology of upper molars; (ii) the developmental response increases the morphological variability of the stressed population; (iii) the increase in variability is directional: stressed individuals have increased cusp dimensions and number. I also hypothesize the molecular mechanisms that could be responsible for the observed effects. In Chapter 5, I present future perspectives for this research. The direction of the dental developmental response coincides with the trend in mammalian dental evolution. Since malnutrition triggers the developmental response, and this particular kind of stressor must have been very common in the evolutionary history of our class, I propose that environmental stress actively influenced mammalian evolution. 
Moreover, I discuss the possibility of reconsidering the role of natural selection in the evolution of dental morphology.
Abstract:
Training can change the functional and structural organization of the brain, and animal models demonstrate that the hippocampal formation is particularly susceptible to training-related neuroplasticity. In humans, however, direct evidence for functional plasticity of the adult hippocampus induced by training is still missing. Here, we used musicians' brains as a model to test the plastic capabilities of the adult human hippocampus. Using functional magnetic resonance imaging optimized for the investigation of auditory processing, we examined brain responses induced by temporal novelty in otherwise isochronous sound patterns in musicians and musical laypersons, since the hippocampus has previously been suggested to be crucially involved in various forms of novelty detection. In the first, cross-sectional experiment, we identified enhanced neural responses to temporal novelty in the anterior left hippocampus of professional musicians, pointing to expertise-related differences in hippocampal processing. In the second experiment, we evaluated neural responses to acoustic temporal novelty in a longitudinal approach to disentangle training-related changes from predispositional factors. For this purpose, we examined an independent sample of music academy students before and after two semesters of intensive aural skills training. After this training period, hippocampal responses to temporal novelty in sounds were enhanced in the music students, and statistical interaction analysis of brain activity changes over time suggests training rather than predisposition effects. Thus, our results provide direct evidence for functional changes of the adult hippocampus in humans related to musical training.
Abstract:
Background: Pathology studies on fatal cases of very late stent thrombosis have described incomplete neointimal coverage as a common substrate, in some cases appearing at side-branch struts. Intravascular ultrasound studies have described the association between incomplete stent apposition (ISA) and stent thrombosis, but the mechanism explaining this association remains unclear. Whether the neointimal coverage of nonapposed side-branch and ISA struts is delayed with respect to well-apposed struts is unknown. Methods and Results: Optical coherence tomography studies from 178 stents implanted in 99 patients in 2 randomized trials were analyzed at 9 to 13 months of follow-up. The sample included 38 sirolimus-eluting, 33 biolimus-eluting, 57 everolimus-eluting, and 50 zotarolimus-eluting stents. Optical coherence tomography coverage of nonapposed side-branch and ISA struts was compared with well-apposed struts of the same stent by statistical pooled analysis with a random-effects model. A total of 34,120 struts were analyzed. The risk ratio of delayed coverage was 9.00 (95% confidence interval, 6.58 to 12.32) for nonapposed side-branch versus well-apposed struts, 9.10 (95% confidence interval, 7.34 to 11.28) for ISA versus well-apposed struts, and 1.73 (95% confidence interval, 1.34 to 2.23) for ISA versus nonapposed side-branch struts. Heterogeneity of the effect was observed in the comparison of ISA versus well-apposed struts (H=1.27; I²=38.40) but not in the other comparisons. Conclusions: Coverage of ISA and nonapposed side-branch struts is delayed with respect to well-apposed struts in drug-eluting stents, as assessed by optical coherence tomography.
Abstract:
OBJECTIVES: To assess the microbiological outcome of local administration of minocycline hydrochloride microspheres 1 mg (Arestin) in cases of peri-implantitis, with a follow-up period of 12 months. MATERIAL AND METHODS: After debridement and local administration of chlorhexidine gel, peri-implantitis cases were treated with local administration of minocycline microspheres (Arestin). The DNA-DNA checkerboard hybridization method was used to detect bacterial presence during the first 360 days of therapy. RESULTS: At Day 10, lower bacterial loads were found for 6/40 individual bacteria, including Actinomyces gerencseriae (P<0.1), Actinomyces israelii (P<0.01), Actinomyces naeslundii type 1 (P<0.01) and type 2 (P<0.03), Actinomyces odontolyticus (P<0.01), Porphyromonas gingivalis (P<0.01) and Treponema socranskii (P<0.01). At Day 360 only the levels of Actinobacillus actinomycetemcomitans were lower than at baseline (mean difference: 1x10(5); SE difference: 0.34x10(5); 95% CI: 0.2x10(5) to 1.2x10(5); P<0.03). Six implants were lost between Days 90 and 270. The microbiota was successfully controlled in 48% of subjects, with definitive failures (implant loss or a major increase in bacterial levels) in 32%. CONCLUSIONS: At the study endpoint, the impact of Arestin on A. actinomycetemcomitans was greater than its impact on other pathogens. Up to Day 180, reductions in the levels of Tannerella forsythia, P. gingivalis, and Treponema denticola were also found. Failures in treatment could not be associated with the presence of specific pathogens or with the total bacterial load at baseline. Statistical power analysis suggested that a case-control study would require approximately 200 subjects.
Abstract:
BACKGROUND: Histologic experimental studies have reported incomplete neointimal healing in overlapping compared with nonoverlapping segments of drug-eluting stents (DESs), but these observations had not hitherto been confirmed in human coronary arteries. On the contrary, angiographic and optical coherence tomography studies suggest that DES overlap elicits an exaggerated rather than an incomplete neointimal reaction. METHODS: Optical coherence tomography studies from 2 randomized trials including sirolimus-eluting, biolimus-eluting, everolimus-eluting, and zotarolimus-eluting stents were analyzed at 9- to 13-month follow-up. Coverage in overlapping segments was compared with the corresponding nonoverlapping segments of the same stents, using statistical pooled analysis. RESULTS: Forty-two overlaps were found in 31 patients: 11 in sirolimus-eluting stents, 3 in biolimus-eluting stents, 17 in everolimus-eluting stents, and 11 in zotarolimus-eluting stents. The risk ratio of incomplete coverage was 2.35 (95% CI 1.86-2.98) in overlapping versus nonoverlapping segments. The thickness of coverage in overlaps was only 85% (95% CI 81%-90%) of the thickness in nonoverlaps. Significant heterogeneity of the effect was observed, especially pronounced in the comparison of thickness of coverage (I² = 90.31). CONCLUSIONS: The effect of overlapping DES on neointimal inhibition is markedly heterogeneous: on average, DES overlap is associated with more incomplete and thinner coverage, but in some cases the overlap elicits an exaggerated neointimal reaction, thicker than in the corresponding nonoverlapping segments. These results might help to explain why overlapping DES is associated with worse clinical outcomes, both in terms of thrombotic phenomena and in terms of restenosis and revascularization.
Abstract:
The Vernagtferner region has a long tradition of glaciological research performed by groups from Munich. It started in 1889, when Prof. Sebastian Finsterwalder of the Technical University in Munich produced the first map of a complete glacier based on terrestrial photogrammetry. Since then, numerous maps of the glacier have been made, describing the change in surface elevation over more than a century. These maps form the basis of the geodetic method of glacier mass balance determination, which provides volume changes as averages over the period between two surveys, typically 10 years. Since the start of the glaciological method on Vernagtferner in 1964, annual as well as winter and summer mass balance data have been available continuously. Only since 1973, however, has the Vernagtbach station, built approximately 1 km below the then glacier margin, provided the means to record a larger number of hydrological and meteorological parameters with a temporal resolution of typically 1 hour.
Abstract:
Laser ablation inductively coupled plasma mass spectrometry microanalysis of fossil and live Globigerinoides ruber from the eastern Indian Ocean reveals large variations in Mg/Ca composition both within and between individual tests from core-top or plankton-pump samples. Although the extent of intertest and intratest compositional variability exceeds that attributable to calcification temperature, the pooled mean Mg/Ca molar values obtained for core-top samples between the equator and >30°S form a strong exponential correlation with mean annual sea surface temperature: Mg/Ca (mmol/mol) = 0.52 exp(0.076 × SST °C), r² = 0.99. The intertest Mg/Ca variability within these deep-sea core-top samples is a source of significant uncertainty in Mg/Ca seawater temperature estimates and is notable for being site specific. Our results indicate that widely assumed uncertainties in Mg/Ca thermometry may be underestimated. We show that statistical power analysis can be used to evaluate the number of tests needed to achieve a target level of uncertainty on a sample-by-sample basis. A varying bias also arises from the presence and varying mix of two morphotypes (G. ruber ruber and G. ruber pyramidalis), which have different mean Mg/Ca values. Estimated calcification temperature differences between these morphotypes range up to 5°C and are notable for correlating with the seasonal range in seawater temperature at different sites.
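The calibration quoted above can be inverted by taking logarithms to estimate SST from a measured Mg/Ca value. A sketch, using only the constants given in the abstract:

```python
import math

# Calibration constants quoted in the abstract: Mg/Ca = 0.52 * exp(0.076 * SST)
A, B = 0.52, 0.076

def mgca_from_sst(sst_c):
    """Predicted Mg/Ca (mmol/mol) for a mean annual SST in deg C."""
    return A * math.exp(B * sst_c)

def sst_from_mgca(mgca):
    """Invert the calibration: SST = ln(Mg/Ca / A) / B."""
    return math.log(mgca / A) / B

print(round(mgca_from_sst(25.0), 2))  # 3.48 mmol/mol at 25 deg C
```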
Abstract:
A new Relativistic Screened Hydrogenic Model has been developed to calculate the atomic data needed to compute the optical and thermodynamic properties of high-energy-density plasmas. The model is based on a new set of universal screening constants, including nlj-splitting, obtained by fitting to a large database of ionization potentials and excitation energies. This database was built with energies compiled from the National Institute of Standards and Technology (NIST) database of experimental atomic energy levels and energies calculated with the Flexible Atomic Code (FAC). The screening constants have been computed up to the 5p3/2 subshell using a genetic algorithm with an objective function designed to minimize both the relative error and the maximum error. To select the best set of screening constants, additional physical criteria have been applied, based on reproducing the filling order of the shells and on obtaining the best ground-state configuration. A statistical error analysis performed to test the model indicated that approximately 88% of the data lie within a ±10% error interval. We validate the model by comparing its results with ionization energies, transition energies, and wave functions computed using sophisticated self-consistent codes, and with experimental data.
Abstract:
The main subject under research in this Thesis is the study of the dynamic behaviour of a structure using models that describe the energy distribution between the components of the structure, and the applicability of these models to incipient damage detection.
Dynamic tests are a way to extract information about the properties of a structure. If we have a model of the structure, it can be updated in order to reproduce the same response as in experimental tests, within a certain degree of accuracy. After damage occurs, the response will change to some extent; updating the model to the new test conditions can help to detect changes in the structural model, leading to the conclusion that damage has occurred. In this way, incipient damage detection is possible if we are able to detect small variations in the model parameters. The high-frequency regime is highly relevant for incipient damage detection, because the response is very sensitive to small structural geometric details: the characteristic length associated with the response is proportional to the propagation speed of acoustic waves inside the solid, which for a given structure is fixed, and inversely proportional to the excitation frequency. At the same time, this fact makes the application of the Finite Element Method impractical due to its high computational cost.
A widely used model in engineering for the high-frequency response of structures is SEA (Statistical Energy Analysis). SEA applies an energy balance to each structural component, relating its vibrational energy to the power it dissipates and the power transmitted between components, whose sum must equal the power injected into each component. This relationship is linear and is characterized by the loss factors. The magnitudes involved in the response are averaged over geometry, frequency and time.
Updating an SEA model to test data therefore means calculating the loss factors that reproduce the experimental response. Done directly, this is an ill-conditioned inverse problem. In this Thesis it is proposed to update the SEA model not in terms of the loss factors themselves, but in terms of parameters with physical meaning in the high-frequency response, namely the internal dissipation factors of each component, their modal densities and the characteristic stiffnesses of the coupling elements; the loss factors are then calculated as functions of these parameters. This formulation is developed originally in this Thesis and rests mainly on the high-modal-density assumption, i.e. that a large number of modes of each structural component contributes to the response.
General SEA theory establishes the validity of the model under very restrictive assumptions about the external excitations, which should behave as local white noise. This kind of load is difficult to reproduce in test conditions. In this Thesis we show with practical cases that this restriction can be relaxed; in particular, the results are good enough when the structure is subjected to a harmonic step load. Under these approximations, a stepwise optimization algorithm is developed that updates an SEA model to a transient test when the external loads are harmonic step functions. The algorithm updates the model not only in a single frequency band but in several frequency bands simultaneously, in order to pose a better-conditioned problem.
Finally, a damage index is defined that measures the change in the loss-factor matrix when structural damage occurs at a specific location in a component. The response of a structure built from beams is simulated numerically, with damage introduced in a section of one of them; as this is a high-frequency calculation, the simulation uses the Spectral Element Method, for which it was necessary to develop within the Thesis a spectral beam element damaged at a given section. The results obtained make it possible to locate, with a certain degree of confidence, the structural component in which the damage has occurred and the section where it lies.
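The SEA power balance described above is linear in the subsystem energies and characterized by the loss factors. A minimal two-subsystem sketch of that balance follows; the frequency, loss factors and injected powers are all illustrative assumptions, not values from the thesis:

```python
import math

# Minimal two-subsystem SEA balance (assumed standard SEA form):
#   P_i = omega * (eta_i * E_i + eta_ij * E_i - eta_ji * E_j)
omega = 2.0 * math.pi * 1000.0  # band centre frequency, rad/s (assumed)
eta1, eta2 = 0.01, 0.02         # internal damping loss factors (assumed)
eta12, eta21 = 0.005, 0.003     # coupling loss factors (assumed)
P1, P2 = 1.0, 0.0               # injected power in W: only subsystem 1 is driven

# 2x2 linear system omega * L * [E1, E2]^T = [P1, P2]^T, solved by Cramer's rule
a = omega * (eta1 + eta12); b = -omega * eta21
c = -omega * eta12;         d = omega * (eta2 + eta21)
det = a * d - b * c
E1 = (P1 * d - b * P2) / det
E2 = (a * P2 - c * P1) / det
assert E1 > E2 > 0.0  # the driven subsystem holds the larger vibrational energy
```

Model updating, in the sense described above, would then amount to adjusting the physical parameters behind the loss factors until the solved energies match the measured ones.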
Abstract:
Diluted nitride self-assembled In(Ga)AsN quantum dots (QDs) grown on GaAs substrates are potential candidates to emit in the windows of maximum transmittance for optical fibres (1.3-1.55 μm). In this paper, we analyse the effect of nitrogen addition on the indium desorption occurring during the capping process of InxGa1−xAs QDs (x = 1 and 0.7). The samples were grown by molecular beam epitaxy and studied by transmission electron microscopy (TEM) and photoluminescence (PL) techniques. The composition distribution inside the dots was determined by statistical moiré analysis and measured by energy-dispersive X-ray spectroscopy. First, the addition of nitrogen to In(Ga)As QDs gave rise to a strong redshift in the emission peak, together with a large loss of intensity and monochromaticity. Moreover, these samples showed changes in QD morphology as well as an increase in the density of defects. The statistical compositional analysis displayed a normal distribution in InAs QDs with an average In content of 0.7. Nevertheless, the addition of Ga and/or N leads to a bimodal distribution of the indium content, with two separate QD populations. We suggest that nitrogen incorporation enhances indium fixation inside the QDs, where the indium/gallium ratio plays an important role in this process. The strong redshift observed in the PL should be explained not only by the N incorporation but also by the higher In content inside the QDs.
Abstract:
This work focuses on the analysis of a structural element of MetOP-A satellite. Given the special interest in the influence of equipment installed on structural elements, the paper studies one of the lateral faces on which the Advanced SCATterometer (ASCAT) is installed. The work is oriented towards the modal characterization of the specimen, describing the experimental set-up and the application of results to the development of a Finite Element Method (FEM) model to study the vibro-acoustic response. For the high frequency range, characterized by a high modal density, a Statistical Energy Analysis (SEA) model is considered, and the FEM model is used when modal density is low. The methodology for developing the SEA model and a compound FEM and Boundary Element Method (BEM) model to provide continuity in the medium frequency range is presented, as well as the necessary updating, characterization and coupling between models required to achieve numerical models that match experimental results.