994 results for Simulator FTE DATCOM damage tolerance inspections algorithms
Abstract:
In recent decades, the rapid development of optical spectroscopy for tissue diagnosis has been indicative of its high clinical value. The goal of this research is to prove the feasibility of using diffuse reflectance spectroscopy and fluorescence spectroscopy to assess myocardial infarction (MI) in vivo. The proposed optical technique was designed to be an intra-operative guidance tool that can provide surgeons and researchers with useful information about the condition of an infarct. To gain insight into the pathophysiological characteristics of an infarct, two novel spectral analysis algorithms were developed to interpret diffuse reflectance spectra. The algorithms were based on the unique absorption properties of hemoglobin and retrieve regional hemoglobin oxygen saturation and concentration in tissue from diffuse reflectance spectra. The algorithms were evaluated and validated using both simulated and experimental data. Finally, the hypothesis of the study was tested using a rabbit model of MI, in which the infarct was induced by ligation of a major coronary artery of the left ventricle. Three to four weeks after the MI was induced, the extent of myocardial tissue injury and the evolution of the wound healing process were investigated using the proposed spectroscopic methodology as well as histology. The correlations between spectral alterations and histopathological features of the MI were analyzed statistically. The results of this PhD study demonstrate the applicability of the proposed optical methodology for assessing myocardial tissue damage induced by MI in vivo. The spectral analysis suggests that connective tissue proliferation induced by MI significantly alters the characteristics of diffuse reflectance and fluorescence spectra, and that the magnitude of these alterations can be quantitatively related to the severity and extent of the proliferation.
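As an illustration of the general idea of retrieving hemoglobin oxygen saturation from a diffuse reflectance spectrum, the following minimal sketch fits a simplified modified Beer-Lambert model by linear least squares. The wavelengths, extinction coefficients, and synthetic spectrum are placeholder values for illustration only; this is not the thesis' two algorithms.

```python
# Illustrative sketch: estimating hemoglobin oxygen saturation from a
# diffuse reflectance spectrum with a simplified Beer-Lambert model.
# Extinction coefficients and the spectrum are hypothetical examples.
import numpy as np

# Wavelengths (nm) and molar extinction coefficients (cm^-1 / M) for
# oxy- and deoxy-hemoglobin -- placeholder values, not tabulated data.
wavelengths = np.array([660.0, 700.0, 730.0, 760.0, 800.0, 850.0])
eps_hbo2 = np.array([320.0, 290.0, 390.0, 586.0, 816.0, 1058.0])
eps_hb = np.array([3227.0, 1794.0, 1102.0, 1548.0, 762.0, 691.0])

def estimate_saturation(reflectance, path_length_cm=1.0):
    """Fit [HbO2] and [Hb] by linear least squares, then return SO2."""
    absorbance = -np.log(reflectance)          # scattering ignored here
    design = np.column_stack([eps_hbo2, eps_hb]) * path_length_cm
    (c_hbo2, c_hb), *_ = np.linalg.lstsq(design, absorbance, rcond=None)
    total = c_hbo2 + c_hb
    return c_hbo2 / total, total               # oxygen saturation, total Hb

# Self-check: synthetic spectrum generated for 70% saturation.
true_c = np.array([0.7e-4, 0.3e-4])            # mol/L
r = np.exp(-(np.column_stack([eps_hbo2, eps_hb]) @ true_c))
so2, c_total = estimate_saturation(r)
print(f"estimated SO2 = {so2:.2f}, total Hb = {c_total:.1e} M")
```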
Abstract:
Freeze events significantly influence landscape structure and community composition along subtropical coastlines. This is particularly true in south Florida, where such disturbances have historically contributed to patch diversity within the mangrove forest, and have played a part in limiting its inland transgression. With projected increases in mean global temperatures, such instances are likely to become much less frequent in the region, contributing to a reduction in heterogeneity within the mangrove forest itself. To understand the process more clearly, we explored the dynamics of a Dwarf mangrove forest following two chilling events that produced freeze-like symptoms, i.e., leaf browning, desiccation, and mortality, and interpreted the resulting changes within the context of current winter temperatures and projected future scenarios. Structural effects from a 1996 chilling event were dramatic, with mortality and tissue damage concentrated among individuals comprising the Dwarf forest's low canopy. This disturbance promoted understory plant development and provided an opportunity for Laguncularia racemosa to share dominance with Rhizophora mangle. Mortality due to the less severe 2001 event was greatest in the understory, probably because recovery of the protective canopy following the earlier freeze was still incomplete. Stand dynamics were static over the same period in nearby unimpacted sites. The probability of reaching temperatures as low as those recorded at a nearby meteorological station (≤3 °C) under several warming scenarios was simulated by applying 1° incremental temperature increases to a model developed from a 42-year temperature record. According to the model, the frequency of similar chilling events decreased from once every 1.9 years at present to once every 3.4 and 32.5 years with 1 and 4 °C warming, respectively. The large decrease in the frequency of these events would eliminate an important mechanism that maintains Dwarf forest structure, and promotes compositional diversity.
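The return-interval calculation described above can be pictured with a short sketch: apply incremental warming offsets to a record of daily winter minima and count how often the coldest night of each year reaches the chilling threshold. The record below is synthetic, not the 42-year station data, so the printed intervals are illustrative only.

```python
# Illustrative sketch of the chilling-event return-interval idea:
# shift a daily minimum-temperature record by a warming offset and
# count winters whose coldest night is <= 3 degrees C.
import numpy as np

rng = np.random.default_rng(0)
years = 42
# Synthetic daily winter minima (degrees C) for each year -- not station data.
record = rng.normal(loc=12.0, scale=5.0, size=(years, 90))

def chilling_return_interval(daily_minima, warming=0.0, threshold=3.0):
    """Mean number of years between winters that reach the threshold."""
    coldest_per_year = (daily_minima + warming).min(axis=1)
    n_events = np.count_nonzero(coldest_per_year <= threshold)
    return np.inf if n_events == 0 else daily_minima.shape[0] / n_events

for warming in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"+{warming:.0f} C warming: one chilling event every "
          f"{chilling_return_interval(record, warming):.1f} years")
```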
Abstract:
Executing a cloud or aerosol physical properties retrieval algorithm on controlled synthetic data is an important step in retrieval algorithm development. Synthetic data can help answer questions about the sensitivity and performance of the algorithm or aid in determining how an existing retrieval algorithm may perform with a planned sensor. Synthetic data can also help in solving issues that may have surfaced in the retrieval results. Synthetic data become very important when other validation methods, such as field campaigns, are of limited scope. These tend to be of relatively short duration and often are costly. Ground stations have limited spatial coverage, while synthetic data can cover large spatial and temporal scales and a wide variety of conditions at low cost. In this work I develop an advanced cloud and aerosol retrieval simulator for the MODIS instrument, also known as the Multi-sensor Cloud and Aerosol Retrieval Simulator (MCARS). In close collaboration with the modeling community, I have seamlessly combined the GEOS-5 global climate model, the DISORT radiative transfer code widely used by the remote sensing community, and observations from the MODIS instrument to create the simulator. With the MCARS simulator it was then possible to solve a long-standing issue with the MODIS aerosol optical depth retrievals, which had a low bias for smoke aerosols: the MODIS aerosol retrieval did not account for the effects of humidity on smoke aerosols. The MCARS simulator also revealed an issue that had not been recognized previously, namely that the value of the fine mode fraction could create a linear dependence between retrieved aerosol optical depth and land surface reflectance. MCARS provided the ability to examine aerosol retrievals against “ground truth” for hundreds of thousands of simultaneous samples over an area covered by only three AERONET ground stations. Findings from MCARS are already being used to improve the performance of operational MODIS aerosol properties retrieval algorithms. The modeling community will use the MCARS data to create new parameterizations for aerosol properties as a function of the properties of the atmospheric column and gain the ability to correct any assimilated retrieval data that may display similar dependencies in comparison with ground measurements.
Abstract:
From their early days, Electrical Submersible Pumping (ESP) units have excelled at lifting much greater liquid rates than most other types of artificial lift, and they have performed well in wells with high BSW, both onshore and offshore. For any artificial lift system, equipment lifetime and the frequency of interventions are of paramount importance, given the high cost of rigs and equipment and the losses caused by halted production. Extending the life of the system requires operating efficiently and safely within the limits of the equipment, which in turn calls for periodic adjustment, monitoring and control. As the drive to minimize direct human intervention grows, these adjustments should increasingly be made through automation. An automated system provides not only a longer equipment life but also greater control over the production of the well. The controller is the brain of most automation systems: it embeds the logic and strategies of the work process so that the process runs efficiently. Control is therefore central to any automation system, and it is expected that, as the ESP system becomes better understood and research advances, many controllers will be proposed for this artificial lift method. Once a controller is proposed, it must be tested and validated before it can be considered efficient and functional. A producing well or a test well could be used for such tests, but with the serious risk that flaws in the controller design would damage expensive well equipment. Given this reality, the main objective of the present work is to provide an environment for evaluating fuzzy controllers for wells equipped with ESP systems, using a computer simulator representing a virtual oil well, fuzzy controller design software, and a PLC. The proposed environment reduces the time required for testing and tuning a controller and allows a rapid diagnosis of its efficiency and effectiveness. The control algorithms are implemented both in a high-level language, through the controller design software, and in a PLC programming language, the Ladder Diagram language.
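To give a flavor of the kind of fuzzy logic a controller in such an environment might use, here is a minimal Sugeno-style sketch that maps an intake-pressure error to a pump-frequency correction. The membership functions, rule base, variable names, and numeric ranges are all hypothetical illustrations, not the thesis controller or its PLC implementation.

```python
# Minimal illustrative fuzzy controller sketch: adjust ESP drive frequency
# from the error between measured and target pump intake pressure.
# All memberships, ranges, and rules are hypothetical examples.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_frequency_step(pressure_error_bar):
    """Map intake-pressure error (bar) to a frequency correction (Hz)."""
    e = pressure_error_bar
    # Fuzzify the error: negative (pressure too low), near zero, positive.
    mu_neg = tri(e, -10.0, -5.0, 0.0)
    mu_zero = tri(e, -2.0, 0.0, 2.0)
    mu_pos = tri(e, 0.0, 5.0, 10.0)
    # Rule consequents (Hz): slow down, hold, speed up.
    actions = np.array([-1.0, 0.0, +1.0])
    weights = np.array([mu_neg, mu_zero, mu_pos])
    if weights.sum() == 0.0:
        return 0.0
    # Weighted-average (Sugeno-style) defuzzification.
    return float(weights @ actions / weights.sum())

for err in (-6.0, -1.0, 0.0, 3.0):
    print(f"error {err:+.1f} bar -> df = {fuzzy_frequency_step(err):+.2f} Hz")
```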
Abstract:
This work presents an optical non-contact technique to evaluate the fatigue damage state of CFRP structures by measuring the irregularity factor of the surface. This factor captures information about the surface topology and can be measured easily in the field, for example with optical profilometers. The surface irregularity factor has been correlated with stiffness degradation, which is a well-accepted parameter for evaluating the fatigue damage state of composite materials. Constant amplitude fatigue loads (CAL) and realistic variable amplitude loads (VAL), representative of real in-flight conditions, were applied to dog-bone shaped tensile specimens. It has been shown that the measurement of the surface irregularity parameters can be applied to evaluate the damage state of a structure, independently of the type of fatigue load that caused the damage. As a result, this measurement technique is applicable to a wide range of inspections of composite material structures, from pressurized tanks under constant amplitude loads to aeronautical structures under variable amplitude loads, such as wings and empennages, as well as automotive and other industrial applications.
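For illustration, an irregularity factor of a sampled surface profile can be estimated as the ratio of mean-level upcrossings to peaks, a common definition in random-process theory. The sketch below uses that generic definition on synthetic profiles; it is not necessarily the exact parameter set used in the work.

```python
# Generic irregularity-factor estimate for a 1-D surface profile:
# ratio of mean-level upcrossings to peaks (random-process definition).
import numpy as np

def irregularity_factor(profile):
    """Ratio of mean-level upcrossings to peaks of a sampled profile."""
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()                      # count crossings about the mean line
    upcrossings = np.count_nonzero((z[:-1] < 0) & (z[1:] >= 0))
    dz = np.diff(z)
    peaks = np.count_nonzero((dz[:-1] > 0) & (dz[1:] <= 0))
    return upcrossings / peaks if peaks else np.nan

# Synthetic example: a narrow-band profile is close to 1, a broad-band
# (noisy) profile gives a lower value.
x = np.linspace(0.0, 50.0, 5000)
rng = np.random.default_rng(1)
narrow = np.sin(2 * np.pi * x)
broad = narrow + 0.8 * rng.standard_normal(x.size)
print(irregularity_factor(narrow), irregularity_factor(broad))
```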
Abstract:
We propose a crack propagation algorithm which is independent of particular constitutive laws and specific element technology. It consists of a localization limiter in the form of the screened Poisson equation with local mesh refinement. This combination allows the capturing of strain localization with good resolution, even in the absence of a sufficiently fine initial mesh. In addition, crack paths are implicitly defined from the localized region, circumventing the need for a specific direction criterion. Observed phenomena such as multiple crack growth and shielding emerge naturally from the algorithm. In contrast with alternative regularization algorithms, curved cracks are correctly represented. A staggered scheme for standard equilibrium and screened equations is used. Element subdivision is based on edge split operations using a given constitutive quantity (either damage or void fraction). To assess the robustness and accuracy of this algorithm, we use both quasi-brittle benchmarks and ductile tests.
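The regularizing effect of the screened Poisson equation can be seen in a minimal 1-D finite-difference sketch: solving e_bar - l^2 * e_bar'' = e spreads a spiky local field over the length scale l. The grid, screening length, and boundary treatment below are illustrative assumptions, not the paper's finite element implementation.

```python
# 1-D finite-difference sketch of the screened Poisson localization limiter:
# (I - l^2 * d^2/dx^2) e_bar = e, which smooths a sharp local field e.
import numpy as np

n, L, ell = 201, 1.0, 0.05          # nodes, domain length, screening length
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Local (unregularized) field: a sharp spike at mid-span.
e_local = np.where(np.abs(x - 0.5 * L) < 2 * h, 1.0, 0.0)

# Assemble (I - ell^2 * D2) with simple Neumann-like end conditions.
main = np.full(n, 1.0 + 2.0 * ell**2 / h**2)
off = np.full(n - 1, -(ell**2) / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
A[0, :2] = [1.0 + ell**2 / h**2, -(ell**2) / h**2]
A[-1, -2:] = [-(ell**2) / h**2, 1.0 + ell**2 / h**2]

e_bar = np.linalg.solve(A, e_local)
print(f"peak before: {e_local.max():.2f}, after smoothing: {e_bar.max():.2f}")
```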
Abstract:
In the field of vibration qualification testing, with the popular Random Control mode of shakers, the specimen is excited by random vibrations typically set in the form of a Power Spectral Density (PSD). The corresponding signals are stationary and Gaussian, i.e., featuring a normal distribution. Conversely, real-life excitations are frequently non-Gaussian, exhibiting high peaks and/or burst signals and/or deterministic harmonic components. The so-called kurtosis is a parameter often used to statistically describe the occurrence and significance of high peak values in a random process. Since the similarity between test input profiles and real-life excitations is fundamental for qualification test reliability, some methods of kurtosis control can be implemented to synthesize realistic (non-Gaussian) input signals. Durability tests are performed to check the resistance of a component to vibration-based fatigue damage. A procedure to synthesize test excitations which starts from measured data and preserves both the damage potential and the characteristics of the reference signals is desirable. The Fatigue Damage Spectrum (FDS) is generally used to quantify the fatigue damage potential associated with the excitation. The signal synthesized for accelerated durability tests (i.e., with a limited duration) must feature the same FDS as the reference vibration computed for the component's expected lifetime. Current standard procedures are efficient in synthesizing signals in the form of a PSD, but prove inaccurate if the reference data are non-Gaussian. This work presents novel algorithms for the synthesis of accelerated durability test profiles with prescribed FDS and a non-Gaussian distribution. An experimental campaign is conducted to validate the algorithms by testing their accuracy, robustness, and practical effectiveness. Moreover, an original procedure is proposed for the estimation of the fatigue damage potential, aimed at minimizing the computational time. This research is thus expected to improve both the effectiveness and the efficiency of excitation profile synthesis for accelerated durability tests.
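As a small illustration of the kurtosis statistic mentioned above, the sketch below computes the fourth standardized moment of a synthetic signal: approximately 3 for a Gaussian record, and larger when rare high peaks (bursts) are present. The signals are synthetic examples, not measured mission profiles or the thesis data.

```python
# Kurtosis as an indicator of "peakiness" in a random vibration record.
import numpy as np

def kurtosis(x):
    """Fourth standardized moment (not excess kurtosis)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

rng = np.random.default_rng(42)
gaussian = rng.standard_normal(200_000)
bursty = gaussian.copy()
spikes = rng.choice(bursty.size, size=200, replace=False)
bursty[spikes] += 8.0 * rng.standard_normal(200)     # add rare high peaks

print(f"Gaussian kurtosis ~ {kurtosis(gaussian):.2f}")   # close to 3.0
print(f"Bursty   kurtosis ~ {kurtosis(bursty):.2f}")     # well above 3
```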
Abstract:
A densely built environment is a complex system of infrastructure, nature, and people closely interconnected and interacting. Vehicles, public transport, weather action, and sports activities constitute a varied set of excitation and degradation sources for civil structures. In this context, operators should consider multiple factors within a holistic approach to assessing the structural health state. Vibration-based structural health monitoring (SHM) has demonstrated great potential as a decision-supporting tool to schedule maintenance interventions. However, most excitation sources are considered an issue for practical SHM applications, since traditional methods are typically based on strict assumptions of input stationarity. Last-generation low-cost sensors present limitations related to modest sensitivity and a high noise floor compared to traditional instrumentation. If these devices are used for SHM in urban scenarios, short vibration recordings collected during high-intensity events and vehicle passages may be the only available datasets with a sufficient signal-to-noise ratio. While researchers have devoted considerable effort to mitigating the effects of short-term phenomena in vibration-based SHM, the ultimate goal of this thesis is to exploit them and obtain valuable information on the structural health state. First, this thesis proposes strategies and algorithms for smart sensors operating individually or in a distributed computing framework to identify damage-sensitive features based on instantaneous modal parameters and influence lines. Ordinary traffic and human activities become essential sources of excitation, while human-powered vehicles, instrumented with smartphones, take the role of roving sensors in crowdsourced monitoring strategies. The technical and computational apparatus is optimized using in-memory computing technologies. Moreover, identifying additional local features can be particularly useful to support the damage assessment of complex structures. To this end, smart coatings are studied to enable the self-sensing properties of ordinary structural elements. In this context, a machine-learning-aided tomography method is proposed to interpret the data provided by a nanocomposite paint interrogated electrically.
Abstract:
Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) are becoming essential in many application contexts, e.g. civil, industrial, and aerospace, to reduce structure maintenance costs and improve safety. Conventional inspection methods typically exploit bulky and expensive instruments and rely on highly demanding signal processing techniques. The pressing need to overcome these limitations is the common thread that guided the work presented in this Thesis. In the first part, a scalable, low-cost and multi-sensor smart sensor network is introduced. The capability of this technology to carry out accurate modal analysis on structures undergoing flexural vibrations has been validated by means of two experimental campaigns. Then, the suitability of low-cost piezoelectric disks for modal analysis has been demonstrated. To enable the use of this kind of sensing technology in such non-conventional applications, ad hoc data merging algorithms have been developed. In the second part, imaging algorithms for Lamb wave inspection (namely DMAS and DS-DMAS) have been implemented and validated. Results show that DMAS outperforms the canonical Delay and Sum (DAS) approach in terms of image resolution and contrast. Similarly, DS-DMAS achieves better results than both DMAS and DAS by suppressing artefacts and noise. To exploit the full potential of these procedures, accurate group velocity estimates are required. Thus, novel wavefield analysis tools that can address the estimation of dispersion curves from SLDV acquisitions have been investigated. An image segmentation technique (called DRLSE) was exploited in the k-space to extract the wavenumber profile. The DRLSE method was compared with compressive sensing methods for extracting the group and phase velocity information. The validation, performed on three different carbon fibre plates, showed that the proposed solutions can accurately determine the wavenumber and velocities in polar coordinates at multiple excitation frequencies.
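To make the DAS/DMAS comparison concrete, the sketch below evaluates both beamformers at a single image pixel: DAS sums the delayed channel samples, while DMAS sums the signed square roots of all pairwise products of those samples. The geometry, group velocity, and echo signals are synthetic placeholders, and the DS-DMAS variant is not reproduced here.

```python
# Illustrative Delay-and-Sum (DAS) vs Delay-Multiply-and-Sum (DMAS)
# beamforming at one image pixel for a toy Lamb-wave setup.
import numpy as np

fs = 1.0e6            # sampling frequency (Hz)
c = 3000.0            # assumed group velocity (m/s)

def delays_to_pixel(tx, rx_positions, pixel):
    """Actuator-to-pixel-to-sensor time of flight for each channel."""
    d_tx = np.linalg.norm(pixel - tx)
    d_rx = np.linalg.norm(rx_positions - pixel, axis=1)
    return (d_tx + d_rx) / c

def das_dmas_pixel(signals, delays):
    """Return DAS and DMAS amplitudes at one pixel."""
    idx = np.round(delays * fs).astype(int)
    s = np.array([sig[i] if i < sig.size else 0.0
                  for sig, i in zip(signals, idx)])
    das = s.sum()
    dmas = 0.0
    for i in range(s.size):
        for j in range(i + 1, s.size):      # pairwise products, i < j
            p = s[i] * s[j]
            dmas += np.sign(p) * np.sqrt(abs(p))
    return das, dmas

# Toy example: 4 sensors in a line, one scatterer at (0.30, 0.20) m.
tx = np.array([0.0, 0.0])
rx = np.array([[0.1, 0.0], [0.2, 0.0], [0.3, 0.0], [0.4, 0.0]])
scatterer = np.array([0.30, 0.20])
t = np.arange(0, 0.001, 1.0 / fs)
tof = delays_to_pixel(tx, rx, scatterer)
signals = [np.exp(-((t - d) * 2e5) ** 2) for d in tof]   # one echo per channel
print(das_dmas_pixel(signals, tof))
```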
Abstract:
This thesis deals with the efficient solution of optimization problems of practical interest. The first part of the thesis deals with bin packing problems. The bin packing problem (BPP) is one of the oldest and most fundamental combinatorial optimization problems. The bin packing problem and its generalizations arise often in real-world applications, from the manufacturing industry, logistics and transportation of goods, to scheduling. After an introductory chapter, I present two applications of two of the most natural extensions of bin packing: Chapter 2 is dedicated to an application of bin packing in two dimensions to the problem of scheduling a set of computational tasks on a computer cluster, while Chapter 3 deals with the generalization of the BPP in three dimensions that arises frequently in logistics and transportation, often complemented with additional constraints on the placement of items and on the characteristics of the solution, such as guarantees on the stability of the items (to avoid potential damage to the transported goods), on the distribution of the total weight among the bins, and on compatibility with loading and unloading operations. The second part of the thesis, and in particular Chapter 4, considers the Transmission Expansion Problem (TEP), where an electrical transmission grid must be expanded so as to satisfy future energy demand at minimum cost, while maintaining some guarantees of robustness to potential line failures. These problems are gaining importance in a world where the shift towards renewable energy can impose a significant geographical reallocation of generation capacities, resulting in the necessity of expanding current power transmission grids.
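As a baseline illustration of the one-dimensional BPP mentioned above, the sketch below implements the textbook first-fit decreasing heuristic. It is a generic heuristic, not the algorithms developed in the thesis for the two- and three-dimensional variants.

```python
# First-fit decreasing (FFD) heuristic for 1-D bin packing: sort items by
# decreasing size and place each in the first bin with enough free space.
def first_fit_decreasing(item_sizes, capacity):
    """Pack items into bins of the given capacity; returns a list of bins."""
    bins = []          # each bin is a list of item sizes
    residual = []      # remaining capacity of each open bin
    for size in sorted(item_sizes, reverse=True):
        for k, free in enumerate(residual):
            if size <= free:               # first bin that still fits
                bins[k].append(size)
                residual[k] -= size
                break
        else:                              # no open bin fits: open a new one
            bins.append([size])
            residual.append(capacity - size)
    return bins

items = [4, 8, 1, 4, 2, 1, 3, 5, 7, 6]
print(first_fit_decreasing(items, capacity=10))   # 5 bins, which is optimal here
```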
Abstract:
Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to create a simulated drive that feels as real as possible, deceiving the driver's senses into believing they are in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and compare the obtained results with those of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used for motion control of the simulator. This classical algorithm is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room, depending on the driver's feeling. To overcome this problem and also take the driver's sensations into account, the optimal washout motion cueing algorithm was implemented. This optimal-control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track the accelerations that would have been perceived in a real vehicle, by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's feelings after the test drive. First, an off-line test with a step signal as input acceleration was performed to verify the behaviour of the simulator. Then, the algorithms were executed in the simulator during test drives on several tracks.
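The filter-based idea behind the classical washout can be illustrated with a simplified translational channel: scale the vehicle acceleration, high-pass filter it so the platform motion washes out back toward neutral, and double-integrate to a platform displacement. The filter order, cut-off frequency, and scaling below are illustrative values, not the Stuttgart simulator's tuned parameters, and tilt coordination and rotational channels are omitted.

```python
# Simplified translational channel of a classical washout filter.
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0                          # control-loop rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)

# Vehicle longitudinal acceleration: a 2 s braking pulse of -4 m/s^2.
a_vehicle = np.where((t > 1.0) & (t < 3.0), -4.0, 0.0)

scale = 0.5                                           # motion scaling
b, a = butter(3, 0.3 / (fs / 2), btype="highpass")    # 0.3 Hz washout filter
a_platform = lfilter(b, a, scale * a_vehicle)         # high-passed acceleration
v_platform = np.cumsum(a_platform) / fs               # integrate to velocity
x_platform = np.cumsum(v_platform) / fs               # and to displacement

# With a third-order high-pass filter the platform displacement returns
# toward neutral after the sustained braking cue has been washed out.
print(f"peak platform excursion: {np.max(np.abs(x_platform)):.2f} m")
```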
Abstract:
Lutein (LT) is a carotenoid obtained from the diet and, although its antioxidant activity has been reported biochemically, few studies are available concerning its influence on the expression of antioxidant genes. The expression of 84 genes implicated in antioxidant defense was quantified using a quantitative reverse transcription polymerase chain reaction array. DNA damage was measured by the comet assay, and glutathione (GSH) and thiobarbituric acid reactive substances (TBARS) were quantified as biochemical parameters of oxidative stress in mouse kidney and liver. Treatment with cisplatin (cDDP) reduced the concentration of GSH and increased TBARS, parameters that were ameliorated when the treatment was combined with LT. cDDP altered the expression of 32 genes, increasing the expression of GPx2, APC, Nqo1 and CCs. LT changed the expression of 37 genes, inducing 13 of them, mainly oxygen transporters. In the treatments combining cDDP and LT, 30 genes had their expression changed, with an increase in the same genes as in the cDDP treatment alone. These results suggest that LT might act by scavenging reactive species and also by inducing the expression of genes related to a better antioxidant response, notably an improvement in oxygen transport. This improved redox state of the cell with LT treatment could be related to the antigenotoxic and antioxidant effects observed.
Abstract:
TiO2 and TiO2/WO3 electrodes, irradiated by a solar simulator in configurations for heterogeneous photocatalysis (HP) and electrochemically-assisted HP (EHP), were used to remediate aqueous solutions containing 10 mg L⁻¹ (34 μmol L⁻¹) of 17-α-ethinylestradiol (EE2), the active component of most oral contraceptives. The photocatalysts consisted of 4.5 μm thick porous films of TiO2 and TiO2/WO3 (W/Ti molar ratio of 12%) deposited on transparent electrodes from aqueous suspensions of TiO2 particles and WO3 precursors, followed by thermal treatment at 450 °C. First, an energy diagram was constructed from photoelectrochemical and UV-Vis absorption spectroscopy data; it revealed that EE2 could be directly oxidized by the photogenerated holes at the semiconductor surfaces, considering the relative HOMO level of EE2 and the semiconductor valence band edges. Also, for the irradiated hybrid photocatalyst, electrons in TiO2 should be transferred to the WO3 conduction band, while holes move toward the TiO2 valence band, improving charge separation. The remediated EE2 solutions were analyzed by fluorescence, HPLC and total organic carbon measurements. As expected from the energy diagram, both photocatalysts promoted EE2 oxidation in the HP configuration; after 4 h, the EE2 concentration decayed to 6.2 mg L⁻¹ (35% EE2 removal) with irradiated TiO2, while the TiO2/WO3 electrode resulted in 45% EE2 removal. A higher performance was achieved in the EHP systems, in which a Pt wire was introduced as a counter-electrode and the photoelectrodes were biased at +0.7 V; the EE2 removal then corresponded to 48 and 54% for TiO2 and TiO2/WO3, respectively. The hybrid TiO2/WO3, when compared to the TiO2 electrode, exhibited enhanced sunlight harvesting and improved separation of photogenerated charge carriers, resulting in a higher performance for removing this contaminant of emerging concern from aqueous solution.
Abstract:
It is well known that long-term use of shampoo causes damage to human hair. Although the Lowry method has been widely used to quantify hair damage, it is unsuitable in the presence of some surfactants, and no alternative method has been proposed in the literature. In this work, a different method is used to investigate and compare the hair damage induced by four types of surfactants (including three commercial-grade surfactants) and water. Hair samples were immersed in aqueous surfactant solutions under conditions that resemble a shower (38 °C, constant shaking). These solutions become colored with time of contact with hair, and their UV-vis spectra were recorded. For comparison, the amounts of protein extracted from hair by sodium dodecyl sulfate (SDS) and by water were estimated by the Lowry method. Additionally, non-pigmented vs. pigmented hair, as well as sepia melanin, were used to understand the origin of the color of the washing solutions and their spectra. The results presented herein show that hair degradation is mostly caused by the extraction of proteins, cuticle fragments and melanin granules from the hair fiber. It was found that the intensity of the solution color varies with the charge density of the surfactants. Furthermore, the intensity of the solution color can be correlated with the amount of protein quantified by the Lowry method as well as with the degree of hair damage. The UV-vis spectrum of hair washing solutions is thus a simple and straightforward way to quantify and compare the hair damage induced by different commercial surfactants.