928 results for structure, analysis, modeling
Abstract:
Over the last decades, calibration techniques have been widely used to improve the accuracy of robots and machine tools, since they involve only software modification rather than changes to the design and manufacture of the hardware. Traditionally, four steps are required for a calibration: error modeling, measurement, parameter identification and compensation. The objective of this thesis is to propose a method for the kinematics analysis and error modeling of a newly developed hybrid redundant robot, the IWR (Intersector Welding Robot), which possesses ten degrees of freedom (DOF): 6 DOF in parallel and an additional 4 DOF in serial. The problems of kinematics modeling and error modeling of the proposed IWR robot are discussed. Based on the vector arithmetic method, the kinematics model and the sensitivity model of the end-effector with respect to the structural parameters are derived and analyzed. The relations between the pose (position and orientation) accuracy and manufacturing tolerances, actuation errors, and connection errors are formulated. Computer simulation is performed to examine the validity and effectiveness of the proposed method.
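The sensitivity model described above maps small structural-parameter errors to end-effector pose errors through a first-order (Jacobian) relation. The thesis derives this with the vector arithmetic method; as a minimal illustrative sketch only, the snippet below builds such a sensitivity matrix numerically for a hypothetical 2-link planar arm (link lengths and error values are invented for illustration):

```python
import numpy as np

def forward_kinematics(params, q):
    """End-effector position of a planar 2-link arm.
    params = [L1, L2] (structural parameters), q = [q1, q2] (joint angles)."""
    L1, L2 = params
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def sensitivity_matrix(params, q, eps=1e-7):
    """Numerical Jacobian of the pose w.r.t. the structural parameters."""
    p0 = forward_kinematics(params, q)
    J = np.zeros((p0.size, len(params)))
    for i in range(len(params)):
        dp = np.array(params, dtype=float)
        dp[i] += eps
        J[:, i] = (forward_kinematics(dp, q) - p0) / eps
    return J

params = [0.5, 0.4]                # nominal link lengths [m] (illustrative)
q = [0.3, 0.7]                     # joint configuration [rad]
J = sensitivity_matrix(params, q)
dparams = np.array([1e-3, -5e-4])  # hypothetical manufacturing tolerances [m]
print("pose error estimate:", J @ dparams)
```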
Abstract:
In many engineering applications, compliant piping systems conveying liquids are subjected to inelastic deformations due to severe pressure surges, for example plastic tubes in modern water supply transmission lines and metallic piping in nuclear power plants. In these cases the design of such systems may require adequate modeling of the interactions between the fluid dynamics and the inelastic structural pipe motions. The reliability of the prediction of fluid-pipe behavior depends mainly on the adequacy of the constitutive equations employed in the analysis. In this paper, a systematic and general approach is proposed to consistently incorporate different kinds of inelastic behavior of the pipe material into a fluid-structure interaction analysis. The main feature of the constitutive equations considered in this work is that a very simple numerical technique can be used for solving the coupled equations describing the dynamics of the fluid and pipe wall. Numerical examples concerning the analysis of polyethylene and stainless steel pipe networks are presented to illustrate the versatility of the proposed approach.
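The central ingredient here is the time integration of an inelastic constitutive law for the pipe wall. As a hedged illustration of the general idea (not the authors' specific formulation), the following sketch integrates a standard one-dimensional rate-independent elastoplastic law with linear isotropic hardening using the classical radial-return update; all material values are placeholders:

```python
import numpy as np

def return_map_1d(eps, eps_p, alpha, E=200e9, H=2e9, sigma_y=250e6):
    """One step of 1D radial-return plasticity with linear isotropic hardening.
    eps: total strain, eps_p: plastic strain, alpha: hardening variable."""
    sigma_trial = E * (eps - eps_p)                  # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)
    if f_trial <= 0.0:                               # elastic step
        return sigma_trial, eps_p, alpha
    dgamma = f_trial / (E + H)                       # plastic corrector
    sigma = sigma_trial - dgamma * E * np.sign(sigma_trial)
    eps_p += dgamma * np.sign(sigma_trial)
    alpha += dgamma
    return sigma, eps_p, alpha

# drive the model with a strain ramp that exceeds first yield
eps_p, alpha = 0.0, 0.0
for eps in np.linspace(0.0, 0.004, 9):
    sigma, eps_p, alpha = return_map_1d(eps, eps_p, alpha)
    print(f"eps={eps:.4f}  sigma={sigma/1e6:8.1f} MPa  eps_p={eps_p:.5f}")
```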
Abstract:
Due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been growing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances and leads to poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach combining Finite Element Analysis and Experimental Modal Analysis is applied to build a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how a binding method affects the dynamic behavior of layered sheet steel elements when compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure to be used as the structure for the stator of an outer-rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines is studied.
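One reason the binding method matters is that bending stiffness depends on whether the layers act as one section or slide freely: n perfectly bonded layers recover the solid-plate second moment of area, while n unbonded layers retain only 1/n^2 of it, shifting the natural frequencies down accordingly. The sketch below illustrates this with textbook clamped-free (cantilever) Euler-Bernoulli beam formulas under idealized assumptions; the dimensions are invented:

```python
import numpy as np

E, rho = 210e9, 7850.0          # steel: Young's modulus [Pa], density [kg/m^3]
L, b, h = 0.30, 0.05, 0.006     # beam length, width, total thickness [m] (illustrative)
lam = np.array([1.875, 4.694, 7.855])   # clamped-free eigenvalue roots lambda_n

def cantilever_freqs(I):
    """First natural frequencies [Hz] of an Euler-Bernoulli cantilever."""
    A = b * h                   # total cross-section is the same in both cases
    return (lam**2 / (2 * np.pi * L**2)) * np.sqrt(E * I / (rho * A))

I_solid = b * h**3 / 12                   # one homogeneous plate
n = 4                                     # number of sheet layers
I_unbonded = n * b * (h / n)**3 / 12      # layers bending independently

print("bonded/solid:", np.round(cantilever_freqs(I_solid), 1))
print("unbonded x4 :", np.round(cantilever_freqs(I_unbonded), 1))  # = solid / n
```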
Abstract:
The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing humans with various goods and services such as food, feed, timber and biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystems and/or the management regime, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated into a framework that can adequately handle the competition for land between major land-use options such as residential or agricultural land. This thesis is organised around the requirements for integrating the simulation of forest changes driven by wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following focal points for research have been identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover. Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country, and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the 'Global Forest Products Model' (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector until 2050, conditional on selected scenarios from the Millennium Ecosystem Assessment and the Global Environmental Outlook. Secondly, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is utilized to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data are used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and identify locations of deforestation. Using the previous results as input, a methodology to simulate the spatial dynamics of forests driven by wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas, and allocates these on a five arc-minute grid.
In a first version, the model assumes present-day conditions through the entire study period and does not explicitly address forest age structure. Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is tested under three forest management scenarios; it responds to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities need to be addressed, particularly the incorporation of plantation establishment, the treatment of age-structure dynamics, and the implementation of a new technology-change factor in the GFPM to allow the specification of substituting raw wood products (especially fuelwood) with non-wood products.
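The allocation step described above distributes a country-level roundwood demand over grid cells according to available woody biomass. As a simplified, hypothetical sketch of such an allocation (not the actual LandSHIFT procedure, whose suitability ranking is richer), cells are sorted by biomass and harvested until the demand is met:

```python
import numpy as np

def allocate_demand(biomass, demand, harvest_fraction=0.8):
    """Greedy allocation of a country-level demand [t] over grid cells.
    biomass: 1D array of extractable woody biomass per cell [t]."""
    order = np.argsort(biomass)[::-1]          # richest cells first
    harvested = np.zeros_like(biomass)
    remaining = demand
    for i in order:
        if remaining <= 0:
            break
        take = min(biomass[i] * harvest_fraction, remaining)
        harvested[i] = take
        remaining -= take
    return harvested, remaining

rng = np.random.default_rng(0)
biomass = rng.uniform(0, 500, size=20)         # toy 20-cell "country"
harvested, unmet = allocate_demand(biomass, demand=3000.0)
print("harvested per cell:", np.round(harvested, 1))
print("unmet demand:", round(unmet, 1))
```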
Abstract:
Polycondensation of 2,6-dihydroxynaphthalene with 4,4'-bis(4"-fluorobenzoyl)biphenyl affords a novel, semicrystalline poly(ether ketone) with a melting point of 406 °C and a glass transition temperature (onset) of 168 °C. Molecular modeling and diffraction-simulation studies of this polymer, coupled with data from the single-crystal structure of an oligomer model, have enabled the crystal and molecular structure of the polymer to be determined from X-ray powder data. This structure, the first for any naphthalene-containing poly(ether ketone), is fully ordered, in monoclinic space group P2_1/b, with two chains per unit cell. Rietveld refinement against the experimental powder data gave a final agreement factor (R_wp) of 6.7%.
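For reference, the weighted-profile agreement factor quoted above is conventionally defined as follows (standard Rietveld notation, not specific to this paper):

```latex
R_{\mathrm{wp}} = \sqrt{\frac{\sum_i w_i \left( y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}} \right)^2}{\sum_i w_i \left( y_i^{\mathrm{obs}} \right)^2}}, \qquad w_i = \frac{1}{\sigma_i^2},
```

where y_i^obs and y_i^calc are the observed and calculated intensities at step i of the powder pattern.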
Abstract:
Human parasitic diseases are a foremost threat to human health and welfare around the world. Trypanosomiasis is a very serious infectious disease against which the currently available drugs are limited and not effective. Therefore, there is an urgent need for new chemotherapeutic agents. One attractive drug target is cruzain, the major cysteine protease from Trypanosoma cruzi. In the present work, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) studies were conducted on a series of thiosemicarbazone and semicarbazone derivatives as inhibitors of cruzain. Molecular modeling studies were performed in order to identify the preferred binding mode of the inhibitors in the enzyme active site, and to generate structural alignments for the three-dimensional quantitative structure-activity relationship (3D QSAR) investigations. Statistically significant models were obtained (CoMFA, r^2 = 0.96 and q^2 = 0.78; CoMSIA, r^2 = 0.91 and q^2 = 0.73), indicating their predictive ability for untested compounds. The models were externally validated employing a test set, and the predicted values were in good agreement with the experimental results. The final QSAR models and the information gathered from the 3D CoMFA and CoMSIA contour maps provided important insights into the chemical and structural basis involved in the molecular recognition process of this family of cruzain inhibitors, and should be useful for the design of new structurally related analogs with improved potency.
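The q^2 statistic quoted above is the leave-one-out cross-validated analogue of r^2 (q^2 = 1 - PRESS/SS). As a generic, hedged sketch of how such a value is computed for any regression model (here an ordinary least-squares model on synthetic descriptors, not the CoMFA/CoMSIA fields themselves):

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS / SS."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        # least-squares fit (with intercept) on all samples but i
        A = np.column_stack([X[mask], np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        y_pred = np.append(X[i], 1.0) @ coef
        press += (y[i] - y_pred) ** 2
    ss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))          # 30 compounds, 3 toy descriptors
y = X @ np.array([1.2, -0.7, 0.3]) + rng.normal(scale=0.2, size=30)
print("q2 (LOO):", round(q2_loo(X, y), 3))
```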
Abstract:
Eukaryotic translation initiation factor 5A (eIF5A) is a protein that is highly conserved and essential for cell viability. This factor is the only protein known to contain the unique and essential amino acid residue hypusine. This work focused on the structural and functional characterization of Saccharomyces cerevisiae eIF5A. The tertiary structure of yeast eIF5A was modeled based on the structure of its Leishmania mexicana homologue, and this model was used to predict the structural localization of new site-directed and randomly generated mutations. Most of the 40 new mutants exhibited phenotypes that resulted from eIF5A protein-folding defects. Our data provided evidence that the C-terminal alpha-helix present in yeast eIF5A is an essential structural element, whereas the eIF5A N-terminal 10-amino-acid extension, not present in archaeal eIF5A homologs, is not. Moreover, the mutants containing substitutions at or in the vicinity of the hypusine modification site displayed nonviable or temperature-sensitive phenotypes and were defective in hypusine modification. Interestingly, two of the temperature-sensitive strains produced stable mutant eIF5A proteins, eIF5A(K56A) and eIF5A(Q22H,L93F), and showed defects in protein synthesis at the restrictive temperature. Our data revealed important structural features of eIF5A that are required for its vital role in cell viability and underscored an essential function of eIF5A in the translation step of gene expression.
Abstract:
Two L-amino acid oxidases (LAAOs) were identified by random sequencing of cDNA libraries from the venom glands of Bothrops moojeni (BmooLAAO) and Bothrops jararacussu (BjussuLAAO). Phylogenetic analysis involving other SV-LAAOs showed sequence identities in the range of 83-87%, these enzymes being most closely related to those from Agkistrodon and Trimeresurus. Molecular modeling experiments identified the FAD-binding, substrate-binding, and helical domains of BmooLAAO and BjussuLAAO. The RMS deviations obtained by superposing these domains onto the Calloselasma rhodostoma LAAO crystal structure confirm the high degree of structural similarity between the enzymes. Purified BjussuLAAO-I and BmooLAAO-I exhibited antiprotozoal activities, which were demonstrated to be hydrogen-peroxide mediated. This is the first report of the isolation and identification of cDNAs encoding LAAOs from Bothrops venom. The findings reported here contribute to the overall structural elucidation of SV-LAAOs and will advance the understanding of their mode of action.
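Sequence identities like the 83-87% range above come from pairwise comparison of aligned sequences. A minimal sketch of the computation over an already-aligned pair (the sequences here are toy fragments, not the actual LAAO data):

```python
def percent_identity(a, b):
    """Percent identity of two pre-aligned sequences of equal length.
    Positions where either sequence has a gap ('-') are excluded."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

# toy aligned fragments (hypothetical, for illustration only)
seq1 = "MKVLAAO-GTWEVF"
seq2 = "MKVLSAOAGTWDVF"
print(f"identity: {percent_identity(seq1, seq2):.1f}%")
```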
Abstract:
This work presents major results from a novel dynamic model intended to deterministically represent the complex relation between HIV-1 and the human immune system. The novel structure of the model extends previous work by representing different host anatomic compartments with a more in-depth cellular and molecular immunological phenomenology. Recently identified mechanisms related to HIV-1 infection, as well as other well-known relevant mechanisms typically ignored in mathematical models of HIV-1 pathogenesis and immunology, such as cell-to-cell transmission, are also addressed.
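Deterministic HIV-1 models of this kind are built from compartmental ODEs. The sketch below integrates the standard basic three-compartment model (target cells T, infected cells I, free virus V), which is far simpler than the multi-compartment model of the paper; parameter values are textbook-style placeholders:

```python
from scipy.integrate import solve_ivp

# basic within-host HIV model: dT/dt = s - d*T - beta*T*V
#                              dI/dt = beta*T*V - delta*I
#                              dV/dt = p*I - c*V
s, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 0.7, 100.0, 13.0

def rhs(t, y):
    T, I, V = y
    return [s - d * T - beta * T * V,
            beta * T * V - delta * I,
            p * I - c * V]

# start from an uninfected steady state plus a tiny viral inoculum
sol = solve_ivp(rhs, [0, 100], [1e6, 0.0, 1e-3], rtol=1e-8, max_step=0.5)
print("final T, I, V:", [f"{v:.3g}" for v in sol.y[:, -1]])
```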
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast, then explain the consistency and comparison tests used in CSEP experiments to evaluate model performance. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of PSHA: the declustering of seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the declustering of seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of exceedances that is usually considered fundamental for transforming exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods commonly used to account for epistemic uncertainty in PSHA. The most widely used is the logic tree, which stands at the basis of the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree, and then show that this structure is not adequate to describe epistemic uncertainty. We then propose a new probabilistic framework based on ensemble modelling that properly accounts for epistemic uncertainties in PSHA.
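The transformation the second chapter revolves around is the standard PSHA step from an annual exceedance rate lambda to an exceedance probability over an exposure time T under the Poisson assumption, P = 1 - exp(-lambda * T). A small worked sketch (the rate is a conventional example value, not from the thesis):

```python
import numpy as np

def exceedance_probability(rate_per_year, horizon_years):
    """Poisson probability of at least one exceedance within the horizon."""
    return 1.0 - np.exp(-rate_per_year * horizon_years)

# e.g. a ground-motion level exceeded on average once every 475 years
lam = 1.0 / 475.0
print(f"P(exceedance in 50 yr) = {exceedance_probability(lam, 50):.4f}")
# ~0.10, the familiar '10% in 50 years' hazard level
```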
Abstract:
In the present work, a detailed analysis of a Mediterranean tropical-like cyclone (TLC) that occurred in January 2014 has been conducted. The author is not aware of any other study of this particular event at the time of publication of this thesis. In order to outline the cyclone's evolution, observational data, including weather-station data, satellite data, radar data and photographic evidence, were collected first. After identifying the cyclone path and its general features, the GLOBO, BOLAM and MOLOCH NWP models, developed at ISAC-CNR (Bologna), were used to simulate the phenomenon. Particular attention was paid to the Mediterranean phase as well as to the Atlantic phase, since the cyclone showed a well-defined precursor up to 3 days before the formation of the minimum in the Alboran Sea. The Mediterranean phase was studied using different combinations of the GLOBO, BOLAM and MOLOCH models, so as to evaluate the best model chain for simulating this kind of phenomenon. The BOLAM and MOLOCH models showed the best performance, correcting the path erroneously deviated in the National Centers for Environmental Prediction (NCEP) and ECMWF operational models. The analysis of the cyclone's thermal phase showed the presence of a deep warm-core structure in many cases, thus confirming the tropical-like nature of the system. Furthermore, the results showed high sensitivity to initial conditions over the whole lifetime of the cyclone, while modification of the Sea Surface Temperature (SST) led only to small changes in the Adriatic phase. The Atlantic phase was studied using the GLOBO and BOLAM models with the aid of the same methodology. After tracing the precursor, in the form of a low-pressure system, from the American East Coast to Spain, the thermal phase analysis was conducted. The parameters obtained showed evidence of a deep cold-core asymmetric structure during the whole Atlantic phase, while the first contact with the Mediterranean Sea caused a sudden transition to a shallow warm-core structure. The examination of the 3-dimensional Potential Vorticity (PV) structure revealed the presence of a PV streamer that formed over Greenland and eventually interacted with the low-pressure system over the Spanish coast, favouring the first, baroclinic phase of the cyclone's intensification. Finally, the development of an automated system that tracks and studies the thermal phase of Mediterranean cyclones is encouraged; this could lead to forecasts of potential tropical transitions at minimal computational cost.
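Tracking schemes of the kind proposed at the end typically start by locating the sea-level-pressure minimum in each model output field and linking nearby minima across time steps. A deliberately minimal sketch on synthetic fields (no real model output is read here, and the thesis's actual tracker is more elaborate):

```python
import numpy as np

def track_min_slp(slp_frames, lats, lons):
    """Return the (lat, lon, value) of the SLP minimum in each 2D frame."""
    track = []
    for frame in slp_frames:
        j, i = np.unravel_index(np.argmin(frame), frame.shape)
        track.append((lats[j], lons[i], frame[j, i]))
    return track

# synthetic 3-frame example: a low deepening while moving east
lats = np.linspace(30, 45, 61)
lons = np.linspace(-5, 20, 101)
LON, LAT = np.meshgrid(lons, lats)
frames = [1015.0 - depth * np.exp(-((LAT - 38) ** 2 + (LON - lon0) ** 2) / 8.0)
          for depth, lon0 in [(18, 2.0), (24, 5.0), (30, 8.0)]]

for lat, lon, p in track_min_slp(frames, lats, lons):
    print(f"min SLP {p:7.1f} hPa at {lat:.1f}N, {lon:.1f}E")
```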
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are at issue.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We employ the latter for the reconstruction of the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation allows us to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results thus build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
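The two ingredients named above, singular value analysis to expose weak parameters and a Levenberg-Marquardt solve, can both be sketched generically. Below, a toy two-parameter least-squares problem stands in for the structure-matching problem; scipy's "lm" method is the classical (unregularized) Levenberg-Marquardt, not the thesis's regularized variant:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, x, g_target):
    """Misfit between a toy two-parameter model and 'measured' structure."""
    a, b = theta
    return a * np.exp(-b * x) - g_target

x = np.linspace(0.1, 3.0, 50)
g_target = 1.5 * np.exp(-0.8 * x) + np.random.default_rng(2).normal(0, 0.01, 50)

# Levenberg-Marquardt fit from a rough initial guess
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, g_target), method="lm")
print("fitted parameters:", np.round(fit.x, 3))

# singular value analysis of the Jacobian at the solution:
# small singular values flag parameter directions the data barely constrain
U, sing, Vt = np.linalg.svd(fit.jac, full_matrices=False)
print("singular values:", np.round(sing, 3))
print("weakest direction (last row of V^T):", np.round(Vt[-1], 3))
```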
Abstract:
The goal of this research is to provide a framework for the vibro-acoustic analysis and design of multiple-layer constrained-damping structures. Existing research on damping and the viscoelastic damping mechanism is limited to four mainstream approaches: modeling techniques for damping treatments/materials; control through the electro-mechanical effect using a piezoelectric layer; optimization by adjusting the parameters of the structure to meet design requirements; and identification of the damping material's properties through the response of the structure. This research proposes a systematic design methodology for the multiple-layer constrained damping beam that gives consideration to vibro-acoustics. A modeling technique to study the vibro-acoustics of multiple-layered viscoelastic laminated beams using the Biot damping model is presented within a hybrid numerical model: the boundary element method (BEM) is used to model the acoustical cavity, whereas the finite element method (FEM) is the basis for the vibration analysis of the multiple-layered beam structure. Through the proposed procedure, the analysis can easily be extended to other complex geometries with arbitrary boundary conditions. The nonlinear behavior of viscoelastic damping materials is represented by the Biot damping model, taking into account the effects of frequency, temperature and different damping materials for individual layers. A curve-fitting procedure used to obtain the Biot constants for different damping materials at each temperature is explained. The results from the structural vibration analysis for selected beams agree with published closed-form results, and the radiated noise for a sample beam structure obtained using commercial BEM software is compared with the acoustical results of the same beam using the Biot damping model. An extension of the Biot damping model is demonstrated for the MDOF (multiple degrees of freedom) dynamics equations of a discrete system in order to introduce different types of viscoelastic damping materials. The mechanical properties of viscoelastic damping materials, such as shear modulus and loss factor, change with ambient temperature and frequency. The application of the multiple-layer treatment increases the damping of the structure significantly and thus helps to attenuate vibration and noise over a broad range of frequencies and temperatures. The main contributions of this dissertation are the following three major tasks: 1) study of the viscoelastic damping mechanism and the dynamics equation of a multilayer damped system incorporating the Biot damping model; 2) building the finite element model of the multiple-layer constrained viscoelastic damping beam and conducting the vibration analysis; 3) extending the vibration problem to the BEM-based acoustical problem and comparing the results with commercial simulation software.
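A common way to see how a frequency- and temperature-dependent viscoelastic layer enters the dynamics is through a complex shear modulus G* = G(1 + i*eta), which makes the stiffness in the equations of motion complex. A minimal single-DOF sketch of the resulting frequency response follows (placeholder values, and a constant loss factor in place of the full Biot model):

```python
import numpy as np

m = 1.0              # modal mass [kg] (illustrative)
k = 4.0e4            # stiffness from the constrained viscoelastic layer [N/m]
eta = 0.3            # loss factor of the damping material (assumed constant)

freqs = np.linspace(1.0, 80.0, 400)          # excitation frequencies [Hz]
omega = 2 * np.pi * freqs
k_complex = k * (1 + 1j * eta)               # hysteretic (structural) damping
H = 1.0 / (k_complex - m * omega**2)         # receptance X/F

peak = freqs[np.argmax(np.abs(H))]
print(f"resonance near {peak:.1f} Hz "
      f"(undamped natural frequency {np.sqrt(k/m)/(2*np.pi):.1f} Hz)")
print(f"peak |H| = {np.abs(H).max():.2e} m/N")
```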
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. Several core concepts are presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The other two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
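The "explicit mathematical or statistical operations" step can be as simple as fitting a least-squares slope over a fixed window of each vital-sign series and using the slope as a latent feature. A hedged sketch of that idea (the window length, sampling rate and values are invented, not the paper's design choices):

```python
import numpy as np

def trend_feature(values, minutes_per_sample=1.0):
    """Least-squares slope of a vital-sign window (units per minute)."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

# toy 15-minute window of systolic blood pressure readings [mmHg]
sbp_window = np.array([112, 111, 110, 108, 109, 106, 105, 104,
                       103, 101, 100,  99,  97,  96,  94], dtype=float)
print(f"SBP trend: {trend_feature(sbp_window):+.2f} mmHg/min")  # negative = deteriorating
```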
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
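The area-under-the-ROC-curve comparison reported above can be reproduced in form (not in the actual numbers, which come from the study's data) with a few lines of scikit-learn on synthetic labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=500)                     # 0 = no arrest, 1 = arrest

# synthetic classifier scores: the "with trend features" model separates better
score_baseline = y * 1.0 + rng.normal(0, 0.9, 500)   # weaker separation
score_with_trend = y * 2.0 + rng.normal(0, 0.9, 500) # stronger separation

print(f"AUC baseline   : {roc_auc_score(y, score_baseline):.3f}")
print(f"AUC with trends: {roc_auc_score(y, score_with_trend):.3f}")
```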