983 results for structural calculate criteria
Abstract:
This study proposes an optimized design approach in which a finite element model of a specially shaped composite tank for spacecraft is built. The composite layers are preliminarily designed by combining the quasi-network design method with numerical simulation, which determines the ply angles and the layer thickness ratios used as the initial values for the optimization. An adaptive simulated annealing algorithm then optimizes the angles and the number of layers at each angle to minimize the structural weight. Based on this, the stacking sequence of the composite layers is formulated, according to the number of layers in the optimized structure, by applying the enumeration method together with the general design parameters. Numerical simulation is finally used to calculate the buckling limit of tanks produced by the different design methods. A composite tank with a cone-shaped cylinder body is taken as the example, and its ellipsoid head section and outer wall plate are selected as the objects used to validate the method. The results show that the quasi-network design method can improve the design quality of the composite layup in tanks with complex preliminary loading conditions. The adaptive simulated annealing algorithm reduces the initial design weight by 30%, effectively probing for the global optimum of the structural weight. It is therefore shown that this optimization method is capable of designing and optimizing specially shaped composite tanks under complex loading conditions.
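As a rough illustration of the layup optimization step, the sketch below implements a plain simulated annealing loop (not the adaptive variant used in the study) that perturbs the number of plies at each candidate angle to minimize weight subject to a buckling requirement; the surrogate buckling function, weights, and all numerical values are assumptions for illustration only.

```python
import math
import random

ANGLES = [0, 45, -45, 90]          # candidate ply angles (deg), assumed
PLY_WEIGHT = 0.12                  # weight per ply (kg), hypothetical
MIN_BUCKLING_FACTOR = 1.0          # required buckling load factor, hypothetical

def buckling_factor(counts):
    """Stand-in for the FE buckling analysis: more plies -> higher factor (toy model)."""
    return 0.08 * sum(counts) + 0.04 * counts[ANGLES.index(90)]

def penalized_cost(counts):
    # Total weight plus a large penalty for violating the (hypothetical) buckling requirement.
    violation = max(0.0, MIN_BUCKLING_FACTOR - buckling_factor(counts))
    return PLY_WEIGHT * sum(counts) + 1e3 * violation

def simulated_annealing(initial, n_iter=5000, t0=5.0, cooling=0.999):
    current, best, temp = list(initial), list(initial), t0
    for _ in range(n_iter):
        # Perturb: add or remove one ply at a randomly chosen angle.
        cand = list(current)
        i = random.randrange(len(ANGLES))
        cand[i] = max(0, cand[i] + random.choice([-1, 1]))
        delta = penalized_cost(cand) - penalized_cost(current)
        # Accept improvements always, worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if penalized_cost(current) < penalized_cost(best):
                best = list(current)
        temp *= cooling
    return best

print(simulated_annealing([10, 10, 10, 10]))   # optimized ply counts per angle
```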
Abstract:
Understanding how the brain matures in healthy individuals is critical for evaluating deviations from normal development in psychiatric and neurodevelopmental disorders. The brain's anatomical networks are profoundly re-modeled between childhood and adulthood, and diffusion tractography offers unprecedented power to reconstruct these networks and neural pathways in vivo. Here we tracked changes in structural connectivity and network efficiency in 439 right-handed individuals aged 12 to 30 (211 female/126 male adults, mean age=23.6, SD=2.19; 31 female/24 male 12-year-olds, mean age=12.3, SD=0.18; and 25 female/22 male 16-year-olds, mean age=16.2, SD=0.37). All participants were scanned with high angular resolution diffusion imaging (HARDI) at 4 T. After we performed whole brain tractography, 70 cortical gyral-based regions of interest were extracted from each participant's co-registered anatomical scans. The proportion of fiber connections between all pairs of cortical regions, or nodes, was computed to create symmetric fiber density matrices reflecting the structural brain network. From those 70 × 70 matrices we computed graph theory metrics characterizing structural connectivity. Several key global and nodal metrics changed across development, showing increased network integration, with some connections pruned and others strengthened. The increases and decreases in fiber density, however, were not distributed proportionally across the brain. The frontal cortex had a disproportionate number of decreases in fiber density, while the temporal cortex had a disproportionate number of increases in fiber density. This large-scale analysis of the developing structural connectome offers a foundation to develop statistical criteria for aberrant brain connectivity as the human brain matures.
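For readers unfamiliar with the graph-theory step, a minimal sketch of how global metrics can be computed from a symmetric fiber density matrix is given below. It uses the networkx library and a random, thresholded matrix as a stand-in for the 70 × 70 connectivity data, which are not reproduced here; the threshold and metric choices are illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in for a 70 x 70 symmetric fiber density matrix (real data not shown).
n = 70
m = rng.random((n, n))
fiber_density = (m + m.T) / 2.0
np.fill_diagonal(fiber_density, 0.0)
fiber_density[fiber_density < 0.6] = 0.0   # drop weak connections (arbitrary threshold)

# Build a weighted, undirected graph whose nodes are cortical regions.
G = nx.from_numpy_array(fiber_density)

# Example global metrics often used to characterize network integration.
strength = dict(G.degree(weight="weight"))
clustering = nx.average_clustering(G, weight="weight")
efficiency = nx.global_efficiency(G)   # note: this networkx metric ignores edge weights

print(f"mean nodal strength        : {np.mean(list(strength.values())):.3f}")
print(f"mean weighted clustering   : {clustering:.3f}")
print(f"global efficiency (binary) : {efficiency:.3f}")
```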
Abstract:
The thermally driven structural phase transition in the organic-inorganic hybrid perovskite (CnH2n+1NH3)2PbI4 has been investigated using molecular dynamics (MD) simulations. This system consists of positively charged alkyl-amine chains anchored to a rigid negatively charged PbI4 sheet, with the chains organized as bilayers in a herringbone arrangement. Atomistic simulations were performed using an isothermal-isobaric ensemble over a wide temperature range from 65 to 665 K for different alkyl chain lengths, n = 12, 14, 16, and 18. The simulations are able to reproduce the essential features of the experimental observations of this system, including the existence of a transition, the linear variation of the transition temperature with alkyl chain length, and the expansion of the bilayer thickness at the transition. By use of the distance fluctuation criterion, it is shown that the transition is associated with a melting of the alkyl chains of the anchored bilayer. An analysis of the conformation of the alkyl chains shows increased disorder in the form of gauche defects above the melting transition. Simulations also show that the melting transition is characterized by the complete disappearance of all-trans alkyl chains in the anchored bilayer, in agreement with experimental observations. A conformationally disordered chain has a larger effective cross-sectional area, and above the transition a uniformly tilted arrangement of the anchored chains can no longer be sustained. In the melt, the angular distribution of the orientations of the chains is no longer uniform; the chains are splayed, allowing increased space for individual chains of the anchored bilayer. This is reflected in a sharp rise in the ratio of the mean head-to-head to tail-to-tail distance of the chains of the bilayer at the transition, resulting in an expansion of the bilayer thickness. The present MD simulations provide a simple explanation of how changes in the conformation of individual alkyl chains give rise to the observed increase in the interlayer lattice spacing of (CnH2n+1NH3)2PbI4 at the melting transition.
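One common form of the distance-fluctuation (Lindemann-like) melting criterion evaluates the relative fluctuation of interatomic distances over the trajectory; a generic numpy sketch of that index is given below, with the toy trajectory, atom count, and the often-quoted ~0.1 threshold used purely for illustration.

```python
import numpy as np

def distance_fluctuation(positions):
    """
    positions: array of shape (n_frames, n_atoms, 3) from an MD trajectory.
    Returns the mean relative fluctuation of pair distances (Lindemann-like index).
    """
    n_frames, n_atoms, _ = positions.shape
    i, j = np.triu_indices(n_atoms, k=1)
    # Pair distances for every frame, shape (n_frames, n_pairs).
    d = np.linalg.norm(positions[:, i, :] - positions[:, j, :], axis=-1)
    return float(np.mean(np.sqrt(d.var(axis=0)) / d.mean(axis=0)))

# Toy example: jittered positions of the carbon backbone of one alkyl chain (hypothetical).
rng = np.random.default_rng(1)
ref = np.cumsum(rng.normal(1.5, 0.05, size=(18, 3)), axis=0)     # 18 atoms
traj = ref + rng.normal(0.0, 0.1, size=(200, 18, 3))              # 200 frames of jitter
print(f"distance fluctuation index: {distance_fluctuation(traj):.3f}")
# Values above roughly 0.1 are often taken to indicate melting.
```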
Abstract:
This paper presents methodologies for fracture analysis of concrete structural components with and without considering the tension softening effect. The stress intensity factor (SIF) is computed using an analytical approach and finite element analysis. In the analytical approach, the SIF accounting for the tension softening effect has been obtained as the difference between the SIF obtained using linear elastic fracture mechanics (LEFM) principles and the SIF due to the closing pressure. The superposition principle has been used, accounting for non-linearity in incremental form. The SIF due to the crack closing force applied on the effective crack face inside the process zone has been computed using a Green's function approach. In the finite element analysis, the domain integral method has been used for computation of the SIF. The domain integral method is used to calculate the strain energy release rate and SIF when a crack grows. Numerical studies have been conducted on notched 3-point bending concrete specimens with and without considering the cohesive stresses. It is observed from the studies that the SIF obtained from the finite element analysis with and without considering the cohesive stresses is in good agreement with the corresponding analytical values. The effect of cohesive stress on the SIF decreases with increasing crack length. Further, studies have been conducted on geometrically similar structures, and it is observed that (i) the effect of cohesive stress on the SIF is significant with increasing load for a particular crack length and (ii) the SIF decreases with increasing tensile strength for a particular crack length and load.
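In schematic form, the superposition used in the analytical approach can be written as below, where the cohesive contribution is obtained by integrating the closing pressure against a weight (Green's) function over the process zone; the symbols are generic and the exact form used in the paper may differ.

```latex
\[
K_I^{\mathrm{eff}} \;=\; K_I^{\mathrm{LEFM}} \;-\; K_I^{\mathrm{coh}},
\qquad
K_I^{\mathrm{coh}} \;=\; \int_{a_0}^{a_{\mathrm{eff}}} \sigma\!\bigl(w(x)\bigr)\, m(x, a_{\mathrm{eff}})\, \mathrm{d}x ,
\]
```

Here σ(w) is the softening (cohesive) stress as a function of crack opening w, m(x, a) is the weight function for the given geometry, and the integral runs over the process zone between the notch tip a0 and the effective crack tip a_eff.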
Abstract:
Since the discovery [1] of the gamma' precipitate (L1(2) - Co-3(Al, W)) in the Co-Al-W ternary system, there has been an increased interest in Co-based superalloys. Since these alloys have two-phase microstructures (gamma + gamma') similar to Ni-based superalloys [2], they are viable candidates for high temperature applications, particularly in land-based turbines. The role of alloying on the stability of the gamma' phase has been an active area of research. In this study, electronic structure calculations were done to probe the effect of alloying in Co3W with the L1(2) structure. Compositions of type Co-3(W, X) (where X = Mn, Fe, Ni, Pt, Cr, Al, Si, V, W, Ta, Ti, Nb, Hf, Zr and Mo) were studied. The effect of alloying on equilibrium lattice parameters and ground state energies was used to calculate Vegard's coefficients and site preference related data. The effect of alloying on the stability of the L1(2) structure vis-a-vis other geometrically close packed ordered structures was also studied for a range of Co3X compounds. Results suggest that the penchant of an element for the W sublattice can be predicted by comparing the heats of formation of Co3X in different structures.
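As a toy illustration of how a Vegard's coefficient can be extracted from such calculations, the sketch below fits a straight line to hypothetical equilibrium lattice parameters computed at several solute concentrations; the numbers are placeholders, not results from the study.

```python
import numpy as np

# Hypothetical equilibrium lattice parameters a(x) of Co3(W(1-x), X(x)), in angstrom.
x = np.array([0.00, 0.25, 0.50, 0.75, 1.00])        # fraction of W replaced by element X
a = np.array([3.570, 3.575, 3.581, 3.586, 3.592])   # placeholder DFT values

# Vegard's law: a(x) ~ a0 + k * x, with k the Vegard coefficient for element X.
k, a0 = np.polyfit(x, a, 1)
print(f"a0 = {a0:.4f} A, Vegard coefficient k = {k:.4f} A per unit concentration")
```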
Abstract:
STEEL, the Caltech-created nonlinear large displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher as well as the level of confidence in the model being analyzed is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as verify the conversion tools, a series of comparisons were done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a program more capable of conducting highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, two-bay chevron brace frame, and twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
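For the free-vibration checks mentioned above, a damping ratio can be recovered from successive displacement peaks via the logarithmic decrement; a small sketch of that calculation is shown below (the peak values are synthetic, and this is not the STEEL or ETABS implementation).

```python
import numpy as np

def damping_from_peaks(peaks):
    """Estimate the viscous damping ratio from successive free-vibration peak amplitudes."""
    peaks = np.asarray(peaks, dtype=float)
    # Logarithmic decrement averaged over all successive peak pairs.
    delta = np.mean(np.log(peaks[:-1] / peaks[1:]))
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# Synthetic roof-displacement peaks from a decaying free-vibration record (hypothetical units).
peaks = [1.00, 0.88, 0.775, 0.682, 0.600]
print(f"estimated damping ratio: {damping_from_peaks(peaks):.3%}")
```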
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and their results compared. The pushover curves from each program were compared, and the differences caused by variations in software implementation were explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.
In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.
The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
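A generic discounted expected-loss formulation of the kind referred to above (not necessarily the exact net asset value expression used in the study) can be written as:

```latex
\[
\mathrm{NAV} \;=\; B_0 \;-\; C_0 \;-\; \mathbb{E}\!\left[\sum_{i} L_i \, e^{-r t_i}\right],
\]
```

where C0 is the initial construction cost, B0 the present value of the benefits derived from the facility, Li the loss incurred in the i-th future earthquake occurring at (uncertain) time ti, and r the discount rate; the expectation is taken over the uncertain number, timing, and severity of future events.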
The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. It is a flexible one that readily allows incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.
Abstract:
Slender and lightweight pedestrian footbridges with modern architecture, featuring long spans and new materials, are common nowadays. This architectural boldness has generated numerous problems of excessive vibration, especially in composite (steel-concrete) footbridges. Design codes and recommendations still assume that the forces induced by human walking are deterministic. However, human walking and the dynamic forces it generates exhibit random behavior. Thus, the present research work aims to assist structural designers by employing a probabilistic approach to evaluate the serviceability limit state of this type of structure, associated with excessive vibrations that may cause human discomfort. To this end, a composite (steel-concrete) pedestrian footbridge built on the campus of the Instituto de Traumatologia e Ortopedia (INTO), in the city of Rio de Janeiro, is used as the structural model. Based on probabilistic methods, it becomes possible to determine the probability that the peak accelerations of the structure exceed the human comfort criteria established in design codes and recommendations. The results indicate that peak acceleration values calculated exclusively with deterministic methods may be overestimated in some design situations.
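A minimal Monte Carlo sketch of the kind of probabilistic check described above is given below: pedestrian weight and the dynamic load factor are sampled from assumed distributions, a simplified resonant single-mode estimate of the peak acceleration is computed, and the probability of exceeding a comfort limit is estimated. All distributions, modal properties, and the comfort limit are placeholders, not the values used for the INTO footbridge.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Assumed modal properties of the footbridge (placeholders).
modal_mass = 30_000.0      # kg
damping = 0.01             # modal damping ratio

# Random walking parameters (assumed distributions).
weight = rng.normal(700.0, 100.0, n_samples)     # pedestrian weight, N
dlf = rng.normal(0.4, 0.08, n_samples)            # dynamic load factor of the first harmonic

# Simplified resonant single-mode estimate of the peak acceleration (m/s^2).
a_peak = dlf * weight / (2.0 * damping * modal_mass)

limit = 0.5   # comfort limit in m/s^2 (placeholder)
print(f"P(peak acceleration > {limit} m/s^2) = {np.mean(a_peak > limit):.3f}")
```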
Abstract:
Inaccuracy in the aging of postovulatory follicles (POFs) and in estimating the effect of temperature on the resorption rate of POFs may introduce bias in the determination of the daily spawning age classes with the daily egg production method (DEPM). To explore the above two bias problems with field-collected European pilchard (Sardina pilchardus, known regionally as the Iberian sardine), a method was developed in which the time elapsed from spawning (POF age) was estimated from the size of POFs (i.e., from the cross-sectional area in histological sections). The potential effect of the preservative type and embedding material on POF size and the effect of ambient water temperature on POF resorption rate are taken into account with this method. A highly significant loglinear relationship was found between POF area and age; POF area shrank by approximately 50% per day. POFs were also shown to shrink faster at higher temperatures (approximately 3% per degree), but this temperature effect is unlikely to be an important source of bias in the assignment of females to daily spawning classes. The embedding material was also shown to influence the size of POFs, the latter being significantly larger in resin than in paraffin sections. In conclusion, the size of POFs provides an indirect, reliable estimation of the time elapsed from spawning and may thus be used to test both the validity of POF staging criteria for identifying daily classes of spawners and the effect of other factors (such as temperature and laboratory processing) in applications of the DEPM to S. pilchardus and other fish species.
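A log-linear relationship of the kind reported above can be fitted and inverted as sketched below; the data pairs are invented for illustration, but the slope is chosen so that POF area roughly halves each day, consistent with the ~50% per day shrinkage mentioned in the abstract.

```python
import numpy as np

# Hypothetical (age in days, POF cross-sectional area in um^2) pairs, for illustration only.
age = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 2.5])
area = 30_000.0 * 0.5 ** age * np.exp(np.random.default_rng(3).normal(0, 0.05, age.size))

# Log-linear model: ln(area) = b0 + b1 * age.
b1, b0 = np.polyfit(age, np.log(area), 1)
print(f"estimated daily shrinkage: {1 - np.exp(b1):.1%}")   # should be close to 50%

def estimate_age(observed_area):
    """Invert the fitted model to estimate the time elapsed from spawning (days)."""
    return (np.log(observed_area) - b0) / b1

print(f"estimated age of a 10,000 um^2 POF: {estimate_age(10_000):.2f} days")
```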
Abstract:
The Dependency Structure Matrix (DSM) has proved to be a useful tool for system structure elicitation and analysis. However, as with any modelling approach, the insights gained from analysis are limited by the quality and correctness of input information. This paper explores how the quality of data in a DSM can be enhanced by elicitation methods which include comparison of information acquired from different perspectives and levels of abstraction. The approach is based on comparison of dependencies according to their structural importance. It is illustrated through two case studies: creation of a DSM showing the spatial connections between elements in a product, and a DSM capturing information flows in an organisation. We conclude that considering structural criteria can lead to improved data quality in DSM models, although further research is required to fully explore the benefits and limitations of our proposed approach.
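One simple way to rank dependencies by a structural criterion, in the spirit of the approach described (though not necessarily the measure used by the authors), is to score each dependency by how many shortest dependency paths pass through it; the sketch below does this with edge betweenness centrality on a small, made-up DSM.

```python
import numpy as np
import networkx as nx

# Made-up 5-element DSM; here entry [i, j] = 1 is read as "element j depends on element i"
# (DSM row/column conventions vary, so this orientation is an assumption).
labels = ["A", "B", "C", "D", "E"]
dsm = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])

G = nx.from_numpy_array(dsm, create_using=nx.DiGraph)
G = nx.relabel_nodes(G, dict(enumerate(labels)))

# Edge betweenness as one possible proxy for structural importance:
# dependencies lying on many shortest dependency paths score higher.
importance = nx.edge_betweenness_centrality(G)
for (src, dst), score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{src} -> {dst}: {score:.3f}")
```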
Abstract:
The exponential growth of the world population has led to an increase of settlements often located in areas prone to natural disasters, including earthquakes. Consequently, despite the important advances in the field of natural catastrophe modelling and risk mitigation actions, overall human losses have continued to increase and unprecedented economic losses have been registered. In the research work presented herein, various areas of earthquake engineering and seismology are thoroughly investigated, and a case study application for mainland Portugal is performed. Seismic risk assessment is a critical link in the reduction of casualties and damage due to earthquakes. Recognition of this relation has led to a rapid rise in demand for accurate, reliable and flexible numerical tools and software. In the present work, an open-source platform for seismic hazard and risk assessment is developed. This software is capable of computing the distribution of losses or damage for an earthquake scenario (deterministic event-based) or earthquake losses due to all the possible seismic events that might occur within a region for a given interval of time (probabilistic event-based). This effort has been developed following an open and transparent philosophy and, therefore, it is available to any individual or institution. The estimation of seismic risk depends mainly on three components: seismic hazard, exposure and vulnerability. The latter component assumes special importance, as by intervening with appropriate retrofitting solutions it may be possible to decrease the seismic risk directly. The employment of analytical methodologies is fundamental in the assessment of structural vulnerability, particularly in regions where post-earthquake building damage data might not be available. Several common methodologies are investigated, and conclusions are drawn regarding the method that can provide an optimal balance between accuracy and computational effort. In addition, a simplified approach based on displacement-based earthquake loss assessment (DBELA) is proposed, which allows for the rapid estimation of fragility curves, considering a wide spectrum of uncertainties. A novel vulnerability model for the reinforced concrete building stock in Portugal is proposed in this work, using statistical information collected from hundreds of real buildings. An analytical approach based on nonlinear time history analysis is adopted, and the impact of a set of key parameters is investigated, including the damage state criteria and the chosen intensity measure type. A comprehensive review of previous studies that contributed to the understanding of the seismic hazard and risk for Portugal is presented. An existing seismic source model was employed with recently proposed attenuation models to calculate the probabilistic seismic hazard throughout the territory. These results are combined with information from the 2011 Building Census and the aforementioned vulnerability model to estimate economic loss maps for a return period of 475 years. These losses are disaggregated across the different building typologies, and conclusions are drawn regarding the types of construction most vulnerable to seismic activity.
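Fragility curves of the kind mentioned above are commonly expressed as lognormal cumulative distribution functions of the intensity measure; a generic sketch is shown below, with the median and dispersion values invented for illustration rather than taken from the proposed vulnerability model.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical fragility parameters for one building class: (median PGA in g, dispersion beta).
damage_states = {
    "slight":    (0.10, 0.45),
    "moderate":  (0.22, 0.50),
    "extensive": (0.40, 0.55),
    "collapse":  (0.65, 0.60),
}

pga = np.linspace(0.01, 1.2, 5)   # intensity measure levels (g)

for state, (median, beta) in damage_states.items():
    # P(damage state reached or exceeded | IM) modeled as a lognormal CDF of the IM.
    p = lognorm.cdf(pga, s=beta, scale=median)
    print(state, np.round(p, 3))
```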
Abstract:
We extend the class of M-tests for a unit root analyzed by Perron and Ng (1996) and Ng and Perron (1997) to the case where a change in the trend function is allowed to occur at an unknown time. These tests, denoted M(GLS), adopt the GLS detrending approach of Dufour and King (1991) and Elliott, Rothenberg and Stock (1996) (ERS). Following Perron (1989), we consider two models: one allowing for a change in slope and the other for both a change in intercept and slope. We derive the asymptotic distribution of the tests as well as that of the feasible point optimal tests PT(GLS) suggested by ERS. The asymptotic critical values of the tests are tabulated. Also, we compute the non-centrality parameter used for the local GLS detrending that permits the tests to have 50% asymptotic power at that value. We show that the M(GLS) and PT(GLS) tests have an asymptotic power function close to the power envelope. An extensive simulation study analyzes the size and power in finite samples under various methods to select the truncation lag for the autoregressive spectral density estimator. An empirical application is also provided.
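For reference, the M statistics analyzed in this class of tests are commonly written as follows, using GLS-detrended data ỹ_t and an autoregressive spectral density estimate s²_AR at frequency zero; the notation here follows the standard presentation rather than the paper's exact layout.

```latex
\[
MZ_\alpha = \frac{T^{-1}\tilde{y}_T^{\,2} - s_{AR}^{2}}{2\,T^{-2}\sum_{t=1}^{T}\tilde{y}_{t-1}^{\,2}},
\qquad
MSB = \left(\frac{T^{-2}\sum_{t=1}^{T}\tilde{y}_{t-1}^{\,2}}{s_{AR}^{2}}\right)^{1/2},
\qquad
MZ_t = MZ_\alpha \times MSB .
\]
```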
Abstract:
The results were obtained with the "Insight-2" software from Accelris (San Diego, CA).
Abstract:
Three dimensional (3D) composites are strong contenders for structural applications in situations such as the aerospace, aircraft and automotive industries, where multidirectional thermal and mechanical stresses exist. The presence of reinforcement along the thickness direction in 3D composites increases the through-the-thickness stiffness and strength properties. The 3D preforms can be manufactured with numerous complex architecture variations to meet the needs of specific applications. For hot structure applications, Carbon-Carbon (C-C) composites are generally used, whose property variation with respect to temperature is essential for carrying out the design of hot structures. The thermomechanical behavior of 3D composites is not fully understood and reported. The methodology to find the thermomechanical properties of 3D woven, 3D 4-axis braided and 3D 5-axis braided composites using analytical modelling of Representative Unit Cells (RUCs), based on constitutive equations for 3D composites, has been dealt with in the present study. High Temperature Unidirectional (UD) Carbon-Carbon material properties have been evaluated using analytical methods, viz., the Composite Cylinder Assemblage model and the Method of Cells, based on experiments carried out on Carbon-Carbon fabric composite for a temperature range of 300 K to 2800 K. These properties have been used for evaluating the 3D composite properties. From among the existing methods of solution sequences for 3D composites, the "3D Composite Strength Model" has been identified as the most suitable method. For the generation of the material properties of the RUCs of 3D composites, software has been developed using MATLAB. Correlation of the analytically determined properties with test results available in the literature has been established. Parametric studies on the variation of all the thermomechanical constants for different 3D preforms of Carbon-Carbon material have been carried out, and selection criteria have been formulated for their application to hot structures. The procedure for the structural design of hot structures made of 3D Carbon-Carbon composites has been established through numerical investigations on a Nosecap. Nonlinear transient thermal and nonlinear transient thermo-structural analyses of the Nosecap have been carried out using the finite element software NASTRAN. Failure indices have been established for the identified preforms, a suitable 3D composite has been identified based on parametric studies on strength properties, and this material has been recommended for the Nosecap of the RLV based on structural performance. Based on the 3D failure theory, the best preform for the Nosecap has been identified as the 4-axis 15-degree braided composite.
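As a much-simplified illustration of the UD property-generation step (not the Composite Cylinder Assemblage model or Method of Cells actually used in the study), the rule-of-mixtures sketch below estimates two unidirectional lamina constants from assumed fibre and matrix properties at a single temperature.

```python
# Simplified rule-of-mixtures estimate of UD lamina moduli (illustrative only).
def rule_of_mixtures(Ef, Em, Vf):
    """Return longitudinal (E1) and transverse (E2) moduli of a UD lamina."""
    Vm = 1.0 - Vf
    E1 = Ef * Vf + Em * Vm                 # Voigt (parallel) estimate along the fibres
    E2 = 1.0 / (Vf / Ef + Vm / Em)         # Reuss (series) estimate across the fibres
    return E1, E2

# Hypothetical carbon fibre / carbon matrix moduli (GPa) and fibre volume fraction.
E1, E2 = rule_of_mixtures(Ef=230.0, Em=12.0, Vf=0.55)
print(f"E1 = {E1:.1f} GPa, E2 = {E2:.1f} GPa")
```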
Abstract:
The remarkable difference between the nuclear quadrupole frequencies v_Q of Cu(1) and Cu(2) in YBa_2Cu_3O_6 and YBa_2Cu_3O_7 is analyzed. We calculate the ionic contribution to the electric field gradients and estimate, by using experimental results for Cu_2O and La_2CuO_4, the contribution of the d valence electrons. Thus, we determine v_Q1, v_Q2, and the asymmetry parameter η for YBa_2Cu_3O_6 and YBa_2Cu_3O_7. The number of holes in the Cu-O planes and chains is found to be important for the different behavior of v_Q1 and v_Q2.
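For the spin-3/2 Cu nuclei considered here, the nuclear quadrupole resonance frequency is related to the electric field gradient V_zz and the asymmetry parameter η by the standard expression below; it is given for orientation only, and the paper's detailed evaluation of the ionic and d-electron contributions is not reproduced.

```latex
\[
\nu_Q \;=\; \frac{e\,Q\,V_{zz}}{2h}\,\sqrt{1+\frac{\eta^{2}}{3}} ,
\]
```

where Q is the nuclear quadrupole moment of the Cu isotope and h is Planck's constant.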