894 results for Numerical approximation and analysis


Relevance:

100.00%

Publisher:

Abstract:

This paper presents three different numerical models for the evaluation of stresses in corrugated sheets under bending. Different approaches can be considered for the numerical simulations, i.e., a linear elastic analysis or a physically nonlinear analysis that includes failure criteria for the sheet material. Moreover, the finite element mesh can be built with shell elements or solid elements; the choice of finite element must be based on how well it represents the behavior to be simulated. Thus, the numerical modelling in this manuscript was performed with three-dimensional models using the SAP2000 Nonlinear software, version 7.42, which is based on the finite element method (FEM). In this case, shell elements were used to build the finite element mesh and a linear elastic analysis was adopted. Sheets 5 mm thick were evaluated considering three different longitudinal dimensions (spans), i.e., 1100 mm, 1530 mm and 1830 mm. The load applied to the models was 2500 N/m, and it was verified that the support spans of the sheets have a significant influence on the resulting stresses: sheets with larger spans present larger stresses under the same applied load. The most intense tensile stresses occur in the troughs (low waves) of the sheets, on the lower surface, while the most intense compressive stresses occur in the crests (high waves), on the upper surface of the sheet. The flanks, which are the parts between the troughs and the crests of the sheets, are subjected to low stress levels. The numerical results for the stresses showed good agreement with results obtained by other researchers(3) and can be used to predict the behavior of corrugated sheets under bending.
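
As a rough plausibility check of the span dependence reported above, the sketch below treats a strip of the sheet as a simply supported beam under the stated 2500 N/m load and evaluates the midspan bending stress for the three spans. The section modulus is an arbitrary assumption, since the corrugation geometry is not given here, so only the trend with span is meaningful.

```python
# Hand-check of the trend in the abstract: for a simply supported strip under a
# uniformly distributed load q, the maximum bending moment is M = q*L**2/8, so
# the stress grows with the square of the span. The section modulus W of the
# corrugated profile is a made-up placeholder (it depends on the corrugation
# geometry, which is not given in the text).

q = 2500.0          # N/m, distributed load from the abstract
W = 2.0e-5          # m^3 per metre of width, ASSUMED section modulus

for L_mm in (1100, 1530, 1830):     # spans evaluated in the paper
    L = L_mm / 1000.0               # m
    M = q * L**2 / 8.0              # N*m, maximum moment at midspan
    sigma = M / W                   # Pa, maximum bending stress (tension at the troughs)
    print(f"span {L_mm} mm: M = {M:.1f} N*m, sigma = {sigma/1e6:.2f} MPa")
```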

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

An excitation force that is not influenced by the system's states is said to be an ideal energy source. In real situations, a direct and feedback coupling between the excitation source and the system always exists. This manifestation of the law of conservation of energy is known as the Sommerfeld effect. When a mathematical model is built for such a system, additional equations are usually necessary to describe the vibration sources and their coupling with the mechanical system. In this work, a cantilever beam with a non-ideal electric DC motor fixed to its free end is analyzed. The motor carries an unbalanced mass that excites the system in proportion to the current applied to the motor. During the motor's coast-up operation, as the excitation frequency approaches the first natural frequency of the beam and the drive power is increased further, the DC motor speed remains nearly constant until it suddenly jumps to a much higher value (while the vibration amplitude simultaneously jumps to a much lower value) upon exceeding a critical input power. It was found that the Sommerfeld effect depends on some system parameters and on the motor operating procedures. These parameters are explored to avoid resonance capture in the Sommerfeld effect. Numerical simulations and experimental tests are used to provide insight into this dynamic behavior.
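
A minimal sketch of a non-ideal (limited power supply) oscillator of the kind used to study the Sommerfeld effect is given below: a single beam mode coupled to a DC motor carrying an unbalanced mass, with a linearised torque-speed characteristic. This is a generic textbook-style model, not the specific model or data of the work, and all parameter values are illustrative.

```python
# Generic non-ideal oscillator: beam modal coordinate x coupled to a motor with
# rotor angle phi, unbalanced mass m0 at radius r, and limited drive torque
# T(phi_dot) = a_t - b_t*phi_dot. All values below are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.05, 100.0      # modal mass, damping, stiffness (assumed)
J, m0, r = 0.05, 0.1, 0.05      # rotor inertia, unbalanced mass, eccentricity
a_t, b_t = 0.6, 0.05            # linearised motor torque coefficients

def rhs(t, y):
    x, xdot, phi, phidot = y
    s, co = np.sin(phi), np.cos(phi)
    # Coupled equations solved for the two accelerations (2x2 linear system):
    #   m*xddot + c*xdot + k*x = m0*r*(phiddot*s + phidot**2*co)
    #   J*phiddot              = (a_t - b_t*phidot) + m0*r*xddot*s
    A = np.array([[m, -m0 * r * s],
                  [-m0 * r * s, J]])
    b = np.array([-c * xdot - k * x + m0 * r * phidot**2 * co,
                  a_t - b_t * phidot])
    xddot, phiddot = np.linalg.solve(A, b)
    return [xdot, xddot, phidot, phiddot]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-2)
print("final rotor speed ~", sol.y[3, -1], "rad/s")   # compare with sqrt(k/m) = 10 rad/s
```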

Relevance:

100.00%

Publisher:

Abstract:

Sparse traffic grooming is a practical problem to be addressed in heterogeneous multi-vendor optical WDM networks in which only some of the optical cross-connects (OXCs) have grooming capabilities. Such a network is called a sparse grooming network. The sparse grooming problem under dynamic traffic in optical WDM mesh networks is a relatively unexplored problem. In this work, we propose the maximize-lightpath-sharing multi-hop (MLS-MH) grooming algorithm to support dynamic traffic grooming in sparse grooming networks. We also present an analytical model to evaluate the blocking performance of the MLS-MH algorithm. Simulation results show that MLS-MH outperforms an existing grooming algorithm, the shortest-path single-hop (SPSH) algorithm. The numerical results from the analytical model closely match the simulation results. The effect of the number of grooming nodes in the network on the blocking performance is also analyzed.
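
For readers unfamiliar with multi-hop grooming, the sketch below illustrates the general idea (not the MLS-MH algorithm itself): a sub-wavelength request can ride over several existing lightpaths with spare capacity, but it may only be switched between lightpaths at grooming-capable nodes. The lightpaths, capacities and grooming-node set are toy data.

```python
# Generic illustration of sparse, multi-hop grooming (NOT the paper's MLS-MH
# algorithm): BFS over existing lightpaths with enough spare capacity, where
# intermediate hops are allowed only at grooming-capable OXCs.
from collections import deque

# existing lightpaths: (source, destination, free capacity in traffic units)
lightpaths = [("A", "B", 3), ("B", "D", 1), ("A", "C", 2), ("C", "D", 2)]
grooming_nodes = {"B", "C"}          # sparse grooming: only some OXCs can groom

def find_route(src, dst, demand):
    """Return a list of lightpaths carrying the request, or None if blocked."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        if path and node not in grooming_nodes:
            continue                  # cannot switch lightpaths at this node
        for (u, v, cap) in lightpaths:
            if u == node and cap >= demand and v not in seen:
                seen.add(v)
                queue.append((v, path + [(u, v)]))
    return None                       # request is blocked

print(find_route("A", "D", 2))        # expected: [('A', 'C'), ('C', 'D')]
```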

Relevance:

100.00%

Publisher:

Abstract:

In the 1990s, Warrick and Hussen developed a method to scale Richards' equation (RE) for similar soils. In this paper, new scaled solutions are added to the method of Warrick and Hussen to cover a wider range of soils regardless of their dissimilarity. Gardner-Kozeny hydraulic functions are adopted instead of the Brooks-Corey functions used originally by Warrick and Hussen; these functions reduce the dependence of the scaled RE on the soil properties. To evaluate the proposed method (PM), the scaled RE was solved numerically using a finite difference method with a fully implicit scheme. Three cases were considered: constant-head infiltration, constant-flux infiltration, and drainage of an initially uniform wet soil. The results for five texturally different soils ranging from sand to clay (adopted from the literature) showed that the scaled solutions were invariant to a satisfactory degree, although slight deviations were observed, mainly for the sandy soil. Moreover, the scaled solutions deviated when the soil profile was initially wet in the infiltration case or deeply wet in the drainage case. Based on the PM, a Philip-type model was also developed to approximate RE solutions for constant-head infiltration. The model showed good agreement with the scaled RE for the same range of soils and conditions, although only for Gardner-Kozeny soils. Such a procedure reduces numerical calculations and provides additional opportunities for solving the highly nonlinear RE for unsaturated water flow in soils.
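
For context, the snippet below shows a Gardner-type exponential conductivity function and a Philip-type two-term infiltration model of the general kind referred to above. It is an illustrative sketch only; the parameter values (Ks, alpha, sorptivity S) are assumed and are not the paper's scaled quantities.

```python
# Illustrative sketch: Gardner exponential conductivity K(h) = Ks*exp(alpha*h)
# and a Philip-type two-term cumulative infiltration I(t) = S*sqrt(t) + A*t
# (A taken as Ks here). All parameter values are invented for the example.
import numpy as np

Ks = 1.0e-5        # m/s, ASSUMED saturated hydraulic conductivity
alpha = 3.0        # 1/m, ASSUMED Gardner parameter
S = 2.0e-4         # m/s^0.5, ASSUMED sorptivity

def gardner_K(h):
    """Unsaturated conductivity for pressure head h <= 0 (Gardner form)."""
    return Ks * np.exp(alpha * np.minimum(h, 0.0))

def philip_cumulative_infiltration(t):
    """Philip two-term approximation of cumulative infiltration (m)."""
    return S * np.sqrt(t) + Ks * t

for t in (60.0, 600.0, 3600.0):                      # seconds
    print(t, "s ->", philip_cumulative_infiltration(t), "m infiltrated")
print("K at h = -0.5 m:", gardner_K(-0.5), "m/s")
```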

Relevance:

100.00%

Publisher:

Abstract:

Objective: The aim of this study was to evaluate the correspondence between gap formation and apical microleakage in root canals filled with an epoxy resin-based sealer (AH Plus), combined or not with a resinous primer, or with a dimethacrylate-based root canal sealer (Epiphany). Material and Methods: Thirty-nine single-rooted human lower premolars were filled by the lateral condensation technique (LC) and immersed in a 50 wt% aqueous silver nitrate solution at 37°C for 24 h. After longitudinal sectioning, epoxy resin replicas were made from the tooth specimens. Both the replicas and the specimens were prepared for scanning electron microscopy (SEM). Gaps were observed in the replicas, and apical microleakage was detected in the specimens by SEM/energy-dispersive spectroscopy (SEM/EDS). The data were analyzed statistically using an ordinal logistic regression model and correspondence analysis (α=0.05). Results: Epiphany presented more regions containing gaps between dentin and sealer (p<0.05). There was a correspondence between the presence of gaps and microleakage (p<0.05). Microleakage was similar among the root-filling materials (p>0.05). Conclusions: The resinous primer did not improve the sealing ability of the AH Plus sealer, and the presence of gaps had an effect on apical microleakage for all materials.

Relevance:

100.00%

Publisher:

Abstract:

The flow around a smooth fixed circular cylinder over a large range of Reynolds numbers is considered in this paper. In order to investigate this canonical case, we perform CFD calculations and apply verification and validation (V&V) procedures to draw conclusions regarding the numerical error and, afterwards, to assess the modeling errors and the capability of this (U)RANS method to solve the problem. Eight Reynolds numbers between Re = 10 and Re = 5 × 10^5 are presented with at least four geometrically similar grids and five time-step discretizations for each case (when unsteady), together with strict control of iterative and round-off errors, allowing a consistent verification analysis with uncertainty estimation. Two-dimensional RANS calculations, steady or unsteady, laminar or turbulent, are performed. The original 1994 k-ω SST turbulence model by Menter is used to model turbulence. The validation procedure is performed by comparing the numerical results with an extensive set of experimental results compiled from the literature. [DOI: 10.1115/1.4007571]
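
A typical ingredient of such a verification analysis is the estimation of the observed order of accuracy and of the discretization uncertainty from geometrically similar grids. The sketch below applies standard Richardson extrapolation and a Roache-type Grid Convergence Index to three invented drag-coefficient values; it illustrates the idea only and is not necessarily the exact uncertainty estimator used in this paper.

```python
# Grid-refinement (Richardson extrapolation) check from three geometrically
# similar grids with constant refinement ratio r. The Cd values are invented.
import math

r = 2.0                                          # refinement ratio (assumed)
f_fine, f_med, f_coarse = 1.185, 1.205, 1.260    # ASSUMED Cd on fine/medium/coarse grids

# observed order of accuracy from the three solutions
p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

# Richardson-extrapolated estimate of the grid-independent value
f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)

# Grid Convergence Index on the fine grid, safety factor 1.25 (Roache)
gci_fine = 1.25 * abs((f_fine - f_med) / f_fine) / (r**p - 1.0)

print(f"observed order p = {p:.2f}")
print(f"extrapolated Cd  = {f_exact:.4f}")
print(f"GCI (fine grid)  = {100*gci_fine:.2f} %")
```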

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the prevalence of urinary excretion of BKV and JCV in HIV-infected patients without neurological symptoms. METHODS: Urine samples from HIV-infected patients without neurological symptoms were tested for JC virus and BK virus by PCR. Samples were screened for the presence of polyomavirus with sets of primers complementary to the early region of the JCV and BKV genome (AgT). The presence of JC virus or BK virus was confirmed by two other PCR assays using sets of primers complementary to the VP1 gene of each virus. The data were analyzed with the Kruskal-Wallis test for numerical variables and the Pearson or Yates chi-square test for categorical variables. RESULTS: A total of 75 patients were included in the study. The overall prevalence of polyomavirus DNA urinary shedding was 67/75 (89.3%). Only BKV DNA was detected in 14/75 (18.7%) urine samples, and only JCV DNA was detected in 11/75 (14.7%) samples. Both BKV and JCV DNA were present in 42/75 (56.0%) samples. CONCLUSION: In this study we found high rates of urinary excretion of JCV and BKV, as well as simultaneous excretion, in HIV-positive patients. These results also differ from others available in the literature.
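
For illustration only, the snippet below runs the two kinds of tests named above on invented toy data: a Kruskal-Wallis test for a numerical variable across groups and a chi-square test with Yates continuity correction for a 2x2 table. It does not reproduce the study's analysis or data.

```python
# Toy example of the statistical tests mentioned in the abstract; all numbers
# are invented and are not the study's data.
from scipy import stats

# a numerical variable (e.g. hypothetical CD4 counts) in three shedding groups
group_a = [250, 310, 400, 520, 190]
group_b = [360, 410, 290, 480]
group_c = [150, 220, 330, 270, 300]
h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")

# 2x2 table: shedding (yes/no) versus a hypothetical categorical factor
table = [[42, 14],
         [11, 8]]
chi2, p_chi, dof, _ = stats.chi2_contingency(table, correction=True)  # Yates correction
print(f"Chi-square (Yates): chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")
```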

Relevance:

100.00%

Publisher:

Abstract:

Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and information interpretation from exchanged data.
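
A minimal sketch of the connector-with-transformation-rules idea follows: a connector applies an ordered list of rules that rename fields and normalise values so that a downstream analysis tool sees a common schema. The field names and the log2 normalisation are invented for illustration and are not the reference ontology or the connectors developed in this work.

```python
# Illustrative software connector: each exchanged record passes through an
# ordered list of transformation rules before reaching the target tool.
import math
from typing import Callable, Dict, List

Record = Dict[str, object]

class Connector:
    """Applies transformation rules, in order, to every exchanged record."""
    def __init__(self, rules: List[Callable[[Record], Record]]):
        self.rules = rules

    def transfer(self, records: List[Record]) -> List[Record]:
        out = []
        for rec in records:
            for rule in self.rules:
                rec = rule(rec)
            out.append(rec)
        return out

def rename_probe_id(rec: Record) -> Record:
    """Rename a source-specific field to the common schema (hypothetical names)."""
    rec = dict(rec)
    rec["gene_id"] = rec.pop("probe_set_id")
    return rec

def log2_intensity(rec: Record) -> Record:
    """Normalise a raw intensity value to log2 scale (hypothetical rule)."""
    rec = dict(rec)
    rec["log2_expression"] = math.log2(rec.pop("raw_intensity"))
    return rec

source_records = [{"probe_set_id": "AFFX_001", "raw_intensity": 512.0}]
print(Connector([rename_probe_id, log2_intensity]).transfer(source_records))
# -> [{'gene_id': 'AFFX_001', 'log2_expression': 9.0}]
```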

Relevance:

100.00%

Publisher:

Abstract:

Isogeometric analysis (IGA) has arisen as an attempt to unify the fields of CAD and classical finite element methods. The main idea of IGA consists in using for analysis the same functions (splines) that are used in the CAD representation of the geometry. The main advantages with respect to the traditional finite element method are a higher smoothness of the numerical solution and a more accurate representation of the geometry. IGA seems to be a promising tool with a wide range of applications in engineering. However, this relatively new technique has some open problems that require a solution. In this work we present our results and contributions to this issue…
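
Since the splines shared by CAD and IGA are built from B-spline basis functions, the sketch below evaluates them with the standard Cox-de Boor recursion (textbook form, not code from the work itself); the knot vector and degree are arbitrary illustrative choices.

```python
# Cox-de Boor recursion for B-spline basis functions, the building blocks of
# the splines used both for CAD geometry and for IGA analysis.
def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * \
                bspline_basis(i + 1, p - 1, u, knots)
    return left + right

knots = [0, 0, 0, 0.5, 1, 1, 1]        # open knot vector, quadratic (p = 2)
u = 0.25
values = [bspline_basis(i, 2, u, knots) for i in range(len(knots) - 2 - 1)]
print(values, "sum =", sum(values))    # partition of unity: the sum is 1
```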

Relevance:

100.00%

Publisher:

Abstract:

This PhD thesis summarizes a three-year study on the neutronic investigation of a new-concept nuclear reactor aimed at the optimization and sustainable management of nuclear fuel in a possible European scenario. A new-generation nuclear reactor for the nuclear renaissance is indeed desired by the industrialized world, both to address the energy question arising from the continuously growing energy demand and the corresponding reduction in oil availability, and to address the environmental question of a sustainable energy source free from long-lived radioisotopes and therefore from geological repositories. Among the Generation IV candidate typologies, the Lead Fast Reactor concept has been pursued, being the one rated highest in sustainability. The European Lead-cooled SYstem (ELSY) was investigated first. The neutronic analysis of the ELSY core was performed via deterministic analysis by means of the ERANOS code, in order to obtain a stable configuration for the overall design of the reactor. Further analyses were carried out by means of the general-purpose Monte Carlo transport code MCNP, in order to check the former results and to define an exact model of the system. An innovative system of absorbers was conceptualized and designed both for the reactivity compensation and regulation of the core over the cycle swing, and for safety, in order to guarantee the cold shutdown of the system in case of accident. Aiming at the sustainability of nuclear energy, the steady-state nuclear equilibrium was investigated and generalized into the definition of the "extended" equilibrium state. On this basis, the Adiabatic Reactor Theory was developed, together with a New Paradigm for Nuclear Power: in order to design a reactor that does not exchange anything valuable with the environment (hence the term "adiabatic"), in terms of both plutonium and minor actinides, the usual logical scheme of core design must be inverted, starting from the definition of the equilibrium composition of the fuel and subordinating the whole core design to it. The New Paradigm was then applied to the core design of an Adiabatic Lead Fast Reactor (ALFR) complying with the ELSY overall system layout. A complete core characterization was carried out in order to assess criticality and power flattening; a preliminary evaluation of the main safety parameters was also performed to verify the viability of the system. Burn-up calculations were then performed in order to investigate the operating cycle of the Adiabatic Lead Fast Reactor; the fuel performances were extracted and used in a more general analysis for a European scenario. The present fleet of nuclear reactors was modeled and its evolution simulated by means of the COSI code in order to investigate the material fluxes to be managed in the European region. Different plausible scenarios were identified to forecast the evolution of European nuclear energy production, including one involving the introduction of Adiabatic Lead Fast Reactors, and were compared in order to better analyze the advantages introduced by the adoption of new-concept reactors.
Finally, since both ELSY and the ALFR represent new-concept systems based upon innovative solutions, the neutronic design of a demonstrator reactor was also carried out: such a system is intended to prove the viability of the technology to be implemented in the first-of-a-kind industrial power plant and to validate, to the largest possible extent, the general design strategy. The DEMO design was therefore based on a compromise between the demonstration of developed technology and the testing of emerging technology, in order to reduce the uncertainties related to construction and licensing, to validate the main ELSY/ALFR features and performances, and to qualify numerical codes and tools.

Relevance:

100.00%

Publisher:

Abstract:

The term Congenital Nystagmus (Early Onset Nystagmus or Infantile Nystagmus Syndrome) refers to a pathology characterised by an involuntary movement of the eyes, which often seriously reduces a subject's vision. Congenital Nystagmus (CN) is a specific kind of nystagmus within the wider classification of infantile nystagmus; it is best recognized and classified by means of a combination of clinical investigations and motility analysis, and in some cases eye movement recording and analysis are indispensable for diagnosis. However, the interpretation of eye movement recordings still lacks complete reliability; hence new analysis techniques and the precise identification of concise parameters directly related to visual acuity are necessary to further support physicians' decisions. To this aim, an index computed from eye movement recordings and related to the visual acuity of a subject is proposed in this thesis. This estimator is based on two parameters: the time spent by a subject effectively viewing a target (foveation time, Tf) and the standard deviation of the eye position (SDp). Moreover, since previous studies have shown that visual acuity largely depends on SDp, a pilot data collection study was also conducted with the purpose of identifying a possible slow rhythmic component in the eye position and of characterising SDp in more detail; the results are presented in this thesis. In addition, some oculomotor system models are reviewed and a new approach to those models, i.e. the recovery of periodic orbits of the oculomotor system in patients with CN, is tested on real patient data. In conclusion, the results obtained within this research allow the slow rhythmic component sometimes present in the eye position recordings of CN subjects to be completely and reliably characterised, and the different kinds of CN waveforms to be better classified. These findings can successfully support clinicians in therapy planning and treatment outcome evaluation.
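
As an illustration of the two quantities the proposed index builds on, the sketch below computes a foveation time Tf and an eye-position standard deviation SDp from a synthetic sawtooth trace; the trace and the position/velocity thresholds are assumptions for the example, not the thesis' actual criteria or data.

```python
# Illustrative computation of foveation time (Tf: time spent with the eye near
# the target and nearly still) and SDp (standard deviation of eye position).
# The synthetic trace and both thresholds are assumptions for the example.
import numpy as np

fs = 500.0                                    # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1.0 / fs)               # 2 s of "recording"
eye_pos = 1.5 * ((t * 2.0) % 1.0) - 0.75      # deg, jerk-like slow drift + fast resets
eye_vel = np.gradient(eye_pos, 1.0 / fs)      # deg/s, numerical derivative

POS_WINDOW = 0.5                              # deg, ASSUMED foveation position window
VEL_LIMIT = 4.0                               # deg/s, ASSUMED foveation velocity limit

foveating = (np.abs(eye_pos) < POS_WINDOW) & (np.abs(eye_vel) < VEL_LIMIT)
Tf = foveating.sum() / fs                     # seconds spent foveating
SDp = eye_pos[foveating].std()                # deg, position spread during foveation

print(f"Tf  = {Tf:.2f} s of {t[-1] + 1/fs:.1f} s")
print(f"SDp = {SDp:.2f} deg")
```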