970 results for Computational methods
Abstract:
Two methods to evaluate the state transition matrix are implemented and analyzed to verify the computational cost and the accuracy of both methods. This evaluation represents one of the highest computational costs in the artificial satellite orbit determination task. The first method is an approximation of the Keplerian motion, providing an analytical solution which is then calculated numerically by solving Kepler's equation. The second one is a local numerical approximation that includes the effect of J2. The analysis is performed by comparing these two methods with a reference generated by a numerical integrator. For small time intervals (1 to 10 s), and when more accuracy is needed, the second method is recommended, since its CPU time does not excessively burden the orbit determination procedure. For larger time intervals, and when more stability in the calculation is expected, the first method is recommended.
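The first method ultimately reduces to solving Kepler's equation, M = E - e*sin(E), for the eccentric anomaly E at each evaluation. A minimal Newton-Raphson sketch of that step (not the authors' implementation; the tolerance and starting guess are assumptions):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    using Newton-Raphson iteration (elliptic orbits, 0 <= e < 1)."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M    # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)     # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Example: mean anomaly of 1 rad, eccentricity 0.1
print(solve_kepler(1.0, 0.1))
```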
Abstract:
Biometrics is one of the biggest trends in human identification, and the fingerprint is the most widely used biometric trait. However, it is a common mistake to consider automatic fingerprint recognition a completely solved problem. The most popular and extensively used methods, the minutiae-based ones, do not perform well on poor-quality images or when only a small area of overlap exists between the template and the query images. The use of multibiometrics is considered one of the keys to overcoming these weaknesses and improving the accuracy of biometric systems. This paper presents the fusion of a minutiae-based and a ridge-based fingerprint recognition method at rank, decision and score level. The fusion techniques implemented led to a reduction of the Equal Error Rate by 31.78% (from 4.09% to 2.79%) and a decrease of 6 positions in the rank needed to reach Correct Retrieval (from rank 8 to rank 2) when assessed on the FVC2002-DB1A database. © 2008 IEEE.
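As an illustration of score-level fusion, a generic min-max normalization followed by a weighted sum is sketched below (this is a textbook rule, not necessarily the scheme used in the paper; the score ranges and weight are assumptions):

```python
def minmax_normalize(score, lo, hi):
    """Map a matcher score into [0, 1] given that matcher's observed score range."""
    return (score - lo) / (hi - lo)

def fused_score(minutiae_score, ridge_score,
                minutiae_range=(0.0, 100.0), ridge_range=(0.0, 1.0), w=0.5):
    """Weighted-sum fusion of two normalized matcher scores."""
    s1 = minmax_normalize(minutiae_score, *minutiae_range)
    s2 = minmax_normalize(ridge_score, *ridge_range)
    return w * s1 + (1.0 - w) * s2

# Example: combine a minutiae matcher score of 62 with a ridge matcher score of 0.71
print(fused_score(62.0, 0.71))
```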
Abstract:
Because the biomechanical behavior of dental implants differs from that of natural teeth, clinical problems may occur. The mechanism of stress distribution and load transfer to the implant/bone interface is a critical issue affecting the success rate of implants. Therefore, the aim of this study was to conduct a brief literature review of the available stress analysis methods used to study implant-supported prosthesis loading and to discuss their contributions to the biomechanical evaluation of oral rehabilitation with implants. Several studies have used experimental, analytical, and computational models, by means of the finite element method (FEM), photoelasticity, strain gauges and associations of these methods, to evaluate the biomechanical behavior of dental implants. FEM has been used to evaluate new components, configurations, materials, and shapes of implants. The greatest advantage of the photoelastic method is the ability to visualize the stresses in complex structures, such as oral structures, and to observe the stress patterns in the whole model, allowing the researcher to localize and quantify the stress magnitude. Strain gauges can be used to assess in vivo and in vitro stress in prostheses, implants, and teeth. Some authors combine the strain gauge technique with photoelasticity or FEM. These methodologies can be widely applied in dentistry, mainly in the research field, and can therefore guide further research and clinical studies by anticipating some disadvantages and saving clinical time.
Abstract:
In this paper we report on a search for short-duration gravitational wave bursts in the frequency range 64 Hz-1792 Hz associated with gamma-ray bursts (GRBs), using data from GEO 600 and one of the LIGO or Virgo detectors. We introduce the method of a linear search grid to analyze GRB events with large sky localization uncertainties, for example the localizations provided by the Fermi Gamma-ray Burst Monitor (GBM). Coherent searches for gravitational waves (GWs) can be computationally intensive when the GRB sky position is not well localized, due to the corrections required for the difference in arrival time between detectors. Using a linear search grid we are able to reduce the computational cost of the analysis by a factor of O(10) for GBM events. Furthermore, we demonstrate that our analysis pipeline can improve upon the sky localization of GRBs detected by the GBM, if a high-frequency GW signal is observed in coincidence. We use the method of the linear grid in a search for GWs associated with 129 GRBs observed by satellite-based gamma-ray experiments between 2006 and 2011. The GRBs in our sample had not been previously analyzed for GW counterparts. A fraction of our GRB events are analyzed using data from GEO 600 while the detector was using squeezed-light states to improve its sensitivity; this is the first search for GWs using data from a squeezed-light interferometric observatory. We find no evidence for GW signals, either with any individual GRB in this sample or with the population as a whole. For each GRB we place lower bounds on the distance to the progenitor, under an assumption of a fixed GW emission energy of 10^-2 M_⊙ c^2, with a median exclusion distance of 0.8 Mpc for emission at 500 Hz and 0.3 Mpc at 1 kHz. The reduced computational cost associated with a linear search grid will enable rapid searches for GWs associated with Fermi GBM events once the advanced LIGO and Virgo detectors begin operation.
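The arrival-time corrections that drive the cost of a coherent search come from the light-travel-time difference between detector sites for a given sky direction, Δt = (r1 - r2) · n̂ / c. A minimal sketch of that geometry (illustrative, approximate site vectors; not the pipeline's actual code):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def sky_direction(ra, dec):
    """Unit vector toward a sky position given right ascension and declination
    (radians), in an Earth-fixed frame at a fixed reference time (a simplification
    that ignores Earth rotation, for illustration only)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def arrival_time_delay(r1, r2, ra, dec):
    """Difference in gravitational-wave arrival time between two detector sites
    (position vectors r1, r2 in metres) for a source at (ra, dec)."""
    n = sky_direction(ra, dec)
    return np.dot(r1 - r2, n) / C

# Illustrative (approximate) site vectors in metres from the geocentre
hanford = np.array([-2.16e6, -3.83e6, 4.60e6])
geo600 = np.array([3.86e6, 0.67e6, 5.03e6])
print(arrival_time_delay(hanford, geo600, ra=1.2, dec=-0.3))
```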
Abstract:
Purpose - The purpose of this paper is twofold: to analyze the computational complexity of the cogeneration design problem, and to present an expert system to solve the proposed problem, comparing such an approach with the traditional search methods available.
Design/methodology/approach - The complexity of the cogeneration problem is analyzed through a reduction from the well-known knapsack problem. Both problems are formulated as decision problems and it is proven that the cogeneration problem is NP-complete. Thus, several search approaches, such as population heuristics and dynamic programming, could be used to solve the problem. Alternatively, a knowledge-based approach is proposed by presenting an expert system and its knowledge representation scheme.
Findings - The expert system is executed considering two case studies. In the first, a cogeneration plant should meet power, steam, chilled water and hot water demands; the expert system presented two different solutions based on high-complexity thermodynamic cycles. In the second case study the plant should meet only power and steam demands; the system presents three different solutions, one of which had never been considered before by our consultant expert.
Originality/value - The expert system approach is not a "blind" method, i.e. it generates solutions based on actual engineering knowledge instead of the search strategies of traditional methods. This means that the system is able to explain its choices, making the design rationale available for each solution. This is the main advantage of the expert system approach over traditional search methods. On the other hand, the expert system quite likely does not provide an actual optimal solution; all it can provide is one or more acceptable solutions.
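For reference, the decision version of the knapsack problem used in the reduction asks whether a subset of items with weights w_i and values v_i fits within capacity W while reaching a target value V. A minimal dynamic-programming sketch of that check (illustrative only; the paper's reduction details are not reproduced here):

```python
def knapsack_decision(weights, values, capacity, target):
    """Return True if some subset of items fits within `capacity`
    and has total value at least `target` (0/1 knapsack, DP over capacity)."""
    best = [0] * (capacity + 1)  # best[c] = max value achievable with capacity c
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # go backwards so each item is used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity] >= target

# Example: items (weight, value) = (3, 4), (4, 5), (2, 3); capacity 6, target 8
print(knapsack_decision([3, 4, 2], [4, 5, 3], 6, 8))  # True: items 2 and 3 give value 8
```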
Abstract:
Modeling is a necessary step in performing a finite element analysis. Different methods of model construction are reported in the literature, such as Bio-CAD modeling. The purpose of this study was to evaluate and apply two methods of Bio-CAD modeling of a human edentulous hemi-mandible in finite element analysis. A stereolithographic model was reconstructed from CT scans of a dried human skull. Two modeling methods were used: an STL conversion approach associated with STL simplification (Model 1) and a reverse engineering approach (Model 2). For the finite element analysis, the action of the lateral pterygoid muscle was used as the loading condition to assess total displacement (D), equivalent von Mises stress (VM) and maximum principal stress (MP). The two models presented differences in geometry regarding the number of surfaces (1834 for Model 1; 282 for Model 2). Differences were also observed in the finite element mesh regarding the number of nodes and elements (30428 nodes/16683 elements for Model 1; 15801 nodes/8410 elements for Model 2). The D, VM and MP stress areas presented similar distributions in the two models, but the maximum and minimum values differed: D ranged from 0 to 0.511 mm (Model 1) and from 0 to 0.544 mm (Model 2), VM stress from 6.36E-04 to 11.4 MPa (Model 1) and from 2.15E-04 to 14.7 MPa (Model 2), and MP stress from -1.43 to 9.14 MPa (Model 1) and from -1.2 to 11.6 MPa (Model 2). Of the two Bio-CAD modeling methods, reverse engineering presented a better anatomical representation than the STL conversion approach. The models presented differences in the finite element mesh, total displacement and stress distribution.
Abstract:
Composites are engineered materials that take advantage of the particular properties of each of their two or more constituents. They are designed to be stronger, lighter and longer-lasting, which can lead to the creation of safer protection gear, more fuel-efficient transportation and more affordable materials, among other examples. This thesis proposes a numerical and analytical verification of an in-house multiscale model for predicting the mechanical behavior of composite materials with various configurations subjected to impact loading. The verification is done by comparing analytical and numerical reference solutions with the results obtained using the model. The model accounts for the heterogeneity of the material, which can only be observed at smaller length scales, and is based on the fundamental structural properties of each of the composite's constituents. Because it relies strictly on these constituent properties, the model can potentially reduce or eliminate the need for the costly and time-consuming experiments required for material characterization. The results from simulations using the multiscale model were compared against results from direct simulations using overkill meshes, which considered all heterogeneities explicitly at the global scale, indicating that the model is an accurate and fast tool for modeling composites under impact loads. Advisor: David H. Allen
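To illustrate the general idea of estimating effective composite properties from constituent properties, a textbook rule-of-mixtures bound is sketched below (this is not the thesis's multiscale model; the fiber and matrix values are assumptions):

```python
def voigt_reuss_modulus(E_fiber, E_matrix, vf):
    """Upper (Voigt) and lower (Reuss) bounds on the effective Young's modulus
    of a two-phase composite with fiber volume fraction vf."""
    E_voigt = vf * E_fiber + (1.0 - vf) * E_matrix          # iso-strain bound
    E_reuss = 1.0 / (vf / E_fiber + (1.0 - vf) / E_matrix)  # iso-stress bound
    return E_voigt, E_reuss

# Example: hypothetical glass fiber (72 GPa) in epoxy (3.5 GPa), 60% fiber by volume
print(voigt_reuss_modulus(72.0, 3.5, 0.6))
```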
Abstract:
RATIONALE: Oxazolines have attracted the attention of researchers worldwide due to their versatility as carboxylic acid protecting groups, chiral auxiliaries, and ligands for asymmetric catalysis. Electrospray ionization tandem mass spectrometric (ESI-MS/MS) analysis of five 2-oxazoline derivatives has been conducted in order to understand the influence of the side chain on the gas-phase dissociation of these protonated compounds under collision-induced dissociation (CID) conditions.
METHODS: Mass spectrometric analyses were conducted in a quadrupole time-of-flight (Q-TOF) spectrometer fitted with an electrospray ionization source. Protonation sites have been proposed on the basis of gas-phase basicity, proton affinity, atomic charges, and a molecular electrostatic potential map obtained from quantum chemistry calculations at the B3LYP/6-31+G(d,p) and G2(MP2) levels.
RESULTS: Analysis of the atomic charges, gas-phase basicity and proton affinity values indicates that the nitrogen atom is a possible proton acceptor site. On the basis of these results, two main fragmentation processes have been suggested: one taking place via neutral elimination of the oxazoline moiety (99 u) and another occurring by sequential elimination of neutral fragments of 72 u and 27 u. These processes should lead to the formation of R+.
CONCLUSIONS: The ESI-MS/MS experiments have shown that the side chain can affect the dissociation mechanism of protonated 2-oxazoline derivatives. For the compound that bears a hydroxyl group on the side chain, water loss has been suggested to occur through an E2-type elimination in an exothermic step. Copyright (C) 2012 John Wiley & Sons, Ltd.
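As a quick bookkeeping illustration of the two routes (a singly protonated precursor is assumed, and the precursor m/z below is hypothetical), note that a direct loss of 99 u reaches the same product ion as sequential losses of 72 u and 27 u, since 72 + 27 = 99:

```python
def product_mz(precursor_mz, neutral_losses, charge=1):
    """m/z of the product ion after sequential neutral losses, assuming the
    charge is retained by the fragment."""
    return precursor_mz - sum(neutral_losses) / charge

# Hypothetical protonated precursor at m/z 200: both routes converge on the same product ion
print(product_mz(200.0, [99.0]))        # direct loss of the oxazoline moiety
print(product_mz(200.0, [72.0, 27.0]))  # sequential losses of 72 u and 27 u
```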
Abstract:
In this paper we use Markov chain Monte Carlo (MCMC) methods to estimate and compare GARCH models from a Bayesian perspective. We allow for possibly heavy-tailed and asymmetric distributions in the error term. We use a general method proposed in the literature to introduce skewness into a continuous, unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood based on the MCMC output. From these approximations we compute Bayes factors and posterior model probabilities. (C) 2012 IMACS. Published by Elsevier B.V. All rights reserved.
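For context, a GARCH(1,1) model writes the conditional variance as sigma2_t = omega + alpha * eps2_{t-1} + beta * sigma2_{t-1}. A minimal sketch of the variance recursion and the Gaussian log-likelihood that such an estimation repeatedly evaluates (generic textbook form, not the paper's skewed or heavy-tailed specification; the parameter values are assumptions):

```python
import math

def garch11_loglik(returns, omega, alpha, beta):
    """Gaussian log-likelihood of a zero-mean GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = omega / (1.0 - alpha - beta)  # start from the unconditional variance
    loglik = 0.0
    for r in returns:
        loglik += -0.5 * (math.log(2.0 * math.pi * sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2  # update for the next observation
    return loglik

# Example with assumed parameters on a toy return series
print(garch11_loglik([0.01, -0.02, 0.015, 0.0], omega=1e-5, alpha=0.05, beta=0.9))
```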
Abstract:
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
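As an illustration of the objective, the weighted earliness/tardiness cost of a given job sequence on a single machine with a common due date and job ready times can be evaluated as below (a generic sketch under assumed penalty weights, not one of the paper's heuristics):

```python
def weighted_earliness_tardiness(sequence, processing, ready, alpha, beta, due_date):
    """Total weighted earliness + tardiness of `sequence` (job indices) on one machine.
    A job cannot start before its ready time; alpha/beta are per-job penalty weights."""
    t = 0.0
    total = 0.0
    for j in sequence:
        t = max(t, ready[j]) + processing[j]  # completion time of job j
        total += alpha[j] * max(0.0, due_date - t) + beta[j] * max(0.0, t - due_date)
    return total

# Toy instance: 3 jobs, common due date 10
print(weighted_earliness_tardiness(
    sequence=[0, 2, 1],
    processing=[4, 5, 3],
    ready=[0, 2, 1],
    alpha=[1, 1, 1],
    beta=[2, 2, 2],
    due_date=10))
```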
Abstract:
In order to understand the influence of alkyl side chains on the gas-phase reactivity of 1,4-naphthoquinone derivatives, some 2-hydroxy-1,4-naphthoquinone derivatives have been prepared and studied by electrospray ionization tandem mass spectrometry in combination with computational quantum chemistry calculations. Protonation and deprotonation sites were suggested on the basis of gas-phase basicity, proton affinity, gas-phase acidity (ΔG_acid), atomic charges and frontier orbital analyses. The nature of the intramolecular interaction, as well as of the hydrogen bond in these systems, was investigated by the atoms-in-molecules theory and natural bond orbital analysis. The results were compared with data published for lapachol (2-hydroxy-3-(3-methyl-2-butenyl)-1,4-naphthoquinone). For the protonated molecules, water elimination was verified to occur in a lower proportion than side chain elimination, as evidenced in earlier studies on lapachol. The side chain at position C(3) was found to play an important role in the fragmentation mechanisms of these compounds. Copyright (c) 2012 John Wiley & Sons, Ltd.
Abstract:
Background: Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performances of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria.
Methods: The study was divided in two parts. For the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared.
Results: Nested PCR was shown to be the gold standard for diagnosis of both symptomatic and asymptomatic malaria because it detected the greatest number of cases and presented the highest specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infections (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% of correct diagnoses). The MalDANN system using epidemiological data performed worse than light microscopy (56% of correct diagnoses). However, when information regarding plasma levels of interleukin-10 and interferon-gamma was included as input, the MalDANN performance increased appreciably (80% of correct diagnoses).
Conclusions: An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using solely epidemiological data, an approach based on neural networks may be feasible in cases where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
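The MalDANN architecture itself is not described here; as a generic illustration of the kind of feed-forward neural network classifier such a system builds on (toy data, hypothetical feature values, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=4, lr=0.5, epochs=5000, seed=0):
    """Train a tiny one-hidden-layer network (sigmoid units, squared-error loss)
    by batch gradient descent. X: (n, d) features, y: (n,) binary labels."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)   # hidden activations
        p = sigmoid(h @ W2 + b2)   # predicted probability of the positive class
        # backpropagate the squared-error gradient
        dp = (p - y) * p * (1 - p)
        dh = (dp @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ dp / len(X); b2 -= lr * dp.mean(axis=0)
        W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(axis=0)
    return lambda Xnew: sigmoid(sigmoid(Xnew @ W1 + b1) @ W2 + b2).ravel()

# Toy illustration: two standardized features (e.g. cytokine levels), binary outcome
X = np.array([[0.2, 1.1], [1.5, 0.3], [0.1, 0.9], [1.4, 0.2]])
y = np.array([1, 0, 1, 0])
predict = train_mlp(X, y)
print(predict(X).round(2))  # probabilities close to the labels after training
```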