32 results for Facial Object Based Method
Abstract:
In this paper we describe and evaluate a geometric mass-preserving redistancing procedure for the level set function on general structured grids. The proposed algorithm is adapted from a recent finite element-based method and preserves the mass by means of a localized mass correction. A salient feature of the scheme is the absence of adjustable parameters. The algorithm is tested in two and three spatial dimensions and compared with the widely used partial differential equation (PDE)-based redistancing method using structured Cartesian grids. Through the use of quantitative error measures of interest in level set methods, we show that the overall performance of the proposed geometric procedure is better than that of PDE-based reinitialization schemes, since it is more robust with comparable accuracy. We also show that the algorithm is well-suited for the highly stretched curvilinear grids used in CFD simulations. Copyright (C) 2010 John Wiley & Sons, Ltd.
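The localized mass correction above can be illustrated in miniature. The sketch below is not the paper's algorithm (which corrects mass locally and is parameter-free); it is a simplified 1-D stand-in that restores the mass enclosed by a level set function via a constant shift found by bisection. The grid, the smoothing width `eps`, and all numbers are assumptions for illustration.

```python
import numpy as np

def smeared_mass(phi, dx, eps):
    """Measure of the region {phi < 0} on a uniform 1-D grid,
    using the smoothed Heaviside standard in level set methods."""
    h = np.where(phi < -eps, 1.0,
        np.where(phi > eps, 0.0,
                 0.5 - phi / (2 * eps) - np.sin(np.pi * phi / eps) / (2 * np.pi)))
    return dx * h.sum()

def mass_correcting_shift(phi, dx, target, eps, lo=-1.0, hi=1.0, iters=60):
    """Bisect for a constant c such that smeared_mass(phi + c) == target.
    A global shift -- a simplified stand-in for the paper's localized
    correction, shown only to illustrate the mass-conservation idea."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if smeared_mass(phi + mid, dx, eps) > target:
            lo = mid          # too much mass: shift the level set further up
        else:
            hi = mid          # too little mass: shift back down
    return 0.5 * (lo + hi)

# Interface at x = 0.5; pretend redistancing drifted it to x = 0.4.
dx = 1e-3
x = (np.arange(1000) + 0.5) * dx
phi_exact = x - 0.5
target = smeared_mass(phi_exact, dx, eps=0.05)
phi_drifted = phi_exact + 0.1
c = mass_correcting_shift(phi_drifted, dx, target, eps=0.05)  # c close to -0.1
```

Since the smeared mass is monotone in the shift, the bisection recovers the drift almost exactly here; the paper's localized correction applies the same conservation idea cell by cell rather than globally.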
Abstract:
This paper addresses the one-dimensional cutting stock problem when demand is a random variable. The problem is formulated as a two-stage stochastic nonlinear program with recourse. The first-stage decision variables are the number of objects to be cut according to each cutting pattern. The second-stage decision variables are the number of items held or backordered as a result of the first-stage decisions. The objective is to minimize the total expected cost incurred in both stages, due to waste and to holding or backordering penalties. A simplex-based method with column generation is proposed for solving a linear relaxation of the resulting optimization problem. The proposed method is evaluated using two well-known measures of uncertainty effects in stochastic programming: the value of the stochastic solution (VSS) and the expected value of perfect information (EVPI). The optimal two-stage solution is shown to be more effective than the alternative wait-and-see and expected-value approaches, even under small variations in the parameters of the problem.
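The two uncertainty measures named above have standard definitions for a minimization problem: VSS = EEV − RP and EVPI = RP − WS, where RP is the optimal value of the recourse problem, WS the wait-and-see value, and EEV the expected cost of the expected-value solution. A minimal sketch with hypothetical objective values (not from the paper):

```python
def stochastic_measures(ws, rp, eev):
    """Uncertainty measures for a minimization problem.

    ws  -- wait-and-see value (expected optimum under perfect foresight)
    rp  -- optimal value of the here-and-now recourse problem
    eev -- expected cost of implementing the expected-value solution
    For minimization, ws <= rp <= eev always holds.
    """
    evpi = rp - ws   # expected value of perfect information
    vss = eev - rp   # value of the stochastic solution
    return evpi, vss

# Hypothetical objective values, for illustration only:
evpi, vss = stochastic_measures(ws=95.0, rp=100.0, eev=108.0)
```

Both measures are nonnegative for a minimization problem; a large VSS indicates that solving the two-stage stochastic model pays off relative to the expected-value approximation.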
Abstract:
In this article we address decomposition strategies especially tailored to the strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains through boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the 'natural' staggered algorithm in which each domain transfers function values to the other and receives fluxes (or forces) in return. This natural algorithm is known as Dirichlet-to-Neumann in the domain decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It is then immediate to switch to other iterative solvers, such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. Indeed, the benefit is that an extremely flexible, automatic coupling strategy can be developed, which in addition leads to iterative procedures that are parameter-free and rapidly converging. Further, in linear problems they have the finite termination property. Copyright (C) 2009 John Wiley & Sons, Ltd.
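The equivalence described above means the staggered coupling can be driven by any iterative linear solver. As a toy stand-in for the interface system (the real one couples dimensionally heterogeneous submodels), here is plain Gauss-Seidel on a small diagonally dominant system with made-up numbers:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, maxit=200):
    """Gauss-Seidel iterations on A x = b -- a toy stand-in for the
    Dirichlet-to-Neumann staggered coupling described in the abstract."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(maxit):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries (the Gauss-Seidel sweep).
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# Small diagonally dominant "interface" system (hypothetical numbers):
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 13.0])
x = gauss_seidel(A, b, np.zeros(2))
```

Switching to GMRES or another Krylov method, as the article proposes, only requires exposing the same interface system through matrix-vector products instead of sweeping it component by component.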
Abstract:
Migrastatin, a macrolide natural product, and its structurally related analogs are potent inhibitors of cancer cell metastasis, invasion and migration. In the present work, a specialized fragment-based method was employed to develop QSAR models for a series of migrastatin and isomigrastatin analogs. Significant correlation coefficients were obtained (best model, q² = 0.76 and r² = 0.91), indicating that the QSAR models possess high internal consistency. The best model was then used to predict the potency of an external test set, and the predicted values were in good agreement with the experimental results (R²pred = 0.85). The final model and the corresponding contribution maps, combined with molecular modeling studies, provided important insights into the key structural features for the anticancer activity of this family of synthetic compounds based on natural products.
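The statistics quoted above (q², r², R²pred) are standard QSAR validation measures. The sketch below computes r² and a leave-one-out cross-validated q² for a simple 1-D least-squares model; it uses noiseless toy data as an assumption, not the paper's fragment-based descriptors:

```python
import numpy as np

def r_squared(y, y_pred):
    """Conventional coefficient of determination."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def loo_q2(x, y):
    """Leave-one-out cross-validated q^2 for a 1-D least-squares fit:
    each point is predicted by a model trained on the remaining points."""
    preds = np.empty_like(y, dtype=float)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        preds[i] = slope * x[i] + intercept
    ss_press = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_press / ss_tot

# Noiseless toy data; real QSAR activities would carry scatter.
x = np.arange(10.0)
y = 2.0 * x + 1.0
slope, intercept = np.polyfit(x, y, 1)
r2 = r_squared(y, slope * x + intercept)
q2 = loo_q2(x, y)
```

Because q² penalizes each point with a model that never saw it, q² ≤ r² in practice, which is why the paper reports both as evidence of internal consistency.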
Abstract:
Royal palm tree peroxidase (RPTP) is a very stable enzyme with regard to acidity, temperature, H₂O₂, and organic solvents. Thus, RPTP is a promising candidate for developing H₂O₂-sensitive biosensors for diverse applications in industry and analytical chemistry. RPTP belongs to the family of class III secretory plant peroxidases, which includes horseradish peroxidase isozyme C and the soybean and peanut peroxidases. Here we report the X-ray structure of native RPTP isolated from the royal palm tree (Roystonea regia), refined to a resolution of 1.85 Å. RPTP has the same overall folding pattern as the plant peroxidase superfamily, and it contains one heme group and two calcium-binding sites in similar locations. The three-dimensional structure of RPTP was solved in a hydroperoxide complex state, and it revealed a bound 2-(N-morpholino)ethanesulfonic acid (MES) molecule positioned at a putative secondary substrate-binding site. Nine N-glycosylation sites are clearly defined in the RPTP electron-density maps, revealing for the first time the conformations of the glycan chains of this highly glycosylated enzyme. Furthermore, statistical coupling analysis (SCA) of the plant peroxidase superfamily was performed. This sequence-based method identified a set of evolutionarily conserved sites that map to regions surrounding the heme prosthetic group. The SCA matrix also predicted a set of energetically coupled residues that are involved in maintaining the structural fold of plant peroxidases. The combination of crystallographic data and SCA analysis provides information about the key structural elements that could help explain the unique stability of RPTP. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
The strategy used to treat HCV infection depends on the genotype involved. An accurate and reliable genotyping method is therefore of paramount importance. We describe here, for the first time, the use of a liquid microarray for HCV genotyping. This liquid microarray is based on the 5'UTR - the most highly conserved region of HCV - and the variable region NS5B sequence. The simultaneous genotyping of two regions can be used to confirm findings and should detect inter-genotypic recombination. Plasma samples from 78 patients, with viral genotypes and subtypes previously determined by the Versant (TM) HCV Genotype Assay LiPA (version I; Siemens Medical Solutions, Diagnostics Division, Fernwald, Germany), were tested with our new liquid microarray method. This method successfully determined the genotypes of 74 of the 78 samples previously genotyped in the Versant (TM) HCV Genotype Assay LiPA (74/78, 95%). The concordance between the two methods was 100% for genotype determination (74/74). At the subtype level, all 3a and 2b samples gave identical results with both methods (17/17 and 7/7, respectively). Two 2c samples were correctly identified by microarray, but could only be determined to the genotype level with the Versant (TM) HCV assay. Genotype 1 subtypes (1a and 1b) were correctly identified by the Versant (TM) HCV assay and the microarray in 68% and 40% of cases, respectively. No genotype discordance was found for any sample. HCV was successfully genotyped with both methods, and this is of prime importance for treatment planning. Liquid microarray assays may therefore be added to the list of methods suitable for HCV genotyping. The assay provides comparable results and may readily be adapted for the detection of other viruses that frequently co-infect HCV patients. Liquid array technology is thus a reliable and promising platform for HCV genotyping.
Abstract:
This research presents a method for frequency estimation in power systems using an adaptive filter based on the least mean square (LMS) algorithm. In order to analyze a power system, three-phase voltages were converted into a complex signal by applying the alpha-beta transform, and the result was used in an adaptive filtering algorithm. Although the use of the complex LMS algorithm is described in the literature, this paper deals with some practical aspects of its implementation. In order to reduce computing time, a coefficient generator was implemented. For the algorithm validation, a computer simulation of a power system was carried out using the ATP software. Many different situations were simulated for the performance analysis of the proposed methodology. The results were compared to a commercial relay for validation, showing the advantages of the new method. (C) 2009 Elsevier Ltd. All rights reserved.
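One common realization of this approach (the paper's coefficient-generator optimization is not reproduced here) maps the three phase voltages to a complex signal with the Clarke (alpha-beta) transform and adapts a single complex weight by LMS; the estimated frequency follows from the weight's angle. The sampling rate and step size below are assumptions:

```python
import numpy as np

def lms_frequency(va, vb, vc, fs, mu=0.05):
    """One-tap complex LMS frequency estimator.

    The three phase voltages are mapped to a complex signal via the
    Clarke (alpha-beta) transform; a single complex weight w is adapted
    to predict v[n] from v[n-1], and the frequency follows from angle(w).
    """
    # Clarke (alpha-beta) transform of a balanced three-phase set
    v_alpha = (2 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / np.sqrt(3.0)
    v = v_alpha + 1j * v_beta
    v = v / np.max(np.abs(v))            # normalize for a stable step size
    w = 1.0 + 0.0j
    for n in range(1, len(v)):
        e = v[n] - w * v[n - 1]          # one-step prediction error
        w += mu * e * np.conj(v[n - 1])  # complex LMS weight update
    return np.angle(w) * fs / (2 * np.pi)

# Synthetic balanced 60 Hz system sampled at 1.92 kHz (assumed values):
fs, f0 = 1920.0, 60.0
t = np.arange(0, 0.5, 1 / fs)
va = np.cos(2 * np.pi * f0 * t)
vb = np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)
f_est = lms_frequency(va, vb, vc, fs)
```

For a balanced positive-sequence set the Clarke transform yields a pure complex exponential, so the single weight converges to exp(j·2πf₀/fs) and the angle recovers the frequency exactly; noise, harmonics, and unbalance are what the paper's practical considerations address.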
Abstract:
This study presents a decision-making method for the selection of maintenance policies for power plant equipment. The method is based on risk analysis concepts. Its first step consists of identifying equipment that is critical to power plant operational performance and availability, based on risk concepts. The second step involves proposing a potential maintenance policy that could be applied to the critical equipment in order to increase its availability. The costs associated with each potential maintenance policy must be estimated, including the maintenance costs and the cost of failure, which measures the consequences of critical equipment failure for power plant operation. Once the failure probabilities and the costs of failure are estimated, a decision-making procedure is applied to select the best maintenance policy. The decision criterion is to minimize the equipment's expected cost of failure, considering the costs and likelihood of occurrence of the failure scenarios. The method is applied to the analysis of a lubrication oil system used in the journal bearings of a gas turbine with a nominal output of more than 150 MW, installed in an open-cycle thermoelectric power plant. A design modification, the installation of a redundant oil pump, is proposed to improve the availability of the lubricating oil system. (C) 2009 Elsevier Ltd. All rights reserved.
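The decision criterion described above reduces to picking the policy with the lowest total expected cost, i.e. maintenance cost plus the probability-weighted cost of each failure scenario. A minimal sketch with purely illustrative policies and numbers (not from the paper):

```python
def expected_cost(policy):
    """Total expected cost = maintenance cost + expected cost of failure
    (sum over failure scenarios of probability x consequence cost)."""
    return policy["maintenance_cost"] + sum(
        p * c for p, c in policy["failure_scenarios"]
    )

def best_policy(policies):
    """Select the policy minimizing the total expected cost."""
    return min(policies, key=expected_cost)

# Hypothetical policies for a lube-oil system (all numbers illustrative):
policies = [
    {"name": "run-to-failure", "maintenance_cost": 0.0,
     "failure_scenarios": [(0.20, 500_000.0)]},
    {"name": "preventive", "maintenance_cost": 30_000.0,
     "failure_scenarios": [(0.05, 500_000.0)]},
    {"name": "redundant pump", "maintenance_cost": 80_000.0,
     "failure_scenarios": [(0.01, 500_000.0)]},
]
choice = best_policy(policies)
```

With these made-up figures the preventive policy wins (30,000 + 25,000 = 55,000 versus 100,000 and 85,000); the ranking is entirely driven by the estimated failure probabilities and consequence costs, which is why the paper devotes its first steps to estimating them.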
Abstract:
The crosstalk phenomenon consists of recording the volume-conducted electromyographic activity of muscles other than the one under study. This interference may impair the correct interpretation of results in a variety of experiments. A new protocol is presented here for assessing crosstalk between two muscles based on changes in their electrical activity following a reflex discharge in one of the muscles in response to nerve stimulation. A reflex compound muscle action potential (H-reflex) was used to induce a silent period in the muscle that causes the crosstalk, called here the remote muscle. The rationale is that if the activity recorded in the target muscle is influenced by a distant source (the remote muscle), a silent period observed in the electromyogram (EMG) of the remote muscle will coincide with a decrease in the EMG activity of the target muscle. The new crosstalk index is evaluated from the root mean square (RMS) values of the EMGs obtained in two distinct periods (background EMG and silent period) in both the remote and the target muscles. In the present work the application focused on estimating the degree of crosstalk from the soleus muscle to the tibialis anterior muscle during quiet stance. However, the technique may be extended to other pairs of muscles, provided that a silent period can be evoked in one of them. (C) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.
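The abstract does not give the exact formula of the index, so the sketch below is only one plausible RMS-based form: the relative drop of the target-muscle EMG during the remote muscle's silent period, normalized by the corresponding drop in the remote muscle itself. All signals below are synthetic placeholders, not recorded EMG:

```python
import numpy as np

def rms(x):
    """Root mean square of a signal segment."""
    return np.sqrt(np.mean(np.square(x)))

def crosstalk_index(target_bg, target_sp, remote_bg, remote_sp):
    """One plausible RMS-based crosstalk index (an assumption, not the
    paper's exact definition): relative drop of the target EMG during
    the remote muscle's silent period, normalized by the relative drop
    of the remote muscle over the same two periods."""
    target_drop = 1.0 - rms(target_sp) / rms(target_bg)
    remote_drop = 1.0 - rms(remote_sp) / rms(remote_bg)
    return target_drop / remote_drop

# Synthetic segments: remote muscle fully silent during the silent
# period, target EMG dropping by 20% -- index of 0.2 (20% crosstalk).
idx = crosstalk_index(np.full(500, 1.0), np.full(500, 0.8),
                      np.full(500, 1.0), np.zeros(500))
```

On this toy input the index is 0.2: the target channel loses a fifth of its RMS when the remote source is silenced, suggesting that fraction was volume-conducted.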
Abstract:
Paraquat is a broad-spectrum contact herbicide that has been implicated worldwide in numerous cases of accidental, homicidal, and suicidal poisoning. The pulmonary toxicity of this compound is related to the depletion of NADPH in pneumocytes, where NADPH is continuously consumed by the reduction/oxidation of paraquat by reductase enzyme systems in the presence of O₂ (redox cycling). Based on this mechanism, an enzymatic-spectrophotometric method was developed for the determination of paraquat in urine samples. The rate of NADPH consumption was monitored at 340 nm every 10 s for 15 min. The rate of NADPH oxidation correlated with the paraquat levels found in the samples. The enzymatic-spectrophotometric method proved to be sensitive, enabling the detection of paraquat in urine samples at concentrations as low as 0.05 mg/L.
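The rate measurement described above amounts to fitting a line to the absorbance readings taken every 10 s and taking the negative of the slope as the NADPH oxidation rate. A minimal sketch on a synthetic, noise-free trace (the calibration against paraquat concentration is not modeled):

```python
import numpy as np

def nadph_oxidation_rate(absorbance, dt=10.0):
    """Initial-rate estimate: least-squares slope of A340 versus time,
    with readings every dt seconds. NADPH consumption makes the slope
    negative, so the rate is returned as a positive number (AU/s)."""
    t = np.arange(len(absorbance)) * dt
    slope, _ = np.polyfit(t, absorbance, 1)
    return -slope

# Synthetic trace: 15 min of readings every 10 s (91 points), with
# absorbance falling linearly at 2e-4 absorbance units per second.
trace = 1.0 - 2e-4 * 10.0 * np.arange(91)
rate = nadph_oxidation_rate(trace)
```

In practice the early, approximately linear portion of the curve would be used; the fitted rate is then read against a calibration curve of rate versus paraquat concentration.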
Abstract:
Modern lifestyle has markedly changed eating habits worldwide, with an increasing demand for ready-to-eat foods such as minimally processed fruits and leafy greens. Packaging and storage conditions of these products may favor the growth of psychrotrophic bacteria, including the pathogen Listeria monocytogenes. In this work, minimally processed leafy vegetable samples (n = 162) from the retail market of Ribeirao Preto, Sao Paulo, Brazil, were tested for the presence or absence of Listeria spp. with the immunoassay Listeria Rapid Test (Oxoid). Two L. monocytogenes-positive samples and six artificially contaminated samples of minimally processed leafy vegetables were evaluated by the Most Probable Number (MPN) technique, with detection by the classical culture method and by the culture method combined with real-time PCR (RTi-PCR) targeting the 16S rRNA genes of L. monocytogenes. Positive MPN enrichment tubes were analyzed by RTi-PCR with primers specific for L. monocytogenes using the commercial preparation ABSOLUTE (TM) QPCR SYBR (R) Green Mix (ABgene, UK). The real-time PCR assay showed good exclusivity and inclusivity, and no statistically significant difference was found in comparison with the conventional culture method (p < 0.05). Moreover, RTi-PCR was fast and easy to perform, with MPN results obtained in ca. 48 h, compared with 7 days for the conventional method. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Common sense tells us that the future is an essential element in any strategy. In addition, there is a good deal of literature on scenario planning, an important tool for considering the future in strategic terms. In many organizations, however, there is serious resistance to the development of scenarios, and they are not broadly implemented by companies. Yet even organizations that do not rely heavily on the development of scenarios do, in fact, construct visions to guide their strategies. What happens, though, when this vision is not consistent with the future? To address this problem, the present article proposes a method for checking the content and consistency of an organization's vision of the future, no matter how that vision was conceived. The proposed method is grounded in theoretical concepts from the field of futures studies, which are described in this article. This study was motivated by the search for new ways of improving and using scenario techniques as a method for making strategic decisions. The method was tested on a company in the field of information technology in order to check its operational feasibility. The test showed that the proposed method is operationally feasible and was capable of analyzing the vision of the company being studied, indicating both its shortcomings and its points of inconsistency. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
We propose a discontinuous-Galerkin-based immersed boundary method for elasticity problems. The resulting numerical scheme does not require boundary fitting meshes and avoids boundary locking by switching the elements intersected by the boundary to a discontinuous Galerkin approximation. Special emphasis is placed on the construction of a method that retains an optimal convergence rate in the presence of non-homogeneous essential and natural boundary conditions. The role of each one of the approximations introduced is illustrated by analyzing an analog problem in one spatial dimension. Finally, extensive two- and three-dimensional numerical experiments on linear and nonlinear elasticity problems verify that the proposed method leads to optimal convergence rates under combinations of essential and natural boundary conditions. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A numerical method to approximate partial differential equations on meshes that do not conform to the domain boundaries is introduced. The proposed method is conceptually simple and free of user-defined parameters. Starting with a conforming finite element mesh, the key ingredient is to switch those elements intersected by the Dirichlet boundary to a discontinuous Galerkin approximation and impose the Dirichlet boundary conditions strongly. By virtue of relaxing the continuity constraint at those elements, boundary locking is avoided and optimal-order convergence is achieved. This is shown through numerical experiments on reaction-diffusion problems. Copyright (c) 2008 John Wiley & Sons, Ltd.
Abstract:
The concentrations of the water-soluble inorganic aerosol species ammonium (NH4+), nitrate (NO3-), chloride (Cl-), and sulfate (SO42-) were measured from September to November 2002 at a pasture site in the Amazon Basin (Rondônia, Brazil) (LBA-SMOCC). Measurements were conducted using a semi-continuous technique (wet-annular denuder/steam-jet aerosol collector: WAD/SJAC) and three integrating filter-based methods, namely (1) a denuder-filter pack (DFP: Teflon and impregnated Whatman filters), (2) a stacked-filter unit (SFU: polycarbonate filters), and (3) a high-volume dichotomous sampler (HiVol: quartz fiber filters). Measurements covered the late dry season (biomass burning), a transition period, and the onset of the wet season (clean conditions). Analyses of the particles collected on filters were performed using ion chromatography (IC) and particle-induced X-ray emission spectrometry (PIXE). Season-dependent discrepancies were observed between the WAD/SJAC system and the filter-based samplers. During the dry season, when PM2.5 (Dp ≤ 2.5 µm) concentrations were around 100 µg m⁻³, aerosol NH4+ and SO42- measured by the filter-based samplers were on average two times higher than those determined by the WAD/SJAC. Concentrations of aerosol NO3- and Cl- measured with the HiVol during daytime, and with the DFP during day- and nighttime, also exceeded those of the WAD/SJAC by a factor of two. In contrast, aerosol NO3- and Cl- measured with the SFU during the dry season were nearly two times lower than those measured by the WAD/SJAC. These differences declined markedly during the transition period and towards the cleaner conditions at the onset of the wet season (PM2.5 around 5 µg m⁻³), when the filter-based samplers measured on average 40-90% less than the WAD/SJAC.
The differences were not due to consistent systematic biases of the analytical techniques, but were apparently a result of prevailing environmental conditions and different sampling procedures. For the transition period and wet season, the significance of our results is reduced by a low number of data points. We argue that the observed differences are mainly attributable to (a) positive and negative filter sampling artifacts, (b) presence of organic compounds and organosulfates on filter substrates, and (c) a SJAC sampling efficiency of less than 100%.