75 results for Fuzzy c-means algorithm


Relevance:

30.00%

Publisher:

Abstract:

A graph clustering algorithm constructs groups of closely related parts and machines separately. After the part and machine groups are matched so as to minimize intercell moves, a refining process runs on the initial cell formation to decrease the number of intercell moves further. A simple modification of this main approach can handle practical constraints, such as the common constraint bounding the maximum number of machines in a cell. Our approach yields a substantial improvement in computational time. More importantly, the number of intercell moves also improves when the computational results are compared with the best known solutions from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
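The refining step can be sketched as a greedy relocation pass: move a machine to another cell whenever this lowers the intercell-move count and respects the cell-size bound. The sketch below is a minimal illustration on a toy instance, not the authors' implementation; all names and numbers are hypothetical.

```python
import numpy as np

def intercell_moves(incidence, machine_cell, part_cell):
    """Count part visits to machines located outside the part's cell."""
    m_idx, p_idx = np.nonzero(incidence)
    return int(np.sum(machine_cell[m_idx] != part_cell[p_idx]))

def refine(incidence, machine_cell, part_cell, n_cells, max_machines):
    """Greedy refinement: relocate a machine whenever doing so lowers
    the intercell-move count and respects the cell-size bound."""
    improved = True
    while improved:
        improved = False
        for m in range(incidence.shape[0]):
            best_cost = intercell_moves(incidence, machine_cell, part_cell)
            best_cell = machine_cell[m]
            for c in range(n_cells):
                if c == best_cell or np.sum(machine_cell == c) >= max_machines:
                    continue
                machine_cell[m] = c
                cost = intercell_moves(incidence, machine_cell, part_cell)
                if cost < best_cost:
                    best_cost, best_cell, improved = cost, c, True
            machine_cell[m] = best_cell
    return machine_cell

# Toy instance: 4 machines x 5 parts, 2 cells, at most 3 machines per cell
incidence = np.array([[1, 1, 0, 0, 1],
                      [1, 1, 0, 0, 0],
                      [0, 0, 1, 1, 0],
                      [0, 1, 1, 1, 0]])
machines = np.array([0, 1, 1, 0])      # deliberately poor initial assignment
parts = np.array([0, 0, 1, 1, 0])
print(refine(incidence, machines, parts, 2, 3))   # -> [0 0 1 1]
```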

Relevance:

30.00%

Publisher:

Abstract:

Fuzzy Bayesian tests were performed to evaluate whether the mothers' seroprevalence and the children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which allows an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, forming the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared to a classical one, in which the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking subjective information into account in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation normally seen as competitive. (C) 2007 IMACS. Published by Elsevier B.V. All rights reserved.
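The "simplest Takagi-Sugeno-Kang model" mentioned above (fuzzy-set inputs, constant outputs) is a zero-order TSK system. A minimal sketch with made-up triangular memberships and constant consequents follows; the rule base and numbers are hypothetical, not the authors' model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def tsk_zero_order(x, rules):
    """Zero-order TSK: output is the weighted average of constant
    consequents, weighted by each rule's firing strength."""
    w = np.array([mu(x) for mu, _ in rules])
    z = np.array([z_k for _, z_k in rules])
    return float(np.dot(w, z) / np.sum(w))

# Hypothetical rules on one input (e.g. seroprevalence in [0, 1]);
# consequents are recommended vaccine coverages
rules = [
    (lambda x: tri(x, -0.5, 0.0, 0.5), 0.95),  # "low"  -> high coverage
    (lambda x: tri(x, 0.0, 0.5, 1.0), 0.80),   # "medium"
    (lambda x: tri(x, 0.5, 1.0, 1.5), 0.60),   # "high" -> lower coverage
]
print(tsk_zero_order(0.3, rules))              # -> 0.86
```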

Relevance:

30.00%

Publisher:

Abstract:

Background: Although various techniques have been used for breast conservation surgery reconstruction, there are few studies describing a logical approach to reconstruction of these defects. The objectives of this study were to establish a classification system for partial breast defects and to develop a reconstructive algorithm. Methods: The authors reviewed a 7-year experience with 209 immediate breast conservation surgery reconstructions. Mean follow-up was 31 months. Type I defects include tissue resection in smaller breasts (bra size A/B), comprising type IA, which involves minimal defects that do not cause distortion; type IB, which involves moderate defects that cause moderate distortion; and type IC, which involves large defects that cause significant deformities. Type II includes tissue resection in medium-sized breasts with or without ptosis (bra size C), and type III includes tissue resection in large breasts with ptosis (bra size D). Results: Eighteen percent of patients presented type I defects, in which lateral thoracodorsal and latissimus dorsi flaps were performed in 68 percent. Forty-five percent presented type II defects, in which bilateral mastopexy was performed in 52 percent. Thirty-seven percent of patients presented type III defects, in which bilateral reduction mammaplasty was performed in 67 percent. Thirty-five percent of patients presented complications, most of them minor. Conclusions: An algorithm based on breast size in relation to tumor location and extension of resection can be followed to determine the best approach to reconstruction. The authors' results demonstrate that the complications were similar to those in other clinical series. Success depends on patient selection, coordinated planning with the oncologic surgeon, and careful intraoperative management.
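The typing scheme above can be restated as a simple decision rule. The sketch below merely transcribes the published classification (bra size, then defect severity for type I); the labels are simplifications and this is of course not a clinical tool.

```python
def classify_defect(bra_size: str, severity: str = "minimal") -> str:
    """Defect type per the text: type I (A/B cups, subdivided IA/IB/IC
    by severity), type II (C cup), type III (D cup)."""
    if bra_size in ("A", "B"):
        return {"minimal": "IA", "moderate": "IB", "large": "IC"}[severity]
    if bra_size == "C":
        return "II"
    if bra_size == "D":
        return "III"
    raise ValueError("unknown bra size")

print(classify_defect("B", "moderate"))  # -> IB
```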

Relevance:

30.00%

Publisher:

Abstract:

Objective. To evaluate the perception of eating practices and the stages of change among adolescents. Methods. Cross-sectional study involving a representative sample of 390 adolescents from 11 public schools in the city of Piracicaba, Brazil, in 2004. Food consumption was identified by a food frequency questionnaire, and the perception of eating practices was evaluated by comparing food consumption with each individual's classification of the healthiness of his or her diet. The participants were classified into stages of change by means of a specific algorithm. A reclassification into new stages of change was proposed to identify adolescents with similar characteristics regarding food consumption and perception. Results. Low consumption of fruit and vegetables and high consumption of sweets and fats were identified. More than 44% of the adolescents had a mistaken perception of their diet. A significant relationship between the stages of change and food consumption was observed. The reclassification among stages of change, by including the pseudo-maintenance and non-reflective action stages, was necessary, considering the high proportion of adolescents who erroneously classified their diets as healthy. Conclusion. Classification of the adolescents into stages of change, together with consumption and perception data, enabled identification of groups at risk, in accordance with their inadequate dietary habits and non-recognition of such habits. (C) 2009 Elsevier Inc. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This paper demonstrates by means of joint time-frequency analysis that the acoustic noise produced by the breaking of biscuits depends on relative humidity and water activity. It also shows that the time-frequency coefficients calculated using the adaptive Gabor transformation algorithm depend on the period of time a biscuit is exposed to humidity. This is a new methodology that can be used to assess the crispness of crisp foods. (c) 2007 Elsevier Ltd. All rights reserved.
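Since the adaptive Gabor transform is not available in common scientific libraries, the sketch below uses an ordinary short-time Fourier transform (scipy.signal.stft) as a stand-in to show how time-frequency coefficients of a crack sound would be extracted; the synthetic signal and all parameters are hypothetical.

```python
import numpy as np
from scipy.signal import stft

fs = 44100                       # sampling rate (Hz), hypothetical
t = np.arange(0, 0.1, 1 / fs)    # 100 ms recording window
# Synthetic "crack": exponentially decaying burst of broadband noise
signal = np.exp(-60 * t) * np.random.randn(t.size)

# Time-frequency coefficients (the paper uses an adaptive Gabor
# transform; the STFT here is only an illustrative stand-in)
f, frames, Z = stft(signal, fs=fs, nperseg=256)
energy = np.abs(Z) ** 2
print(f"{energy.shape[0]} frequency bins x {energy.shape[1]} time frames")
```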

Relevance:

30.00%

Publisher:

Abstract:

Knowing the best 1D model of the crustal and upper mantle structure is useful not only for routine hypocenter determination, but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple GA inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the Sao Francisco craton (SE Brazil) indicated average crustal thicknesses of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for the crustal thickness, Pn velocity, and Vp/Vs ratio when using events with distances up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crusts, however, are less well constrained. Interestingly, in the Tocantins Province the GA + HYPO71 inversion showed a secondary solution (a local minimum) for the average crustal thickness, besides the global minimum solution, caused by the existence of two distinct domains in Central Brazil with very different crustal thicknesses. (C) 2010 Elsevier Ltd. All rights reserved.
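A bare-bones version of such a GA search can be sketched as follows. In the paper the forward step (event relocation and residual computation) is done by HYPO71; here it is replaced by a placeholder residual function, and the model parameterization and ranges are hypothetical.

```python
import numpy as np
rng = np.random.default_rng(0)

# Model vector: [upper-crust Vp, lower-crust Vp, Pn velocity,
#                Conrad depth, Moho depth, Vp/Vs]  (hypothetical ranges)
LO = np.array([5.5, 6.2, 7.8, 10.0, 30.0, 1.65])
HI = np.array([6.4, 7.2, 8.4, 25.0, 50.0, 1.80])

def residual(model):
    """Placeholder for the HYPO71 step: should return the overall RMS of
    P- and S-arrival residuals after relocating all events in this model."""
    target = np.array([6.0, 6.8, 8.1, 20.0, 40.0, 1.73])  # fake optimum
    return float(np.sum(((model - target) / (HI - LO)) ** 2))

def ga(pop_size=50, generations=100, sigma=0.05):
    pop = rng.uniform(LO, HI, size=(pop_size, LO.size))
    for _ in range(generations):
        fitness = np.array([residual(m) for m in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # selection
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0, sigma, kids.shape) * (HI - LO)  # mutation
        pop = np.clip(np.vstack([parents, kids]), LO, HI)
    return pop[np.argmin([residual(m) for m in pop])]

print("Best model:", np.round(ga(), 2))
```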

Relevance:

30.00%

Publisher:

Abstract:

In diet-induced obesity, hypothalamic and systemic inflammatory factors trigger intracellular mechanisms that lead to resistance to the main adipostatic hormones, leptin and insulin. Tumor necrosis factor-alpha (TNF-alpha) is one of the main inflammatory factors produced during this process, and its mechanistic role as an inducer of leptin and insulin resistance has been widely investigated. Most TNF-alpha inflammatory signals are delivered by TNF receptor 1 (TNFR1); however, the role played by this receptor in the context of obesity-associated inflammation is not completely known. Here, we show that TNFR1 knock-out (TNFR1 KO) mice are protected from diet-induced obesity due to increased thermogenesis. Under standard rodent chow or a high-fat diet, TNFR1 KO mice gain significantly less body mass despite increased caloric intake. Visceral adiposity and mean adipocyte diameter are reduced, and blood concentrations of insulin and leptin are lower. Protection from hypothalamic leptin resistance is evidenced by increased leptin-induced suppression of food intake and preserved activation of leptin signal transduction through JAK2, STAT3, and FOXO1. Under the high-fat diet, TNFR1 KO mice present significantly increased expression of the thermogenesis-related neurotransmitter TRH. Further evidence of increased thermogenesis includes increased O2 consumption in respirometry measurements, increased expression of UCP1 and UCP3 in brown adipose tissue and skeletal muscle, respectively, and increased O2 consumption by isolated skeletal muscle fiber mitochondria. This demonstrates that TNF-alpha signaling through TNFR1 is an important mechanism involved in obesity-associated defective thermogenesis.

Relevance:

30.00%

Publisher:

Abstract:

Endothelin peptides have been shown to increase cholinergic neurotransmission in the airway. Genetic differences in airway responsiveness to methacholine were reported in mice. The present study compared the airway reactivity to methacholine in C57Bl/6 and BALB/c mice, the involvement of endothelin in this reactivity, and endothelin levels in lung homogenates. Whole airway reactivity was analyzed by means of an isolated lung preparation in which lungs were perfused through the trachea with warm gassed Krebs solution at 5 ml/min, and changes in perfusion pressure triggered by methacholine at increasing bolus doses (0.1-100 mu g) were recorded. We found that the maximal airway response to methacholine was much greater in C57Bl/6 than in BALB/c mice (Emax 34 +/- 2 vs 12 +/- 1 cmH2O, respectively). Bosentan (a mixed endothelin A/B receptor antagonist; 10 mg/kg, i.p., 30 min before sacrifice) reduced lung responsiveness to methacholine in C57Bl/6 (58% at the EC50 level) but had no effect in the BALB/c strain. This effect seems to be mediated by the endothelin ETA receptor, since it was significantly reduced by the selective endothelin ETA receptor antagonist BQ 123. Immunoreactive endothelin levels were higher in C57Bl/6 than in BALB/c lungs (43 +/- 5 vs 19 +/- 5 pg/g of tissue). In conclusion, airway reactivity to methacholine and lung endothelin content vary markedly between the C57Bl/6 and BALB/c strains. Endothelins upregulate lung responsiveness to methacholine only in C57Bl/6, an effect achieved through the endothelin ETA receptor. (C) 2008 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we describe and evaluate a geometric mass-preserving redistancing procedure for the level set function on general structured grids. The proposed algorithm is adapted from a recent finite element-based method and preserves the mass by means of a localized mass correction. A salient feature of the scheme is the absence of adjustable parameters. The algorithm is tested in two and three spatial dimensions and compared with the widely used partial differential equation (PDE)-based redistancing method using structured Cartesian grids. Through the use of quantitative error measures of interest in level set methods, we show that the overall performance of the proposed geometric procedure is better than that of PDE-based reinitialization schemes, since it is more robust with comparable accuracy. We also show that the algorithm is well suited for the highly stretched curvilinear grids used in CFD simulations. Copyright (C) 2010 John Wiley & Sons, Ltd.
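The mass-correction idea can be illustrated in its simplest form: after redistancing, shift the level set by a constant chosen so the enclosed area is restored. This is a minimal 2D sketch under strong simplifications; the paper's correction is localized rather than a global shift, and the "redistancing" below is exact only because the interface is a circle.

```python
import numpy as np

n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
h = x[1] - x[0]

def area(phi, eps):
    """Smoothed-Heaviside estimate of the area of the region phi < 0."""
    return np.sum(0.5 * (1.0 - np.tanh(phi / eps))) * h * h

# Level set before redistancing: same zero set as a circle of radius 0.5,
# but badly scaled (not a signed distance function)
phi0 = (X**2 + Y**2 - 0.25) * (2.0 + X)
mass0 = area(phi0, 1.5 * h)

# "Redistancing": exact signed distance, possible here since the
# interface is a circle
phi = np.sqrt(X**2 + Y**2) - 0.5

# Global mass correction: bisection for the constant shift c that
# restores the pre-redistancing mass (area decreases as c grows)
lo, hi = -0.1, 0.1
for _ in range(50):
    c = 0.5 * (lo + hi)
    if area(phi + c, 1.5 * h) > mass0:
        lo = c
    else:
        hi = c
phi += c
print(f"mass before: {mass0:.6f}  after correction: {area(phi, 1.5*h):.6f}")
```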

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present a genetic algorithm with new components to tackle capacitated lot sizing and scheduling problems with sequence-dependent setups, which appear in a wide range of industries, from soft drink bottling to food manufacturing. Finding a feasible solution to highly constrained problems is often a very difficult task. Various strategies have been applied to deal with infeasible solutions throughout the search. We propose a new scheme for classifying individuals based on nested domains, which grades solutions according to their level of infeasibility, represented in our case by bands of additional production hours (overtime). Within each band, individuals are differentiated only by their fitness function. As iterations proceed, the widths of the bands are dynamically adjusted to improve the convergence of the individuals into the feasible domain. Numerical experiments on highly capacitated instances show the effectiveness of this computationally tractable approach in guiding the search toward the feasible domain. Our approach outperforms other state-of-the-art approaches and commercial solvers. (C) 2009 Elsevier Ltd. All rights reserved.
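The banded classification can be sketched as a sort key: individuals are compared first by the overtime band they fall into and only then by fitness, with the band width shrinking as iterations proceed. A minimal illustration with hypothetical names and numbers, not the paper's full GA:

```python
def band(overtime_hours: float, band_width: float) -> int:
    """Nested infeasibility domains: 0 = feasible (no overtime),
    1 = first band of extra hours, 2 = second band, ..."""
    if overtime_hours <= 0:
        return 0
    return 1 + int(overtime_hours // band_width)

def rank(population, band_width):
    """Sort first by band (less infeasible is better), then by cost."""
    key = lambda ind: (band(ind["overtime"], band_width), ind["cost"])
    return sorted(population, key=key)

population = [
    {"overtime": 0.0, "cost": 120.0},
    {"overtime": 3.5, "cost": 90.0},
    {"overtime": 0.8, "cost": 100.0},
]
width = 2.0                      # dynamically adjusted band width
for generation in range(3):
    population = rank(population, width)
    width *= 0.7                 # tighten bands to push toward feasibility
print(population[0])             # the feasible individual ranks first
```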

Relevance:

30.00%

Publisher:

Abstract:

A numerical algorithm for fully dynamical lubrication problems based on the Elrod-Adams formulation of the Reynolds equation with mass-conserving boundary conditions is described. A simple but effective relaxation scheme is used to update the solution while maintaining the complementarity conditions on the variables that represent the pressure and fluid fraction. The equations of motion are discretized in time using Newmark's scheme, and the dynamical variables are updated within the same relaxation process just mentioned. The good behavior of the proposed algorithm is illustrated in two examples: an oscillatory squeeze flow (for which the exact solution is available) and a dynamically loaded journal bearing. This article is accompanied by the ready-to-compile source code with the implementation of the proposed algorithm. [DOI: 10.1115/1.3142903]
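The Newmark time discretization used for the equations of motion can be sketched for a single-degree-of-freedom system as follows. This is the standard constant-average-acceleration Newmark step, not the paper's coupled lubrication solver; all physical parameters are hypothetical.

```python
import numpy as np

m, c, k = 1.0, 0.2, 50.0           # mass, damping, stiffness (hypothetical)
beta, gamma = 0.25, 0.5            # constant-average-acceleration Newmark
dt, steps = 1e-3, 5000

def force(t):
    return np.sin(10.0 * t)        # external load, hypothetical

u, v = 0.0, 0.0
a = (force(0.0) - c * v - k * u) / m
keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
for i in range(1, steps + 1):
    t = i * dt
    # Effective load from the predictor terms of Newmark's scheme
    feff = (force(t)
            + m * (u / (beta * dt**2) + v / (beta * dt)
                   + (0.5 / beta - 1) * a)
            + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                   + dt * (gamma / (2 * beta) - 1) * a))
    u_new = feff / keff
    a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
             - (0.5 / beta - 1) * a)
    v = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, a = u_new, a_new
print(f"displacement at t={steps*dt:.1f}s: {u:.4e}")
```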

Relevance:

30.00%

Publisher:

Abstract:

The representation of interfaces by means of the algebraic moving-least-squares (AMLS) technique is addressed. This technique, in which the interface is represented by an unconnected set of points, is interesting for evolving fluid interfaces since there is no surface connectivity. The position of the surface points can thus be updated without concerns about the quality of any surface triangulation. We introduce a novel AMLS technique especially designed for evolving-interface applications, which we denote RAMLS (for Robust AMLS). The main advantages with respect to previous AMLS techniques are increased robustness, computational efficiency, and freedom from user-tuned parameters. Further, we propose a new front-tracking method based on the Lagrangian advection of the unconnected point set that defines the RAMLS surface. We assume that a background Eulerian grid is defined with some grid spacing h. The advection of the point set makes the surface evolve in time. The point cloud can be regenerated at any time (in particular, we regenerate it each time step) by intersecting the gridlines with the evolved surface, which guarantees that the density of points on the surface is always well balanced. The intersection algorithm is essentially a ray-tracing algorithm, well studied in computer graphics, in which a line (ray) is traced so as to detect all intersections with a surface. Moreover, the tracing of each gridline is independent and can thus be performed in parallel. Several tests are reported, assessing first the accuracy of the proposed RAMLS technique and then that of the front-tracking method based on it. Comparison with previous Eulerian, Lagrangian, and hybrid techniques encourages further development of the proposed method for fluid mechanics applications. (C) 2008 Elsevier Inc. All rights reserved.
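The point-cloud regeneration step can be illustrated in 2D: trace each gridline, detect sign changes of an implicit surface function, and place a point at each linearly interpolated crossing. In the sketch below a circle stands in for the RAMLS surface; the actual method intersects rays with the moving-least-squares surface.

```python
import numpy as np

def surface(x, y):
    """Implicit stand-in for the RAMLS surface: a circle of radius 0.6."""
    return x**2 + y**2 - 0.6**2

n = 33
grid = np.linspace(-1.0, 1.0, n)
points = []

# Trace horizontal gridlines: look for sign changes along x
for y in grid:
    vals = surface(grid, y)
    for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        t = vals[i] / (vals[i] - vals[i + 1])   # linear interpolation
        points.append((grid[i] + t * (grid[i + 1] - grid[i]), y))

# Trace vertical gridlines: sign changes along y
for x in grid:
    vals = surface(x, grid)
    for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        t = vals[i] / (vals[i] - vals[i + 1])
        points.append((x, grid[i] + t * (grid[i + 1] - grid[i])))

print(len(points), "surface points regenerated from gridline crossings")
```

Each gridline is processed independently, which is what makes the regeneration step embarrassingly parallel.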

Relevance:

30.00%

Publisher:

Abstract:

This article describes and compares three heuristics for a variant of the Steiner tree problem with revenues, which includes budget and hop constraints. First, a greedy method which obtains good approximations in short computational times is proposed. This initial solution is then improved by means of a destroy-and-repair method or a tabu search algorithm. Computational results compare the three methods in terms of accuracy and speed. (C) 2007 Elsevier B.V. All rights reserved.
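The greedy construction can be sketched roughly as follows: repeatedly attach the vertex with the best revenue-to-cost ratio that is still reachable within the budget and hop limits. This is a generic toy illustration, not the authors' exact heuristic; the graph, revenues, and limits are all hypothetical.

```python
# graph: vertex -> list of (neighbor, edge_cost)
graph = {
    0: [(1, 3.0), (2, 2.0)],
    1: [(0, 3.0), (3, 4.0)],
    2: [(0, 2.0), (3, 1.0)],
    3: [(1, 4.0), (2, 1.0)],
}
revenue = {0: 0.0, 1: 5.0, 2: 1.0, 3: 6.0}
budget, max_hops, root = 6.0, 2, 0

tree = {root}
hops = {root: 0}
cost_used, collected = 0.0, revenue[root]

while True:
    # Candidate edges leaving the tree, within hop and budget limits
    candidates = [
        (revenue[v] / c, u, v, c)
        for u in tree
        for v, c in graph[u]
        if v not in tree and hops[u] + 1 <= max_hops
        and cost_used + c <= budget
    ]
    if not candidates:
        break
    _, u, v, c = max(candidates)            # best revenue-to-cost ratio
    tree.add(v)
    hops[v] = hops[u] + 1
    cost_used += c
    collected += revenue[v]

print(f"tree={sorted(tree)} cost={cost_used} revenue={collected}")
```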

Relevance:

30.00%

Publisher:

Abstract:

The amount of textual information stored digitally is growing every day. However, our capability of processing and analyzing that information is not growing at the same pace. To overcome this limitation, it is important to develop semiautomatic processes to extract relevant knowledge from textual information, such as the text mining process. One of the main and most expensive stages of the text mining process is text pre-processing, in which the unstructured text is transformed into a structured format such as an attribute-value table. The stemming process, i.e., linguistic normalization, is usually used to find the attributes of this table. However, the stemming process is strongly dependent on the language in which the original textual information is given. Furthermore, for most languages, the stemming algorithms proposed in the literature are computationally expensive. In this work, several improvements of the well-known Porter stemming algorithm for the Portuguese language, which exploit the characteristics of this language, are proposed. Experimental results show that the proposed algorithm executes in far less time without affecting the quality of the generated stems.
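The pre-processing stage described above can be illustrated with an off-the-shelf Portuguese stemmer. The snippet uses NLTK's Snowball stemmer purely as a stand-in, since the paper's improved Porter variant is not packaged here; the toy documents are hypothetical.

```python
from nltk.stem.snowball import SnowballStemmer  # pip install nltk

stemmer = SnowballStemmer("portuguese")

# Toy document collection: the stems become the attributes of the
# attribute-value table built during text pre-processing
docs = ["mineração de textos", "minerar texto automaticamente"]
for doc in docs:
    stems = [stemmer.stem(w) for w in doc.split()]
    print(stems)
# Words sharing a stem collapse to a single attribute, so morphological
# variants of the same word are counted together across documents.
```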

Relevance:

30.00%

Publisher:

Abstract:

Conventional procedures employed in modeling the viscoelastic properties of polymers rely on determining the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced herein constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Rather than comparing numerical results, the purpose of this paper is to highlight some subtle differences between the two strategies and to focus on the properties of the exploited technique that open new possibilities for the field. To illustrate this, the cases studied show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with much fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
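The kind of simulation-based evolutionary fit described above can be sketched as a simple (1+1) evolution strategy that fits a discrete relaxation spectrum, i.e. a Prony series G(t) = sum_i g_i exp(-t/tau_i), to relaxation data. Everything here (synthetic data, operators, parameters) is a hypothetical minimal example, not the paper's method.

```python
import numpy as np
rng = np.random.default_rng(1)

t = np.logspace(-2, 2, 60)
# Synthetic "measured" relaxation modulus: two-mode Prony series + noise
G_data = 3.0 * np.exp(-t / 0.1) + 1.0 * np.exp(-t / 10.0)
G_data *= 1 + 0.01 * rng.standard_normal(t.size)

def prony(params):
    """params = [log g_1, log tau_1, log g_2, log tau_2]."""
    g = np.exp(params[0::2])
    tau = np.exp(params[1::2])
    return np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

def error(params):
    return float(np.mean((prony(params) - G_data) ** 2))

# (1+1) evolution strategy: mutate, keep the child only if it is better
best = rng.normal(0, 1, 4)
for _ in range(4000):
    child = best + rng.normal(0, 0.1, 4)
    if error(child) < error(best):
        best = child
print("fitted g_i :", np.round(np.exp(best[0::2]), 2))
print("fitted tau :", np.round(np.exp(best[1::2]), 2))
```

Working in log space keeps the moduli and relaxation times positive without explicit constraints, which is one reason evolutionary fits of this kind are easy to set up.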