110 results for Multi-objective algorithm
Abstract:
The objective of this work was to use a linear programming model to optimize water use in the Baixo Acarau-CE Irrigation District, proposing the best combination of crop types for plots of 8.0 ha. The model maximizes the net benefit of the small farmer, incorporating constraints on water and land availability as well as market constraints. Considering the crop types and the constraints, the study led to the following conclusions: 1. Water availability in the District was not a limiting resource, whereas all available land was assigned in six of the seven cultivation plans analyzed; water became restrictive relative to land only when its availability was reduced to 60% of its actual value. 2. The combination of soursop and melon presented the largest net benefit, corresponding to R$ 5,250.00/ha/yr, with the planting area of each crop taking up 50% of the plot. 3. The plan that substitutes the soursop crop resulted in a decrease in annual net revenue of 5.87%, while the plan that substitutes both soursop and melon simultaneously produced the lowest net revenue, a reduction of 33.8%.
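A minimal sketch of the kind of crop-mix linear program the abstract describes, using scipy; every coefficient below (net benefits, water demands, water availability) is an illustrative assumption, not the study's data:

```python
# Illustrative crop-mix LP: choose areas (ha) of soursop and melon on an
# 8.0 ha plot. All coefficients are assumptions for demonstration only.
from scipy.optimize import linprog

net_benefit = [-5000.0, -5500.0]   # R$/ha/yr, negated because linprog minimizes
water_demand = [[9000.0, 7000.0]]  # m3/ha/yr per crop (assumed)
water_available = [70000.0]        # m3/yr available to the plot (assumed)
bounds = [(0.0, 4.0)] * 2          # market constraint: each crop <= 50% of plot

res = linprog(net_benefit, A_ub=water_demand, b_ub=water_available,
              A_eq=[[1.0, 1.0]], b_eq=[8.0],  # planted areas sum to 8.0 ha
              bounds=bounds)
print(res.x, -res.fun)             # optimal areas (ha) and maximized net benefit
```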
Abstract:
Objective: To determine whether information from genetic risk variants for diabetes is associated with the incidence of cardiovascular events. Methods: Of the roughly 30 known diabetes-associated genes, we genotyped single-nucleotide polymorphisms at the 10 loci most strongly associated with type-2 diabetes in 425 subjects from the MASS-II Study, a randomized study in patients with multi-vessel coronary artery disease. The combined genetic information was summarized as the number of risk alleles for diabetes. The performance of the genetic models with respect to the incidence of major cardiovascular events was analyzed through Kaplan-Meier curve comparison and Cox hazard models, and the discriminatory ability of the models for cardiovascular events was assessed by calculating the area under the ROC curve. Results: Genetic information was able to predict the 5-year incidence of major cardiovascular events and overall mortality in non-diabetic individuals, even after adjustment for potential confounders including fasting glycemia. Non-diabetic individuals at high genetic risk had an incidence of events similar to that of diabetic individuals (cumulative hazard of 33.0% versus 35.1% for diabetic subjects). Adding the combined genetic information to clinical predictors significantly improved the AUC for the incidence of cardiovascular events (AUC = 0.641 versus 0.610). Conclusions: Combined information from genetic variants for diabetes risk is associated with the incidence of major cardiovascular events, including overall mortality, in non-diabetic individuals with coronary artery disease.
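A sketch of the combined-genetic-information idea: count risk alleles across the genotyped loci, then compare the discriminatory ability (ROC AUC) of a clinical model with and without the genetic score. All data below are simulated placeholders, not MASS-II data:

```python
# Combined genetic risk score: sum of risk alleles across loci, then AUC
# comparison of clinical model vs. clinical + genetic model. Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 425
genotypes = rng.integers(0, 3, size=(n, 10))   # 0/1/2 risk alleles at 10 loci
risk_score = genotypes.sum(axis=1)             # combined genetic information
clinical = rng.normal(size=(n, 3))             # stand-ins for clinical predictors
logit = 0.15 * (risk_score - risk_score.mean()) + clinical[:, 0]
events = rng.random(n) < 1 / (1 + np.exp(-logit))

base = LogisticRegression().fit(clinical, events)
full = LogisticRegression().fit(np.column_stack([clinical, risk_score]), events)
print(roc_auc_score(events, base.predict_proba(clinical)[:, 1]))
print(roc_auc_score(events, full.predict_proba(
    np.column_stack([clinical, risk_score]))[:, 1]))
```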
Abstract:
Mature weight breeding values were estimated using a multi-trait animal model (MM) and a random regression animal model (RRM). Data consisted of 82 064 weight records from 8 145 animals, recorded from birth to eight years of age. Weights at standard ages were considered in the MM. All models included contemporary groups as fixed effects, and age of dam (linear and quadratic effects) and animal age as covariates. In the RRM, mean trends were modelled through a cubic regression on orthogonal polynomials of animal age, and direct and maternal genetic effects and direct and maternal permanent environmental effects were also included as random effects. Legendre polynomials of orders 4, 3, 6 and 3 were used for direct genetic, maternal genetic, direct permanent environmental and maternal permanent environmental effects, respectively, considering five classes of residual variances. Direct heritability estimates for mature weight (five years) were 0.35 (MM) and 0.38 (RRM). The rank correlation between sires' breeding values estimated by MM and RRM was 0.82. However, when selecting the top 2% (12) or 10% (62) of the young sires based on MM-predicted breeding values, respectively 71% and 80% of the same sires would be selected if RRM estimates were used instead. The RRM modelled the changes in the (co)variances with age adequately, and greater breeding value accuracies can be expected using this model.
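A brief sketch of how Legendre-polynomial covariates for a random regression on age can be built with numpy; the ages and polynomial order below are illustrative, not the study's configuration:

```python
# Legendre covariates for a random regression model: ages are standardized
# to [-1, 1] and the first `order` Legendre polynomials are evaluated.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(age, age_min, age_max, order):
    """Evaluate Legendre polynomials L_0..L_{order-1} at standardized ages."""
    x = 2.0 * (age - age_min) / (age_max - age_min) - 1.0
    # np.eye(order) selects one polynomial per column of the result.
    return legendre.legval(x, np.eye(order)).T

ages = np.array([365, 730, 1825, 2920])          # days: 1, 2, 5, 8 years
Z = legendre_covariates(ages, 1, 2920, order=4)  # e.g. order 4 for direct genetic
print(Z.shape)                                   # (4 ages, 4 covariates)
```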
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships derived for different raining-type systems between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and −0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the formulation proposed is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM-Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
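The validation statistics quoted above (mean error, normalized bias, linear correlation) can be computed as in this short sketch; the arrays are placeholders, not TRMM or S-Pol data:

```python
# Validation statistics for a rainfall retrieval against a reference radar.
import numpy as np

def validation_stats(estimate, reference):
    """Mean error (mm/h), normalized bias and linear correlation."""
    estimate, reference = np.asarray(estimate), np.asarray(reference)
    mean_error = np.mean(estimate - reference)
    normalized_bias = (estimate.sum() - reference.sum()) / reference.sum()
    correlation = np.corrcoef(estimate, reference)[0, 1]
    return mean_error, normalized_bias, correlation

est = [1.2, 0.0, 3.5, 7.8]   # retrieved rain rate, mm/h (placeholder)
ref = [1.0, 0.2, 3.0, 8.4]   # radar reference, mm/h (placeholder)
print(validation_stats(est, ref))
```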
Abstract:
Context. B[e] supergiants are luminous, massive post-main sequence stars exhibiting non-spherical winds, forbidden lines, and hot dust in a disc-like structure. The physical properties of their rich and complex circumstellar environment (CSE) are not well understood, partly because these CSE cannot be easily resolved at the large distances found for B[e] supergiants (typically ≳ 1 kpc). Aims. From mid-IR spectro-interferometric observations obtained with VLTI/MIDI we seek to resolve and study the CSE of the Galactic B[e] supergiant CPD-57° 2874. Methods. For a physical interpretation of the observables (visibilities and spectrum) we use our ray-tracing radiative transfer code (FRACS), which is optimised for thermal spectro-interferometric observations. Results. Thanks to the short computing time required by FRACS (<10 s per monochromatic model), best-fit parameters and uncertainties for several physical quantities of CPD-57° 2874 were obtained, such as inner dust radius, relative flux contribution of the central source and of the dusty CSE, dust temperature profile, and disc inclination. Conclusions. The analysis of VLTI/MIDI data with FRACS allowed one of the first direct determinations of physical parameters of the dusty CSE of a B[e] supergiant based on interferometric data and using a full model-fitting approach. In a larger context, the study of B[e] supergiants is important for a deeper understanding of the complex structure and evolution of hot, massive stars.
Abstract:
Context. The Abell 222 and 223 clusters are located at an average redshift z ≈ 0.21 and are separated by 0.26 deg. Signatures of mergers have been previously found in these clusters, both in X-rays and at optical wavelengths, thus motivating our study. In X-rays, they are relatively bright, and Abell 223 shows a double structure. A filament has also been detected between the clusters both at optical and X-ray wavelengths. Aims. We analyse the optical properties of these two clusters based on deep imaging in two bands, derive their galaxy luminosity functions (GLFs) and correlate these properties with X-ray characteristics derived from XMM-Newton data. Methods. The optical part of our study is based on archive images obtained with the CFHT Megaprime/Megacam camera, covering a total region of about 1 deg², or 12.3 × 12.3 Mpc² at a redshift of 0.21. The X-ray analysis is based on archive XMM-Newton images. Results. The GLFs of Abell 222 in the g' and r' bands are well fit by a Schechter function; the GLF is steeper in r' than in g'. For Abell 223, the GLFs in both bands require a second component at bright magnitudes, added to a Schechter function; they are similar in both bands. The Serna & Gerbal method separates the two clusters well. No obvious filamentary structures are detected at very large scales around the clusters, but a third cluster at the same redshift, Abell 209, is located at a projected distance of 19.2 Mpc. X-ray temperature and metallicity maps reveal that the temperature and metallicity of the X-ray gas are quite homogeneous in Abell 222, while they are very perturbed in Abell 223. Conclusions. The Abell 222/Abell 223 system is complex. The two clusters that form this structure present very different dynamical states. Abell 222 is a smaller, less massive and almost isothermal cluster. On the other hand, Abell 223 is more massive and has most probably been crossed by a subcluster on its way to the northeast. As a consequence, the temperature distribution is very inhomogeneous. Signs of recent interactions are also detected in the optical data, where this cluster shows a "perturbed" GLF. In summary, the multiwavelength analyses of Abell 222 and Abell 223 are used to investigate the connection between the ICM and the cluster galaxy properties in an interacting system.
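A short sketch of fitting a Schechter function, expressed in absolute magnitudes, to binned galaxy counts as done for the GLFs above; the synthetic counts and starting parameters (phi*, M*, alpha) are illustrative assumptions:

```python
# Schechter-function fit to a synthetic galaxy luminosity function.
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter function in absolute magnitudes."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

mags = np.arange(-23.0, -16.0, 0.5)
truth = schechter_mag(mags, 1e-3, -21.0, -1.2)
counts = truth * np.random.default_rng(1).normal(1.0, 0.05, mags.size)

popt, _ = curve_fit(schechter_mag, mags, counts, p0=[1e-3, -21.0, -1.2])
print(popt)   # recovered (phi*, M*, alpha)
```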
Abstract:
Multispectral widefield optical imaging has the potential to improve early detection of oral cancer. The appropriate selection of illumination and collection conditions is required to maximize diagnostic ability. The goals of this study were to (i) evaluate image contrast between oral cancer/precancer and non-neoplastic mucosa for a variety of imaging modalities and illumination/collection conditions, and (ii) use classification algorithms to evaluate and compare the diagnostic utility of these modalities to discriminate cancers and precancers from normal tissue. Narrowband reflectance, autofluorescence, and polarized reflectance images were obtained from 61 patients and 11 normal volunteers. Image contrast was compared to identify modalities and conditions yielding greatest contrast. Image features were extracted and used to train and evaluate classification algorithms to discriminate tissue as non-neoplastic, dysplastic, or cancer; results were compared to histologic diagnosis. Autofluorescence imaging at 405-nm excitation provided the greatest image contrast, and the ratio of red-to-green fluorescence intensity computed from these images provided the best classification of dysplasia/cancer versus non-neoplastic tissue. A sensitivity of 100% and a specificity of 85% were achieved in the validation set. Multispectral widefield images can accurately distinguish neoplastic and non-neoplastic tissue; however, the ability to separate precancerous lesions from cancers with this technique was limited. (C) 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3516593]
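A minimal sketch of using the red-to-green fluorescence intensity ratio as a classification feature, as described above; the threshold value is a hypothetical placeholder, not the study's cut-off:

```python
# Red-to-green intensity ratio over a region of interest as a classifier
# feature; the 0.8 threshold is an assumed value for illustration only.
import numpy as np

def red_green_ratio(rgb, mask):
    """Mean red / mean green intensity inside a region of interest."""
    red = rgb[..., 0][mask].astype(float).mean()
    green = rgb[..., 1][mask].astype(float).mean()
    return red / max(green, 1e-9)

def classify(ratio, threshold=0.8):
    return "dysplasia/cancer" if ratio > threshold else "non-neoplastic"

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(64, 64, 3))   # stand-in RGB image
roi = np.ones(image.shape[:2], dtype=bool)       # whole-frame ROI
print(classify(red_green_ratio(image, roi)))
```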
Abstract:
We review recent developments in manifold components and the introduction of light-emitting-diode technology in spectroscopic detection in order to evaluate the tremendous possibilities offered by multi-commutation for in-field and in-situ measurements, based on the use of multi-pumping and low-voltage portable batteries, which make possible a dramatic reduction in the size, weight and power requirements of spectrometric devices. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Multi-pumping flow systems exploit pulsed flows delivered by solenoid pumps. Their improved performance relies on the enhanced radial mass transport inherent to the pulsed flow, which is a consequence of the establishment of vortices and thus a tendency towards turbulent mixing. This paper presents several pieces of evidence of turbulent mixing in relation to pulsed flows, such as recorded peak shape, the establishment of fluidized beds, the exploitation of flow reversal, and the implementation of relatively slow chemical reactions and/or heating of the reaction medium. In addition, the Reynolds number associated with the GO period of a pulsed flow is estimated, and photographic images of dispersing samples flowing under laminar and pulsed flow conditions are presented. (C) 2009 Elsevier B.V. All rights reserved.
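A back-of-the-envelope sketch of estimating the Reynolds number for the GO (pump-on) period of a pulsed flow; the stroke volume, GO duration and tubing diameter below are assumed values, not the paper's:

```python
# Instantaneous Reynolds number during the GO period of a pulsed flow.
import math

def reynolds_number(flow_rate, diameter, density=1000.0, viscosity=1e-3):
    """Re = rho * v * D / mu for flow in a circular tube (water near 20 C)."""
    velocity = flow_rate / (math.pi * diameter ** 2 / 4.0)
    return density * velocity * diameter / viscosity

stroke_volume = 10e-9   # 10 uL dispensed per pulse (assumed), m3
go_period = 0.01        # pump-on time per pulse (assumed), s
diameter = 0.5e-3       # tubing inner diameter (assumed), m
print(reynolds_number(stroke_volume / go_period, diameter))  # ~2500
```

With these assumed figures the instantaneous Reynolds number lands near the laminar-turbulent transition, which is the kind of estimate the abstract refers to.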
Abstract:
Traditionally, chronotype classification is based on the Morningness-Eveningness Questionnaire (MEQ). Implicit in the classification is that intermediate individuals give intermediate scores to most of the MEQ questions. However, a small group of individuals shows a different pattern of answers: on some questions they answer as "morning-types" and on others as "evening-types," resulting in an intermediate total score. "Evening-type" and "morning-type" answers were scored as A1 and A4, respectively; intermediate answers were scored as A2 and A3. The following index was applied: Bimodality Index = (ΣA1 × ΣA4)² − (ΣA2 × ΣA3)². Neither-types with positive bimodality scores were classified as bimodal. If our hypothesis is validated by objective data, an update of the chronotype classification will be required. (Author correspondence: brunojm@ymail.com)
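A direct implementation of the bimodality index as defined above, tallying per-question answer categories A1..A4 across MEQ items:

```python
# Bimodality Index = (sum(A1) * sum(A4))**2 - (sum(A2) * sum(A3))**2,
# where A1..A4 count answers in each category across all questions.
from collections import Counter

def bimodality_index(answers):
    """answers: iterable of per-question categories 1..4 (1=evening, 4=morning)."""
    tally = Counter(answers)
    a1, a2, a3, a4 = (tally.get(k, 0) for k in (1, 2, 3, 4))
    return (a1 * a4) ** 2 - (a2 * a3) ** 2

# A neither-type with extreme answers on both ends scores positive (bimodal):
print(bimodality_index([1, 1, 4, 4, 2, 3]))   # (2*2)**2 - (1*1)**2 = 15
```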
Abstract:
Purpose Adverse drug events (ADEs) are harmful and occur with alarming frequency in critically ill patients. Complex pharmacotherapy with multiple medications increases the probability of a drug interaction (DI) and of ADEs in patients in intensive care units (ICUs). The objective of this study was to determine the frequency of ADEs among patients in the ICU of a university hospital and the drugs implicated, and to investigate factors associated with ADEs. Methods This cross-sectional study investigated 299 medical records of patients hospitalized for 5 or more days in an ICU. ADEs were identified through the intensive monitoring adopted in hospital pharmacovigilance and also through ADE triggers. Adverse drug reaction (ADR) causality was classified using the Naranjo algorithm. Data were analyzed through descriptive analysis and through univariate and multiple logistic regression. Results The most frequent ADEs were type A ADRs, of possible causality and moderate severity. The most frequent ADR was drug-induced acute kidney injury. Patients with ADEs related to DIs corresponded to 7% of the sample. The multiple logistic regression showed that length of hospitalization (OR = 1.06) and administration of cardiovascular drugs (OR = 2.2) were associated with the occurrence of ADEs. Conclusion Adverse drug reactions of clinical significance were the most frequent ADEs in the ICU studied, which reduces patient safety. The number of ADEs related to drug interactions was small, suggesting that clinical manifestations of drug interactions that harm patients are not frequent in ICUs.
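A sketch of the kind of association analysis reported: logistic regression of ADE occurrence on length of stay and cardiovascular-drug use, with odds ratios obtained by exponentiating the coefficients. The records below are simulated placeholders, not the study's data:

```python
# Odds ratios from a logistic regression of ADE occurrence on two predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
length_of_stay = rng.integers(5, 40, size=299)   # days in ICU (simulated)
cardio_drug = rng.integers(0, 2, size=299)       # cardiovascular drug given?
logit = -3.0 + 0.06 * length_of_stay + 0.8 * cardio_drug
ade = rng.random(299) < 1 / (1 + np.exp(-logit)) # simulated ADE outcomes

X = np.column_stack([length_of_stay, cardio_drug])
model = LogisticRegression(C=1e6).fit(X, ade)    # weak regularization ~ plain MLE
print(np.exp(model.coef_))                       # odds ratios per predictor
```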
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems on large-scale networks. Simulation results have shown that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN has shown a sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
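A minimal sketch of a node-depth encoding: a tree is stored as the list of (node, depth) pairs produced by a depth-first traversal from its root, which is what lets tree-preserving operators avoid many explicit radiality constraints. The toy feeder below is illustrative, not a real network:

```python
# Node-depth encoding of a tree: (node, depth) pairs in DFS visit order.
def node_depth_encoding(adjacency, root):
    """DFS from root, recording each node with its depth in visit order."""
    encoding, stack, seen = [], [(root, 0)], set()
    while stack:
        node, depth = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        encoding.append((node, depth))
        for neighbour in reversed(adjacency.get(node, [])):
            if neighbour not in seen:
                stack.append((neighbour, depth + 1))
    return encoding

feeder = {0: [1, 2], 1: [3], 2: [4, 5]}   # toy radial feeder (bus adjacency)
print(node_depth_encoding(feeder, 0))     # [(0,0), (1,1), (3,2), (2,1), (4,2), (5,2)]
```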
Abstract:
In this article a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between the colony members and a simple chemotactic strategy to change the bacterial positions in order to explore the search space and find several optimal solutions. The proposed algorithm is validated using 11 benchmark problems, with three different performance measures used to compare its performance with the NSGA-II genetic algorithm and with the particle-swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
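A sketch of the fast nondominated sorting procedure mentioned above (introduced with NSGA-II), which splits a population into successive Pareto fronts; objectives here are minimized:

```python
# Fast nondominated sorting: front 0 is the nondominated (Pareto) set.
def dominates(a, b):
    """a dominates b if a is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(objectives):
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions each p dominates
    domination_count = [0] * n              # how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objectives[p], objectives[q]):
                dominated_by[p].append(q)
            elif dominates(objectives[q], objectives[p]):
                domination_count[p] += 1
        if domination_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                domination_count[q] -= 1
                if domination_count[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]

print(fast_nondominated_sort([(1, 4), (2, 2), (3, 1), (3, 3), (4, 4)]))
# -> [[0, 1, 2], [3], [4]]
```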
Abstract:
The general flowshop scheduling problem is a production problem where a set of n jobs has to be processed with an identical flow pattern on m machines. In permutation flowshops the sequence of jobs is the same on all machines. A significant research effort has been devoted to sequencing jobs in a flowshop to minimize the makespan. This paper describes the application of a Constructive Genetic Algorithm (CGA) to makespan minimization in flowshop scheduling. The CGA was proposed recently as an alternative to traditional GA approaches, particularly for evaluating schemata directly. The population, initially formed only by schemata, evolves under recombination to a population of well-adapted structures (schemata instantiation). The CGA implemented is based on the classic NEH heuristic, with a local search heuristic used to define the fitness functions. The parameters of the CGA are calibrated using a Design of Experiments (DOE) approach. The computational results are compared against other successful algorithms from the literature on Taillard's well-known standard benchmark. The computational experience shows that this innovative CGA approach provides competitive results for flowshop scheduling problems. (C) 2007 Elsevier Ltd. All rights reserved.
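A compact sketch of the classic NEH heuristic that the CGA builds on: order jobs by decreasing total processing time, then insert each job at the position of the partial sequence that minimizes makespan. The processing times below are illustrative:

```python
# NEH constructive heuristic for the permutation flowshop makespan problem.
def makespan(sequence, p):
    """p[job][machine] = processing time; returns the sequence's makespan."""
    m = len(p[0])
    completion = [0.0] * m
    for job in sequence:
        for k in range(m):
            start = max(completion[k], completion[k - 1] if k else 0.0)
            completion[k] = start + p[job][k]
    return completion[-1]

def neh(p):
    # Sort jobs by decreasing total processing time, then insert greedily.
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    sequence = []
    for job in jobs:
        candidates = [sequence[:i] + [job] + sequence[i:]
                      for i in range(len(sequence) + 1)]
        sequence = min(candidates, key=lambda s: makespan(s, p))
    return sequence

times = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]  # 4 jobs x 3 machines
seq = neh(times)
print(seq, makespan(seq, times))
```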