79 results for Linear decision rules
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The capacity to distinguish colony members from strangers is a key component in social life. In social insects, this extends to the brood and involves discrimination of queen eggs. Chemical substances communicate colony affiliation for both adults and brood; thus, in theory, all colony members should be able to recognize fellow nestmates. In this study, we investigate the ability of Dinoponera quadriceps workers to discriminate nestmate and non-nestmate eggs based on cuticular hydrocarbon composition. We analyzed whether cuticular hydrocarbons present on the eggs provide cues of discrimination. The results show that egg recognition in D. quadriceps is related to both age and the functional role of workers. Brood care workers were able to distinguish nestmate from non-nestmate eggs, while callow and forager workers were unable to do so.
Abstract:
Background: The present work aims at applying decision theory to radiological image quality control (QC) in the diagnostic routine. The main problem addressed in the framework of decision theory is whether to accept or reject a film lot of a radiology service. The probability of each decision for a given set of variables was obtained from the selected films. Methods: Based on the routine of a radiology service, a decision probability function was determined for each considered combination of characteristics. These characteristics were related to film quality control. These parameters were framed in a set of 8 possibilities, resulting in 256 possible decision rules. In order to determine a general utility function to assess the decision risk, we used a single parameter called r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), resulting in eight possible combinations. Results: Depending on the value of r, the decision carries more or less risk. The utility function was evaluated in order to determine the probability of a decision. The decision was made using the opinions of patients or administrators from a radiology service center. Conclusion: The model is a formal quantitative approach to decision-making about medical imaging quality, providing an instrument to discriminate what is really necessary in order to accept or reject a film or a film lot. The method presented herein can help to assess the risk level of an incorrect radiological diagnosis decision.
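The abstract does not give the actual probabilities or the utility function, so the following is only a minimal sketch of the combinatorics it describes: 8 binary characteristic combinations, the 2^8 = 256 accept/reject decision rules, and a made-up utility parameterized by r. All numbers, the uniform probabilities and the utility shape are assumptions of this illustration, not the paper's model.

```python
# Illustrative sketch only: enumerate the 2^8 = 256 decision rules mapping each of
# 8 film-characteristic combinations to accept/reject, and score them with a
# hypothetical utility parameterized by r.
from itertools import product

combos = list(product([0, 1], repeat=3))          # 8 combinations of 3 binary characteristics
p = {c: 1.0 / len(combos) for c in combos}        # hypothetical probability of each combination

def utility(accept, combo, r):
    """Hypothetical payoff: rewards accepting good films and rejecting bad ones,
    with r adding a penalty to risky acceptances."""
    quality = sum(combo)                          # crude quality score 0..3
    base = quality - 1.5 if accept else 1.5 - quality
    return base - (r if accept and quality < 2 else 0.0)

best_rule, best_eu = None, float("-inf")
for rule in product([0, 1], repeat=len(combos)):  # 256 possible decision rules
    eu = sum(p[c] * utility(bool(a), c, r=0.5) for a, c in zip(rule, combos))
    if eu > best_eu:
        best_rule, best_eu = rule, eu

print("best rule:", best_rule, "expected utility:", round(best_eu, 3))
```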
Abstract:
This paper presents new insights and novel algorithms for strategy selection in sequential decision making with partially ordered preferences; that is, where some strategies may be incomparable with respect to expected utility. We assume that incomparability amongst strategies is caused by indeterminacy/imprecision in probability values. We investigate six criteria for consequentialist strategy selection: Gamma-Maximin, Gamma-Maximax, Gamma-Maximix, Interval Dominance, Maximality and E-admissibility. We focus on the popular decision tree and influence diagram representations. Algorithms resort to linear/multilinear programming; we describe implementation and experiments. (C) 2010 Elsevier B.V. All rights reserved.
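As a rough illustration of two of the criteria named in the abstract, the sketch below assumes each strategy's expected utility is known only as an interval induced by a credal set; the strategy names and interval values are invented for the example and do not come from the paper.

```python
# Minimal sketch: Gamma-Maximin and Interval Dominance over strategies whose
# expected utilities are given as intervals [lower, upper] (illustrative values).
strategies = {
    "s1": (2.0, 5.0),   # (lower expected utility, upper expected utility)
    "s2": (3.0, 4.0),
    "s3": (1.0, 6.0),
}

def gamma_maximin(strats):
    """Pick the strategy with the best worst-case expected utility."""
    return max(strats, key=lambda s: strats[s][0])

def interval_dominance(strats):
    """Keep strategies whose upper bound is not below another strategy's lower bound."""
    return [s for s, (lo, hi) in strats.items()
            if all(hi >= lo2 for s2, (lo2, hi2) in strats.items() if s2 != s)]

print(gamma_maximin(strategies))       # -> 's2'
print(interval_dominance(strategies))  # all three remain: no strategy is interval-dominated
```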
Abstract:
This paper analyzes the convergence of the constant modulus algorithm (CMA) in a decision feedback equalizer using only a feedback filter. Several works have already observed that the CMA performs better than the decision-directed algorithm in the adaptation of the decision feedback equalizer, but theoretical analysis has always proved difficult, especially due to the analytical difficulties presented by the constant modulus criterion. In this paper, we surmount this obstacle by using a recent result concerning CM analysis, first obtained in a linear finite impulse response context with the objective of comparing its solutions to the ones obtained through the Wiener criterion. The theoretical analysis presented here confirms the robustness of the CMA when applied to the adaptation of the decision feedback equalizer and also defines a class of channels for which the algorithm will suffer from ill-convergence when initialized at the origin.
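A rough sketch of the adaptation being discussed, not the paper's derivation: a stochastic-gradient CMA update applied to the feedback taps of a feedback-only decision feedback equalizer. The channel, step size and BPSK signal model are arbitrary choices made for this illustration.

```python
# Toy CMA adaptation of a feedback-only DFE (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_sym, n_fb, mu, R2 = 5000, 4, 5e-4, 1.0          # R2 = E|a|^4 / E|a|^2 for BPSK
a = rng.choice([-1.0, 1.0], size=n_sym)           # BPSK symbols
h = np.array([1.0, 0.4, 0.2])                     # toy channel impulse response
r = np.convolve(a, h)[:n_sym]                     # received signal (noiseless for simplicity)

w = np.zeros(n_fb)                                # feedback taps, initialized at the origin
past = np.zeros(n_fb)                             # buffer of past decisions
for n in range(n_sym):
    y = r[n] - w @ past                           # feedback-only equalizer output
    e = y * (y**2 - R2)                           # constant modulus error term
    w += mu * e * past                            # stochastic gradient step on the CM cost
    past = np.roll(past, 1)
    past[0] = np.sign(y) if y != 0 else 1.0       # decision fed back

print("final feedback taps:", np.round(w, 3))     # for this mild toy channel, w should
                                                  # end up near the channel tail [0.4, 0.2, 0, 0]
```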
Abstract:
The economic occupation of a 500 ha area in Piracicaba was studied with the irrigated crops of maize, tomato, sugarcane and beans, using a deterministic linear programming model and a risk-including linear programming model (Target-MOTAD), with two situations analyzed. In the deterministic model, land was the restrictive factor and water was not restrictive in any of the tested situations. For the first situation the maximum income obtained was R$ 1,883,372.87 and for the second situation it was R$ 1,821,772.40. In the risk-including model, a risk-accepting producer can obtain, in the first situation, the maximum income of R$ 1,883,372.87 with a minimum risk of R$ 350 per year, and in the second situation R$ 1,821,772.40 with a minimum risk of R$ 40 per year. A risk-averse producer, in turn, can obtain in the first situation a maximum income of R$ 1,775,974.81 with null risk, and in the second situation R$ 1,707,706.26 with null risk, both without water restriction. These results highlight the importance of including risk when offering alternative occupations to the producer, allowing decision-making that considers both risk aversion and income expectations.
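A Target-MOTAD-style model can be written as a linear program. The sketch below is only an illustration of that structure with made-up crop returns, states of nature, target income and risk cap; it does not reproduce the study's data, and it assumes SciPy's linprog as the solver.

```python
# Illustrative Target-MOTAD-style LP: choose irrigated areas (ha) for four crops to
# maximize expected income subject to a 500 ha land limit and a cap (lam) on the
# expected shortfall below a target income. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

crops = ["maize", "tomato", "sugarcane", "beans"]
income = np.array([[1200., 4500., 2800., 900.],   # income per ha (R$), rows = states of nature
                   [1000., 3000., 2600., 700.],
                   [ 800., 1500., 2400., 500.]])
prob = np.array([0.3, 0.4, 0.3])                   # probability of each state
expected = prob @ income
target, lam, land = 1_200_000., 60_000., 500.      # target income, risk cap, land (ha)

# variables: x (4 crop areas) + d (3 shortfall deviations), all >= 0
c = -np.concatenate([expected, np.zeros(3)])       # maximize expected income
A_ub = [np.concatenate([np.ones(4), np.zeros(3)])] # land constraint
b_ub = [land]
for s in range(3):                                 # d_s >= target - income_s @ x
    row = np.concatenate([-income[s], np.zeros(3)])
    row[4 + s] = -1.0
    A_ub.append(row)
    b_ub.append(-target)
A_ub.append(np.concatenate([np.zeros(4), prob]))   # expected shortfall <= lam
b_ub.append(lam)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * 7)
print(dict(zip(crops, np.round(res.x[:4], 1))), "expected income:", round(-res.fun, 2))
```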
Abstract:
Objective: To develop a model to predict the bleeding source and identify the cohort amongst patients with acute gastrointestinal bleeding (GIB) who require urgent intervention, including endoscopy. Patients with acute GIB, an unpredictable event, are most commonly evaluated and managed by non-gastroenterologists. Rapid and consistently reliable risk stratification of patients with acute GIB for urgent endoscopy may potentially improve outcomes amongst such patients by targeting scarce health-care resources to those who need them the most. Design and methods: Using ICD-9 codes for acute GIB, 189 patients with acute GIB and all available data variables required to develop and test models were identified from a hospital medical records database. Data on 122 patients were used to develop the models and data on 67 patients were used to perform a comparative analysis of the models. Clinical data such as presenting signs and symptoms, demographic data, presence of co-morbidities, laboratory data and the corresponding endoscopic diagnosis and outcomes were collected. The clinical data and endoscopic diagnosis collected for each patient were used to retrospectively ascertain optimal management for each patient. Clinical presentations and the corresponding treatment were used as training examples. Eight mathematical models, including artificial neural network (ANN), support vector machine (SVM), k-nearest neighbor, linear discriminant analysis (LDA), shrunken centroid (SC), random forest (RF), logistic regression, and boosting, were trained and tested. The performance of these models was compared using standard statistical analysis and ROC curves. Results: Overall, the random forest model best predicted the source, need for resuscitation, and disposition, with accuracies of approximately 80% or higher (accuracy for endoscopy was greater than 75%). The area under the ROC curve for RF was greater than 0.85, indicating excellent performance by the random forest model. Conclusion: While most mathematical models are effective as a decision support system for the evaluation and management of patients with acute GIB, in our testing the RF model consistently demonstrated the best performance. Amongst patients presenting with acute GIB, mathematical models may facilitate identification of the source of GIB and the need for intervention, and allow optimization of care and healthcare resource allocation; these, however, require further validation. (c) 2007 Elsevier B.V. All rights reserved.
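The kind of model comparison described here is routinely set up as shown in the schematic sketch below. It uses synthetic data in place of the 189-patient dataset and only two of the eight model families (random forest and logistic regression); the feature set and split mirror the abstract's sample sizes but are otherwise assumptions.

```python
# Schematic comparison of classifiers by ROC AUC on synthetic data (not the study's data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=189, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=122, random_state=0)  # 122 train / 67 test

for name, model in [("random forest", RandomForestClassifier(random_state=0)),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC AUC = {auc:.2f}")
```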
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first one consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other process consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed, together with the integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present some sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
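To make the column generation idea concrete, here is a bare-bones Gilmore-Gomory-style sketch for the cutting-stock side only, with made-up roll width and demands; the paper's combined lot-sizing/cutting model is considerably richer than this. It assumes a recent SciPy whose default HiGHS linprog solver exposes constraint duals via res.ineqlin.marginals.

```python
# Column generation for a one-dimensional cutting-stock LP relaxation (illustrative data).
import numpy as np
from scipy.optimize import linprog

roll_width = 100
sizes = np.array([45, 36, 31, 14])        # part widths (hypothetical)
demand = np.array([97, 610, 395, 211])    # part demands (hypothetical)

# start with trivial patterns: one part type per roll
patterns = [np.eye(len(sizes), dtype=int)[i] * (roll_width // s)
            for i, s in enumerate(sizes)]

def knapsack(values, weights, capacity):
    """Unbounded knapsack by dynamic programming; returns best value and item counts."""
    best = np.zeros(capacity + 1)
    choice = -np.ones(capacity + 1, dtype=int)
    for cap in range(1, capacity + 1):
        for i, (v, w) in enumerate(zip(values, weights)):
            if w <= cap and best[cap - w] + v > best[cap]:
                best[cap], choice[cap] = best[cap - w] + v, i
    counts, cap = np.zeros(len(values), dtype=int), capacity
    while cap > 0 and choice[cap] >= 0:
        counts[choice[cap]] += 1
        cap -= weights[choice[cap]]
    return best[capacity], counts

while True:
    A = np.array(patterns).T
    # LP relaxation of the master problem: minimize rolls used s.t. demand coverage
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=[(0, None)] * A.shape[1])
    duals = -np.array(res.ineqlin.marginals)        # dual prices of the demand rows
    value, new_pattern = knapsack(duals, sizes, roll_width)
    if value <= 1 + 1e-9:                           # no column with negative reduced cost
        break
    patterns.append(new_pattern)                    # add the priced-out cutting pattern

print(f"{len(patterns)} patterns generated, LP relaxation uses ~{res.fun:.1f} rolls")
```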
Abstract:
We use QCD sum rules to test the nature of the recently observed mesons Y(4260), Y(4350) and Y(4660), assumed to be exotic four-quark (cc̄qq̄) or (cc̄ss̄) states with J^PC = 1^--. We work at leading order in α_s, consider the contributions of higher-dimension condensates and keep terms which are linear in the strange quark mass m_s. For the (cc̄ss̄) state we find a mass m_Y = (4.65 ± 0.10) GeV, which is compatible with the experimental candidate Y(4660), while for the (cc̄qq̄) state we find a mass m_Y = (4.49 ± 0.11) GeV, which is still consistent with the mass of the experimental candidate Y(4350). With the tetraquark structure we are working with, we cannot explain the Y(4260) as a tetraquark state. We also consider molecular D_s0-D̄_s* and D_0-D̄* states. For the D_s0 D̄_s* molecular state we get a mass of (4.42 ± 0.10) GeV, which is consistent, considering the errors, with the mass of the meson Y(4350), and for the D_0 D̄* molecular state we get a mass of (4.27 ± 0.10) GeV, in excellent agreement with the mass of the meson Y(4260). (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Dental impression is an important step in the preparation of prostheses, since it provides the reproduction of anatomic and surface details of teeth and adjacent structures. The objective of this study was to evaluate the linear dimensional alterations in gypsum dies obtained with different elastomeric materials, using a resin coping impression technique with individual shells. A master cast made of stainless steel with fixed prosthesis characteristics and two prepared abutment teeth was used to obtain the impressions. Reference points (A, B, C, D, E, F, G and H) were recorded on the occlusal and buccal surfaces of the abutments to register the distances. The impressions were obtained using the following materials: polyether, mercaptan-polysulfide, addition silicone, and condensation silicone. The transfer impressions were made with custom trays and an irreversible hydrocolloid material and were poured with type IV gypsum. The distances between the identified points in the gypsum dies were measured using an optical microscope and the results were statistically analyzed by ANOVA (p < 0.05) and Tukey's test. The mean distances were registered as follows: addition silicone (AB = 13.6 µm, CD = 15.0 µm, EF = 14.6 µm, GH = 15.2 µm), mercaptan-polysulfide (AB = 36.0 µm, CD = 36.0 µm, EF = 39.6 µm, GH = 40.6 µm), polyether (AB = 35.2 µm, CD = 35.6 µm, EF = 39.4 µm, GH = 41.4 µm) and condensation silicone (AB = 69.2 µm, CD = 71.0 µm, EF = 80.6 µm, GH = 81.2 µm). All of the measurements found in the gypsum dies were compared to those of the master cast. The results demonstrated that the addition silicone provided the best stability of the compounds tested, followed by polyether, polysulfide and condensation silicone. No statistical differences were found between the polyether and mercaptan-polysulfide materials.
Abstract:
The purpose of this study was to evaluate the metal-ceramic bond strength (MCBS) of 6 metal-ceramic pairs (2 Ni-Cr alloys and 1 Pd-Ag alloy with 2 dental ceramics) and correlate the MCBS values with the differences between the coefficients of linear thermal expansion (CTEs) of the metals and ceramics. Verabond (VB) Ni-Cr-Be alloy, Verabond II (VB2) Ni-Cr alloy, Pors-on 4 (P) Pd-Ag alloy, and IPS (I) and Duceram (D) ceramics were used for the MCBS test and the dilatometric test. Forty-eight ceramic rings were built around metallic rods (3.0 mm in diameter and 70.0 mm in length) made from the evaluated alloys. The rods were subsequently embedded in gypsum casts in order to perform a tensile load test, which enabled calculating the MCBS. Five specimens (2.0 mm in diameter and 12.0 mm in length) of each material were made for the dilatometric test. The chromel-alumel thermocouple required for the test was welded to the metal test specimens and inserted into the ceramics. ANOVA and Tukey's test revealed significant differences (p = 0.01) in the MCBS test results (MPa), with PI showing higher MCBS (67.72) than the other pairs, which did not present any significant differences among themselves. The CTE (10^-6 °C^-1) differences were: VBI (0.54), VBD (1.33), VB2I (-0.14), VB2D (0.63), PI (1.84) and PD (2.62). Pearson's correlation test (r = 0.17) was performed to evaluate the correlation between MCBS and the CTE differences. Within the limitations of this study and based on the obtained results, there was no correlation between MCBS and CTE differences for the evaluated metal-ceramic pairs.
Abstract:
Oscillatory and resonant phenomena are explored in several experimental physics courses. In general, the experiments are interpreted in the limit of small oscillations and uniform fields. In this article we describe a low-cost experiment for studying the magnetic-field resonance of a compass needle outside the limits above. In this case, non-linear terms in the differential equation are responsible for phenomena that are interesting to explore in teaching laboratories.
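The kind of equation involved can be integrated numerically without linearizing the restoring term. The sketch below assumes a damped, sinusoidally driven needle obeying θ'' = -γθ' - ω₀² sin θ + A cos(ωt); all parameter values are illustrative, not the article's.

```python
# Numerical integration of a driven, damped compass-needle equation keeping sin(theta).
import numpy as np
from scipy.integrate import solve_ivp

gamma, omega0, A, omega_drive = 0.1, 2.0, 1.5, 1.9   # damping, natural freq, drive amplitude/freq

def needle(t, y):
    theta, dtheta = y
    # theta'' = -gamma*theta' - omega0^2*sin(theta) + A*cos(omega_drive*t)
    return [dtheta, -gamma * dtheta - omega0**2 * np.sin(theta) + A * np.cos(omega_drive * t)]

sol = solve_ivp(needle, (0, 200), [0.1, 0.0], max_step=0.01)
print("max |theta| over the run:", np.max(np.abs(sol.y[0])).round(2))
```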
Abstract:
PURPOSE: To analyze the usefulness of the weight gain/height gain ratio from birth to two and three years of age as a predictive risk indicator of excess weight at preschool age. METHODS: The weight and height/length of 409 preschool children at daycare centers were measured according to internationally recommended procedures. The weight values and body mass indices of the children were transformed into z-scores according to the standard method described by the World Health Organization. The Pearson correlation coefficients (rP) and the linear regressions between the anthropometric parameters and the body mass index z-scores of the preschool children were statistically analyzed (alpha = 0.05). RESULTS: The mean age of the study population was 3.2 years (± 0.3 years). The prevalence of excess weight was 28.8%, and the prevalence of overweight and obesity was 8.8%. The correlation coefficients between the body mass index z-scores of the preschool children and the birth weights or body mass indices at birth were low (0.09 and 0.10, respectively). There was a high correlation coefficient (rP = 0.79) between the mean monthly weight gain and the body mass index z-score of the preschool children. A higher coefficient (rP = 0.93) was observed between the ratio of mean weight gain per height gain (g/cm) and the preschool children's body mass index z-score. The coefficients and their differences were statistically significant. CONCLUSION: Regardless of weight or length at birth, the mean weight gain per centimeter of height growth (g/cm) from birth showed a strong correlation with the body mass index of preschool children. These results suggest that this ratio may be a good indicator of the risk of excess weight and obesity in preschool-aged children.
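For readers unfamiliar with the statistics cited here, the sketch below shows how a Pearson correlation and a simple linear regression of this kind are computed; the data are simulated with made-up parameters and do not reproduce the study's measurements.

```python
# Toy illustration of Pearson correlation and simple linear regression (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gain_ratio = rng.normal(22.0, 4.0, size=409)                   # weight gain per cm of height gain (g/cm)
bmi_z = 0.15 * (gain_ratio - 22.0) + rng.normal(0, 0.3, 409)   # simulated BMI z-scores

r, p_value = stats.pearsonr(gain_ratio, bmi_z)
slope, intercept, r_lin, p_lin, se = stats.linregress(gain_ratio, bmi_z)
print(f"Pearson r = {r:.2f} (p = {p_value:.3g}); z-score ~= {slope:.3f}*ratio + {intercept:.2f}")
```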
Abstract:
This work presents a fully non-linear finite element formulation for shell analysis comprising linear strain variation along the thickness of the shell and geometrically exact description for curved triangular elements. The developed formulation assumes positions and generalized unconstrained vectors as the variables of the problem, not displacements and finite rotations. The full 3D Saint-Venant-Kirchhoff constitutive relation is adopted and, to avoid locking, the rate of thickness variation enhancement is introduced. As a consequence, the second Piola-Kirchhoff stress tensor and the Green strain measure are employed to derive the specific strain energy potential. Curved triangular elements with cubic approximation are adopted using simple notation. Selected numerical simulations illustrate and confirm the objectivity, accuracy, path independence and applicability of the proposed technique.
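As a small numerical illustration of the strain and stress measures named in the abstract (Green-Lagrange strain and second Piola-Kirchhoff stress of a Saint-Venant-Kirchhoff material), the snippet below evaluates them for an arbitrary deformation gradient. Material constants and the sample deformation are placeholders; the paper's shell kinematics and element formulation are not reproduced here.

```python
# Green-Lagrange strain, 2nd Piola-Kirchhoff stress and strain energy density for an
# SVK material, evaluated at a sample deformation gradient F (illustrative values).
import numpy as np

E_mod, nu = 210e9, 0.3                                    # Young's modulus, Poisson ratio
lam = E_mod * nu / ((1 + nu) * (1 - 2 * nu))              # Lamé parameters
mu = E_mod / (2 * (1 + nu))

F = np.array([[1.05, 0.02, 0.0],                          # sample deformation gradient
              [0.00, 0.98, 0.0],
              [0.00, 0.00, 1.0]])

E = 0.5 * (F.T @ F - np.eye(3))                           # Green-Lagrange strain
S = lam * np.trace(E) * np.eye(3) + 2 * mu * E            # 2nd Piola-Kirchhoff stress (SVK law)
energy = 0.5 * lam * np.trace(E)**2 + mu * np.sum(E * E)  # specific strain energy

print("Green strain:\n", np.round(E, 4))
print("S (GPa):\n", np.round(S / 1e9, 3))
print("strain energy density (MPa):", round(energy / 1e6, 3))
```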
Abstract:
The implementation of confidential contracts between a container liner carrier and its customers, brought about by the Ocean Shipping Reform Act (OSRA) of 1998, demands a revision of the methodology applied in the carrier's marketing and sales planning. The marketing and sales planning process should be more scientific and make better use of operational research tools, since the selection of customers under contract, the duration of the contracts, the freight rates, and the container imbalances of these contracts are basic factors for the carrier's yield. This work aims to develop a decision support system based on a linear programming model to generate the business plan for a container liner carrier, maximizing the contribution margin of its freight.
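In the spirit of that abstract, the sketch below poses a heavily simplified contract-selection LP: choose how much of each candidate contract's volume to carry on a single leg, subject to a slot-capacity limit, maximizing total contribution margin. The contract data, capacity and single-leg simplification are all assumptions of this illustration, not the paper's model.

```python
# Simplified contract-selection LP maximizing contribution margin (illustrative data).
import numpy as np
from scipy.optimize import linprog

contracts = ["A", "B", "C", "D"]
margin = np.array([320., 250., 410., 180.])   # contribution margin per TEU (hypothetical)
volume = np.array([400., 900., 300., 1200.])  # maximum contracted volume per contract (TEU)
capacity = 1800.                              # vessel slot capacity on the leg (TEU)

res = linprog(-margin,                        # maximize margin -> minimize its negative
              A_ub=[np.ones(len(contracts))], b_ub=[capacity],
              bounds=list(zip(np.zeros(len(contracts)), volume)))

carried = dict(zip(contracts, np.round(res.x, 1)))
print("TEU carried per contract:", carried, "| total margin:", round(-res.fun, 2))
```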