909 results for classification and regression trees
Abstract:
Pinus caribaea var. hondurensis (Sénécl.) Barrett & Golfari is a tropical pine that occurs naturally in lowland areas of Belize, El Salvador, Guatemala, Honduras, Nicaragua, and eastern Mexico. It has been one of the most studied tropical pines and the one with the greatest commercial importance in Brazil. The objective of this work was to select the best provenances for plantations and the best trees within families for the establishment of seed orchards. To that end, a trial with five provenances and 47 open-pollinated families was planted near Planaltina, Federal District, in the Cerrado region of Brazil. The provenances tested were Poptun (Guatemala) and Gualjoco, Los Limones, El Porvenir, and Santa Cruz de Yojoa (Honduras), assessed at 12 years of age. Poptun and Gualjoco had the largest volume, and Los Limones and El Porvenir the lowest incidence of forks and foxtails. Individual-tree heritabilities for volume, stem form, and branch diameter were 0.34, 0.06, and 0.26, respectively. More than 90% of the trees had defects, which is common in unimproved P. caribaea. Selection criteria for quality traits need to be relaxed in the first generation of breeding to allow for larger genetic gains in productivity. Results from this test, compared with P. caribaea var. hondurensis trials at other Brazilian, Colombian, and Venezuelan sites, suggest that provenance x site and family x site interactions are not as strong as in other pine species.
Abstract:
Remote sensing image processing is nowadays a mature research area. The techniques developed in the field enable many real-life applications of great societal value. For instance, urban monitoring, fire detection, and flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all these applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, and feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.
Abstract:
A precise classification and an optimal understanding of tibial plateau fractures are the basis of conservative treatment or adequate surgery. The aim of this prospective study was to determine the contribution of 3D CT to the classification of fractures (in comparison with standard X-rays) and as an aid to the surgeon in preoperative planning and surgical reconstruction. Between November 1994 and July 1996, 20 patients presenting 22 tibial plateau fractures were considered in this study. They all underwent surgical treatment. The fractures were classified according to the Müller AO classification. They were all investigated by means of standard X-rays (AP, profile, oblique) and 3D CT. Analysis of the results showed the superiority of 3D CT in planning (easier and more accurate), in classification (more precise), and in the exact assessment of the lesions (number of fragments), thereby proving to be of undeniable value to the surgeon.
Abstract:
BACKGROUND AND PURPOSE: Previous studies have postulated that poststroke depression (PSD) might be related to cumulative vascular brain pathology rather than to the location and severity of a single macroinfarct. We performed a detailed analysis of all types of microvascular lesions and lacunes in 41 prospectively documented and consecutively autopsied stroke cases. METHODS: Only cases with first-onset depression <2 years after stroke were considered as PSD in the present series. Diagnosis of depression was established prospectively using DSM-IV criteria for major depression. Neuropathological evaluation included bilateral semiquantitative assessment of microvascular ischemic pathology and lacunes; statistical analysis included Fisher exact test, Mann-Whitney U test, and regression models. RESULTS: Macroinfarct site was not related to the occurrence of PSD for any of the locations studied. Thalamic and basal ganglia lacunes occurred significantly more often in PSD cases. Higher lacune scores in basal ganglia, thalamus, and deep white matter were associated with an increased PSD risk. In contrast, microinfarct and diffuse or periventricular demyelination scores were not increased in PSD. The combined lacune score (thalamic plus basal ganglia plus deep white matter) explained 25% of the variability of PSD occurrence. CONCLUSIONS: The cumulative vascular burden resulting from chronic accumulation of lacunar infarcts within the thalamus, basal ganglia, and deep white matter may be more important than single infarcts in the prediction of PSD.
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are described, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that allows both students and researchers to put the concepts rapidly into practice.
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used, but they aren't oriented towards helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level, and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or a county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded and in what sequence?
Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and utilize a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) roads are but one sub-system of a much larger 'Road Based Transportation System'; 2) the size and activity level of the overall system are determined by market forces; 3) the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation; and 4) the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed. This involved creating a physical model to represent the size, characteristics, activity levels, and rates at which the activities take place, developing a companion economic cost model, and then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used at the full-system, partial-system, single-segment, project, and general design guide levels of analysis.
The method appeared capable of remedying many of the defects of existing work methods and of answering society's transportation questions from a new perspective.
Abstract:
This study aimed to establish relationships between maize yield and rainfall on different temporal and spatial scales, in order to provide a basis for crop monitoring and modelling. A 16-year series of maize yield and daily rainfall from 11 municipalities and micro-regions of Rio Grande do Sul State was used. Correlation and regression analyses were used to determine associations between crop yield and rainfall for the entire crop cycle, from tasseling to 30 days after, and from 5 days before tasseling to 40 days after. Close relationships between maize yield and rainfall were found, particularly during the reproductive period (45-day period comprising the flowering and grain filling). Relationships were closer on a regional scale than at smaller scales. Implications of the crop-rainfall relationships for crop modelling are discussed.
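The window-based yield–rainfall association described above can be sketched as follows. This is illustrative only: the 16-year series here is synthetic, and the tasseling date, window offsets, and coefficients are assumptions, not values taken from the study.

```python
import numpy as np

def window_rainfall(daily_rain, tasseling_day, start_offset, end_offset):
    """Accumulate daily rainfall over a window defined relative to tasseling."""
    lo = tasseling_day + start_offset
    hi = tasseling_day + end_offset
    return float(np.sum(daily_rain[lo:hi + 1]))

def yield_rain_correlation(yields, rain_totals):
    """Pearson correlation between yearly yields and windowed rainfall totals."""
    return float(np.corrcoef(yields, rain_totals)[0, 1])

# Hypothetical 16-year series: yield (t/ha) loosely tracking rainfall (mm/day).
rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 5.0, size=(16, 365))
# Window from 5 days before an assumed tasseling day (day 100) to 40 days after.
totals = np.array([window_rainfall(r, 100, -5, 40) for r in rain])
yields = 2.0 + 0.01 * totals + rng.normal(0.0, 0.3, 16)

r = yield_rain_correlation(yields, totals)
```

The same correlation could be recomputed for different window definitions (whole cycle, reproductive period only) to compare the strength of the association, as the study does.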
Abstract:
The objective of this work was to estimate the repeatability of adaptability and stability parameters of common bean between years, within each biennium from 2003 to 2012, in Minas Gerais state, Brazil. Grain yield data from value for cultivation and use trials of common bean were analyzed. Grain yield, ecovalence, regression coefficient, and coefficient of determination were estimated considering location and sowing season per year, within each biennium. Subsequently, an analysis of variance of these estimates was carried out, and repeatability was estimated in the biennia. The repeatability estimate for grain yield in most of the biennia was relatively high, but for ecovalence and regression coefficient it was null or of small magnitude, which indicates that confidence in the identification of common bean lines for recommendation is greater when using yield means instead of stability parameters.
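For readers unfamiliar with the stability parameters named above, a minimal sketch of two of them is given below: Wricke's ecovalence and a regression coefficient on the environmental index in the style of Eberhart and Russell. The genotype-by-environment yield matrix is invented for illustration and is not data from the study.

```python
import numpy as np

def ecovalence(x):
    """Wricke's ecovalence per genotype: squared GxE interaction terms,
    summed over environments. Rows are genotypes, columns environments."""
    gm = x.mean(axis=1, keepdims=True)   # genotype means
    em = x.mean(axis=0, keepdims=True)   # environment means
    grand = x.mean()
    interaction = x - gm - em + grand
    return (interaction ** 2).sum(axis=1)

def regression_coefficients(x):
    """Slope of each genotype's yield on the environmental index
    I_j = (environment mean) - (grand mean)."""
    index = x.mean(axis=0) - x.mean()
    centered = x - x.mean(axis=1, keepdims=True)
    return centered @ index / (index ** 2).sum()

# Hypothetical yields (t/ha): 3 lines x 4 location-season combinations.
x = np.array([[2.1, 2.8, 3.0, 3.5],
              [2.5, 2.6, 2.7, 2.9],
              [1.8, 2.9, 3.4, 4.1]])
w = ecovalence(x)
b = regression_coefficients(x)
```

A useful sanity check on the slope definition is that the coefficients average to 1 across genotypes, so values above 1 indicate above-average responsiveness to favourable environments.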
Abstract:
The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend in synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the '90s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter have pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong have studied the selection pressure (i.e., the diffusion of a best individual when the only selection operator is active) induced by a regular bi-dimensional structure of the population, proposing a logistic modeling of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different dynamics of the populations imply different search strategies of the resulting algorithm, when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and updates of the populations.
In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, both in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabasi's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world: the majority classification and synchronisation problems.
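The linear growth claimed for mono-dimensional structures can be illustrated with a minimal sketch (not the thesis's actual experiments; the population size, the radius-1 neighbourhood, and the deterministic copy-the-best update are simplifying assumptions): with only selection active on a ring, the number of copies of the best individual grows by a constant amount per generation, in contrast to the exponential early growth a logistic model assumes for panmictic populations.

```python
import numpy as np

def ring_takeover(n, generations):
    """Deterministic takeover on a ring of n cells: each generation, every
    cell adopts the best fitness in its radius-1 neighbourhood. Returns the
    number of copies of the best individual after each generation."""
    fit = np.zeros(n)
    fit[0] = 1.0                      # a single best individual
    counts = []
    for _ in range(generations):
        left = np.roll(fit, 1)
        right = np.roll(fit, -1)
        fit = np.maximum(fit, np.maximum(left, right))
        counts.append(int(fit.sum()))
    return counts

counts = ring_takeover(101, 60)       # grows by exactly 2 cells per generation
```

On a bi-dimensional lattice the same rule spreads the best individual as a growing diamond, so the count grows quadratically in time until boundary effects make it sub-quadratic, matching the models proposed in the thesis.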
Abstract:
BACKGROUND: The impact of the Integrated Management of Childhood Illness (IMCI) strategy has been less than anticipated because of poor uptake. Electronic algorithms have the potential to improve the quality of health care in children. However, feasibility studies on the use of electronic protocols on mobile devices over time are limited. This study investigated the constraining and facilitating factors that influence the uptake of a new electronic Algorithm for Management of Childhood Illness (ALMANACH) among primary health workers in Dar es Salaam, Tanzania. METHODS: A qualitative approach was applied, using in-depth interviews and focus group discussions with a total of 40 primary health care workers from 6 public primary health facilities in the three municipalities of Dar es Salaam, Tanzania. Health workers' perceptions of factors facilitating or constraining the uptake of the electronic ALMANACH were identified. RESULTS: In general, the ALMANACH was assessed positively. The majority of the respondents felt comfortable using the devices and stated that patients' trust was not affected. Most health workers said that the ALMANACH simplified their work, reduced antibiotic prescription, and gave correct classification and treatment for common causes of childhood illness. A few health workers reported technical challenges using the devices and complained about difficulties in typing. The majority of the respondents stated that the devices increased consultation duration compared to routine practice. In addition, health system barriers such as lack of staff, lack of medicines, and lack of financial motivation were identified as key reasons for the low uptake of the devices. CONCLUSIONS: The ALMANACH built on electronic devices was perceived to be a powerful and useful tool. However, health system challenges influenced the uptake of the devices in the selected health facilities.
Abstract:
The study evaluated the leaf nutritional levels of peach and nectarine trees under a subtropical climate in order to improve fertilization practices. The experiment was carried out at São Paulo State University, Botucatu, São Paulo State, Brazil. The experimental design consisted of subdivided plots, in which plots corresponded to cultivars and subplots to the leaf sampling periods. The evaluated peach cultivars were Marli, Turmalina, Precocinho, Jubileu, Cascata 968, Cascata 848, CP 951C, CP 9553CYN, and Tropic Beauty, and the nectarine cultivar was 'Sun Blaze'. The sampling periods were: after harvest, with plants in the vegetative period; dormancy; and the beginning of flowering and fruiting (standard sample). Results indicated significant variations in the levels of N, P, K, Ca, Mg, S, B, Cu, Fe, Mn, and Zn across sampling periods, and in N, Ca, Mg, S, B, Fe, and Mn levels among cultivars.
Abstract:
Plant growth regulators and biostimulants have been used as an agronomic technique to optimize the production of seedlings in various crops. This study aimed to evaluate the influence of gibberellic acid and the biostimulant Stimulate® on the initial growth of tamarind (Tamarindus indica L.). The experiments were conducted in a nursery with 50% shading, in a randomized block design with five replications and five plants per plot. Thirty-eight days after sowing, the leaves were sprayed seven times a day with 0.0 (control), 0.8, 1.6, 2.4, and 3.2 mL of gibberellic acid L-1 aqueous solution and with 0.0 (control), 6.0, 12.0, 18.0, and 24.0 mL Stimulate® L-1 aqueous solution. Stem diameter (SD), plant height (PH), longest root length (LRL), shoot dry mass (SDM), root dry mass (RDM), and the RDM:SDM ratio were evaluated ninety days after sowing. Analysis of variance and regression showed that GA3 at 4% promoted plant growth (height), but had no significant effect on stem diameter, longest root length, shoot and root dry mass, or the RDM:SDM ratio. On the other hand, all concentrations of Stimulate® significantly increased plant height and shoot and root dry mass of tamarind seedlings.
Abstract:
Mergers and acquisitions (M&A) have played a very important role in restructuring the pulp and paper industry (PPI). The poor performance and fragmented nature of the industry, overcapacity problems, and globalisation have driven companies to consolidate. The objective of this thesis was to examine how PPI acquirers have performed subsequent to M&As and whether deal characteristics have had any impact on performance. Based on the results, it seems that PPI companies have not been able to enhance their performance in the long run after M&As, although the performance of acquiring firms has remained above the industry median, and neither deal characteristics nor the amount of premiums paid seem to have had any effect. The statistical significance of the results was tested with a change model and regression analysis. Performance was assessed with accrual-, cash flow-, and market-based indicators. The results are congruent with behavioural theory: managers and investors seem to be overoptimistic in estimating the synergies from M&As.
Abstract:
PURPOSE: The concept of resilience is gaining importance as a key component of supportive care but has to date rarely been addressed in studies with adult cancer patients. The purpose of our study was to describe resilience, its potential predictors, and supportive care needs in cancer patients during early treatment, and to explore associations between the two concepts. METHODS: This descriptive study included adult cancer patients under treatment in ambulatory cancer services of a Swiss hospital. Subjects completed the 25-item Connor-Davidson Resilience Scale and the 34-item Supportive Care Needs Survey. Descriptive, correlational, and regression analyses were performed. RESULTS: Sixty-eight patients with cancer were included in the study. Compared to the general population, resilience scores were significantly lower (74.4 ± 12.6 vs. 80.4 ± 12.8, p = .0002). Multiple regression analysis identified "age", "metastasis", "recurrence", and "living alone" as predictors of resilience (adjusted R2 = .19, p < .001). The highest unmet needs were observed in the domain of psychological needs. Lower resilience scores were significantly and strongly associated with higher levels of unmet psychological needs (Rho = -.68, p < .001), supportive care needs (Rho = -.49, p < .001), and information needs (Rho = -.42, p = .001). CONCLUSION: Ambulatory patients with higher levels of resilience express fewer unmet needs. Further work is needed to elucidate the mechanism of the observed relationships and whether interventions facilitating resilience have a positive effect on unmet needs.
Abstract:
Drying is a major step in the manufacturing process in the pharmaceutical industry, and the selection of a dryer and its operating conditions is sometimes a bottleneck. Despite the difficulties, these bottlenecks are handled with the utmost care because of good manufacturing practices (GMP) and the industry's image in the global market. The purpose of this work is to investigate the use of existing knowledge for the selection of a dryer and its operating conditions for drying pharmaceutical materials, with the help of methods such as case-based reasoning and decision trees, in order to reduce the time and expense of research. The work consisted of two major parts: a literature survey on the theories of spray drying, case-based reasoning, and decision trees; and an experimental part comprising data acquisition and testing of the models based on existing and upgraded data. Testing resulted in a combination of the two models, case-based reasoning and decision trees, leading to more specific results than conventional methods.
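A toy illustration of how the two approaches might be combined for dryer selection follows. All attributes, thresholds, dryer names, and cases are invented for the sketch; the thesis's actual models and data are not reproduced here.

```python
def tree_select_dryer(material):
    """A hand-written decision tree over hypothetical material properties."""
    if material["feed"] == "liquid":
        # Spray drying suits heat-sensitive liquids via short residence time.
        return "spray dryer" if material["heat_sensitive"] else "drum dryer"
    if material["moisture"] > 0.4:
        return "fluid-bed dryer"
    return "tray dryer"

def cbr_select_dryer(case_base, query):
    """Case-based reasoning: reuse the dryer of the most similar past case,
    with similarity measured as the count of exactly matching attributes."""
    def similarity(case):
        return sum(case[k] == v for k, v in query.items())
    return max(case_base, key=similarity)["dryer"]

# Invented case base of past drying decisions.
case_base = [
    {"feed": "liquid", "heat_sensitive": True, "moisture": 0.9,
     "dryer": "spray dryer"},
    {"feed": "solid", "heat_sensitive": False, "moisture": 0.2,
     "dryer": "tray dryer"},
]
query = {"feed": "liquid", "heat_sensitive": True, "moisture": 0.8}
choice_tree = tree_select_dryer(query)
choice_cbr = cbr_select_dryer(case_base, query)
```

Here the tree encodes generic rules while the case base reuses specific past decisions; a combined system could consult the case base first and fall back on the tree when no sufficiently similar case exists, which reflects the complementary roles the abstract describes.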