961 results for multiclass classification problems
Abstract:
Soil science has sought to develop better techniques for the classification of soils, one of which is the use of remote sensing applications. The use of ground sensors to obtain soil spectral data has enabled the characterization of these data and the advancement of techniques for the quantification of soil attributes. To do this, the creation of a soil spectral library is necessary. A spectral library should be representative of the variability of the soils in a region. The objective of this study was to create a spectral library of distinct soils from several agricultural regions of Brazil. Spectral data were collected (using a Fieldspec sensor, 350-2,500 nm) for the horizons of 223 soil profiles from the regions of Matão, Paraguaçu Paulista, Andradina, Ipaussu, Mirandópolis, Piracicaba, São Carlos, Araraquara, Guararapes, Valparaíso (SP); Naviraí, Maracajú, Rio Brilhante, Três Lagoas (MS); Goianésia (GO); and Uberaba and Lagoa da Prata (MG). A Principal Component Analysis (PCA) of the data was then performed and a graphic representation of the spectral curve was created for each profile. The reflectance intensity of the curves was principally influenced by the levels of Fe2O3, clay, organic matter, and the presence of opaque minerals. There was no change in the spectral curves between the horizons of the Latossolos, Nitossolos, and Neossolos Quartzarênicos. Argissolos had superficial horizon curves with the greatest reflectance intensity above 2,200 nm. Cambissolos and Neossolos Litólicos had curves with greater reflectance intensity in poorly developed horizons. Gleissolos showed a convex curve in the region of 350-400 nm. The PCA was able to separate the data collection areas according to the region of source material. Principal component one (PC1) was correlated with the reflectance intensity of the samples, and PC2 with the slope of the curves between the visible and infrared regions.
The use of the Spectral Library as an indicator of possible soil classes proved to be an important tool in profile classification.
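The PCA step described above can be sketched with a plain SVD on the mean-centred spectra; this is a generic illustration (hypothetical array names), not the authors' actual processing pipeline.

```python
import numpy as np

def pca(spectra, n_components=2):
    """PCA of a (n_samples, n_wavelengths) reflectance matrix via SVD."""
    X = spectra - spectra.mean(axis=0)                  # mean-centre each wavelength
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]     # sample coordinates (PC1, PC2, ...)
    loadings = Vt[:n_components]                        # wavelength loadings
    explained = S[:n_components] ** 2 / (S ** 2).sum()  # fraction of variance explained
    return scores, loadings, explained
```

Plotting the first two score columns against each other gives the kind of PC1/PC2 map used to separate the sampling regions.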
Abstract:
The most advanced stage of water erosion, the gully, represents severe problems in different contexts, both in rural and urban environments. In the search for viable ways to stabilize the process, it is of utmost importance to assess the efficiency of the methodologies employed. For this purpose, the efficiency of low-cost conservation practices was tested for the reduction of soil and nutrient losses caused by erosion from gullies in Pinheiral, state of Rio de Janeiro. The following areas were studied: a gully recovered by means of physical and biological strategies; a gully in the recovering stage, by means of physical strategies only; and a gully under no restoration treatment. During the summer of 2005/2006, the following data sets were collected for this study: soil classification of each of the eroded gully areas; planimetric and altimetric survey; determination of rain erosivity indexes; determination of the amount of soil sediment; sediment grain size characteristics; natural amounts of the nutrients Ca, Mg, K and P, as well as total C and N concentrations. The results for the first three measurements were 52.5, 20.5, and 29.0 Mg in the sediments from the gully without intervention, and 1.0, 1.7 and 1.8 Mg from the gully with physical interventions, indicating an average reduction of 95 %. The fully recovered gully produced no sediment during the period. The data on total nutrient loss from the three gullies under investigation showed reductions of 98 % for the recovering gully and 99 % for the fully recovered one. As for the loss of nutrients, the data indicate a nutrient loss of 1,811 kg for the non-treated gully. The use of physical and biological interventions made it possible to reduce overall nutrient loss by more than 96 % over the entire rainy season, as compared to the non-treated gully. Results show that the methods used were effective in reducing soil and nutrient losses from gullies.
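Reading the sediment figures as seasonal totals, the reported ~95 % average reduction can be checked with one line of arithmetic (a back-of-envelope reading, not necessarily the authors' exact computation):

```python
untreated = [52.5, 20.5, 29.0]   # Mg of sediment, gully without intervention
treated = [1.0, 1.7, 1.8]        # Mg of sediment, gully with physical interventions
reduction = 1 - sum(treated) / sum(untreated)
print(round(100 * reduction, 1))  # about 95.6 %
```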
Abstract:
In the context of cooperative TU-games, and given an order of the players, we consider the problem of distributing the worth of the grand coalition as a sequential decision problem. In each step of the process, upper and lower bounds for the payoffs of the players are required, related to successive reduced games. Sequentially compatible payoffs are defined as those allocation vectors that meet these recursive bounds. The core of the game is reinterpreted as a set of sequentially compatible payoffs when the Davis-Maschler reduced game is considered (Th. 1). Independently of the reduction, the core turns out to be the intersection of the family of sets of sequentially compatible payoffs corresponding to the different possible orderings (Th. 2), so it is in some sense order-independent. Finally, we analyze advantageous properties for the first player.
Abstract:
Two trends which presently exist in relation to the concept of Paleontology are analyzed, pointing out some of the aspects which have a negative influence. Various reflections are made based on examples of some of the principal points of paleontological method, such as the influence of punctual sampling, the meaning of size-frequency distributions, and subjectivity in the identification of fossils. Topics which have a marked repercussion in diverse aspects of Paleontology are discussed.
Abstract:
In the upper Jequitinhonha valley, state of Minas Gerais, Brazil, there are large plane areas known as "chapadas", which are separated by areas dissected by tributaries of the Jequitinhonha and Araçuaí rivers. These dissected areas have a surface drainage system with tree, shrub, and grass vegetation, more commonly known as "veredas", i.e., palm swamps. The main purpose of this study was to characterize the physical, chemical and morphological soil properties of a representative toposequence in the watershed of the Vereda Lagoa do Leandro, a swamp near Minas Novas, MG, on the "chapadas", the highlands of the Alto Jequitinhonha region. Different soil types are observed in the landscape: at the top, Typic Haplustox (LVA); in the middle slope, Xanthic Haplustox (LA); at the footslope, Xanthic Haplustox of gray color, here called "Gray Haplustox" ("LAC"); and, at the bottom of the palm swamp, Typic Albaquult (GXbd). These soils were first morphologically described; samples of disturbed and undisturbed soil were collected from all horizons and subhorizons to evaluate their essential physical and chemical properties, including standard determination of Fe, Al, Mn, Ti and Si oxides after sulfuric extraction. The contents of Fe, Al and Mn, extracted with dithionite-citrate-bicarbonate and oxalate treatments, were also determined. In the well-drained soils of the slope positions, the typical morphological, physical and chemical properties of Oxisols were found. The GXbd profile, from the bottom of the palm swamp, is grayish and has a high texture gradient (B/A) and massive structure. The reduction of the proportion of crystalline iron compounds and their low crystallinity along the slope confirmed the loss of iron during pedogenesis, which is reflected in the current soil color. The Si and Al contents were lowest in the "LAC" soil. There was a decrease of the Fe2O3/TiO2 ratio downhill, indicating progressive drainage restriction along the toposequence.
The genesis and all physical and chemical properties of the soils at the footslope and the bottom of the palm swamps of the "chapadas" of the Alto Jequitinhonha region are strongly influenced by the occurrence of ground water at or near the surface all year long, at present and/or in the past. Total concentrations of iron oxides, Fed and Feo, in the soils of the toposequence studied are related to the past and/or present soil colors and drainage conditions.
Abstract:
This is an overview of the United States Commission on Civil Rights and the duties of the Iowa State Advisory Committee. It contains a report concerning problems of the Spanish surnamed and migrant population in Iowa.
Abstract:
The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources.
The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. 
Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
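The propensity score matching step described above can be sketched in a few lines: fit a logistic model of treatment on covariates, then pair each treated unit with the control whose score is closest. This is a generic, numpy-only illustration on synthetic data, not the study's actual estimation code.

```python
import numpy as np

def propensity_scores(X, treated, steps=500, lr=0.1):
    """Logistic regression of treatment on covariates, fitted by gradient ascent."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w += lr * X1.T @ (treated - p) / len(X)  # gradient of the log-likelihood
    return 1.0 / (1.0 + np.exp(-X1 @ w))

def nearest_neighbour_match(ps, treated):
    """Match every treated unit to the control with the closest propensity score."""
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    return {int(i): int(c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]) for i in t_idx}
```

After matching, the outcome difference is tested on the balanced pairs, as in the pretest-posttest comparison described above.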
Abstract:
We prove for any pure three-quantum-bit state the existence of local bases which allow one to build a set of five orthogonal product states in terms of which the state can be written in a unique form. This leads to a canonical form which generalizes the two-quantum-bit Schmidt decomposition. It is uniquely characterized by the five entanglement parameters. It leads to a complete classification of the three-quantum-bit states. It shows that the right outcome of an adequate local measurement always erases all entanglement between the other two parties.
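For concreteness, one commonly cited way of writing such a canonical form (an illustration consistent with the abstract's count of five product states and five entanglement parameters, not necessarily the paper's exact notation) is, after suitable choices of local bases,

```latex
|\psi\rangle = \lambda_0\,|000\rangle + \lambda_1 e^{i\varphi}\,|100\rangle
             + \lambda_2\,|101\rangle + \lambda_3\,|110\rangle + \lambda_4\,|111\rangle,
\qquad \lambda_i \ge 0, \quad 0 \le \varphi \le \pi, \quad \sum_i \lambda_i^2 = 1.
```

Normalization removes one of the five moduli, and the remaining four moduli together with the phase \(\varphi\) account for the five entanglement parameters mentioned above.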
Abstract:
BACKGROUND: Clinical practice does not always reflect best practice and evidence, partly because of unconscious acts of omission, information overload, or inaccessible information. Reminders may help clinicians overcome these problems by prompting the doctor to recall information that they already know or would be expected to know and by providing information or guidance in a more accessible and relevant format, at a particularly appropriate time. OBJECTIVES: To evaluate the effects of reminders automatically generated through a computerized system and delivered on paper to healthcare professionals on processes of care (related to healthcare professionals' practice) and outcomes of care (related to patients' health condition). SEARCH METHODS: For this update the EPOC Trials Search Co-ordinator searched the following databases between June 11-19, 2012: The Cochrane Central Register of Controlled Trials (CENTRAL) and Cochrane Library (Economics, Methods, and Health Technology Assessment sections), Issue 6, 2012; MEDLINE, OVID (1946- ), Daily Update, and In-process; EMBASE, Ovid (1947- ); CINAHL, EbscoHost (1980- ); EPOC Specialised Register, Reference Manager, and INSPEC, Engineering Village. The authors reviewed reference lists of related reviews and studies. SELECTION CRITERIA: We included individual or cluster-randomized controlled trials (RCTs) and non-randomized controlled trials (NRCTs) that evaluated the impact of computer-generated reminders delivered on paper to healthcare professionals on processes and/or outcomes of care. DATA COLLECTION AND ANALYSIS: Review authors working in pairs independently screened studies for eligibility and abstracted data. We contacted authors to obtain important missing information for studies that were published within the last 10 years. For each study, we extracted the primary outcome when it was defined or calculated the median effect size across all reported outcomes. 
We then calculated the median absolute improvement and interquartile range (IQR) in process adherence across included studies using the primary outcome or median outcome as representative outcome. MAIN RESULTS: In the 32 included studies, computer-generated reminders delivered on paper to healthcare professionals achieved moderate improvement in professional practices, with a median improvement in processes of care of 7.0% (IQR: 3.9% to 16.4%). Implementing reminders alone improved care by 11.2% (IQR 6.5% to 19.6%) compared with usual care, while implementing reminders in addition to another intervention improved care by only 4.0% (IQR 3.0% to 6.0%) compared with the other intervention. The quality of evidence for these comparisons was rated as moderate according to the GRADE approach. Two reminder features were associated with larger effect sizes: providing space on the reminder for the provider to enter a response (median 13.7% versus 4.3% for no response, P value = 0.01) and providing an explanation of the content or advice on the reminder (median 12.0% versus 4.2% for no explanation, P value = 0.02). Median improvement in processes of care also differed according to the behaviour the reminder targeted: for instance, reminders to vaccinate improved processes of care by 13.1% (IQR 12.2% to 20.7%) compared with other targeted behaviours. In the only study that had sufficient power to detect a clinically significant effect on outcomes of care, reminders were not associated with significant improvements. AUTHORS' CONCLUSIONS: There is moderate quality evidence that computer-generated reminders delivered on paper to healthcare professionals achieve moderate improvement in processes of care. Two characteristics emerged as significant predictors of improvement: providing space on the reminder for a response from the clinician and providing an explanation of the reminder's content or advice.
The heterogeneity of the reminder interventions included in this review also suggests that reminders can improve care in various settings under various conditions.
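The summary statistic used in the review (median absolute improvement with IQR across studies) is straightforward to compute; the effect sizes below are hypothetical placeholders, not the review's data.

```python
import numpy as np

def summarize_improvements(effects_pct):
    """Median absolute improvement and interquartile range across study effect sizes (%)."""
    e = np.asarray(effects_pct, dtype=float)
    q1, med, q3 = np.percentile(e, [25, 50, 75])
    return med, (q1, q3)

med, iqr = summarize_improvements([3.9, 5.2, 7.0, 12.4, 16.4])  # hypothetical study effects
```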
Abstract:
Among the soils of the state of Mato Grosso do Sul, the Spodosols stand out in the Pantanal biome. Despite being recorded over considerable extensions, few studies aiming to characterize and classify these soils have been performed. The purpose of this study was to characterize and classify soils in three areas of two physiographic types in the Taquari river basin: bay and flooded fields. Two trenches were opened in the bay area (P1 and P2) and two in the flooded field (P3 and P4). The third area (saline), with high sodium levels, was sampled for further studies. In the soils of both areas the sand fraction was predominant, the texture ranged from sand to sandy loam, and the main constituent was quartz. In the bay area, the organic carbon (OC) content of the surface layer (P1) was > 80 g kg-1, and it was diagnosed as a Histic epipedon. In the other profiles the surface horizons had low OC levels which, associated with other properties, classified them as Ochric epipedons. In the soils of the bay area (P1 and P2), the pH ranged from 5.0 to 7.5, associated with a dominance of Ca2+ and Mg2+, with base saturation above 50 % in some horizons. In the flooded fields (P3 and P4) the soil pH ranged from 4.9 to 5.9, H+ contents were high in the surface horizons (0.8-10.5 cmolc kg-1), Ca2+ and Mg2+ contents ranged from 0.4 to 0.8 cmolc kg-1, and base saturation was < 50 %. In the soils of the bay area (P1 and P2), iron (extracted by dithionite, Fed) and OC accumulated in the spodic horizon; in the P3 and P4 soils only Fed accumulated (in the subsurface layers). According to the criteria adopted by the Brazilian System of Soil Classification (SiBCS) at the subgroup level, the soils were classified as P1: Organic Hydromorphic Ferrohumiluvic Spodosol; P2: Typical Orthic Ferrohumiluvic Spodosol; P3: Typical Hydromorphic Ferroluvic Spodosol; P4: Arenic Orthic Ferroluvic Spodosol.
Abstract:
The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
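A GRNN is, in essence, Nadaraya-Watson kernel regression with a Gaussian kernel: the prediction is a kernel-weighted average of the training targets, and the kernel width σ is the parameter tuned by cross-validation. The sketch below (hypothetical names, isotropic kernel only, numpy only) illustrates the idea, not the paper's implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def tune_sigma(X, y, sigmas):
    """Leave-one-out cross-validation over candidate kernel widths."""
    best, best_err = None, np.inf
    for s in sigmas:
        err = 0.0
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            pred = grnn_predict(X[mask], y[mask], X[i:i + 1], s)[0]
            err += (pred - y[i]) ** 2
        if err < best_err:
            best, best_err = s, err
    return best
```

An anisotropic model would replace the single σ with one width per coordinate; the tuning loop stays the same.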
Abstract:
Colorectal cancer (CRC) is a major cause of cancer mortality. Whereas some patients respond well to therapy, others do not, and thus more precise, individualized treatment strategies are needed. To that end, we analyzed gene expression profiles from 1,290 CRC tumors using consensus-based unsupervised clustering. The resultant clusters were then associated with therapeutic response data to the epidermal growth factor receptor-targeted drug cetuximab in 80 patients. The results of these studies define six clinically relevant CRC subtypes. Each subtype shares similarities to distinct cell types within the normal colon crypt and shows differing degrees of 'stemness' and Wnt signaling. Subtype-specific gene signatures are proposed to identify these subtypes. Three subtypes have markedly better disease-free survival (DFS) after surgical resection, suggesting these patients might be spared from the adverse effects of chemotherapy when they have localized disease. One of these three subtypes, identified by filamin A expression, does not respond to cetuximab but may respond to cMET receptor tyrosine kinase inhibitors in the metastatic setting. Two other subtypes, with poor and intermediate DFS, associate with improved response to the chemotherapy regimen FOLFIRI in adjuvant or metastatic settings. Development of clinically deployable assays for these subtypes and of subtype-specific therapies may contribute to more effective management of this challenging disease.
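Consensus-based unsupervised clustering of the kind used above repeatedly clusters random subsamples and records how often each pair of samples lands in the same cluster; stable subtypes show up as blocks of high co-clustering frequency. Below is a minimal numpy sketch (simplified k-means, synthetic data), not the authors' pipeline.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means with farthest-point initialisation."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def consensus_matrix(X, k, n_runs=30, frac=0.8, seed=0):
    """Fraction of subsampled runs in which each pair of samples co-clusters."""
    rng = np.random.default_rng(seed)
    n = len(X)
    co = np.zeros((n, n))
    cnt = np.zeros((n, n))
    for r in range(n_runs):
        idx = rng.choice(n, int(frac * n), replace=False)
        labels = kmeans(X[idx], k, seed=r)
        cnt[np.ix_(idx, idx)] += 1
        co[np.ix_(idx, idx)] += labels[:, None] == labels[None, :]
    return np.divide(co, cnt, out=np.zeros_like(co), where=cnt > 0)
```

In practice the consensus matrix itself is then clustered (and the number of clusters chosen by how clean its block structure is), which is how the six subtypes would be extracted.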
Abstract:
A haplotype is an m-long binary vector. The XOR-genotype of two haplotypes is the m-vector of their coordinate-wise XOR. We study the following problem: given a set of XOR-genotypes, reconstruct their haplotypes so that the set of resulting haplotypes can be mapped onto a perfect phylogeny (PP) tree. The question is motivated by studying population evolution in human genetics, and is a variant of the perfect phylogeny haplotyping problem that has received intensive attention recently. Unlike the latter problem, in which the input is "full" genotypes, here we assume less informative input, which may therefore be more economical to obtain experimentally. Building on ideas of Gusfield, we show how to solve the problem in polynomial time, by a reduction to the graph realization problem. The actual haplotypes are not uniquely determined by the tree they map onto, and the tree itself may or may not be unique. We show that tree uniqueness implies uniquely determined haplotypes, up to inherent degrees of freedom, and give a sufficient condition for uniqueness. To actually determine the haplotypes given the tree, additional information is necessary. We show that two or three full genotypes suffice to reconstruct all the haplotypes, and present a linear algorithm for identifying those genotypes.
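The XOR-genotype and one of its inherent degrees of freedom can be illustrated directly (toy vectors, not data from the paper): XORing both haplotypes with a common mask m leaves their XOR-genotype unchanged, which is one reason the tree alone cannot pin the haplotypes down.

```python
def xor_vectors(a, b):
    """Coordinate-wise XOR of two equal-length binary vectors."""
    return [x ^ y for x, y in zip(a, b)]

h1, h2 = [1, 0, 1, 1], [0, 0, 1, 0]
g = xor_vectors(h1, h2)           # XOR-genotype of the pair

m = [1, 1, 0, 0]                  # an arbitrary common mask
g2 = xor_vectors(xor_vectors(h1, m), xor_vectors(h2, m))
# g2 == g: the haplotype pair is determined only up to such a common flip
```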