869 results for Classification of cast net


Relevance:

100.00%

Publisher:

Abstract:

This article categorises manufacturing strategy design processes and presents the characteristics of the resulting strategies. This work will therefore assist practitioners in appreciating the implications of their planning activities. The article presents a framework for classifying manufacturing strategy processes and the resulting strategies. Each process and its respective strategy is then considered in detail. Within this consideration, the preferred approach for formulating a world-class manufacturing strategy is presented. Finally, conclusions and recommendations for further work are given.

Relevance:

100.00%

Publisher:

Abstract:

The traditional method of classifying neurodegenerative diseases is based on the original clinico-pathological concept supported by 'consensus' criteria and data from molecular pathological studies. This review discusses, first, current problems in classification resulting from the coexistence of different classificatory schemes, the presence of disease heterogeneity and multiple pathologies, the use of 'signature' brain lesions in diagnosis, and the existence of pathological processes common to different diseases. Second, three models of neurodegenerative disease are proposed: (1) that distinct diseases exist ('discrete' model), (2) that relatively distinct diseases exist but exhibit overlapping features ('overlap' model), and (3) that distinct diseases do not exist and neurodegenerative disease is a 'continuum' in which there is continuous variation in clinical/pathological features from one case to another ('continuum' model). Third, to distinguish between models, the distribution of the most important molecular 'signature' lesions across the different diseases is reviewed. Such lesions often have poor 'fidelity', i.e., they are not unique to individual disorders but are distributed across many diseases, consistent with the overlap or continuum models. Fourth, the question of whether the current classificatory system should be rejected is considered and three alternatives are proposed, viz., objective classification, classification for convenience (a 'dissection'), or analysis as a continuum.

Relevance:

100.00%

Publisher:

Abstract:

There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. A wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity from speech signals has recently been introduced. In this paper, we test how accurately these novel algorithms can discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. We then select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map each feature subset to a binary classification response using two statistical classifiers: random forests and support vector machines. Using an existing database of 263 samples from 43 subjects, we demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the classifiers' ability to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
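
As a rough illustration of the pipeline this abstract describes, the following Python sketch selects a ten-feature subset and scores two classifiers by cross-validation. The synthetic data and the mutual-information selector are illustrative stand-ins; the authors' actual dysphonia measures, feature selection algorithms, and tuning are not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(263, 132))   # stand-in: 263 voice samples x 132 dysphonia measures
y = rng.integers(0, 2, size=263)  # stand-in labels: 1 = PD, 0 = healthy control

for clf in (RandomForestClassifier(n_estimators=500, random_state=0),
            SVC(kernel="rbf")):
    model = make_pipeline(StandardScaler(),
                          SelectKBest(mutual_info_classif, k=10),  # keep ten features
                          clf)
    print(type(clf).__name__, "CV accuracy:",
          round(cross_val_score(model, X, y, cv=10).mean(), 3))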

Relevance:

100.00%

Publisher:

Abstract:

Despite the large body of research regarding the role of memory in OCD, the results are described as mixed at best (Hermans et al., 2008). For example, inconsistent findings have been reported with respect to basic capacity, largely intact verbal memory, and generally affected visuospatial memory. We suggest that this is due to the traditional pursuit of OCD memory impairment as a matter of general capacity and/or domain specificity (visuospatial vs. verbal). In contrast, we conclude from our experiments (i.e., Harkin & Kessler, 2009, 2011; Harkin, Rutherford, & Kessler, 2011) and the recent literature (e.g., Greisberg & McKay, 2003) that OCD memory impairment is secondary to executive dysfunction; more specifically, we identify three common factors (EBL: Executive-functioning efficiency, Binding complexity, and memory Load) that we generalize to 58 experimental findings from 46 OCD memory studies. As a result we explain otherwise inconsistent findings – e.g., intact vs. deficient verbal memory – that are difficult to reconcile within a capacity or domain-specific perspective. We conclude by discussing the relationship between our account and others', which in most cases is complementary rather than contradictory.

Relevance:

100.00%

Publisher:

Abstract:

MOTIVATION: G protein-coupled receptors (GPCRs) play an important role in many physiological systems by transducing an extracellular signal into an intracellular response. Over 50% of all marketed drugs are targeted towards a GPCR. There is considerable interest in developing an algorithm that could effectively predict the function of a GPCR from its primary sequence. Such an algorithm is useful not only in identifying novel GPCR sequences but in characterizing the interrelationships between known GPCRs. RESULTS: An alignment-free approach to GPCR classification has been developed using techniques drawn from data mining and proteochemometrics. A dataset of over 8000 sequences was constructed to train the algorithm. This represents one of the largest GPCR datasets currently available. A predictive algorithm was developed based upon the simplest reasonable numerical representation of the protein's physicochemical properties. A selective top-down approach was developed, which used a hierarchical classifier to assign sequences to subdivisions within the GPCR hierarchy. The predictive performance of the algorithm was assessed against several standard data mining classifiers and further validated against Support Vector Machine-based GPCR prediction servers. The selective top-down approach achieves significantly higher accuracy than standard data mining methods in almost all cases.
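
A hedged sketch of the selective top-down idea follows: one classifier assigns a sequence to a top-level class, and a per-class classifier then assigns the subdivision. The amino-acid composition representation is a simplified stand-in for the paper's physicochemical encoding, the random forest stands in for its hierarchical classifier, and the sequences and labels are invented.

from collections import defaultdict
from sklearn.ensemble import RandomForestClassifier

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    # Alignment-free numeric representation: relative frequency of each
    # residue (a crude proxy for physicochemical descriptors).
    return [seq.count(a) / len(seq) for a in AMINO]

def fit_top_down(seqs, classes, subfamilies):
    X = [composition(s) for s in seqs]
    top = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, classes)
    rows = defaultdict(list)
    for x, c, sub in zip(X, classes, subfamilies):
        rows[c].append((x, sub))
    per_class = {c: RandomForestClassifier(n_estimators=200, random_state=0)
                    .fit(*zip(*rs))
                 for c, rs in rows.items()}
    return top, per_class

def predict_top_down(top, per_class, seq):
    x = [composition(seq)]
    c = top.predict(x)[0]                  # level 1: top-level class
    return c, per_class[c].predict(x)[0]   # level 2: subdivision within class

# Toy usage with invented sequences and labels.
seqs = ["MSTAELLKQW", "MGGHWQRTCY", "MKLVVAACDE", "MPPSYEELNK"]
top, per_class = fit_top_down(seqs, ["A", "A", "B", "B"], ["A1", "A2", "B1", "B1"])
print(predict_top_down(top, per_class, "MSTAEWLKQW"))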

Relevance:

100.00%

Publisher:

Abstract:

Biological experiments often produce enormous amounts of data, which are usually analyzed by data clustering. Cluster analysis refers to statistical methods used to group data with similar properties into several smaller, more meaningful groups. Two commonly used clustering techniques are introduced in the following section: principal component analysis (PCA) and hierarchical clustering. PCA captures the variance shared between variables and combines them into a few uncorrelated principal components (PCs) that are orthogonal to each other. Hierarchical clustering is carried out by separating data into many clusters and merging similar clusters together. Here, we use an example of human leukocyte antigen (HLA) supertype classification to demonstrate the use of the two methods. Two programs, Generating Optimal Linear Partial Least Square Estimations (GOLPE) and Sybyl, are used for PCA and hierarchical clustering, respectively. However, the reader should bear in mind that these methods have been incorporated into other software as well, such as SIMCA, statistiXL, and R.
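
Since GOLPE and Sybyl are commercial packages, here is a freely reproducible sketch of the same two steps using scikit-learn and SciPy; the toy matrix stands in for an HLA descriptor table, and the component and cluster counts are arbitrary.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))  # toy stand-in: 30 HLA alleles x 8 descriptors

# PCA: project the correlated descriptors onto a few orthogonal,
# uncorrelated principal components.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("variance explained:", pca.explained_variance_ratio_)

# Hierarchical clustering: start from singleton clusters, repeatedly merge
# the most similar pair, then cut the tree into four candidate supertypes.
Z = linkage(X, method="average", metric="euclidean")
print(fcluster(Z, t=4, criterion="maxclust"))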

Relevance:

100.00%

Publisher:

Abstract:

Short text messages, a.k.a. Microposts (e.g. Tweets), have proven to be an effective channel for revealing information about trends and events, ranging from those related to Disaster (e.g. hurricane Sandy) to those related to Violence (e.g. Egyptian revolution). Being informed about such events as they occur can be extremely important to authorities and emergency professionals by allowing such parties to respond immediately. In this work we study the problem of topic classification (TC) of Microposts, which aims to automatically classify short messages based on the subject(s) discussed in them. Accurate TC of Microposts, however, is a challenging task, since the limited number of tokens in a post often implies a lack of sufficient contextual information. In order to provide contextual information to Microposts, we present and evaluate several graph structures surrounding concepts present in linked knowledge sources (KSs). Traditional TC techniques enrich the content of Microposts with features extracted only from the Microposts' content. In contrast, our approach relies on the generation of different weighted semantic meta-graphs extracted from linked KSs. We introduce a new semantic graph, called the category meta-graph. This novel meta-graph provides a more fine-grained categorisation of concepts, yielding a set of novel semantic features. Our findings show that such category meta-graph features effectively improve the performance of a topic classifier of Microposts. Furthermore, our goal is also to understand which semantic features contribute to the performance of a topic classifier. For this reason we propose an approach for automatic estimation of the accuracy loss of a topic classifier on new, unseen Microposts. We introduce and evaluate novel topic similarity measures, which capture the similarity between KS documents and Microposts at a conceptual level, considering the enriched representation of these documents. Extensive evaluation in the context of Emergency Response (ER) and Violence Detection (VD) revealed that our approach outperforms previous approaches that use a single KS without linked data, and approaches based on Twitter data only, by up to 31.4% in terms of F1 measure. Our main findings indicate that the new category graph contains useful information for TC and achieves results comparable to those of previously used semantic graphs. Furthermore, our results also indicate that the accuracy of a topic classifier can be accurately predicted using the enhanced text representation, outperforming previous approaches based on content-based similarity measures.
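
A minimal sketch of the enrichment idea: content features from the post itself are concatenated with externally derived semantic features before classification. The semantic_features function below is a deliberately crude placeholder; in the paper those features come from weighted meta-graphs over linked knowledge sources, not from the post text, and the posts and labels here are invented.

import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def semantic_features(post):
    # Placeholder for KS-derived meta-graph features; two hand-picked
    # indicator features keep the sketch runnable end to end.
    return [float("flood" in post), float("protest" in post)]

posts = ["flood waters rising downtown", "protest march turns violent",
         "sunny afternoon at the beach", "evacuations begin after flood"]
labels = ["emergency", "violence", "other", "emergency"]

vec = TfidfVectorizer()

def featurize(texts, fitted=False):
    content = vec.transform(texts) if fitted else vec.fit_transform(texts)
    semantic = csr_matrix(np.array([semantic_features(t) for t in texts]))
    return hstack([content, semantic])  # enriched representation

clf = LogisticRegression(max_iter=1000).fit(featurize(posts), labels)
print(clf.predict(featurize(["flood warning issued"], fitted=True)))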

Relevance:

100.00%

Publisher:

Abstract:

The thesis presented an overlapping analysis of private law institutions, in response to the arguments that law must be separated into discrete categories. The basis of this overlapping approach was the realist perspective, which emphasises the role of facts and outcomes as the starting point for legal analysis as opposed to legal principle or doctrine.

Relevance:

100.00%

Publisher:

Abstract:

Research into national accounting regulation has found that, as a result of country-specific particularities, partly different accounting frameworks have developed. Studies with an inductive approach typically cover a wide range of regulatory issues, but only along a few factors. In the case of cash flow statements, most studies have so far only examined whether a rule requiring the presentation of the statement exists, without an in-depth analysis of the details; as a result, these surveys found relatively minor differences in this field. The author's research shows that there are differences in the details of national cash flow statement regulations, on the basis of which countries can be arranged hierarchically into groups using cluster analysis.
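
A hedged sketch of the clustering step described above: each country's cash flow statement rules are coded as binary features and the countries are merged hierarchically. The country codes and rule columns below are invented purely for illustration, as are the distance and linkage choices.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

countries = ["AT", "DE", "FR", "HU", "UK"]
# Invented binary coding of regulatory details, e.g.: statement required,
# direct method permitted, interest shown as operating, line items prescribed.
rules = np.array([[1, 1, 1, 0],
                  [1, 1, 1, 1],
                  [1, 0, 1, 1],
                  [1, 0, 0, 1],
                  [1, 1, 0, 0]], dtype=bool)

# Jaccard distance suits binary regulatory features; average linkage then
# merges the most similar regulatory regimes first.
Z = linkage(pdist(rules, metric="jaccard"), method="average")
print(dict(zip(countries, fcluster(Z, t=2, criterion="maxclust"))))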

Relevance:

100.00%

Publisher:

Abstract:

In recent decades, the need to evaluate public sector organizations has emerged more and more often, and ever newer methods have appeared, raising the need to systematize them both in practice and in research. Taking into account the classification attempts found in the literature and the perspectives of the evaluation field, the author proposes a classification framework for the evaluation methods of public sector organizations. The classification dimensions include the position of the evaluator, the role of the evaluation, and the approach to knowledge. The author illustrates the content of the classification framework with examples, which indicates the model's practical applicability. At the same time, the framework can also help in determining the focus and scope of research projects.

Relevance:

100.00%

Publisher:

Abstract:

This dissertation develops a new figure of merit to measure the similarity (or dissimilarity) of Gaussian distributions through a novel concept that relates the Fisher distance to the percentage of data overlap. The derivations are expanded to provide a generalized mathematical platform for determining an optimal separating boundary of Gaussian distributions in multiple dimensions. Real-world data used for implementation and in carrying out feasibility studies were provided by Beckman-Coulter. It is noted that although the data used is flow cytometric in nature, the mathematics are general in their derivation and extend to other types of data as long as their statistical behavior approximates Gaussian distributions. Because this new figure of merit is heavily based on the statistical nature of the data, a new filtering technique is introduced to accommodate the accumulation process involved with histogram data. When data is accumulated into a frequency histogram, the data is inherently smoothed in a linear fashion, since an averaging effect is taking place as the histogram is generated. This new filtering scheme addresses data that is accumulated in the uneven resolution of the channels of the frequency histogram. The qualitative interpretation of flow cytometric data is currently a time-consuming and imprecise method for evaluating histogram data. This method offers a broader spectrum of capabilities in the analysis of histograms, since the figure of merit derived in this dissertation integrates within its mathematics both a measure of similarity and the percentage of overlap between the distributions under analysis.
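
The abstract does not reproduce the dissertation's figure of merit, but the univariate equal-variance case gives a useful reference point for how overlap relates to separation (these are standard results, not the dissertation's derivation). For two densities N(\mu_1, \sigma^2) and N(\mu_2, \sigma^2) with equal priors, the densities cross at the midpoint, which is also the optimal separating boundary, and the overlap coefficient (the shared area under the two curves) follows from the standard normal CDF \Phi:

    x^{*} = \frac{\mu_1 + \mu_2}{2},
    \qquad
    \mathrm{OVL} = 2\,\Phi\!\left(-\frac{|\mu_2 - \mu_1|}{2\sigma}\right).

With unequal variances the equal-likelihood condition becomes quadratic in x, giving up to two boundary points, which is where a generalized multi-dimensional treatment such as the one described above becomes necessary.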

Relevance:

100.00%

Publisher:

Abstract:

Flow Cytometry analyzers have become trusted companions due to their ability to perform fast and accurate analyses of human blood. The aim of these analyses is to determine the possible existence of abnormalities in the blood that have been correlated with serious disease states, such as infectious mononucleosis, leukemia, and various cancers. Though these analyzers provide important feedback, it is always desired to improve the accuracy of the results. This is evidenced by the occurrences of misclassifications reported by some users of these devices. It is advantageous to provide a pattern interpretation framework that is able to provide better classification ability than is currently available. Toward this end, the purpose of this dissertation was to establish a feature extraction and pattern classification framework capable of providing improved accuracy for detecting specific hematological abnormalities in flow cytometric blood data. This involved extracting a unique and powerful set of shift-invariant statistical features from the multi-dimensional flow cytometry data and then using these features as inputs to a pattern classification engine composed of an artificial neural network (ANN). The contribution of this method consisted of developing a descriptor matrix that can be used to reliably assess if a donor’s blood pattern exhibits a clinically abnormal level of variant lymphocytes, which are blood cells that are potentially indicative of disorders such as leukemia and infectious mononucleosis. This study showed that the set of shift-and-rotation-invariant statistical features extracted from the eigensystem of the flow cytometric data pattern performs better than other commonly-used features in this type of disease detection, exhibiting an accuracy of 80.7%, a sensitivity of 72.3%, and a specificity of 89.2%. This performance represents a major improvement for this type of hematological classifier, which has historically been plagued by poor performance, with accuracies as low as 60% in some cases. This research ultimately shows that an improved feature space was developed that can deliver improved performance for the detection of variant lymphocytes in human blood, thus providing significant utility in the realm of suspect flagging algorithms for the detection of blood-related diseases.
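
A hedged sketch of the feature idea: the eigenvalues of a sample's channel covariance matrix are unchanged when the whole event cloud is shifted, and the spectrum is likewise preserved under rotations, so they can serve as shift-and-rotation-invariant inputs to a small neural network. Everything below (data, network size, normalization) is an illustrative stand-in, not the dissertation's descriptor matrix.

import numpy as np
from sklearn.metrics import recall_score
from sklearn.neural_network import MLPClassifier

def eigen_features(events):
    # events: (n_cells, n_channels) matrix of one donor's cytometry events.
    # Covariance eigenvalues ignore shifts (and rotations) of the cloud.
    w = np.sort(np.linalg.eigvalsh(np.cov(events, rowvar=False)))[::-1]
    return w / w.sum()  # scale-normalized eigenvalue spectrum

rng = np.random.default_rng(2)
# Synthetic stand-in: 40 donors, each with 1000 events in 4 channels;
# label 1 marks an (invented) abnormal variant-lymphocyte level.
X = np.array([eigen_features(rng.normal(size=(1000, 4))) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
pred = clf.predict(X)
print("sensitivity:", recall_score(y, pred),
      "specificity:", recall_score(y, pred, pos_label=0))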

Relevance:

100.00%

Publisher:

Abstract:

We present here a 4-year dataset (2001–2004) on the spatial and temporal patterns of aboveground net primary production (ANPP) by dominant primary producers (sawgrass, periphyton, mangroves, and seagrasses) along two transects in the oligotrophic Florida Everglades coastal landscape. The 17 sites of the Florida Coastal Everglades Long Term Ecological Research (FCE LTER) program are located along fresh-estuarine gradients in Shark River Slough (SRS) and Taylor River/C-111/Florida Bay (TS/Ph) basins that drain the western and southern Everglades, respectively. Within the SRS basin, sawgrass and periphyton ANPP did not differ significantly among sites but mangrove ANPP was highest at the site nearest the Gulf of Mexico. In the southern Everglades transect, there was a productivity peak in sawgrass and periphyton at the upper estuarine ecotone within Taylor River but no trends were observed in the C-111 Basin for either primary producer. Over the 4 years, average sawgrass ANPP in both basins ranged from 255 to 606 g m⁻² year⁻¹. Average periphyton productivity at SRS and TS/Ph was 17–68 g C m⁻² year⁻¹ and 342–10371 g C m⁻² year⁻¹, respectively. Mangrove productivity ranged from 340 g m⁻² year⁻¹ at Taylor River to 2208 g m⁻² year⁻¹ at the lower estuarine Shark River site. Average Thalassia testudinum productivity ranged from 91 to 396 g m⁻² year⁻¹ and was 4-fold greater at the site nearest the Gulf of Mexico than in eastern Florida Bay. There were no differences in periphyton productivity at Florida Bay. Interannual comparisons revealed no significant differences within each primary producer at either SRS or TS/Ph with the exception of sawgrass at SRS and the C-111 Basin. Future research will address difficulties in assessing and comparing ANPP of different primary producers along gradients as well as the significance of belowground production to the total productivity of this ecosystem.

Relevance:

100.00%

Publisher:

Abstract:

South Florida’s watersheds have endured a century of urban and agricultural development and disruption of their hydrology. Spatial characterization of South Florida’s estuarine and coastal waters is important to Everglades restoration programs. We applied Factor Analysis and Hierarchical Clustering in tandem to water quality data to characterize and spatially subdivide South Florida’s coastal and estuarine waters. Segmentation rendered forty-four biogeochemically distinct water bodies whose spatial distribution is closely linked to geomorphology, circulation, benthic community pattern, and water management. This segmentation has been adopted, with minor changes, by federal and state environmental agencies to derive numeric nutrient criteria.
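
A hedged sketch of the tandem approach: factor analysis first compresses correlated water quality variables into a few latent factors, and stations are then clustered hierarchically on their factor scores. Variable counts, station counts, and parameter choices below are illustrative, not the study's configuration.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 12))  # stand-in: 200 stations x 12 water quality
                                # variables (salinity, nutrients, chlorophyll, ...)

# Step 1: factor analysis condenses the correlated variables into a few
# latent biogeochemical factors.
scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(X)

# Step 2: Ward-linkage clustering of stations on the factor scores; the
# tree is cut into 44 segments, matching the study's 44 water bodies.
segments = fcluster(linkage(scores, method="ward"), t=44, criterion="maxclust")
print(np.bincount(segments)[1:])  # number of stations in each segment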