937 results for Classification time


Relevance:

30.00%

Publisher:

Abstract:

Introduction: Baseline severity and clinical stroke syndrome (Oxford Community Stroke Project, OCSP) classification are predictors of outcome in stroke. We used data from the 'Tinzaparin in Acute Ischaemic Stroke Trial' (TAIST) to assess the relationship between stroke severity, early recovery, outcome and OCSP syndrome. Methods: TAIST was a randomised controlled trial assessing the safety and efficacy of tinzaparin versus aspirin in 1,484 patients with acute ischaemic stroke. Severity was measured with the Scandinavian Neurological Stroke Scale (SNSS) at baseline and days 4, 7 and 10, and the baseline OCSP clinical classification was recorded: total anterior circulation infarct (TACI), partial anterior circulation infarct (PACI), lacunar infarct (LACI) and posterior circulation infarct (POCI). Recovery was calculated as the change in SNSS from baseline at days 4 and 10. The relationship between stroke syndrome, SNSS at days 4 and 10, and outcome (modified Rankin Scale at 90 days) was assessed. Results: Stroke severity differed significantly between TACI (most severe) and LACI (mildest) at all four time points (p<0.001), with no difference between PACI and POCI. The largest change in SNSS score occurred between baseline and day 4; improvement was smallest in TACI (median 2 units) compared with the other groups (median 3 units) (p<0.001). If SNSS did not improve by day 4, early recovery and late functional outcome tended to be limited irrespective of clinical syndrome (SNSS, baseline: 31, day 10: 32; mRS, day 90: 4); patients who recovered early tended to continue improving and had a better functional outcome irrespective of syndrome (SNSS, baseline: 35, day 10: 50; mRS, day 90: 2). Conclusions: Although functional outcome is related to baseline clinical syndrome (best with LACI, worst with TACI), patients who improve early have a more favourable functional outcome irrespective of their OCSP syndrome. Hence, patients with a TACI syndrome may still achieve a reasonable outcome if early recovery occurs.

Relevance:

30.00%

Publisher:

Abstract:

As an immune-inspired algorithm, the Dendritic Cell Algorithm (DCA) produces promising performance in the field of anomaly detection. This paper presents the application of the DCA to a standard data set, the KDD 99 data set. The results of different implementation versions of the DCA, including an antigen multiplier and moving time windows, are reported. The real-valued Negative Selection Algorithm (NSA) using constant-sized detectors and the C4.5 decision tree algorithm are used to conduct a baseline comparison. The results suggest that the DCA is applicable to the KDD 99 data set, and that the antigen multiplier and moving time windows have the same effect on the DCA for this particular data set. The real-valued NSA with constant-sized detectors is not applicable to the data set, while the C4.5 decision tree algorithm provides a benchmark of the classification performance for this data set.
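To make the mechanism concrete, here is a toy, single-cell sketch of the DCA's signal-processing core; the signal categories, weights, threshold and stream format are illustrative assumptions, not the implementation evaluated in the paper:

```python
# Hypothetical signal weights, loosely in the style of the DCA literature;
# real implementations use tuned weights and populations of cells.
W_CSM = {"pamp": 2.0, "danger": 1.0, "safe": 2.0}   # co-stimulation accumulator
W_K   = {"pamp": 2.0, "danger": 1.0, "safe": -2.0}  # context accumulator

def dca_mcav(stream, threshold=10.0):
    """Toy single-cell DCA: accumulate signals while sampling antigens and,
    on 'migration', present sampled antigens in a mature (anomalous) or
    semi-mature (normal) context. Returns each antigen's MCAV."""
    presented = {}          # antigen -> (mature_count, total_count)
    csm = k = 0.0
    sampled = []
    for antigen, signals in stream:
        sampled.append(antigen)
        csm += sum(W_CSM[s] * v for s, v in signals.items())
        k   += sum(W_K[s]   * v for s, v in signals.items())
        if csm >= threshold:                 # cell migrates and presents
            mature = k > 0
            for a in sampled:
                m, t = presented.get(a, (0, 0))
                presented[a] = (m + int(mature), t + 1)
            csm = k = 0.0
            sampled = []
    return {a: m / t for a, (m, t) in presented.items()}
```

An antigen's MCAV (the fraction of its presentations in a mature context) is then thresholded to label it anomalous or normal.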

Relevance:

30.00%

Publisher:

Abstract:

When it comes to information sets in real life, pieces of the whole set are often unavailable. This problem can originate from various causes and therefore describes different patterns. In the literature, this problem is known as Missing Data. The issue can be handled in various ways: by discarding incomplete observations, by estimating what the missing values originally were, or by simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between Missing Data, Imputation Methods and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, where 'discrete' means that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the results produced by a classifier. Also, in some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset: multivariate Time Series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterise this problem contain their own naturally occurring missing values, which provides a real-world benchmark for testing the algorithms developed in this thesis.
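As a concrete illustration of the two families of methods discussed, the sketch below contrasts a simple baseline (mean imputation) with a time-series-aware technique (linear interpolation across gaps). Both are generic textbook methods shown for orientation, not the imputation strategies proposed in the thesis:

```python
import math

def mean_impute(xs):
    """Baseline: replace NaNs with the mean of the observed values."""
    obs = [x for x in xs if not math.isnan(x)]
    mu = sum(obs) / len(obs)
    return [mu if math.isnan(x) else x for x in xs]

def linear_interpolate(xs):
    """Time-series-aware alternative: fill each run of NaNs by linearly
    joining its observed neighbours (edge gaps are held constant)."""
    ys = list(xs)
    n = len(ys)
    i = 0
    while i < n:
        if math.isnan(ys[i]):
            j = i
            while j < n and math.isnan(ys[j]):
                j += 1                      # j is the first observed index after the gap
            lo = ys[i - 1] if i > 0 else ys[j]
            hi = ys[j] if j < n else ys[i - 1]
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                ys[k] = lo + t * (hi - lo)
            i = j
        else:
            i += 1
    return ys
```

On a series with trend, interpolation respects the temporal ordering that mean imputation ignores, which is the motivation for domain-specific techniques in the time-series chapter.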

Relevance:

30.00%

Publisher:

Abstract:

In order to reduce serious health incidents, individuals at high risk need to be identified as early as possible so that effective intervention and preventive care can be provided. This requires regular and efficient assessments of risk within the communities that are the first point of contact for individuals. Clinical Decision Support Systems (CDSSs) have been developed to help with the task of risk assessment; however, such systems and their underpinning classification models are tailored towards those with clinical expertise. Communities where regular risk assessments are required lack such expertise. This paper presents the continuation of the GRiST research team's efforts to disseminate clinical expertise to communities. Based on our earlier published findings, this paper introduces the framework and skeleton for a data collection and risk classification model that evaluates data redundancy in real time, detects the risk-informative data and guides risk assessors towards collecting those data. By doing so, it enables non-experts within the communities to conduct reliable mental health risk triage.
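One hypothetical way to realise "detecting the risk-informative data" is a greedy, information-gain-driven question selector, sketched below. The question names, probabilities and redundancy cut-off are invented for illustration and are not GRiST's actual model:

```python
import math

def entropy(p):
    """Binary entropy of a risk probability p."""
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_question(p_risk, questions):
    """Pick the question whose answer most reduces expected risk uncertainty.
    questions: name -> (p_yes, p_risk_if_yes, p_risk_if_no), all assumed values.
    Returns (None, 0.0) once every remaining question is redundant."""
    def expected_entropy(q):
        p_yes, r_yes, r_no = questions[q]
        return p_yes * entropy(r_yes) + (1 - p_yes) * entropy(r_no)
    gains = {q: entropy(p_risk) - expected_entropy(q) for q in questions}
    best = max(gains, key=gains.get)
    # Below the cut-off, further data collection is redundant: stop asking.
    return (best, gains[best]) if gains[best] > 0.01 else (None, 0.0)
```

Asking only questions with positive expected gain is one way a non-expert assessor could be guided towards the informative data while redundant items are skipped.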

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the problem of semantic place categorization in mobile robotics is addressed by considering a time-based probabilistic approach called the dynamic Bayesian mixture model (DBMM), which is an improved variation of the dynamic Bayesian network. More specifically, multi-class semantic classification is performed by a DBMM composed of a mixture of heterogeneous base classifiers, using geometrical features computed from 2D laser scanner data, where the sensor is mounted on board a moving robot operating indoors. Besides its capability to combine different probabilistic classifiers, the DBMM approach also incorporates time-based (dynamic) inferences in the form of previous class-conditional probabilities and priors. Extensive experiments were carried out on publicly available benchmark datasets, highlighting the influence of the number of time slices and the effect of additive smoothing on the classification performance of the proposed approach. Reported results, under different scenarios and conditions, show the effectiveness and competitive performance of the DBMM.
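The dynamic mixture idea can be sketched in a few lines: base-classifier posteriors are combined with fixed mixture weights, multiplied by the previous time slice's class beliefs, smoothed and renormalised. This is a simplified reading of the DBMM for illustration, not the authors' exact formulation:

```python
def dbmm_step(base_posteriors, weights, prev_belief, smoothing=1e-3):
    """One time slice of a simplified DBMM.
    base_posteriors: list of per-classifier class-probability lists.
    weights: mixture weight per base classifier.
    prev_belief: class beliefs from the previous time slice.
    smoothing: additive smoothing, as discussed in the abstract."""
    classes = range(len(prev_belief))
    # Mixture of heterogeneous base classifiers, with additive smoothing.
    mixed = [sum(w * p[c] for w, p in zip(weights, base_posteriors)) + smoothing
             for c in classes]
    # Dynamic part: weight by the previous slice's beliefs, then renormalise.
    unnorm = [mixed[c] * prev_belief[c] for c in classes]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

Iterating the step lets consistent evidence accumulate across time slices, which is the "dynamic" component that distinguishes the DBMM from a static mixture.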

Relevance:

30.00%

Publisher:

Abstract:

Classification schemes are built at a particular point in time; at inception, they reflect a worldview indicative of that time. This is their strength, but it results in potential weaknesses as worldviews change. For example, if a scheme of mathematics is not updated even though the state of the art has changed, then it is not a very useful scheme for users for the purposes of information retrieval. However, change in schemes is a good thing. Changing allows designers of schemes to update their model and serves as responsible mediation between resources and users. But change does come at a cost. In the print world, we revise universal classification schemes, sometimes in drastic ways, and this means that over time the power of a classification scheme to collocate is compromised if we do not account for scheme change in the organization of affected physical resources. If we understand this phenomenon in the print world, we can design ameliorations for the digital world.

Relevance:

30.00%

Publisher:

Abstract:

Describes three units of time helpful for understanding and evaluating classificatory structures: long time (versions and states of classification schemes), short time (the act of indexing as repeated ritual or form), and micro-time (where stages of the interpretation process of indexing are separated out and inventoried). Concludes with a short discussion of how time and the impermanence of classification also conjures up an artistic conceptualization of indexing, and briefly uses that to question the seemingly dominant understanding of classification practice as outcome of scientific management and assembly line thought.

Relevance:

30.00%

Publisher:

Abstract:

As the universe of knowledge and its subjects change over time, indexing languages such as classification schemes accommodate that change by restructuring. Restructuring indexing languages affects indexer and cataloguer work. Subjects may split or lump together. They may disappear only to reappear later. And new subjects may emerge that were assumed to be already present but not clearly articulated (Miksa, 1998). In this context we have the complex relationship between the indexing language, the text being described, and the already described collection (Tennis, 2007). It is possible to imagine indexers placing a document into an outdated class because it is the one they have already used for their collection. However, doing this erases the semantics of the present indexing language. Given this range of choice in the context of indexing language change, the question arises: what does this look like in practice? How often does this occur? Further, what does this phenomenon tell us about subjects in indexing languages? Does the practice we observe in reaction to indexing language change provide us with evidence of conceptual models of subjects and subject creation? If it is incomplete but gets us close, what evidence do we still require?

Relevance:

30.00%

Publisher:

Abstract:

In the context of the International Society for Knowledge Organization, we often consider knowledge organization systems to comprise catalogues, thesauri, and bibliothecal classification schemes, that is, schemes for library arrangement. In recent years we have added ontologies and folksonomies to our sphere of study. In all of these cases it seems we are concerned with improving access to information. We want a good system.

And much of the literature from the late 19th into the late 20th century took that as its goal: to analyze the world of knowledge and the structures of representing it as its objects of study; again, with the ethos of creating a good system. In most cases this meant we had to be correct in our assertions about the universe of knowledge and the relationships that obtain between its constituent parts. As a result, much of the literature of knowledge organization is prescriptive, instructing designers and professionals how to build or use the schemes correctly, that is, to maximize redundant success in accessing information.

In 2005, there was a turn in some of the knowledge organization literature. It has been called the descriptive turn, in relation to the otherwise prescriptive efforts of researchers in KO. And it is the descriptive turn that makes me think of context, languages, and cultures in knowledge organization, the theme of this year's conference.

Work in the descriptive turn questions the basic assumptions about what we want to do when we create, implement, maintain, and evaluate knowledge organization systems. Following on these assumptions, researchers have examined a wider range of systems and questioned the motivations behind system design. Online websites that allow users to curate their own collections are one such addition, for example Pinterest (cf. Feinberg, 2011). However, researchers have also looked back at other lineages of organizing to compare forms and functions: for example, encyclopedias, catalogues raisonnés, archival description, and winter counts designed and used by Native Americans.

In the case of online curated collections, Melanie Feinberg has started to examine what she calls the craft of curation. In this line of research, purpose, voice, and rhetorical stance surface as design considerations. For example, in the case of Pinterest, users are able and encouraged to create boards. The process of putting together these boards is an act of curation in contemporary terminology. Describing this craft comes from the descriptive turn in KO.

In the second case, when researchers in the descriptive turn look back at older and varied examples of knowledge organization systems, we are looking for a full inventory of intent and inspiration for future design. Encyclopedias, catalogues raisonnés, archival description, and works of knowledge organization in other cultures provide a rich world for the descriptive turn, and researchers have availed themselves of it.

Relevance:

30.00%

Publisher:

Abstract:

Texture has good discriminative potential that complements radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), extracts texture across several bands at once, thereby exploiting additional information ignored until now in traditional textural analyses: the interdependence between bands. However, this new tool has not yet been tested on multi-source images, a use that may prove of great interest considering, for example, all the textural richness that radar can add to optical imagery through data combination. This study therefore completes the validation initiated by Safia (2014) by applying the CTU to an optical-radar image pair. Textural analysis of this dataset produced a "colour texture" image. The textural bands thus created were again combined with the original optical bands before being fed into a land-cover classification process in eCognition. The same classification procedure (but without CTU) was applied, respectively, to the optical data, then the radar data, and finally the optical-radar combination. In addition, the CTU computed on the optical data alone (mono-source) was compared with the one derived from the optical-radar pair (multi-source). Analysis of the separating power of these different bands using histograms, together with the confusion matrix tool, compares the performance of these different configurations and parameters. These comparisons show the CTU, and notably the multi-source CTU, to be the most discriminative criterion; its presence adds variability to the image, allowing sharper segmentation and a classification that is both more detailed and more accurate. Indeed, accuracy rises from 0.50 with the optical image to 0.74 with the CTU image, while confusion falls from 0.30 (optical) to 0.02 (CTU).
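For readers unfamiliar with texture units, the single-band version underlying the CTU can be sketched as follows; this is in the spirit of the classical texture spectrum approach, and the multiband CTU of Safia and He generalises the comparison across spectral bands:

```python
def texture_unit(window):
    """Base-3 texture unit number of a 3x3 window (list of 3 rows of 3 values).
    Each of the 8 neighbours is coded 0 (below centre), 1 (equal) or 2 (above),
    and the codes are read clockwise into a single integer in 0 .. 3**8 - 1."""
    c = window[1][1]
    neighbours = [window[0][0], window[0][1], window[0][2], window[1][2],
                  window[2][2], window[2][1], window[2][0], window[1][0]]
    code = 0
    for v in neighbours:
        e = 0 if v < c else (1 if v == c else 2)
        code = code * 3 + e
    return code
```

Sliding this window over an image yields a texture band; histograms of these codes (the texture spectrum) are what give the index its discriminative power.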

Relevance:

20.00%

Publisher:

Abstract:

Diffusion equations that use time fractional derivatives are attractive because they describe a wealth of problems involving non-Markovian random walks. The time fractional diffusion equation (TFDE) is obtained from the standard diffusion equation by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1). Developing numerical methods for solving fractional partial differential equations is a new research field, and the theoretical analysis of the numerical methods associated with them is not fully developed. In this paper, an explicit conservative difference approximation (ECDA) for the TFDE is proposed. We give a detailed analysis of this ECDA and generate discrete models of random walk suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation. The stability and convergence of the ECDA for the TFDE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
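As an illustration of this class of explicit schemes, the sketch below implements a well-known Yuste-Acedo-type Grünwald-Letnikov discretisation of D_t^α u = K u_xx with zero Dirichlet boundaries; it is shown for orientation only, and the ECDA analysed in the paper differs in its details:

```python
def gl_weights(order, n):
    """Grünwald-Letnikov coefficients w_k = (-1)^k * C(order, k),
    via the standard recurrence w_k = w_{k-1} * (1 - (order + 1) / k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (order + 1.0) / k))
    return w

def tfde_explicit(u0, alpha, K, dt, dx, steps):
    """Explicit march for the TFDE: the update at step n sums the discrete
    Laplacian over the whole history, weighted by GL coefficients of order
    1 - alpha (the memory of the non-Markovian walk)."""
    S = K * dt**alpha / dx**2          # must be small enough for stability
    w = gl_weights(1.0 - alpha, steps + 1)
    hist = [list(u0)]
    J = len(u0)
    for n in range(steps):
        u = hist[-1]
        new = list(u)
        for j in range(1, J - 1):
            acc = 0.0
            for k in range(n + 1):
                um = hist[n - k]
                acc += w[k] * (um[j + 1] - 2.0 * um[j] + um[j - 1])
            new[j] = u[j] + S * acc
        new[0] = new[-1] = 0.0          # zero Dirichlet boundaries
        hist.append(new)
    return hist[-1]
```

The full-history sum is what distinguishes the fractional scheme from the classical explicit scheme, whose update depends on the previous time level only.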

Relevance:

20.00%

Publisher:

Abstract:

The time for conducting Preventive Maintenance (PM) on an asset is often determined using a predefined alarm limit based on trends of a hazard function. In this paper, the authors propose using both hazard and reliability functions to improve the accuracy of the prediction, particularly when the failure characteristic of the asset's whole life is modelled using different failure distributions for the different stages of the asset's life. The proposed method is validated using simulations and case studies.
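The combined criterion can be sketched for a two-parameter Weibull life model: trigger PM at the earliest time when either the hazard rate crosses an alarm limit or reliability falls below a floor. The Weibull model and the limit values here are assumptions for illustration, not the paper's case-study parameters:

```python
import math

def weibull_hazard(t, beta, eta):
    """Hazard rate h(t) = (beta/eta) * (t/eta)^(beta-1); increases with t
    when beta > 1 (wear-out stage)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def weibull_reliability(t, beta, eta):
    """Reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def pm_due_time(beta, eta, hazard_limit, reliability_floor, dt=1.0, t_max=10_000.0):
    """Earliest time at which either alarm criterion fires, or None."""
    t = dt
    while t <= t_max:
        if weibull_hazard(t, beta, eta) >= hazard_limit or \
           weibull_reliability(t, beta, eta) <= reliability_floor:
            return t
        t += dt
    return None
```

Using both functions lets whichever criterion fires first drive the alarm; with per-stage distributions, each life stage would simply substitute its own hazard and reliability functions into the same check.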