878 results for Corpus callosum agenesis


Relevance:

10.00%

Publisher:

Abstract:

At the historical and conceptual confluences of modernity, technology, and the "human", the texts in our corpus critically negotiate and interrogate the material and symbolic possibilities of the prosthesis, in its phenomenological and speculative aspects: on the subjectivist and conceptualist side, with a philosophy of consciousness, through Merleau-Ponty; and on the other, with the epistemologists of the body and historians of knowledge, Canguilhem and Foucault. The promising trope of the prosthesis shapes the discursive and non-discursive formations concerning the reconstruction of bodies, where technology becomes the correlate of identity. Technology becomes humanized through contact with man and, by revealing a superior hybridity, engulfs the human in the same stroke. This work in the sociology of science (Latour, 1989), or anthropology of science (Hakken, 2001), or biocultural anthropology (Andrieu, 1993; Andrieu, 2006; Andrieu, 2007a) offers itself as an example of the potential contribution that biological and cultural anthropology can make to reconstructive medicine, and that reconstructive medicine can make to the plasticity of man; biological anthropology concerns us in the biological transformation of the human body through the tool of technology, both in its history of mechanical and plastic reconstruction and in its project of bionic augmentation. We establish an archaeological continuity, in Foucauldian terminology, between the two practices. We question assumptions about the relations between nature and culture, and between biology and social context, and we present a definitional approach to technology, the cornerstone of our theoretical work. The trope of technology, as an adaptive tool of culture in the service of nature, effects a semantic shift by placing itself in the service of a biology to be improved. One of the keys to our research on the augmentation of the functions and aesthetics of the human body lies in the very redefinition of these relations, and in the impact of the interpenetration of reality and imagination on the construction of the scientific object and on the transformation of the human body. In order to grasp what is at stake in the discourse on the "auto-evolution" of bodies, evolutionary theories are addressed, although they are not our speciality. Within the framework of auto-evolution and the bionic augmentation of man, the cultural somation of the body is exercised through the use of biotechnologies, in epistemological rupture with Darwinian thought, although the act of evolutionary hybridization remains inscribed in a design of bionic/genetic maximization of the human body. We explore the currents of cybernetic thought in their actions of biological transformation of the human body, and in the performativity of mutilations. Technology and techniques thus appear inseparable from science and its social constructionism.

Relevance:

10.00%

Publisher:

Abstract:

In this paper I present an analysis of the language used by the National Endowment for Democracy (NED) on its website (NED, 2008). The specific focus of the analysis is the NED's high usage of the word "should", revealed by computer-assisted corpus analysis using Leximancer. Typically we use the word "should" to propose specific courses of action for ourselves and others; it is a marker of obligation and "oughtness". In other words, its systematic institutional use can be read as a statement of ethics, of how the NED thinks the world ought to behave. As an ostensibly democracy-promoting institution, and one with a clear agenda of implementing American foreign policy, the NED's ethics are worth understanding. Analysis reveals a pattern of grammatical metaphor in which "should" is often deployed counterintuitively, and sometimes ambiguously, as a truth-making tool rather than one for proposing action. The effect is to present the NED's imperatives for action as matters of fact rather than as ethical or obligatory claims.
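
Leximancer is a proprietary tool and its internals are not described in this abstract; the sketch below illustrates, in plain Python, the kind of frequency-and-concordance pass such an analysis rests on. The file name corpus.txt and the context width are placeholder assumptions.

```python
import re
from collections import Counter

def kwic(text, keyword="should", width=5):
    """Count a keyword and collect keyword-in-context windows around it."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    hits = [
        " ".join(tokens[max(0, i - width): i + width + 1])
        for i, tok in enumerate(tokens)
        if tok == keyword
    ]
    return counts[keyword], hits

# corpus.txt is a placeholder for the text scraped from the website
with open("corpus.txt", encoding="utf-8") as f:
    n, contexts = kwic(f.read())
print(f"'should' occurs {n} times")
for line in contexts[:10]:
    print(line)
```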

Relevance:

10.00%

Publisher:

Abstract:

Forensic imaging has been facing scalability challenges for some time. As disk capacity growth continues to outpace storage IO bandwidth, the demands placed on storage and time are ever increasing. Data reduction and de-duplication technologies are now commonplace in the enterprise space, and are potentially applicable to forensic acquisition. Using the new AFF4 forensic file format we employ a hash-based compression scheme to leverage an existing corpus of images, reducing both acquisition time and storage requirements. This paper additionally describes some of the recent evolution of the AFF4 file format that makes an efficient implementation of hash-based imaging a reality.
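
The AFF4 mechanisms themselves are not detailed in this abstract; the following is a minimal sketch of the general hash-based deduplication idea it describes: blocks whose hashes already appear in a corpus of prior images are stored as references rather than re-written. The block size, the in-memory hash set, and the output representation are illustrative assumptions.

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative block size

def image_with_dedup(device_path, known_hashes, output):
    """Acquire a device block by block, writing raw bytes only for
    blocks whose SHA-256 digest is not already in the existing corpus."""
    with open(device_path, "rb") as src:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest in known_hashes:
                output.append(("ref", digest))   # reference previously stored data
            else:
                known_hashes.add(digest)
                output.append(("data", block))   # store the new block itself
    return output
```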

Relevance:

10.00%

Publisher:

Abstract:

This thesis introduces the problem of conceptual ambiguity, or Shades of Meaning (SoM), that can exist around a term or entity. As an example, consider Ronald Reagan, the former president of the USA: many aspects of him are captured in text, such as the Russian missile deal, the Iran-contra deal and others. Simply finding documents containing the word "Reagan" will return results that cover many different shades of meaning related to "Reagan". Instead it may be desirable to retrieve results around a specific shade of meaning of "Reagan", e.g., all documents relating to the Iran-contra scandal. This thesis investigates computational methods for identifying shades of meaning around a word or concept. The problem is related to word sense ambiguity, but is more subtle, depending less on the particular syntactic structures associated with an instance of the term and more on the semantic contexts around it. A particularly noteworthy difference from typical word sense disambiguation is that the shades of a concept are not known in advance; it is up to the algorithm itself to ascertain these subtleties. It is the key hypothesis of this thesis that reducing the number of dimensions in the representation of concepts is a key part of reducing sparseness, and thus also crucial in discovering their SoM within a given corpus.
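
The thesis's own algorithms are not reproduced in this abstract; the sketch below only illustrates the hypothesis as stated, clustering the contexts in which a term occurs after reducing the dimensionality of their vector representations (scikit-learn; the dimensionality and number of shades are placeholder parameters).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def shades_of_meaning(contexts, n_dims=50, n_shades=3):
    """Cluster the text windows surrounding a term, after reducing the
    dimensionality of their tf-idf vectors to combat sparseness."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(contexts)
    reduced = TruncatedSVD(n_components=n_dims).fit_transform(tfidf)
    return KMeans(n_clusters=n_shades, n_init=10).fit_predict(reduced)

# contexts would be text windows around occurrences of, e.g., "Reagan";
# contexts sharing a cluster label share one shade of meaning
```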

Relevance:

10.00%

Publisher:

Abstract:

Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result, Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions.

Document clustering is one of the many tools in the IR toolbox, and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information; if these groups are misleading, then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest: an algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool.

Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents, and dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results.

Evaluating document clustering is a difficult task. Intrinsic measures of quality, such as distortion, only indicate how well an algorithm minimised a similarity function in a particular vector space; intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a "ground truth" solution, which allows comparison between different approaches. As the "ground truth" is created by humans, it can suffer from the fact that not every human interprets a topic in the same manner: whether a document belongs to a particular topic or not can be subjective.
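
As a concrete instance of the extrinsic evaluation discussed above, the sketch below computes cluster purity against a human-labelled ground truth; the toy labels are illustrative only.

```python
from collections import Counter

def purity(cluster_labels, truth_labels):
    """Extrinsic quality measure: the fraction of documents assigned to
    the majority ground-truth class of their cluster."""
    clusters = {}
    for c, t in zip(cluster_labels, truth_labels):
        clusters.setdefault(c, []).append(t)
    majority = sum(Counter(ts).most_common(1)[0][1] for ts in clusters.values())
    return majority / len(cluster_labels)

# toy example: five documents, two clusters, two ground-truth topics
print(purity([0, 0, 0, 1, 1], ["a", "a", "b", "b", "b"]))  # 0.8
```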

Relevance:

10.00%

Publisher:

Abstract:

Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account cases where a term is explicitly negated. Medical data, by its nature, contains a high frequency of negated terms, e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance, and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains, such as medical data, where negation is prevalent.
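
The language models studied in the paper are not specified in this abstract; the sketch below only illustrates the kind of negation handling at issue, marking terms that fall in the scope of common negation triggers. The trigger list and window size are illustrative assumptions, not the paper's method.

```python
import re

NEGATION_TRIGGERS = {"no", "not", "without", "denies", "denied"}
SCOPE = 4  # assumed scope: tokens after a trigger are treated as negated

def mark_negated(text):
    """Tag tokens that fall within a fixed window after a negation trigger."""
    tokens = re.findall(r"[a-z]+", text.lower())
    marked, scope_left = [], 0
    for tok in tokens:
        if tok in NEGATION_TRIGGERS:
            scope_left = SCOPE
            marked.append(tok)
        elif scope_left > 0:
            marked.append("NEG_" + tok)
            scope_left -= 1
        else:
            marked.append(tok)
    return marked

print(mark_negated("review of systems showed no chest pain or shortness of breath"))
```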

Relevance:

10.00%

Publisher:

Abstract:

In computational linguistics, information retrieval and applied cognition, words and concepts are often represented as vectors in high-dimensional spaces computed from a corpus of text. These high-dimensional spaces are often referred to as Semantic Spaces. We describe a novel and efficient approach to computing these semantic spaces via the use of complex-valued vector representations. We report on the practical implementation of the proposed method and some associated experiments. We also briefly discuss how the proposed system relates to previous theoretical work in Information Retrieval and Quantum Mechanics, and how the notions of probability, logic and geometry are integrated within a single Hilbert space representation. In this sense the proposed system has more general application and gives rise to a variety of opportunities for future research.
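
The paper's construction is not given in this abstract; purely as an illustration of how complex-valued vectors can encode corpus co-occurrence, the sketch below uses random unit-phase index vectors in a random-indexing-style scheme. The dimensionality, window size, and similarity measure are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 256  # illustrative dimensionality

def index_vector():
    """A random complex vector with unit-magnitude components (random phases)."""
    return np.exp(2j * np.pi * rng.random(DIM))

def semantic_space(docs, window=2):
    """Accumulate each word's semantic vector from the index vectors
    of its neighbours within a sliding context window."""
    index, memory = {}, {}
    for doc in docs:
        words = doc.lower().split()
        for w in words:
            index.setdefault(w, index_vector())
            memory.setdefault(w, np.zeros(DIM, dtype=complex))
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    memory[w] += index[words[j]]
    return memory

def similarity(a, b):
    """Cosine-style similarity on complex vectors via the Hermitian inner product."""
    return float(np.real(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b)))
```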

Relevance:

10.00%

Publisher:

Abstract:

Information Overload and Mismatch are two fundamental problems affecting the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address these problems, neither approach alone provides a satisfactory solution. This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter large volumes of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. The experimental results based on the RCV1 corpus show that the proposed two-stage filtering model significantly outperforms both term-based and pattern-based information filtering models.
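
The rough analysis and pattern taxonomy models are not specified in this abstract; the sketch below only shows the two-stage shape described, with rough_score and pattern_score as placeholder scoring functions and the threshold and cut-off as assumptions.

```python
def two_stage_filter(documents, rough_score, pattern_score,
                     rough_threshold=0.2, top_k=20):
    """Stage 1: a cheap term-based screen discards clearly irrelevant
    documents (the overload problem). Stage 2: a richer pattern-based
    scorer ranks the survivors against the user's information need
    (the mismatch problem)."""
    survivors = [d for d in documents if rough_score(d) >= rough_threshold]
    return sorted(survivors, key=pattern_score, reverse=True)[:top_k]
```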

Relevance:

10.00%

Publisher:

Abstract:

It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences, because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches; however, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough on this difficulty. The technique discovers both positive and negative patterns in text documents as higher-level features, in order to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machines, and pattern-based methods, on precision, recall and F measures.
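
The paper's actual weighting function is not reproduced in this abstract; the sketch below is only a loose illustration of pushing pattern-level evidence down to term-level weights, crediting a term by its support across positive patterns and penalizing it by negative-pattern support. The formula and the penalty factor are invented for illustration.

```python
from collections import defaultdict

def deploy_term_weights(pos_patterns, neg_patterns, penalty=0.5):
    """Derive low-level term weights from higher-level patterns, where
    each pattern is a (set_of_terms, support) pair; spreading support
    over a pattern's length is a crude stand-in for specificity."""
    weights = defaultdict(float)
    for terms, support in pos_patterns:
        for t in terms:
            weights[t] += support / len(terms)
    for terms, support in neg_patterns:
        for t in terms:
            weights[t] -= penalty * support / len(terms)
    return dict(weights)

w = deploy_term_weights([({"space", "mission"}, 3), ({"space"}, 5)],
                        [({"office", "space"}, 2)])
```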

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter large volumes of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experiments have been conducted to compare the proposed two-stage filtering (T-SM) model with other possible "term-based + pattern-based" or "term-based + term-based" IF models. The results based on the RCV1 corpus show that the T-SM model significantly outperforms the other types of "two-stage" IF models.

Relevance:

10.00%

Publisher:

Abstract:

Relevance Feedback (RF) has been proven very effective for improving retrieval accuracy. Adaptive information filtering (AIF) technology has benefited from the improvements achieved in all of the tasks involved over the last decades. A difficult problem in AIF is how to update the system with new feedback efficiently and effectively; in current feedback methods, the updating processes focus on updating system parameters. In this paper, we develop a new approach, Adaptive Relevance Features Discovery (ARFD). It automatically updates the system's knowledge based on a sliding window over positive and negative feedback, in order to solve a nonmonotonic problem efficiently. Some of the new training documents are selected using the knowledge that the system has already obtained; specific features are then extracted from the selected training documents. Different methods have been used to merge and revise the weights of features in a vector space. The new model is designed for Relevance Features Discovery (RFD), a pattern-mining-based approach which uses negative relevance feedback to improve the quality of the features extracted from positive feedback. Learning algorithms are also proposed to implement this approach on Reuters Corpus Volume 1 and TREC topics. Experiments show that the proposed approach works efficiently and achieves encouraging performance.
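
ARFD's merge-and-revise methods are not given in this abstract; the sketch below only illustrates the sliding-window idea, re-deriving feature weights from the most recent positive and negative feedback. The window size, the extraction callback, and the additive revision rule are assumptions.

```python
from collections import deque

class SlidingFeedbackUpdater:
    """Keep only the most recent feedback documents and re-derive
    feature weights from that window (positive minus negative)."""

    def __init__(self, extract_features, window_size=50):
        self.extract = extract_features        # doc -> {feature: weight}
        self.window = deque(maxlen=window_size)

    def update(self, doc, relevant):
        """Add one feedback document; old feedback falls out of the window."""
        self.window.append((doc, relevant))
        weights = {}
        for d, rel in self.window:
            sign = 1.0 if rel else -1.0
            for feat, w in self.extract(d).items():
                weights[feat] = weights.get(feat, 0.0) + sign * w
        return weights
```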

Relevance:

10.00%

Publisher:

Abstract:

In this paper we extend the concept of speaker annotation within a single recording, or speaker diarization, to a collection-wide approach we call speaker attribution. Speaker attribution is thus the task of clustering the putatively homogeneous inter-session clusters obtained using diarization according to common cross-recording identities. The result of attribution is a collection of spoken audio across multiple recordings attributed to speaker identities. In this paper, an attribution system is proposed using mean-only MAP adaptation of a combined-gender UBM to model clusters from a perfect diarization system, as well as a JFA-based system with session variability compensation. The normalized cross-likelihood ratio is calculated for each pair of clusters to construct an attribution matrix, and the complete-linkage algorithm is employed to cluster the inter-session clusters. A matched cluster purity and coverage of 87.1% was obtained on the NIST 2008 SRE corpus.
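
The UBM/JFA modelling and NCLR scoring are beyond a short sketch; the snippet below illustrates only the final step, complete-linkage clustering of diarized clusters from a pairwise distance matrix (SciPy). The distances and the stopping threshold are placeholder values.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# placeholder pairwise distances between four inter-session clusters
# (in practice these could be derived from cross-likelihood ratios)
D = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.8, 0.9],
              [0.9, 0.8, 0.0, 0.1],
              [0.8, 0.9, 0.1, 0.0]])

Z = linkage(squareform(D), method="complete")
speakers = fcluster(Z, t=0.5, criterion="distance")
print(speakers)  # e.g. [1 1 2 2]: the four clusters attributed to two speakers
```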

Relevance:

10.00%

Publisher:

Abstract:

This special issue presents an excellent opportunity to study applied epistemology in public policy. This is an important task because the arena of public policy is the social domain in which macro conditions for 'knowledge work' and 'knowledge industries' are defined and created. We argue that knowledge-related public policy has become overly concerned with creating the politico-economic parameters for the commodification of knowledge. Our policy scope is broader than that of Fuller (1988), who emphasizes the need for a social epistemology of science policy. We extend our focus to a range of policy documents that include communications, science, education and innovation policy (collectively called knowledge-related public policy, in acknowledgement of the fact that there is no defined policy silo called 'knowledge policy'), all of which are central to policy concerned with the 'knowledge economy' (Rooney and Mandeville, 1998). However, what we show here is that, as Fuller (1995) argues, 'knowledge societies' are not industrial societies permeated by knowledge; rather, knowledge societies are permeated by industrial values. Our analysis is informed by an autopoietic perspective. Methodologically, we approach it from a sociolinguistic position that acknowledges the centrality of language to human societies (Graham, 2000). Here, what we call 'knowledge' is posited as a social and cognitive relationship between persons operating on and within multiple social and non-social (or, crudely, 'physical') environments. Moreover, knowing, we argue, is a sociolinguistically constituted process. Further, we emphasize that the evaluative dimension of language is most salient for analysing contemporary policy discourses about the commercialization of epistemology (Graham, in press). Finally, we provide a discourse analysis of a sample of exemplary texts drawn from a 1.3 million-word corpus of knowledge-related public policy documents that we compiled from local, state, national and supranational legislatures throughout the industrialized world. Our analysis exemplifies a propensity in policy for resorting to technocratic, instrumentalist and anti-intellectual views of knowledge. We argue that what underpins these patterns is a commodity-based conceptualization of knowledge, underpinned in turn by an axiology of narrowly economic imperatives at odds with the very nature of knowledge. The commodity view of knowledge, therefore, is flawed in its ignorance of the social systemic properties of knowing.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, I show how new spaces are being prefigured for colonisation in the language of contemporary technology policy. Drawing on a corpus of 1.3 million words collected from technology policy centres throughout the world, I show the role of policy language in creating the foundations of an emergent form of political economy. The analysis is informed by principles from critical discourse analysis (CDA) and classical political economy. It foregrounds a functional aspect of language called process metaphor to show how aspects of human activity are prefigured for mass commodification by the manipulation of irrealis spaces. I also show how the fundamental element of any new political economy, the property element, is being largely ignored. The potential creation of a global space as concrete as landed property – electromagnetic spectrum – has significant ramifications for the future of social relations in any global “knowledge economy”.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a framework for evaluating information retrieval of medical records. We use the BLULab corpus, a large collection of real-world de-identified medical records. The collection has been hand-coded by clinical terminologists using the ICD-9 medical classification system. The ICD codes are used to devise queries and relevance judgements for this collection. Results of initial test runs using a baseline IR system are provided. The queries and relevance judgements are online to aid further research in medical IR. Please visit: http://koopman.id.au/med_eval.
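
The exact query-construction procedure is not given in this abstract; the sketch below only illustrates the idea of turning hand-assigned ICD-9 codes into TREC-style relevance judgements, treating each code as a topic. The qrels field layout follows the common TREC convention, and the example records are invented.

```python
def icd_qrels(records, out_path="qrels.txt"):
    """records: iterable of (doc_id, set_of_icd9_codes) pairs.
    Each distinct ICD-9 code becomes a topic; every document
    hand-coded with that code is judged relevant to it."""
    topics = {}
    for doc_id, codes in records:
        for code in codes:
            topics.setdefault(code, set()).add(doc_id)
    with open(out_path, "w") as f:
        for topic, doc_ids in sorted(topics.items()):
            for doc_id in sorted(doc_ids):
                f.write(f"{topic} 0 {doc_id} 1\n")  # topic, iteration, doc, relevance

# invented example records
icd_qrels([("doc1", {"428.0"}), ("doc2", {"428.0", "250.00"})])
```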