19 results for Information Technologies Classification

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

90.00%

Publisher:

Abstract:

This study examines the relation between selection power and selection labor for information retrieval (IR). It is the first part of the development of a labor theoretic approach to IR. Existing models for evaluation of IR systems are reviewed and the distinction of operational from experimental systems partly dissolved. The often covert, but powerful, influence from technology on practice and theory is rendered explicit. Selection power is understood as the human ability to make informed choices between objects or representations of objects and is adopted as the primary value for IR. Selection power is conceived as a property of human consciousness, which can be assisted or frustrated by system design. The concept of selection power is further elucidated, and its value supported, by an example of the discrimination enabled by index descriptions, the discovery of analogous concepts in partly independent scholarly and wider public discourses, and its embodiment in the design and use of systems. Selection power is regarded as produced by selection labor, with the nature of that labor changing with different historical conditions and concurrent information technologies. Selection labor can itself be decomposed into description and search labor. Selection labor and its decomposition into description and search labor will be treated in a subsequent article, in a further development of a labor theoretic approach to information retrieval.

Relevance:

80.00%

Publisher:

Abstract:

This paper examines the relation between technical possibilities, liberal logics, and the concrete reconfiguration of markets. It focuses on the enrolling of innovations in communication and information technologies into the markets traditionally dominated by stock exchanges. With the development of capacities to trade on-screen, the power of incumbent market makers has been challenged as a less stable array of competing quasi-public and private marketplaces emerges. Developing a case study of the Toronto Stock Exchange, I argue that narrative emphasis on the performative power of sociotechnical innovations, the deterritorialisation of financial relations, and the erosion of state capacities needs qualification. A case is made for the importance of developing an understanding of: the spaces of encounter between emerging social technologies and property rights, rules of exchange, and structures of governance; and the interplay of orderings of different institutional composition and spatial reach in the reconfiguration of market architectures. Only then can a better grasp be gained of the evolving dynamics between making markets, the regulatory powers of the state, and their delimitations.

Relevance:

80.00%

Publisher:

Abstract:

Advances in computational and information technologies have facilitated the acquisition of geospatial information for regional and national soil and geology databases. These have been compiled for a range of purposes, from geological and soil baseline mapping to economic prospecting and land resource assessment, but have become increasingly used for forensic purposes. On the question of the provenance of a questioned sample, the geologist or soil scientist will invariably draw on prior expert knowledge and available digital map and database sources in a ‘pseudo Bayesian’ approach. The context of this paper is the debate on whether existing (digital) geology and soil databases are indeed useful and suitable for forensic inferences. Published and new case studies are used to explore issues of completeness, consistency, compatibility and applicability in relation to the use of digital geology and soil databases in environmental and criminal forensics. One key theme that emerges is that, although databases can be neither exhaustive nor precise enough to portray spatial variability at the crime-scene scale, when coupled with expert knowledge they play an invaluable role in providing background or reference material in a criminal investigation. Moreover, databases can offer an independent control set of samples.
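
The ‘pseudo Bayesian’ reasoning described above can be made concrete with a toy calculation. The sketch below is purely illustrative: the map units, priors and likelihoods are hypothetical placeholders, not values from any real database.

```python
# Toy 'pseudo Bayesian' provenance update. All map units, priors and
# likelihoods below are hypothetical, not values from any real database.

# Prior that the questioned sample came from each mapped unit,
# e.g. taken as the areal proportion of each unit in the search region.
prior = {"basalt_till": 0.50, "alluvium": 0.30, "peat": 0.20}

# Likelihood of the observed soil property (say, a mineralogy match)
# under each unit, drawn from database attributes and expert knowledge.
likelihood = {"basalt_till": 0.70, "alluvium": 0.20, "peat": 0.05}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnorm = {u: prior[u] * likelihood[u] for u in prior}
total = sum(unnorm.values())
posterior = {u: p / total for u, p in unnorm.items()}

for unit, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{unit}: {p:.2f}")
```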

Relevance:

80.00%

Publisher:

Abstract:

Free-roaming dogs (FRD) represent a potential threat to the quality of life in cities from an ecological, social and public health point of view. One of the most urgent concerns is the role of uncontrolled dogs as reservoirs of infectious diseases transmittable to humans, above all rabies. An estimate of the FRD population size and characteristics in a given area is the first step for any relevant intervention programme. Direct count methods are still prominent because of their non-invasive approach, and information technologies can support such methods by facilitating data collection and allowing for more efficient data handling. This paper presents a new framework for data collection using a topological algorithm implemented as an ArcScript in ESRI® ArcGIS software, which allows for a random selection of the sampling areas. It also supplies a mobile phone application for Android® operating system devices which integrates the Global Positioning System (GPS) and Google Maps™. The potential of such a framework was tested in two Italian regions. Coupling innovative technological solutions with common counting methods facilitates data collection and transcription. It also paves the way for future applications, which could support dog population management systems.
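
As an illustration of the sampling step, the sketch below randomly selects survey cells over a gridded study area and expands the resulting counts into a crude population estimate. It is a minimal stand-in, assuming a square grid, equal inclusion probabilities and made-up counts; the published framework instead uses a topological ArcScript algorithm and a GPS-enabled Android application.

```python
# Minimal sketch: random selection of sampling cells for a free-roaming
# dog (FRD) direct count, with a simple design-based population estimate.
# The grid layout and counts are hypothetical.
import random

random.seed(42)

n_rows, n_cols = 10, 10            # study area divided into 100 cells
all_cells = [(r, c) for r in range(n_rows) for c in range(n_cols)]

sample_size = 15                   # cells actually surveyed
sampled = random.sample(all_cells, sample_size)

# Field counts per sampled cell (would come from the GPS-enabled app).
counts = {cell: random.randint(0, 8) for cell in sampled}

# Expansion estimator: mean count per sampled cell scaled up to all
# cells (assumes every cell had the same chance of being sampled).
mean_count = sum(counts.values()) / sample_size
estimate = mean_count * len(all_cells)
print(f"Estimated FRD population: {estimate:.0f}")
```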

Relevance:

40.00%

Publisher:

Abstract:

Background and aims: Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging.

Materials and methods: The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: ‘semi-structured’ and ‘unstructured’. The second technique was developed using the open source language engineering framework GATE and aimed at predicting chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested on sets of 635 and 163 manually classified or annotated reports, respectively, from the Northern Ireland Cancer Registry.

Results: The best result of 99.4% accuracy – which included only one semi-structured report predicted as unstructured – was produced by the layout classifier with the k-nearest-neighbour algorithm, using the binary term occurrence word vector type with stopword filter and pruning. For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports, precision ranged from 0.97 to 0.94 and recall from 0.92 to 0.83, while for unstructured reports precision ranged from 0.91 to 0.64 and recall from 0.68 to 0.41. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured ones.
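
As a rough illustration of the winning layout-classifier configuration, the sketch below uses scikit-learn in place of RapidMiner: binary term-occurrence vectors with a stopword filter feeding a k-nearest-neighbour classifier. The report snippets and labels are hypothetical placeholders, not Registry data.

```python
# Sketch of the layout-classification step with scikit-learn standing in
# for RapidMiner: binary term-occurrence vectors, English stopwords
# removed, k-nearest-neighbour classifier. Texts/labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

reports = [
    "MORPHOLOGY: ductal carcinoma. SIZE: 22 mm. ER: positive.",    # semi-structured
    "The specimen shows an invasive tumour measuring about 2 cm",  # unstructured
    "RECEPTOR STATUS: PR negative. NODES: 3 of 12 positive.",      # semi-structured
    "sections reveal carcinoma with no nodal involvement seen",    # unstructured
]
labels = ["semi-structured", "unstructured", "semi-structured", "unstructured"]

# binary=True mimics the 'binary term occurrence' word vector type.
pipeline = make_pipeline(
    CountVectorizer(binary=True, stop_words="english"),
    KNeighborsClassifier(n_neighbors=1),
)
pipeline.fit(reports, labels)

print(pipeline.predict(["TUMOUR SIZE: 15 mm. ER: negative."]))
```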

Conclusions: These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.

Relevance:

30.00%

Publisher:

Abstract:

Selection power is taken as the fundamental value for information retrieval systems. Selection power is regarded as produced by selection labor, which itself separates historically into description and search labor. As forms of mental labor, description and search labor participate in the conditions for labor and for mental labor. Concepts and distinctions applicable to physical and mental labor are indicated, introducing the necessity of labor for survival, the idea of technology as a human construction, and the possibility of the transfer of human labor to technology. Distinctions specific to mental labor, particularly between semantic and syntactic labor, are introduced. Description labor is exemplified by cataloging, classification, and database description, can be more formally understood as the labor involved in the transformation of objects for description into searchable descriptions, and is also understood to include interpretation. The costs of description labor are discussed. Search labor is conceived as the labor expended in searching systems. For both description and search labor, there has been a progressive reduction in direct human labor, with its syntactic aspects transferred to technology, effectively compelled by the high relative costs of direct human labor compared to machine processes.

Relevance:

30.00%

Publisher:

Abstract:

Purpose of review: The aim of this article is to summarize the latest information on microbicide formulations for prevention of sexual transmission of HIV infection in women. Recent findings: Although early microbicide formulations were conventionally coitally dependent gel products, new technologies are being developed for vaginal delivery of anti-HIV agents. Intravaginal rings for delivery of microbicides, for example, are being developed and evaluated clinically. Safety and acceptability data are available for many microbicide gels and for one microbicide intravaginal ring. Other microbicide formulations in development for once daily or other vaginal administration strategies include films, tablets, and ovules. Various microbicide formulations for rectal administration are also in development. Summary: New microbicide formulations in development are addressing many of the issues with the original gels such as coital dependency, frequency of use, acceptability, compliance, cost, and adaptability to large-scale production. All of these dosage forms are promising options for safe, effective, and acceptable microbicide products.

Relevance:

30.00%

Publisher:

Abstract:

Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples taken at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI.
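
A minimal sketch of the best-performing pipeline, assuming scikit-learn's LinearDiscriminantAnalysis as the Fisher discriminant analysis step and synthetic marker concentrations in place of the study data:

```python
# FDA preprocessing (via sklearn's LinearDiscriminantAnalysis) followed
# by logistic regression to estimate the probability of AMI.
# All data below are synthetic stand-ins, not the study data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)  # 1 = AMI, 0 = non-AMI
# Columns: cTnI, CKMB, myoglobin, FABP, GPBB (synthetic concentrations).
X = rng.normal(0.0, 1.0, (n, 5)) + y[:, None] * rng.normal(1.0, 0.2, 5)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    LinearDiscriminantAnalysis(n_components=1),  # Fisher discriminant projection
    LogisticRegression(),
)
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]  # estimated probability of AMI
print("accuracy:", accuracy_score(y_te, (prob > 0.5).astype(int)))
print("Brier score:", brier_score_loss(y_te, prob))
```

Note that brier_score_loss returns the raw (unnormalised) Brier score, where lower is better; the paper reports a normalised variant.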