911 results for Automatic Query Refinement
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers, author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. Automated methods for improving the use of keyphrases are therefore needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of the algorithms into future work, but also makes comparing the results of the methods inefficient and ineffective. This paper describes work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. Term Frequency and Inverse Document Frequency are shown to be the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters News Corpus. The findings show that authors of Reuters news articles provide good keyphrases when they supply them, but that more often than not they supply none at all.
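For reference, the two best-performing scorers are straightforward to state. The sketch below shows textbook TF and IDF scoring of candidate terms; the tokenization and candidate extraction are simplified assumptions, not the implementations evaluated in the paper.

```python
# A minimal sketch of Term Frequency and Inverse Document Frequency
# scoring for keyphrase candidates (illustrative, not the paper's code).
import math
from collections import Counter

def tf_scores(doc_tokens):
    """Rank candidates by their relative frequency within the document."""
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    return {t: c / total for t, c in counts.items()}

def idf_scores(doc_tokens, corpus):
    """Weight candidates by their rarity across a corpus of documents."""
    n = len(corpus)
    return {
        t: math.log(n / (1 + sum(t in d for d in corpus)))
        for t in set(doc_tokens)
    }

doc = "query refinement improves query results".split()
corpus = [doc, "automatic summarization of documents".split()]
print(sorted(tf_scores(doc).items(), key=lambda kv: -kv[1])[:3])
```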
Abstract:
The ability to create accurate geometric models of neuronal morphology is important for understanding the role of shape in information processing. Despite a significant amount of research on automating neuron reconstruction from image stacks obtained via microscopy, in practice most data are still collected manually. This paper describes Neuromantic, an open-source system for three-dimensional digital tracing of neurites. Neuromantic reconstructions are comparable in quality to those of existing commercial and freeware systems while balancing the speed and accuracy of manual reconstruction. The combination of semi-automatic tracing, intuitive editing, and the ability to visualize large image stacks on standard computing platforms provides a versatile tool that can help address the bottleneck in the availability of reconstructions. Practical considerations for reducing the computational time and space requirements of the extended algorithm are also discussed.
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; we then use these estimates as our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
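The two-stage construction lends itself to a compact sketch. The illustration below assumes a toy Gaussian model, a uniform prior, and a linear regression for the posterior-mean estimates; none of these specifics come from the paper.

```python
# A minimal sketch of semi-automatic ABC summary statistics
# (following the idea described above, not the authors' code).
# Model, prior, and feature map are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy model: i.i.d. Gaussian data with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

def features(y):
    """Candidate transformations of the data used in the regression."""
    return np.array([y.mean(), np.median(y), y.std()])

# Stage 1: pilot simulations, then regress theta on the features so the
# fitted values approximate the posterior mean as a function of the data.
theta_pilot = rng.uniform(-5, 5, size=2000)
X = np.array([features(simulate(t)) for t in theta_pilot])
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, theta_pilot, rcond=None)

def summary(y):
    """Fitted linear predictor of the posterior mean: the ABC summary."""
    return np.concatenate([[1.0], features(y)]) @ beta

# Stage 2: rejection ABC using the learned summary statistic.
y_obs = simulate(1.5)
s_obs = summary(y_obs)
theta_prop = rng.uniform(-5, 5, size=20000)
dist = np.array([abs(summary(simulate(t)) - s_obs) for t in theta_prop])
accepted = theta_prop[dist < np.quantile(dist, 0.01)]
print("ABC posterior mean ~", accepted.mean())
```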
Abstract:
In the ten years since the first edition of this book appeared, there have been significant developments in food process engineering, notably in biotechnology and membrane applications. Advances have been made in the use of sensors for process control, and the growth of information technology and on-line computer applications continues apace. In addition, plant investment decisions are increasingly determined by quality assurance considerations and have to incorporate a greater emphasis on health and safety issues. The content of this edition has been rearranged to include descriptions of recent developments and to reflect the influence of new technology on the control and operation of automated plant. Original examples have been retained where relevant and these, together with many new illustrations, provide a comprehensive guide to good practice.
Abstract:
Causal attribution has been one of the most influential frameworks in the literature on achievement motivation, but previous studies treated achievement attribution as a relatively deliberate and effortful process. In the current study, we tested the hypothesis that people automatically attribute their achievement failure to their ability, but reduce the ability attribution in a controlled manner. To address this hypothesis, we measured participants' causal attribution beliefs for their task failure either under cognitive load (load condition) or with full attention (no-load condition). Across two studies, participants attributed task performance to their ability more in the load condition than in the no-load condition. The increased ability attribution under cognitive load further affected intrinsic motivation. These results indicate that the cognitive resources available after feedback play a crucial role in determining causal attribution beliefs, as well as achievement motivation.
Abstract:
There is evidence that automatic visual attention favors the right side. This study investigated whether this lateral asymmetry interacts with the right-hemisphere dominance for visual location processing and the left-hemisphere dominance for visual shape processing. Volunteers were tested in a location discrimination task and a shape discrimination task. The target stimuli (S2) could occur in the left or right hemifield. They were preceded by an ipsilateral, contralateral, or bilateral prime stimulus (S1). The attentional effect produced by the right S1 was larger than that produced by the left S1. This lateral asymmetry was similar between the two tasks, suggesting that the hemispheric asymmetries of visual mechanisms do not contribute to it. The finding that it was basically due to a longer reaction time to the left S2 than to the right S2 in the contralateral S1 condition suggests that the inhibitory component of attention is laterally asymmetric.
Abstract:
SPOAN is an autosomal recessive neurodegenerative disorder that was recently characterized by our group in a large inbred Brazilian family with 25 affected individuals. The condition is clinically defined by (1) congenital optic atrophy, (2) progressive spastic paraplegia with onset in infancy, and (3) progressive motor and sensory axonal neuropathy. Overall, we are now aware of 68 SPOAN patients (45 females and 23 males, ranging in age from 5 to 72 years), 44 of whom are presented here for the first time. All were born in the same geographic microregion. The 68 patients belong to 43 sibships, 40 of which exhibit parental consanguinity. Sixty-one patients were fully evaluated clinically, and 64 were included in the genetic investigation. All molecularly studied patients are homozygous for D11S1889 at 11q13. This enabled us to reduce the critical region for the SPOAN gene from 4.8 to 2.3 Mb, with a maximum two-point LOD score of 33.2 (with marker D11S987) and of 27.0 (with marker D11S1889). Three genes located in this newly defined critical region were sequenced, but no pathogenic mutation was detected. The gene responsible for SPOAN remains elusive.
Abstract:
Policy hierarchies and automated policy refinement are powerful approaches to simplifying the administration of security services in complex network environments. A crucial issue for the practical use of these approaches is ensuring the validity of the policy hierarchy: since the policy sets for the lower levels are automatically derived from the abstract policies (defined by the modeller), we must be sure that the derived policies uphold the high-level ones. This paper builds upon previous work on Model-based Management, particularly the Diagram of Abstract Subsystems approach, and goes further to propose a formal validation approach for the policy hierarchies yielded by the automated policy refinement process. We establish general validation conditions for a multi-layered policy model, i.e., necessary and sufficient conditions that a policy hierarchy must satisfy so that the lower-level policy sets are valid refinements of the higher-level policies according to the criteria of consistency and completeness. Relying upon these validation conditions and upon axioms about the representativeness of the model, two theorems are proved to ensure compliance between the resulting system behaviour and the abstract policies that are modelled.
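The abstract does not state the validation conditions themselves. Purely as a hedged illustration of what consistency and completeness conditions of this kind can look like (the notation Beh, R, and the satisfaction relation are assumptions made here, not taken from the paper):

```latex
% Illustrative formalization only; the paper states its conditions over
% its own Diagram of Abstract Subsystems model, not in this form.
% Beh(P) = set of system behaviours permitted by policy set P;
% A = abstract policy set; R(A) = its automatically refined policy set.
\[
\textbf{consistency:}\quad \mathrm{Beh}(R(A)) \subseteq \mathrm{Beh}(A)
\qquad
\textbf{completeness:}\quad \forall a \in A\ \ \exists\, r \in R(A):\ r \models a
\]
```

Informally: the refinement must permit nothing the abstract level forbids, and every abstract policy must be realized by at least one derived policy.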
Abstract:
The most significant radiation-field nonuniformity is the well-known Heel effect. This nonuniform beam has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammographic image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received by each point takes the center of the field as reference. In the digitized mammogram, the percentages of the optical density of all pixels of the analyzed image are also calculated. The Heel effect produces a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing the correlation between the two sets of data to be determined automatically. The measurements obtained with the proposed method differ from those of commercial equipment by, on average, 2.49 mm in the direction perpendicular to the anode-cathode axis and 2.02 mm parallel to it. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their true X-ray absorption. The method was evaluated with experimental data taken from known objects, but the evaluation could also be performed with clinical and digital images.
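As a rough illustration of the correction idea, the sketch below builds a simulated intensity map with the two distributions named above (Gaussian across the anode-cathode axis, logarithmic along it) and divides it out of the image. The functional forms' parameters and the normalization are illustrative assumptions, not the paper's values.

```python
# A minimal sketch of a Heel-effect correction: simulate the
# beam-intensity map and normalize each pixel by it.
import numpy as np

def heel_intensity_map(shape, sigma=400.0, k=0.15):
    """Relative beam intensity, taking the field center as reference."""
    rows, cols = shape
    y = np.arange(rows)[:, None] - rows / 2   # distance from the axis
    x = np.arange(cols)[None, :]              # position along the axis
    gauss = np.exp(-(y ** 2) / (2 * sigma ** 2))       # across the axis
    log_falloff = 1.0 - k * np.log1p(x) / np.log1p(cols)  # along the axis
    return gauss * log_falloff

def correct_heel(image):
    """Divide out the simulated nonuniformity, pixel by pixel."""
    m = heel_intensity_map(image.shape)
    return image / (m / m.max())

# Example: a flat-exposure phantom becomes flat again after correction.
flat = heel_intensity_map((512, 512)) * 1000.0
corrected = correct_heel(flat)
print(corrected.min(), corrected.max())   # ~1000 everywhere
```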
Abstract:
An entropy-based image segmentation approach is introduced and applied to color images obtained from Google Earth. Segmentation refers to the process of partitioning a digital image in order to locate different objects and regions of interest. The application to satellite images paves the way for automated monitoring of ecological catastrophes, urban growth, agricultural activity, maritime pollution, climate change, and general surveillance. Regions representing aquatic, rural, and urban areas are identified, and the accuracy of the proposed segmentation methodology is evaluated. A comparison with gray-level images revealed that color information is fundamental to obtaining an accurate segmentation.
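The abstract does not specify the entropy measure or neighbourhood used; the sketch below is one plausible reading, assuming Shannon entropy of local grey-level histograms computed per RGB channel, with illustrative window size and class thresholds.

```python
# A minimal sketch of entropy-based segmentation in the spirit of the
# abstract: classify pixels by the Shannon entropy of their local
# neighbourhood, per RGB channel. Window size, bin count, and thresholds
# are illustrative assumptions, not the paper's values.
import numpy as np

def local_entropy(channel, win=9, bins=32):
    """Shannon entropy of the grey-level histogram in each window."""
    pad = win // 2
    padded = np.pad(channel, pad, mode="reflect")
    out = np.zeros(channel.shape, dtype=float)
    for i in range(channel.shape[0]):
        for j in range(channel.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def segment(rgb_image, low=2.0, high=4.0):
    """Map summed per-channel entropy to three coarse classes
    (e.g. aquatic / rural / urban, as in the abstract)."""
    h = sum(local_entropy(rgb_image[..., c]) for c in range(3))
    return np.digitize(h, [low * 3, high * 3])   # labels 0, 1, 2
```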
Abstract:
A new occurrence of rankamaite is described at the Urubu pegmatite, Itinga municipality, Minas Gerais, Brazil. The mineral forms cream-white botryoidal aggregates of acicular to fibrous crystals, intimately associated with simpsonite, thoreaulite, cassiterite, quartz, elbaite, albite, and muscovite. The average of six chemical analyses obtained by electron microprobe is (range in parentheses, wt%): Na2O 2.08 (1.95-2.13), K2O 2.61 (2.52-2.74), Al2O3 1.96 (1.89-2.00), Fe2O3 0.01 (0.00-0.03), TiO2 0.02 (0.00-0.06), Ta2O5 81.04 (79.12-85.18), Nb2O5 9.49 (8.58-9.86), total 97.21 (95.95-101.50). The chemical formula derived from this analysis is (Na1.55K1.28)Σ2.83(Ta8.45Nb1.64Al0.89Fe3+0.01Ti0.01)Σ11.00[O25.02(OH)5.98]Σ31.00. Rankamaite is an orthorhombic "tungsten bronze" (OTB), crystallizing in the space group Cmmm. Its unit-cell parameters, refined from X-ray powder diffraction data, are: a = 17.224(3) Å, b = 17.687(3) Å, c = 3.9361(7) Å, V = 1199.1(3) Å³, Z = 2. Rietveld refinement of the powder data was undertaken using the structure of LaTa5O14 as a starting model for the rankamaite structure. The structural formula obtained from the Rietveld analyses is (Na2.21K1.26)Σ3.37(Ta9.12Nb1.30Al0.59)Σ11.00[O26.29(OH)4.71]Σ31.00. The tantalum atoms are coordinated by six and seven oxygen atoms, in the form of distorted TaO6 octahedra and TaO7 pentagonal bipyramids, respectively. Each pentagonal bipyramid shares edges with four octahedra, forming Ta5O14 units. The potassium atom is in 11-fold coordination, whereas one sodium atom is in 10-fold and the other in 12-fold coordination. Raman and infrared spectroscopy were used to investigate the room-temperature spectra of rankamaite.
Abstract:
This work describes a novel methodology for automatic contour extraction from 2D images of 3D neurons (e.g., camera lucida images and other types of 2D microscopy). Most contour-based shape analysis methods cannot be used to characterize such cells because of overlaps between neuronal processes. The proposed framework is specifically aimed at the problem of contour following even in the presence of multiple overlaps. First, the input image is preprocessed to obtain an 8-connected skeleton with one-pixel-wide branches, as well as a set of critical regions (i.e., bifurcations and crossings). Next, for each subtree, the tracking stage iteratively labels all valid pixels of branches up to a critical region, where it determines the suitable direction in which to proceed. Finally, the labeled skeleton segments are followed in order to yield the parametric contour of the neuronal shape under analysis. The system was successfully tested on several images; results from a set of three neuron images are presented here, each pertaining to a different class (alpha, delta, and epsilon ganglion cells) and containing a total of 34 crossings. The algorithm successfully traversed all of these overlaps, and the method was found to be robust even for images with close parallel segments. The proposed method may be implemented in an efficient manner, and its introduction should pave the way for more systematic application of contour-based shape analysis methods in neuronal morphology.
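The step where the tracker "determines the suitable direction in which to proceed" at a crossing can be illustrated compactly. The sketch below assumes a simple best-direction-match rule and illustrative data structures; it is not the authors' implementation.

```python
# A minimal sketch of direction-based disambiguation at a skeleton
# crossing: continue along the outgoing branch whose orientation best
# matches the incoming direction of travel.
import numpy as np

def best_continuation(incoming, candidates):
    """incoming: unit 2-vector of travel direction at the crossing.
    candidates: list of (branch_id, unit 2-vector) leaving the crossing.
    Returns the branch that deviates least from the incoming direction."""
    return max(candidates, key=lambda c: np.dot(incoming, c[1]))[0]

# Example: arriving heading right, pick the branch going most nearly right.
incoming = np.array([1.0, 0.0])
candidates = [
    ("up",    np.array([0.0, 1.0])),
    ("right", np.array([0.97, 0.26])),
    ("down",  np.array([0.0, -1.0])),
]
print(best_continuation(incoming, candidates))   # -> "right"
```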
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as is common in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and those of the template solution provided by the author of the exercise. Each solution is a geometric construction, which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves the non-trivial problem of checking student solutions, and helps to increase student motivation by providing feedback in real time.
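The input-output equivalence test described above can be sketched directly. The example below represents constructions as plain Python functions over point coordinates, which is an illustrative assumption (iGeom operates on dynamic-geometry objects); the tolerance and sampling scheme are also assumed.

```python
# A minimal sketch of checking two geometric constructions for
# equivalence: compare their outputs on sampled input configurations.
import random

def equivalent(template, student, sample_inputs, tol=1e-9):
    """Declare two constructions equivalent if their outputs agree
    (within tol) on every sampled input configuration."""
    for pts in sample_inputs:
        out_t, out_s = template(*pts), student(*pts)
        if any(abs(a - b) > tol for a, b in zip(out_t, out_s)):
            return False
    return True

# Example: "midpoint of A and B" built two different ways.
def template_mid(ax, ay, bx, by):
    return ((ax + bx) / 2, (ay + by) / 2)

def student_mid(ax, ay, bx, by):
    return (ax + (bx - ax) / 2, ay + (by - ay) / 2)

samples = [tuple(random.uniform(-10, 10) for _ in range(4))
           for _ in range(20)]
print(equivalent(template_mid, student_mid, samples))   # True
```

Sampling inputs only approximates the "same output for every input" criterion, but for dynamic-geometry constructions it gives a practical randomized test of equivalence.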