915 results for Precision and recall


Relevance: 90.00%

Publisher:

Abstract:

Auxetic materials (or metamaterials) have negative Poisson ratios (NPR) and display the unexpected properties of lateral expansion when stretched, and equal and opposing densification when compressed. Such auxetic materials are being used more frequently in the development of novel products, especially in the fields of intelligent expandable actuators, shape-morphing structures, and minimally invasive implantable devices. Although several micromanufacturing technologies have already been applied to the development of auxetic materials and devices, additional precision is needed to take full advantage of their special mechanical properties. In this study, we present a very promising approach for the development of auxetic materials and devices based on the use of deep reactive ion etching (DRIE). The process stands out for its precision and its potential applications to mass production. To our knowledge, it represents the first time this technology has been applied to the manufacture of auxetic materials with nanometric details. We take into account the present capabilities and challenges linked to the use of DRIE in the development of auxetic materials and auxetic-based devices.
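As a reminder of the property this abstract builds on, here is a minimal sketch of the standard definition of Poisson's ratio (a generic definition, not taken from the paper):

```latex
% Poisson's ratio: negative ratio of transverse to axial strain under uniaxial load.
\nu = -\frac{\varepsilon_{\mathrm{trans}}}{\varepsilon_{\mathrm{axial}}}
```

For ν < 0 the transverse and axial strains have the same sign, so stretching (positive axial strain) produces the lateral expansion, and compression the densification, described above.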

Relevance: 90.00%

Publisher:

Abstract:

Participation of two medial temporal lobe structures, the hippocampal region and the amygdala, in long-term declarative memory encoding was examined by using positron emission tomography of regional cerebral glucose. Positron emission tomography scanning was performed in eight healthy subjects listening passively to a repeated sequence of unrelated words. Memory for the words was assessed 24 hr later with an incidental free recall test. The percentage of words freely recalled then was correlated with glucose activity during encoding. The results revealed a striking correlation (r = 0.91, P < 0.001) between activity of the left hippocampal region (centered on the dorsal parahippocampal gyrus) and word recall. No correlation was found between activity of either the left or right amygdala and recall. The findings provide evidence for hippocampal involvement in long-term declarative memory encoding and for the view that the amygdala is not involved with declarative memory formation for nonemotional material.
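The reported r = 0.91 is a Pearson correlation between encoding-stage glucose activity and later recall; the standard formula (generic, not specific to this study) is:

```latex
r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}
         {\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^{2}}}
```

where x_i is the glucose activity and y_i the percentage of words recalled for subject i.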

Relevance: 90.00%

Publisher:

Abstract:

A high-resolution physical and genetic map of a major fruit weight quantitative trait locus (QTL), fw2.2, has been constructed for a region of tomato chromosome 2. Using an F2 nearly isogenic line mapping population (3472 individuals) derived from Lycopersicon esculentum (domesticated tomato) × Lycopersicon pennellii (wild tomato), fw2.2 has been placed near TG91 and TG167, which have an interval distance of 0.13 ± 0.03 centimorgan. The physical distance between TG91 and TG167 was estimated to be ≤ 150 kb by pulsed-field gel electrophoresis of tomato DNA. A physical contig composed of six yeast artificial chromosomes (YACs) and encompassing fw2.2 was isolated. No rearrangements or chimerisms were detected within the YAC contig based on restriction fragment length polymorphism analysis using YAC-end sequences and anchored molecular markers from the high-resolution map. Based on genetic recombination events, fw2.2 could be narrowed down to a region less than 150 kb between molecular markers TG91 and HSF24 and included within two YACs: YAC264 (210 kb) and YAC355 (300 kb). This marks the first time, to our knowledge, that a QTL has been mapped with such precision and delimited to a segment of cloned DNA. The fact that the phenotypic effect of the fw2.2 QTL can be mapped to a small interval suggests that the action of this QTL is likely due to a single gene. The development of the high-resolution genetic map, in combination with the physical YAC contig, suggests that the gene responsible for this QTL and other QTLs in plants can be isolated using a positional cloning strategy. The cloning of fw2.2 will likely lead to a better understanding of the molecular biology of fruit development and to the genetic engineering of fruit size characteristics.

Relevance: 90.00%

Publisher:

Abstract:

In the cerebral cortex, the small volume of the extracellular space in relation to the volume enclosed by synapses suggests an important functional role for this relationship. It is well known that there are atoms and molecules in the extracellular space that are absolutely necessary for synapses to function (e.g., calcium). I propose here the hypothesis that the rapid shift of these atoms and molecules from extracellular to intrasynaptic compartments represents the consumption of a shared, limited resource available to local volumes of neural tissue. Such consumption results in a dramatic competition among synapses for resources necessary for their function. In this paper, I explore a theory in which this resource consumption plays a critical role in the way local volumes of neural tissue operate. On short time scales, this principle of resource consumption permits a tissue volume to choose those synapses that function in a particular context and thereby helps to integrate the many neural signals that impinge on a tissue volume at any given moment. On longer time scales, the same principle aids in the stable storage and recall of information. The theory provides one framework for understanding how cerebral cortical tissue volumes integrate, attend to, store, and recall information. In this account, the capacity of neural tissue to attend to stimuli is intimately tied to the way tissue volumes are organized at fine spatial scales.

Relevance: 90.00%

Publisher:

Abstract:

This paper presents the automatic extension to other languages of TERSEO, a knowledge-based system for the recognition and normalization of temporal expressions originally developed for Spanish. TERSEO was first extended to English through the automatic translation of the temporal expressions. Then, an improved porting process was applied to Italian, where the automatic translation of the temporal expressions from English and from Spanish was combined with the extraction of new expressions from an Italian annotated corpus. Experimental results demonstrate how, while still adhering to the rule-based paradigm, the development of automatic rule translation procedures allowed us to minimize the effort required for porting to new languages. Relying on such procedures, and without any manual effort or previous knowledge of the target language, TERSEO recognizes and normalizes temporal expressions in Italian with good results (72% precision and 83% recall for recognition).
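For reference, the 72% precision and 83% recall reported for recognition are the standard set-based measures; a minimal sketch with hypothetical gold and predicted expressions (not TERSEO's actual evaluation code):

```python
# Minimal precision/recall computation over sets of recognized items.
# The example spans are hypothetical; TERSEO's real evaluation uses annotated corpora.

def precision_recall(predicted: set, gold: set) -> tuple[float, float]:
    """Return (precision, recall) for a set of predicted items vs. gold items."""
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

gold = {("doc1", "ieri"), ("doc1", "tra due giorni"), ("doc2", "il 5 maggio")}
predicted = {("doc1", "ieri"), ("doc2", "il 5 maggio"), ("doc2", "maggio")}

p, r = precision_recall(predicted, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```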

Relevance: 90.00%

Publisher:

Abstract:

In this paper a multilingual method for event ordering based on temporal expression resolution is presented. This method has been implemented through the TERSEO system, which consists of three main units: temporal expression recognition, resolution of the coreference introduced by these expressions, and event ordering. By means of this system, chronological information related to events can be extracted from document databases. This information is automatically added to the document database so that it can be used by question answering systems in cases involving temporality. The system has been evaluated, obtaining 91% precision and 71% recall. For this, a blind evaluation was carried out, guaranteeing a reliable annotation process whose agreement was measured through the kappa factor.
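The kappa factor used to measure annotation reliability is Cohen's kappa; a minimal sketch for two annotators over the same items (the labels are hypothetical, not the paper's data):

```python
from collections import Counter

def cohens_kappa(labels_a: list, labels_b: list) -> float:
    """Cohen's kappa: agreement between two annotators corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two annotators marking expressions as DATE or DURATION.
a = ["DATE", "DATE", "DURATION", "DATE", "DURATION", "DATE"]
b = ["DATE", "DURATION", "DURATION", "DATE", "DURATION", "DATE"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```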

Relevance: 90.00%

Publisher:

Abstract:

The use of 3D data in mobile robotics provides valuable information about the robot’s environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor. However, their lack of precision and the lack of texture on some surfaces suggest that other 3D sensors could be more suitable. In this work, we examine the use of two sensors: an infrared SR4000 camera and a Kinect camera. We combine the 3D data obtained by these cameras with features extracted from the corresponding 2D images, applying a Growing Neural Gas (GNG) network to the 3D data. The goal is to obtain a robust egomotion technique; the GNG network is used to reduce the sensor error. To calculate the egomotion, we test two 3D registration methods: one based on the iterative closest point (ICP) algorithm and the other on random sample consensus (RANSAC). Finally, a simultaneous localization and mapping (SLAM) method is applied to the complete sequence to reduce the global error. The error from each sensor and the mapping results of the proposed method are examined.
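As a rough sketch of the point-to-point ICP registration step mentioned above (a generic NumPy implementation, not the paper's; the GNG preprocessing and the RANSAC variant are omitted):

```python
import numpy as np

def icp_step(src: np.ndarray, dst: np.ndarray):
    """One point-to-point ICP iteration: pair each src point with its nearest
    dst point, then solve the rigid transform (R, t) aligning src to those pairs."""
    # Brute-force nearest-neighbour correspondences (a k-d tree would be used in practice).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matches = dst[d2.argmin(axis=1)]
    # Closed-form rigid alignment via SVD (Kabsch/Umeyama).
    src_c, dst_c = src - src.mean(0), matches - matches.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = matches.mean(0) - R @ src.mean(0)
    return R, t

# Toy usage: a cloud displaced by a small translation; repeated steps should
# drive the residual toward zero if ICP converges.
rng = np.random.default_rng(0)
src = rng.random((200, 3))
dst = src + np.array([0.05, 0.0, -0.03])
aligned = src.copy()
for _ in range(15):
    R, t = icp_step(aligned, dst)
    aligned = aligned @ R.T + t
print(np.abs(aligned - dst).max())  # small if ICP converged for this toy displacement
```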

Relevance: 90.00%

Publisher:

Abstract:

In this study we quantitatively reconstruct the Middle to Upper Miocene climate evolution in the southern Forecarpathian Basin (Central Paratethys area, Northwest Bulgaria) by applying the coexistence approach to 101 well-dated palynofloras isolated from three cores. The climatic evolution is compared with changes in vegetation and palaeogeography. The Middle Miocene was a period of subtropical to warm-temperate humid climate, with mean annual temperature (MAT) between 16 and 18°C and mean annual precipitation (MAP) between 1100 and 1300 mm. Throughout the Middle Miocene a trend of slightly decreasing temperatures is observed, and only small climatic fluctuations occur, presumably related to palaeogeographic reorganisations. The vegetation shows a corresponding trend, with a decrease in the abundance of palaeotropic and thermophilous elements. The Upper Miocene is characterised by more diverse climatic conditions, probably depending on palaeogeographic and global climatic transformations. The beginning of this period is marked by slight cooling and significant drying of the climate, with MAT of 13.3–17°C and MAP of 652–759 mm. After that, all palaeoclimate parameters fluctuate, displaying cycles of more humid/drier and warmer/cooler conditions, which are again well reflected in the vegetation. Our study provides a first quantitative model of the Middle–Upper Miocene palaeoclimate evolution in Southeastern Europe, characterised by relatively high precision and resolution with respect to both the climate data and the stratigraphy.

Relevance: 90.00%

Publisher:

Abstract:

A new, fast, continuous-flow technique is described for the simultaneous determination of δ³³S and δ³⁴S using SO masses 48, 49 and 50. Analysis time is ~5 min/sample with measurement precision and accuracy better than ±0.3‰. This technique, which has been set up using the IAEA Ag2S standards S-1, S-2 and S-3, allows for the fast determination of mass-dependent or mass-independent fractionation (MIF) effects in sulfide and organic sulfur samples, and possibly sulfate. Small sample sizes can be analysed directly, without chemical pre-treatment. Robustness of the technique for natural versus artificial standards was demonstrated by analysis of a Cañon Diablo troilite, which gave a δ³³S of 0.04‰ and a δ³⁴S of −0.06‰, compared to the values obtained for S-1 of 0.07‰ and −0.20‰, respectively. Two pyrite samples from a banded-iron formation from the 3710 Ma Isua Greenstone Belt were analysed using this technique and yielded MIF (Δ³³S of 2.45 and 3.31‰) comparable to pyrite previously analysed by secondary ion probe. Copyright © 2004 John Wiley & Sons, Ltd.
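For readers unfamiliar with the notation, δ values are per-mil (‰) deviations of an isotope ratio from a standard, and Δ³³S measures the departure from the mass-dependent fractionation line (standard definitions; the exact exponent used by the authors may differ):

```latex
% Per-mil delta notation for sulfur isotope ratios (x = 3 gives delta-33, x = 4 gives delta-34).
\delta^{3x}\mathrm{S} = \left(
  \frac{({}^{3x}\mathrm{S}/{}^{32}\mathrm{S})_{\mathrm{sample}}}
       {({}^{3x}\mathrm{S}/{}^{32}\mathrm{S})_{\mathrm{standard}}} - 1
\right) \times 1000, \qquad
\Delta^{33}\mathrm{S} \approx \delta^{33}\mathrm{S} - 0.515\,\delta^{34}\mathrm{S}
```

A nonzero Δ³³S, as found for the Isua pyrites, indicates mass-independent fractionation.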

Relevance: 90.00%

Publisher:

Abstract:

Three important goals in describing software design patterns are: generality, precision, and understandability. To address these goals, this paper presents an integrated approach to specifying patterns using Object-Z and UML. To achieve the generality goal, we adopt a role-based metamodeling approach to define patterns. With this approach, each pattern is defined as a pattern role model. To achieve precision, we formalize role concepts using Object-Z (a role metamodel) and use these concepts to define patterns (pattern role models). To achieve understandability, we represent the role metamodel and pattern role models visually using UML. Our pattern role models provide a precise basis for pattern-based model transformations or refactoring approaches.
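The paper's formalization uses Object-Z with UML visualizations; purely as a loose, hypothetical illustration of the role-model idea (a pattern described as roles that participating classes must play, checked against a concrete binding), here is a plain-Python sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    """A role a participating class must play in a pattern (e.g. the Observer pattern's 'Subject')."""
    name: str
    required_operations: frozenset[str]

@dataclass
class PatternRoleModel:
    """A pattern described as a set of roles; richer constraints are elided in this sketch."""
    name: str
    roles: tuple[Role, ...]

    def conforms(self, binding: dict[str, set[str]]) -> bool:
        """Check that each role is bound to a class offering the required operations.
        `binding` maps role name -> set of operations of the bound class."""
        return all(role.required_operations <= binding.get(role.name, set())
                   for role in self.roles)

# Hypothetical Observer pattern role model and a candidate binding.
observer = PatternRoleModel(
    "Observer",
    (Role("Subject", frozenset({"attach", "detach", "notify"})),
     Role("Observer", frozenset({"update"}))),
)
print(observer.conforms({"Subject": {"attach", "detach", "notify", "state"},
                         "Observer": {"update"}}))  # True
```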

Relevance: 90.00%

Publisher:

Abstract:

This thesis is organised into three parts. In Part 1 relevant literature is reviewed and three critical components in the development of a cognitive approach to instruction are identified. These three components are considered to be the structure of the subject-matter, the learner's cognitive structures, and the learner's cognitive strategies, which act as control and transfer devices between the instructional materials and the learner's cognitive structures. Six experiments are described in Part 2, which is divided into two methodologically distinct units. The three experiments of Unit 1 examined how learning from materials constructed from concept name by concept attribute matrices is influenced by learner- or experimenter-controlled sequence and organisation. The results suggested that the relationships between input organisation, output organisation and recall are complex, and highlighted the importance of investigating organisational strategies at both acquisition and recall. The role of subjects' previously acquired knowledge and skills in relation to the instructional material was considered to be an important factor. The three experiments of Unit 2 utilised a "diagramming relationships methodology", which was devised as one means of investigating the processes by which new information is assimilated into an individual's cognitive structure. The methodology was found to be useful in identifying cognitive strategies related to successful task performance. The results suggested that errors could be minimised and comprehension improved on the diagramming relationships task by instructing subjects in ways which induced successful processing operations. Part 3 of this thesis highlights salient issues raised by the experimental work within the framework outlined in Part 1 and discusses potential implications for future theoretical developments and research.

Relevance: 90.00%

Publisher:

Abstract:

Contrary to interviewing guidelines, a considerable portion of witness interviews are not recorded. Investigators’ memory, their interview notes, and any subsequent interview reports therefore become important pieces of evidence; the accuracy of interviewers’ memory or of such reports is of crucial importance when interviewers testify in court regarding witness interviews. A detailed recollection of the actual exchange during such interviews, and of how information was elicited from the witness, will allow for a better assessment of statement veracity in court. Two studies were designed to examine interviewers’ memory for a prior witness interview. Study One varied interviewer note-taking and the type of subsequent interview report written by interviewers, using a sample of undergraduates and a two-week delay between interview and recall. Study Two varied level of interviewing experience in addition to report type and note-taking, comparing experienced police interviewers to a student sample. Participants interviewed a mock witness about a crime, while taking notes or not, and wrote an interview report two weeks later (Study One) or immediately after (Study Two). Interview reports were written either in a summarized format, which asked interviewers for a summary of everything that occurred during the interview, or in a verbatim format, which asked interviewers to record in transcript form the questions they asked and the witness’s responses. Interviews were videotaped and transcribed, and the transcriptions were compared to the interview reports to score accuracy and omission of interview content. Results from both studies indicate that much interview information is lost between interview and report, especially after a two-week delay. The majority of information reported by interviewers is accurate, although even interviewers who recalled the interview immediately afterwards still reported a troubling amount of inaccurate information. Note-taking was found to increase the accuracy and completeness of interviewer reports, especially after a two-week delay. Report type only influenced recall of interviewer questions. Experienced police interviewers were not any better at recalling a prior witness interview than student interviewers. The results emphasize the need to record witness interviews to allow for more accurate and complete interview reconstruction by interviewers, even if interview notes are available.

Relevance: 90.00%

Publisher:

Abstract:

The need for elemental analysis techniques to solve forensic problems continues to grow as the samples collected from crime scenes become more complex. Laser ablation ICP-MS (LA-ICP-MS) has been shown to provide a high degree of discrimination between samples that originate from different sources. In the first part of this research, two laser ablation ICP-MS systems were compared, one using a nanosecond laser and the other a femtosecond laser source, for the forensic analysis of glass. The results showed that femtosecond LA-ICP-MS did not provide significant improvements in terms of accuracy, precision and discrimination; however, femtosecond LA-ICP-MS did provide lower detection limits. In addition, it was determined that even for femtosecond LA-ICP-MS an internal standard should be utilized to obtain accurate analytical results for glass analyses. In the second part, a method using laser-induced breakdown spectroscopy (LIBS) for the forensic analysis of glass was shown to provide excellent discrimination for a glass set consisting of 41 automotive fragments. The discrimination power was compared to two of the leading elemental analysis techniques, μXRF and LA-ICP-MS, and the results were similar; all methods generated >99% discrimination, and the pairs found indistinguishable were similar. An extensive data analysis approach for LIBS glass analyses was developed to minimize Type I and II errors, leading to a recommendation of 10 ratios to be used for glass comparisons. Finally, a LA-ICP-MS method for the qualitative analysis and discrimination of gel ink sources was developed and tested for a set of ink samples. In the first discrimination study, qualitative analysis was used to obtain 95.6% discrimination for a blind study consisting of 45 black gel ink samples provided by the United States Secret Service. A 0.4% false exclusion (Type I) error rate and a 3.9% false inclusion (Type II) error rate were obtained for this discrimination study. In the second discrimination study, 99% discrimination power was achieved for a black gel ink pen set consisting of 24 self-collected samples. The two pairs found to be indistinguishable came from the same source of origin (the same manufacturer and type of pen purchased in different locations). It was also found that gel ink from the same pen, regardless of age, was indistinguishable, as were four gel ink pens originating from the same pack.

Relevance: 90.00%

Publisher:

Abstract:

Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated from scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort in developing a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at one time. However, predictive tools need to consider the trade-offs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
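To make "atomicity violation" concrete, here is a generic check-then-act example of the kind such tools target (illustrative only; not taken from Mondex or from McPatom's benchmarks):

```python
import threading

balance = 100          # shared variable
lock = threading.Lock()

def withdraw_buggy(amount: int) -> None:
    """Check-then-act without holding the lock across both steps:
    two threads can both pass the check, then both withdraw (an atomicity violation)."""
    global balance
    if balance >= amount:      # read of shared state
        # another thread may interleave here
        balance -= amount      # write based on the stale read

def withdraw_atomic(amount: int) -> None:
    """The check and the update are executed as one atomic block."""
    global balance
    with lock:
        if balance >= amount:
            balance -= amount

threads = [threading.Thread(target=withdraw_buggy, args=(80,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 20, or -60 if the unlucky interleaving occurs
```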

Relevance: 90.00%

Publisher:

Abstract:

Content-based image retrieval is important for many purposes, such as diagnosing disease from computerized tomography images. The social and economic relevance of image retrieval systems has created the need for their improvement. Within this context, content-based image retrieval systems are composed of two stages: feature extraction and similarity measurement. The similarity stage is still a challenge due to the wide variety of similarity functions, which can be combined with the different techniques present in the retrieval process and do not always return the most satisfactory results. The functions most commonly used to measure similarity are the Euclidean distance and the cosine similarity, but some researchers have noted limitations of these conventional proximity functions in the similarity search step. For that reason, the Bregman divergences (Kullback-Leibler and I-Generalized) have attracted the attention of researchers due to their flexibility in similarity analysis. Thus, the aim of this research was to conduct a comparative study of the Bregman divergences against the Euclidean and cosine functions in the similarity step of content-based image retrieval, assessing the advantages and disadvantages of each function. To this end, a content-based image retrieval system was built with two stages, offline and online, using the BSM, FISM, BoVW and BoVW-SPM approaches. With this system, three groups of experiments were carried out using the Caltech101, Oxford and UK-bench databases. The performance of the content-based image retrieval system with the different similarity functions was evaluated using Mean Average Precision, normalized Discounted Cumulative Gain, precision at k, and precision × recall. Finally, this study shows that the Bregman divergences (Kullback-Leibler and Generalized) obtain better results than the Euclidean and cosine measures, with significant gains for content-based image retrieval.
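For concreteness, a minimal sketch of the four proximity functions compared — Euclidean distance, cosine similarity, Kullback-Leibler divergence, and a generalized I-divergence — applied to histogram-like feature vectors; the exact I-Generalized formulation used in the thesis may differ:

```python
import numpy as np

def euclidean(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.linalg.norm(p - q))

def cosine_similarity(p: np.ndarray, q: np.ndarray) -> float:
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

def kullback_leibler(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL divergence between two normalized histograms (a Bregman divergence)."""
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def generalized_i_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """I-divergence: KL generalized to unnormalized non-negative vectors."""
    return float(np.sum(p * np.log((p + eps) / (q + eps)) - p + q))

# Hypothetical feature histograms for a query image and a database image.
query = np.array([0.2, 0.5, 0.3])
candidate = np.array([0.1, 0.6, 0.3])
print(euclidean(query, candidate), cosine_similarity(query, candidate),
      kullback_leibler(query, candidate), generalized_i_divergence(query, candidate))
```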