977 results for Eyewitness identification accuracy


Relevance:

40.00%

Publisher:

Abstract:

Ammonium nitrate fuel oil (ANFO) is an explosive used in many civil applications. In Brazil, ANFO has unfortunately also been used in criminal attacks, mainly in automated teller machine (ATM) explosions. In this paper, we describe a detailed characterization of the ANFO composition and its two main constituents (diesel and a nitrate explosive) using high-resolution, high-accuracy mass spectrometry performed on an FT-ICR mass spectrometer with electrospray ionization in both the positive and negative ion modes (ESI(±)-FTMS). Via ESI(-)-MS, an ion marker for ANFO was characterized. Using a direct and simple ambient desorption/ionization technique, i.e., easy ambient sonic-spray ionization mass spectrometry (EASI-MS), on a simpler, lower-accuracy but robust single-quadrupole mass spectrometer, the ANFO ion marker was directly detected on the surface of banknotes collected from ATM explosion thefts.

Relevance:

40.00%

Publisher:

Abstract:

The accuracy of the modelling of rotor systems composed of rotors, oil-film bearings and a flexible foundation is evaluated and discussed in this paper. Different models have been validated by comparing experimental results with numerical results. The experimental data were obtained with a fully instrumented test rig having four oil-film bearings and two shafts. The fault models are then used in the framework of a model-based malfunction identification procedure, based on a least-squares fitting approach applied in the frequency domain. The capability of distinguishing different malfunctions (such as unbalance, rotor bow, coupling misalignment and others), even when they can create similar effects, from shaft vibrations measured at the bearings has been investigated.
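The least-squares fitting step can be sketched for the simplest case of a single fault parameter. This is a minimal illustration, not the authors' code; the data and the one-parameter restriction are assumptions.

```python
# Least-squares fit of a single complex fault parameter (e.g. unbalance)
# in the frequency domain: find f minimizing sum |x_k - h_k * f|^2,
# where x_k are measured harmonic vibration components and h_k the
# model's response per unit fault. Closed form:
#   f = (sum conj(h)*x) / (sum conj(h)*h).
# All data below are illustrative, not from the paper.

def fit_fault(h, x):
    num = sum(hk.conjugate() * xk for hk, xk in zip(h, x))
    den = sum(hk.conjugate() * hk for hk in h)
    return num / den

# Model response per unit fault at two bearings, and "measurements"
# generated from an assumed true fault of 2+1j.
h = [1.0 + 0.5j, 0.3 - 0.2j]
true_f = 2.0 + 1.0j
x = [hk * true_f for hk in h]

f_hat = fit_fault(h, x)
residual = sum(abs(xk - hk * f_hat) ** 2 for hk, xk in zip(h, x))
```

With noisy measurements the same formula returns the least-squares estimate rather than the exact value; comparing residuals across several candidate fault models is what allows the malfunctions to be distinguished.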

Relevance:

40.00%

Publisher:

Abstract:

Historically, memory has been evaluated by examining how much is remembered; however, a more recent conception of memory focuses on the accuracy of memories. When using this accuracy-oriented conception of memory, unlike with the quantity-oriented approach, memory does not always deteriorate over time. A possible explanation for this seemingly surprising finding lies in the metacognitive processes of monitoring and control. Use of these processes allows people to withhold responses of which they are unsure, or to adjust the precision of responses to a level broad enough to be correct. The ability to accurately report memories has implications for investigators who interview witnesses to crimes, and for those who evaluate witness testimony. This research examined the amount of information provided, and the accuracy and precision of responses, during immediate and delayed interviews about a videotaped mock crime. The interview format was manipulated such that either a single free narrative response was elicited, or a series of yes/no or cued questions was asked. Instructions provided by the interviewer indicated to the participants that they should stress either being informative or being accurate. The interviews were then transcribed and scored. Results indicate that accuracy rates remained stable and high after a one-week delay. Compared to those interviewed immediately, participants interviewed after a delay provided less information and less precise responses. Participants in the free narrative condition were the most accurate. Participants in the cued questions condition provided the most precise responses. Participants in the yes/no questions condition were the most likely to say “I don’t know”. The results indicate that people are able to monitor their memories and modify their reports to maintain high accuracy. When control over precision was not possible, such as in the yes/no condition, people said “I don’t know” to maintain accuracy. However, when withholding responses and adjusting precision were both possible, people made use of both methods. It seems that concerns that memories reported after a long retention interval might be inaccurate are unfounded.
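The monitoring-and-control account above can be illustrated with a toy report-option model: answers are volunteered only when confidence exceeds a criterion, trading quantity of report for accuracy. All values below are invented, not the study's data.

```python
# Report-option sketch: withholding low-confidence answers raises the
# accuracy of what is reported while reducing how much is reported.
# Each candidate answer is (is_correct, confidence); values invented.

def report(answers, criterion):
    volunteered = [(c, conf) for c, conf in answers if conf >= criterion]
    if not volunteered:
        return 0, None
    accuracy = sum(1 for c, _ in volunteered if c) / len(volunteered)
    return len(volunteered), accuracy

answers = [(True, 0.9), (True, 0.8), (False, 0.4),
           (True, 0.6), (False, 0.3)]

n_all, acc_all = report(answers, 0.0)        # volunteer everything
n_strict, acc_strict = report(answers, 0.5)  # withhold unsure answers
```

Raising the criterion drops the two low-confidence wrong answers, so fewer items are reported but the accuracy of the report rises, mirroring the quantity/accuracy trade-off described above.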


Relevance:

40.00%

Publisher:

Abstract:

Load Theory (Lavie, 1995, 2005) states that the level of perceptual load in a task (i.e., the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can cause individuals to fail to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory, across three experiments (two video-based and one in a driving simulator). The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load, but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. Memories formed under high load were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall of auditory information, illustrating a possible cross-modal effect of perceptual load on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.

Relevance:

40.00%

Publisher:

Abstract:

The aim of the present study was to develop a statistical approach for defining the best cut-off for copy number alteration (CNA) calling from genomic data provided by high-throughput experiments, able to predict a specific clinical end-point (early relapse, within 18 months) in the context of multiple myeloma (MM). 743 newly diagnosed MM patients with SNP array-derived genomic data and clinical data were included in the study. CNAs were called both by a conventional (classic, CL) and an outcome-oriented (OO) method, and the progression-free survival (PFS) hazard ratios of CNAs called by the two approaches were compared. The OO approach successfully identified patients at higher risk of relapse, and the univariate survival analysis showed stronger prognostic effects for OO-defined high-risk alterations than for those defined by the CL approach, statistically significant for 12 CNAs. Overall, 155/743 patients relapsed within 18 months from the start of therapy. A small number of OO-defined CNAs (amp1q, amp2p, del2p, del12p, del17p, del19p) were significantly recurrent in early-relapsed patients (ER-CNAs). Two groups of patients were identified, carrying or not carrying ≥1 ER-CNA (249 vs. 494, respectively); the first had significantly shorter PFS and overall survival (OS) (PFS HR 2.15, p<0.0001; OS HR 2.37, p<0.0001). The risk of relapse defined by the presence of ≥1 ER-CNA was independent of those conferred both by R-ISS stage 3 (HR = 1.51; p = 0.01) and by a low-quality (below stable disease) clinical response (HR = 2.59, p = 0.004). Notably, the type of induction therapy was not informative, suggesting that early relapse is strongly related to the patient's baseline genomic architecture. In conclusion, the OO approach employed allowed CNA-specific dynamic clonality cut-offs to be defined, improving the accuracy of CNA calls in identifying the MM patients with the highest probability of early relapse. Being outcome-dependent, the OO approach is dynamic and can be adjusted according to the selected outcome variable of interest.
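The outcome-oriented idea of tuning a calling cut-off against a clinical end-point can be caricatured as scanning candidate thresholds for the one that best separates early relapsers. This toy sketch substitutes a simple difference in early-relapse rates for the study's hazard-ratio-based survival analysis; all data are invented.

```python
# Outcome-oriented cutoff sketch: choose the clonal-fraction threshold
# that best separates early relapsers from the rest. A real analysis
# would use survival models (hazard ratios, PFS); here the separation
# score is simply the difference in early-relapse rates. Toy data only.

def best_cutoff(fractions, early_relapse, candidates):
    def separation(t):
        hi = [r for f, r in zip(fractions, early_relapse) if f >= t]
        lo = [r for f, r in zip(fractions, early_relapse) if f < t]
        if not hi or not lo:
            return -1.0   # threshold puts everyone in one group
        return sum(hi) / len(hi) - sum(lo) / len(lo)
    return max(candidates, key=separation)

fractions     = [0.10, 0.25, 0.40, 0.55, 0.70, 0.90]  # CNA clonal fraction
early_relapse = [0,    0,    0,    1,    1,    1]     # relapse < 18 months

cutoff = best_cutoff(fractions, early_relapse, [0.2, 0.35, 0.5, 0.65])
```

Because the chosen threshold depends on the outcome labels, changing the end-point changes the cut-off, which is the "dynamic" property the abstract highlights.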

Relevance:

30.00%

Publisher:

Abstract:

There are many studies comparing the accuracy of multislice (MSCT) and cone beam (CBCT) computed tomography for evaluations of the maxillofacial region. However, further studies comparing the two acquisition techniques for the evaluation of simulated mandibular bone lesions are needed. The aim of this study was to compare the accuracy of MSCT and CBCT in the diagnosis of simulated mandibular bone lesions by means of cross-sectional images and axial/MPR slices. Lesions with different dimensions, shapes and locularity were produced in 15 dry mandibles. The images were obtained following the cross-sectional and axial/MPR (multiplanar reconstruction) imaging protocols and were interpreted independently. CBCT and MSCT showed similar results in depicting the percentage of cortical bone involvement, with high sensitivity and specificity (p < 0.005). There were no significant intra- or inter-examiner differences between axial/MPR images and cross-sectional images with regard to sensitivity and specificity. CBCT showed results similar to those of MSCT for the identification of the number of simulated bone lesions. Cross-sectional slices and axial/MPR images presented high accuracy, proving useful for bone lesion diagnosis.
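Sensitivity and specificity figures of the kind reported here come from ordinary confusion-matrix ratios. A minimal sketch with invented counts (not the study's data):

```python
# Sensitivity and specificity from detection counts, as used to compare
# the two CT modalities. All counts below are invented for illustration.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # lesions correctly detected
    specificity = tn / (tn + fp)   # lesion-free sites correctly cleared
    return sensitivity, specificity

sens, spec = sens_spec(tp=28, fn=2, tn=27, fp=3)
```

Accuracy, by contrast, pools both kinds of error: (tp + tn) / (tp + tn + fp + fn).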

Relevance:

30.00%

Publisher:

Abstract:

Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG).
The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in network identification, yielding very good results with small expression profiles. However, the adopted inference method was not able to distinguish different structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks, identifying some properties of the evaluated method, and it can be extended to other inference methods.
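Validation step (3), comparing the identified network with the original AGN, amounts to scoring recovered edges. A minimal sketch with invented toy networks, not the paper's data:

```python
# Score an inferred gene network against the ground-truth AGN by edge
# precision and recall. Edges are directed (predictor -> target).
# The two toy networks below are invented for illustration.

def edge_scores(true_edges, inferred_edges):
    tp = len(true_edges & inferred_edges)   # correctly recovered edges
    precision = tp / len(inferred_edges) if inferred_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    return precision, recall

true_net     = {("g1", "g2"), ("g2", "g3"), ("g1", "g4")}
inferred_net = {("g1", "g2"), ("g2", "g3"), ("g3", "g4")}

precision, recall = edge_scores(true_net, inferred_net)
```

The recovery rate discussed in the results corresponds to recall here; tracking how it falls as the AGN's average degree k grows reproduces the kind of sensitivity analysis the framework enables.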

Relevance:

30.00%

Publisher:

Abstract:

Among several sources of process variability, valve friction and inadequate controller tuning are thought to be two of the most prevalent. Friction quantification methods can be applied to the development of model-based compensators or to diagnose valves that need repair, whereas accurate process models can be used in controller retuning. This paper extends existing methods that jointly estimate the friction and process parameters, so that a nonlinear structure is adopted to represent the process model. The developed estimation algorithm is tested with three different data sources: a simulated first-order-plus-dead-time process, a hybrid setup (composed of a real valve and a simulated pH neutralization process) and three industrial datasets corresponding to real control loops. The results demonstrate that the friction is accurately quantified and that "good" process models are estimated in several situations. Furthermore, when a nonlinear process model is considered, the proposed extension presents significant advantages: (i) greater accuracy in friction quantification and (ii) reasonable estimates of the nonlinear steady-state characteristics of the process. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The flavone C-glucoside vicenin-2, in semi-purified extracts of the leaves of Lychnophora ericoides, was quantified in rat plasma samples using a method based on reversed-phase high-performance liquid chromatography coupled to tandem mass spectrometry. Vicenin-2 was analyzed on a LiChrospher(R) RP18 column using an isocratic mobile phase consisting of a mixture of methanol:water (30:70, v/v) plus 2.0% glacial acetic acid at a flow rate of 0.8 mL min(-1). Genistein was used as the internal standard. The mass spectrometer was operated in positive ionization mode and the analytes were quantified by multiple reaction monitoring at m/z 595 > 457 for vicenin-2 and m/z 271 > 153 for the internal standard. Prior to analysis, each rat plasma sample was acidified with 200 µL of 50 mmol L(-1) acetic acid solution and extracted by solid-phase extraction using a C18 cartridge. The absolute recoveries were reproducible and the coefficients of variation were lower than 5.2%. The method was linear over the 12.5-1500 ng mL(-1) concentration range and the quantification limit was 12.5 ng mL(-1). Within-day and between-day assay precision and accuracy were studied at three concentration levels (40, 400 and 800 ng mL(-1)) and were lower than 15%. The developed and validated method seems suitable for the analysis of vicenin-2 in plasma samples obtained from rats that received a single i.p. dose of 200 mg kg(-1) of vicenin-2 extract.
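Quantification over a stated linear range rests on an ordinary least-squares calibration line. A minimal sketch with invented standards and responses (not the validated calibration data):

```python
# Linear calibration sketch for quantification: fit response = a*conc + b
# by ordinary least squares over the standards, then back-calculate an
# unknown sample from its measured response. All numbers are invented.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [12.5, 100.0, 500.0, 1000.0, 1500.0]   # ng/mL calibration standards
resp = [0.25, 2.0, 10.0, 20.0, 30.0]          # detector response (ratio)

a, b = fit_line(conc, resp)
unknown_conc = (14.0 - b) / a   # back-calculate a sample response of 14.0
```

In practice the response would be the analyte/internal-standard peak-area ratio, and precision and accuracy would be checked at the quality-control levels the abstract lists.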

Relevance:

30.00%

Publisher:

Abstract:

The aim of this investigation was to assess the diagnostic accuracy of intraoperative cultures for the early identification of patients at risk of infection after primary total hip arthroplasty. Four or six swabs were obtained immediately before wound closure in 263 primary total hip replacements. Patients with at most one positive culture were denoted patients with a normal profile and did not receive any treatment. Patients with two or more positive cultures identifying the same organism were denoted patients with a risk profile and were treated for six weeks with a specific antibiotic as determined by the antibiogram. The follow-up ranged from a minimum of one year to five years and eleven months, concentrating on the presence or absence of infection, defined as discharge of pus through the surgical wound or a fistula at any time after surgery. The accuracy of this procedure (number of cases correctly identified in relation to the total number of cases) was 96% in the group of 152 arthroplasties in which 4 swabs per patient were collected. In the group of 111 arthroplasties in which 6 swabs per patient were collected, the accuracy was 95.5%. We conclude that the collection of swabs under the conditions described is a method of high accuracy (above 95%) for evaluating the risk of infection after primary total hip arthroplasty.
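The study's decision rule (two or more positive swabs with the same organism) and its accuracy measure can be expressed directly. The organisms, swab results and outcomes below are invented, not the study's data.

```python
# Decision rule from the study: >=2 positive swabs growing the same
# organism flag a "risk profile". Accuracy = correctly classified / total.
# Swab panels and infection outcomes below are invented for illustration.

def risk_profile(swabs):
    counts = {}
    for organism in swabs:
        if organism is not None:       # None = no growth on that swab
            counts[organism] = counts.get(organism, 0) + 1
    return any(n >= 2 for n in counts.values())

def accuracy(predictions, outcomes):
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

patients = [
    ([None, None, None, None], False),                 # no growth, no infection
    (["S. aureus", "S. aureus", None, None], True),    # risk profile, infected
    (["S. epidermidis", None, None, None], False),     # single positive: normal
    (["E. coli", "E. coli", "E. coli", None], False),  # risk profile, no infection
]

preds = [risk_profile(swabs) for swabs, _ in patients]
acc = accuracy(preds, [outcome for _, outcome in patients])
```

The fourth toy patient shows how a false positive lowers accuracy even though the rule fired correctly on its own terms.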

Relevance:

30.00%

Publisher:

Abstract:

The subjective interpretation of dobutamine echocardiography (DBE) makes the accuracy of this technique dependent on the experience of the observer, and also poses problems of concordance between observers. Myocardial tissue Doppler velocity (MDV) may offer a quantitative technique for the identification of coronary artery disease, but it is unclear whether this parameter could improve the results of less expert readers and in segments with low interobserver concordance. The aim of this study was to find whether MDV improved the accuracy of wall motion scoring by novice readers, experienced echocardiographers, and experts in stress echocardiography, and to identify the optimal means of integrating these tissue Doppler data, in 77 patients who underwent DBE and angiography. New or worsening abnormalities were identified as ischemia, and abnormalities seen at rest as scarring. Segmental MDV was measured independently, and previously derived cutoffs were applied to categorize segments as normal or abnormal. Five strategies were used to combine MDV and wall motion score, and the results of each reader using each strategy were compared with quantitative coronary angiography. The accuracy of wall motion scoring by novice (68 +/- 3%) and experienced echocardiographers (71 +/- 3%) was lower than that of experts in stress echocardiography (88 +/- 3%, p < 0.001). Various strategies for integration with MDV significantly improved the accuracy of wall motion scoring by novices from 75 +/- 2% to 77 +/- 5% (p < 0.01). Among the experienced group, accuracy improved from 74 +/- 2% to 77 +/- 5% (p < 0.05), but in the experts no improvement over their baseline accuracy was seen. Integration with MDV also reduced discordance in the basal segments. Thus, the use of MDV in all segments, or MDV in all segments combined with wall motion scoring in the apex, offers an improvement in sensitivity and accuracy with minimal compromise in specificity. (C) 2001 by Excerpta Medica, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images, which means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method (SISAL) on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution on large datasets while maintaining the method's accuracy.
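The linear mixing model underlying unmixing can be illustrated at its smallest scale. This is an unconstrained least-squares toy example with invented spectra, not the SISAL algorithm itself (which solves a much harder constrained simplex problem over full images):

```python
# Linear mixing model: pixel = M @ a, where the columns of M are
# endmember spectra and a holds their fractional abundances. For two
# endmembers, the unconstrained least-squares estimate follows from
# the 2x2 normal equations. Spectra below are invented.

def unmix_two(m1, m2, x):
    a11 = sum(v * v for v in m1)
    a12 = sum(u * v for u, v in zip(m1, m2))
    a22 = sum(v * v for v in m2)
    b1 = sum(u * v for u, v in zip(m1, x))
    b2 = sum(u * v for u, v in zip(m2, x))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)

m1 = [1.0, 0.2, 0.1]    # endmember 1 spectrum (3 bands)
m2 = [0.1, 0.8, 1.0]    # endmember 2 spectrum
x = [v1 * 0.3 + v2 * 0.7 for v1, v2 in zip(m1, m2)]   # mixed pixel

a1, a2 = unmix_two(m1, m2, x)
```

Real unmixing additionally enforces nonnegativity and the sum-to-one constraint on the abundances, and, in SISAL's case, estimates the endmembers themselves as the vertices of the smallest enclosing simplex.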

Relevance:

30.00%

Publisher:

Abstract:

Infection by Candida spp. is associated with high mortality rates, especially when treatment is not appropriate and/or not immediate. It is therefore necessary to correctly identify the genus and species of Candida. The aim of this study was to compare the identification of 89 samples of Candida spp. by the manual methods (germ tube test, auxanogram and chromogenic medium) against the automated ID 32C method. The concordances between the methods, in ascending order as measured by the Kappa index, were: ID 32C with CHROMagar Candida (κ = 0.38), ID 32C with auxanogram (κ = 0.59) and ID 32C with germ tube (κ = 0.9). One of the species identified in this study was C. tropicalis, which showed a sensitivity of 46.2%, a specificity of 95.2%, a PPV of 80%, an NPV of 81.1% and an accuracy of 80.9% in the tests performed with CHROMagar Candida, and a sensitivity of 76.9%, a specificity of 96.8%, a PPV of 90.9%, an NPV of 91% and an accuracy of 91% in the auxanogram tests. It is therefore necessary to know the advantages and limitations of the methods in order to choose the best combination of them for fast and correct identification of Candida species.
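The Kappa index used to compare the methods is Cohen's kappa: agreement between two raters corrected for chance. A minimal sketch with invented binary calls (not the study's identifications):

```python
# Cohen's kappa for agreement between two identification methods on
# binary calls (e.g. "is / is not species X"):
#   kappa = (po - pe) / (1 - pe),
# where po is observed agreement and pe is chance agreement from the
# marginal label frequencies. Labels below are invented.

def cohen_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n)         # chance agreement
             for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

method_a = [1, 1, 0, 0]
method_b = [1, 0, 0, 0]

kappa = cohen_kappa(method_a, method_b)
```

Kappa of 0 means agreement no better than chance, and 1 means perfect agreement, which is why κ = 0.9 for the germ tube test indicates much stronger concordance with ID 32C than κ = 0.38 for the chromogenic medium.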