924 results for Accuracy and precision
Abstract:
Marketers have long looked for observables that could explain differences in consumer behavior. Initial attempts have centered on demographic factors, such as age, gender, and race. Although such variables provide some useful information for segmentation (Bass, Tigert, and Lonsdale 1968), more recent studies have shown that variables that tap into consumers’ social classes and personal values have more predictive accuracy and also provide deeper insights into consumer behavior. I argue that one demographic construct, religion, merits further consideration as a factor that has a profound impact on consumer behavior. In this dissertation, I focus on two types of religious guidance that may influence consumer behaviors: religious teachings (being content with one’s belongings) and religious problem-solving styles (reliance on God).
Essay 1 focuses on the well-established endowment effect and introduces a new moderator (religious teachings on contentment) that influences both owners’ and buyers’ pricing behaviors. Through fifteen experiments, I demonstrate that when people are primed with religion or characterized by stronger religious beliefs, they tend to value their belongings more than people who are not primed with religion or who have weaker religious beliefs. These effects are caused by religious teachings on being content with one’s belongings, which lead to the overvaluation of one’s own possessions.
Essay 2 focuses on self-control behaviors, specifically healthy eating, and introduces a new moderator (God’s role in the decision-making process) that moderates the relationship between religiosity and the healthiness of food choices. My findings demonstrate that consumers who indicate that they defer to God in their decision-making make unhealthier food choices as their religiosity increases. The opposite is true for consumers who rely entirely on themselves. Importantly, this relationship is mediated by the consumer’s consideration of future consequences. This essay provides an explanation for the existing mixed findings on the relationship between religiosity and obesity.
Abstract:
Introduction: Computer-Aided Design (CAD) and Computer-Aided Manufacture (CAM) have been developed to fabricate fixed dental restorations more accurately, more quickly, and more cost-effectively than conventional methods. Two main methods exist in dental CAD/CAM technology: the subtractive and the additive method. While the fitting accuracy of both methods has been explored, no study yet has compared the fabricated restoration (CAM output) to its CAD in terms of accuracy. The aim of the present study was to compare the output of various dental CAM routes to a single initial CAD and to establish the accuracy of fabrication. The internal fit of the various CAM routes was also investigated. The null hypotheses tested were: 1) no significant differences between the CAM output and the CAD, and 2) no significant differences between the various CAM routes. Methods: An aluminium master model of a standard premolar preparation was scanned with a contact dental scanner (Incise, Renishaw, UK). A single CAD was created on the scanned master model (InciseCAD software, V2.5.0.140, UK). Twenty copings were then fabricated by sending the single CAD to a multitude of CAM routes. The copings were grouped (n=5) as: laser-sintered CoCrMo (LS), 5-axis milled CoCrMo (M-CoCrMo), 3-axis milled zirconia (ZAx3) and 4-axis milled zirconia (ZAx4). All copings were micro-CT scanned (Phoenix X-Ray, Nanotom-S, Germany; power: 155 kV, current: 60 µA, 3600 projections) to produce 3-dimensional (3D) models. A novel methodology was created to superimpose the micro-CT scans with the CAD (GOM Inspect software, V7.5SR2, Germany) to indicate inaccuracies in manufacturing. The accuracy in terms of coping volume was explored. The distances from the surfaces of the micro-CT 3D models to the surfaces of the CAD model (CAD Deviation) were investigated after creating surface colour deviation maps. Localised digital sections of the deviations (Occlusal, Axial and Cervical) and selected focussed areas were then quantitatively measured using software (GOM Inspect software, Germany). A novel methodology was also explored to digitally align (Rhino software, V5, USA) the micro-CT scans with the master model to investigate internal fit. Fifty digital cross sections of the aligned scans were created. Point-to-point distances were measured at 5 levels at each cross section. The five levels were: Vertical Marginal Fit (VF), Absolute Marginal Fit (AM), Axio-margin Fit (AMF), Axial Fit (AF) and Occlusal Fit (OF). Results: The volume measurements were ranked as V(M-CoCrMo) (62.8 mm³) > V(ZAx3) (59.4 mm³) > V(CAD) (57 mm³) > V(ZAx4) (56.1 mm³) > V(LS) (52.5 mm³), and all volumes differed significantly from one another. CAD Deviations were presented as areas of different colour on the surface deviation maps. No significant differences were observed at the internal cervical aspect between all groups of copings. Significant differences between groups were observed at localised areas: M-CoCrMo showed comparatively large deviations at the Internal Occlusal, Internal Axial and External Axial areas; ZAx3 > ZAx4 at the External Occlusal and External Cervical areas; ZAx3 < ZAx4 at the Internal Occlusal area; and M-CoCrMo > ZAx4 at the Internal Occlusal and Internal Axial areas. The mean values of AMF and AF were significantly different, with CAD > M-CoCrMo and CAD > ZAx4. Only the VF of M-CoCrMo was comparable with the CAD internal fit. All VF and AM values were within the clinically acceptable fit (120 µm). Conclusion: The investigated CAM methods reproduced the CAD accurately at the internal cervical aspect of the copings.
However, localised deviations at axial and occlusal aspects of the copings may suggest the need for modifications in these areas prior to fitting and veneering with porcelain. The CAM groups evaluated also showed different levels of Internal Fit thus rejecting the null hypotheses. The novel non-destructive methodologies for CAD/CAM accuracy and internal fit testing presented in this thesis may be a useful evaluation tool for similar applications.
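For readers interested in how such CAD-to-scan comparisons can be quantified, the following is a minimal Python sketch of a point-based deviation measurement. It assumes both surfaces have already been sampled as point clouds and best-fit aligned in a common coordinate system; the function names, the nearest-neighbour simplification and the region summary are illustrative stand-ins for the GOM Inspect workflow described above, not the thesis's actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviations(cad_points, scan_points):
    """For each point sampled from the micro-CT surface model, return the distance
    to the nearest point sampled from the CAD surface. Assumes both point clouds
    are already expressed in the same (best-fit aligned) coordinate system."""
    tree = cKDTree(np.asarray(cad_points, dtype=float))
    distances, _ = tree.query(np.asarray(scan_points, dtype=float))
    return distances

def summarise_region(distances, region_mask):
    """Summarise deviations for a localised region (e.g. occlusal, axial or cervical),
    given a boolean mask selecting the scan points belonging to that region."""
    d = distances[region_mask]
    return {"mean": float(d.mean()), "p95": float(np.percentile(d, 95))}
```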
Abstract:
Many studies have shown the considerable potential for the application of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configuration, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature that have individually shown potential for estimating lake water quality properties in the form of chlorophyll-a (algal biomass) and Secchi disc depth (SDD) (water transparency) in independent studies. Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in terms of eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and combined to assess the degree of their applicability in ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set and most algorithms performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R2 = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R2 = 0.62 and 0.76, respectively). The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty due to various limitations, including the lake water optical properties and the choice of methods.
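As an illustration of the accuracy and transferability assessment described above, the sketch below scores a single retrieval algorithm per lake and on the pooled data set, using the squared Pearson correlation as R². The dictionary inputs and function names are hypothetical, and the study's actual match-up and validation procedure is more involved.

```python
import numpy as np

def r_squared(in_situ, estimated):
    """Squared Pearson correlation between in-situ measurements (e.g. chlorophyll-a
    or Secchi disc depth) and satellite-derived estimates."""
    return float(np.corrcoef(in_situ, estimated)[0, 1] ** 2)

def score_algorithm(estimates_by_lake, in_situ_by_lake):
    """Score one retrieval algorithm per lake and on the combined data set,
    mirroring the per-lake versus combined assessment described above."""
    scores = {lake: r_squared(in_situ_by_lake[lake], estimates_by_lake[lake])
              for lake in in_situ_by_lake}
    lakes = list(in_situ_by_lake)
    scores["combined"] = r_squared(
        np.concatenate([in_situ_by_lake[l] for l in lakes]),
        np.concatenate([estimates_by_lake[l] for l in lakes]),
    )
    return scores
```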
Abstract:
Contrary to interviewing guidelines, a considerable portion of witness interviews are not recorded. Investigators’ memory, their interview notes, and any subsequent interview reports therefore become important pieces of evidence; the accuracy of interviewers’ memory and of such reports is therefore of crucial importance when interviewers testify in court regarding witness interviews. A detailed recollection of the actual exchange during such interviews, and of how information was elicited from the witness, allows for a better assessment of statement veracity in court. Two studies were designed to examine interviewers’ memory for a prior witness interview. Study One varied interviewer note-taking and the type of subsequent interview report written by interviewers, using a sample of undergraduates and implementing a two-week delay between interview and recall. Study Two varied level of interviewing experience in addition to report type and note-taking by comparing experienced police interviewers to a student sample. Participants interviewed a mock witness about a crime, while taking notes or not, and wrote an interview report two weeks later (Study One) or immediately after (Study Two). Interview reports were written either in a summarized format, which asked interviewers for a summary of everything that occurred during the interview, or in a verbatim format, which asked interviewers to record in transcript format the questions they asked and the witness’s responses. Interviews were videotaped and transcribed. Transcriptions were compared to interview reports to score accuracy and omission of interview content. Results from both studies indicate that much interview information is lost between interview and report, especially after a two-week delay. The majority of information reported by interviewers is accurate, although even interviewers who recalled the interview immediately afterwards still reported a troubling amount of inaccurate information. Note-taking was found to increase the accuracy and completeness of interviewer reports, especially after a two-week delay. Report type only influenced recall of interviewer questions. Experienced police interviewers were no better at recalling a prior witness interview than student interviewers. Results emphasize the need to record witness interviews to allow for more accurate and complete interview reconstruction by interviewers, even if interview notes are available.
Abstract:
The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to the technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the model performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss necessary modifications of the model code to improve its parallel performance in general and specific to the HPC environment. We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
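The scaling statement above rests on the usual strong-scaling definition of parallel efficiency. The following minimal sketch, with made-up core counts and wall times, shows how a threshold such as the quoted 70 % would be computed.

```python
def parallel_efficiency(cores_ref, walltime_ref, cores, walltime):
    """Strong-scaling parallel efficiency relative to a reference run:
    E = (cores_ref * walltime_ref) / (cores * walltime).
    E = 1.0 is ideal scaling; "good scaling" above corresponds to E >= 0.7."""
    return (cores_ref * walltime_ref) / (cores * walltime)

# Illustrative (made-up) numbers: doubling the core count with a 1.6x speed-up
# gives E = 0.8, i.e. above the 70 % threshold.
print(parallel_efficiency(65536, 100.0, 131072, 62.5))  # -> 0.8
```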
Abstract:
The need for continuous-recording rain gauges makes it difficult to determine the rainfall erosivity factor (R-factor) of the (R)USLE model in areas without good temporal data coverage. In mainland Spain, the Nature Conservation Institute (ICONA) determined the R-factor at a few selected pluviographs, so simple estimates of the R-factor are of great interest. The objectives of this study were: (1) to identify a readily available estimate of the R-factor for mainland Spain; (2) to discuss the applicability of a single (global) estimate based on analysis of regional results; (3) to evaluate the effect of record length on estimate precision and accuracy; and (4) to validate an available regression model developed by ICONA. Four estimators based on monthly precipitation were computed at 74 rainfall stations throughout mainland Spain. The regression analysis conducted at a global level clearly showed that the modified Fournier index (MFI) ranked first among all assessed indexes. The applicability of this preliminary global model across mainland Spain was evaluated by analyzing regression results obtained at a regional level. It was found that three contiguous regions of eastern Spain (Catalonia, Valencian Community and Murcia) could have a different rainfall erosivity pattern, so a new regression analysis was conducted by dividing mainland Spain into two areas: eastern Spain and the plateau-lowland area. A comparative analysis concluded that the bi-areal regression model based on MFI for a 10-year record length provided a simple, precise and accurate estimate of the R-factor in mainland Spain. Finally, validation of the regression model proposed by ICONA showed that the R-ICONA index overpredicted the R-factor by approximately 19%.
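For reference, the modified Fournier index used above has a simple closed form: the sum of squared monthly precipitation totals divided by the annual total. The sketch below computes it and fits a generic linear R-factor model; the regression form and coefficients reported in the study are not reproduced here, and the arrays passed in are whatever station data the user supplies.

```python
import numpy as np

def modified_fournier_index(monthly_precip_mm):
    """Modified Fournier index for one station:
    MFI = sum_i p_i**2 / P, with p_i the twelve monthly precipitation totals (mm)
    and P the annual total."""
    p = np.asarray(monthly_precip_mm, dtype=float)
    return float(np.sum(p ** 2) / p.sum())

def fit_r_factor_model(mfi_values, r_factor_values):
    """Ordinary least-squares fit of a simple linear model R = a + b * MFI across
    stations; returns (a, b). The published ICONA/regional coefficients differ."""
    b, a = np.polyfit(mfi_values, r_factor_values, 1)
    return a, b
```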
Abstract:
This paper describes an implementation of a method capable of integrating parametric, feature-based CAD models built in commercial software (CATIA) with the SU2 software framework. To exploit adjoint-based methods for aerodynamic optimisation within SU2, a formulation to obtain geometric sensitivities directly from the commercial CAD parameterisation is introduced, enabling the calculation of gradients with respect to CAD-based design variables. To assess the accuracy and efficiency of the alternative approach, two aerodynamic optimisation problems are investigated: an inviscid 3D problem with multiple constraints, and a viscous 2D high-lift aerofoil problem without any constraints. Initial results show the new parameterisation obtaining reliable optima, with levels of performance similar to those of the software's native parameterisations. In the final paper, details of computing CAD sensitivities will be provided, including accuracy as well as the linking of geometric sensitivities to aerodynamic objective functions and constraints; the impact on the robustness of the overall method will be assessed and alternative parameterisations will be included.
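As a rough illustration of how CAD-based design variables can be linked to adjoint surface sensitivities, the sketch below chains finite-difference geometric sensitivities of a parametric surface evaluator with a given dJ/dX field. The evaluator `evaluate_surface` is a hypothetical wrapper around a parametric CAD model; this is a generic construction under stated assumptions, not the paper's actual CATIA/SU2 coupling.

```python
import numpy as np

def cad_surface_sensitivities(evaluate_surface, params, eps=1e-6):
    """Finite-difference approximation of dX/dalpha: perturb each CAD design
    variable and re-evaluate the surface point coordinates.
    evaluate_surface(params) is assumed to return an (n_points, 3) array."""
    params = np.asarray(params, dtype=float)
    base = evaluate_surface(params)
    dX_dalpha = np.zeros((params.size,) + base.shape)
    for k in range(params.size):
        perturbed = params.copy()
        perturbed[k] += eps
        dX_dalpha[k] = (evaluate_surface(perturbed) - base) / eps
    return dX_dalpha

def objective_gradient(dJ_dX, dX_dalpha):
    """Chain rule: combine adjoint surface sensitivities dJ/dX (n_points, 3) with
    geometric sensitivities dX/dalpha to obtain dJ/dalpha per design variable."""
    return np.einsum("pj,kpj->k", dJ_dX, dX_dalpha)
```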
Abstract:
AIMS: Mutation detection accuracy has been described extensively; however, it is surprising that pre-PCR processing of formalin-fixed paraffin-embedded (FFPE) samples has not been systematically assessed in a clinical context. We designed a RING trial to (i) investigate pre-PCR variability, (ii) correlate pre-PCR variation with EGFR/BRAF mutation testing accuracy and (iii) investigate causes for the observed variation. METHODS: 13 molecular pathology laboratories were recruited. 104 blinded FFPE curls, including engineered FFPE curls, cell-negative FFPE curls and control FFPE tissue samples, were distributed to participants for pre-PCR processing and mutation detection. Follow-up analysis was performed to assess sample purity, DNA integrity and DNA quantitation. RESULTS: The rate of mutation detection failure was 11.9%. Of these failures, 80% were attributed to pre-PCR error. Significant differences in DNA yields across all samples were seen by analysis of variance.
Abstract:
INTRODUCTION: The dichotomization of non-small cell lung carcinoma (NSCLC) subtype into squamous cell carcinoma (SQCC) and adenocarcinoma (ADC) has become important in recent years and is increasingly required with regard to management. The aim of this study was to determine the utility of a panel of commercially available antibodies in refining the diagnosis on small biopsies, and also to determine whether cytologic material is suitable for somatic EGFR genotyping, in a prospectively analyzed series of patients undergoing investigation for suspected lung cancer. METHODS: Thirty-two consecutive cases of NSCLC were first tested using a panel comprising cytokeratin 5/6, P63, thyroid transcription factor-1, 34betaE12, and a D-PAS stain for mucin, to determine their value in refining the diagnosis of NSCLC. After this test phase, two further pathologists independently reviewed the cases using a refined panel that excluded 34betaE12 because of its low specificity for SQCC, and refinement of diagnosis and concordance were assessed. Ten cases of ADC, including eight derived from cytologic samples, were sent for EGFR mutation analysis. RESULTS: There was refinement of diagnosis in 65% of cases of NSCLC to either SQCC or ADC in the test phase. This included 10 of 13 cases where cell pellets had been prepared from transbronchial needle aspirates. Validation by two further pathologists with varying expertise in lung pathology confirmed increased refinement and concordance of diagnosis. All samples were adequate for analysis, and they all showed a wild-type EGFR genotype. CONCLUSION: A panel comprising cytokeratin 5/6, P63, thyroid transcription factor-1, and a D-PAS stain for mucin increases diagnostic accuracy and agreement between pathologists when faced with refining a diagnosis of NSCLC to SQCC or ADC. These small samples, even cell pellets derived from transbronchial needle aspirates, seem to be adequate for EGFR mutation analysis.
Abstract:
Development of reliable methods for optimised energy storage and generation is one of the most imminent challenges in modern power systems. In this paper, an adaptive approach to the load-levelling problem is proposed, using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve such an inverse problem, taking into account both the time-dependent efficiencies and the availability of generation/storage of each energy storage technology. In this analysis, a direct numerical method is employed to find the least-cost dispatch of available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties, which is associated with confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
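To make the modelling ingredient concrete, the following is a minimal sketch of a forward-stepping midpoint quadrature for a first-kind Volterra equation of the type mentioned above. It is a generic textbook-style scheme under simplifying assumptions (scalar unknown, kernel non-vanishing near the diagonal), not the paper's adaptive load-levelling method.

```python
import numpy as np

def solve_volterra_first_kind(kernel, rhs, t_end, n):
    """Solve the first-kind Volterra equation  int_0^t K(t, s) x(s) ds = f(t)
    on [0, t_end] with a midpoint quadrature, stepping forward in time
    (a simple collocation-type scheme; requires K(t, s) != 0 near s = t)."""
    h = t_end / n
    s_mid = (np.arange(n) + 0.5) * h     # midpoints where x is approximated
    x = np.zeros(n)
    for i in range(n):
        t_i = (i + 1) * h                # collocation point
        acc = h * sum(kernel(t_i, s_mid[j]) * x[j] for j in range(i))
        x[i] = (rhs(t_i) - acc) / (h * kernel(t_i, s_mid[i]))
    return s_mid, x

# Sanity check with a known solution: K = 1, f(t) = t  =>  x(t) = 1.
s, x = solve_volterra_first_kind(lambda t, s: 1.0, lambda t: t, t_end=1.0, n=10)
print(np.allclose(x, 1.0))  # True
```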
Abstract:
There has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can neither consider several performance measures jointly nor compare multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, as the number of studied cases in such comparisons is usually small. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
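The Bayesian ingredient mentioned above can be illustrated with a small multinomial-Dirichlet sketch over joint win/loss outcomes on two measures. The category definitions ignore ties and the counts are purely illustrative, so this is a simplified stand-in for the paper's procedure rather than a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint outcomes when comparing algorithms A and B on two measures
# (say, accuracy and run time) over a collection of data sets.
# Categories: 0 = A better on both, 1 = A better on measure 1 only,
#             2 = A better on measure 2 only, 3 = B better on both.
counts = np.array([12, 5, 3, 10])   # purely illustrative, not real results
prior = np.ones(4)                  # symmetric Dirichlet prior

# Conjugacy: the posterior over category probabilities is Dirichlet(prior + counts).
theta = rng.dirichlet(prior + counts, size=100_000)

# Posterior probability that "A better on both" is the more probable joint outcome
# than "B better on both".
print(float((theta[:, 0] > theta[:, 3]).mean()))
```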
Abstract:
Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.
Abstract:
In visual surveillance, face detection can be an important cue for initializing tracking algorithms. Recent work in psychophysics hints at the importance of the local context of a face for robust detection, such as head contours and torso. This paper describes a detector that actively utilizes the idea of local context. The promise is to gain robustness that goes beyond the capabilities of traditional face detection, making it particularly interesting for surveillance. The performance of the proposed detector in terms of accuracy and speed is evaluated on data sets from PETS 2000 and PETS 2003 and compared to the object-centered approach. Particular attention is paid to the role of available image resolution.
Abstract:
Laser scanning is a terrestrial laser-imaging system that creates highly accurate three-dimensional images of objects for use in standard computer-aided design software packages. This report describes results of a pilot study to investigate the use of laser scanning for transportation applications in Iowa. After an initial training period on the use of the scanner and Cyclone software, pilot tests were performed on the following projects: an intersection and a railroad bridge, for training purposes; a section of highway, to determine elevation accuracy, and a pair of bridges, to determine the level of detail that can be captured; new concrete pavement, to determine smoothness; bridge beams, to determine camber for deck-loading calculations; a stockpile, to determine volume; and a borrow pit, to determine volume. Results show that it is possible to obtain the 2-6 mm precision claimed by the manufacturer with the laser scanner, compared to approximately one-inch precision with aerial photogrammetry using a helicopter. A cost comparison between helicopter photogrammetry and laser scanning showed that laser scanning was approximately 30 percent higher in cost, depending on assumptions. Laser scanning can become more competitive with helicopter photogrammetry by elevating the scanner on a boom truck and capturing both sides of a divided roadway at the same time. Two- and three-dimensional drawings were created in MicroStation for one of the scanned highway bridges. It was demonstrated that it is possible to create such drawings within the accuracy of this technology. It was discovered that a significant amount of time is necessary to convert point cloud images into drawings. As this technology matures, this task should become less time consuming. It appears that laser scanning technology does indeed have a place in the Iowa Department of Transportation design and construction toolbox. Based on results from this study, laser scanning can be used cost-effectively for preliminary surveys to develop TIN meshes of roadway surfaces. It also appears that this technique can be used quite effectively to measure bridge beam camber in a safer and quicker fashion than conventional approaches. Volume calculations are also possible using laser scanning. It seems that measuring quantities of rock could be an area where this technology would be quite beneficial, since accuracy is more important with this material than with soil. Other applications for laser scanning could include developing as-built drawings of historical structures such as the bridges of Madison County. This technology could also be useful where safety is a concern, such as accurately measuring the surface of a highway active with traffic or scanning the underside of a bridge damaged by a truck. It is recommended that the Iowa Department of Transportation initially rent the scanner when it is needed and purchase the software. With time, it may be cost-justifiable to purchase the scanner as well. Laser scanning consultants can also be hired, but at a higher cost.
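As a simple illustration of the volume calculations mentioned above, the sketch below grids a scanned point cloud and sums mean cell heights above a base elevation. It is a rough, hypothetical stand-in for the TIN-based volume computation referenced in the report, and the cell size and base elevation are user-supplied assumptions.

```python
import numpy as np

def grid_volume(points, cell_size=0.5, base_z=None):
    """Rough volume of a scanned stockpile or borrow pit: average the point heights
    above a base elevation in a regular x-y grid and sum the cell volumes."""
    pts = np.asarray(points, dtype=float)          # (n, 3) array of x, y, z
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    if base_z is None:
        base_z = z.min()
    ix = ((x - x.min()) / cell_size).astype(int)
    iy = ((y - y.min()) / cell_size).astype(int)
    shape = (ix.max() + 1, iy.max() + 1)
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    np.add.at(sums, (ix, iy), z - base_z)
    np.add.at(counts, (ix, iy), 1.0)
    occupied = counts > 0
    mean_heights = sums[occupied] / counts[occupied]
    return float(mean_heights.sum() * cell_size ** 2)
```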
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08