924 results for error model
Abstract:
The assessment of intellectual ability is a core competency in psychology. The results of intelligence tests have many potential implications and are used frequently as the basis for decisions about educational placements, eligibility for various services, and admission to specific groups. Given the importance of intelligence test scores, accurate test administration and scoring are essential; yet there is evidence of unacceptably high rates of examiner error. This paper discusses competency and postgraduate training in intelligence testing and presents a training model for postgraduate psychology students. The model aims to achieve high levels of competency in intelligence testing through a structured method of training, practice and feedback that incorporates peer support, self-reflection and multiple methods for evaluating competency.
Abstract:
Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap and the more recently proposed hook-and-loop resampling-based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but regains its power when a biased estimator is used.
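As an illustration of the comparison above, the sketch below scores candidate model orders with the MDL criterion using either a biased (maximum-likelihood) or an unbiased residual-variance estimator. It is a minimal Python sketch of the general idea, not the paper's experimental setup; the polynomial-regression setting and all names are assumptions.

```python
import numpy as np

def mdl(residuals, n_params, biased=True):
    """MDL = (N/2) ln(sigma^2) + (k/2) ln(N); sigma^2 comes from the biased
    (divide by N) or unbiased (divide by N - k) variance estimator."""
    N = len(residuals)
    rss = np.sum(residuals ** 2)
    sigma2 = rss / N if biased else rss / (N - n_params)
    return 0.5 * N * np.log(sigma2) + 0.5 * n_params * np.log(N)

def select_order(x, y, max_order, biased=True):
    """Return the polynomial order (number of coefficients) minimising MDL."""
    scores = []
    for k in range(1, max_order + 1):
        coef = np.polyfit(x, y, k - 1)          # model with k coefficients
        scores.append(mdl(y - np.polyval(coef, x), k, biased))
    return int(np.argmin(scores)) + 1
```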
Abstract:
In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Development starts from a set of very abstract models in the early stages, which are refined into more concrete models and finally, as a last step, into code. As early phases of development focus on different concepts from later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations, which provide support for reusing and automating recurring translation efforts. These transformations can typically only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In this case, the models get out of sync and no longer constitute a coherent description of the software system, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost-saving and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, every change to the target model must correspond to exactly one change to the source model. While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations, as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there might be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure, Round-Trip Engineering of non-injective transformations can be supported.
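The final step of that approach, combining a heuristic with best-first search to surface a handful of “good” source changes, can be pictured as a standard priority-queue loop. The sketch below is a generic Python illustration under assumed interfaces: heuristic stands in for the model metrics and expand for the abductive generation of candidate changes; neither reflects the thesis's actual implementation.

```python
import heapq

def best_source_changes(initial, heuristic, expand, k=5, budget=1000):
    """Best-first search over candidate source-model changes.
    heuristic(change) -> cost (lower = more plausible); expand(change) yields
    refined candidates. Returns up to k changes in ascending cost order."""
    counter = 0                  # tie-breaker so the heap never compares changes
    frontier = []
    for change in initial:
        heapq.heappush(frontier, (heuristic(change), counter, change))
        counter += 1
    results = []
    while frontier and budget > 0 and len(results) < k:
        cost, _, change = heapq.heappop(frontier)
        results.append((cost, change))
        for refined in expand(change):
            counter += 1
            heapq.heappush(frontier, (heuristic(refined), counter, refined))
        budget -= 1
    return results
```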
Abstract:
Background It remains unclear whether it is possible to develop an epidemic forecasting model for the transmission of dengue fever in Queensland, Australia. Objectives To examine the potential impact of El Niño/Southern Oscillation on the transmission of dengue fever in Queensland, Australia and explore the possibility of developing a forecast model of dengue fever. Methods Data on the Southern Oscillation Index (SOI), an indicator of El Niño/Southern Oscillation activity, were obtained from the Australian Bureau of Meteorology. Numbers of dengue fever cases notified and numbers of postcode areas with dengue fever cases between January 1993 and December 2005 were obtained from Queensland Health, and relevant population data were obtained from the Australian Bureau of Statistics. A multivariate Seasonal Auto-regressive Integrated Moving Average (SARIMA) model was developed and validated by dividing the data file into two datasets: the data from January 1993 to December 2003 were used to construct a model and those from January 2004 to December 2005 were used to validate it. Results A decrease in the average SOI (i.e., warmer conditions) during the preceding 3–12 months was significantly associated with an increase in the monthly numbers of postcode areas with dengue fever cases (β=−0.038; p = 0.019). Predicted values from the SARIMA model were consistent with the observed values in the validation dataset (root-mean-square percentage error: 1.93%). Conclusions Climate variability is directly and/or indirectly associated with dengue transmission, and the development of an SOI-based epidemic forecasting system is possible for dengue fever in Queensland, Australia.
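A seasonal ARIMA model with an exogenous climate regressor, as described above, can be fitted with standard time-series tooling. The Python sketch below uses statsmodels' SARIMAX with an assumed DataFrame layout and assumed model orders; it mirrors the paper's train/validate split and its root-mean-square percentage error check, but the orders (1,0,1)(1,0,1,12) are illustrative only.

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# 'df' is assumed: a monthly-indexed pandas DataFrame with columns 'postcodes'
# (postcode areas reporting dengue) and 'soi_lag' (SOI averaged over the
# preceding 3-12 months)
train = df.loc['1993-01':'2003-12']
test = df.loc['2004-01':'2005-12']

model = SARIMAX(train['postcodes'], exog=train[['soi_lag']],
                order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
res = model.fit(disp=False)

pred = res.get_forecast(steps=len(test), exog=test[['soi_lag']]).predicted_mean
rmspe = (((100 * (pred - test['postcodes']) / test['postcodes']) ** 2).mean()) ** 0.5
```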
Abstract:
Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models; finding and adapting these models may thus be more effective and less error-prone than developing them from scratch. Since process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up this evaluation process. To make our approach more applicable, we consider the semantic similarity between labels. Experiments are conducted to demonstrate that our approach is efficient.
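One simple way to realise such an index is to map normalised task labels to the models that contain them, folding label synonyms into a canonical form so that semantically similar labels hit the same index entry. The Python sketch below is a toy illustration with an assumed synonym table; the paper's actual similarity measure and index structure are not reproduced.

```python
from collections import defaultdict

SYNONYMS = {'check': 'verify', 'examine': 'verify'}  # assumed similarity table

def normalise(label):
    """Lower-case, tokenise and fold synonyms into canonical terms."""
    return [SYNONYMS.get(tok, tok) for tok in label.lower().split()]

def build_index(models):
    """models: {model_name: [task labels]} -> inverted index over label tokens."""
    index = defaultdict(set)
    for name, labels in models.items():
        for label in labels:
            for tok in normalise(label):
                index[tok].add(name)
    return index

def candidates(index, query_labels):
    """Only models sharing at least one canonical token need full evaluation."""
    hits = set()
    for label in query_labels:
        for tok in normalise(label):
            hits |= index.get(tok, set())
    return hits
```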
Abstract:
As order dependencies between process tasks can become complex, it is easy to make mistakes in process model design, especially behavioral ones such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with the correction of the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Via this technique, a number of process model alternatives are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that response times are short.
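The core of such a technique is the standard simulated-annealing loop: propose a local edit, accept it if it reduces the number of behavioural errors, and occasionally accept a worse model to escape local optima. The Python sketch below shows that loop under assumed interfaces (count_errors for the soundness check, neighbours for candidate model edits); it is a generic illustration, not the paper's implementation.

```python
import math
import random

def anneal(model, count_errors, neighbours, t0=1.0, cooling=0.95, steps=500):
    """Search for an error-free variant of 'model'. count_errors(m) -> number of
    behavioural errors (e.g. deadlocks); neighbours(m) -> list of edited models."""
    current, cost = model, count_errors(model)
    best, best_cost = current, cost
    temp = t0
    for _ in range(steps):
        candidate = random.choice(neighbours(current))
        c = count_errors(candidate)
        # always accept improvements; accept worse models with falling probability
        if c <= cost or random.random() < math.exp((cost - c) / max(temp, 1e-9)):
            current, cost = candidate, c
            if c < best_cost:
                best, best_cost = candidate, c
        if best_cost == 0:           # a sound alternative has been found
            break
        temp *= cooling
    return best
```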
Abstract:
Car-following models play a critical role in all microscopic traffic simulation models. Current microscopic simulation models are unable to mimic the unsafe behaviour of drivers, as most are based on presumptions about drivers' safe behaviour. The Gipps model is a widely used car-following model embedded in different micro-simulation models. This paper examines the Gipps car-following model to investigate ways of improving the model for safety studies. It puts forward some suggestions to modify the Gipps model to improve its capability to simulate unsafe vehicle movements (vehicles with safety indicators below critical thresholds). The result is one step towards facilitating the assessment and prediction of safety on motorways using microscopic simulation. NGSIM, a rich source of vehicle trajectory data for a motorway, is used to extract relatively risky events. Short following headways and Time To Collision are used to identify critical safety events within the traffic flow. The results show that, to a certain extent, the modified car-following model predicts the unsafe trajectories with smaller error values than the generic Gipps model.
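For reference, the Gipps (1981) update chooses the minimum of an acceleration-limited and a braking-limited speed at each step. The Python sketch below implements that update together with a Time To Collision helper; the parameter values are illustrative defaults, not those calibrated in the paper, and the safety-oriented modifications the paper proposes are not shown.

```python
import math

def gipps_speed(v, v_lead, gap, a=1.7, b=-3.0, b_hat=-3.5, V=25.0, tau=0.66):
    """One Gipps speed update (m/s). gap is the net spacing to the leader (m);
    b and b_hat (the assumed leader braking rate) are negative."""
    v_acc = v + 2.5 * a * tau * (1 - v / V) * math.sqrt(0.025 + v / V)
    disc = (b * tau) ** 2 - b * (2 * gap - v * tau - v_lead ** 2 / b_hat)
    v_brk = b * tau + math.sqrt(max(disc, 0.0))
    return max(0.0, min(v_acc, v_brk))

def time_to_collision(gap, v, v_lead):
    """TTC (s) is only defined while the follower is closing on the leader."""
    return gap / (v - v_lead) if v > v_lead else float('inf')
```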
Abstract:
This paper establishes a practical stability result for discrete-time output feedback control involving mismatch between the exact system to be stabilised and the approximating system used to design the controller. The practical stability is in the sense of an asymptotic bound on the amount of error bias introduced by the model approximation, and is established using local consistency properties of the systems. Importantly, the practical stability established here does not require the approximating system to be of the same model type as the exact system. Examples are presented to illustrate the nature of our practical stability result.
Abstract:
The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view. In this latter case, models facilitate stakeholder communication and software system design. Research has investigated several proposals for business process model measures, from a rather correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet design decisions usually have to build on thresholds, which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved by providing measures alone; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
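A common way to turn a measure into a threshold, and one adaptation of the ROC-curve idea mentioned above, is to pick the cut-off that maximises the Youden index (sensitivity + specificity - 1). The Python sketch below shows this on invented data; the paper's specific adaptation of the ROC method and its model collection are not reproduced.

```python
import numpy as np
from sklearn.metrics import roc_curve

# invented data: one structural measure per model (e.g. number of nodes) and
# a flag recording whether the model was found to contain errors
size = np.array([12, 18, 25, 31, 40, 47, 55, 63, 72, 90])
has_error = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(has_error, size)
youden = tpr - fpr                           # sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden)]       # "models above this size are risky"
```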
Abstract:
In this paper, we examine the use of a Kalman filter to aid in the mission planning process for autonomous gliders. Given a set of waypoints defining the planned mission and a prediction of the ocean currents from a regional ocean model, we present an approach to determine the best constant time interval at which the glider should surface in order to maintain a prescribed tracking error while minimizing time on the ocean surface. We assume basic parameters for the execution of a given mission, and provide the results of the Kalman filter mission planning approach. These results are compared with previous executions of the given mission scenario.
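The planning question reduces to tracking how position uncertainty evolves in a Kalman filter: the variance grows during dead reckoning underwater and collapses at each GPS fix on the surface. The scalar Python sketch below finds the longest constant surfacing interval whose worst-case RMS error stays within a bound; the scalar simplification and all numbers are assumptions for illustration.

```python
import numpy as np

def worst_rms_error(interval, q=0.5, r=25.0, dt=60.0, cycles=20):
    """Scalar Kalman variance recursion: between surfacings only the prediction
    step runs (variance grows by q*dt); surfacing fuses a GPS fix of variance r."""
    P, worst = r, 0.0
    for _ in range(cycles):
        for _ in range(int(interval // dt)):   # underwater: predict only
            P += q * dt
        worst = max(worst, np.sqrt(P))         # error peaks just before the fix
        K = P / (P + r)                        # surfacing: measurement update
        P = (1 - K) * P
    return worst

# longest constant interval (s) keeping worst-case RMS error under 50 m
feasible = [i for i in np.arange(600, 7200, 600) if worst_rms_error(i) <= 50.0]
best_interval = max(feasible) if feasible else None
```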
Abstract:
Many academic researchers have conducted studies on the selection of the design-build (DB) delivery method; however, there are few studies on the selection of DB operational variations, which poses challenges to many clients. The selection of a DB operational variation is a multi-criteria decision-making process that requires clients to objectively evaluate the performance of each DB operational variation with reference to the selection criteria. This evaluation process is often characterized by subjectivity and uncertainty. In order to resolve this deficiency, the current investigation aimed to establish a fuzzy multi-criteria decision-making (FMCDM) model for selecting the most suitable DB operational variation. A three-round Delphi questionnaire survey was conducted to identify the selection criteria and their relative importance. A fuzzy set theory approach, namely the modified horizontal approach with the bisector error method, was applied to establish the fuzzy membership functions, which enable clients to perform quantitative calculations on the performance of each DB operational variation. The FMCDM model was developed using the weighted mean method to aggregate the overall performance of DB operational variations with regard to the selection criteria. The proposed FMCDM model enables clients to perform quantitative calculations in a fuzzy decision-making environment and provides a useful tool for coping with different project attributes.
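The aggregation step can be pictured with triangular fuzzy numbers: each variation's rating on each criterion is a triple (low, modal, high), the weighted mean combines the triples componentwise, and a defuzzified score ranks the variations. The Python sketch below uses invented ratings and weights; the paper's Delphi-derived criteria and its modified horizontal/bisector-error membership functions are not reproduced.

```python
import numpy as np

# invented triangular fuzzy ratings (low, modal, high) on two criteria
ratings = {
    'variation A': [(5, 7, 9), (3, 5, 7)],
    'variation B': [(3, 5, 7), (5, 7, 9)],
}
weights = np.array([0.6, 0.4])   # assumed criterion weights (sum to 1)

def weighted_mean(fuzzy_scores, w):
    """Aggregate (criteria x 3) fuzzy ratings componentwise into one triple."""
    return tuple(w @ np.array(fuzzy_scores, dtype=float))

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    low, modal, high = tfn
    return (low + modal + high) / 3.0

best = max(ratings, key=lambda v: defuzzify(weighted_mean(ratings[v], weights)))
```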
Abstract:
Finite element analyses of the human body in seated postures require digital models capable of providing accurate and precise predictions of the tissue-level response of the body in the seated posture. To achieve such models, the human anatomy must be represented with high fidelity. This information can readily be defined using medical imaging techniques such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Current practices for constructing digital human models from magnetic resonance (MR) images acquired in a lying-down (supine) posture have reduced the error in the geometric representation of human anatomy relative to reconstructions based on data from cadaveric studies. Nonetheless, the significant differences between seated and supine postures in segment orientation, soft-tissue deformation and soft-tissue strain create a need for data obtained in postures more similar to the application posture. In this study, we present a novel method for creating digital human models based on seated MR data. An adult male volunteer was scanned in a simulated driving posture using a FONAR 0.6T upright MRI scanner with a T1 scanning protocol. To compensate for unavoidable image distortion near the edges of the scan volume, images of the same anatomical structures were obtained in transverse and sagittal planes. Combinations of transverse and sagittal images were used to reconstruct the major anatomical features from the buttocks through the knees, including bone, muscle and fat tissue perimeters, using Solidworks® software. For each MR image, B-splines were created as contours for the anatomical structures of interest, and LOFT commands were used to interpolate between the generated B-splines. The reconstruction of the pelvis from MR data was enhanced by the use of a template model generated in previous work from CT images. A non-rigid registration algorithm was used to fit the pelvis template to the MR data. Additionally, MR image processing was conducted on both the left and right sides of the model due to the intended asymmetric posture of the volunteer during the MR measurements. The presented subject-specific, three-dimensional model of the buttocks and thighs will add value to optimisation cycles in automotive seat development when used to simulate human interaction with automotive seats.
Abstract:
A number of mathematical models investigating certain aspects of the complicated process of wound healing have been reported in the literature in recent years. However, effective numerical methods, and supporting error analysis, for the fractional equations that describe the process of wound healing are still limited. In this paper, we consider numerical simulation of a fractional model based on the coupled advection-diffusion equations for cell and chemical concentration in a polar coordinate system. The space fractional derivatives are defined in the left and right Riemann-Liouville sense. Fractional orders in the advection and diffusion terms belong to the intervals (0, 1) or (1, 2], respectively. Several numerical techniques are used. Firstly, the coupled advection-diffusion equations are decoupled into a single space-fractional advection-diffusion equation in a polar coordinate system. Secondly, we propose a new implicit difference method for simulating this equation, using the equivalence of the Riemann-Liouville and Grünwald-Letnikov fractional derivative definitions. Thirdly, its stability and convergence are discussed. Finally, some numerical results are given to demonstrate the theoretical analysis.
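The key ingredient of such an implicit scheme is the (shifted) Grünwald-Letnikov approximation of the Riemann-Liouville derivative, whose coefficients follow a simple recurrence. The Python sketch below assembles one implicit time step for a left-sided space-fractional diffusion term with order alpha in (1, 2] and homogeneous Dirichlet boundaries; it is a minimal one-dimensional illustration, not the paper's polar-coordinate scheme.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients g_k = (-1)^k * C(alpha, k)."""
    g = np.zeros(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def implicit_step(u, alpha, K, dt, h):
    """One step of u_t = K * D_x^alpha u (left Riemann-Liouville, alpha in (1,2])
    using the shifted Grünwald approximation and zero Dirichlet boundaries."""
    N = len(u)
    g = gl_weights(alpha, N + 1)
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(min(N, i + 2)):     # j = i - k + 1 for k = 0..i+1
            A[i, j] = g[i - j + 1]
    M = np.eye(N) - dt * K * h ** (-alpha) * A
    return np.linalg.solve(M, u)
```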
Abstract:
Passive air samplers (PAS) consisting of polyurethane foam (PUF) disks were deployed at 6 outdoor air monitoring stations in different land use categories (commercial, industrial, residential and semi-rural) to assess the spatial distribution of polybrominated diphenyl ethers (PBDEs) in the Brisbane airshed. Air monitoring sites covered an area of 1143 km2 and PAS were allowed to accumulate PBDEs in the city's airshed over three consecutive seasons commencing in the winter of 2008. The average sum of five (∑5) PBDE (BDEs 28, 47, 99, 100 and 209) levels was highest at the commercial and industrial sites (12.7 ± 5.2 ng PUF−1), which were relatively close to the city centre, and was a factor of 8 higher than at the residential and semi-rural sites located in outer Brisbane. To estimate the magnitude of the urban ‘plume’, an empirical exponential decay model was fitted to the PAS data vs. distance from the CBD, with the best correlation observed when the particulate-bound BDE-209 was not included (∑5-209) (r2 = 0.99), rather than ∑5 (r2 = 0.84). At 95% confidence intervals, the model predicts that, regardless of site characterization, the ∑5-209 concentration in a PAS sample taken 4–10 km from the city centre would be half that of a sample taken at the city centre, reaching a baseline or plateau (0.6 to 1.3 ng PUF−1) approximately 30 km from the CBD. The observed exponential decay in ∑5-209 levels over distance corresponded with Brisbane's decreasing population density (persons/km2) away from the city centre. The residual error associated with the model increased significantly when BDE-209 levels were included, primarily due to the highest level (11.4 ± 1.8 ng PUF−1) being consistently detected at the industrial site, indicating a potential primary source at this site. Active air samples collected alongside the PAS at the industrial air monitoring site (B) indicated that BDE-209 dominated the congener composition and was entirely associated with the particulate phase. This study demonstrates that PAS are effective tools for monitoring citywide regional differences; however, interpretation of spatial trends for POPs that are predominantly associated with the particulate phase, such as BDE-209, may be restricted to identifying ‘hotspots’ rather than broad spatial trends.
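The distance-decay fit described above amounts to regressing concentration on distance with a three-parameter exponential model, C(d) = C0*exp(-k*d) + baseline, from which the half-distance of the urban signal follows as ln(2)/k. The Python sketch below does this with scipy on invented data; the study's actual measurements are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(d, c0, k, baseline):
    """Urban signal decaying with distance from the CBD plus a regional floor."""
    return c0 * np.exp(-k * d) + baseline

# invented (distance from CBD in km, sum-5-minus-BDE-209 in ng/PUF) pairs
dist = np.array([1.0, 4.0, 8.0, 15.0, 25.0, 35.0])
conc = np.array([12.0, 7.5, 4.1, 2.0, 1.1, 0.9])

(c0, k, baseline), _ = curve_fit(decay, dist, conc, p0=(10.0, 0.1, 1.0))
half_distance = np.log(2) / k   # km at which the urban component halves
```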
Abstract:
Introduction QC and EQA are integral to good pathology laboratory practice. Medical Laboratory Science students undertake a project exploring the internal QC and EQA procedures used in chemical pathology laboratories. Each student represents an individual lab, and the class group represents the peer group of labs performing the same assay using the same method. Methods Using a manual BCG assay for serum albumin, normal and abnormal controls are run with a patient sample over 7 weeks. The QC results are assessed each week using calculated z-scores and both 2S and 3S control rules to determine whether a run is ‘in control’. At the end of the 7 weeks, a completed LJ chart is assessed using the Westgard multirules. Students investigate causes of error and the implications for both lab practice and patient care if runs are not ‘in control’. Twice in the 7 weeks, two EQA samples (with target values unknown) are assayed alongside the weekly QC and patient samples. Results from each student are collated and form the basis of an EQA program. Allowable limits of performance (ALP) are provided, and students complete a Youden plot, which is used to analyse the performance of each ‘lab’ and of the method, to identify bias. Students explore the possible clinical implications of a biased method and address the actions that should be taken if a lab is not in consensus with the peer group. Conclusion This project is a model of ‘real world’ practice in which students demonstrate an understanding of the importance of QC procedures in a pathology laboratory, apply and interpret statistics, QC rules and charts, apply critical thinking and analytical skills to quality performance data to make recommendations for practice, and improve their technical competence and confidence.
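The weekly in-control decision the students make can be expressed in a few lines: standardise each control result as a z-score and apply the 2S (warning) and 3S (rejection) limits, which the full Westgard multirules then extend with across-run checks. A minimal Python sketch, with an invented albumin control mean and SD:

```python
def z_score(result, mean, sd):
    """Standardise a control result against the assay's established mean and SD."""
    return (result - mean) / sd

def assess_run(results, mean, sd):
    """2S/3S rules for one run: |z| > 3 rejects, |z| > 2 warns. The Westgard
    multirules add across-run checks (2-2s, R-4s, 4-1s, 10-x) on the LJ chart."""
    zs = [z_score(r, mean, sd) for r in results]
    if any(abs(z) > 3 for z in zs):
        status = 'reject'
    elif any(abs(z) > 2 for z in zs):
        status = 'warning'
    else:
        status = 'in control'
    return zs, status

# invented weekly results for the normal albumin control (g/L)
zs, status = assess_run([38.6, 37.2, 41.1], mean=38.0, sd=1.5)
```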