918 results for automated model-based feedback
Abstract:
As the learning paradigm shifts toward a more personalised learning process, users need dynamic feedback along their knowledge path. Learning Management Systems (LMSs) offer customised feedback that depends on the questions asked and the answers given. However, these LMSs are not designed to generate personalised feedback for the individual learner, tutor, and instructional designer. This paper presents an approach for generating constructive feedback for all stakeholders during a personalised learning process. The dynamic personalised feedback model generates feedback based on the learning objectives of the Learning Object. Feedback can be generated at the Learning Object level and at the Information Object level, for both the individual learner and the group. The group feedback is meant to help tutors and instructional designers improve the learning process.
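A minimal sketch of how such objective-keyed feedback could be generated at both levels; all class and field names here are illustrative assumptions, not the paper's design:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class InformationObject:
    objective: str   # the learning objective this IO assesses (assumed field)
    score: float     # learner's score on this IO, in [0, 1]

@dataclass
class LearningObject:
    name: str
    ios: list = field(default_factory=list)

def learner_feedback(lo, threshold=0.6):
    """Per-learner feedback: flag objectives scored below threshold."""
    return [f"Revisit '{io.objective}' in {lo.name}"
            for io in lo.ios if io.score < threshold]

def group_feedback(learner_results, threshold=0.6):
    """Tutor/designer view: objectives many learners struggled with."""
    misses = Counter(io.objective
                     for lo in learner_results
                     for io in lo.ios if io.score < threshold)
    return [f"Rework material for '{obj}' ({n} learners below threshold)"
            for obj, n in misses.most_common()]
```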
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space by an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing these concepts.
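A small sketch of the structure-preservation check described above, assuming networkx as the graph library; the 'component' attribute and the toy architectures are invented for illustration:

```python
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

def preserves_structure(seed, candidate):
    """A candidate counts as a valid evolution step here if it is
    isomorphic to the seed under a match that also preserves the
    (assumed) 'component' node attribute."""
    return nx.is_isomorphic(seed, candidate,
                            node_match=categorical_node_match("component", None))

# Seed solution: a toy three-module architecture.
seed = nx.Graph()
seed.add_nodes_from([("ui", {"component": "view"}),
                     ("core", {"component": "logic"}),
                     ("db", {"component": "store"})])
seed.add_edges_from([("ui", "core"), ("core", "db")])

# Candidate from the search space: renamed modules, same shape and roles.
cand = nx.Graph()
cand.add_nodes_from([("web", {"component": "view"}),
                     ("svc", {"component": "logic"}),
                     ("sql", {"component": "store"})])
cand.add_edges_from([("web", "svc"), ("svc", "sql")])

print(preserves_structure(seed, cand))  # True: structure and roles preserved
```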
Abstract:
Policy hierarchies and automated policy refinement are powerful approaches to simplifying the administration of security services in complex network environments. A crucial issue for the practical use of these approaches is ensuring the validity of the policy hierarchy: since the policy sets for the lower levels are automatically derived from the abstract policies (defined by the modeller), we must be sure that the derived policies uphold the high-level ones. This paper builds upon previous work on Model-based Management, particularly on the Diagram of Abstract Subsystems approach, and goes further to propose a formal validation approach for the policy hierarchies yielded by the automated policy refinement process. We establish general validation conditions for a multi-layered policy model, i.e., necessary and sufficient conditions that a policy hierarchy must satisfy so that the lower-level policy sets are valid refinements of the higher-level policies according to the criteria of consistency and completeness. Relying upon these validation conditions and upon axioms about the model's representativeness, two theorems are proved to ensure compliance between the resulting system behaviour and the abstract policies that are modelled.
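A toy sketch of the consistency/completeness idea, not the paper's formal conditions: policies are reduced to (subject, service, decision) triples, and the refinement map from abstract subjects to concrete hosts is assumed given:

```python
FLIP = {"permit": "deny", "deny": "permit"}

def derived_rules(abstract_policy, refine):
    """Expand one abstract rule into the concrete rules it implies."""
    subj, svc, dec = abstract_policy
    return {(host, svc, dec) for host in refine[subj]}

def valid_refinement(high, low, refine):
    """Consistency: no low-level rule reverses an implied decision.
    Completeness: every implied rule is realized in the low-level set."""
    expected = set().union(*(derived_rules(p, refine) for p in high))
    consistent = not any((s, svc, FLIP[d]) in expected for (s, svc, d) in low)
    complete = expected <= low
    return consistent and complete

refine = {"internal_net": {"host_a", "host_b"}}
high = {("internal_net", "http", "permit")}
low = {("host_a", "http", "permit"), ("host_b", "http", "permit")}
print(valid_refinement(high, low, refine))  # True
```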
Abstract:
The predominant knowledge-based approach to automated model construction, compositional modelling, employs a set of models of particular functional components. Its inference mechanism takes a scenario describing the constituent interacting components of a system and translates it into a useful mathematical model. This paper presents a novel compositional modelling approach aimed at building model repositories. It furthers the field in two respects. Firstly, it expands the application domain of compositional modelling to systems that cannot easily be described in terms of interacting functional components, such as ecological systems. Secondly, it enables the incorporation of user preferences into the model selection process. These features are achieved by casting the compositional modelling problem as an activity-based dynamic preference constraint satisfaction problem, where the dynamic constraints describe the restrictions imposed over the composition of partial models and the preferences correspond to those of the user of the automated modeller. In addition, the preference levels are represented by symbolic values that differ by orders of magnitude.
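A brute-force sketch of preference-guided model composition under dynamic constraints; the phenomena, candidate partial models, and the constraint are invented, and the order-of-magnitude preference levels are encoded as powers of ten so one higher-level preference outweighs any number of lower-level ones:

```python
from itertools import product

# Candidate partial models per phenomenon, each with a preference level.
candidates = {
    "growth":    [("logistic", 2), ("exponential", 1)],
    "predation": [("lotka_volterra", 2), ("constant_rate", 0)],
}

def compatible(model):
    """Toy dynamic constraint: constant-rate predation is only admissible
    together with exponential growth."""
    if model["predation"] == "constant_rate":
        return model["growth"] == "exponential"
    return True

best, best_score = None, float("-inf")
for combo in product(*candidates.values()):
    model = {phen: name for phen, (name, _lvl) in zip(candidates, combo)}
    if not compatible(model):
        continue
    # Order-of-magnitude scoring: 10**level per chosen partial model.
    score = sum(10 ** lvl for _name, lvl in combo)
    if score > best_score:
        best, best_score = model, score

print(best)  # {'growth': 'logistic', 'predation': 'lotka_volterra'}
```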
Abstract:
PURPOSE: To assess the acquisition of suture skills by novice medical students training on an ethylene-vinyl acetate bench model. METHODS: Sixteen medical students without previous surgical experience (novices) were randomly divided into two groups. For one hour, group A practised sutures on an ethylene-vinyl acetate (EVA) bench model with instructor feedback, while group B (control) received faculty-directed training based on books and instructional videos. All students underwent both pre- and post-tests, performing two- and three-dimensional sutures on ox tongue. All recorded performances were evaluated by two blinded evaluators using the Global Rating Scale. RESULTS: Although both groups performed better (p<0.05) in the post-test than in the pre-test, analysis of the post-test showed that group A (EVA) performed better (p<0.05) than group B (control). CONCLUSION: The ethylene-vinyl acetate bench model allowed the novice students to acquire suture skills faster than the traditional model of teaching.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Frequency spectrums are inefficiently utilized, and cognitive radio has been proposed to utilize these spectrums fully. The central idea of cognitive radio is to allow the secondary user to use the spectrum concurrently with the primary user under the constraint of minimum interference. However, designing a model with minimum interference is a challenging task. In this paper, a transmission model based on the cyclic generalized polynomial codes discussed in [2] and [15] is proposed to improve spectrum utilization. The proposed model assures interference-free data transmission for the primary and secondary users. Furthermore, analytical results are presented to show that the proposed model utilizes the spectrum more efficiently than traditional models.
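A heavily hedged sketch of generic cyclic-code encoding as polynomial multiplication modulo x^n - 1 over GF(2); this illustrates the codeword arithmetic underlying such transmission models, not the specific cyclic generalized polynomial codes of [2] and [15]:

```python
def polymul_mod(a, b, n):
    """Multiply bit polynomials a, b (lists of 0/1, index = degree)
    modulo x^n - 1 over GF(2)."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    out[(i + j) % n] ^= 1
    return out

n = 7                # code length
g = [1, 1, 0, 1]     # generator 1 + x + x^3 of the cyclic (7,4) Hamming code
msg = [1, 0, 1, 1]   # 4 message bits as polynomial coefficients
print(polymul_mod(msg, g, n))  # codeword: [1, 1, 1, 1, 1, 1, 1]
```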
Abstract:
Robust and accurate identification of intervertebral discs from low-resolution, sparse MRI scans is essential for automated scan planning of MRI spine exams. This paper presents a graphical model-based solution for detecting both the positions and the orientations of intervertebral discs in low-resolution, sparse MRI scans. Compared with existing graphical model-based methods, the proposed method needs no training process or training data, and it can automatically determine the number of vertebrae visible in the image. Experiments on 25 low-resolution, sparse spine MRI data sets verified its performance.
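A sketch of the generic chain graphical-model idea (assumed, not the authors' exact formulation): unary terms score how disc-like each candidate location looks, pairwise terms score the spacing between neighbouring discs, and dynamic programming extracts the best chain; all scores below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_discs = 40, 5
positions = np.sort(rng.uniform(0, 200, n_candidates))  # candidate z-coords (mm)
unary = rng.uniform(0, 1, (n_discs, n_candidates))      # appearance scores

def pairwise(z_prev, z_cur, mean_gap=35.0, tol=10.0):
    """Prefer neighbouring discs about one vertebra apart."""
    return -((z_cur - z_prev - mean_gap) / tol) ** 2

# Viterbi-style dynamic programming over the chain.
score = unary[0].copy()
back = np.zeros((n_discs, n_candidates), dtype=int)
for d in range(1, n_discs):
    trans = score[None, :] + np.array(
        [[pairwise(positions[j], positions[i]) for j in range(n_candidates)]
         for i in range(n_candidates)])
    back[d] = trans.argmax(axis=1)
    score = unary[d] + trans.max(axis=1)

best = [int(score.argmax())]
for d in range(n_discs - 1, 0, -1):
    best.append(int(back[d][best[-1]]))
print([round(positions[i], 1) for i in reversed(best)])  # detected disc z-coords
```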
Abstract:
Purpose: Accurate three-dimensional (3D) models of lumbar vertebrae can enable image-based 3D kinematic analysis. The common approach to deriving 3D models is direct segmentation of CT or MRI datasets. However, these have the disadvantages of being expensive, time-consuming and/or inducing high radiation doses to the patient. In this study, we present a technique to automatically reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image. Methods: Our technique is based on a hybrid 2D/3D deformable registration strategy combining a landmark-to-ray registration with a statistical shape model-based 2D/3D reconstruction scheme. Fig. 1 shows the different stages of the reconstruction process. Four cadaveric lumbar spine segments (twelve lumbar vertebrae in total) were used to validate the technique. To evaluate the reconstruction accuracy, the surface models reconstructed from the lateral fluoroscopic images were compared to the associated ground truth data derived from a 3D CT-scan reconstruction technique. For each case, a surface-based matching was first used to recover the scale and the rigid transformation between the reconstructed surface model and the ground truth model. Results: Our technique successfully reconstructed 3D surface models of all twelve vertebrae. After recovering the scale and the rigid transformation between the reconstructed surface models and the ground truth models, the average error of the 2D/3D surface model reconstruction over the twelve lumbar vertebrae was found to be 1.0 mm. The errors of reconstructing the surface models of all twelve vertebrae are shown in Fig. 2. The mean errors of the reconstructed surface models in comparison to their associated ground truths after iterative scaled rigid registration ranged from 0.7 mm to 1.3 mm, and the root-mean-squared (RMS) errors ranged from 1.0 mm to 1.7 mm. The average mean reconstruction error was 1.0 mm. Conclusion: An accurate, scaled 3D reconstruction of the lumbar vertebra can be obtained from a single lateral fluoroscopic image using a statistical shape model-based 2D/3D reconstruction technique. Future work will focus on applying the reconstructed model to 3D kinematic analysis of lumbar vertebrae, an extension of our previously reported image-based kinematic analysis. The developed method also has potential applications in surgical planning and navigation.
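A sketch of the evaluation step only (assumed, not the authors' code): mean and RMS reconstruction errors computed as nearest-neighbour distances from reconstructed surface points to ground-truth surface points, after the scaled rigid alignment has been applied:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_errors(reconstructed_pts, ground_truth_pts):
    """Both inputs: (N, 3) arrays of surface vertices in mm.
    Returns (mean error, RMS error) over nearest-neighbour distances."""
    dists, _ = cKDTree(ground_truth_pts).query(reconstructed_pts)
    return dists.mean(), np.sqrt((dists ** 2).mean())

# Toy data: ground truth plus ~1 mm noise stands in for a real vertebra mesh.
rng = np.random.default_rng(1)
gt = rng.uniform(0, 40, (2000, 3))
recon = gt + rng.normal(0, 1.0, gt.shape)
mean_err, rms_err = surface_errors(recon, gt)
print(f"mean {mean_err:.2f} mm, RMS {rms_err:.2f} mm")
```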
Abstract:
This paper presents an automated solution for the precise detection of fiducial screws in three-dimensional (3D) Computerized Tomography (CT)/Digital Volume Tomography (DVT) data for image-guided ENT surgery. Unlike previously published solutions, we regard the detection of fiducial screws in CT/DVT volume data as a pose estimation problem and have thus developed a model-based solution. Starting from a user-supplied initialization, our solution detects the fiducial screws by iteratively matching a computer-aided design (CAD) model of the fiducial screw to features extracted from the CT/DVT data. We validated our solution on one conventional CT dataset and on five DVT volume datasets, for a total of 24 detected fiducial screws. Our experimental results indicate that the proposed solution achieves much higher reproducibility and precision than manual detection. Further comparison shows that the proposed solution produces better results on the DVT datasets than on the conventional CT dataset.
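A hedged sketch of iterative model-to-feature matching using a generic ICP-style loop with a Kabsch (SVD) rigid update; this stands in for the paper's CAD-model matching, whose details are not given here:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm via SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def icp(model_pts, feature_pts, iters=30):
    """Iteratively match model points to nearest extracted features and
    update the pose estimate."""
    tree = cKDTree(feature_pts)
    cur = model_pts.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                  # nearest-feature matching
        R, t = best_rigid(cur, feature_pts[idx])  # incremental pose update
        cur = cur @ R.T + t
    return cur

# Toy check: recover a small known shift; the residual should be near zero.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, (200, 3))
moved = pts + np.array([0.4, -0.2, 0.3])
print(np.abs(icp(pts, moved) - moved).max())
```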
Abstract:
This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities following a stochastic approach: intensities are generated with a Monte Carlo method from a lognormal distribution whose parameters have been predetermined from engine tests and depend on spark timing, engine speed and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize the knock level and the reference level respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of the knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method for detecting engine knock under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
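A sketch of the stochastic plant idea with invented parameter values: lognormal cycle-to-cycle knock intensities generated Monte Carlo-style, then reduced to a scalar knock factor by comparing a high-percentile intensity estimate against a reference level; the percentile choices are assumptions, not the report's algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_knock_intensities(n_cycles, mu, sigma):
    """KSS stand-in: lognormal knock intensity per engine cycle; mu and
    sigma would come from engine tests at a given speed/load/spark timing."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

def knock_factor(intensities, hi_pct=95, lo_pct=50):
    """KDM stand-in: ratio of the high knock level to the reference level."""
    hi = np.percentile(intensities, hi_pct)
    lo = np.percentile(intensities, lo_pct)
    return hi / lo

borderline = simulate_knock_intensities(1000, mu=0.0, sigma=0.4)
knocking = simulate_knock_intensities(1000, mu=0.8, sigma=0.9)
print(knock_factor(borderline), knock_factor(knocking))  # knocking case is larger
```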
Abstract:
INTRODUCTION: Native MR angiography (N-MRA) is considered an imaging alternative to contrast-enhanced MR angiography (CE-MRA) for patients with renal insufficiency. Lower intraluminal contrast in N-MRA often leads to failure of the segmentation process in commercial algorithms. This study introduces an in-house 3D model-based segmentation approach used to compare both sequences by automatic 3D lumen segmentation, allowing for evaluation of differences in aortic lumen diameters as well as differences in length between the two acquisition techniques at every possible location. METHODS AND MATERIALS: Sixteen healthy volunteers underwent 1.5-T MR angiography (MRA). For each volunteer, two different MR sequences were performed; CE-MRA: a gradient-echo Turbo FLASH sequence, and N-MRA: a respiratory- and cardiac-gated, T2-weighted 3D SSFP sequence. Datasets were segmented using a 3D model-based ellipse-fitting approach with a single seed point placed manually above the celiac trunk. The segmented volumes were manually cropped from the left subclavian artery to the celiac trunk to avoid error due to side branches. Diameters, volumes and centerline lengths were computed for intraindividual comparison. For statistical analysis, the Wilcoxon signed-rank test was used. RESULTS: The average centerline length obtained with N-MRA was 239.0±23.4 mm, compared to 238.6±23.5 mm for CE-MRA, without significant difference (P=0.877). The average maximum diameter obtained with N-MRA was 25.7±3.3 mm, compared to 24.1±3.2 mm for CE-MRA (P<0.001). In agreement with the difference in diameters, volumes obtained with N-MRA (100.1±35.4 cm³) were consistently and significantly larger than with CE-MRA (89.2±30.0 cm³) (P<0.001). CONCLUSIONS: 3D morphometry shows highly similar centerline lengths for N-MRA and CE-MRA, but systematically higher diameters and volumes for N-MRA.
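A sketch of the statistical comparison only, with invented toy values: paired per-volunteer measurements from the two sequences compared with the Wilcoxon signed-rank test, as in the study's analysis:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
ce_diams = rng.normal(24.1, 3.2, 16)           # toy CE-MRA max diameters (mm)
n_diams = ce_diams + rng.normal(1.6, 0.5, 16)  # paired N-MRA values, systematically larger

stat, p = wilcoxon(n_diams, ce_diams)          # paired, non-parametric test
print(f"Wilcoxon statistic={stat:.1f}, p={p:.5f}")
```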
Abstract:
This paper proposes an automated 3D lumbar intervertebral disc (IVD) segmentation strategy for MRI data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical model-based approach. After that, a three-dimensional (3D) variable-radius soft tube model of the lumbar spine column is built to guide the 3D disc segmentation. The disc segmentation is achieved as a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets showed the robustness and accuracy of the proposed algorithm.
Abstract:
This paper proposes an automated three-dimensional (3D) lumbar intervertebral disc (IVD) segmentation strategy for Magnetic Resonance Imaging (MRI) data. Starting from two user-supplied landmarks, the geometrical parameters of all lumbar vertebral bodies and intervertebral discs are automatically extracted from a mid-sagittal slice using a graphical model-based template matching approach. Based on the estimated two-dimensional (2D) geometrical parameters, a 3D variable-radius soft tube model of the lumbar spine column is built by model fitting to the 3D data volume. Taking the geometrical information from the 3D lumbar spine column as constraints and as segmentation initialization, the disc segmentation is achieved by a multi-kernel diffeomorphic registration between a 3D template of the disc and the observed MRI data. Experiments on 15 patient data sets showed the robustness and accuracy of the proposed algorithm.
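A sketch of the geometric prior only: a variable-radius "soft tube" around the spine centerline, usable as a weight to constrain where the (omitted here) disc segmentation may operate; the centerline points, radii, and softness parameter are toy values, not the paper's fitted model:

```python
import numpy as np

centerline = np.array([[0., 0., z] for z in np.linspace(0, 150, 50)])  # mm
radii = np.linspace(15.0, 22.0, 50)  # tube radius varies along the column

def inside_soft_tube(pt, centerline, radii, softness=3.0):
    """Return a weight in (0, 1]: ~1 well inside the tube, ->0 outside.
    'softness' (mm) controls how quickly the weight decays at the wall."""
    d = np.linalg.norm(centerline - pt, axis=1)
    i = d.argmin()                      # nearest centerline sample
    overshoot = max(0.0, d[i] - radii[i])
    return float(np.exp(-(overshoot / softness) ** 2))

print(inside_soft_tube(np.array([5., 0., 75.]), centerline, radii))   # ~1.0 (inside)
print(inside_soft_tube(np.array([40., 0., 75.]), centerline, radii))  # ~0.0 (outside)
```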