10 results for linear machine modeling

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance: 80.00%

Publisher:

Abstract:

An accurate assessment of the computer skills of students is a pre-requisite for the success of any e-learning intervention. The aim of the present study was to objectively assess the computer literacy and attitudes of a group of Greek post-graduate students, using a task-oriented questionnaire developed and validated at the University of Malmö, Sweden. Fifty post-graduate students at the Athens University School of Dentistry took part in the study in April 2005. A total competence score of 0-49 was calculated, socio-demographic characteristics were recorded, and attitudes towards computer use were assessed. Descriptive statistics and linear regression modeling were employed for data analysis. The total competence score was normally distributed (Shapiro-Wilk test: W = 0.99, V = 0.40, P = 0.97) and ranged from 5 to 42.5, with a mean of 22.6 (+/-8.4). Multivariate analysis revealed 'gender', 'e-mail ownership' and 'enrollment in non-clinical programs' as significant predictors of computer literacy. In conclusion, the computer literacy of Greek post-graduate dental students was higher amongst males, students in non-clinical programs and those with more positive attitudes towards the implementation of computer-assisted learning.
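As a rough illustration of the analysis described above (a Shapiro-Wilk normality check of the competence score followed by linear regression on socio-demographic predictors), a minimal Python sketch is given below. The column names and synthetic data are hypothetical, not taken from the study.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], n),
    "email": rng.choice(["yes", "no"], n),          # e-mail ownership
    "program": rng.choice(["clinical", "non-clinical"], n),
})
df["score"] = rng.normal(22.6, 8.4, n).clip(0, 49)  # synthetic competence scores

# Normality check of the total competence score (the study reports
# Shapiro-Wilk W = 0.99, P = 0.97 on the real data).
w_stat, p_value = stats.shapiro(df["score"])
print(f"Shapiro-Wilk: W = {w_stat:.2f}, P = {p_value:.2f}")

# Multivariable linear regression of the score on socio-demographic
# predictors; categorical terms are wrapped in C().
fit = smf.ols("score ~ C(gender) + C(email) + C(program)", data=df).fit()
print(fit.summary())
```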

Relevance: 80.00%

Publisher:

Abstract:

OBJECTIVES Accurate trial reporting facilitates evaluation and better use of study results. The objective of this article is to investigate the quality of reporting of randomized controlled trials (RCTs) in leading orthodontic journals, and to explore potential predictors of improved reporting. METHODS The 50 most recent issues of 4 leading orthodontic journals up to November 2013 were electronically searched. Reporting quality assessment was conducted using the modified CONSORT statement checklist. The relationship between potential predictors and the modified CONSORT score was assessed using linear regression modeling. RESULTS 128 RCTs were identified, with a mean modified CONSORT score of 68.97% (SD = 11.09). The Journal of Orthodontics (JO) ranked first in terms of completeness of reporting (modified CONSORT score 76.21%, SD = 10.1), followed by the American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) (73.05%, SD = 10.1). Journal of publication (AJODO: β = 10.08, 95% CI: 5.78, 14.38; JO: β = 16.82, 95% CI: 11.70, 21.94; EJO: β = 7.21, 95% CI: 2.69, 11.72 compared to Angle), year of publication (β = 0.98, 95% CI: 0.28, 1.67 for each additional year), region of authorship (Europe: β = 5.19, 95% CI: 1.30, 9.09 compared to Asia/other), statistical significance (significant: β = 3.10, 95% CI: 0.11, 6.10 compared to non-significant) and methodologist involvement (involvement: β = 5.60, 95% CI: 1.66, 9.54 compared to non-involvement) were all significant predictors of improved modified CONSORT scores in the multivariable model. Additionally, the median overall Jadad score was 2 (IQR = 2) across journals, with JO (median = 3, IQR = 1) and AJODO (median = 3, IQR = 2) presenting the highest scores. CONCLUSION The reporting quality of RCTs published in leading orthodontic journals is suboptimal in various CONSORT areas. This may have a bearing on trial result interpretation and use in clinical decision making and evidence-based orthodontic treatment interventions.
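The multivariable model described above can be sketched as follows, with journal and region entered as categorical predictors against the stated reference levels (Angle and Asia/other) and coefficients reported with 95% confidence intervals. The synthetic data and column names are assumptions, not the authors' materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 128
rcts = pd.DataFrame({
    "journal": rng.choice(["Angle", "AJODO", "JO", "EJO"], n),
    "year": rng.integers(2005, 2014, n),
    "region": rng.choice(["Europe", "Asia/other"], n),
    "methodologist": rng.choice(["yes", "no"], n),
})
rcts["consort_score"] = rng.normal(69, 11, n)   # synthetic modified CONSORT scores

# Angle is the reference journal and Asia/other the reference region,
# matching the comparisons quoted in the abstract.
fit = smf.ols(
    "consort_score ~ C(journal, Treatment('Angle')) + year "
    "+ C(region, Treatment('Asia/other')) + C(methodologist)",
    data=rcts,
).fit()

# Coefficients (beta) with 95% confidence intervals, the form in which
# the journal, year and region effects are reported in the abstract.
print(pd.concat([fit.params.rename("beta"), fit.conf_int()], axis=1))
```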

Relevance: 80.00%

Publisher:

Abstract:

AIM Abstracts of randomized clinical trials are extremely important, as trial appraisal is often based on the information included there. The objective of this study was to assess the quality of reporting of RCT abstracts in journals of Oral Implantology. MATERIAL AND METHODS Six leading Implantology journals were screened for RCTs published between 2008 and 2012. A 21-item modified CONSORT for abstracts checklist was used to examine the completeness of abstract reporting. Descriptive statistics and linear regression modeling were employed for data analysis. RESULTS One hundred and sixty-three RCT abstracts were included in this study. The majority of the RCTs were published in Clinical Oral Implants Research (42.9%). The mean overall reporting quality score was 58.6% (95% CI: 57.6-59.7). The highest score was noted in the European Journal of Oral Implantology (63.8%; 95% CI: 61.8-65.8). Multivariate analysis demonstrated that the abstract quality score was related to the publication journal and the number of research centers involved. Most abstracts adequately reported interventions (89.0%), objectives (77.9%) and conclusions (74.8%), while failing to report randomization procedures, allocation concealment, effect estimates, confidence intervals, and funding. Registration of RCTs was not reported in any of the abstracts. CONCLUSIONS The reporting quality in abstracts of RCTs published in Oral Implantology journals needs to be improved. Editors and authors should be encouraged to endorse the CONSORT for abstracts guidelines in order to achieve optimal quality in abstract reporting.
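A minimal sketch of the checklist-based scoring is shown below: each abstract's quality score is taken as the percentage of the 21 modified CONSORT-for-abstracts items that are adequately reported, summarized by a mean with a normal-approximation 95% confidence interval. The item names and toy data are illustrative only.

```python
import numpy as np

def abstract_score(items: dict) -> float:
    """Overall score = percentage of checklist items adequately reported."""
    return 100.0 * sum(items.values()) / len(items)

# Toy example with 3 of the 21 items shown; real data would use all 21.
abstracts = [
    {"interventions": 1, "objective": 1, "randomization": 0},
    {"interventions": 1, "objective": 0, "randomization": 0},
]
scores = np.array([abstract_score(a) for a in abstracts])

# Mean score with a normal-approximation 95% CI, the form quoted in the
# abstract (58.6%, 95% CI: 57.6-59.7).
mean = scores.mean()
half = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
print(f"mean score {mean:.1f}% (95% CI: {mean - half:.1f} to {mean + half:.1f})")
```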

Relevance: 40.00%

Publisher:

Abstract:

Finite element (FE) analysis is an important computational tool in biomechanics. However, its adoption into clinical practice has been hampered by its computational complexity and the high technical competence it requires of clinicians. In this paper we propose a supervised learning approach to predict the outcome of the FE analysis. We demonstrate our approach on clinical CT and X-ray femur images for FE prediction (FEP), with features extracted, respectively, from a statistical shape model and from 2D-based morphometric and density information. Using leave-one-out experiments and sensitivity analysis on a database of 89 clinical cases, our method predicts the distribution of stress values for a walking loading condition with average correlation coefficients of 0.984 and 0.976 for CT and X-ray images, respectively. These findings suggest that supervised learning approaches have the potential to facilitate the clinical integration of mechanical simulations for the treatment of musculoskeletal conditions.
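The surrogate-modeling idea can be sketched as follows: image-derived features predict a vector of FE stress values, and each left-out case is scored by the correlation between predicted and FE-computed stresses. The synthetic data and the choice of a ridge regressor are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(89, 20))    # e.g. shape-model / morphometric features
Y = rng.normal(size=(89, 500))   # FE stress values at 500 sample points

corrs = []
for train, test in LeaveOneOut().split(X):
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    pred = model.predict(X[test])[0]
    # Correlation between predicted and simulated stress distribution
    # for the left-out case.
    corrs.append(np.corrcoef(pred, Y[test][0])[0, 1])

print(f"mean correlation over leave-one-out cases: {np.mean(corrs):.3f}")
```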

Relevance: 30.00%

Publisher:

Abstract:

The electron Monte Carlo (eMC) dose calculation algorithm available in the Eclipse treatment planning system (Varian Medical Systems) is based on the macro MC method and uses a beam model applicable to Varian linear accelerators. This limits its accuracy when eMC is applied to non-Varian machines. In this work, eMC is generalized to also allow accurate dose calculations for electron beams from Elekta and Siemens accelerators. First, the changes made in the previous study to use eMC for low electron beam energies of Varian accelerators are applied. Then, a generalized beam model is developed using a main electron source and a main photon source representing electrons and photons from the scattering foil, respectively, an edge source of electrons, a transmission source of photons, and a line source of electrons and photons representing the particles from the scrapers or inserts and head-scatter radiation. Regarding the macro MC dose calculation algorithm, the transport code for the secondary particles is improved. The macro MC dose calculations are validated against corresponding dose calculations using EGSnrc in homogeneous and inhomogeneous phantoms. The generalized eMC is validated by comparing calculated and measured dose distributions in water for Varian, Elekta and Siemens machines for a variety of beam energies, applicator sizes and SSDs. The comparisons are performed in units of cGy per MU. Overall, agreement between calculated and measured dose distributions for all machine types and all combinations of parameters investigated is within 2% or 2 mm. These results suggest that the generalized eMC is now suitable for calculating dose distributions for Varian, Elekta and Siemens linear accelerators with sufficient accuracy over the range of investigated beam energies, applicator sizes and SSDs.
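The quoted 2% or 2 mm agreement can be illustrated with a simplified one-dimensional acceptance test on a depth-dose curve. The curves below are placeholders rather than actual eMC or measured data, and the criterion is a simplification of a full gamma analysis.

```python
import numpy as np

depth = np.linspace(0.0, 60.0, 121)                 # depth in mm
measured = np.exp(-((depth - 12.0) / 18.0) ** 2)    # placeholder measured depth dose
calculated = 1.01 * np.exp(-((depth - 12.5) / 18.0) ** 2)  # placeholder calculated dose

def passes_2pct_2mm(d, calc, meas, dose_tol=0.02, dist_tol=2.0):
    """For each measured point, accept if some calculated point within
    dist_tol mm agrees within dose_tol of the maximum measured dose."""
    ok = []
    for di, mi in zip(d, meas):
        near = np.abs(d - di) <= dist_tol
        ok.append(np.any(np.abs(calc[near] - mi) <= dose_tol * meas.max()))
    return np.array(ok)

print(f"pass rate: {passes_2pct_2mm(depth, calculated, measured).mean():.1%}")
```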

Relevance: 30.00%

Publisher:

Abstract:

Despite the impact of red blood cell (RBC) life-span in disease areas such as diabetes and anemia of chronic kidney disease, there is no consensus on how best to describe the process quantitatively. Several models have been proposed to explain the elimination of RBCs: a random destruction process, a homogeneous life-span model, or a series of 4 transit compartments. The aim of this work was to explore the different models that have been proposed in the literature, as well as modifications to them. The impact of choosing the right model on the prediction of future outcomes in the above-mentioned areas was also investigated. Data from both indirect (clinical data) and direct (biotin-labeled data) life-span measurement methods were analyzed using non-linear mixed-effects models. The analysis showed that: (1) predictions from non-steady-state data depend on the RBC model chosen; (2) the transit compartment model, which allows for variation in life-span across the RBC population, describes RBC survival data better than the random destruction or homogeneous life-span models; and (3) the additional incorporation of random destruction patterns, although improving the description of the RBC survival data, does not appear to provide a marked improvement when describing clinical data.
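The three competing survival models named above can be contrasted with a short sketch: random destruction corresponds to exponential survival, the homogeneous life-span model to a fixed cut-off, and the 4-transit-compartment model to an Erlang (gamma, shape 4) survival function. The 120-day mean life-span used here is illustrative.

```python
import numpy as np
from scipy import stats

mean_lifespan = 120.0            # days, illustrative
t = np.linspace(0.0, 250.0, 6)   # a few evaluation times

surv_random = np.exp(-t / mean_lifespan)                        # random destruction
surv_fixed = (t < mean_lifespan).astype(float)                  # homogeneous life-span
surv_transit = stats.gamma.sf(t, a=4, scale=mean_lifespan / 4)  # 4 transit compartments

for ti, r, f, tr in zip(t, surv_random, surv_fixed, surv_transit):
    print(f"day {ti:5.0f}: random {r:.2f}  fixed {f:.0f}  transit {tr:.2f}")
```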

Relevance: 30.00%

Publisher:

Abstract:

Localized short-echo-time (1)H-MR spectra of the human brain contain contributions from many low-molecular-weight metabolites as well as baseline contributions from macromolecules. Two approaches to model such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., describing basis spectra as sums of parametric lineshapes. The effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small but significantly different tissue content estimates for most metabolites, and provides a means to quantitate baseline contributions that may contain crucial clinical information.
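A minimal sketch of the linear-combination step is given below, fitting a spectrum as a non-negative combination of basis lineshapes. The Gaussian basis functions stand in for the real parameterized metabolite spectra, and no baseline or prior-knowledge constraints are included.

```python
import numpy as np
from scipy.optimize import nnls

ppm = np.linspace(0.5, 4.5, 400)   # chemical-shift axis

def lineshape(center, width=0.05):
    """Synthetic Gaussian lineshape standing in for a metabolite basis spectrum."""
    return np.exp(-0.5 * ((ppm - center) / width) ** 2)

# Three illustrative resonances (roughly NAA, Cr, Cho positions).
basis = np.column_stack([lineshape(c) for c in (2.01, 3.03, 3.22)])
true_conc = np.array([1.2, 0.8, 0.4])
spectrum = basis @ true_conc + 0.01 * np.random.default_rng(1).normal(size=ppm.size)

# Non-negative least-squares fit of the linear combination.
conc, residual = nnls(basis, spectrum)
print("estimated amplitudes:", np.round(conc, 2))
```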

Relevance: 30.00%

Publisher:

Abstract:

Approximate models (proxies) can be employed to reduce the computational cost of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to biased estimates. To avoid this problem and ensure reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both the exact and the approximate solvers are run. Functional principal component analysis (FPCA) is used to investigate the variability in the two sets of curves and to reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the proxy response alone. The methodology is purpose-oriented, as the error model is constructed directly for the quantity of interest rather than for the state of the system. In addition, the dimensionality reduction performed by FPCA provides a diagnostic of the quality of the error model, which can be used to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be used effectively beyond uncertainty quantification, in particular for Bayesian inference and optimization.
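The error-model construction can be sketched as follows: both sets of response curves are reduced with principal components (standing in for FPCA on a common grid), a mapping from proxy scores to exact scores is learned on the learning set, and the exact curve of a new realization is predicted from its proxy response alone. The synthetic curves and the linear mapping are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_learn, n_times = 40, 100
time = np.linspace(0, 1, n_times)
basis = np.vstack([np.sin(np.pi * k * time) for k in (1, 2, 3)])
latent = rng.normal(size=(n_learn, 3))

exact = latent @ basis                                     # "exact solver" responses
proxy = 0.8 * exact + 0.1 * rng.normal(size=exact.shape)   # biased proxy responses

# Dimensionality reduction of both curve sets, then a mapping between scores.
pca_exact = PCA(n_components=3).fit(exact)
pca_proxy = PCA(n_components=3).fit(proxy)
mapping = LinearRegression().fit(pca_proxy.transform(proxy),
                                 pca_exact.transform(exact))

# Predict the exact response of a "new" realization from its proxy response.
new_proxy = proxy[:1]
predicted_exact = pca_exact.inverse_transform(
    mapping.predict(pca_proxy.transform(new_proxy)))
print(predicted_exact.shape)   # (1, 100): predicted exact curve
```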

Relevance: 30.00%

Publisher:

Abstract:

Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models for modeling life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this approach we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
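A sketch of a censored, left-truncated skew-normal likelihood is given below: observed deaths contribute the log-density, right-censored observations the log-survival function, and every observation is conditioned on survival to its entry age. The synthetic data and variable names are assumptions, not the Swiss National Cohort data or the authors' code.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])   # intercept + covariate
age_at_death = 80 - 5 * X[:, 1] + stats.skewnorm.rvs(-3, scale=8, size=n, random_state=4)
entry = np.full(n, 40.0)                                   # left truncation at age 40
event = rng.integers(0, 2, n).astype(bool)                 # 1 = death observed, 0 = censored

def neg_log_lik(params):
    beta, log_scale, shape = params[:2], params[2], params[3]
    loc, scale = X @ beta, np.exp(log_scale)
    dist = stats.skewnorm(shape, loc=loc, scale=scale)
    ll = np.where(event,
                  dist.logpdf(age_at_death),   # observed deaths
                  dist.logsf(age_at_death))    # right-censored observations
    ll -= dist.logsf(entry)                    # condition on survival to entry age
    return -ll.sum()

fit = optimize.minimize(neg_log_lik, x0=[75.0, 0.0, 2.0, -1.0], method="Nelder-Mead")
print("estimated covariate effect on survival time:", round(fit.x[1], 2))
```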

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), lateral cephalograms from a machine with a 1.5 m SMD, and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken with a 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4-week interval. Direct physical measurements with a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A differed significantly from the gold standard in all methods. More statistically significant differences were found for the measurements on the 3 m SMD cephalograms than for the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than for the other methods. LIMITATIONS Dry human skulls without soft tissues were used; therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of measurements based on CBCT and on the 1.5 m SMD cephalograms was better than that of the 3 m SMD cephalograms. These findings demonstrate the accuracy and reliability of linear measurements based on 3D CBCT data compared with 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
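The accuracy assessment can be sketched as a paired comparison of each modality's measurements against the digital-calliper gold standard. The numbers below are placeholders for a single measurement such as Nasion-Point A, not data from the study.

```python
import numpy as np
from scipy import stats

calliper = np.array([52.1, 49.8, 55.3, 51.0, 53.7])   # gold standard, mm
ceph_3m = np.array([53.0, 50.9, 56.5, 52.2, 54.9])    # 3 m SMD cephalogram
cbct_3d = np.array([52.3, 49.9, 55.1, 51.2, 53.8])    # 3D CBCT model

# Mean difference from the gold standard and a paired t-test per modality.
for name, values in (("3 m SMD ceph", ceph_3m), ("CBCT 3D", cbct_3d)):
    diff = values - calliper
    t_stat, p = stats.ttest_rel(values, calliper)
    print(f"{name}: mean difference {diff.mean():+.2f} mm, paired t-test P = {p:.3f}")
```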