976 results for "Validation par connaissance expert"


Relevance: 20.00%

Abstract:

Precise protein quantification is essential in clinical dietetics, particularly in the management of renal, burn and malnourished patients. The EP-10 was developed to expedite the estimation of dietary protein for nutritional assessment and recommendation. The main objective of this study was to compare the validity and efficiency of the EP-10 with the American Dietetic Association's "Exchange List for Meal Planning" (ADA-7g) in quantifying dietary protein intake, against computerised nutrient analysis (CNA). Protein intake from 197 food records kept by healthy adult subjects in Singapore was determined using three methods: (1) the EP-10, (2) the ADA-7g and (3) CNA using the SERVE program (Version 4.0). Assessments using the EP-10 and ADA-7g were performed by two assessors in a blind crossover manner, while a third assessor performed the CNA; all assessors were blind to each other's results. The time taken to assess a subsample (n = 165) using the EP-10 and ADA-7g was also recorded. The mean difference in protein intake quantification relative to the CNA was statistically non-significant for the EP-10 (1.4 ± 16.3 g, P = .239) and statistically significant for the ADA-7g (-2.2 ± 15.6 g, P = .046). Both the EP-10 and ADA-7g had clinically acceptable agreement with the CNA as determined via Bland-Altman plots, although the EP-10 tended to overestimate protein intakes above 150 g. The EP-10 required significantly less time for protein intake quantification than the ADA-7g (mean time of 65 ± 36 seconds vs. 111 ± 40 seconds, P < .001). The EP-10 and ADA-7g are valid clinical tools for protein intake quantification in an Asian context, with the EP-10 being more time efficient. However, a dietitian's discretion is needed when the EP-10 is used on protein intakes above 150 g.
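
As a minimal illustration of the agreement analysis described above (not the study's code), the following sketch computes the Bland-Altman bias and 95% limits of agreement between two protein-quantification methods; the intake values are hypothetical.

```python
import numpy as np

def bland_altman(method, reference):
    """Mean difference (bias) and 95% limits of agreement
    between two measurement methods (Bland & Altman, 1986)."""
    method = np.asarray(method, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = method - reference                   # e.g. EP-10 minus CNA, in grams
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Hypothetical protein intakes (g) from two methods, for illustration only.
ep10 = [62.0, 85.5, 40.2, 110.3]
cna  = [60.1, 88.0, 41.0, 105.9]
bias, (lo, hi) = bland_altman(ep10, cna)
print(f"bias = {bias:.1f} g, 95% LoA = [{lo:.1f}, {hi:.1f}] g")
```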

Relevance: 20.00%

Abstract:

Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capacity to display and manipulate information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of a virtual environment's content falls entirely on developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed to enable reasoning about the color and geometry changes of virtual entities during a play-session. From this analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent geometric transformation space; we used this mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution for testing their virtual world software and digital content.
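
The thesis's measures are model-based and connectionist; as a much simpler stand-in, the sketch below shows the basic shape of an automated visual-consistency test: compare a rendered frame against a trusted reference and flag frames whose pixel error exceeds a tolerance. All values are hypothetical.

```python
import numpy as np

def frame_consistency(rendered: np.ndarray, reference: np.ndarray,
                      tolerance: float = 0.1) -> bool:
    """Report a rendered frame as consistent when its mean per-pixel
    error against a trusted reference frame stays within `tolerance`.
    A crude stand-in for the thesis's learned consistency measures."""
    if rendered.shape != reference.shape:
        return False  # a geometry/resolution mismatch is itself a failure
    err = np.abs(rendered.astype(float) - reference.astype(float))
    return float(err.mean()) <= tolerance

# Hypothetical 8-bit RGB frames, for illustration only.
reference = np.full((480, 640, 3), 128, dtype=np.uint8)
rendered = reference.copy()
rendered[100:120, 200:220] = 0                  # a small rendering defect
print(frame_consistency(rendered, reference))   # False: the defect is detected
```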

Relevance: 20.00%

Abstract:

The lack of consensus on characterizing system intelligence, and of structured analytical decision models, has prevented developers and practitioners from understanding and configuring optimal intelligent building systems in a fully informed manner. So far, little research has been conducted in this area. This research was designed to identify the key intelligence indicators and to develop analytical models for computing the system intelligence score of smart building systems in the intelligent building. The integrated building management system (IBMS) was used as an illustrative example to present the framework. The models presented in this study applied system intelligence theory and a conceptual analytical framework. A total of 16 key intelligence indicators were first identified from a general survey. Two multi-criteria decision making (MCDM) approaches, the analytic hierarchy process (AHP) and the analytic network process (ANP), were then employed to develop the system intelligence analytical models. The top intelligence indicators for the IBMS include self-diagnosis of operation deviations, an adaptive limiting control algorithm, and year-round time schedule performance. The conceptual framework was then transformed into a practical model, whose effectiveness was evaluated by means of expert validation. The main contribution of this research is to promote understanding of the intelligence indicators and to set the foundation for a systemic framework that provides developers and building stakeholders with a consolidated, inclusive tool for evaluating the system intelligence of proposed component design configurations.
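
As a hedged illustration of the AHP step mentioned above (the pairwise judgements below are hypothetical, not the study's data), the following sketch derives indicator weights from a comparison matrix via Saaty's principal-eigenvector method and checks the consistency ratio.

```python
import numpy as np

# Minimal AHP sketch: derive indicator weights from a pairwise
# comparison matrix via the principal eigenvector (Saaty's method).
A = np.array([
    [1.0, 3.0, 5.0],   # self-diagnosis of operation deviations
    [1/3, 1.0, 2.0],   # adaptive limiting control algorithm
    [1/5, 1/2, 1.0],   # year-round time schedule performance
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                    # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # Saaty's random index for n = 3
print(weights, f"CR = {cr:.3f}")         # CR < 0.1 is conventionally acceptable
```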

Relevance: 20.00%

Abstract:

The underlying objective of this study was to develop a novel approach to evaluating the potential for commercialisation of a new technology. More specifically, this study examined the 'ex-ante' evaluation of the technology transfer process. For this purpose, a technology originating from the high technology sector was used. The technology relates to the application of software for the detection of weak signals from space, an established method of signal processing in the field of radio astronomy. This technology has the potential to be used in commercial and industrial areas other than astronomy, such as detecting water leakages in pipes. Its applicability to detecting water leakage was chosen owing to several detection problems in the industry, as well as the impact such detection can have on saving water in the environment. This study, therefore, demonstrates the importance of interdisciplinary technology transfer. The study employed both technical and business evaluation methods, including laboratory experiments and the Delphi technique, to address the research questions. There are several findings from this study. Firstly, scientific experiments were conducted that brought the chosen technology to a proof-of-concept stage. Secondly, criteria from the literature that can be used for 'ex-ante' evaluation of technology transfer were validated and refined. Additionally, after testing the chosen technology's overall transfer potential using the modified set of criteria, it was found that the technology is still in its early stages and will require further development before it can be commercialised. Furthermore, a final evaluation framework was developed encompassing all the criteria found to be important. This framework can help in assessing the overall readiness of a technology for transfer, as well as in recommending a viable mechanism for commercialisation. On the whole, the commercial potential of the chosen technology was tested through expert opinion, focusing on the impact of the new technology and the feasibility of alternative and potential future applications.

Relevance: 20.00%

Abstract:

To compare measurements of retinal thickness (RT) and choroidal thickness (ChT) obtained with an optical low coherence reflectometry (OLCR) biometer (Lenstar LS 900) with those obtained with a spectral domain optical coherence tomographer (SD OCT) (Copernicus SOCT HR) in young normal subjects.

Relevance: 20.00%

Abstract:

RAP-A was developed to meet the need for a universal resilience-building program for teenagers that could be readily implemented in a school setting. A universal program targets all teenagers in a particular grade, as opposed to those at higher risk of depression (indicated or selective approaches) or a treatment group. It is easier to recruit and engage adolescents in a universal approach, where students do not face the risk of stigmatisation by being singled out for intervention. The Resourceful Adolescent Program (RAP: Shochet, Holland & Whitefield, 1997) was developed to meet this need.

Relevance: 20.00%

Abstract:

The RAP-A Workbook comprises all the handouts required for the program's individual and group activities. A Participant Workbook is required for each adolescent to write in and to keep at the end of the program.

Relevance: 20.00%

Abstract:

The ability to perform autonomous emergency (forced) landings is one of the key technology enablers identified for unmanned aircraft systems (UAS). This paper presents the flight test results of forced landings involving a UAS in a controlled environment, conducted to ascertain the performance of previously developed (and published) path planning and guidance algorithms. These novel 3-D nonlinear algorithms were designed to control the vehicle in both the lateral and longitudinal planes of motion, and had hitherto been verified only in simulation. A modified Boomerang 60 RC aircraft was used as the flight test platform, with associated onboard and ground support equipment sourced off-the-shelf or developed in-house at the Australian Research Centre for Aerospace Automation (ARCAA). Hardware-in-the-loop (HITL) simulations conducted prior to the flight tests displayed good landing performance; however, due to certain identified interfacing errors, the flight results differed from those obtained in simulation. This paper details the lessons learnt and presents a plausible solution for the way forward.

Relevance: 20.00%

Abstract:

Background: Patients with chest pain contribute substantially to emergency department attendances, lengthy hospital stay, and inpatient admissions. A reliable, reproducible, and fast process to identify patients presenting with chest pain who have a low short-term risk of a major adverse cardiac event is needed to facilitate early discharge. We aimed to prospectively validate the safety of a predefined 2-h accelerated diagnostic protocol (ADP) to assess patients presenting to the emergency department with chest pain symptoms suggestive of acute coronary syndrome. Methods: This observational study was undertaken in 14 emergency departments in nine countries in the Asia-Pacific region, in patients aged 18 years and older with at least 5 min of chest pain. The ADP included use of a structured pre-test probability scoring method (Thrombolysis in Myocardial Infarction [TIMI] score), electrocardiograph, and point-of-care biomarker panel of troponin, creatine kinase MB, and myoglobin. The primary endpoint was major adverse cardiac events within 30 days after initial presentation (including initial hospital attendance). This trial is registered with the Australia-New Zealand Clinical Trials Registry, number ACTRN12609000283279. Findings: 3582 consecutive patients were recruited and completed 30-day follow-up. 421 (11.8%) patients had a major adverse cardiac event. The ADP classified 352 (9.8%) patients as low risk and potentially suitable for early discharge. A major adverse cardiac event occurred in three (0.9%) of these patients, giving the ADP a sensitivity of 99.3% (95% CI 97.9–99.8), a negative predictive value of 99.1% (97.3–99.8), and a specificity of 11.0% (10.0–12.2). Interpretation: This novel ADP identifies patients at very low risk of a short-term major adverse cardiac event who might be suitable for early discharge. Such an approach could be used to decrease the overall observation periods and admissions for chest pain. The components needed for the implementation of this strategy are widely available. The ADP has the potential to affect health-service delivery worldwide.
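
The reported accuracy figures follow from the counts given in the abstract; as an illustrative check (not the trial's analysis code), the sketch below reconstructs the 2x2 table and recomputes sensitivity, specificity and negative predictive value.

```python
# Counts reconstructed from the abstract: 3582 patients, 421 events,
# 352 classified low risk by the ADP, 3 of whom had an event.
tp = 421 - 3             # events not classified low risk (kept in)
fn = 3                   # events missed by the low-risk classification
tn = 352 - 3             # non-events correctly classified low risk
fp = (3582 - 421) - tn   # non-events not classified low risk

sensitivity = tp / (tp + fn)   # 99.3%, as reported
specificity = tn / (tn + fp)   # 11.0%
npv = tn / (tn + fn)           # 99.1%
print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, NPV={npv:.1%}")
```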

Relevance: 20.00%

Abstract:

During nutrition intervention programs, some form of dietary assessment is usually necessary. This assessment can be for initial screening, for the development of appropriate programs and activities, or for evaluation. Established methods of dietary assessment are not always practical or cost-effective in such interventions; an abbreviated dietary assessment tool is therefore needed. The Queensland Nutrition Project developed such a tool for male blue-collar workers: the Food Behaviour Questionnaire, consisting of 27 food-behaviour-related questions. The tool was validated in a sample of 23 men against full dietary assessment obtained via food frequency questionnaires and 24-hour dietary recalls. Questions that correlated poorly with the full dietary assessment were deleted from the tool. In all, only 13 questions were required to distinguish between high and low dietary intakes of particular nutrients. Three questions combined had correlations with refined sugar of between 0.617 and 0.730 (p<0.005); four questions combined had a correlation with dietary fibre as a percentage of energy of 0.45 (p<0.05); five questions combined had a correlation with total fat of 0.499 (p<0.05); and four questions combined had correlations with saturated fat of between 0.451 and 0.589 (p<0.05). No significant correlation could be found between the food behaviour questions and dietary sodium, nor for fat as a percentage of energy.
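
As a minimal sketch of the validation step described above (with hypothetical data, not the project's), the following computes the Pearson correlation between a combined question score and a nutrient intake obtained from the full assessment.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between an abbreviated-tool score and a
    nutrient intake from full dietary assessment."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

# Hypothetical data: summed score of three sugar-related questions
# versus refined sugar intake (g/day) from the full assessment.
score = [3, 7, 5, 9, 2, 6, 8, 4]
sugar = [20, 55, 38, 70, 15, 44, 66, 30]
print(f"r = {pearson_r(score, sugar):.3f}")
```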

Relevance: 20.00%

Abstract:

Background: Evidence-based practice (EBP) is embraced internationally as an ideal approach to improving patient outcomes and providing cost-effective care. However, despite the support for and apparent benefits of evidence-based practice, it has proven complex and difficult to incorporate into the clinical setting. Research exploring the implementation of evidence-based practice has highlighted many internal and external barriers, including clinicians' lack of knowledge and confidence to integrate EBP into their day-to-day work. Nurses in particular often feel ill-equipped, with little confidence to find, appraise and implement evidence. Aims: This study aimed to undertake preliminary testing of the psychometric properties of tools that measure nurses' self-efficacy and outcome expectancy in regard to evidence-based practice. Methods: A survey design was utilised in which nurses, who had either completed an EBP unit or been randomly selected from a major tertiary referral hospital in Brisbane, Australia, were sent two newly developed tools: 1) the Self-efficacy in Evidence-Based Practice (SE-EBP) scale and 2) the Outcome Expectancy for Evidence-Based Practice (OE-EBP) scale. Results: Principal axis factoring found three factors with eigenvalues above one for the SE-EBP, explaining 73% of the variance, and one factor for the OE-EBP scale, explaining 82% of the variance. Cronbach's alpha values for the SE-EBP, the three SE-EBP factors and the OE-EBP were all >.91, suggesting some item redundancy. The SE-EBP was able to distinguish between those with no prior exposure to EBP and those who had completed an introductory EBP unit. Conclusions: While further investigation of the validity of these tools is needed, preliminary testing indicates that the SE-EBP and OE-EBP scales are valid and reliable instruments for measuring health professionals' confidence in the process and the outcomes of basing their practice on evidence.
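
As an illustration of the reliability analysis reported above (not the study's code or data), the sketch below computes Cronbach's alpha for a small, hypothetical item-response matrix.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 1-5 Likert responses of five nurses to four scale items.
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 5, 5, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values > .9 may signal item redundancy
```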

Relevance: 20.00%

Abstract:

Background: In large-scale trials investigating the effects of interventions on appetite, it is paramount to monitor large amounts of human data efficiently. The original hand-held Electronic Appetite Ratings System (EARS) was designed to facilitate the administration and data management of visual analogue scales (VAS) of subjective appetite sensations. The purpose of this study was to validate a novel hand-held method (EARS II (HP® iPAQ)) against the standard pen-and-paper (P&P) method and the previously validated EARS. Methods: Twelve participants (5 male, 7 female, aged 18-40) took part in a fully repeated-measures design. Participants were randomly assigned, in a crossover design, to either high-fat (>48% fat) or low-fat (<28% fat) meal days one week apart, and completed ratings using the three data capture methods ordered according to a Latin square. The first set of appetite sensations was completed in a fasted state, immediately before a fixed breakfast. Thereafter, appetite sensations were completed every thirty minutes for 4 h. An ad libitum lunch was provided immediately before a final set of appetite sensations was completed. Results: Repeated-measures ANOVAs were conducted for ratings of hunger, fullness and desire to eat. There were no significant differences between P&P and either EARS or EARS II (p > 0.05). Correlation coefficients between P&P and EARS II, controlling for age and gender, were computed on area under the curve (AUC) ratings. R² values for hunger (0.89), fullness (0.96) and desire to eat (0.95) were statistically significant (p < 0.05). Conclusions: EARS II was sensitive to the impact of a meal and the recovery of appetite during the postprandial period, and is therefore an effective device for monitoring appetite sensations. This study provides evidence and support for further validation of the novel EARS II method for monitoring appetite sensations during large-scale studies. The system's added versatility means it could also potentially monitor a range of other behavioural and physiological measures often important in clinical and free-living trials.
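
As a minimal sketch of the AUC summary used in the analysis (with hypothetical ratings, not the study's data), the following computes the trapezoidal area under a series of VAS appetite ratings.

```python
import numpy as np

def appetite_auc(minutes, ratings):
    """Area under the curve of VAS ratings over time (trapezoidal rule),
    the usual summary measure for serial appetite ratings."""
    t = np.asarray(minutes, dtype=float)
    y = np.asarray(ratings, dtype=float)
    return float(np.sum((t[1:] - t[:-1]) * (y[1:] + y[:-1]) / 2.0))

# Hypothetical hunger ratings (0-100 mm VAS) every 30 min over 4 h,
# plus a final post-lunch rating.
t = np.arange(0, 271, 30)        # 0, 30, ..., 270 minutes
hunger = [78, 30, 38, 45, 52, 60, 67, 72, 76, 25]
print(f"hunger AUC = {appetite_auc(t, hunger):.0f} mm*min")
```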

Relevance: 20.00%

Abstract:

Background: Cohort studies can provide valuable evidence of cause-and-effect relationships, but are subject to loss of participants over time, limiting the validity of findings. Computerised record linkage offers a passive and ongoing method of obtaining health outcomes from existing, routinely collected data sources. However, the quality of record linkage relies upon the availability and accuracy of common identifying variables. We sought to develop and validate a method for linking a cohort study to a state-wide hospital admissions dataset with limited availability of unique identifying variables. Methods: A sample of 2000 participants from a cohort study (n = 41,514) was linked to a state-wide hospitalisations dataset in Victoria, Australia, using the national health insurance (Medicare) number and demographic data as identifying variables. Availability of the health insurance number was limited in both datasets; linkage was therefore undertaken both with and without this number, and agreement between the two algorithms was tested. Sensitivity was calculated for a sub-sample of 101 participants with a hospital admission confirmed by medical record review. Results: Of the 2000 study participants, 85% were found to have a record in the hospitalisations dataset when the national health insurance number and sex were used as linkage variables, and 92% when demographic details only were used. When agreement between the two methods was tested, the disagreement fraction was 9%, mainly due to "false positive" links when demographic details only were used. A final algorithm that used multiple combinations of identifying variables resulted in a match proportion of 87%. Sensitivity of this final linkage was 95%. Conclusions: High-quality record linkage of cohort data with a hospitalisations dataset that has limited identifiers can be achieved using combinations of a national health insurance number and demographic data as identifying variables.
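
As a hedged sketch of the kind of identifier-fallback algorithm the paper describes (field names and records below are hypothetical, not the paper's actual linkage keys), the following tries the strictest identifier combination first and falls back to demographic details only.

```python
# Deterministic linkage with identifier fallback, for illustration only.
def matches(rec, hosp, fields):
    """True when every field is present in the cohort record and equal
    to the corresponding hospital-record field."""
    return all(rec.get(f) is not None and rec.get(f) == hosp.get(f)
               for f in fields)

def link(rec, hospital_records):
    """Try the strictest identifier combination first, then fall back
    to demographics only; return the first matching record or None."""
    key_sets = [
        ("medicare_no", "sex"),                 # insurance number + sex
        ("surname", "dob", "sex", "postcode"),  # demographic details only
    ]
    for fields in key_sets:
        for hosp in hospital_records:
            if matches(rec, hosp, fields):
                return hosp
    return None

cohort_rec = {"medicare_no": None, "surname": "NG", "dob": "1956-03-02",
              "sex": "F", "postcode": "3053"}
hospital = [{"surname": "NG", "dob": "1956-03-02", "sex": "F",
             "postcode": "3053", "admission": "2008-07-14"}]
print(link(cohort_rec, hospital))  # matched on demographics alone
```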

Relevance: 20.00%

Abstract:

The major limitation of current typing methods for Streptococcus pyogenes, such as emm sequence typing and T typing, is that they are based on regions subject to considerable selective pressure. Multilocus sequence typing (MLST) is a better indicator of the genetic backbone of a strain, but is not widely used due to high costs. The objective of this study was to develop a robust and cost-effective alternative to S. pyogenes MLST. A 10-member single nucleotide polymorphism (SNP) set that provides a Simpson's Index of Diversity (D) of 0.99 with respect to the S. pyogenes MLST database was derived. A typing format was developed involving high-resolution melting (HRM) analysis of small fragments, each nucleated by one of the resolution-optimized SNPs. The fragments were 59–119 bp in size and, based on differences in G+C content, were predicted to generate three to six resolvable HRM curves. The combination of curves across the 10 fragments can be used to assign a melt type (MelT) to each sequence type (ST). The 525 STs currently in the S. pyogenes MLST database are predicted to resolve into 298 distinct MelTs, and the method is calculated to provide a D of 0.996 against the MLST database. The MelTs are concordant with the S. pyogenes population structure. To validate the method, we examined clinical isolates of S. pyogenes representing 70 STs. Curves were generated as predicted by G+C content, discriminating the 70 STs into 65 distinct MelTs.
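
As an illustration of the discriminatory-power metric used above (with hypothetical type assignments), the sketch below computes Simpson's Index of Diversity for a set of MelT calls, using the standard Hunter-Gaston formulation.

```python
from collections import Counter

def simpsons_d(type_assignments):
    """Simpson's Index of Diversity: the probability that two strains
    drawn at random (without replacement) belong to different types."""
    counts = Counter(type_assignments).values()
    n = sum(counts)
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical melt-type (MelT) assignments for ten isolates.
melts = ["MelT1", "MelT1", "MelT2", "MelT3", "MelT3",
         "MelT4", "MelT5", "MelT5", "MelT6", "MelT7"]
print(f"D = {simpsons_d(melts):.3f}")
```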