989 results for criterion-referenced assessment


Relevance: 30.00%

Abstract:

Assessments of students in primary and secondary education are debated among practitioners, policy-makers, and parents. In some countries, assessment scores serve as a criterion for passage between levels of education, for example, from secondary school to post-secondary education. Such practices are often traditional and, although they come under criticism, they remain a long-accepted part of a country's educational system. In those countries, students' assessment and examination scores are posted in public places or published in local news media. In other countries, assessments are used for periodic checks on individual student progress. The results may be used for rating schools and, in some cases, for evaluating the performance of teachers. Less often, assessments are used to analyse student performance and make judgements about the performance of the curriculum, and less often still, to establish strategies for improving student learning and educational practice. At one end of the continuum, the debate focuses on the opportunities assessments present to improve education; at the other, on the view that assessments are a major distraction from the important work of teachers, taking classroom time away from instruction. The debate on these issues continues.

Relevance: 30.00%

Abstract:

This paper presents a multi-criteria-based approach for nondestructive diagnostic structural integrity assessment of a decommissioned flatbed rail wagon (FBRW) used for road bridge superstructure rehabilitation and replacement applications. First, full-scale vibration and static test data sets are employed in an FE model of the FBRW to obtain the best 'initial' estimate of the model parameters. Second, the 'final' model parameters are predicted using sensitivity-based perturbation analysis without significant difficulty. Third, the updated FBRW model is validated using independent sets of full-scale laboratory static test data. Finally, the updated and validated FE model of the FBRW is used for structural integrity assessment of a single-lane FBRW bridge subjected to the Australian bridge design traffic load.
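
The sensitivity-based updating step admits a compact illustration. The sketch below is not the authors' implementation: it assumes a generic model function mapping parameters (e.g. member stiffnesses) to predicted responses (e.g. natural frequencies) and corrects the parameters iteratively with a finite-difference sensitivity matrix and a pseudo-inverse.

```python
import numpy as np

def update_parameters(model, p0, z_meas, n_iter=10, dp=1e-4):
    """Sensitivity-based perturbation updating (illustrative sketch).

    model  : callable mapping a parameter vector to predicted responses
    p0     : initial parameter estimates (the 'initial' model)
    z_meas : responses measured in full-scale tests
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        z = model(p)
        r = z_meas - z                       # response residual
        # Finite-difference sensitivity matrix S[i, j] = dz_i / dp_j
        S = np.empty((len(z), len(p)))
        for j in range(len(p)):
            pj = p.copy()
            pj[j] += dp * max(abs(p[j]), 1.0)
            S[:, j] = (model(pj) - z) / (pj[j] - p[j])
        p = p + np.linalg.pinv(S) @ r        # least-squares perturbation step
    return p

# Toy usage: recover two stiffness-like parameters from "measured" frequencies
true_p = np.array([2.0e6, 3.5e6])
model = lambda p: np.sqrt(p / 1.0e3)         # hypothetical response function
print(update_parameters(model, [1.0e6, 1.0e6], model(true_p)))  # -> ~true_p
```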

Relevance: 30.00%

Abstract:

Since ethnic differences exist in body composition, assessment methods need to be validated before use in different populations. This study attempts to validate the use of Sri Lankan-based body composition assessment tools on a group of 5-15-year-old Australian children of Sri Lankan origin. The study was conducted at the Body Composition Laboratory of the Children's Nutrition Research Centre at the Royal Children's Hospital, Brisbane, Australia. Height (Ht), weight (Wt), segmental lengths and skinfold thickness (SFT) were measured, and whole-body and segmental bioimpedance analysis (BIA) was performed. Body composition determined by the deuterium dilution technique (the criterion method) was compared with assessments made using prediction equations developed on Sri Lankan children. Twenty-seven boys and 15 girls were studied. All predictions of body composition parameters, except percentage fat mass (FM) assessed by the SFT-FM equation in girls, gave statistically significant correlations with the criterion method; they had a low mean bias, and most were not influenced by the magnitude of the measured parameter. Although the children live in a different socioeconomic setting, equations developed on children of the same ethnic background give better predictive value for body composition, highlighting the ethnic influence on body composition.
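
Validation against a criterion method of this kind typically reports the correlation with the criterion, the mean bias, and whether the bias depends on the magnitude of the measurement. A minimal sketch with hypothetical paired values (not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical paired fat-free mass values in kg (illustrative only)
criterion = np.array([18.2, 22.5, 15.9, 30.1, 24.8, 19.4])  # deuterium dilution
predicted = np.array([17.6, 23.1, 16.4, 29.2, 25.5, 18.8])  # prediction equation

r, p_value = stats.pearsonr(criterion, predicted)
bias = np.mean(predicted - criterion)                        # mean bias

# Proportional bias: does the error grow with the measured quantity?
diff = predicted - criterion
mean_pair = (predicted + criterion) / 2
slope, intercept, r_t, p_trend, se = stats.linregress(mean_pair, diff)

print(f"r = {r:.2f} (p = {p_value:.3f}), mean bias = {bias:.2f} kg")
print(f"proportional bias slope = {slope:.3f} (p = {p_trend:.3f})")
```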

Relevance: 30.00%

Abstract:

There is limited evidence evaluating the performance and use of photographic and image-based dietary records among adults with a chronic disease. This study evaluated the performance of a mobile phone image-based dietary record, the Nutricam Dietary Assessment Method (NuDAM), in adults with type 2 diabetes mellitus (T2DM). Criterion validity was determined by comparing energy intake (EI) with total energy expenditure (TEE) measured by the doubly-labelled water technique. Relative validity was established by comparison with a weighed food record (WFR). Inter-rater reliability was assessed by comparing estimates of intake from three dietitians. Ten adults (6 males, age 61.2±6.9 years, BMI 31.0±4.5 kg/m²) participated. Compared with TEE, mean EI was under-reported using both methods, with a mean EI:TEE ratio of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. There were moderate-to-high correlations between the NuDAM and WFR for energy (r=0.57), carbohydrate (r=0.63, p<0.05), protein (r=0.78, p<0.01) and alcohol (rs=0.85, p<0.01), with a weaker relationship for fat (r=0.24). Agreement between dietitians for nutrient intake from the 3-day NuDAM (ICC=0.77-0.99) was marginally lower than for the 3-day WFR (ICC=0.82-0.99). All subjects preferred the NuDAM and were willing to use it again for longer recording periods.
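
Criterion validity here rests on the per-subject ratio of reported energy intake to measured total energy expenditure; values below 1 indicate under-reporting. A minimal sketch with hypothetical values (not the study's data):

```python
import numpy as np

# Hypothetical per-subject values in MJ/day (illustrative only)
ei_nudam = np.array([7.1, 8.4, 6.2, 9.0, 7.8])    # reported intake, NuDAM
tee_dlw = np.array([10.2, 9.8, 9.1, 11.5, 9.6])   # doubly-labelled water TEE

ratio = ei_nudam / tee_dlw
print(f"EI:TEE = {ratio.mean():.2f} ± {ratio.std(ddof=1):.2f}")
# A mean ratio well below 1, like the 0.76 found for both methods here,
# indicates systematic under-reporting of energy intake.
```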

Relevance: 30.00%

Abstract:

Efficiency of analysis using generalized estimating equations is enhanced when the intracluster correlation structure is accurately modeled. We compare two existing criteria (a quasi-likelihood information criterion and the Rotnitzky-Jewell criterion) for identifying the true correlation structure via simulations with Gaussian or binomial responses, covariates varying at the cluster or observation level, and exchangeable or AR(1) intracluster correlation structures. Rotnitzky and Jewell's approach performs better when the true intracluster correlation structure is exchangeable, while the quasi-likelihood criterion performs better for an AR(1) structure.
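
The comparison can be reproduced with standard GEE software. The sketch below assumes statsmodels' GEE implementation and its qic() helper (it is not the paper's simulation code): the same model is fitted under exchangeable and AR(1) working correlations, and the structure with the smaller QIC is preferred.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Autoregressive, Exchangeable

rng = np.random.default_rng(1)
n_clusters, n_obs = 50, 4

# Gaussian response with a shared cluster effect -> exchangeable correlation
cluster = np.repeat(np.arange(n_clusters), n_obs)
time = np.tile(np.arange(n_obs), n_clusters)
x = rng.normal(size=n_clusters * n_obs)           # observation-level covariate
u = np.repeat(rng.normal(scale=0.8, size=n_clusters), n_obs)
y = 1.0 + 0.5 * x + u + rng.normal(size=n_clusters * n_obs)

X = sm.add_constant(pd.Series(x, name="x"))
for cov in (Exchangeable(), Autoregressive()):
    res = sm.GEE(y, X, groups=cluster, time=time,
                 family=sm.families.Gaussian(), cov_struct=cov).fit()
    qic, qicu = res.qic()
    print(f"{type(cov).__name__:15s} QIC = {qic:.1f}")
# The data were generated with exchangeable intracluster correlation,
# so the exchangeable fit should yield the smaller QIC.
```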

Relevance: 30.00%

Abstract:

The first-line medication for mild to moderate Alzheimer's disease (AD) is based on cholinesterase inhibitors, which prolong the effect of the neurotransmitter acetylcholine in cholinergic nerve synapses and thereby relieve the symptoms of the disease. Evidence implicating cholinesterases in disease-modifying processes has increased interest in this research area. Drug discovery and development is a long and expensive process that takes on average 13.5 years and costs approximately 0.9 billion US dollars. Attrition in the clinical phases is common for several reasons, e.g. poor bioavailability of compounds leading to low efficacy, or toxic effects. Thus, improvements in the early drug discovery process are needed to create highly potent, non-toxic compounds with predicted drug-like properties. Nature has been a good source for the discovery of new medicines, accounting for around half of the new drugs approved for market during the last three decades; these compounds are direct isolates from nature, their synthetic derivatives, or natural mimics. Synthetic chemistry is an alternative way to produce compounds for drug discovery purposes, and both sources have pros and cons. The screening of new bioactive compounds in vitro is based on assaying compound libraries against targets. The assay set-up has to be adapted and validated for each screen to produce high-quality data, and depending on the size of the library, miniaturization and automation are often required to reduce solvent and compound consumption and to speed up the process. In this contribution, natural extract, natural pure compound, and synthetic compound libraries were assessed as sources of new bioactive compounds. The libraries were screened primarily for acetylcholinesterase inhibition and secondarily for butyrylcholinesterase inhibition. To screen the libraries, two assays were evaluated as screening tools, adapted to the special features of each library, and validated to ensure high-quality data. Cholinesterase inhibitors with various potencies and selectivities were found in both the natural product and synthetic compound libraries, indicating that the two sources complement each other. Natural compounds are known to differ structurally from compounds in synthetic libraries, which further supports this complementarity, especially when high structural diversity is the criterion for selecting compounds for a library.

Relevance: 30.00%

Abstract:

Peel test methods are assessed by applying them to a peeling analysis of a ductile film/ceramic substrate system. When the fracture work of the system is computed using either the beam bending model (BB model) or the general plane analysis model (GPA model), a surprisingly large difference between the two models' results is found. Although the BB model can capture the plastic dissipation phenomenon for the ductile film case, as the GPA model can, it is much more sensitive to the choice of the peeling criterion parameters, and it overestimates the plastic bending effect because it cannot capture crack-tip constraint plasticity. In view of the difficulty of measuring interfacial toughness with the peel test when the film is ductile, a new test method, the split test, is recommended and analyzed using the GPA model. The prediction is applied to a wedge-loaded Al-alloy double-cantilever beam experiment from the literature.

Relevance: 30.00%

Abstract:

Two types of peeling experiments are performed in the present research: one on an Al film/Al2O3 substrate system with an adhesive layer between the film and the substrate, and the other on a Cu film/Al2O3 substrate system without an adhesive layer, in which the Cu films are electroplated onto the Al2O3 substrates. For the case with an adhesive layer, two adhesives are selected, both mixtures of epoxy and polyimide, with mass ratios of 1:1.5 and 1:1, respectively. The relationships between the energy release rate, the film thickness, and the adhesive layer thickness are measured during steady-state peeling, and the effects of the adhesive layer on the energy release rate are analyzed. Using the experimental results, several analytical criteria for steady-state peeling, based on the bending model and on a two-dimensional finite element analysis model, are critically assessed. Through this assessment, we find that the cohesive zone criterion based on the beam bending model is suitable for the weak-interface case and describes a macroscale fracture process zone, while the two-dimensional finite element model is effective for both strong and weak interfaces and describes a small-scale fracture process zone.
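
For orientation, the elastic baseline against which such peel criteria are usually assessed is Kendall's steady-state relation. In the notation below (not the paper's), a film of width $b$, thickness $h$ and Young's modulus $E$, peeled at angle $\theta$ by force $P$, has energy release rate

$$G = \frac{P}{b}\,(1-\cos\theta) + \frac{P^{2}}{2\,b^{2}hE},$$

where the second (film-stretching) term matters mainly near $\theta = 0$. For ductile films the measured peel energy also contains plastic dissipation, which the bending-model and finite-element criteria assessed above attempt to separate from the true interfacial toughness.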

Relevance: 30.00%

Abstract:

In current practice the strength evaluation of a bridge system is typically based on first using elastic analysis to determine the distribution of load effects among the elements and then checking the ultimate section capacity of those elements. The ductility of the components in most bridge structures permits local yielding and subsequent redistribution of the applied loads away from the most heavily loaded elements. As a result, a bridge can continue to carry additional load even after one member has yielded, although first-member yield has conventionally been adopted as the "failure criterion" in bridge strength evaluation. This means that a bridge with inherent redundancy has additional reserves of strength, such that the failure of one element does not result in the failure of the complete system. For such bridges, warning signs appear and measures can be taken before ultimate collapse occurs. This paper proposes a rational methodology for calculating the ultimate system strength and for including in bridge evaluation the warning level provided by redundancy.
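
The reserve of strength beyond first-member yield can be illustrated with a toy calculation (hypothetical numbers, not the paper's methodology): parallel ductile members share a load elastically, the conventional failure criterion triggers at first-member yield, and system collapse occurs only when total demand reaches total capacity.

```python
import numpy as np

# Hypothetical parallel ductile members carrying a shared unit load
stiffness = np.array([1.0, 1.0, 1.0, 1.0])      # relative stiffnesses
capacity = np.array([0.30, 0.35, 0.40, 0.45])   # member capacities

share = stiffness / stiffness.sum()             # elastic load distribution

# Conventional "failure criterion": load factor at first member yield
lam_first_yield = np.min(capacity / share)

# Ductile members keep carrying their capacity while load redistributes,
# so the redundant system collapses only at the sum of member capacities
lam_collapse = capacity.sum()

print(f"first yield: lambda = {lam_first_yield:.2f}")   # 1.20
print(f"collapse:    lambda = {lam_collapse:.2f}")      # 1.50
print(f"redundancy reserve  = {lam_collapse / lam_first_yield:.2f}")
```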

Relevance: 30.00%

Abstract:

Specific anti-polysaccharide antibody deficiency (SPAD) is an immune disorder whose diagnostic criteria have not yet been clearly defined. One hundred and seventy-six children evaluated for recurrent respiratory tract infections were analysed retrospectively. For each subject, specific anti-pneumococcal antibodies had been measured with two enzyme-linked immunosorbent assays (ELISAs): an overall assay (OA) using the 23-valent pneumococcal polysaccharide vaccine (23-PPSV) as the detecting antigen, and a serotype-specific assay (SSA) using purified pneumococcal polysaccharides of serotypes 14, 19F and 23F. Antibody levels were measured before (n=176) and after (n=93) immunization with the 23-PPSV. Before immunization, low titres were found for 138 of 176 patients (78%) with the OA, compared with 20 of 176 patients (11%) with the SSA. We found a significant correlation between OA and SSA results. After immunization, 88% (71 of 81) of the patients considered responders in the OA were also responders in the SSA, and 93% (71 of 76) of the patients classified as responders according to the SSA were also responders in the OA. SPAD was diagnosed in 8% (seven of 93) of patients on the basis of the absence of a response in both tests. We therefore propose using the OA as a screening test for SPAD before 23-PPSV immunization; after immunization, the SSA should be used only in the case of a low response in the OA. Only the absence of, or a very low, antibody response detected by both tests should be used as a diagnostic criterion for SPAD.
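
The proposed work-flow amounts to a simple two-step decision rule, encoded below. The boolean inputs stand in for whatever titre cut-offs define a "low" post-immunization response in each assay; those cut-offs are not reproduced here.

```python
from typing import Optional

def spad_workup(oa_response_low: bool,
                ssa_response_low: Optional[bool] = None) -> str:
    """OA-first screening strategy proposed in the study (illustrative).

    The OA screens all patients after 23-PPSV immunization; the SSA is
    run only when the OA response is low, and SPAD is diagnosed only
    when the response is absent or very low in BOTH assays.
    """
    if not oa_response_low:
        return "adequate OA response: SPAD excluded, SSA not needed"
    if ssa_response_low is None:
        return "low OA response: run the serotype-specific assay (SSA)"
    if ssa_response_low:
        return "low response in both OA and SSA: diagnostic for SPAD"
    return "adequate SSA response: SPAD not diagnosed"

print(spad_workup(oa_response_low=True, ssa_response_low=True))
```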

Relevance: 30.00%

Abstract:

As many as 20-70% of patients undergoing breast-conserving surgery require repeat surgery due to a close or positive surgical margin diagnosed post-operatively [1]. There are currently no widely accepted tools for intra-operative margin assessment, which is a significant unmet clinical need. Our group has developed a first-generation optical visible spectral imaging platform to image the molecular composition of breast tumor margins and has tested it clinically in 48 patients in a previously published study [2]. The goal of this paper is to report the performance metrics of the system and compare them with the clinical criteria for intra-operative tumor margin assessment. The system was found to have an average signal-to-noise ratio (SNR) >100 and <15% error in the extraction of optical properties, indicating sufficient SNR to leverage the differences in optical properties between negative and close/positive margins. The probe had a sensing depth of 0.5-2.2 mm over the wavelength range of 450-600 nm, consistent with the pathologic criterion for clear margins of 0-2 mm. There was <1% cross-talk between adjacent channels of the multi-channel probe, showing that multiple sites can be measured simultaneously with negligible interference. Lastly, the system and measurement procedure were found to be reproducible under repeated measures, with a low coefficient of variation (<0.11). The only aspect of the system not yet optimized for intra-operative use is the imaging time; the manuscript discusses how the speed of the system can be improved to work within the time constraints of an intra-operative setting.
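
The reproducibility figure quoted is a coefficient of variation across repeated measures; a quick illustration with hypothetical numbers (not the study's data):

```python
import numpy as np

# Hypothetical repeated reflectance readings from one probe channel
repeats = np.array([0.412, 0.405, 0.398, 0.421, 0.409])

cv = repeats.std(ddof=1) / repeats.mean()   # coefficient of variation
print(f"CV = {cv:.3f}")                     # reproducible if below ~0.11
```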

Relevance: 30.00%

Abstract:

The characterisation of soils for civil engineering purposes depends on retrieving sufficiently high-quality samples from the ground. Accurate evaluation of sample quality is therefore important if reliable design parameters are to be determined. This paper describes the use of unconfined shear wave velocity (Vs) and suction (ur) measurements to assess sample quality rapidly in soft clay. Samples of varying quality from three well-characterised soft clay sites are first assessed using conventional techniques, and the results are compared with Vs and ur measurements performed on the same samples. The quality of the samples indicated by these measurements is very similar to that inferred from traditional disturbance measures, with Vs being the more reliable indicator. A tentative empirically derived criterion, based on the samples tested in this project, is proposed to quantify sample disturbance by combining Vs and ur measurements. Further work applying this criterion to different materials is needed to test its usefulness.

Relevance: 30.00%

Abstract:

Building Information Modelling (BIM) is growing apace, not only in the design and construction stages, but also in the analysis of facilities throughout their life cycle. With this continued growth of BIM processes comes the possibility of adopting such procedures to measure the energy efficiency of buildings and to estimate their energy usage accurately. To this end, the aim of this research is to investigate whether BIM energy performance assessment, in the form of software analysis, provides accurate results when compared with recorded actual energy consumption. Through selective sampling, three domestic case studies are examined, with baseline figures taken from the existing energy providers and compared with calculations from two separate BIM energy analysis software packages. Of the numerous packages available, criterion sampling is used to select two of the most prominent platforms on the market today: Integrated Environmental Solutions - Virtual Environment (IES-VE) and Green Building Studio (GBS). The results indicate that IES-VE estimated energy use to within about ±8% in two of the three case studies, while GBS estimated usage to within about ±5%. The findings indicate that BIM energy performance assessment using proprietary software analysis is a viable alternative to manual calculation of building energy use, mainly because of the accuracy and speed of assessment, even for the most complex models. Given the surge in accurate and detailed BIM models and the importance placed on the continued monitoring and control of buildings' energy use in today's environmentally conscious society, this provides an alternative means of assessing a building's energy usage accurately, quickly, and cost-effectively.

Relevance: 30.00%

Abstract:

Cascade control is one of the most routinely used control strategies in industrial processes because it can dramatically improve the performance of single-loop control, reducing both the maximum deviation and the integral error of the disturbance response. Many existing control performance assessment methods for cascade control loops are based on the assumption that all disturbances follow a Gaussian distribution; in practice, however, disturbance sources act on the manipulated variable or the upstream process exhibits nonlinear behaviour, so the disturbances are not Gaussian. In this paper, a general and effective index for performance assessment of cascade control systems subject to disturbances of unknown distribution is proposed. As in minimum variance control (MVC) design, the output variances of the primary and secondary loops are decomposed into a cascade-invariant and a cascade-dependent term, but the ARMA model of the cascade control loop is estimated on a minimum-entropy basis, rather than by minimum mean-square error, to accommodate non-Gaussian disturbances. Unlike the MVC index, the proposed control performance index is based on information theory and the minimum entropy criterion. The index is informative and agrees with expected control knowledge. To demonstrate the wide applicability and effectiveness of the minimum-entropy cascade control index, it is applied to a simulation problem and to a cascade control case from an oil refinery, and a comparison with MVC-based cascade control assessment is included.
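
The entropy-based ingredient can be sketched as follows: instead of comparing the minimum achievable output variance with the actual variance, as the MVC index does, one compares entropies estimated from data without any Gaussian assumption. The sketch below illustrates that idea only; the histogram entropy estimator, the least-squares AR fit, and the exponential index form are simplifications, not the paper's method.

```python
import numpy as np

def differential_entropy(x, bins=50):
    """Histogram estimate of differential entropy in nats (no Gaussianity)."""
    p, edges = np.histogram(x, bins=bins, density=True)
    w = np.diff(edges)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]) * w[nz])

def ar_innovations(y, order=5):
    """Least-squares AR fit; returns one-step-ahead prediction residuals."""
    Y = np.column_stack([y[order - k - 1:len(y) - k - 1] for k in range(order)])
    coef, *_ = np.linalg.lstsq(Y, y[order:], rcond=None)
    return y[order:] - Y @ coef

# Hypothetical non-Gaussian loop output: AR(1) driven by Laplace noise
rng = np.random.default_rng(0)
e = rng.laplace(scale=1.0, size=5000)
y = np.zeros_like(e)
for t in range(1, len(e)):
    y[t] = 0.8 * y[t - 1] + e[t]

h_out = differential_entropy(y)
h_inn = differential_entropy(ar_innovations(y))
# Entropy analogue of the MVC ratio: values near 1 mean the achievable
# (innovation) entropy is close to the observed output entropy, i.e.
# little room for improvement by retuning the feedback.
print(f"H(y) = {h_out:.2f}, H(innov) = {h_inn:.2f}, "
      f"index = {np.exp(h_inn - h_out):.2f}")
```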

Relevance: 30.00%

Abstract:

An efficient and robust case sorting algorithm based on the Extended Equal Area Criterion (EEAC) is proposed in this paper for power system transient stability assessment (TSA). The time-varying degree of an equivalent image system can be deduced by comparing the analysis results of Static EEAC (SEEAC), which neglects all time-varying factors, with those of Dynamic EEAC (DEEAC), which partially considers them. Case sorting rules based on transient stability severity are then set by combining the time-varying degree with fault information. A case sorting algorithm is designed using "OR" logic among multiple rules, assigning each case to one of five categories: stable, suspected stable, marginal, suspected unstable, and unstable. The performance of this algorithm is verified on 1652 contingency cases from nine real Chinese provincial power systems under various operating conditions. Desirable classification accuracy is achieved for all contingency cases at the cost of very little extra computational burden, and only 9.81% of the cases need further detailed calculation under rigorous on-line TSA conditions.
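
The "OR" logic among rules can be made concrete with a toy classifier. The thresholds and rule forms below are hypothetical, standing in for the paper's rules built from the SEEAC/DEEAC discrepancy and fault information:

```python
from dataclasses import dataclass

@dataclass
class Case:
    margin_seeac: float       # stability margin from Static EEAC
    margin_deeac: float       # stability margin from Dynamic EEAC
    fault_cleared_fast: bool  # example item of fault information

def time_varying_degree(c: Case) -> float:
    """A large SEEAC/DEEAC discrepancy signals strong time-varying behaviour."""
    return abs(c.margin_deeac - c.margin_seeac)

def sort_case(c: Case) -> str:
    """Multiple rules combined with 'OR' logic (illustrative thresholds)."""
    tvd = time_varying_degree(c)
    if c.margin_deeac > 20 and (tvd < 5 or c.fault_cleared_fast):
        return "stable"
    if c.margin_deeac > 20:          # healthy margin but strongly time-varying
        return "suspected stable"
    if c.margin_deeac < -20 and tvd < 5:
        return "unstable"
    if c.margin_deeac < -20:
        return "suspected unstable"
    return "marginal"                # route to detailed time-domain simulation

print(sort_case(Case(margin_seeac=30, margin_deeac=28, fault_cleared_fast=True)))
```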