936 results for Reliability in automation
Abstract:
Decline in gait stability has been associated with increased fall risk in older adults. Reliable and clinically feasible methods of gait instability assessment are needed. This study evaluated the relative and absolute reliability and concurrent validity of the testing procedure of the clinical version of the Narrow Path Walking Test (NPWT) under single task (ST) and dual task (DT) conditions. Thirty independent community-dwelling older adults (65-87 years) were tested twice. Participants were instructed to walk within the 6-m narrow path without stepping out. Trial time, number of steps, trial velocity, number of step errors, and number of cognitive task errors were determined. Intraclass correlation coefficients (ICCs) were calculated as indices of agreement, and a graphic approach called "mountain plot" was applied to help interpret the direction and magnitude of disagreements between testing procedures. Smallest detectable change and smallest real difference (SRD) were computed to determine clinically relevant improvement at group and individual levels, respectively. Concurrent validity was assessed using the Performance Oriented Mobility Assessment Tool (POMA) and the Short Physical Performance Battery (SPPB). Test-retest agreement (ICC1,2) varied from 0.77 to 0.92 in ST and from 0.78 to 0.92 in DT conditions, with no apparent systematic differences between testing procedures demonstrated by the mountain plot graphs. Smallest detectable change and smallest real difference were small for motor task performance and larger for cognitive errors. Significant correlations were observed for trial velocity and trial time with POMA and SPPB. The present results indicate that the NPWT testing procedure is highly reliable and reproducible.
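The smallest detectable change reported above follows directly from the ICC and the between-subject variability. A minimal Python sketch of that relationship (the numbers are hypothetical illustrations, not the study's data):

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement from the between-subject SD and the ICC."""
    return sd * math.sqrt(1.0 - icc)

def smallest_detectable_change(sd: float, icc: float) -> float:
    """SDC at the 95% level: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

# Hypothetical values: ICC = 0.92 (the upper end reported for the NPWT)
# and a between-subject SD of trial time of 2.0 s.
print(round(smallest_detectable_change(2.0, 0.92), 2))
```

Higher ICCs shrink the SEM and hence the change a clinician must observe before calling an improvement real rather than measurement noise.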
Abstract:
The objective of this study was to find out the relationship between students' perception of Social Studies and their academic performance in the subject in Colleges of Education in Kaduna State. The respondents of the study comprised NCE 2 and NCE 3 students of Social Studies at the Federal College of Education in Zaria and the Kaduna State College of Education in Gidan Waya. The data for the study were collected using a questionnaire with a reliability coefficient of 0.87. Pearson's Product Moment Correlation Coefficient was used to test the hypotheses formulated for the study. The hypotheses were tested at the 0.05 level of significance with df = 232. The findings were as follows: 1. The Social Studies students in colleges of education in Kaduna State do not record high academic performance in the subject. 2. Students' perception of the Social Studies curriculum does affect their academic performance in the subject. 3. Students' perception of the relevance of Social Studies education has no bearing on their academic performance in the subject. 4. Students' perception of public attitude towards Social Studies has no impact on their academic performance in the subject. 5. Students' general perception of Social Studies does not affect their academic performance in the subject. Based on these findings, the following recommendations were proffered for Social Studies researchers and policy makers: a. There is a need for further research in order to determine the actual cause of students' failure to display high performance in Social Studies. b. The NCE Social Studies curriculum should be reviewed in terms of volume and difficulty.
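The hypothesis tests above rest on Pearson's product-moment correlation. A stdlib-only sketch, using hypothetical scores (not the study's data), of how r and its t statistic with df = n - 2 are obtained:

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical perception scores vs. exam marks (illustrative only).
perception = [3.2, 2.8, 4.1, 3.7, 2.5, 3.9, 3.0, 4.4]
performance = [55, 48, 60, 58, 50, 62, 49, 64]
r = pearson_r(perception, performance)
df = len(perception) - 2          # degrees of freedom for H0: rho = 0
t = r * math.sqrt(df / (1 - r * r))
print(round(r, 3), round(t, 2))
```

The t value is then compared against the critical value for the chosen significance level (0.05 in the study) at the given df.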
Abstract:
German accounting rules value assets and liabilities asymmetrically and thus lead to grossly distorted balance sheets. In the interwar debate on a reform of disclosure regulation, financial experts considered the (undisclosed) tax balance sheet, which had to be drawn up separately for the corporate tax assessment, as a paradigm for adequate financial disclosure. However, due to tax secrecy they were barred from analyzing tax documents. Using archival evidence, we analyze tax balance sheets from which the reliability of disclosed balance sheets of the interwar period can be assessed. It emerges that companies overstated their profits in the mid and late 1920s, but grossly understated them in the Nazi economy.
Abstract:
We have analyzed the spatial accuracy of European foreign trade statistics compared to Latin American ones. We have also included the USA's data because of the importance of this country in Latin American trade. We have developed a method for mapping discrepancies between exporters and importers, trying to isolate systematic spatial deviations. Although our results don't allow a unique explanation, they present some interesting clues to the distribution channels in the Latin American continent, as well as some spatial deviations in the statistics of individual countries. Connecting our results with the literature specialized in the accuracy of foreign trade statistics, we can revisit Morgenstern (1963) as well as Federico and Tena (1991). Morgenstern had a really pessimistic view of the reliability of this statistical source, but his main alert was focused on trade balances, not on gross export or import values. Federico and Tena (1991) have demonstrated how accuracy increases with aggregation, both geographical and by product. But they still have a pessimistic view with regard to distribution questions, remarking that it may be more accurate to use import sources in this latter case. We have stated that the data set coming from foreign trade statistics for a sample in 1925, whether from exporters or importers, is a valuable tool for the geography of trade patterns, although in some specific cases it needs some spatial adjustments.
Abstract:
The results of the examinations taken by graduated high school students who want to enrol at a Catalan university are here studied. To do so, the authors address several issues related to the equity of the system: reliability of grading, difficulty and discrimination power of the exams. The general emphasis is put upon the concurrent research and empirical evidence about the properties of the examination items and scores. After a discussion about the limitations of the exams' format and the appropriateness of the instruments used in the study, the article concludes with some suggestions to improve such examinations.
Abstract:
Evaluating leaf litter beetle data sampled by Winkler extraction from Atlantic forest sites in southern Brazil. To evaluate the reliability of data obtained by Winkler extraction in Atlantic forest sites in southern Brazil, we studied litter beetle assemblages in secondary forests (5 to 55 years after abandonment) and old-growth forests at two seasonally different points in time. For all regeneration stages, species density and abundance were lower in April compared to August; but, assemblage composition of the corresponding forest stages was similar in both months. We suggest that sampling of small litter inhabiting beetles at different points in time using the Winkler technique reveals identical ecological patterns, which are more likely to be influenced by sample incompleteness than by differences in their assemblage composition. A strong relationship between litter quantity and beetle occurrences indicates the importance of this variable for the temporal species density pattern. Additionally, the sampled beetle material was compared with beetle data obtained with pitfall traps in one old-growth forest. Over 60% of the focal species captured with pitfall traps were also sampled by Winkler extraction in different forest stages. Few beetles with a body size too large to be sampled by Winkler extraction were only sampled with pitfall traps. This indicates that the local litter beetle fauna is dominated by small species. Hence, being aware of the exclusion of large beetles and beetle species occurring during the wet season, the Winkler method reveals a reliable picture of the local leaf litter beetle community.
Abstract:
The examinations taken by high-school graduates in Spain and the role of the examination in the university admissions process are described. The following issues arising in the assessment of the process are discussed: reliability of grading, comparability of the grades and scores (equating), maintenance of standards, and compilation and use of the grading process; their integration in the operational grading is proposed. Various schemes for score adjustment are reviewed and the feasibility of their implementation discussed. The advantages of pretesting items and of empirical checks of experts' judgements are pointed out. The paper concludes with an outline of a planned reorganisation of higher education in Spain, and with a call for a comprehensive programme of empirical research concurrent with the operation of the examination and scoring system.
Abstract:
Purpose: Many countries use the PGMI (P=perfect, G=good, M=moderate, I=inadequate) classification system for assessing the quality of mammograms. Limits inherent to the subjectivity of this classification have been shown. Prior to introducing this system in Switzerland, we wanted to better understand the origin of this subjectivity in order to minimize it. Our study aimed at identifying the main determinants of the variability of the PGMI system and which criteria are the most subject to subjectivity. Methods and Materials: A focus group composed of 2 experienced radiographers and 2 radiologists specified each PGMI criterion. Ten raters (6 radiographers and 4 radiologists) evaluated twice a panel of 40 randomly selected mammograms (20 analogue and 20 digital) according to these specified PGMI criteria. The PGMI classification was assessed and the intra- and inter-rater reliability was tested for each professional group (radiographer vs radiologist), image technology (analogue vs digital) and PGMI criterion. Results: Some 3,200 images were assessed. The intra-rater reliability appears to be weak, particularly with respect to inter-rater variability. Subjectivity appears to be largely independent of the professional group and image technology. The aspects of the PGMI classification criteria most subject to variability were identified. Conclusion: Post-test discussions enabled some criteria to be specified more precisely. This should reduce subjectivity when applying the PGMI classification system. A concomitant, important effort in training radiographers is also necessary.
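The abstract does not name the agreement statistic used; inter-rater agreement on a categorical scale such as PGMI is commonly summarized with Cohen's kappa. A minimal sketch on hypothetical ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical ratings of the same items."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Agreement expected by chance from each rater's marginal frequencies
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical PGMI ratings of 10 images by two raters
rater1 = list("PGGMIPGMGP")
rater2 = list("PGMMIPGIGP")
print(round(cohens_kappa(rater1, rater2), 3))
```

Kappa near 1 indicates agreement well beyond chance; values near 0 indicate agreement no better than guessing from the marginal rating frequencies.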
Abstract:
BACKGROUND AND OBJECTIVES: Recombinant human growth hormone (rhGH) has been on the list of forbidden substances since availability of its recombinant form improved in the early 1990s. Although its effectiveness in enhancing physical performance is still unproved, the compound is likely used for its potential anabolic effect on muscle growth, and also in combination with other products (androgens, erythropoietin, etc.). The degree of similarity between the endogenous and the recombinant forms, the pulsatile secretion and the marked interindividual variability make detection of doping difficult. Two approaches proposed to overcome this problem are: the indirect method, which measures a combination of several factors in the biological cascade affected by administration of GH; and the direct method, which measures the difference between the circulating and the recombinant (represented by the unique 22 kD molecule) forms of GH. This article gives an overview of what is presently known about hGH in relation to sport. The available methods of detection are also evaluated. METHODS: Review of the literature on GH in relation to exercise, its adverse effects and methods of detection when used for doping. RESULTS AND CONCLUSION: The main effects of exercise on hGH production and the use and effects of rhGH in athletes are discussed. Difficulties encountered by laboratories in proving misuse of this substance by both indirect and direct analyses are emphasised. The direct method currently seems to have the best reliability, even though the time window of detection is too short. hGH doping is a major challenge in the fight against doping. The effect of exercise on hGH and its short half-life still present difficulties during doping analysis. To date the most promising method appears to be the direct approach utilising immunoassays.
Abstract:
The objective of this study was to identify the attributes and dimensions of service quality affecting the service performance of the five-star resort hotels located in the Cape Verde Islands. The motivation for this research was the paramount role of the resort hotels in the development of the travel and tourism sector in Cape Verde, and the impact that this sector has today on the economy of that country. The research opens with a literature review of service quality theory in the hotel industry, from the classic model of service quality and the SERVQUAL instrument of the mid-1980s to recent models of service quality measurement in the hotel industry, such as the scale of items developed in 2003 for the Lodging Quality Index (LQI). Furthermore, the study analyzes the importance of travel and tourism activities in the Cape Verde Islands, and shows the enormous importance of those activities for the performance of the Cape Verdean hotel industry. The study then analyzes the hotel industry of Cape Verde in detail and identifies the market size of the five-star resort hotels and the current operators in that market. Moreover, an online questionnaire was prepared and sent through travel websites and communities to guests who had experienced the service of the five-star resort hotels located in the Cape Verde Islands. The scope of the questionnaire was to assess the attributes and dimensions of service quality in the five-star resort hotels of Cape Verde. The results of the questionnaire were then analyzed through descriptive and applied statistics, using Microsoft Excel and the Statistical Package for the Social Sciences (SPSS). Content validity analysis, factor analysis, and reliability analysis of the factors were carried out to purify an initial scale of 47 items of service quality.
An instrument with three dimensions covering twenty-four attributes of service quality assessment in the five-star resort hotels of Cape Verde was finally created. The three dimensions found were: staff competence; food and entertainment; and physical facilities. This study of service in the five-star resort hotels of Cape Verde ends with brief comments on the status of service quality according to the identified dimensions and their attributes. In the conclusion, the study summarizes the whole work and gives some directions for future research.
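The reliability analysis used to purify an item scale of this kind is typically a Cronbach's alpha computation over the item scores. A minimal sketch with hypothetical Likert scores (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale given as a list of item-score
    columns (one list of respondent scores per item)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance(col) for col in items)
    # Total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical 3-item scale answered by 5 guests (5-point Likert scores)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```

Items whose removal raises alpha are candidates for dropping, which is how a 47-item pool gets purified down to the dimensions that survive.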
Abstract:
OBJECTIVE: The aim of this study was to evaluate a French language version of the Adolescent Drug Abuse Diagnosis (ADAD) instrument in a Swiss sample of adolescent illicit drug and/or alcohol users. PARTICIPANTS AND SETTING: The participants in the study were 102 French-speaking adolescents aged 13-19 years who met the criteria of illicit drug or alcohol use (at least one substance--except tobacco--once a week during the last 3 months). They were recruited in hospitals, institutions and leisure places. PROCEDURE: The ADAD was administered individually by trained psychologists. It was integrated into a broader protocol including alcohol and drug abuse DSM-IV diagnoses, the BDI-13 (Beck Depression Inventory), life events and treatment trajectories. RESULTS: The ADAD appears to show good inter-rater reliability; the subscales showed good internal coherence and the correlations between the composite scores and the severity ratings were moderate to high. Finally, the results confirmed good concurrent validity for three out of eight ADAD dimensions. CONCLUSIONS: The French language version of the ADAD appears to be an adequate instrument for assessing drug use and associated problems in adolescents. Despite its complexity, the instrument meets acceptable validity, reliability and usefulness criteria, enabling international and transcultural comparisons.
Abstract:
Reducing a test administration to standardised procedures reflects the test designers' standpoint. From the practitioners' standpoint, however, each client is unique. How do psychologists deal with both standardised test administration and clients' diversity? To answer this question, we interviewed 17 psychologists working in three public services for children and adolescents about their assessment practices. We analysed the numerous "client categorisations" they produced in their accounts. We found that they had shared perceptions of their clients' diversity, and reported various non-standard practices that complemented standardised test administration, but also departed from it or were even forbidden. They seem to experience a dilemma between: (a) prescribed and situated practices; (b) scientific and situated reliability; (c) commutative and distributive justice. For practitioners, dealing with clients' diversity is a practical problem, halfway between a problem-solving task and a moral dilemma.
Abstract:
This PhD thesis addresses the issue of scalable media streaming in large-scale networking environments. Multimedia streaming is one of the largest sinks of network resources, and this trend is still growing, as testified by the success of services like Skype, Netflix, Spotify and Popcorn Time (BitTorrent-based). In traditional client-server solutions, when the number of consumers increases, the server becomes the bottleneck. To overcome this problem, the Content-Delivery Network (CDN) model was invented. In the CDN model, the server copies the media content to CDN servers located at different strategic points in the network. However, CDNs require heavy infrastructure investment around the world, which is expensive. Peer-to-peer (P2P) solutions are another way to achieve the same result. These solutions are naturally scalable, since each peer can act as both a receiver and a forwarder. Most of the proposed streaming solutions in P2P networks focus on routing scenarios to achieve scalability. However, these solutions cannot work properly in video-on-demand (VoD) streaming when resources of the media server are not sufficient. Replication is a solution that can be used in these situations. This thesis provides a family of replication-based media streaming protocols that are scalable, efficient and reliable in P2P networks. First, it provides SCALESTREAM, a replication-based streaming protocol that adaptively replicates media content on different peers to increase the number of consumers that can be served in parallel. The adaptiveness of this solution relies on the fact that it takes into account constraints such as the bandwidth capacity of peers to decide when to add or remove replicas. SCALESTREAM routes media blocks to consumers over a tree topology, assuming a reliable network composed of peers that are homogeneous in terms of bandwidth.
Second, this thesis proposes RESTREAM, an extended version of SCALESTREAM that addresses the issues raised by unreliable networks composed of heterogeneous peers. Third, this thesis proposes EAGLEMACAW, a multiple-tree replication streaming protocol in which two distinct trees, named EAGLETREE and MACAWTREE, are built in a decentralized manner on top of an underlying mesh network. These two trees collaborate to serve consumers in an efficient and reliable manner: the EAGLETREE is in charge of improving efficiency, while the MACAWTREE guarantees reliability. Finally, this thesis provides TURBOSTREAM, a hybrid replication-based streaming protocol in which a tree overlay is built on top of a mesh overlay network. Both overlays cover all peers of the system and collaborate to improve efficiency and reduce latency in streaming media to consumers. This protocol is implemented and tested in a real networking environment using the PlanetLab Europe testbed, composed of peers distributed across Europe.
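The tree-based dissemination idea shared by these protocols can be sketched minimally: each peer forwards every block it receives to its children, so the source uploads each block once per child rather than once per consumer. This toy Peer class is illustrative only, not the thesis's SCALESTREAM implementation:

```python
class Peer:
    """Toy peer in a dissemination tree (hypothetical sketch)."""

    def __init__(self, name):
        self.name = name
        self.children = []   # downstream peers this peer forwards to
        self.received = []   # blocks delivered to this peer, in order

    def deliver(self, block):
        # Accept the block, then push it down the subtree.
        self.received.append(block)
        for child in self.children:
            child.deliver(block)

# Source -> relay -> two leaves: the source uploads each block once,
# yet all three downstream peers receive the full stream.
source, relay = Peer("source"), Peer("relay")
leaf_a, leaf_b = Peer("a"), Peer("b")
source.children = [relay]
relay.children = [leaf_a, leaf_b]
for block in range(3):
    source.deliver(block)
print(leaf_a.received)
```

The interior peers carry the forwarding load, which is exactly why peer bandwidth heterogeneity and churn (the problems RESTREAM and EAGLEMACAW target) matter so much in practice.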
Abstract:
Aim: The imperfect detection of species may lead to erroneous conclusions about species-environment relationships. Accuracy in species detection usually requires temporal replication at sampling sites, a time-consuming and costly monitoring scheme. Here, we applied a lower-cost alternative based on a double-sampling approach to incorporate the reliability of species detection into regression-based species distribution modelling. Location: Doñana National Park (south-western Spain). Methods: Using species-specific monthly detection probabilities, we estimated the detection reliability as the probability of having detected the species given the species-specific survey time. Such reliability estimates were used to account explicitly for data uncertainty by weighting each absence. We illustrated how this novel framework can be used to evaluate four competing hypotheses as to what constitutes the primary environmental control of amphibian distribution: breeding habitat, aestivating habitat, spatial distribution of surrounding habitats and/or major ecosystem zonation. The study was conducted on six pond-breeding amphibian species during a 4-year period. Results: Non-detections should not be considered equivalent to real absences, as their reliability varied considerably. The occurrence of Hyla meridionalis and Triturus pygmaeus was related to a particular major ecosystem of the study area, where suitable habitat for these species seemed to be widely available. Characteristics of the breeding habitat (area and hydroperiod) were of high importance for the occurrence of Pelobates cultripes and Pleurodeles waltl. Terrestrial characteristics were the most important predictors of the occurrence of Discoglossus galganoi and Lissotriton boscai, along with the spatial distribution of breeding habitats for the last species. Main conclusions: We did not find a single best-supported hypothesis valid for all species, which stresses the importance of multiscale and multifactor approaches. More importantly, this study shows that estimating the reliability of non-detection records, an exercise that had previously been seen as a naïve goal in species distribution modelling, is feasible and could be promoted in future studies, at least in comparable systems.
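The reliability weighting described in the Methods can be sketched under the simplifying assumption that detection is independent across monthly surveys; the function and numbers below are illustrative, not the authors' estimates:

```python
def nondetection_reliability(p_monthly: float, months_surveyed: int) -> float:
    """Probability the species would have been detected at least once over
    the survey period, given a per-month detection probability. Used as
    the weight (reliability) of a non-detection record in the regression."""
    return 1.0 - (1.0 - p_monthly) ** months_surveyed

# Hypothetical per-month detection probabilities: a conspicuous breeder
# vs. a secretive one, each surveyed for 4 months.
print(round(nondetection_reliability(0.6, 4), 3))  # absence is strong evidence
print(round(nondetection_reliability(0.1, 4), 3))  # absence is weak evidence
```

Down-weighting the unreliable absences keeps poorly surveyed or hard-to-detect sites from dragging the fitted species-environment relationship toward spurious "absence" signal.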
Abstract:
BACKGROUND: Hyperoxaluria is a major risk factor for kidney stone formation. Although urinary oxalate measurement is part of all basic stone risk assessment, there is no standardized method for this measurement. METHODS: Urine samples from 24-h urine collection covering a broad range of oxalate concentrations were aliquoted and sent, in duplicate, to six blinded international laboratories for oxalate, sodium and creatinine measurement. In a second set of experiments, ten pairs of native urine and urine spiked with 10 mg/L of oxalate were sent for oxalate measurement. Three laboratories used a commercially available oxalate oxidase kit, two laboratories used a high-performance liquid chromatography (HPLC)-based method and one laboratory used both methods. RESULTS: Intra-laboratory reliability for oxalate measurement expressed as the intraclass correlation coefficient (ICC) varied between 0.808 [95% confidence interval (CI): 0.427-0.948] and 0.998 (95% CI: 0.994-1.000), with lower values for HPLC-based methods. Acidification of urine samples prior to analysis led to significantly higher oxalate concentrations. The ICC for inter-laboratory reliability varied between 0.745 (95% CI: 0.468-0.890) and 0.986 (95% CI: 0.967-0.995). Recovery of the 10 mg/L oxalate-spiked samples varied between 8.7 ± 2.3 and 10.7 ± 0.5 mg/L. Overall, HPLC-based methods showed more variability than the oxalate oxidase kit-based methods. CONCLUSIONS: Significant variability was noted in the quantification of urinary oxalate concentration by different laboratories, which may partially explain the differences in hyperoxaluria prevalence reported in the literature. Our data stress the need for standardization of the method of oxalate measurement.
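Spike recovery, as used in the second set of experiments, is simply the measured difference between the spiked and native aliquots relative to the amount added. A minimal sketch with hypothetical concentrations (not the study's measurements):

```python
def spike_recovery(native: float, spiked: float, added: float) -> float:
    """Recovery (%) of an oxalate spike: the fraction of the added
    amount (mg/L) actually recovered in the measurement."""
    return 100.0 * (spiked - native) / added

# Hypothetical duplicate pair (mg/L): 10 mg/L of oxalate added to
# the native urine before analysis.
print(round(spike_recovery(22.4, 31.1, 10.0), 1))
```

Recoveries systematically below or above 100% point to a method bias, which is why the reported 8.7-10.7 mg/L range across laboratories signals the need for standardization.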