Abstract:
Within-subject standardization (ipsatization) has been advocated as a possible means of controlling for culture-specific responding (e.g., Fisher, 2004). However, the consequences of different ipsatization procedures for the interpretation of mean differences remain unclear. The current study compared several ipsatization procedures with ANCOVA-style procedures using response-style indicators for the construct of family orientation, with data from 14 cultures and two generations from the Value-of-Children (VOC) Study (4135 dyads). Results showed that within-subject centering/standardizing across all Likert-scale items of the comprehensive VOC questionnaire removed most of the original cross-cultural variation in family orientation and led to a non-interpretable pattern of means in both generations. Within-subject centering/standardizing using a subset of 19 unrelated items led to a decrease to about half of the original effect size and produced a theoretically meaningful pattern of means. A similar effect size and similar mean differences were obtained when a measure of acquiescent responding based on the same set of items was used in an ANCOVA-style analysis. Additional models controlling for extremity and modesty performed worse, and combinations did not differ from the acquiescence-only model. The usefulness of different approaches to controlling for uniform response styles (when scalar equivalence is not given) in cross-cultural comparisons is discussed.
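The centering/standardizing operation described in this abstract can be sketched in a few lines. This is a generic illustration of ipsatization, not the study's actual code; the Likert ratings below are invented.

```python
# Within-subject standardization (ipsatization): each respondent's item
# scores are centered on that respondent's own mean across a set of items
# and, optionally, divided by the respondent's own standard deviation.
# The ratings used below are invented 1-5 Likert responses.
from statistics import mean, stdev

def ipsatize(row, standardize=True):
    """Center one respondent's item scores on the respondent's own mean;
    if standardize is True, also divide by the respondent's own SD."""
    m = mean(row)
    centered = [x - m for x in row]
    if standardize:
        s = stdev(row)
        centered = [c / s for c in centered]
    return centered

# An acquiescent responder's uniformly high raw scores...
print(ipsatize([5, 4, 5, 5, 4], standardize=False))
# ...become small deviations around zero: the uniform elevation is removed,
# which is exactly why ipsatization also removes level differences between
# cultures when applied across all items.
```

After standardizing, every respondent's profile has mean 0 and SD 1, so only the relative patterning of items remains comparable across respondents.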
Abstract:
Energy consumption in industrialized countries far exceeds a sustainable level. Previous research on the determinants of overall consumption levels has yielded contradictory results as to what the main drivers are. While research on the relationship between environmental concern and pro-environmental behavior emphasizes the importance of motivational aspects, more impact-oriented research challenges these findings and underlines the influence of a person's social standing. The aim of our research was to determine how much of per-capita energy consumption can be explained by structural, socio-demographic, and pro-environmental motivational variables. Data come from standardized interviews with a representative sample (N = 1014) in Germany. Different indicators of per-capita use were collected and provide the basis for calculating the overall consumption level. In addition, person variables, lifestyle milieus, self-reported energy use, and motivational variables were assessed. Initial regression analyses show distinct patterns of determinants for the different indicators of overall energy use. While variance in self-reported use is mainly explained by environmental concern, more impact-oriented indicators, such as the size of personal living space and the distances of vacation trips, predominantly correlate with status-relevant predictors. These preliminary results support the suspicion that although environmentally aware people intend to reduce their energy use, they rarely go beyond low-impact actions.
Abstract:
BACKGROUND Kidney recipients maintaining prolonged allograft survival in the absence of immunosuppressive drugs and without evidence of rejection are thought to be exceptional. The ERA-EDTA-DESCARTES working group, together with Nantes University, launched a European-wide survey to identify new patients, describe them, and estimate their frequency for the first time. METHODS Seventeen coordinators distributed a questionnaire in 256 transplant centres and 28 countries in order to report as many 'operationally tolerant' patients (TOL; defined as having a serum creatinine <1.7 mg/dL and proteinuria <1 g/day or g/g creatinine despite at least 1 year without any immunosuppressive drug) and 'almost tolerant' patients (minimally immunosuppressed patients (MIS) receiving low-dose steroids) as possible. We recorded their number and the total number of kidney transplants performed at each centre to calculate their frequency. RESULTS One hundred and forty-seven questionnaires were returned, and we identified 66 TOL (61 with complete data) and 34 MIS patients. Of the 61 TOL patients, 26 had previously been described by the Nantes group; 35 new patients are presented here. Most of them were noncompliant patients. At data collection, 31/35 patients were alive and 22/31 were still TOL. Of the remaining 9, 2 had been restarted on immunosuppressive drugs and 7 had rising creatinine, of whom 3 had resumed dialysis. Considering all patients, 10-year death-censored graft survival after immunosuppression weaning reached 85% in TOL patients and 100% in MIS patients. With 218 913 kidney recipients surveyed, the cumulative incidences of operational tolerance and almost tolerance were estimated at 3 and 1.5 per 10 000 kidney recipients, respectively. CONCLUSIONS In kidney transplantation, operational tolerance and almost tolerance are infrequent findings associated with excellent long-term death-censored graft survival.
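The reported frequencies follow directly from the counts given in the abstract; a quick arithmetic check:

```python
# Cumulative incidence per 10 000 kidney recipients, using the counts
# reported in the abstract: 66 TOL and 34 MIS patients among 218 913
# recipients surveyed.
recipients = 218_913
tol_per_10000 = 66 / recipients * 10_000   # ~3.0, as reported
mis_per_10000 = 34 / recipients * 10_000   # ~1.5, as reported
print(round(tol_per_10000, 2), round(mis_per_10000, 2))
```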
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended for monitoring antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk-chart-guided VL testing in detecting virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific. RESULTS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
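The accuracy measures quoted in this abstract are standard confusion-matrix quantities; a minimal sketch (the counts below are invented for illustration, not taken from the cohorts):

```python
# Positive predictive value and sensitivity for a targeted-testing rule:
# among patients flagged for VL testing, PPV is the fraction who truly
# have virologic failure; sensitivity is the fraction of all failures
# that the rule flags. All counts below are invented.
def ppv(true_pos, false_pos):
    """Of those flagged for testing, how many actually failed."""
    return true_pos / (true_pos + false_pos)

def sensitivity(true_pos, false_neg):
    """Of those who failed, how many were flagged for testing."""
    return true_pos / (true_pos + false_neg)

# Hypothetical counts: 96 true positives, 4 false positives,
# 45 failures missed by the rule.
print(ppv(96, 4), sensitivity(96, 45))
```

Testing a larger share of patients flags more of the true failures, which is why both PPV and sensitivity rise from the 10% to the 40% testing rules.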
Abstract:
Pollen and plant-macrofossil data are presented for two lakes near the timberline in the Italian (Lago Basso, 2250 m) and Swiss Central Alps (Gouille Rion, 2343 m). The reforestation at both sites started at 9700-9500 BP with Pinus cembra, Larix decidua, and Betula. The timberline reached its highest elevation between 8700 and 5000 BP and retreated after 5000 BP, due to a mid-Holocene climatic change and increasing human impact since about 3500 BP (Bronze Age). The expansion of Picea abies at Lago Basso between ca. 7500 and 6200 BP was probably favored by cold phases accompanied by increased oceanicity, whereas in the area of Gouille Rion, where spruce expanded rather late (between 4500 and 3500 BP), human influence may equally have been important. The mass expansion of Alnus viridis between ca. 5000 and 3500 BP can probably be related to both climatic change and human activity at the timberline. During the early and middle Holocene a series of timberline fluctuations is recorded as declines in pollen and macrofossil concentrations of the major tree species, and as increases in nonarboreal pollen in the pollen percentage diagram of Gouille Rion. Most of the periods of low timberline can be correlated by radiocarbon dating with climatic changes in the Alps as indicated by glacier advances in combination with palynological records, solifluction, and dendroclimatological data. Lago Basso and Gouille Rion are the only sites in the Alps showing complete palaeobotanical records of cold phases between 10,000 and 2000 BP with very good time control. The altitudinal range of the Holocene treeline fluctuations caused by climate was most likely not more than 100 to 150 m. A possible correlation of a cold period at ca. 7500-6500 BP (Misox oscillation) in the Alps is made with paleoecological data from North America and Scandinavia and a climatic signal in the GRIP ice core from central Greenland 8200 yr ago (ca. 7400 yr uncal. BP).
Abstract:
Soil indicators may be used for assessing both land suitability for restoration and the effectiveness of restoration strategies in restoring ecosystem functioning and services. In this review paper, several soil indicators that can be used to assess the effectiveness of ecological restoration strategies in dryland ecosystems at different spatial and temporal scales are discussed. The selected indicators represent the different viewpoints of pedology, ecology, hydrology, and land management. Two overall outcomes stem from the review. (i) The success of restoration projects relies on a proper understanding of their ecology, namely the relationships between soil, plants, hydrology, climate, and land management at different scales, which are particularly complex due to the heterogeneous pattern of ecosystem functioning in drylands. (ii) The selection of the most suitable soil indicators should follow a clear identification of the different, and sometimes competing, ecosystem services that the project aims to restore.
Abstract:
Surface sediments from 68 small lakes in the Alps and 9 well-dated sediment core samples covering a gradient of total phosphorus (TP) concentrations from 6 to 520 μg TP l-1 were studied for diatom, chrysophyte cyst, cladoceran, and chironomid assemblages. Inference models for mean circulation log10 TP were developed for diatoms, chironomids, and benthic cladocera using weighted-averaging partial least squares. After screening for outliers, the final transfer functions have coefficients of determination (r2, as assessed by cross-validation) of 0.79 (diatoms), 0.68 (chironomids), and 0.49 (benthic cladocera). Planktonic cladocera and chrysophytes show very weak relationships to TP, and no TP inference models were developed for these biota. Diatoms showed the best relationship with TP, whereas the other biota all have large secondary gradients, suggesting that variables other than TP have a strong influence on their composition and abundance. Comparison with other diatom–TP inference models shows that our model has high predictive power and a low root mean squared error of prediction, as assessed by cross-validation.
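The core of such a transfer function is simple to illustrate. The sketch below shows plain weighted averaging only; the weighted-averaging partial least squares (WA-PLS) method used in the abstract adds partial-least-squares components on top of this idea. All taxon abundances and TP values are invented.

```python
# Plain weighted-averaging (WA) transfer function:
# (1) estimate each taxon's TP optimum as the abundance-weighted mean of
#     observed log10 TP across the training lakes;
# (2) infer log10 TP for a new sample as the abundance-weighted mean of
#     the optima of the taxa it contains.

def taxon_optima(abundances, log_tp):
    """abundances[i][k]: abundance of taxon k in training lake i;
    log_tp[i]: observed log10 TP of lake i."""
    n_taxa = len(abundances[0])
    optima = []
    for k in range(n_taxa):
        num = sum(lake[k] * x for lake, x in zip(abundances, log_tp))
        den = sum(lake[k] for lake in abundances)
        optima.append(num / den)
    return optima

def infer_tp(sample, optima):
    """Inferred log10 TP for one sample (abundances per taxon)."""
    return sum(y * u for y, u in zip(sample, optima)) / sum(sample)

# Two taxa, three invented training lakes along a TP gradient:
train = [[10, 0], [5, 5], [0, 10]]
log_tp = [0.8, 1.5, 2.5]
opt = taxon_optima(train, log_tp)
print(infer_tp([4, 6], opt))
```

The r2 values quoted above would come from cross-validating such inferences against observed TP in lakes held out of the training set.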
Abstract:
This paper uses Bayesian vector autoregressive models to examine the usefulness of leading indicators in predicting US home sales. The benchmark Bayesian model includes home sales, the price of homes, the mortgage rate, real personal disposable income, and the unemployment rate. We evaluate the forecasting performance of six alternative leading indicators by adding each, in turn, to the benchmark model. Out-of-sample forecast performance over three periods shows that the model that includes building permits authorized consistently produces the most accurate forecasts. Thus, the intention to build in the future provides good information with which to predict home sales. Another finding suggests that leading indicators with longer leads outperform the short-leading indicators.
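The evaluation design described above, adding one candidate indicator at a time and comparing out-of-sample forecasts, reduces to comparing forecast errors over a holdout period. A minimal sketch with a generic RMSE criterion (the BVAR estimation itself is not shown, and all series here are invented):

```python
# Out-of-sample forecast comparison: the model (benchmark, or benchmark
# augmented with a leading indicator) with the lower RMSE over the
# holdout period wins. The forecast vectors below are placeholders
# standing in for actual BVAR output.
from math import sqrt

def rmse(actual, forecast):
    """Root mean squared error over a holdout period."""
    n = len(actual)
    return sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

actual = [100, 104, 103, 108]          # invented home-sales series
benchmark_fc = [98, 101, 106, 104]     # benchmark-model forecasts
augmented_fc = [101, 103, 104, 107]    # forecasts after adding an indicator
print(rmse(actual, benchmark_fc) > rmse(actual, augmented_fc))
```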
Abstract:
Introduction. Selectively manned units have a long, international history, both military and civilian. Some examples include SWAT teams, firefighters, the FBI, the DEA, the CIA, and military Special Operations. These special duty operators are individuals who perform a highly skilled and dangerous job in a unique environment. A significant amount of money is spent by the Department of Defense (DoD) and other federal agencies to recruit, select, train, equip, and support these operators. When a critical incident or significant life event occurs that jeopardizes an operator's performance, there can be heavy losses in terms of training, time, money, and, potentially, lives. In order to limit the number of critical incidents, selection processes have been developed over time to "select out" those individuals most likely to perform below desired performance standards under pressure or stress and to "select in" those with the "right stuff". This study is part of a larger program evaluation to assess markers that identify whether a person will fail under the stresses of a selectively manned unit. The primary question of the study is whether there are indicators in the selection process that signify potential negative performance at a later date. Methods. The population studied included applicants to a selectively manned DoD organization between 1993 and 2001 as part of a unit assessment and selection process (A&S). Approximately 1900 A&S records were included in the analysis. Over this nine-year period, seventy-two individuals were determined to have had a critical incident. A critical incident can come in the form of problems with the law; personal, behavioral, or family problems; integrity issues; or skills deficits. Of the seventy-two individuals, fifty-four had full assessment data and subsequent supervisor performance ratings assessing how an individual performed on the job.
This group was compared, across a variety of variables including demographics and psychometric testing, with a group of 178 individuals who did not have a critical incident and had been determined to be good performers with positive ratings from their supervisors. Results. In approximately 2004, an online pre-screen survey was developed in the hope of selecting out those individuals with items that would potentially make them ineligible for selection to this organization. This survey has helped the organization increase its selection rates and save resources in the process (Patterson, Howard Smith, & Fisher, Unit Assessment and Selection Project, 2008). When the same pre-screen was applied to the critical-incident individuals, over 60% of them would have been flagged as unacceptable, which would have saved the organization valuable resources. There were some subtle demographic differences between the two groups (e.g., those with critical incidents were almost twice as likely to be divorced compared with the positive performers). On comparison of psychometric testing, several differences were noted. The two groups were similar when their IQ levels were compared using the Multidimensional Aptitude Battery (MAB). On the Minnesota Multiphasic Personality Inventory (MMPI), there appeared to be a difference on Social Introversion: the critical-incident group scored somewhat higher. The number of MMPI critical items was similar between the two groups. When scores on the NEO Personality Inventory (NEO) were compared, the critical-incident individuals tended to score higher on Openness and its subscales (Ideas, Actions, and Feelings). There was a positive correlation between total Neuroticism T score and the number of MMPI critical items. Conclusions.
This study shows that the current pre-screening process is working and would have saved the organization significant resources. A profile of a candidate who could potentially suffer a critical incident, and thereby jeopardize the unit, the mission, and the safety of the public, would look like the following: either divorced or never married; a high MMPI Social Introversion score together with an "excessive" number of MMPI critical items; and high scores on NEO Openness and its subscales Ideas, Feelings, and Actions. Based on the results of the analysis in this study, there appear to be several factors within psychometric testing that, taken together, can aid evaluators in selecting only the highest-quality operators, in order to save resources and to help protect the public from critical incidents that may adversely affect health and safety.
Abstract:
This study provides a review of the current alcoholism planning process of the Houston-Galveston Area Council, an agency carrying out planning for a thirteen-county region surrounding Houston, Texas. The four central groups involved in this planning are identified, and the role that each plays and how it affects planning outcomes are discussed. The most substantive outcome of the Houston-Galveston Area Council's alcoholism planning, the Regional Alcoholism/Alcohol Abuse Plan, is examined. Many of the shortcomings of the data provided, and the lack of other data necessary for planning, are noted. A problem-oriented planning model is presented as an alternative to the Houston-Galveston Area Council's current service-oriented approach to alcoholism planning. The five primary phases of the model (identification of the problem, statement of objectives, selection of alternative programs, implementation, and evaluation) are presented, and an overview of the tasks involved in applying this model to alcoholism planning is offered. A specific aspect of the model, the use of problem-status indicators, is explored using cirrhosis and suicide mortality data. A review of the literature suggests that, based on five criteria (availability, subgroup identification, validity, reliability, and sensitivity), both suicide and cirrhosis are suitable as indicators of the alcohol problem when combined with other indicators. Cirrhosis and suicide mortality data are examined for the thirteen-county Houston-Galveston Region for the years 1969 through 1976. Data limitations preclude definite conclusions concerning the alcohol problem in the region. Three hypotheses about the nature of the regional alcohol problem are presented. First, there appears to be no linear trend in the number of alcoholics at risk of suicide and cirrhosis mortality.
Second, the number of alcoholics in metropolitan areas seems to be greater than the number in rural areas. Third, the number of male alcoholics at risk of cirrhosis and suicide mortality is greater than the number of female alcoholics.
Abstract:
Background and Objective. Ever since the human development index was published in 1990 by the United Nations Development Programme (UNDP), researchers have searched for and comparatively studied more effective methods of measuring human development. Published in 1999, Lai's "Temporal analysis of human development indicators: principal component approach" provided a valuable statistical approach to human development analysis. The study presented in this thesis is an extension of Lai's 1999 research. Methods. I used the weighted principal component method on the human development indicators to measure and analyze the progress of human development in about 180 countries around the world from 1999 to 2010. The association between the main principal component obtained from the study and the human development index reported by the UNDP was estimated by Spearman's rank correlation coefficient. The main principal component was then further applied to quantify the temporal changes in human development of selected countries by the proposed Z-test. Results. The weighted means of all three human development indicators (health, knowledge, and standard of living) increased from 1999 to 2010. The weighted standard deviation for GDP per capita also increased across years, indicating rising inequality in standard of living among countries. The ranking of low-development countries by the main principal component (MPC) is very similar to that by the human development index (HDI). Considerable discrepancy between MPC and HDI rankings was found among high-development countries, with countries of high GDP per capita shifted to higher ranks. The Spearman's rank correlation coefficients between the main principal component and the human development index were all around 0.99. All the above results were very close to the outcomes in Lai's 1999 report.
The Z-test on temporal changes in the main principal component from 1999 to 2010 was statistically significant for Qatar, but not for the other selected countries, such as Brazil, Russia, India, China, and the U.S.A. Conclusion. To synthesize the multi-dimensional measurement of human development into a single index, the weighted principal component method provides a good model for comprehensive ranking and measurement. The weighted main principal component index is more objective because it uses national populations as weights, more effective when the analysis spans time and space, and more flexible when the set of countries reporting to the system changes from year to year. Thus, the index generated using the weighted main principal component has some advantages over the human development index in UNDP reports.
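The Spearman coefficient used above to compare MPC and HDI rankings can be sketched as follows. Tie handling is omitted for brevity, and the index values are invented, not actual country data.

```python
# Spearman's rank correlation: the Pearson correlation of the ranks of
# two variables. Used here to compare a country ranking by one index
# (e.g. MPC) with a ranking by another (e.g. HDI).

def ranks(values):
    """Rank of each value (1 = smallest); ties are ignored for brevity."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0.0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Two invented indices over five "countries"; identical orderings give
# a coefficient of exactly 1.0, and near-identical orderings (as for
# the ~0.99 reported above) give values just below it.
mpc = [0.91, 0.52, 0.77, 0.33, 0.85]
hdi = [0.89, 0.50, 0.80, 0.31, 0.84]
print(spearman(mpc, hdi))
```

Because the coefficient depends only on ranks, it is insensitive to the scale differences between the two indices, which is what makes it suitable for comparing MPC with HDI.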