Abstract:
Background. Cardiovascular disease (CVD) is of striking public health significance because of its high prevalence and mortality and the enormous economic burden it imposes worldwide, especially in industrialized countries. Major risk factors for CVD have been targets of population-wide prevention in the United States. Economic evaluations provide structured information on the efficiency of resource utilization, which can inform resource-allocation decisions. The main purpose of this review is to investigate the pattern of study design in economic evaluations of CVD interventions.

Methods. Primary journal articles published during 2003-2008 were systematically retrieved via relevant keywords from Medline, the NHS Economic Evaluation Database (NHS EED), and EBSCO Academic Search Complete. Only full economic evaluations of narrowly defined CVD interventions were included in this review. The methodological data of interest were extracted from the eligible articles and organized in a Microsoft Access database. Chi-square tests in SPSS were used to analyze associations between pairs of categorical variables.

Results. One hundred and twenty eligible articles were reviewed after two steps of literature selection with explicit inclusion and exclusion criteria. Descriptive statistics were reported for the evaluated interventions, outcome measures, unit costing, and cost reports. The chi-square test of the association between the prevention level of the intervention and the category of time horizon showed no statistical significance. The chi-square test did show that sponsor type was significantly associated with whether the new or the standard intervention was concluded to be more cost-effective.

Conclusions. Tertiary prevention and medication interventions are the major interests of economic evaluators. The majority of the evaluations were conducted from either a provider's or a payer's perspective. Almost all evaluations adopted a gross-costing strategy for unit cost data rather than micro-costing. The EQ-5D is the most commonly used instrument for subjective outcome measurement. More than half of the evaluations used decision-analytic modeling techniques. Published evaluations lack consistency in study-design standards in several respects. The prevention level of an intervention does not appear to influence whether evaluators design an evaluation over a lifetime horizon. Published evaluations sponsored by industry are more likely to conclude that the new intervention is more cost-effective than the standard intervention.
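As a hedged illustration of the kind of chi-square test of association reported above (for example, sponsor type versus which intervention was concluded to be more cost-effective), the sketch below uses SciPy on an invented 2x2 table; the counts are hypothetical, not data from the review.

```python
# Hypothetical sketch of a chi-square test of association between two
# categorical variables. The counts below are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: sponsor type (industry, non-industry)
# Columns: conclusion (new intervention more cost-effective, standard more cost-effective)
table = np.array([[40, 10],
                  [35, 35]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```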
Abstract:
The three articles that comprise this dissertation describe how small area estimation and geographic information systems (GIS) technologies can be integrated to provide useful information about the number of uninsured and where they are located. Comprehensive data about the numbers and characteristics of the uninsured are typically only available from surveys. Utilization and administrative data are poor proxies from which to develop this information: those who cannot access services are unlikely to be fully captured, either by health care provider utilization data or by state and local administrative data. In the absence of direct measures, a well-developed estimation of the local uninsured count or rate can prove valuable when assessing the unmet health service needs of this population. However, the fact that these are “estimates” increases the chances that results will be rejected or, at best, treated with suspicion. The visual impact and spatial analysis capabilities afforded by GIS technology can strengthen the likelihood of acceptance of area estimates by those most likely to benefit from the information, including health planners and policy makers.

The first article describes how uninsured estimates are currently being performed in the Houston metropolitan region. It details the synthetic model used to calculate numbers and percentages of uninsured, and how the resulting estimates are integrated into a GIS. The second article compares the estimation method of the first article with one currently used by the Texas State Data Center to estimate numbers of uninsured for all Texas counties. Estimates are developed for census tracts in Harris County using both models with the same data sets, and the results are statistically compared. The third article describes a new, revised synthetic method being tested to provide uninsured estimates at sub-county levels for eight counties in the Houston metropolitan area. It is designed to replicate the categorical results provided by a current U.S. Census Bureau estimation method, and the estimates calculated by this revised model are compared to the most recent U.S. Census Bureau estimates using the same areas and population categories.
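As a rough sketch of the synthetic-estimation idea described above (assumed here to mean applying survey-derived, group-specific uninsured rates to local population counts), the following fragment uses invented rates, groupings, and census-tract counts; it is not the dissertation's actual model.

```python
# A minimal sketch of synthetic estimation: apply group-specific uninsured
# rates from a survey to local population counts for the same groups.
# All rates and counts are invented for illustration.
survey_uninsured_rates = {            # proportion uninsured by age/income group
    ("18-34", "low_income"): 0.38,
    ("18-34", "high_income"): 0.12,
    ("35-64", "low_income"): 0.27,
    ("35-64", "high_income"): 0.08,
}

tract_population = {                  # census-tract counts for the same groups
    ("18-34", "low_income"): 1200,
    ("18-34", "high_income"): 800,
    ("35-64", "low_income"): 1500,
    ("35-64", "high_income"): 2100,
}

estimated_uninsured = sum(
    survey_uninsured_rates[group] * count
    for group, count in tract_population.items()
)
print(f"Synthetic estimate of uninsured in tract: {estimated_uninsured:.0f}")
```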
Abstract:
The aims of the study were to determine the prevalence of, and factors affecting, non-adherence to first-line antiretroviral (ARV) medications among HIV-infected children and adolescents in Botswana. The study used secondary data from the Botswana-Baylor Children's Clinical Center of Excellence for the period June 2008 to February 10, 2010. The study design was cross-sectional, and a case-comparison between non-adherent and adherent participants was used to examine the effects of socio-demographic and medication factors on non-adherence to ARV medications. A case was defined as a non-adherent child with an adherence level < 95% based on pill counts and measurement of liquid formulations. The comparison group consisted of children with adherence levels ≥ 95%.

A total of 842 participants met the eligibility criteria for determination of the prevalence of non-adherence, and 338 participants (169 cases and 169 comparison participants) were used in the analysis to estimate the effects of factors on non-adherence.

Univariate and multivariable logistic regression were used to estimate the association between non-adherence (outcome) and socio-demographic and medication factors (exposures). The prevalence of non-adherence among participants on first-line ARV medications was 20.0% (169/842).

An increase in age (OR (95% CI): 1.10 (1.04–1.17), p = 0.001) was associated with non-adherence, while an increase in the number of caregivers (OR (95% CI): 0.72 (0.56–0.93), p = 0.01) and an increase in the number of monthly visits (OR (95% CI): 0.92 (0.86–0.99), p = 0.02) were associated with good adherence in both the unadjusted and adjusted models. For the categorical variables, having more than two caregivers (OR (95% CI): 0.66 (0.28–0.84), p = 0.002) was associated with good adherence even in the adjusted model.

Conclusion. The prevalence of non-adherence to antiretroviral medicines among the study population was estimated to be 20.0%. In previous studies, adherence levels ≥ 95% have been associated with better clinical outcomes and with viral suppression that prevents the development of resistance. Older age, fewer caregivers, and fewer monthly visits were associated with non-adherence. Strategies to improve and sustain adherence, especially among older children, are needed. The role of caregivers and social support should be investigated further.
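The following is a minimal sketch of the unadjusted logistic-regression step described above, regressing non-adherence on age and reporting an odds ratio with a 95% confidence interval; the simulated data frame and variable names are hypothetical, not the Botswana-Baylor data.

```python
# Hedged sketch: unadjusted logistic regression of non-adherence on age,
# with the coefficient exponentiated into an odds ratio. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "non_adherent": rng.binomial(1, 0.5, 338),   # 1 = adherence < 95% (hypothetical)
    "age_years": rng.uniform(2, 18, 338),        # hypothetical ages
})

X = sm.add_constant(df[["age_years"]])
model = sm.Logit(df["non_adherent"], X).fit(disp=0)

odds_ratio = np.exp(model.params["age_years"])
ci_low, ci_high = np.exp(model.conf_int().loc["age_years"])
print(f"OR per year of age: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```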
Abstract:
Can early identification of the staphylococcal species responsible for infection by means of real-time PCR technology influence the approach to treating these infections?

This was a retrospective cohort study in which two groups of patients were compared. The first group, ‘Physician Aware’, consisted of patients whose physicians were informed of the specific staphylococcal species and antibiotic sensitivity (using RT-PCR) at the time of notification of the Gram stain. The second group, ‘Physician Unaware’, consisted of patients whose treating physicians received the same information 24–72 hours later as a result of blood culture and antibiotic sensitivity determination.

The approach to treatment was compared between the ‘Physician Aware’ and ‘Physician Unaware’ groups for three different microbiological diagnoses: MRSA, MSSA, and no-SA (coagulase-negative staphylococcus).

For a diagnosis of MRSA, the mean time to initiation of vancomycin therapy was 1.08 hours in the ‘Physician Aware’ group compared with 5.84 hours in the ‘Physician Unaware’ group (p = 0.34).

For a diagnosis of MSSA, the mean time to initiation of specific anti-MSSA therapy with nafcillin was 5.18 hours in the ‘Physician Aware’ group compared with 49.8 hours in the ‘Physician Unaware’ group (p = 0.007). For the same diagnosis, the mean duration of empiric therapy was 19.68 hours in the ‘Physician Aware’ group compared with 80.75 hours in the ‘Physician Unaware’ group (p = 0.003).

For a diagnosis of no-SA (coagulase-negative staphylococcus), the mean duration of empiric therapy was 35.65 hours in the ‘Physician Aware’ group compared with 44.38 hours in the ‘Physician Unaware’ group (p = 0.07). However, when treatment was considered as a categorical variable and after exclusion of all cases in which anti-MRS therapy was used for unrelated conditions, only 20 of 72 cases in the ‘Physician Aware’ group received treatment compared with 48 of 106 cases in the ‘Physician Unaware’ group.

Conclusions. Earlier diagnosis of MRSA may not alter final treatment outcomes, but earlier identification may lead to earlier institution of measures to limit the spread of infection. Early diagnosis of MSSA infection does lead to specific antibiotic therapy at an earlier stage of treatment, and the duration of empiric therapy is greatly reduced by early diagnosis. Early diagnosis of coagulase-negative staphylococcal infection leads to a lower rate of unnecessary treatment, as these organisms are commonly considered contaminants.
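A small, hedged sketch of the kind of two-group comparison of mean times reported above (‘Physician Aware’ versus ‘Physician Unaware’) is shown below using a Welch t-test; the abstract does not state which test was used, and the hour values here are invented.

```python
# Illustrative two-group comparison of mean times to targeted therapy.
# Values are made up; the original study's data and test are not shown here.
from scipy.stats import ttest_ind

aware_hours = [2.0, 4.5, 3.0, 6.5, 5.0, 8.0, 4.0]       # hypothetical
unaware_hours = [30.0, 55.0, 48.0, 62.0, 41.0, 58.0]    # hypothetical

t_stat, p_value = ttest_ind(aware_hours, unaware_hours, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```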
Abstract:
Objectives. This paper seeks to assess the effect of regression-model misspecification on statistical power in a variety of situations.

Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical, and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms was derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with linear or categorical models, the fractional polynomial models, with their higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on the sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, ranging from 0.69 to 0.83, and appeared to be affected by the increased degrees of freedom of this model.

Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations in which the correct model specification is unknown or complex. Simulation of power for misspecified models confirmed the results based on the correlation methods and also illustrated the effect of model degrees of freedom on power.
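The sketch below is a simplified simulation in the spirit of the power analysis described above: data are generated from a nonlinear relationship, the exposure is entered into the model as linear, categorical (quartiles), or a fractional-polynomial-style square-root term, and empirical power is the share of replicates in which the exposure term(s) are significant. All settings are illustrative and are not those of the original study.

```python
# Toy power simulation under model misspecification. The true model uses
# sqrt(x); each specification is then tested at alpha = 0.05 across replicates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

def one_replicate(n=200):
    x = rng.uniform(1, 100, n)
    y = 0.3 * np.sqrt(x) + rng.normal(0, 1, n)       # true relationship is sqrt(x)
    df = pd.DataFrame({"x": x, "y": y})
    df["x_cat"] = pd.qcut(df["x"], 4, labels=False)  # quartile version of x

    p_linear = smf.ols("y ~ x", df).fit().pvalues["x"]
    p_categ = smf.ols("y ~ C(x_cat)", df).fit().f_pvalue        # joint test of dummies
    p_fp = smf.ols("y ~ np.sqrt(x)", df).fit().pvalues["np.sqrt(x)"]
    return p_linear < 0.05, p_categ < 0.05, p_fp < 0.05

results = np.array([one_replicate() for _ in range(200)])
power = results.mean(axis=0)
print(f"power: linear={power[0]:.2f}, categorical={power[1]:.2f}, FP-style={power[2]:.2f}")
```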
Abstract:
The purpose of this study was to analyze the implementation of national family planning policy in the United States, which was embedded in four separate statutes during the period of study, Fiscal Years 1976-81. The design of the study utilized a modification of the Sabatier and Mazmanian framework for policy analysis, which defines implementation as the carrying out of statutory policy. The study was divided into two phases. The first phase compared the implementation of family planning policy under each of the pertinent statutes. The second phase identified factors associated with the implementation of federal family planning policy within the context of block grants.

Implementation was measured by federal dollars spent on family planning, adjusted for the size of the respective state target populations. Expenditure data were collected from the Alan Guttmacher Institute and from each of the federal agencies having administrative authority for the four pertinent statutes. Data from the former were used for most of the analysis because they were more complete and more reliable.

The first phase of the study tested the hypothesis that the coherence of a statute is directly related to effective implementation. Equity in the distribution of funds to the states was used to operationalize effective implementation. To a large extent, the results of the analysis supported the hypothesis. In addition to their theoretical significance, these findings were also significant for policymakers insofar as they demonstrated the effectiveness of categorical legislation in implementing desired health policy.

Given the current and historically intermittent emphasis on more state and less federal decision-making in health and human services, the second phase of the study focused on state-level factors associated with expenditures of social service block grant funds for family planning. Using the Sabatier-Mazmanian implementation model as a framework, many factors were tested. Those factors showing the strongest conceptual and statistical relationships to the dependent variable were used to construct a statistical model. Using multivariable regression analysis, this model was applied cross-sectionally to each of the years of the study. The most striking finding was that the dominant determinants of state spending varied for each year of the study (Fiscal Years 1976-1981). These results are significant in that they provide empirical support for current implementation theory, showing that the dominant determinants of implementation vary greatly over time.
Abstract:
Genetics education for physicians has been a popular publication topic in the United States and in Europe for over 20 years. Decreasing numbers of medical genetics professionals and an increasing volume of genetic information have created a dire need for increased genetics training in medical school and in clinical practice. This study aimed to assess how well pediatrics-focused primary care physicians apply their general genetics knowledge to clinical genetic testing, using scenario-based questions. We chose to focus specifically on knowledge of the diagnostic applicability of chromosomal microarray (CMA) technology in pediatrics because of its recent recommendation by the International Standard Cytogenomic Array (ISCA) Consortium as a first-tier genetic test for individuals with developmental disabilities and/or congenital anomalies. Proficiency in ordering baseline genetic testing was evaluated for eighty-one respondents from four pediatrics-focused residencies (categorical pediatrics, pediatric neurology, internal medicine/pediatrics, and family practice) at two large residency programs in Houston, Texas. Similar to other studies, we found an overall deficit of genetic testing knowledge, especially among family practice residents. Interestingly, residents who elected to complete a genetics rotation in medical school scored significantly better than expected, as well as better than residents who did not complete such a rotation. We suspect that insufficient knowledge among physicians regarding a baseline genetics work-up leads to redundant (e.g., concurrent karyotype and CMA) and incorrect (e.g., ordering CMA to detect achondroplasia) genetic testing and contributes to rising health care costs in the United States. Our results provide specific teaching points on which medical schools can focus education about clinical genetic testing, and they suggest that increased collaboration between primary care physicians and genetics professionals could benefit patient health care overall.
Abstract:
Purpose of the Study: This study compared the prevalence of periodontal disease between Mexican American elderly and European American elderly residing in three socio-economically distinct neighborhoods in San Antonio, Texas.

Study Group: Subjects for the original protocol were participants in the Oral Health: San Antonio Longitudinal Study of Aging (OH: SALSA), which began with National Institutes of Health (NIH) funding in 1993 (M.J. Saunders, PI). The cohort comprised individuals who had been enrolled in Phases I and III of the San Antonio Heart Study (SAHS). This SAHS/SALSA sample is a community-based probability sample of Mexican American and European American residents from three socio-economically distinct San Antonio neighborhoods: low-income barrio, middle-income transitional, and upper-income suburban. The OH: SALSA cohort was established between July 1993 and May 1998 by sampling two subsets of the SAHS cohort: the San Antonio Longitudinal Study of Aging (SALSA) cohort, comprising the oldest members of the SAHS (age 65+ years), and a younger set of controls (ages 35-64 years) sampled from the remainder of the SAHS cohort.

Methods: The study used simple descriptive statistics to describe the sociodemographic characteristics and periodontal disease indicators of the OH: SALSA participants. Means and standard deviations were used to summarize continuous measures, and proportions were used to summarize categorical measures. Simple m × n chi-square statistics were used to compare ethnic differences. Multivariable ordered logit regression was used to estimate the prevalence of periodontal disease and to test ethnic-group and neighborhood differences in that prevalence, with adjustment for socio-economic status (income and education), gender, and age, treated as confounders.

Summary: In both the unadjusted and adjusted models, Mexican American elderly demonstrated the greatest prevalence of periodontitis (p < 0.05). Mexican American elderly in barrio neighborhoods demonstrated the greatest prevalence of severe periodontitis, with unadjusted prevalence rates of 31.7%, 22.3%, and 22.4% for Mexican American elderly in barrio, transitional, and suburban neighborhoods, respectively, and adjusted prevalence rates of 29.4%, 23.7%, and 20.4%, respectively.

Conclusion: This study indicates that periodontal disease is an important oral health issue among Mexican American elderly. The results suggest that the socioeconomic status of the residential neighborhood increased the risk of severe periodontal disease among Mexican American elderly compared with European American elderly. A viable approach to recognizing oral health disparities in our growing population of Mexican American elderly is imperative for the provision of special care programs that will help increase the quality of care in this minority population.
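As a hedged sketch of the multivariable ordered logit approach described above, the fragment below fits an ordinal severity outcome on ethnicity, neighborhood, and confounders with statsmodels' OrderedModel; the data and variable codings are invented, not the OH: SALSA data.

```python
# Illustrative ordered logit: ordinal periodontal severity on ethnicity,
# neighborhood, age, and gender. All data below are simulated.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "perio_severity": rng.integers(0, 3, n),      # 0=none/mild, 1=moderate, 2=severe
    "mexican_american": rng.integers(0, 2, n),
    "barrio": rng.integers(0, 2, n),
    "age": rng.uniform(65, 90, n),
    "female": rng.integers(0, 2, n),
})

exog = df[["mexican_american", "barrio", "age", "female"]]
result = OrderedModel(df["perio_severity"], exog, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(result.params.iloc[:exog.shape[1]]))   # odds ratios for the covariates
```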
Abstract:
Renal insufficiency is one of the most common comorbidities in heart failure (HF) patients and has a significant impact on mortality and adverse outcomes. Cystatin C has been shown to be a promising marker of renal function. A systematic review of all published studies evaluating the prognostic role of cystatin C in both acute and chronic HF was undertaken. A comprehensive literature search was conducted using various combinations of the terms 'cystatin C' and 'heart failure' in the PubMed (Medline) and Embase libraries using the Scopus database. A total of twelve observational studies were selected for detailed assessment: six performed in acute HF patients and six in chronic HF patients. Cystatin C was analyzed as a continuous variable, in quartiles/tertiles, or as a categorical variable, and different mortality endpoints were reported across the studies. All twelve studies demonstrated a significant association between cystatin C and mortality, and this association was independent of other baseline risk factors known to affect HF outcomes. In both acute and chronic HF, cystatin C was not only a strong predictor of outcomes but also a better prognostic marker than creatinine and estimated glomerular filtration rate (eGFR). Combining cystatin C with other biomarkers, such as N-terminal pro-B-type natriuretic peptide (NT-proBNP) or creatinine, also improved risk stratification. Plausible mechanisms include renal dysfunction, inflammation, and a direct effect of cystatin C on ventricular remodeling. Either alone or in combination, cystatin C is an accurate and reliable biomarker for HF prognosis.
Abstract:
Mixture modeling is commonly used to model categorical latent variables that represent subpopulations whose membership is unknown but can be inferred from the data. In recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that the effects of the covariates on failure times differ across latent classes while the covariate distribution is homogeneous. The aim of this dissertation is to develop a method for examining time-to-event data in the presence of unobserved heterogeneity within a mixture-modeling framework. A joint model is developed that incorporates the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of the covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class given the observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Results from simulation studies and from the Hodgkin lymphoma study demonstrated the superiority of the joint model over the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking interactions between covariates into consideration.
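The following is a stripped-down toy illustration of the joint-mixture idea described above: two latent classes, each with its own exponential survival rate and its own distribution of a binary prognostic covariate, fitted by EM. Censoring, continuous covariates, and class-specific regression effects are omitted, so this is a sketch of the modeling framework rather than the dissertation's actual method.

```python
# Toy joint mixture of (binary covariate, exponential survival time) with
# two latent classes, estimated by EM. All data are simulated.
import numpy as np

rng = np.random.default_rng(1)

# Simulate two latent classes with different hazards and covariate mixes.
n = 1000
true_class = rng.binomial(1, 0.4, n)                      # latent class indicator
x = rng.binomial(1, np.where(true_class == 1, 0.7, 0.2))  # binary prognostic factor
rate = np.where(true_class == 1, 0.5, 0.1)                # class-specific hazards
t = rng.exponential(1.0 / rate)

# EM for the two-class joint mixture.
pi = np.array([0.5, 0.5])        # class weights
p_x = np.array([0.3, 0.6])       # P(x = 1 | class)
lam = np.array([0.05, 0.3])      # exponential hazard per class
for _ in range(200):
    # E-step: responsibility of each class for each subject.
    like = np.empty((n, 2))
    for k in range(2):
        like[:, k] = (pi[k]
                      * p_x[k] ** x * (1 - p_x[k]) ** (1 - x)
                      * lam[k] * np.exp(-lam[k] * t))
    resp = like / like.sum(axis=1, keepdims=True)
    # M-step: update class weights, covariate distributions, hazard rates.
    nk = resp.sum(axis=0)
    pi = nk / n
    p_x = (resp * x[:, None]).sum(axis=0) / nk
    lam = nk / (resp * t[:, None]).sum(axis=0)

print("class weights:", np.round(pi, 2))
print("P(x=1 | class):", np.round(p_x, 2))
print("hazard rates:", np.round(lam, 2))
```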
Abstract:
BACKGROUND. The development of interferon-gamma release assays (IGRAs) has introduced powerful tools for diagnosing latent tuberculosis infection (LTBI), and these assays may play a critical role in the future of tuberculosis diagnosis. However, there have been reports of high rates of indeterminate results in young patient populations (0-18 years). This study investigated results of the QuantiFERON-TB Gold In-Tube (QFT-GIT) IGRA in a population of children (0-18 years) at Texas Children's Hospital in relation to specimen-collection procedures, using surrogate variables.

METHODS. A retrospective case-control design was used. Cases were defined as patients with indeterminate QFT-GIT results; controls were defined as patients with either positive or negative (determinate) results. Patients' admission status, the staff performing specimen collection, and the specific nurse performing specimen collection were used as surrogates for specimen-collection procedures.

To minimize potential confounding, patients' electronic medical records were abstracted. Abstracted data included patients' medications and evaluations at the time of QFT-GIT specimen collection, their medical history, and QFT-GIT-related data. Cases and controls were compared using chi-squared tests or Fisher's exact tests for categorical variables and one-way ANOVA and t-tests for continuous variables. A multivariate model was constructed by backward stepwise removal of variables that were statistically significant in univariate analysis.

RESULTS. Data were abstracted for 182 patients aged 0-18 years seen from July 2010 to August 2011 at Texas Children's Hospital: 56 cases (indeterminates) and 126 controls (determinates) were enrolled. Cancer was found to be an effect modifier, and stratification produced a cancer patient population too small to analyze (n = 13), so subsequent analyses excluded these patients.

The exclusion of cancer patients left 169 patients: 49 indeterminates (28.99%) and 120 determinates (71.01%), with mean ages of 9.73 years (95% CI: 8.03, 11.43) and 11.66 years (95% CI: 10.75, 12.56), respectively (p = 0.033). Median ages of indeterminates and determinates were 12.37 and 12.87 years, respectively. Lack of data for the specific-nurse surrogate (QFTNurse) resulted in its exclusion from analysis, so the final model included only the remaining surrogate variables (QFTStaff and QFTInpatientOutpatient). In the final model, the staff-collection surrogate (QFTStaff) was modestly but not significantly associated with indeterminate results when nurses collected the specimen (OR = 1.54, 95% CI: 0.51, 4.64, p = 0.439), while inpatient status had a strong and statistically significant association with indeterminate results (OR = 11.65, 95% CI: 3.89, 34.9, p < 0.001).

CONCLUSION. Inpatient status was used as a surrogate indicator of nurse-drawn blood specimens. Nurses, unlike phlebotomists, have had little to no training on QFT-GIT procedures such as shaking of the tubes. Collection was also measured by two other surrogates: a medical note stating whether a nurse or phlebotomist collected the specimen (QFTStaff) and the name and title of the specific nurse when collection was performed by a nurse (QFTNurse).

Results indicated that inpatient status was a strong and statistically significant factor for indeterminate results; however, nurse-collected specimens and indeterminate results had no statistically significant association in non-cancer patients. The lack of data identifying the specific nurse performing specimen collection excluded the QFTNurse surrogate from the analysis.

These findings suggest that training of staff in specimen-collection procedures may have little effect on the number of indeterminates, and that inpatient status, and thus possibly illness severity, may be the most important factor for indeterminate results in this population. The lack of congruence between the surrogate measures may imply that the inpatient surrogate gauged illness severity rather than collection procedures, as intended.

Despite the lack of clear findings, more than half of the indeterminate results occurred in specimens drawn by nurses, so staff training may still be worth exploring. Future studies could develop ways to measure modifiable variables in pre-analytical QFT-GIT procedures; identifying such measures may provide insight into lowering indeterminate QFT-GIT rates in children.
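As a small, hedged sketch of the kind of 2x2 association examined above (inpatient status versus indeterminate QFT-GIT result), the snippet below applies Fisher's exact test to invented counts; it is not the study's data or its final multivariate model.

```python
# Illustrative 2x2 association: inpatient status vs. indeterminate result.
# Counts are hypothetical.
from scipy.stats import fisher_exact

#                 indeterminate  determinate
table = [[35, 30],    # inpatient
         [14, 90]]    # outpatient

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4g}")
```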
Abstract:
Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, models have been developed using published exposure databases together with their corresponding exposure determinants. These models are designed to be applied to reported exposure determinants obtained from study subjects, or to exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained.

In an effort to improve the prediction accuracy and generalizability of these models, and considering that the limitations encountered in previous studies might stem from limitations in the applicability of traditional statistical methods and concepts, this study proposed and explored data analysis methods derived from computer science, predominantly machine learning approaches.

The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational exposure outcomes from literature-derived databases, and to compare, using cross-validation and data-splitting techniques, the resulting prediction capacity to that of traditional regression models. Two cases were addressed: the categorical case, where exposure was measured as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where exposure is expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride, and trichloroethylene were used.

When compared with regression estimates, the results showed better accuracy for decision tree/ensemble techniques in the categorical case, while neural networks were better for estimating continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy. Estimates derived from literature-based databases using machine learning techniques might provide an advantage when applied to methodologies that combine 'expert inputs' with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to estimate exposures more accurately from literature-based exposure databases might represent a starting point toward independence from expert judgment.
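The sketch below illustrates, under assumed data, the kind of comparison described above: a traditional regression model versus a tree ensemble and a neural network, evaluated with cross-validation in scikit-learn. The simulated features and outcome stand in for literature-derived exposure determinants and measured concentrations.

```python
# Illustrative model comparison with 5-fold cross-validation. Data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 400
X = rng.normal(size=(n, 5))                                  # stand-ins for exposure determinants
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.5, n)    # stand-in for log concentration

models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPRegressor(hidden_layer_sizes=(32,),
                                                 max_iter=2000, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.2f}")
```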
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit.

The study found that the Ĉ statistic was adequate in tests of significance for most situations. However, when testing data that deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into from 8 to 30 quantiles was studied, the deciles-of-risk approach was generally sufficient; subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons suggesting otherwise. Because it does not follow a chi-square distribution in this setting, the statistic is not recommended for models containing only categorical variables with a limited number of covariate patterns.

The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations. Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates.

Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles; approaches that create equal-sized groups by separating ties should be avoided.
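For readers unfamiliar with the statistic discussed above, the following is a minimal implementation of the Hosmer-Lemeshow Ĉ statistic using the usual deciles-of-risk grouping; the simulated data and logistic fit are only for illustration.

```python
# Minimal Hosmer-Lemeshow goodness-of-fit check on simulated logistic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, groups=10):
    """Group by quantiles of fitted probability and compare observed vs expected events."""
    df = pd.DataFrame({"y": y, "p": p_hat})
    df["decile"] = pd.qcut(df["p"], groups, labels=False, duplicates="drop")
    obs = df.groupby("decile")["y"].sum()
    exp = df.groupby("decile")["p"].sum()
    n_g = df.groupby("decile")["y"].count()
    stat = (((obs - exp) ** 2) / (exp * (1 - exp / n_g))).sum()
    dof = df["decile"].nunique() - 2
    return stat, chi2.sf(stat, dof)

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=(n, 2))
logit = 0.8 * x[:, 0] - 0.5 * x[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
stat, p_value = hosmer_lemeshow(y, fit.predict())
print(f"Hosmer-Lemeshow C = {stat:.2f}, p = {p_value:.3f}")
```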
Abstract:
Within the framework of a qualitative study, winery personnel in Luján and Maipú (Mendoza, Argentina) were interviewed to learn their opinions about the environmental contamination produced by these establishments. Four dimensions of contamination were studied: noise, odors, solid waste, and liquid waste. Eighteen wineries were selected by random sampling. In each one, a management-level employee and a plant operator were interviewed using a semi-structured questionnaire with open-ended questions. Reduction of the qualitative information consisted of categorizing and grouping the responses, and analytic induction was used to generalize to the study area. It was concluded that winery personnel perceive environmental contamination in the workplace as being of moderate magnitude. Liquid waste is considered to have a low external environmental impact, and no external impacts from the other dimensions are perceived. The degree of corporate awareness varies widely: some companies behave in ways favorable to environmental protection, but this cannot be generalized to all wineries. Operators are careless about using protective equipment against noise and odors.
Abstract:
New tomato cultivars, in colors other than the traditional red, are well suited to the production of alternative products such as preserves. Consumer acceptability of jams made from the varieties Victoria FCA, Don Armando FCA, and Santa Rosa FCA was studied. Their fruits (yellow, orange, and red, respectively) were characterized by color, weight, titratable and potential acidity, and soluble solids. The jams, flavored with clove, were prepared in an experimental plant to a concentration of 67-69% soluble solids. A panel of 39 consumers, classified as under or over 30 years of age, evaluated appearance, color, aroma, texture, and flavor using unstructured scales. The evaluations of the two age groups differed. For all sensory characteristics, the Friedman test indicated differences among the three products (α = 0.001). On a five-category scale, more than 50% of the judges placed all three jams in the highest categories: "like" and "like very much". Analysis of the categorical preference data ranked the red variety first, followed by the orange and then the yellow. There may be a segment of consumers interested in the development of yellow-tomato preserves, but in the specific case of jam, the product whose color was the same as or similar to the traditional one had greater acceptability.
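As a brief sketch of the Friedman test used above to compare the three jams across the same panel of judges, the snippet below applies SciPy's friedmanchisquare to invented scores on a single sensory attribute.

```python
# Illustrative Friedman test comparing three related samples (same judges
# rating each jam). The scores are invented, not the panel's data.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(5)
n_judges = 39
red = rng.normal(7.5, 1.0, n_judges)      # hypothetical flavor scores on a 0-10 scale
orange = rng.normal(7.0, 1.0, n_judges)
yellow = rng.normal(6.3, 1.2, n_judges)

stat, p_value = friedmanchisquare(red, orange, yellow)
print(f"Friedman chi2 = {stat:.2f}, p = {p_value:.4f}")
```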