Abstract:
BACKGROUND: Preterm infants with immature lungs often require respiratory support, which can lead to bronchopulmonary dysplasia (BPD). Conventional rodent models of BPD based on mechanical ventilation (MV) measure outcomes at the end of the ventilation period. A reversible intubation and ventilation model in newborn rats recently made it possible to discover that different sets of genes modify their expression depending on the time elapsed after MV. In a newborn rat model, the expression profile 48 h after MV was analyzed with gene arrays to detect potentially interesting candidates with an impact on BPD development. METHODS: Rat pups were injected at P4-5 with 2 mg/kg lipopolysaccharide (LPS). One day later, MV with 21% or 60% oxygen was applied for 6 h. Animals were sacrificed 48 h after the end of ventilation. Affymetrix gene arrays assessed the total gene expression profile in lung tissue. RESULTS: In fully treated animals (LPS + MV + 60% O2) vs. controls, 271 genes changed expression significantly. All modified genes could be classified into six pathways: tissue remodeling/wound repair, immune system and inflammatory response, hematopoiesis, vasodilatation, and oxidative stress. Major alterations were found in the MMP and complement systems. CONCLUSION: MMPs and complement factors play a central role in several of the pathways identified and may represent interesting targets for BPD treatment/prevention.

Bronchopulmonary dysplasia (BPD) is a chronic lung disease occurring in ~30% of preterm infants born at less than 30 wk of gestation (1). Its main risk factors include lung immaturity due to preterm delivery, mechanical ventilation (MV), oxygen toxicity, chorioamnionitis, and sepsis. The main feature is an arrest of alveolar and capillary formation (2). Models that try to decipher the genes involved in the pathophysiology of BPD are mainly based on applying MV and oxygen to young mammals of various species with immature lungs (3). In newborn rodent models, analyses of lung structure and of gene and protein expression are, for practical reasons, performed directly at the end of MV (4,5,6). However, changes in gene expression that appear later might also have an impact on lung development and the evolution toward BPD, and cannot be discovered with such models. Recently, we developed a newborn rat model of MV using an atraumatic (orotracheal) intubation technique that allows the newborn animal to be weaned off anesthesia and MV and extubated to spontaneous breathing, and therefore allows the effects of MV to be evaluated after a ventilation-free period of recovery (7). Indeed, applying this concept of atraumatic intubation by direct laryngoscopy, we recently showed significant differences between the gene expression changes appearing directly after MV and those measured after a ventilation-free interval of 48 h. Immediately after MV, inflammation-related genes showed transiently modified expression, while another set of more structurally related genes changed expression only after a delay of 2 d (7). Lung structure, analyzed by conventional 2D histology and by 3D reconstruction using synchrotron x-ray tomographic microscopy, revealed, 48 h after the end of MV, a reduced complexity of lung architecture compared with nonventilated rat lungs, similar to the typical findings in BPD. To extend these observations about late modifications of gene expression, we used a similar model to obtain a full gene expression profile of lung tissue 48 h after the end of MV with either room air or 60% oxygen. Essentially, we measured changes in the expression of genes related to the MMP and complement systems, which played a role in many of the six most affected pathways identified.
Abstract:
The concentration ratios of strontium to calcium in laboratory-reared larval cod otoliths are shown to be related to the water temperature (T) at the time of otolith precipitation. This relationship is curvilinear, and is best described by a simple exponential equation of the form Sr/Ca x 1000 = a exp(-T/b). We show that when Sr/Ca elemental analyses are related to the daily growth increments in the larval otoliths, relative temperature histories of individual field-caught larvae can be reconstructed from the egg stage to the time of capture. We present preliminary examples of how such reconstructed temperature histories of Atlantic cod Gadus morhua larvae, collected on Georges Bank during April and May 1993, may be interpreted in relation to the broad-scale larval distributions and the hydrography of the Bank.
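The calibration relation can be fit to rearing data and then inverted to recover temperature from a measured ratio. A minimal Python sketch; the calibration values and starting guesses below are hypothetical, not numbers from the study:

```python
# Fit Sr/Ca x 1000 = a * exp(-T / b) to hypothetical calibration data,
# then invert it to reconstruct temperature from a measured ratio.
import numpy as np
from scipy.optimize import curve_fit

def sr_ca_model(T, a, b):
    # (Sr/Ca) x 1000 as a function of temperature T (deg C)
    return a * np.exp(-T / b)

# Hypothetical calibration data from laboratory-reared larvae
T_cal = np.array([4.0, 6.0, 8.0, 10.0, 12.0])   # rearing temperature, deg C
ratio = np.array([3.1, 2.6, 2.2, 1.9, 1.6])     # Sr/Ca x 1000

(a_hat, b_hat), _ = curve_fit(sr_ca_model, T_cal, ratio, p0=(4.0, 10.0))

def temperature_from_ratio(r, a, b):
    # Invert the fitted exponential: T = -b * ln(r / a)
    return -b * np.log(r / a)

# Reconstruct the temperature at one daily growth increment
print(temperature_from_ratio(2.0, a_hat, b_hat))
```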
Abstract:
In the past ten years, reading comprehension instruction has received significant attention from educational researchers. Drawing on studies from cognitive psychology, reader response theory, and language arts research, current best practice in reading comprehension instruction is characterized by a strategies approach in which students are taught to think like proficient readers who visualize, infer, activate schema, question, and summarize as they read. Studies investigating the impact of comprehension strategy instruction on student achievement in reading suggest that, when implemented consistently, the intervention has a positive effect on achievement. Research also shows, however, that few teachers embrace this approach to reading instruction despite its effectiveness, even when the conditions for substantive professional development (i.e., prolonged engagement, support, resources, time) are present. The interpretive case study reported in this dissertation examined the year-long experience of one fourth-grade teacher, Ellen, as she learned about comprehension strategy instruction and attempted to integrate the approach into her reading program. The goal of the study was to extend current understanding of the factors that support or inhibit an individual teacher's instructional decision making. The research explored how Ellen's academic preparation, beliefs about reading comprehension instruction, and attitudes toward teacher-student interaction influenced her efforts to employ comprehension strategy instruction. Qualitative methods were the basis of this study's research design. The primary methods for collecting data included pre- and post-interviews, field notes from classroom observations and staff development sessions, informal interviews, e-mail correspondence, and artifacts such as reading assignments, professional writing, school newsletters, and photographs of the classroom. Transcripts from interviews, as well as field notes, e-mail, and artifacts, were analyzed according to grounded theory's constant-comparative method. The results of the study suggest that three factors were pivotal in Ellen's successful implementation of reading strategy instruction: pedagogical beliefs, classroom relationships, and professional community. Research on instructional change generally focuses on issues of time, resources, feedback, and follow-through. The research reported here recognizes the importance of these components, but expands contemporary thinking by showing how, in Ellen's case, a teacher's existing theories, her relationship with her students, and her professional interaction with peers impacted instructional decisions.
Abstract:
Background. This study was planned at a time when important questions were being raised about the adequacy of using one hormone, rather than two, to treat hypothyroidism. Specifically, this trial aimed to replicate prior findings suggesting that substituting 12.5 μg of liothyronine for 50 μg of levothyroxine might improve mood, cognition, and physical symptoms. Additionally, this trial aimed to extend those findings to fatigue. Methods. A randomized, double-blind, two-period crossover design was used. Hypothyroid patients stabilized on levothyroxine were invited to participate. Thirty subjects were recruited and randomized. Sequence one received their standard levothyroxine dose in one capsule and placebo in another during the first six weeks. Sequence two received their usual levothyroxine dose minus 50 μg in one capsule and 10 μg of liothyronine in another. At the end of the first six-week period, subjects were crossed over. T tests were used to assess carry-over and treatment effects. Results. Twenty-seven subjects completed the trial. The majority of completers had an autoimmune etiology. The mean baseline levothyroxine dose was 121 μg/d (±26.0). While on substitution treatment, subjects reported small increases in fatigue as measured by the Piper Fatigue Scale (0.9, p = 0.09) and in symptoms of depression as measured by the Beck Depression Inventory-II (2.3, p = 0.16) and the General Health Questionnaire-30 (4.7, p = 0.14). However, none of these differences was statistically significant. Measures of working memory were essentially unchanged between treatments. Thyroid stimulating hormone was about twice as high during substitution treatment (p = 0.16). On substitution treatment, the free thyroxine index was reduced by 0.7 (p < 0.001) and total serum thyroxine was reduced by 3.0 (p < 0.001), while serum triiodothyronine was increased by 20.5 (p < 0.001). Conclusions. Substituting an equivalent amount of liothyronine for a portion of levothyroxine in patients with hypothyroidism does not decrease fatigue or symptoms of depression, nor does it improve working memory. However, given the changes in serum hormone levels and the small increments in fatigue and depression symptoms on substitution treatment, a question was raised about the role of T3 in the serum.
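The carry-over and treatment t tests described above follow the standard two-period crossover analysis. A minimal sketch of that analysis (Grizzle-style t tests) on hypothetical data; the scores, sample sizes, and effect sizes are invented for illustration:

```python
# Two-period crossover: test carry-over on subject sums and treatment
# on period differences, comparing the two randomization sequences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 14
# Hypothetical outcome scores (e.g., a fatigue scale), period 1 then period 2
y1_seq1, y2_seq1 = rng.normal(5.0, 1, n), rng.normal(5.5, 1, n)  # standard -> substitution
y1_seq2, y2_seq2 = rng.normal(5.6, 1, n), rng.normal(5.0, 1, n)  # substitution -> standard

# Carry-over effect: compare within-subject sums between sequences
carry = stats.ttest_ind(y1_seq1 + y2_seq1, y1_seq2 + y2_seq2)

# Treatment effect: compare within-subject period differences between sequences
treat = stats.ttest_ind(y1_seq1 - y2_seq1, y1_seq2 - y2_seq2)

print(carry.pvalue, treat.pvalue)
```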
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlations and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Secondly, since financial data do not follow a normal distribution because of their heavy tails, modeling the data with a generalized linear model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period January 1, 2004 to August 8, 2006. The empirical examination of the regime-switching tendencies provided quantitative support for the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula function. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
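The separation of marginal behavior from dependence that the abstract describes can be illustrated with a Gaussian copula over gamma marginals. A minimal sketch under assumed parameter values (the copula family, gamma shapes, and correlation are illustrative choices, not the study's estimates):

```python
# Simulate two dependent risk determinants with gamma marginals joined
# by a Gaussian copula, then recover the dependence on the copula scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
u = stats.norm.cdf(z)                               # uniform (copula) layer
x1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=1.5)     # gamma marginal 1
x2 = stats.gamma.ppf(u[:, 1], a=3.0, scale=0.8)     # gamma marginal 2

# Fit the marginals, map observations back to uniforms, and estimate
# the dependence separately from the marginal distributions
a1, loc1, s1 = stats.gamma.fit(x1, floc=0)
a2, loc2, s2 = stats.gamma.fit(x2, floc=0)
u1 = stats.gamma.cdf(x1, a1, loc=loc1, scale=s1)
u2 = stats.gamma.cdf(x2, a2, loc=loc2, scale=s2)
rho_hat = np.corrcoef(stats.norm.ppf(u1), stats.norm.ppf(u2))[0, 1]
print(round(rho_hat, 2))   # close to the true copula correlation
```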
Abstract:
The PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) magnetic resonance imaging (MRI) technique has inherent advantages over other fast imaging methods, including robust motion correction, reduced image distortion, and resistance to off-resonance effects. These features make PROPELLER highly desirable for T2*-sensitive imaging, high-resolution diffusion imaging, and many other applications. However, PROPELLER has been predominantly implemented as a fast spin-echo (FSE) technique, which is insensitive to T2* contrast and requires time-inefficient signal averaging to achieve adequate signal-to-noise ratio (SNR) for many applications. These issues presently constrain the potential clinical utility of FSE-based PROPELLER. In this research, our aim was to extend and enhance the potential applications of PROPELLER MRI by developing a novel multiple gradient echo PROPELLER (MGREP) technique that can overcome the aforementioned limitations. The MGREP pulse sequence was designed to acquire multiple gradient-echo images simultaneously, without any increase in total scan time or RF energy deposition relative to FSE-based PROPELLER. A new parameter was also introduced for direct user control over gradient echo spacing, to allow variable sensitivity to T2* contrast. In parallel with pulse sequence development, an improved algorithm for motion correction was also developed and evaluated against the established method through extensive simulations. The potential advantages of MGREP over FSE-based PROPELLER were illustrated via three specific applications: (1) quantitative T2* measurement, (2) time-efficient signal averaging, and (3) high-resolution diffusion imaging. Relative to the FSE-PROPELLER method, the MGREP sequence was found to yield quantitative T2* values, increase SNR by ∼40% without any increase in acquisition time or RF energy deposition, and noticeably improve image quality in high-resolution diffusion maps. In addition, the new motion algorithm was found to considerably improve motion-artifact reduction. Overall, this work demonstrated a number of enhancements and extensions to existing PROPELLER techniques. The new technical capabilities of PROPELLER imaging developed in this thesis research are expected to serve as the foundation for further expanding the scope of PROPELLER applications.
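For application (1), T2* quantification from a multi-echo acquisition reduces to fitting a mono-exponential decay across echo times. A minimal sketch assuming such a decay, with hypothetical echo times and signal values (per-voxel in practice; a single decay curve here):

```python
# Estimate T2* from multi-echo magnitude signals: S(TE) = S0 * exp(-TE/T2*),
# fit as a straight line in log space.
import numpy as np

rng = np.random.default_rng(6)
TE = np.array([5.0, 12.0, 19.0, 26.0, 33.0])                 # echo times, ms
signal = 1200 * np.exp(-TE / 30.0) * (1 + rng.normal(0, 0.01, TE.size))

slope, intercept = np.polyfit(TE, np.log(signal), 1)
t2_star = -1.0 / slope
print(round(t2_star, 1), "ms")   # recovers ~30 ms for this simulated voxel
```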
Abstract:
Cross-sectional designs, longitudinal designs in which a single cohort is followed over time, and mixed-longitudinal designs in which several cohorts are followed for a shorter period are compared by their precision, potential for bias due to age, time, and cohort effects, and feasibility. Mixed-longitudinal studies have two advantages over longitudinal studies: isolation of time and age effects and shorter completion time. Though the advantages of mixed-longitudinal studies are clear, choosing an optimal design is difficult, especially given the number of possible combinations of the number of cohorts and the number of overlapping intervals between cohorts. The purpose of this paper is to determine the optimal design for detecting differences in group growth rates. The type of mixed-longitudinal study appropriate for modeling both individual and group growth rates is called a "multiple-longitudinal" design. A multiple-longitudinal study typically requires uniform or simultaneous entry of subjects, who are each observed until the end of the study. While recommendations for designing pure-longitudinal studies have been made by Schlesselman (1973b), Lefant (1990), and Helms (1991), design recommendations for multiple-longitudinal studies have never been published. It is shown that an optimal multiple-longitudinal design can be determined by using power analyses, in conjunction with a cost model, to find the minimum number of occasions per cohort and the minimum number of overlapping occasions between cohorts. An example of systolic blood pressure values for cohorts of males and cohorts of females, ages 8 to 18 years, is given.
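The power-analysis step can be sketched for the simplest case: two groups compared on mean growth rate (slope), with k equally spaced occasions per subject and independent residuals. That error structure is an assumption for illustration; the paper's mixed-longitudinal setting is richer:

```python
# Approximate power for detecting a difference in group slopes with k
# occasions per subject, via a normal approximation.
import numpy as np
from scipy import stats

def slope_power(delta, sigma, k, n_per_group, alpha=0.05):
    t = np.arange(k)                          # occasions at times 0..k-1
    sxx = np.sum((t - t.mean()) ** 2)
    var_slope = sigma**2 / (n_per_group * sxx)  # variance of a group mean slope
    se_diff = np.sqrt(2 * var_slope)
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z = abs(delta) / se_diff
    return stats.norm.cdf(z - z_alpha) + stats.norm.cdf(-z - z_alpha)

# e.g., a 0.8 mmHg/yr slope difference in systolic blood pressure
print(slope_power(delta=0.8, sigma=8.0, k=5, n_per_group=60))
```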
Abstract:
The tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) is a known carcinogen for lung cancer. Since the cytokinesis-blocked micronucleus (CBMN) assay has been found to be extremely sensitive to NNK-induced genetic damage, it is a potentially important factor for predicting lung cancer risk. However, the association between lung cancer and NNK-induced genetic damage measured by the CBMN assay has not been rigorously examined. This research develops a methodology for modeling the chromosomal changes under NNK-induced genetic damage in a logistic regression framework in order to predict the occurrence of lung cancer. Since these chromosomal changes were usually not observed for very long, due to laboratory cost and time, a resampling technique was applied to generate the Markov chain of the normal and the damaged cell for each individual. A joint likelihood was established between the resampled Markov chains and a logistic regression model that includes the transition probabilities of this chain as covariates. Maximum likelihood estimation was applied to carry out the statistical test for comparison. The ability of this approach to increase discriminating power in predicting lung cancer was compared to a baseline "non-genetic" model. Our method offered an option for understanding the association between dynamic cell information and lung cancer. Our study indicated that the extent of DNA damage/non-damage measured by the CBMN assay provides critical information that impacts public health studies of lung cancer risk. This novel statistical method could simultaneously estimate the process of DNA damage/non-damage and its relationship with lung cancer for each individual.
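The modeling idea, estimating per-subject transition probabilities of a two-state (normal/damaged) cell chain and entering them as covariates in a logistic regression for case status, can be sketched as follows. The data layout and values are hypothetical, and the joint-likelihood machinery of the actual method is replaced here by a simple two-stage fit:

```python
# Estimate two-state transition probabilities per subject, then use them
# as covariates in a logistic regression for lung cancer status.
import numpy as np
from sklearn.linear_model import LogisticRegression

def transition_probs(chain):
    # chain: sequence of 0 (normal) / 1 (damaged) cell states
    counts = np.zeros((2, 2))
    for s, t in zip(chain[:-1], chain[1:]):
        counts[s, t] += 1
    rows = counts.sum(axis=1, keepdims=True)
    p = counts / np.where(rows == 0, 1, rows)
    return p[0, 1], p[1, 0]   # P(normal->damaged), P(damaged->normal)

rng = np.random.default_rng(2)
# Hypothetical resampled chains and case/control labels
chains = [rng.integers(0, 2, size=50) for _ in range(200)]
y = rng.integers(0, 2, size=200)

X = np.array([transition_probs(c) for c in chains])
model = LogisticRegression().fit(X, y)
print(model.coef_)
```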
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make early stopping decisions. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern of the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During trial conduct, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment, allocates substantially more patients to efficacious treatments, and provides higher power to identify the best treatment at the end of the trial. The proposed design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further. Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether the agents are promising enough to be sent to phase III trials. Interim monitoring is employed to stop a trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments.
We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses with a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics across different physician-specified lower bounds of the response rate and different true response rates.
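The continuous-monitoring futility rule can be illustrated with the simplest conjugate version: a Beta-Binomial posterior on the response rate, stopping when the posterior probability of exceeding the physician-specified lower bound is small. This is a sketch of the general idea only; the proposed design additionally models time to response with the piecewise exponential model and multiple imputation:

```python
# Beta-Binomial futility monitoring: stop early if P(p > p0 | data)
# falls below a threshold. Prior, p0, and threshold are assumed values.
from scipy import stats

def futility_stop(responses, n, p0=0.2, prior=(1, 1), threshold=0.05):
    a, b = prior
    post = stats.beta(a + responses, b + n - responses)
    # Posterior probability that the response rate exceeds p0
    return post.sf(p0) < threshold

# e.g., 0 responses among the first 15 evaluable patients
print(futility_stop(responses=0, n=15))   # True -> stop for futility
```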
Abstract:
Complex diseases, such as cancer, are caused by various genetic and environmental factors and their interactions. Joint analysis of these factors and their interactions would increase the power to detect risk factors but is statistically challenging. Bayesian generalized linear modeling with Student-t prior distributions on the coefficients is a novel method for simultaneously analyzing genetic factors, environmental factors, and interactions. I performed simulation studies using three different disease models and demonstrated that the variable selection performance of Bayesian generalized linear models is comparable to that of Bayesian stochastic search variable selection, an improved variable selection method compared to standard approaches. I further evaluated the variable selection performance of Bayesian generalized linear models using different numbers of candidate covariates and different sample sizes, and provided a guideline for the sample size required to achieve high variable selection power with Bayesian generalized linear models, considering different scales of the number of candidate covariates. Polymorphisms in folate metabolism genes and nutritional factors have previously been associated with lung cancer risk. In this study, I simultaneously analyzed 115 tag SNPs in folate metabolism genes, 14 nutritional factors, and all possible gene-nutrient interactions from 1239 lung cancer cases and 1692 controls using Bayesian generalized linear models stratified by never, former, and current smoking status. SNPs in MTRR were significantly associated with lung cancer risk across never, former, and current smokers. In never smokers, three SNPs in TYMS and three gene-nutrient interactions, including an interaction between SHMT1 and vitamin B12, an interaction between MTRR and total fat intake, and an interaction between MTR and alcohol use, were also identified as associated with lung cancer risk. These lung cancer risk factors are worthy of further investigation.
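One simple realization of a Bayesian GLM with Student-t priors is MAP estimation for a logistic regression with independent t priors on the coefficients. A minimal sketch on simulated data; the prior's degrees of freedom and scale, and the simulated effect sizes, are assumptions:

```python
# MAP estimation for logistic regression with Student-t priors:
# the heavy-tailed prior shrinks noise coefficients toward zero
# while leaving large real effects mostly unshrunk.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, p = 500, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -0.8, 0.6]                 # three real effects
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def neg_log_posterior(beta, nu=3.0, scale=0.5):
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0, eta))        # Bernoulli log-likelihood
    logprior = np.sum(stats.t.logpdf(beta, df=nu, scale=scale))
    return -(loglik + logprior)

beta_map = minimize(neg_log_posterior, np.zeros(p), method="L-BFGS-B").x
print(np.round(beta_map, 2))
```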
Abstract:
The infant mortality rate (IMR) is considered one of the most important indices of a country's well-being. Countries around the world and health organizations such as the World Health Organization are dedicating their resources, knowledge, and energy to reducing infant mortality rates. The well-known Millennium Development Goal 4 (MDG 4), which aims to achieve a two-thirds reduction of the under-five mortality rate between 1990 and 2015, is an example of this commitment. In this study our goal is to model the trends in IMR from the 1950s to the 2010s for selected countries. We would like to know how the IMR changes over time and how it differs across countries. IMR data collected over time form a time series, and the repeated observations are not statistically independent, so in modeling the trend of IMR it is necessary to account for these correlations. We propose to use the generalized least squares method in a general linear model setting to deal with the variance-covariance structure of our model. To estimate the variance-covariance matrix, we draw on time-series models, especially autoregressive and moving average models. Furthermore, we compare results from the general linear model with a correlation structure to those from ordinary least squares, which ignores the correlation structure, to check how much the estimates change.
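The GLS step can be sketched for AR(1) errors, the simplest autoregressive structure of the kind mentioned above; the data and parameters here are simulated, not actual IMR values:

```python
# Generalized least squares for a linear time trend with AR(1) errors,
# compared against ordinary least squares on the same series.
import numpy as np

rng = np.random.default_rng(4)
n = 60                                   # e.g., annual IMR observations
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])     # intercept + linear trend

# Simulate a declining trend with AR(1) residuals
rho, beta_true = 0.7, np.array([80.0, -1.0])
e = np.zeros(n)
for i in range(1, n):
    e[i] = rho * e[i - 1] + rng.normal(scale=2.0)
y = X @ beta_true + e

# AR(1) working covariance: V_ij = rho^|i-j|
V = rho ** np.abs(np.subtract.outer(t, t))
Vinv = np.linalg.inv(V)

# GLS estimator: (X' V^-1 X)^-1 X' V^-1 y
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_gls, beta_ols)   # similar point estimates; GLS yields valid SEs
```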
Abstract:
This study evaluated a modified home-based model of family preservation services, the long-term community case management model, as operationalized by a private child welfare agency that serves as the last resort for hard-to-serve families with children at severe risk of out-of-home placement. The evaluation used a one-group pretest-posttest design with a modified time-series design to determine whether the intervention would produce a change over time in the composite score of each family's Child Well-Being Scales (CWBS). A comparison of the mean CWBS scores of the 208 families, and of subsets of these families, at the pretest and various posttests showed a statistically significant decrease in CWBS scores, indicating decreased risk factors. The longer the duration of services, the greater the statistically significant risk reduction. The results support the conclusion that families who participated in empowerment-oriented community case management, with the option to extend service duration to resolve or ameliorate chronic family problems, experienced effective strengthening of family functioning.
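The pretest-posttest comparison of mean CWBS composite scores amounts to a paired t test at each posttest occasion. A minimal sketch on hypothetical scores (in this study a lower composite indicates lower risk; the values below are invented):

```python
# Paired comparison of CWBS composite scores at pretest and one posttest
# for a single group of families.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pre = rng.normal(1.8, 0.4, 208)            # hypothetical composite risk scores
post = pre - rng.normal(0.15, 0.2, 208)    # lower score = lower risk

res = stats.ttest_rel(pre, post)
print(res.statistic, res.pvalue)
```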
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to time series data elements. The first of these is the raw data elements, represented by multiple values per variable; these constitute the measured observations that are typically available to end users when they review time series data, often shown as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% with the inclusion of the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy relative to the baseline multivariate model, but diminished classification accuracy relative to adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
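The core move of the three papers, augmenting point-in-time features with trend-analysis features derived from the time series, can be sketched as follows. The vital-sign simulation, window lengths, and feature choices are hypothetical stand-ins for the dissertation's actual preprocessing pipeline:

```python
# Derive trend features (rolling least-squares slopes) from vital-sign
# series and combine them with snapshot values in a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rolling_slope(series, window):
    # Least-squares slope over the trailing window of observations
    t = np.arange(window)
    return np.polyfit(t, series[-window:], 1)[0]

rng = np.random.default_rng(5)
X, y = [], []
for _ in range(300):
    arrest = rng.random() < 0.2
    drift = -0.4 if arrest else 0.0          # hypothetical pre-arrest decline
    spo2 = 97 + drift * np.arange(30) + rng.normal(0, 1, 30)
    sbp = 110 + 2 * drift * np.arange(30) + rng.normal(0, 3, 30)
    # Snapshot values plus trend features over the last 15 samples
    X.append([spo2[-1], sbp[-1], rolling_slope(spo2, 15), rolling_slope(sbp, 15)])
    y.append(int(arrest))

model = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print(model.coef_.round(2))
```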
Abstract:
Objectives: to reduce losses during cold storage, to use modified atmosphere as a method supplementary to refrigeration, and to extend the period of commercial viability. Methodology: fruit was stored at 0±1 °C and 90±5% RH under the following treatments: 1. control: 20 kg of unsorted bulk fruit in a plastic crate; 2. bulk + PVC film: 10 kg of bulk fruit on wooden trays lined with corrugated cardboard and covered with PVC film; 3. celpack: wooden trays lined with corrugated cardboard holding two celpacks of 23 fruits each; 4. celpack + modified atmosphere: as above, but with each celpack in a 20 μm low-density polyethylene bag. Starting at 30 days of storage, a sample of 46 fruits was withdrawn weekly for 9 weeks; 23 were analyzed at the time of withdrawal and the remaining 23 after 48 hours of simulated marketing (sm). For the statistical evaluation, analysis of variance was applied with the SAS (Statistical Analysis System) program, and differences between treatments were determined with Duncan's test. For flavor, a chi-square homogeneity test was applied instead. Flavor was evaluated by tasting with a panel of 5 trained tasters. Results: at the start of storage the fruits had the following characteristics: size 61.4 mm, weight 117.8 g, flesh firmness 3.1 kgf, bittersweet flavor, soluble solids content 17.5 °Bx, acidity 0.78 g malic acid per 100 g, color coverage 83.69%. After cold storage (97 days): color coverage 95%. Flesh firmness in the celpack + bag treatment stood out with higher values (mean 2.8 kgf) versus a mean of 2.6 kgf for the other treatments. After sm, firmness was lower, and this decrease was smallest in celpack + bag. Soluble solids: mean 17.21 °Bx; after sm, values averaged about 0.3% higher. Titratable acidity: progressive decrease, from 0.68 to 0.47 g per 100 g by the end of storage. Flavor: from 59 days onward, insipid and unpleasant-tasting fruits increased, except in celpack + bag. Dehydration symptoms: from 79 days onward, the only treatment without symptoms was celpack + bag. Conclusions: celpack packaging reduced the incidence of mold attack (it was the only treatment without attack for 94 days); it also showed no unpleasant flavors, and its storage limit was due to dehydration, evident from 74 days onward. Fruit packed in celpack + bag had higher pressure resistance values and 100% of fruits free of dehydration at 94 days of storage; from 80 days onward, mold attack and fruits with unpleasant flavors were evident. The bulk and bulk + film treatments showed dehydration damage from 74 days onward. Storage should not exceed 80 days. Celpack + bag gave the best results, with higher pressure resistance values than the other treatments; with respect to flavor, it retained a higher proportion of sweet-tasting fruit.
Abstract:
Echeverría, Alberdi, and Bilbao can be considered three of the thinkers most representative of the nineteenth century's readings of the independence Revolution in America. Their theoretical constructions were elaborated in light of the political consequences of the Revolution, but also through their reading of a series of French political theorists who were contemporaneously reflecting on the French revolutionary process. Situating these authors in the context of the intellectual debates of their time is a way of broadening the reading and of uncovering the main particularities of a body of thought defined in response to the new conditions of modern politics.