977 results for Instrumental-variable Methods
Abstract:
In recent years, declines of honey bee populations have received massive media attention worldwide, yet attempts to understand the causes have been hampered by a lack of standardisation of laboratory techniques. Published as a response to this, the COLOSS BEEBOOK is a unique collaborative venture involving 234 bee scientists from 34 countries, who have produced the definitive guide to carrying out research on honey bees. It is hoped that these volumes will become the standards adopted by bee scientists worldwide. Volume I includes approximately 1,100 separate protocols dealing with the study of the honey bee, Apis mellifera. These cover anatomy, behavioural studies, chemical ecology, breeding, genetics, instrumental insemination and queen rearing, pollination, molecular studies, statistics, toxicology and numerous other techniques.
Abstract:
Conservation and monitoring of forest biodiversity requires reliable information about forest structure and composition at multiple spatial scales. However, detailed data about forest habitat characteristics across large areas are often incomplete due to difficulties associated with field sampling methods. To overcome this limitation we employed a nationally available light detection and ranging (LiDAR) remote sensing dataset to develop variables describing forest landscape structure across a large environmental gradient in Switzerland. Using a model species indicative of structurally rich mountain forests (hazel grouse Bonasa bonasia), we tested the potential of such variables to predict species occurrence and evaluated the additional benefit of LiDAR data when used in combination with traditional, sample plot-based field variables. We calibrated boosted regression trees (BRT) models for both variable sets separately and in combination, and compared the models' accuracies. While both field-based and LiDAR models performed well, combining the two data sources improved the accuracy of the species' habitat model. The variables retained from the two datasets held different types of information: field variables mostly quantified food resources and cover in the field and shrub layer, while LiDAR variables characterized the heterogeneity of vegetation structure, which correlated with field variables describing the understory and ground vegetation. When combined with data on forest vegetation composition from field surveys, LiDAR provides valuable complementary information for encompassing species niches more comprehensively. Thus, LiDAR bridges the gap between precise, locally restricted field data and coarse digital land cover information by reliably identifying habitat structure and quality across large areas.
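The abstract's modelling step — fitting boosted-tree habitat models on each variable set and on their combination, then comparing accuracies — can be illustrated with a toy sketch. Scikit-learn's `GradientBoostingClassifier` stands in for the BRT implementation used in the study, and the "field" and "LiDAR" predictors below are synthetic stand-ins, not the Swiss data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600
# Hypothetical predictors: 4 "field" variables (e.g. shrub cover, food plants)
# and 4 "LiDAR" variables (e.g. canopy height variance, gap fraction).
field = rng.normal(size=(n, 4))
lidar = rng.normal(size=(n, 4))
# Occurrence depends on one variable from each set, so neither set alone
# captures the full niche.
logit = 1.2 * field[:, 0] + 1.2 * lidar[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def brt_auc(X, y):
    """Fit a boosted-tree model and report training AUC (illustration only)."""
    m = GradientBoostingClassifier(n_estimators=100, max_depth=2,
                                   learning_rate=0.05, random_state=0).fit(X, y)
    return roc_auc_score(y, m.predict_proba(X)[:, 1])

auc_field = brt_auc(field, y)
auc_lidar = brt_auc(lidar, y)
auc_both = brt_auc(np.hstack([field, lidar]), y)
```

On data built this way, the combined model should not do worse than the weaker single-set model, mirroring the study's finding that the two sources carry complementary information.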
Abstract:
PURPOSE Recent advances in optogenetics and gene therapy have led to promising new treatment strategies for blindness caused by retinal photoreceptor loss. Preclinical studies often rely on the retinal degeneration 1 (rd1 or Pde6b(rd1)) retinitis pigmentosa (RP) mouse model. The rd1 founder mutation is present in more than 100 actively used mouse lines. Since secondary genetic traits are well known to modify the phenotypic progression of photoreceptor degeneration in animal models and in human patients with RP, the genetic background of the rd1 mouse model cannot be neglected. Moreover, the success of various potential therapies, including optogenetic gene therapy and prosthetic implants, depends on the progress of retinal degeneration, which might differ between rd1 lines. To examine phenotypic expressivity in the rd1 mouse model, we compared the progress of retinal degeneration in two common rd1 lines, C3H/HeOu and FVB/N. METHODS We followed retinal degeneration over 24 weeks in FVB/N, C3H/HeOu, and congenic Pde6b(+) seeing mouse lines, using a range of experimental techniques including extracellular recordings from retinal ganglion cells, PCR quantification of cone opsin and Pde6b transcripts, in vivo flash electroretinogram (ERG), and behavioral optokinetic reflex (OKR) recordings. RESULTS We demonstrated a substantial difference in the speed of retinal degeneration and the accompanying loss of visual function between the two rd1 lines. Photoreceptor degeneration and loss of vision were faster, with an earlier onset, in the FVB/N mice compared to C3H/HeOu mice, whereas the performance of the Pde6b(+) mice did not differ significantly in any of the tests. By postnatal week 4, the FVB/N mice expressed significantly less cone opsin and Pde6b mRNA and had neither ERG nor OKR responses. At 12 weeks of age, the retinal ganglion cells of the FVB/N mice had lost all light responses.
In contrast, 4-week-old C3H/HeOu mice still had ERG and OKR responses, and we still recorded light responses from C3H/HeOu retinal ganglion cells until the age of 24 weeks. These results show that genetic background plays an important role in the rd1 mouse pathology. CONCLUSIONS Analogous to human RP, the mouse genetic background strongly influences the rd1 phenotype. Thus, different rd1 mouse lines may follow different timelines of retinal degeneration, making exact knowledge of genetic background imperative in all studies that use rd1 models.
Abstract:
Keel bone damage (KBD) is a critical issue facing the laying hen industry today, as the likely pain compromises welfare and may reduce productivity. Recent reports suggest that damage, while highly variable and likely dependent on a host of factors, extends to all housing systems (including battery cages, furnished cages, and non-cage systems), genetic lines, and management styles. Despite the extent of the problem, the research community remains uncertain as to the causes and influencing factors of KBD. Although progress has been made investigating these factors, the overall effort is hindered by several issues related to the assessment of KBD, including the quality of, and variation in, the methods used between research groups. These issues prevent effective comparison of studies and make damage harder to identify, leading to poor accuracy and reliability. The current manuscript seeks to resolve these issues by offering precise definitions for types of KBD, reviewing methods for assessment, and providing recommendations that can improve the accuracy and reliability of those assessments.
Abstract:
Current statistical methods for estimation of parametric effect sizes from a series of experiments are generally restricted to univariate comparisons of standardized mean differences between two treatments. Multivariate methods are presented for the case in which the effect size is a vector of standardized multivariate mean differences and the number of treatment groups is two or more. The proposed methods employ a vector of independent sample means for each response variable, which leads to a covariance structure that depends only on correlations among the $p$ responses on each subject. Using weighted least squares theory and the assumption that the observations are from normally distributed populations, multivariate hypotheses analogous to common hypotheses used for testing effect sizes were formulated and tested for treatment effects which are correlated through a common control group, through multiple response variables observed on each subject, or both. The asymptotic multivariate distribution for correlated effect sizes is obtained by extending univariate methods for estimating effect sizes which are correlated through common control groups. The joint distributions of vectors of effect sizes (from $p$ responses on each subject) from one treatment and one control group, and from several treatment groups sharing a common control group, are derived. Methods are given for estimation of linear combinations of effect sizes when certain homogeneity conditions are met, and for estimation of vectors of effect sizes and confidence intervals from $p$ responses on each subject. Computational illustrations are provided using data from studies of effects of electric field exposure on small laboratory animals.
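The building block of the abstract — a vector of standardized mean differences, one per response, each standardized by its pooled within-group SD — can be sketched as follows. The small-sample bias correction is the usual Hedges adjustment, and the data are invented:

```python
import numpy as np

def hedges_g_vector(treat, control):
    """Vector of standardized mean differences (Hedges' g) for p responses.

    treat, control: (n_t, p) and (n_c, p) arrays of subject-by-response data.
    Each response is standardized by its pooled within-group SD, and the
    usual small-sample bias correction J(m) = 1 - 3/(4m - 1) is applied.
    """
    nt, nc = treat.shape[0], control.shape[0]
    s2t = treat.var(axis=0, ddof=1)
    s2c = control.var(axis=0, ddof=1)
    pooled = np.sqrt(((nt - 1) * s2t + (nc - 1) * s2c) / (nt + nc - 2))
    d = (treat.mean(axis=0) - control.mean(axis=0)) / pooled
    m = nt + nc - 2
    return (1 - 3 / (4 * m - 1)) * d

# Two responses per subject: the first differs between groups, the second
# does not, so the effect-size vector should be (positive, zero).
treat = np.array([[2.0, 1.0], [4.0, 3.0]])
control = np.array([[0.0, 1.0], [2.0, 3.0]])
g = hedges_g_vector(treat, control)
```

Correlations among the $p$ responses would then enter through the covariance of this vector, which is what the weighted least squares machinery of the abstract operates on.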
Abstract:
Context: Black women are reported to have a higher prevalence of uterine fibroids, and a threefold higher incidence rate and relative risk for clinical uterine fibroid development, compared to women of other races. Uterine fibroid research has reported that black women experience greater uterine fibroid morbidity and a disproportionate disease burden. With increased interest in understanding uterine fibroid development, and race being a critical component of uterine fibroid assessment, it is imperative that the methods used to determine the race of research participants are defined and that the operational definition of race as a variable is reported, both for methodological guidance and to enable the research community to compare statistical data and replicate studies. Objectives: To systematically review and evaluate the methods used to assess race and racial disparities in uterine fibroid research. Data Sources: Databases searched for this review include OVID Medline, NLM PubMed, Ebscohost Cumulative Index to Nursing and Allied Health Plus with Full Text, and Elsevier Scopus. Review Methods: Articles published in English were retrieved from the data sources between January 2011 and March 2011. Broad search terms, uterine fibroids and race, were employed to retrieve a comprehensive list of citations for review screening. The initial database yield included 947 articles; after removal of duplicates, 485 articles remained. In addition, 771 bibliographic citations were reviewed to identify additional articles not found through the primary database search, of which 17 new articles were included. In the first screening, 502 titles and abstracts were screened against eligibility questions to determine which citations to exclude and to retrieve full-text articles for review. In the second screening, 197 full-text articles were screened against eligibility questions to determine whether or not they met the full inclusion/exclusion criteria.
Results: 100 articles met the inclusion criteria and were used in the results of this systematic review. The evidence suggested that black women have a higher prevalence of uterine fibroids than white women. None of the 14 studies reporting data on prevalence reported an operational definition or conceptual framework for the use of race. A limited number of studies reported on the prevalence of risk factors among racial subgroups. Of these 3 studies, 2 reported a lower prevalence of risk factors for black women than for other races, contrary to hypothesis, and none of the three reported a conceptual framework for the use of race. Conclusion: Of the 100 uterine fibroid studies included in this review, over half (66%) reported a specific objective to assess and recruit study participants based upon their race and/or ethnicity, but most (51%) failed to report a method of determining the actual race of the participants, and far fewer, 4% (only four South American studies), reported a conceptual framework and/or operational definition of race as a variable. Nevertheless, most (95%) of all studies reported race-based health outcomes. The inadequate methodological guidance on the use of race in uterine fibroid studies purporting to assess race and racial disparities may be a primary reason that uterine fibroid research continues to report racial disparities but fails to explain the high prevalence and increased exposures among African-American women. A standardized method of assessing race throughout uterine fibroid research would help elucidate what race is actually measuring, and the risk of exposures for that measurement.
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to assess more accurately the causal effects of many genetic and environmental factors. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated advantages over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, which were developed for gene-environment interaction studies, to related problems such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis) and gene-environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in a linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious and powerful models. We evaluate both 'strong hierarchical' and 'weak hierarchical' models, which specify that both, or at least one, of the main effects of interacting factors must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint, and observe their superior performance in most of the situations considered. The proposed models are applied to real data from lung cancer and cutaneous melanoma case-control studies of gene-environment interactions. Bayesian statistical models have the advantage of being able to incorporate useful prior information in the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while still identifying the reported associations. This is practically appealing for studies that investigate causal factors from a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimates of effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting the true main effects and interactions compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power to detect non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods able to handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the methods for gene-environment interactions in how they balance statistical efficiency and bias within a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
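The 'strong' and 'weak' hierarchical constraints described above can be made concrete with a small sketch: given which main effects are currently in the model, each rule determines which pairwise interactions are eligible for inclusion. The factor names and the dictionary interface are illustrative, not the dissertation's implementation:

```python
from itertools import combinations

def allowed_interactions(main_included, rule="strong"):
    """Interactions permitted given which main effects are in the model.

    main_included: dict factor-name -> bool (main effect currently included).
    rule: 'strong'      -> both parent main effects must be present,
          'weak'        -> at least one parent must be present,
          'independent' -> no constraint.
    Returns the set of allowed pairwise interactions.
    """
    names = sorted(main_included)
    out = set()
    for a, b in combinations(names, 2):
        ok = {"strong": main_included[a] and main_included[b],
              "weak": main_included[a] or main_included[b],
              "independent": True}[rule]
        if ok:
            out.add((a, b))
    return out

# Two genes and one environmental factor; only G1 and E have main effects.
mains = {"G1": True, "G2": False, "E": True}
strong = allowed_interactions(mains, "strong")
weak = allowed_interactions(mains, "weak")
indep = allowed_interactions(mains, "independent")
```

In a Bayesian variable selection run, such a rule would enter through the prior on the interaction inclusion indicators, conditioning them on the main-effect indicators.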
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors that varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit. The study found that the $\hat{C}_g$ statistic was adequate in tests of significance for most situations. However, when testing data which deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into quantiles from 8 to 30 was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons which suggest otherwise. Because it does not follow a $\chi^2$ distribution, the statistic is not recommended for use in models containing only categorical variables with a limited number of covariate patterns. The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations.
Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among the covariates. Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles. Approaches which create equal-size groups by separating ties should be avoided.
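For reference, the deciles-of-risk form of the statistic studied above can be sketched as follows. This is a minimal implementation, not the study's simulation code; the synthetic data are generated from a logistic model, so the fit should be adequate:

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow C-hat statistic using 'deciles of risk'.

    y: 0/1 outcomes; p: fitted probabilities. Observations are sorted by p
    and split into `groups` roughly equal bins; within each bin the observed
    and expected event counts are compared. Under a well-fitting model the
    statistic is approximately chi-square with groups - 2 df.
    """
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return stat

# Data generated from a logistic model; using the true probabilities as the
# "fitted" values, so no lack of fit should be flagged.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
p_true = 1 / (1 + np.exp(-(0.5 + x)))
y = (rng.random(2000) < p_true).astype(int)
stat = hosmer_lemeshow(y, p_true)
```

Comparing `stat` to a $\chi^2$ distribution with `groups - 2` degrees of freedom gives the usual significance test; the abstract's caveats (sparse covariate patterns, ties, few observations per quantile) all concern when that reference distribution breaks down.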
Abstract:
Independent Component Analysis (ICA) is a blind source separation method that aims to find the pure source signals mixed together in unknown proportions in the observed signals under study. It does this by searching for factors which are mutually statistically independent, and can thus be classified among the latent-variable based methods. Like other methods based on latent variables, a careful investigation has to be carried out to find out which factors are significant and which are not. It is therefore important to have a validation procedure to decide on the optimal number of independent components to include in the final model. This can be complicated by the fact that two consecutive models may differ in the order and signs of similarly-indexed ICs. Moreover, the structure of the extracted sources can change as a function of the number of factors calculated. Two methods for determining the optimal number of ICs are proposed in this article and applied to simulated and real datasets to demonstrate their performance.
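The order-and-sign ambiguity mentioned above is easy to demonstrate and to correct: before two ICA models can be compared component-by-component, the estimated ICs must be aligned, for example by greedy matching on absolute correlation. This is a generic sketch, not the validation methods proposed in the article:

```python
import numpy as np

def match_components(S_ref, S_new):
    """Align a new set of ICs to a reference set, fixing order and sign.

    ICA recovers sources only up to permutation and sign, so two runs cannot
    be compared index-by-index. Greedily pair each reference IC (row) with
    the unused new IC of largest absolute correlation, flipping the sign
    where the correlation is negative. Returns the reordered, sign-corrected
    copy of S_new.
    """
    corr = np.corrcoef(S_ref, S_new)[: len(S_ref), len(S_ref):]
    aligned = np.empty_like(S_ref)
    used = set()
    for i in range(len(S_ref)):
        j = max((j for j in range(S_new.shape[0]) if j not in used),
                key=lambda j: abs(corr[i, j]))
        used.add(j)
        aligned[i] = np.sign(corr[i, j]) * S_new[j]
    return aligned

# Three hypothetical sources; a permuted, sign-flipped copy stands in for
# the ICs of a second model.
rng = np.random.default_rng(2)
S = rng.normal(size=(3, 500))
shuffled = -S[[2, 0, 1]]
recovered = match_components(S, shuffled)
```

After this alignment, stability of components across models with consecutive numbers of ICs can be assessed directly, which is the kind of comparison the article's validation procedures rely on.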
Abstract:
The accumulation of solid material in reservoirs, river channels and coastal areas means that mechanical extraction of these materials by suction is becoming increasingly common, so it is important to study the performance of such suction systems by analysing the shape of the nozzles and the flow parameters, including the pump. This thesis studies experimentally the effectiveness of different solids-removal devices, using nozzles of various shapes and a variable-speed pump. The experimental rig was developed at the Hydraulics Laboratory of the E.T.S.I. de Caminos, C. y P. of the Universidad Politécnica de Madrid. It includes a submerged bed of different types of sediment, solids-extraction nozzles and a variable-speed pump, as well as an element for separating the water from the extracted solids. The basic parameters analysed are the total liquid flow pumped, the solid flow extracted, the diameter of the suction pipe, the shape and cross-section of the extraction nozzle, and the speed and efficiency of the variable-speed pump. The measurements obtained on the experimental rig have been studied by means of dimensional analysis and statistical methods. From this study a new formulation has been developed that relates the extracted solid flow to the pipe and nozzle diameters, the pumped liquid flow and the rotational speed of the pump. From a practical point of view, the influence of the nozzle shape on the solids-extraction capacity has also been analysed, with all other parameters held equal, so that the most appropriate nozzle shape can be recommended.
Abstract:
Studies carried out to date to determine the measurement quality of geodetic instruments have focused primarily on angle and distance measurements. In recent years, however, GNSS (Global Navigation Satellite System) equipment has come into widespread use for geomatic applications without any established methodology for obtaining the calibration correction and its uncertainty for this equipment. The purpose of this Thesis is to establish the requirements that a network must meet to be considered a standard network with metrological traceability, as well as the methodology for the verification and calibration of GNSS instruments on such standard networks. To this end, a technical calibration procedure for GNSS equipment has been designed and developed in which the contributions to the measurement uncertainty are defined. The procedure, applied on different networks and with different equipment, has yielded the expanded uncertainty of that equipment following the recommendations of the Guide to the Expression of Uncertainty in Measurement of the Joint Committee for Guides in Metrology. In addition, the three-dimensional coordinates of the stations making up the networks considered in the research were determined by satellite observation techniques, and simulations were run for various values of the experimental standard deviations of the fixed points used in the least-squares adjustment of the vectors or baselines. The results highlight the importance of knowing the experimental standard deviations when calculating the uncertainties of the three-dimensional station coordinates. Based on earlier studies and observations of high technical quality carried out on these networks, an exhaustive analysis was performed to determine the conditions a standard network must satisfy. Technical calibration procedures were also designed to calculate the expanded measurement uncertainty of the geodetic instruments that provide angles and distances obtained by electromagnetic methods, since these instruments disseminate metrological traceability to the standard networks used for the verification and calibration of GNSS equipment. In this way it has been possible to determine local calibration corrections for high-accuracy GNSS equipment on the standard networks. In this Thesis the uncertainty of the calibration correction has been obtained by two different methodologies: the first applies the law of propagation of uncertainty, while the second applies the propagation of distributions using the Monte Carlo method. The analysis of the results confirms the validity of both methodologies for determining the calibration uncertainty of GNSS instruments.
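The two methodologies compared in the thesis — the law of propagation of uncertainty and Monte Carlo propagation of distributions — can be sketched on a toy calibration correction c = L_ref - L_meas between a reference baseline length and a GNSS-measured one. All numerical values below are invented for illustration:

```python
import numpy as np

# Calibration correction of a GNSS-derived baseline against a reference
# network value: c = L_ref - L_meas (metres). Invented values.
u_ref, u_meas = 0.002, 0.005            # standard uncertainties [m]
L_ref, L_meas = 100.0000, 99.9968

# Methodology 1: law of propagation of uncertainty (GUM). The model is
# linear with sensitivity coefficients +1 and -1; no correlation assumed.
u_lpu = np.hypot(u_ref, u_meas)

# Methodology 2: propagation of distributions by Monte Carlo, drawing the
# input quantities from normal distributions and propagating through the
# measurement model.
rng = np.random.default_rng(3)
n = 200_000
c_draws = rng.normal(L_ref, u_ref, n) - rng.normal(L_meas, u_meas, n)
u_mc = c_draws.std(ddof=1)

k = 2.0                                  # coverage factor for ~95 % coverage
U_expanded = k * u_mc
```

For a linear model like this one the two routes agree closely, which is the kind of cross-validation the thesis reports; Monte Carlo becomes the safer choice when the model is nonlinear or the input distributions are non-Gaussian.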
Abstract:
This paper addresses the question of maximizing classifier accuracy for classifying task-related mental activity from magnetoencephalography (MEG) data. We propose the use of different sources of information and introduce an automatic channel selection procedure. To determine an informative set of channels, our approach combines a variety of machine learning algorithms: feature subset selection methods, classifiers based on regularized logistic regression, information fusion, and multiobjective optimization based on probabilistic modeling of the search space. The experimental results show that our proposal improves classification accuracy compared to approaches whose classifiers use only one type of MEG information or for which the set of channels is fixed a priori.
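One ingredient of the pipeline above — regularized logistic regression used as a sparse channel selector — can be sketched with scikit-learn. The toy data (one feature per channel, five informative channels) are an assumption for illustration, not MEG recordings, and the L1 penalty here stands in for the paper's combination of selection methods:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials, n_channels = 300, 50
# Toy design: one summary feature per channel; only the first 5 channels
# carry task-related signal.
X = rng.normal(size=(n_trials, n_channels))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_trials) > 0).astype(int)

# L1-penalized logistic regression drives uninformative channel weights to
# exactly zero, so the surviving coefficients define the selected channels.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)
```

In the paper's setting the candidate set produced this way would then be refined by information fusion and multiobjective search rather than taken as final.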
Abstract:
Objective: In this study, the authors assessed the effects of a structured, moderate-intensity exercise program performed throughout pregnancy on the method of delivery. Methods: A randomized controlled trial was conducted with 290 healthy pregnant Caucasian (Spanish) women with a singleton gestation, who were randomly assigned to either an exercise (n=138) or a control (n=152) group. Pregnancy outcomes, including the type of delivery, were measured at the end of the pregnancy. Results: The percentages of cesarean and instrumental deliveries in the exercise group were lower than in the control group (15.9%, n=22 and 11.6%, n=16 vs. 23%, n=35 and 19.1%, n=29, respectively; p=0.03). The overall health status of the newborns as well as other pregnancy outcomes were unaffected. Conclusions: Based on these results, a supervised program of moderate-intensity exercise performed throughout pregnancy was associated with a reduction in the rate of cesarean sections and can be recommended for healthy pregnant women.
Abstract:
Mealiness is a negative attribute of sensory texture, characterised by a lack of juiciness without variation in the total water content of the tissues. In peaches, mealiness is also known as "woolliness" or "leatheriness". This internal disorder is characterised by a lack of juiciness and flavour, and is associated with internal browning near the stone and an incapacity to ripen despite an externally ripe appearance. Woolliness is associated with inadequate cold storage and is considered a physiological disorder that appears in stone fruits when unbalanced pectolytic enzyme activity occurs during storage (Kailasapathy and Melton, 1992). Many attempts have been made to identify and measure mealiness and woolliness in fruits. The texture of a food product comprises a wide spectrum of sensory attributes: the consumer judges texture by integrating all the sensory attributes simultaneously, whereas an instrument assesses one or several parameters related to a fraction of the texture spectrum (Kramer, 1973). The complexity of sensory analysis by trained panels to assess the quality of some production processes supports the attempt to estimate texture characteristics by instrumental means, and some studies have compared sensory and instrumental methods for assessing mealiness and woolliness. The current study centres on the analysis and evaluation of woolliness in peaches and is part of the European project FAIR CT95 0302 "Mealiness in fruits: consumer perception and means for detection". The main objective of this study was to develop procedures to detect woolly peaches by sensory and by instrumental means, as well as to compare the two measuring procedures.
Abstract:
In this study, the receiver of a central-tower solar thermal power plant for direct steam generation has been designed using numerical methods, with an incident power profile that varies both longitudinally and transversely. To this end, the receiver geometry was discretised according to the finite-difference method, and the energy-balance equations were solved. Once the system of equations has been solved, the temperature distribution in the receiver is available, and the results can be analysed and other data of interest computed.
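A minimal sketch of the finite-difference energy balance described above, reduced to a single flow path with a longitudinally varying incident flux; all geometry, flow and flux values are invented, and the transverse variation and radiative losses of the real design are omitted:

```python
import numpy as np

# 1-D finite-difference energy balance for steam marching along a receiver
# panel: m_dot*cp*dT = (absorbed flux - convective loss) over each node.
L, n = 10.0, 200                 # panel length [m], number of nodes
dx = L / n
w = 0.5                          # absorbing width per tube [m]
m_dot, cp = 0.8, 2500.0          # steam mass flow [kg/s], heat capacity [J/(kg K)]
h_loss, T_amb = 15.0, 25.0       # loss coefficient [W/(m2 K)], ambient [degC]

# Variable incident flux profile along the flow path [W/m2] (invented shape).
x = (np.arange(n) + 0.5) * dx
q = 3.0e5 * np.exp(-((x - L / 2) ** 2) / 8.0)

T = np.empty(n + 1)
T[0] = 250.0                     # inlet temperature [degC]
for i in range(n):
    dT = (q[i] - h_loss * (T[i] - T_amb)) * w * dx / (m_dot * cp)
    T[i + 1] = T[i] + dT

T_out = T[-1]
```

Solving the node balances in sequence like this yields the temperature distribution `T` along the receiver, from which other quantities of interest (outlet state, peak wall temperature location) can be derived.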