886 results for Type of error


Relevance:

100.00%

Abstract:

Medicinal herbs are exposed to numerous influences during the drying process that decisively affect the quality of the end product. This research deals with the drying of lemon balm (Melissa officinalis L.) into a high-quality end product. Drying strategies are proposed that incorporate experimental and mathematical aspects in order to achieve the required quality attributes, in terms of colour change and essential oil content, at an adequate productivity. Owing to various problems in the drying process, dried lemon balm currently cannot always meet the high quality demands of the market. No standardised information on the individual and complex drying parameters is available. In practice, drying is based on empirical values, or procedures for drying other plants are copied; often the drying is not reproducible or rests on subjective approximations. As a consequence of this unsuitable choice of drying parameters, problems frequently arise such as over-drying, which leads to increased breakage losses of the leaf mass, or insufficient drying, which results in an excessively high final moisture content of the product. This in turn inevitably leads to an unacceptable colour change and an excessive loss of essential oils. Because of the different thermal and mechanical properties of leaves and stems, uneven drying is the rule. Unnecessarily long drying times are also observed, leading to increased energy consumption. Drying in solar tunnel dryers entails a further problem: because of the uncontrolled incidence of radiation, it is difficult to regulate the drying temperature. Radiation also affects the colour of the product through photochemical reactions. In addition, the large fluctuations in radiation, temperature and air humidity create unstable conditions for uniform and controllable drying. In view of these problems, the following research priorities were set for this work: new strategies for improving quality were developed, with the aim of reducing drying time and energy consumption. To propose a methodology based on optimal drying parameters, temperature and air humidity were treated as variables as a function of drying time, essential oil content, colour change and the required energy. In addition, these parameters and their effects on the quality attributes were analysed in solar tunnel dryers. Different approaches were pursued to achieve these goals. The sorption isotherms and the drying kinetics of lemon balm were determined and fitted to various mathematical models. Likewise, an alternative staged drying in stepped phases was carried out in order to increase the quality of the end product while reducing the total energy consumption. In addition, a statistical experimental design based on the CCD (Central Composite Design) and RSM (Response Surface Methodology) methods was proposed in order to achieve the desired quality attributes and the necessary energy input as a function of air temperature and air humidity.
From the data obtained, regression models were generated and the behaviour of the drying process was described. Finally, a statistical design of experiments (DOE) was applied to evaluate the influence of the parameters on the product quality to be achieved in a solar tunnel dryer. The effects of shading, position in the tunnel, loading density and air velocity on drying time, colour change and essential oil content were analysed. Corresponding regression models for application in solar tunnel dryers were also developed. The key results are analysed with respect to optimal drying parameters in terms of quality and energy consumption.
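As an illustration of the CCD/RSM approach mentioned above, the following sketch builds a rotatable central composite design for two factors (air temperature and humidity, in coded units) and fits a quadratic response surface by least squares. The design itself is standard; the response values are invented for the example and are not data from this work.

```python
import numpy as np

alpha = np.sqrt(2.0)  # rotatable axial distance for a 2-factor CCD
design = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],                 # factorial points
    [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],   # axial points
    [0, 0], [0, 0], [0, 0],                             # centre replicates
])

rng = np.random.default_rng(2)
t, h = design[:, 0], design[:, 1]   # coded temperature and humidity
# hypothetical response (e.g. colour change) with curvature near the centre
y = (5.0 + 1.2 * t - 0.8 * h + 0.9 * t**2 + 0.6 * h**2 + 0.3 * t * h
     + rng.normal(0, 0.1, len(t)))

# quadratic response surface fitted by ordinary least squares
X = np.column_stack([np.ones_like(t), t, h, t**2, h**2, t * h])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef.round(2))  # intercept, linear, quadratic and interaction terms
```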

Relevance:

100.00%

Abstract:

The ecological validity of static, high-intensity facial expressions in emotion recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible with the natural conditions of social interaction, which involve motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented at four emotional intensities (25%, 50%, 75% and 100%). Twenty volunteers (9 women and 11 men), aged between 19 and 31 years, took part in the study. The experiment consisted of two sessions in which participants had to identify the emotion of static (photographs) and dynamic (videos) displays of facial expressions on a computer screen. Mean accuracy was submitted to a repeated-measures ANOVA with the model 2 sexes x [2 conditions x 4 expressions x 4 intensities]. We observed an advantage for the recognition of dynamic expressions of happiness and fear compared to the static stimuli (p < .05). Analysis of interactions showed that expressions at 25% intensity were better recognized in the dynamic condition (p < .05). The addition of motion improved recognition especially in male participants (p < .05). We concluded that the effect of motion varies as a function of the type of emotion, the intensity of the expression and the sex of the participant. These results support the hypothesis that dynamic stimuli have greater ecological validity and are more appropriate for research on emotions.
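A minimal sketch of the repeated-measures design described above, using simulated (not the study's) data. The within-subject factors condition, expression and intensity are handled by statsmodels' AnovaRM; the between-subject sex factor is omitted here because AnovaRM does not implement between-subject factors, so the full mixed design would need other tooling.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = ["static", "dynamic"]
expressions = ["happiness", "fear", "anger", "sadness"]
intensities = [25, 50, 75, 100]

rows = []
for s in range(20):                       # 20 participants, one value per cell
    for c in conditions:
        for e in expressions:
            for i in intensities:
                # hypothetical accuracy: rises with intensity, small dynamic advantage
                acc = (0.3 + 0.005 * i + (0.05 if c == "dynamic" else 0.0)
                       + rng.normal(0, 0.05))
                rows.append({"subject": s, "condition": c, "expression": e,
                             "intensity": i, "accuracy": acc})
df = pd.DataFrame(rows)

aov = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["condition", "expression", "intensity"]).fit()
print(aov)
```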

Relevance:

100.00%

Abstract:

There are now considerable expectations that semi-distributed models are useful tools for supporting catchment water quality management. However, insufficient attention has been given to evaluating the uncertainties inherent to this type of model, especially those associated with the spatial disaggregation of the catchment. The Integrated Nitrogen in Catchments model (INCA) is subjected to an extensive regionalised sensitivity analysis in application to the River Kennet, part of the groundwater-dominated upper Thames catchment, UK. The main results are: (1) model output was generally insensitive to land-phase parameters, very sensitive to groundwater parameters, including initial conditions, and significantly sensitive to in-river parameters; (2) INCA was able to produce good fits simultaneously to the available flow, nitrate and ammonium in-river data sets; (3) representing parameters as heterogeneous over the catchment (206 calibrated parameters) rather than homogeneous (24 calibrated parameters) produced a significant improvement in fit to nitrate but no significant improvement to flow, and caused a deterioration in ammonium performance; (4) the analysis indicated that calibrating the flow-related parameters first and then calibrating the remaining parameters (as opposed to calibrating all parameters together) was not a sensible strategy in this case; (5) even the parameters to which the model output was most sensitive suffered from high uncertainty due to spatial inconsistencies in the estimated optimum values, parameter equifinality and the sampling error associated with the calibration method; (6) soil and groundwater nutrient and flow data are needed to reduce uncertainty in initial conditions, residence times and nitrogen transformation parameters, and long-term historic data are needed so that key responses to changes in land-use management can be assimilated. The results indicate the general difficulty of reconciling the questions which catchment nutrient models are expected to answer with typically limited data sets and limited knowledge about suitable model structures. The results demonstrate the importance of analysing semi-distributed model uncertainties prior to model application, and illustrate the value and limitations of using Monte Carlo-based methods for doing so. (c) 2005 Elsevier B.V. All rights reserved.
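A minimal sketch of the regionalised (Monte Carlo) sensitivity analysis idea used here, in the Hornberger-Spear style: sample parameters from priors, classify runs as behavioural or not, and compare the two parameter distributions. The toy_model function, the parameter names and the acceptance rule are illustrative stand-ins, not INCA.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
n_runs, n_params = 5000, 3
theta = rng.uniform(0.0, 1.0, size=(n_runs, n_params))  # uniform priors

def toy_model(p):
    # output strongly sensitive to p[1] ("groundwater"), weakly to p[0]
    return 2.0 * p[1] + 0.1 * p[0] + 0.5 * p[2] + rng.normal(0, 0.05)

objective = np.array([toy_model(p) for p in theta])
behavioural = objective > np.median(objective)  # illustrative acceptance rule

# A large KS distance between behavioural and non-behavioural parameter
# samples flags a sensitive parameter.
for j, name in enumerate(["land_phase", "groundwater", "in_river"]):
    d, pval = ks_2samp(theta[behavioural, j], theta[~behavioural, j])
    print(f"{name}: KS distance = {d:.3f}, p = {pval:.2e}")
```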

Relevance:

100.00%

Abstract:

The purpose of this study was to improve the prediction of the quantity and type of volatile fatty acids (VFA) produced from fermented substrate in the rumen of lactating cows. A model was formulated that describes the conversion of substrate (soluble carbohydrates, starch, hemi-cellulose, cellulose, and protein) into VFA (acetate, propionate, butyrate, and other VFA). Inputs to the model were observed rates of true rumen digestion of substrates, whereas outputs were observed molar proportions of VFA in rumen fluid. A literature survey generated data on 182 diets (96 roughage and 86 concentrate diets). Coefficient values that define the conversion of a specific substrate into VFA were estimated meta-analytically by regression of the model against observed VFA molar proportions using non-linear regression techniques. Coefficient estimates differed significantly between types of substrate and between roughage and concentrate diets, for acetate and propionate production in particular. Deviations of fitted from observed VFA molar proportions could be attributed entirely to random error. In addition to regression against observed data, simulation studies were performed to investigate the potential of the estimation method. Fitted coefficient estimates from simulated data sets appeared accurate, as did fitted rates of VFA production, although the model accounted for only a small fraction (at most 45%) of the variation in VFA molar proportions. The simulation results showed that the latter result was merely a consequence of the statistical analysis chosen and should not be interpreted as an indication of inaccuracy of the coefficient estimates. Deviations between fitted and observed values corresponded to those obtained in simulations. (c) 2005 Elsevier Ltd. All rights reserved.
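A minimal sketch of the estimation approach described above, with invented numbers: stoichiometric coefficients mapping substrate digestion rates to VFA molar proportions are recovered by non-linear least squares. The three substrate classes, the "true" coefficient matrix and the noise level are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n_diets = 50
# hypothetical observed digestion rates for 3 substrate classes (mol/day)
substrate = rng.uniform(1.0, 5.0, size=(n_diets, 3))

def predict_proportions(flat_coefs, substrate):
    # coefs[i, j]: mol of VFA j produced per mol of substrate i fermented
    c = flat_coefs.reshape(3, 4)
    vfa = substrate @ c                          # total VFA production per diet
    return vfa / vfa.sum(axis=1, keepdims=True)  # molar proportions

true = np.array([[0.6, 0.2, 0.15, 0.05],
                 [0.5, 0.3, 0.15, 0.05],
                 [0.7, 0.1, 0.15, 0.05]])
observed = (predict_proportions(true.ravel(), substrate)
            + rng.normal(0, 0.01, (n_diets, 4)))  # small random error

def residuals(flat_coefs):
    return (predict_proportions(flat_coefs, substrate) - observed).ravel()

fit = least_squares(residuals, x0=np.full(12, 0.25), bounds=(0.0, 1.0))
print(fit.x.reshape(3, 4).round(3))  # recovered coefficient estimates
```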

Relevance:

100.00%

Abstract:

Grass-based diets are of increasing socio-economic importance in dairy cattle farming, but their low supply of glucogenic nutrients may limit the production of milk. Current evaluation systems that assess the energy supply and requirements are based on metabolisable energy (ME) or net energy (NE). These systems do not consider the characteristics of the energy-delivering nutrients. In contrast, mechanistic models take into account the site of digestion, the type of nutrient absorbed and the type of nutrient required for production of milk constituents, and may therefore give a better prediction of supply and requirement of nutrients. The objective of the present study is to compare the ability of three energy evaluation systems, viz. the Dutch NE system, the Agricultural and Food Research Council (AFRC) ME system, and the Feed into Milk (FIM) ME system, and of a mechanistic model based on Dijkstra et al. [Simulation of digestion in cattle fed sugar cane: prediction of nutrient supply for milk production with locally available supplements. J. Agric. Sci., Cambridge 127, 247-60] and Mills et al. [A mechanistic model of whole-tract digestion and methanogenesis in the lactating dairy cow: model development, evaluation and application. J. Anim. Sci. 79, 1584-97] to predict the feed value of grass-based diets for milk production. The dataset for evaluation consists of 41 treatments of grass-based diets (at least 0.75 g ryegrass/g diet on dry matter (DM) basis). For each model, the predicted energy or nutrient supply, based on observed intake, was compared with the predicted requirement based on observed performance. Assessment of the error of energy or nutrient supply relative to requirement is made by calculation of the mean square prediction error (MSPE) and the concordance correlation coefficient (CCC). All energy evaluation systems predicted energy requirement to be lower (6-11%) than energy supply. The root MSPE (expressed as a proportion of the supply) was lowest for the mechanistic model (0.061), followed by the Dutch NE system (0.082), the FIM ME system (0.097) and the AFRC ME system (0.118). For the energy evaluation systems, the error due to overall bias of prediction dominated the MSPE, whereas for the mechanistic model, proportionally 0.76 of the MSPE was due to random variation. CCC analysis confirmed the higher accuracy and precision of the mechanistic model compared with the energy evaluation systems. The error of prediction was positively related to grass protein content for the Dutch NE system, and was also positively related to grass dry matter intake level for all models. In conclusion, current energy evaluation systems overestimate energy supply relative to energy requirement on grass-based diets for dairy cattle. The mechanistic model predicted glucogenic nutrients to limit performance of dairy cattle on grass-based diets, and proved to be more accurate and precise than the energy systems. The mechanistic model could be improved by allowing the glucose maintenance and utilization requirement parameters to be variable. (C) 2007 Elsevier B.V. All rights reserved.
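The two evaluation statistics used above, root MSPE as a proportion of the supply and Lin's concordance correlation coefficient, can be written down directly. The sketch below applies them to invented supply/requirement values with roughly the reported degree of overestimation; the numbers are not the study's data.

```python
import numpy as np

def rmspe_proportion(supply, requirement):
    # root mean square prediction error, as a proportion of mean supply
    mspe = np.mean((supply - requirement) ** 2)
    return np.sqrt(mspe) / np.mean(supply)

def concordance_cc(x, y):
    # Lin's concordance correlation coefficient
    sx, sy = x.var(), y.var()
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (sx + sy + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(7)
supply = rng.normal(100.0, 10.0, 41)                 # predicted energy supply
requirement = 0.92 * supply + rng.normal(0, 4, 41)   # supply ~8% above requirement
print(rmspe_proportion(supply, requirement), concordance_cc(supply, requirement))
```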

Relevance:

100.00%

Abstract:

In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed “overrunning.” Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate. The effect on power resulting from the incorporation of “overrunning data” using the two procedures is evaluated.
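A toy simulation, not the article's two procedures, showing how the type I error rate of a naive treatment of overrunning can be assessed: data accrued after the stopping criterion has been met are simply pooled with the interim data and the test repeated. Sample sizes and the single-look boundary are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_interim, n_overrun, n_sims = 100, 20, 20000
boundary = norm.ppf(0.975)  # nominal two-sided 5% at the interim look
rejections = 0

for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, n_interim)      # null: no treatment effect
    z_interim = x.mean() * np.sqrt(n_interim)
    if abs(z_interim) > boundary:            # trial stops, but accrual continues
        x_all = np.concatenate([x, rng.normal(0.0, 1.0, n_overrun)])
        z_final = x_all.mean() * np.sqrt(n_interim + n_overrun)
        rejections += abs(z_final) > boundary  # naive pooled reanalysis

print("empirical type I error:", rejections / n_sims)
```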

Relevance:

100.00%

Abstract:

Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that the asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
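As a small illustration of the group-sequential idea, the sketch below simulates a two-look design with a Pocock boundary (critical value approximately 2.178 for two equally spaced looks at an overall two-sided 5% level) and checks empirically that the overall type I error rate is controlled under the null. Sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)
n_per_look, n_sims, c_pocock = 50, 50000, 2.178
rejected = 0

for _ in range(n_sims):
    x1 = rng.normal(0.0, 1.0, n_per_look)        # null hypothesis true
    z1 = x1.mean() * np.sqrt(n_per_look)
    if abs(z1) > c_pocock:                       # stop at first look
        rejected += 1
        continue
    x2 = rng.normal(0.0, 1.0, n_per_look)        # second group of patients
    z2 = (x1.sum() + x2.sum()) / np.sqrt(2 * n_per_look)
    rejected += abs(z2) > c_pocock               # final analysis

print("empirical overall type I error:", rejected / n_sims)  # close to 0.05
```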

Relevance:

100.00%

Abstract:

This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright (C) 2007 John Wiley & Sons, Ltd.
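Of the three tests, the Wald test is the simplest to write down; a minimal sketch for the non-inferiority setting follows, with invented counts and margin. The likelihood ratio and score tests would replace the test statistic but follow the same overall pattern.

```python
import numpy as np
from scipy.stats import norm

def wald_noninferiority(x1, n1, x0, n0, or_margin):
    """Wald test of H0: OR <= or_margin vs H1: OR > or_margin,
    where OR compares experimental (x1/n1) with active control (x0/n0)."""
    log_or = np.log((x1 * (n0 - x0)) / (x0 * (n1 - x1)))
    # standard error of the log-odds ratio from the 2x2 cell counts
    se = np.sqrt(1/x1 + 1/(n1 - x1) + 1/x0 + 1/(n0 - x0))
    z = (log_or - np.log(or_margin)) / se
    return z, 1 - norm.cdf(z)   # one-sided p-value

# hypothetical trial: 160/200 responders vs 165/200, margin OR = 0.5
z, p = wald_noninferiority(x1=160, n1=200, x0=165, n0=200, or_margin=0.5)
print(f"z = {z:.3f}, one-sided p = {p:.4f}")
```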

Relevance:

100.00%

Abstract:

A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
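The latitude dependence of the decorrelation scale quoted above translates directly into code; a minimal sketch, assuming a simple linear interpolation in absolute latitude between the two quoted endpoints:

```python
def decorrelation_scale_km(latitude_deg: float) -> float:
    """Vertical decorrelation scale for exponential-random cloud overlap,
    varying linearly from 2.9 km at the Equator to 0.4 km at the poles."""
    frac = abs(latitude_deg) / 90.0   # 0 at the Equator, 1 at the poles
    return 2.9 + (0.4 - 2.9) * frac

for lat in (0, 30, 60, 90):
    print(lat, round(decorrelation_scale_km(lat), 2))
```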

Relevance:

100.00%

Abstract:

Little has so far been reported on the performance of near-far resistant CDMA detectors in the presence of synchronization errors. Starting from a general mathematical model of matched filters, this paper examines the effects of three classes of synchronization errors (time-delay errors, carrier phase errors, and carrier frequency errors) on the performance (bit error rate and near-far resistance) of an emerging type of near-far resistant coherent DS/SSMA detector, the linear decorrelating detector (LDD). For comparison, the corresponding results for the conventional detector are also presented. It is shown that the LDD maintains a considerable performance advantage over the conventional detector even when some synchronization errors exist. Finally, several computer simulations are carried out to verify the theoretical conclusions.
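A minimal sketch of the LDD for a synchronous CDMA link, assuming perfect synchronization (the unperturbed baseline; the paper's interest is in what happens when synchronization errors are introduced). The user count, spreading codes and amplitudes are invented, with the near-far situation created by giving the interferers much larger amplitudes.

```python
import numpy as np

rng = np.random.default_rng(5)
K, N = 3, 16                                   # users, spreading-gain chips
codes = rng.choice([-1.0, 1.0], size=(K, N)) / np.sqrt(N)  # unit-energy codes
R = codes @ codes.T                            # code cross-correlation matrix
bits = rng.choice([-1.0, 1.0], size=K)
amps = np.array([1.0, 5.0, 5.0])               # user 0 faces strong interferers

received = (amps * bits) @ codes + rng.normal(0, 0.3, N)
y = codes @ received                           # matched-filter bank outputs

conventional = np.sign(y)                      # suffers from the near-far problem
ldd = np.sign(np.linalg.solve(R, y))           # decorrelator removes MAI
print("true bits   :", bits)
print("conventional:", conventional)
print("LDD         :", ldd)
```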

Relevance:

100.00%

Abstract:

A dynamic, mechanistic model of enteric fermentation was used to investigate the effect of type and quality of grass forage, dry matter intake (DMI) and proportion of concentrates in dietary dry matter (DM) on variation in methane (CH4) emission from enteric fermentation in dairy cows. The model represents substrate degradation and microbial fermentation processes in rumen and hindgut and, in particular, the effects of type of substrate fermented and of pH on the production of individual volatile fatty acids and CH4 as end-products of fermentation. Effects of type and quality of fresh and ensiled grass were evaluated by distinguishing two N fertilization rates of grassland and two stages of grass maturity. Simulation results indicated a strong impact of the amount and type of grass consumed on CH4 emission, with a maximum difference (across all forage types and all levels of DMI) of 49 and 77% in g CH4/kg fat and protein corrected milk (FCM) for diets with a proportion of concentrates in dietary DM of 0.1 and 0.4, respectively (values ranging from 10.2 to 19.5 g CH4/kg FCM). The lowest emission was established for early cut, high-fertilized grass silage (GS) and high-fertilized grass herbage (GH). The highest emission was found for late cut, low-fertilized GS. The N fertilization rate had the largest impact, followed by stage of grass maturity at harvesting and by the distinction between GH and GS. Emission expressed in g CH4/kg FCM declined on average by 14% with an increase of DMI from 14 to 18 kg/day for grass forage diets with a proportion of concentrates of 0.1, and on average by 29% with an increase of DMI from 14 to 23 kg/day for diets with a proportion of concentrates of 0.4. Simulation results indicated that a high proportion of concentrates in dietary DM may lead to a further reduction of CH4 emission per kg FCM, mainly as a result of a higher DMI and milk yield, in comparison to low concentrate diets. Simulation results were evaluated against independent data obtained at three different laboratories in indirect calorimetry trials with cows consuming mainly GH. The model predicted the average of observed values reasonably, but systematic deviations remained between individual laboratories, and root mean squared prediction error was a proportion of 0.12 of the observed mean. Both observed and predicted emission expressed in g CH4/kg DM intake decreased upon an increase in dietary N:organic matter (OM) ratio. The model reproduced reasonably well the variation in measured CH4 emission in cattle sheds on Dutch dairy farms and indicated that on average a fraction of 0.28 of the total emissions must have originated from manure under these circumstances.

Relevance:

100.00%

Abstract:

This study reports on an investigation into adult and child interactions observed in the outdoor play environment in four Local Authority early years foundation stage settings in England. In this instance the two features common across the settings were the presence of tricycles and a timetabled outdoor play period. In total, across the four schools, there were 204 children. The study aimed to gain an understanding of the nature of the dialogues between staff and children, that is, the types of exchange that occurred when either the child approached an adult or the adult approached a child. The most frequent type of utterance was also analysed. The study concludes that adults in these settings spoke more than children and that the most frequent type of utterance was adult talk about domestic matters. When the child initiated the conversation there were more extended child utterances than domestic utterances. This may suggest that children wish to be involved in conversations of depth and meaning and that staff need to become aware of how to develop this conversational language with children.

Relevance:

100.00%

Abstract:

The relationship between valuations and the subsequent sale price continues to be a matter of both theoretical and practical interest. This paper reports the analysis of over 700 property sales made during the 1974/90 period. Initial results imply an average under-valuation of 7% and a standard error of 18% across the sample. A number of techniques are applied to the data set, using other variables such as the region, the type of property and the return from the market, to explain the difference between the valuation and the subsequent sale price. The analysis reduces the unexplained error; the bias is fully accounted for and the standard error is reduced to 15.3%. The model finds that about 6% of valuations over-estimated the sale price by more than 20% and about 9% under-estimated the sale price by more than 20%. The results suggest that valuations are marginally more accurate than might be expected, both from theoretical considerations and from comparison with equivalent valuations in equity markets.
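A minimal sketch (with invented lognormal prices and a normal relative error) of the headline statistics: the mean valuation error, its dispersion, and the share of valuations missing the sale price by more than 20% in either direction.

```python
import numpy as np

rng = np.random.default_rng(9)
sale_price = rng.lognormal(mean=12.0, sigma=0.5, size=700)
# hypothetical valuations: 7% average under-valuation, 18% dispersion
valuation = sale_price * (1 + rng.normal(-0.07, 0.18, size=700))

error = (valuation - sale_price) / sale_price   # relative valuation error
print("mean error        :", round(error.mean(), 3))   # about -0.07
print("standard deviation:", round(error.std(), 3))    # about 0.18
print("over by > 20%     :", round(np.mean(error > 0.20), 3))
print("under by > 20%    :", round(np.mean(error < -0.20), 3))
```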

Relevance:

100.00%

Abstract:

An obese-type human microbiota with an increased Firmicutes:Bacteroidetes ratio has been described that may link the gut microbiome with obesity and metabolic syndrome (MetS) development. Dietary fat and carbohydrate are modifiable risk factors that may impact on MetS by altering the human microbiome composition. We determined the effect of the amount and type of dietary fat and carbohydrate on faecal bacteria and short chain fatty acid (SCFA) concentrations in people ‘at risk’ of MetS.

Relevance:

100.00%

Abstract:

High-resolution ensemble simulations (Δx = 1 km) are performed with the Met Office Unified Model for the Boscastle (Cornwall, UK) flash-flooding event of 16 August 2004. Forecast uncertainties arising from imperfections in the forecast model are analysed by comparing the simulation results produced by two types of perturbation strategy. Motivated by the meteorology of the event, one type of perturbation alters relevant physics choices or parameter settings in the model's parametrization schemes. The other type of perturbation is designed to account for representativity error in the boundary-layer parametrization. It makes direct changes to the model state and provides a lower bound against which to judge the spread produced by other uncertainties. The Boscastle simulations have genuine skill at scales of approximately 60 km and an ensemble spread which can be estimated to within ∼ 10% with only eight members. Differences between the model-state perturbation and physics modification strategies are discussed, the former being more important for triggering and the latter for subsequent cell development, including the average internal structure of convective cells. Despite such differences, the spread in rainfall evaluated at skilful scales is shown to be only weakly sensitive to the perturbation strategy. This suggests that relatively simple strategies for treating model uncertainty may be sufficient for practical, convective-scale ensemble forecasting.