887 results for Analysis and statistical methods


Relevance: 100.00%

Publisher:

Abstract:

The objective of the present work was to evaluate the nutritional composition of mushrooms produced on alternative substrates formulated from agricultural and agro-industrial residues from the Amazon. C, N, pH, moisture, soluble solids, protein, lipids, total fibers, ashes, carbohydrates and energy were determined. Substrates were formulated from Simarouba amara Aubl. and Ochroma pyramidale Cav. ex Lam. sawdust and from Bactris gasipaes Kunth and Saccharum officinarum stipe. Results showed that the nutritional composition of Pleurotus ostreatus varied according to the cultivation substrate and that the mushroom can be considered an important food owing to its nutritional characteristics: high contents of protein, metabolizable carbohydrates and fiber, and low contents of lipids and calories.

Relevance: 100.00%

Publisher:

Abstract:

In this work, thermal analysis techniques were used to investigate the thermal stabilities of several antioxidants, in order to evaluate their resistance to thermal oxidation in heated canola oil and to suggest which antioxidants would be most appropriate for increasing the resistance of vegetable oils to thermal degradation during frying. The techniques used were thermogravimetry (TG) and differential scanning calorimetry (DSC), complemented by a laboratory assessment of the possible protective action of the antioxidants, based on the thermal oxidation of canola oil heated at 180 ºC for 8 hours a day over 10 days. The antioxidants studied were ascorbic acid, sorbic acid, citric acid, sodium erythorbate, BHT (3,5-di-tert-butyl-4-hydroxytoluene), BHA (2(3)-tert-butyl-4-methoxyphenol), TBHQ (tert-butylhydroquinone) and PG (propyl gallate), all recognized as antioxidants by ANVISA and the FDA, together with phytic acid and the food-industry additive SAIB (sucrose acetate isobutyrate), which was tested for antioxidant behaviour in vegetable oil. Citric acid, sodium erythorbate, BHA, BHT, TBHQ and sorbic acid decompose at temperatures below 180 ºC and therefore offer little protection to vegetable oils undergoing frying. Phytic acid, ascorbic acid and PG are more resistant, beginning to decompose between 180 and 200 ºC. The thermoanalytical techniques also showed that SAIB is the most resistant to oxidative action, and it can be a useful choice for preventing the thermal decomposition of edible oils, improving their stability towards oxidative processes.

Relevance: 100.00%

Publisher:

Abstract:

This study aimed to verify hygienic-sanitary working practices and to create and implement a Hazard Analysis and Critical Control Point (HACCP) plan in two lobster processing industries in Pernambuco State, Brazil. The industries studied process frozen whole lobsters, frozen whole cooked lobsters, and frozen lobster tails for export. Application of the hygienic-sanitary checklist in the industries analyzed achieved conformity rates above 96% for the aspects evaluated. The HACCP plan identified two critical control points (CCPs), the receiving and classification steps, in the processing of frozen lobster and frozen lobster tails, and an additional CCP, the cooking step, in the processing of whole frozen cooked lobster. Proper implementation of the HACCP plan in the lobster processing industries studied proved to be the safest and most cost-effective way to monitor the hazards at each CCP.

Relevance: 100.00%

Publisher:

Abstract:

Finnish design and consulting companies deliver robust and cost-efficient steel structure solutions to a large number of manufacturing companies worldwide. The recently introduced EN 1090-2 standard obliges these companies to specify the execution class of steel structures for their customers. This, however, requires clarifying, understanding and interpreting the sophisticated procedure of execution class assignment. The objective of this research is to provide a clear explanation of, and guidance through, the process of assigning an execution class to a given steel structure, and to support the implementation of the EN 1090-2 standard at Rejlers Oy, a Finnish design and consulting company. This objective is accomplished by creating a guideline for designers that elaborates the four-step process of execution class assignment for a steel structure or a part of it. Steps one to three define the consequence class (the projected consequences of structural failure), the service category (the hazards associated with the service use of the structure) and the production category (the peculiarities of the manufacturing process), drawing on the ductility class (the capacity of the structure to withstand deformations) and the behaviour factor (which corresponds to the seismic behaviour of the structure). The final step assigns the execution class, taking into account the results of the previous steps. The main research method is an in-depth literature review of the European family of standards for steel structures. A complementary approach is a series of interviews with representatives of Rejlers Oy and its clients, the results of which were used to evaluate the level of EN 1090-2 awareness. Rejlers Oy will use the guideline developed here to improve its services and to achieve greater customer satisfaction.
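The four-step procedure described above amounts to a lookup keyed on the three categories. The sketch below shows only the shape of such a lookup; the class values in the dictionary are placeholders for illustration, not the normative assignments, which must be taken from Annex B of EN 1090-2 itself.

```python
# Illustrative sketch of execution class assignment. The mapping values
# below are PLACEHOLDERS, not the normative Table of EN 1090-2 Annex B.

def execution_class(consequence: str, service: str, production: str) -> str:
    """Look up an execution class (EXC1..EXC4) from the three categories.

    consequence: 'CC1'..'CC3' (projected consequences of structural failure)
    service:     'SC1'/'SC2'  (hazards associated with service use)
    production:  'PC1'/'PC2'  (peculiarities of the manufacturing process)
    """
    table = {
        ('CC1', 'SC1', 'PC1'): 'EXC1',   # placeholder value
        ('CC1', 'SC1', 'PC2'): 'EXC2',   # placeholder value
        ('CC3', 'SC2', 'PC2'): 'EXC3',   # placeholder value
        # ... remaining cells to be filled in from the standard ...
    }
    try:
        return table[(consequence, service, production)]
    except KeyError:
        raise ValueError(f'combination not covered: {consequence}/{service}/{production}')

print(execution_class('CC1', 'SC1', 'PC1'))
```

In a real guideline implementation the dictionary would be populated cell by cell from the standard, so that a designer answering the three category questions gets the execution class mechanically.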

Relevance: 100.00%

Publisher:

Abstract:

This research concerns the Urban Living Idea Contest conducted by Creator Space™ of BASF SE during the company's 150th anniversary in 2015. The main objectives of the thesis are to provide a comprehensive analysis of the Urban Living Idea Contest (ULIC) and to propose a number of improvement suggestions for future years. More than 4,000 data points were collected and analyzed to investigate how the different elements of the contest functioned, and a set of improvement suggestions was proposed to BASF SE. The novelty of this thesis lies in the data collection and the original analysis of the contest, which identified its critical elements as well as the areas that could be improved. The author of this research was a member of the organizing team and was involved in the decision-making process from the beginning to the end of the ULIC.

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this work was to describe and compare sourcing practices and challenges in different geographies, to discuss possible options for advancing the sustainability of global sourcing, and to provide examples explaining why sourcing driven by sustainability principles is so challenging to implement. The focus was on a comparison between Europe, Asia and South America from the perspective of sustainability adoption. Analyzing the sourcing practices of the case company made it possible to describe the main differences and challenges of each continent, the available sourcing options, supplier relationships and ways to foster positive change. In this qualitative case study, the theoretical material gathered was compared with the case company's extensive sourcing practices across a vast supplier network. Sourcing specialists were interviewed, and the information they provided was analyzed to see how different research results and theories reflect reality and to find answers to the proposed research questions.

Relevance: 100.00%

Publisher:

Abstract:

Research indicates that Obsessive-Compulsive Disorder (OCD; DSM-IV-TR, American Psychiatric Association, 2000) is the second most frequent disorder to coincide with Autism Spectrum Disorder (ASD; Leyfer et al., 2006). Excessive collecting and hoarding are also frequently reported in children with ASD (Bejerot, 2007). Although functional analysis (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) has successfully identified maintaining variables for repetitive behaviours such as bizarre vocalizations (e.g., Wilder, Masuda, O'Connor, & Baham, 2001), tics (e.g., Scotti, Schulman, & Hojnacki, 1994), and habit disorders (e.g., Woods & Miltenberger, 1996), the extant literature on OCD and functional analysis methodology is scarce (May et al., 2008). The current studies used functional analysis methodology to identify the operant functions associated with the OCD-related hoarding behaviour of a child with ASD and examined the efficacy of a function-based intervention. Results supported hypotheses of automatic and socially mediated positive reinforcement. A corresponding function-based treatment plan incorporated antecedent strategies and differential reinforcement (Deitz, 1977; Lindberg, Iwata, Kahng, & DeLeon, 1999; Reynolds, 1961). Reductions in problem behaviour were evidenced through a multiple-baseline-across-behaviours design and were maintained at two-month follow-up. Decreases in symptom severity were also discerned through subjective measures of treatment effectiveness.

Relevance: 100.00%

Publisher:

Abstract:

This work investigates mathematical details and computational aspects of Metropolis-Hastings reptation quantum Monte Carlo and its variants, together with the Bounce method and its variants. The issues of concern include the sensitivity of each algorithm's target density to the position of the trial electron density along the reptile, the time-reversal symmetry of the propagators, and the length of the reptile. We calculate the ground-state energy and one-electron properties of LiH at its equilibrium geometry with all of these algorithms. Importance sampling is performed with a single-determinant wavefunction in a large Slater-type orbital (STO) basis set. The computer codes were written to exploit the efficiencies engineered into modern high-performance computing software. Using the Bounce method to calculate non-energy-related properties, those represented by operators that do not commute with the Hamiltonian, is novel. We found that the unmodified Bounce method gives a good ground-state energy and very good one-electron properties, which we attribute to the favourable time-reversal symmetry of the Green's functions in its target density. Breaking this symmetry gives poorer results. Using a short reptile in the Bounce method does not alter the quality of the results, which suggests that in future applications a shorter reptile can be used to cut the computational time dramatically.
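All of the algorithms compared above build on the Metropolis-Hastings accept/reject step. As a minimal sketch of that step alone (a one-dimensional toy Gaussian trial density, not the LiH reptation calculations of the work):

```python
import math, random

def metropolis_sample(psi2, x0=0.0, step=1.0, n=50_000, seed=1):
    """Random-walk Metropolis-Hastings sampling of an unnormalized density psi2."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        xp = x + rng.uniform(-step, step)             # symmetric proposal
        if rng.random() < min(1.0, psi2(xp) / psi2(x)):
            x = xp                                     # accept the move
        samples.append(x)                              # rejected moves repeat x
    return samples

# Toy trial density |psi(x)|^2 = exp(-x^2), i.e. a Gaussian with variance 1/2
samples = metropolis_sample(lambda x: math.exp(-x * x))
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))   # close to 0.0 and 0.5
```

Reptation methods apply this same acceptance rule to moves of an entire path (the "reptile") rather than a single point, which is where the target-density and time-reversal-symmetry issues discussed above arise.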

Relevance: 100.00%

Publisher:

Abstract:

The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. The early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms after stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating the stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a, when measured as the average robustness of single-subject differential activations. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast, starting before 100 ms following stimulus onset.
Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses than for faces, partially cancelling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent of the ICs that constitute the N170 effect, which suggests that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that contain spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a constraint that is not always desirable for a topic so closely coupled to ecological validity. Third, unmixing the constituent processes of the EEG signals makes new analysis strategies available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: is the face effect a special relationship between low-level and high-level processes along the visual stream?
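The unmixing step at the heart of this approach can be sketched numerically. Below is a minimal, self-contained FastICA-style decomposition of two synthetic "sources" mixed at two "channels"; the signals, mixing matrix, and iteration counts are invented for illustration and are not the study's EEG data or ICA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000, endpoint=False)

# Two toy "cortical" sources, linearly mixed at two "electrodes" -- a
# stand-in for spatio-temporally overlapping scalp ERP components.
S = np.vstack([np.sin(2 * np.pi * 7 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                     # unknown mixing matrix
X = A @ S                                      # observed channel data

# Whiten the channels (zero mean, identity covariance)
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = (E / np.sqrt(d)) @ E.T @ X

# FastICA fixed-point iterations (tanh nonlinearity) with deflation
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):
        wx = w @ Xw
        g, gp = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (Xw * g).mean(axis=1) - gp.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)     # keep orthogonal to earlier ICs
        w = w_new / np.linalg.norm(w_new)
    W[i] = w

recovered = W @ Xw                             # estimated independent components
```

Each recovered component correlates (up to sign and scale, the usual ICA ambiguities) with one of the original sources, which is the property that lets overlapping scalp effects such as icP1a and icP1b be examined separately.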

Relevance: 100.00%

Publisher:

Abstract:

This paper studies seemingly unrelated linear models with integrated regressors and stationary errors. By adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by feasible generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. Simulation results suggest that this new estimator compares favorably with others already proposed in the literature. We apply these new estimators to the testing of purchasing power parity (PPP) among the G-7 countries. The test based on the efficient estimates rejects the PPP hypothesis for most countries.
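The leads-and-lags augmentation described above can be illustrated with a small simulation. The sketch below uses plain OLS on the augmented regression rather than the paper's feasible GLS with a long-run covariance matrix, and all simulation settings are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 500, 2                              # sample size; number of leads/lags

# Cointegrated pair: x is a random walk and y = 2*x + u, with a stationary
# error u correlated with the innovations of x (regressor endogeneity).
e = rng.standard_normal(n)
x = np.cumsum(e)
u = 0.5 * e + rng.standard_normal(n)
y = 2.0 * x + u

# Augmented regression: y_t on x_t plus leads and lags of the first
# difference of x, which absorb the correlation between u and the
# innovations of the integrated regressor.
dx = np.diff(x)                            # dx[t] = x[t+1] - x[t]
rows = list(range(k, n - 1 - k))
Z = np.column_stack(
    [x[rows]] + [dx[[t + j for t in rows]] for j in range(-k, k + 1)]
)
beta, *_ = np.linalg.lstsq(Z, y[rows], rcond=None)
print(round(beta[0], 2))                   # close to the true coefficient 2.0
```

The first element of `beta` is the estimated cointegrating coefficient; the remaining elements are the nuisance coefficients on the leads and lags of the differenced regressor.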

Relevance: 100.00%

Publisher:

Abstract:

In the context of multivariate linear regression (MLR) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose a general method for constructing exact tests of possibly nonlinear hypotheses on the coefficients of MLR systems. For the case of uniform linear hypotheses, we present exact distributional invariance results concerning several standard test criteria. These include Wilks' likelihood ratio (LR) criterion as well as trace and maximum root criteria. The normality assumption is not necessary for most of the results to hold. Implications for inference are two-fold. First, invariance to nuisance parameters entails that the technique of Monte Carlo tests can be applied on all these statistics to obtain exact tests of uniform linear hypotheses. Second, the invariance property of the latter statistic is exploited to derive general nuisance-parameter-free bounds on the distribution of the LR statistic for arbitrary hypotheses. Even though it may be difficult to compute these bounds analytically, they can easily be simulated, hence yielding exact bounds Monte Carlo tests. Illustrative simulation experiments show that the bounds are sufficiently tight to provide conclusive results with a high probability. Our findings illustrate the value of the bounds as a tool to be used in conjunction with more traditional simulation-based test methods (e.g., the parametric bootstrap) which may be applied when the bounds are not conclusive.
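The technique of Monte Carlo tests that the paper exploits can be sketched in a few lines. The example below applies it to a toy pivotal statistic (the |t|-statistic under Gaussian errors), not to the paper's MLR criteria; the data and replication counts are invented for illustration.

```python
import random, statistics

def mc_pvalue(stat0, simulate_stat, n_rep=999, seed=0):
    """Monte Carlo p-value for a pivotal test statistic:
    p = (1 + #{S_i >= S_0}) / (n_rep + 1), which yields an exact test at
    level alpha whenever alpha * (n_rep + 1) is an integer."""
    rng = random.Random(seed)
    exceed = sum(simulate_stat(rng) >= stat0 for _ in range(n_rep))
    return (1 + exceed) / (n_rep + 1)

# Toy illustration: |t|-statistic for H0: mean = 0 with Gaussian errors.
# The statistic is pivotal (its null distribution does not depend on the
# unknown variance), which is what makes the p-value exact.
n = 20

def t_abs(sample):
    return abs(statistics.fmean(sample)) / (statistics.stdev(sample) / n ** 0.5)

data_rng = random.Random(7)
data = [0.3 + data_rng.gauss(0, 1) for _ in range(n)]
p = mc_pvalue(t_abs(data), lambda rng: t_abs([rng.gauss(0, 1) for _ in range(n)]))
print(p)
```

For bounds Monte Carlo tests the same machinery is applied to a bounding statistic: if the simulated p-value of the bound is below the level, the (possibly non-pivotal) test of interest rejects.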

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
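The two-sided autoregression obtained by conditioning can be checked numerically. The following sketch (simulation settings invented) conditions alternate observations of a Gaussian AR(1) on their neighbours; for this model the two-sided regression coefficient is rho / (1 + rho^2), and plain OLS recovers it:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.6, 100_000

# Stationary Gaussian AR(1): y_t = rho * y_{t-1} + e_t
y = np.empty(n)
y[0] = rng.standard_normal() / np.sqrt(1 - rho ** 2)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.standard_normal()

# Conditioning alternate observations on their two neighbours gives a
# two-sided autoregression,
#   E[y_t | y_{t-1}, y_{t+1}] = b * (y_{t-1} + y_{t+1}),  b = rho / (1 + rho^2),
# whose errors are conditionally independent, so OLS applies directly.
target = y[1:-1:2]                  # the conditioned observations
neigh = y[0:-2:2] + y[2::2]         # sums of their two neighbours
b_hat = (neigh @ target) / (neigh @ neigh)
print(round(b_hat, 3))              # close to 0.6 / 1.36
```

Intercalary independence is what makes this legitimate: given the odd-indexed observations, the even-indexed ones are mutually independent, so the transformed model satisfies the classical regression assumptions.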

Relevance: 100.00%

Publisher:

Abstract:

A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. A number of recent studies seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet these remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the Monte Carlo test procedure conveniently solves intractable null distribution problems, in particular those raised by the sup-type and combined test statistics and, when relevant, by unidentified nuisance parameters under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation.
The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
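A Monte Carlo homoskedasticity test of the kind studied above can be sketched with one of the listed statistics. The example below uses a Goldfeld-Quandt-type variance ratio with an invented design and error process; it illustrates only the exact-p-value mechanism, not the paper's full battery of procedures.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60

def gq_stat(resid):
    """Goldfeld-Quandt-type ratio: residual variance of the last third
    of the (sorted) sample over that of the first third."""
    m = len(resid) // 3
    return np.sum(resid[-m:] ** 2) / np.sum(resid[:m] ** 2)

def ols_resid(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Observations sorted by the regressor suspected of driving the variance
x = np.sort(rng.standard_normal(n))
X = np.column_stack([np.ones(n), x])

# Heteroskedastic data: the error standard deviation grows with x
y = 1.0 + 2.0 * x + np.exp(x) * rng.standard_normal(n)
s0 = gq_stat(ols_resid(X, y))

# Under H0 with Gaussian errors the statistic is location-scale invariant,
# hence pivotal given X: simulate it with standard normal errors alone.
n_rep = 999
sims = np.array([gq_stat(ols_resid(X, rng.standard_normal(n))) for _ in range(n_rep)])
p = (1 + np.sum(sims >= s0)) / (n_rep + 1)
print(p)
```

Because the null distribution is simulated rather than approximated, the resulting test has exactly the nominal size regardless of the sample size, which is the property the simulation experiments above confirm.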