954 results for Regression method


Relevance:

60.00%

Publisher:

Abstract:

Bertuzzi, R, Bueno, S, Pasqua, LA, Acquesta, FM, Batista, MB, Roschel, H, Kiss, MAPDM, Serrao, JC, Tricoli, V, and Ugrinowitsch, C. Bioenergetics and neuromuscular determinants of the time to exhaustion at velocity corresponding to V̇O2max in recreational long-distance runners. J Strength Cond Res 26(8): 2096-2102, 2012. The purpose of this study was to investigate the main bioenergetic and neuromuscular determinants of the time to exhaustion (T-lim) at the velocity corresponding to maximal oxygen uptake in recreational long-distance runners. Twenty runners performed the following tests on 5 different days: (a) maximal incremental treadmill test, (b) 2 submaximal tests to determine running economy and vertical stiffness, (c) exhaustive test to measure the T-lim, (d) maximum dynamic strength test, and (e) muscle power production test. Aerobic and anaerobic energy contributions during the T-lim test were also estimated. The stepwise multiple regression method selected 3 independent variables to explain T-lim variance. Total energy production explained 84.1% of the shared variance (p = 0.001), whereas peak oxygen uptake (V̇O2peak) measured during T-lim and lower limb muscle power ability accounted for an additional 10% of the shared variance (p = 0.014). These data suggest that total energy production, V̇O2peak, and lower limb muscle power ability are the main physiological and neuromuscular determinants of T-lim in recreational long-distance runners.
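
The abstract mentions a stepwise multiple regression that selected the predictors of T-lim. The sketch below is a minimal forward-selection illustration with simulated data; the predictor names and the entry criterion (smallest p-value below 0.05) are assumptions for illustration, not the study's actual variables or software.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 20  # twenty runners, as in the study (data here are simulated)

# Hypothetical candidate predictors of time to exhaustion (Tlim):
predictors = {
    "total_energy": rng.normal(0, 1, n),
    "vo2peak":      rng.normal(0, 1, n),
    "muscle_power": rng.normal(0, 1, n),
    "running_econ": rng.normal(0, 1, n),
}
tlim = (0.9 * predictors["total_energy"] + 0.4 * predictors["vo2peak"]
        + 0.3 * predictors["muscle_power"] + rng.normal(0, 0.3, n))

# Forward stepwise selection: add the candidate with the smallest p-value
# until no remaining candidate is significant at alpha = 0.05.
selected, remaining = [], list(predictors)
while remaining:
    pvals = {}
    for name in remaining:
        X = sm.add_constant(np.column_stack([predictors[k] for k in selected + [name]]))
        pvals[name] = sm.OLS(tlim, X).fit().pvalues[-1]
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:
        break
    selected.append(best); remaining.remove(best)

print("selected predictors:", selected)
```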

Relevance:

60.00%

Publisher:

Abstract:

In the work presented here, late blight of potato (Phytophthora infestans) and the Colorado potato beetle (Leptinotarsa decemlineata) were used as examples to investigate whether Geographic Information Systems (GIS) can be used to produce agricultural pest and disease forecasts for any potato field in Germany. To reach this goal, the input parameters (temperature and relative humidity) of the forecasting models for the two pests (SIMLEP1, SIMPHYT1, SIMPHYT3 and SIMBLIGHT1) were processed so that weather data were available for the whole of Germany. Before interpolation, Germany was regionalised into interpolation zones, creating natural regions within which the weather stations can be compared and evaluated. For this purpose, the soil-climate regions of SCHULZKE and KAULE (2000) were modified, adapted to the weather station network, and given 5 to 10 km wide buffer zones at the borders of the interpolation zones so that the weather stations could be used as often as possible. Multiple regression was chosen for interpolating the weather data because, compared with other methods, it showed the smallest deviations between interpolated and measured data and best met the technical requirements. For temperature, 99% of all values deviated by between -2.5 and 2.5 °C; for relative humidity, deviations between -12 and 10 percentage points were obtained. The mean deviations were 0.1 °C for temperature and -1.8% for relative humidity. To check the hit rates of the models when run with interpolated weather data, field survey data from 2000 to 2007 on the first occurrence of late blight and of the Colorado potato beetle were used. With interpolated weather data, the same or even higher hit rates were achieved than with the previous calculation method. For example, predictions of the first occurrence of P. infestans by the SIMBLIGHT1 model made with interpolated weather data deviated, on average, three days less than the calculations without GIS. To interpret the effects of deviations in temperature and relative humidity, a sensitivity analysis of the forecasting models with respect to temperature and relative humidity was also carried out. In all models, temperature had only a small influence on the forecast result, whereas changes in relative humidity had a considerably stronger effect. For SIMBLIGHT1, an hourly change in relative humidity (±6%) caused deviations of up to 27 days, whereas hourly changes in temperature (±2 °C) caused deviations of at most 10 days. The results of this work show that using GIS yields at least the same, and in some cases higher, hit rates for pest and disease forecasts than the previous use of data from a nearby weather station. These results represent a substantial advance for agricultural pest and disease forecasting. For the first time, it is possible to provide nationwide forecasts for any potato field for pest control in agriculture.
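
The thesis interpolates station temperature and relative humidity over Germany with multiple regression. The following sketch shows the general idea under assumed predictors (longitude, latitude, elevation) and made-up station values; the thesis's actual regressors and its regionalisation into interpolation zones are not reproduced here.

```python
import numpy as np

# Hypothetical station data: columns are longitude, latitude, elevation (m);
# temps holds the measured air temperature (°C) at each station for one hour.
stations = np.array([
    [8.5, 50.1, 110.0],
    [9.2, 51.3, 240.0],
    [10.1, 52.4,  60.0],
    [11.0, 50.7, 380.0],
    [12.3, 51.9, 130.0],
])
temps = np.array([14.2, 12.8, 13.9, 11.5, 13.1])

# Fit a multiple linear regression T = b0 + b1*lon + b2*lat + b3*elev
# by ordinary least squares.
X = np.column_stack([np.ones(len(stations)), stations])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)

# Interpolate the temperature for an arbitrary field location.
field = np.array([1.0, 9.8, 51.0, 180.0])   # intercept, lon, lat, elevation
print("interpolated temperature:", field @ coef)
```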

Relevance:

60.00%

Publisher:

Abstract:

Using survey data from natural experiments in three countries that simultaneously received food aid sourced locally and from the United States, we test the hypothesis that locally-sourced commodities are more culturally appropriate and thus preferred over traditional food aid commodities sourced from the donor country. We use a semi-nonparametric regression method to estimate recipients' satisfaction with these commodities across a range of criteria. We establish that recipients of locally procured rations are generally more satisfied with the commodities they receive than are recipients of US-sourced foods. This pattern is especially pronounced among less-well-off recipients.
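
The paper estimates satisfaction with a semi-nonparametric regression. As a stand-in, the sketch below uses a simple Nadaraya-Watson kernel regression to convey the idea of estimating the satisfaction-wealth relationship without imposing a functional form; the variable names and simulated data are assumptions, and the authors' actual semi-nonparametric estimator is not reproduced.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_eval, bandwidth=0.5):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
# Hypothetical data: wealth index of recipients and a satisfaction score (1-5),
# simulated so that poorer recipients report higher satisfaction.
wealth = rng.uniform(0, 1, 200)
satisfaction = 3 + 1.0 * (1 - wealth) + rng.normal(0, 0.5, 200)

grid = np.linspace(0.05, 0.95, 10)
print(kernel_regression(wealth, satisfaction, grid))
```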

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVES: The aim of this study is to assess the structural and cross-cultural validity of the KIDSCREEN-27 questionnaire. METHODS: The 27-item version of the KIDSCREEN instrument was derived from a longer 52-item version and was administered to young people aged 8-18 years in 13 European countries in a cross-sectional survey. Structural and cross-cultural validity were tested using multitrait multi-item analysis, exploratory and confirmatory factor analysis, and Rasch analyses. Zumbo's logistic regression method was applied to assess differential item functioning (DIF) across countries. Reliability was assessed using Cronbach's alpha. RESULTS: Responses were obtained from n = 22,827 respondents (response rate 68.9%). For the combined sample from all countries, exploratory factor analysis with Procrustes rotations revealed a five-factor structure which explained 56.9% of the variance. Confirmatory factor analysis indicated an acceptable model fit (RMSEA = 0.068, CFI = 0.960). The unidimensionality of all dimensions was confirmed (INFIT: 0.81-1.15). DIF results across the 13 countries showed that 5 items presented uniform DIF whereas 10 displayed non-uniform DIF. Reliability was acceptable (Cronbach's alpha = 0.78-0.84 for individual dimensions). CONCLUSIONS: There was substantial evidence for the cross-cultural equivalence of the KIDSCREEN-27 across the countries studied, and the factor structure was highly replicable in individual countries. Further research is needed to correct scores based on DIF results. The KIDSCREEN-27 is a short and promising new tool for use in clinical and epidemiological studies.
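
Zumbo's DIF procedure compares nested logistic regression models for each item: the item response is regressed on the total (or rest) score, then group membership is added (uniform DIF), then a group-by-score interaction (non-uniform DIF). The sketch below shows this for a single dichotomous item with simulated data; Zumbo's method for ordinal items uses ordinal logistic regression, and the group variable, sample size, and effect sizes here are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)            # e.g. two countries (hypothetical)
total = rng.normal(0, 1, n)              # total / rest scale score
# Simulate an item with uniform DIF: the group shifts item difficulty.
logit = 1.2 * total - 0.4 * group
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit(design):
    return sm.Logit(item, sm.add_constant(design)).fit(disp=0)

m1 = fit(np.column_stack([total]))                          # score only
m2 = fit(np.column_stack([total, group]))                   # + group (uniform DIF)
m3 = fit(np.column_stack([total, group, total * group]))    # + interaction (non-uniform DIF)

print("uniform DIF chi2:", 2 * (m2.llf - m1.llf))
print("non-uniform DIF chi2:", 2 * (m3.llf - m2.llf))
```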

Relevance:

60.00%

Publisher:

Abstract:

Roads and highways present a unique challenge to wildlife, as they exert substantial impacts on the surrounding ecosystem through the interruption of a number of ecological processes. With new roads added to the national highway system every year, an understanding of these impacts is required for effective mitigation of potential environmental impacts. A major contributor to these negative effects is the deposition of chemicals used in winter deicing activities into nearby surface waters. These chemicals often vary in composition and may affect freshwater species differently. The negative impacts of widespread deposition of sodium chloride (NaCl) have prompted a search for an 'environmentally friendly' alternative. However, little research has investigated the potential environmental effects of widespread use of these alternatives. Herein, I detail the results of laboratory tests and field surveys designed to determine the impacts of road salt (NaCl) and other chemical deicers on amphibian communities in Michigan's Upper Peninsula. Using larval amphibians, I demonstrate the lethal impacts of a suite of chemical deicers on these sensitive freshwater species. Larval wood frogs (Lithobates sylvatica) were tolerant of short-term (96 hours) exposure to urea (CH4N2O), sodium chloride (NaCl), and magnesium chloride (MgCl2). However, these larvae were very sensitive to acetate products (C8H12CaMgO8, CH3COOK) and calcium chloride (CaCl2). These differences in tolerance suggest that certain deicers may be more harmful to amphibians than others. Secondly, I expanded this analysis to include an experiment designed to determine the sublethal effects of chronic exposure to environmentally realistic concentrations of NaCl on two amphibian species, L. sylvatica and green frogs (L. clamitans). L. sylvatica tend to breed in small, ephemeral wetlands and metamorphose within a single season, whereas L. clamitans breed primarily in more permanent wetlands and often remain as tadpoles for one year or more. These species employ different life history strategies in this region, which may influence their response to chronic NaCl exposure. In both species, chronic exposure produced potentially harmful effects on individual fitness. L. sylvatica larvae had a high incidence of edema, suggesting that NaCl exposure was a significant physiologic stressor for these larvae. L. clamitans larvae showed reduced tail length during exposure, which may affect the adult fitness of these individuals. In order to determine the risk local amphibians face when using these roadside pools, I conducted a survey of the spatial distribution of chloride in the three northernmost counties of Michigan. This area receives a relatively low amount of NaCl, which is confined to state and federal highways. The chloride concentrations in this region were much lower than those in urban systems; however, amphibians breeding in the local area may encounter harmful chloride levels arising from temporal variations in hydroperiods. Spatial variation of chloride levels suggests the road-effect zone for amphibians may extend as far as 1000 m from a salt-treated highway. Lastly, I performed an analysis of the use of specific conductance to predict chloride concentrations in natural surface water bodies. A number of studies have used a regression of chloride on specific conductance for this purpose, often in place of ion chromatography because of budget and time constraints.
However, using a regression method to characterize this relationship does not result in accurate chloride ion concentration estimates.
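
The regression discussed in the last two sentences is a simple linear fit of chloride concentration against specific conductance. The sketch below shows that fit and how one would inspect its residuals; the paired measurements are made up for illustration, and the study's conclusion is precisely that such a fit can mis-estimate chloride in natural waters.

```python
import numpy as np

# Hypothetical paired field measurements: specific conductance (µS/cm)
# and chloride measured by ion chromatography (mg/L).
conductance = np.array([120., 180., 260., 410., 640., 900., 1300., 2100.])
chloride    = np.array([  6.,  14.,  25.,  70., 130., 210.,  340.,  600.])

# Ordinary least-squares fit: chloride = b0 + b1 * conductance
b1, b0 = np.polyfit(conductance, chloride, 1)
predicted = b0 + b1 * conductance
residuals = chloride - predicted

print(f"slope={b1:.3f}, intercept={b0:.1f}")
print("largest absolute error (mg/L):", np.abs(residuals).max())
```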

Relevance:

60.00%

Publisher:

Abstract:

Recurrent airway obstruction (RAO), or heaves, is a naturally occurring asthma-like disease that is related to sensitisation and exposure to mouldy hay and has a familial basis with a complex mode of inheritance. A genome-wide scanning approach using two half-sibling families was taken in order to locate the chromosome regions that contribute to the inherited component of this condition in these families. Initially, a panel of 250 microsatellite markers, chosen as a well-spaced, polymorphic selection covering the 31 equine autosomes, was used to genotype the two half-sibling families, which together comprised 239 Warmblood horses. Subsequently, supplementary markers were added, for a total of 315 genotyped markers. Each half-sibling family is centred on a severely RAO-affected stallion, and the phenotype of each individual was assessed for RAO and related signs, namely breathing effort at rest, breathing effort at work, coughing, and nasal discharge, using an owner-based questionnaire. Analyses with a regression method for half-sibling family structures were performed for RAO and for each of the composite clinical signs separately; two chromosome regions (on ECA13 and ECA15) showed a genome-wide significant association with RAO at P < 0.05. An additional 11 chromosome regions showed a more modest association. This is the first publication that describes the mapping of genetic loci involved in RAO. Several candidate genes are located in these regions, a number of which are interleukins. These are important signalling molecules that are intricately involved in the control of the immune response and are therefore good positional candidates.
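
Regression-based QTL mapping in half-sib families typically regresses each offspring's phenotype on the probability that it inherited a given sire haplotype at the tested position. The sketch below illustrates that single-position test with simulated data; the family size, phenotype scale, and effect size are assumptions, and the paper's actual analysis across 315 markers and several traits is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 120  # offspring of one severely affected sire (hypothetical number)

# Probability that each offspring inherited the sire's first haplotype
# at the tested marker position (normally derived from flanking markers).
p_inherit = rng.uniform(0, 1, n)

# Simulated phenotype (e.g. a breathing-effort score) with a QTL effect.
phenotype = 1.5 * p_inherit + rng.normal(0, 1, n)

# Regress phenotype on inheritance probability and test the slope:
# a significant slope suggests a QTL segregating in this half-sib family.
slope, intercept, r, p_value, se = stats.linregress(p_inherit, phenotype)
print(f"slope={slope:.2f}, p={p_value:.2e}")
```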

Relevance:

60.00%

Publisher:

Abstract:

The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic agents include reduction in toxicity and minimizing or delaying drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, such analyses are often done poorly. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). The Median-Effect Principle/Combination Index method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need for improving the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering a more efficient and reliable inference. Second, for the case in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments. Third, and lastly, we developed a method, based on Loewe additivity, that allows one to quantitatively assess interaction between two agents combined at a fixed dose ratio. The proposed method provides a comprehensive and honest account of uncertainty within drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared to treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with one histone deacetylation inhibitor, suberoylanilide hydroxamic acid or trichostatin A, in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for cell growth inhibition of ovarian cancer cells.
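
For context, the conventional approach that the dissertation critiques is the Chou-Talalay Median-Effect/Combination Index method: fit the median-effect equation to each single agent, then compute CI = d1/Dx1 + d2/Dx2 at a chosen effect level. The sketch below is a minimal version of that conventional calculation with made-up dose-response data; it is not the proposed Bayesian methodology.

```python
import numpy as np

def median_effect_fit(doses, fa):
    """Fit the median-effect equation fa/(1-fa) = (D/Dm)^m by linear
    regression of log(fa/fu) on log(D); returns (m, Dm)."""
    x = np.log(doses)
    y = np.log(fa / (1 - fa))
    m, b = np.polyfit(x, y, 1)
    return m, np.exp(-b / m)

def combination_index(d1, d2, fa, fit1, fit2):
    """Chou-Talalay CI at combined effect level fa (mutually exclusive form)."""
    (m1, Dm1), (m2, Dm2) = fit1, fit2
    Dx1 = Dm1 * (fa / (1 - fa)) ** (1 / m1)
    Dx2 = Dm2 * (fa / (1 - fa)) ** (1 / m2)
    return d1 / Dx1 + d2 / Dx2

# Hypothetical single-agent dose-response data (fraction affected).
doses_a = np.array([0.5, 1, 2, 4, 8]);  fa_a = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
doses_b = np.array([1, 2, 4, 8, 16]);   fa_b = np.array([0.08, 0.20, 0.45, 0.70, 0.88])
fit_a, fit_b = median_effect_fit(doses_a, fa_a), median_effect_fit(doses_b, fa_b)

# CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism.
print(combination_index(d1=1.0, d2=2.0, fa=0.5, fit1=fit_a, fit2=fit_b))
```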

Relevance:

60.00%

Publisher:

Abstract:

Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence of the earlier cycle from that of the later cycle, transforming n cycles of raw data into n-1 cycles of differenced data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria (threshold identification, maximum R2, and maximum slope) were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of maximum R2 and maximum slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error of subtracting an unknown background and is thus theoretically more accurate and reliable. This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
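
A minimal sketch of the taking-difference idea described above, using simulated noise-free fluorescence: differencing consecutive cycles cancels a constant background, after which the log-differences are linear in cycle number, so the slope gives the amplification efficiency and the intercept the initial signal. The cycle range, efficiency, and background values below are made up.

```python
import numpy as np

# Simulated raw fluorescence for one qPCR run during the exponential phase:
# F_n = background + F0 * E**n, with an unknown constant background.
cycles = np.arange(10, 21)
E_true, F0_true, background = 1.9, 1e-4, 5.0
fluor = background + F0_true * E_true ** cycles

# Taking-difference step: consecutive differences cancel the background,
# since F_{n+1} - F_n = F0 * (E - 1) * E**n.
diff = np.diff(fluor)

# Linear regression of ln(diff) on cycle number n.
slope, intercept = np.polyfit(cycles[:-1], np.log(diff), 1)
E_hat = np.exp(slope)                       # amplification efficiency
F0_hat = np.exp(intercept) / (E_hat - 1)    # initial fluorescence ~ initial DNA

print(f"estimated efficiency: {E_hat:.3f}, estimated F0: {F0_hat:.2e}")
```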

Relevance:

60.00%

Publisher:

Abstract:

This article aims to analyze the occurrence of changes in the countryside in Brazil, based on the observation of variables such as type of job, income, family profile, and access to consumer goods, services, and information and communication technology. Exploratory analysis and a logistic regression method were used, based on data from the Pesquisa Nacional de Amostra de Domicílios (Brazilian National Household Sample Survey - PNAD) for the state of Espirito Santo. The study found that about 27.2% of individuals living in the countryside report an urban profile, revealing that the urban lifestyle is not confined to the city.
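
The article classifies rural residents by an "urban profile" using logistic regression on household variables. The sketch below shows that kind of model on simulated data; the covariates and coefficients are placeholders, not the PNAD variables or the article's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
# Hypothetical household-survey covariates for rural residents:
non_farm_job  = rng.integers(0, 2, n)      # 1 if main job is non-agricultural
has_internet  = rng.integers(0, 2, n)      # access to ICT
income        = rng.normal(0, 1, n)        # standardized household income
urban_profile = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 1.2 * non_farm_job
                                                  + 0.8 * has_internet + 0.5 * income))))

X = sm.add_constant(np.column_stack([non_farm_job, has_internet, income]))
model = sm.Logit(urban_profile, X).fit(disp=0)
print(model.params)             # log-odds of an "urban profile" per covariate
print(model.predict(X).mean())  # estimated share with an urban profile
```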

Relevance:

60.00%

Publisher:

Abstract:

We propose a linear regression method for estimating Weibull parameters from life tests. The method uses stochastic models of the unreliability at each failure instant. As a result, a heteroscedastic regression problem arises that is solved by weighted least squares minimization. The main feature of our method is an innovative s-normalization of the failure data models, used to obtain analytic expressions of the centers and weights for the regression. The method has been contrasted, via Monte Carlo simulation, with Benard's approximation and maximum likelihood estimation, and it achieved the highest global scores for robustness and performance.
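
For reference, the classical regression approach to Weibull life data (one of the benchmarks mentioned above) plots Benard's median-rank estimate of the unreliability against the ordered failure times and fits the linearized Weibull CDF by least squares. The sketch below uses made-up, complete (uncensored) failure data and plain, unweighted least squares rather than the paper's s-normalized, weighted scheme.

```python
import numpy as np

# Hypothetical complete life-test data: failure times in hours, sorted.
t = np.sort(np.array([105., 180., 240., 310., 400., 520., 650., 820.]))
n = len(t)
i = np.arange(1, n + 1)

# Benard's (median-rank) approximation of the unreliability at each failure.
F = (i - 0.3) / (n + 0.4)

# Weibull CDF: F(t) = 1 - exp(-(t/eta)**beta)
# Linearized:  ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)
x = np.log(t)
y = np.log(-np.log(1.0 - F))
beta, intercept = np.polyfit(x, y, 1)     # ordinary (unweighted) least squares
eta = np.exp(-intercept / beta)

print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f} h")
```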

Relevance:

60.00%

Publisher:

Abstract:

This work proposes a series of algorithms for extracting information from data sets with neural networks. The algorithms are studied with Enhanced Neural Networks (ENN), because this architecture has certain advantages when functions are approximated with neural networks: in an ENN the weights of the main matrix vary with each pattern, so a smaller approximation error is obtained. ENN networks gather their information in the weights of an auxiliary network, and a method is proposed to extract information from the network through those weights in the form of rules, assigning a certainty factor to each rule. The ENN obtains a mean squared error smaller than the theoretical error of a mathematical approximation such as, for example, a Taylor polynomial. It is shown that when an ENN is trained on a set of patterns obtained from a function of real variables, its associated weights exhibit relationships similar to those satisfied by the independent variables of that function. ENN networks approximate polynomials and extract knowledge from a data set in a way similar to statistical regression, while dealing more adequately with multicollinearity when it is present; the relationships obtained from the associated weights of the auxiliary network matrix are similar to the coefficients of a regression on the same numerical data. An ENN trained on a data set from a boolean function extracts the knowledge from its associated weights, and the influence of the variables in the logical rule of the boolean function is reflected in those weights associated with the auxiliary network of the ENN. Finally, a radial basis function (RBF) network is proposed for classification and prediction in forestry and agricultural problems, obtaining better results than the regression model and other methods. The results with an RBF network improve on the regression method when the available data are collinear and not very numerous. The approach also detects which variables are most important with respect to the forecast variable, and the mean squared error obtained with RBF networks is smaller than with other methods, in particular the regression model.

Relevance:

60.00%

Publisher:

Abstract:

In this paper we present a global overview of the recent study carried out in Spain for the new seismic hazard map, whose final goal is the revision of the Building Code in our country (NCSE-02). The study was carried out by a working group joining experts from the Instituto Geografico Nacional (IGN) and the Technical University of Madrid (UPM), with the different phases of the work supervised by a committee of national experts from public institutions involved in seismic hazard. The PSHA method (Probabilistic Seismic Hazard Assessment) has been followed, quantifying the epistemic uncertainties through a logic tree and the aleatory ones, linked to the variability of parameters, by means of probability density functions and Monte Carlo simulations. In a first phase, the inputs were prepared, which essentially are: 1) an update of the project catalogue and its homogenization to Mw; 2) proposal of zoning models and source characterization; 3) calibration of Ground Motion Prediction Equations (GMPEs) with actual data and development of a local model with data collected in Spain for Mw < 5.5. In a second phase, a sensitivity analysis of the different input options on hazard results was carried out in order to have criteria for defining the branches of the logic tree and their weights. Finally, the hazard estimation was done with the logic tree shown in figure 1, including nodes for quantifying uncertainties corresponding to: 1) the method for hazard estimation (zoning and zoneless); 2) zoning models; 3) GMPE combinations used; and 4) the regression method for estimation of source parameters. In addition, the aleatory uncertainties corresponding to the magnitude of the events, the recurrence parameters, and the maximum magnitude for each zone have also been considered through probability density functions and Monte Carlo simulations. The main conclusions of the study are presented here, together with the results obtained in terms of PGA and other spectral accelerations SA(T) for return periods of 475, 975 and 2475 years. Maps of the coefficient of variation (COV) are also presented to give an idea of the zones where the dispersion among results is highest and the zones where the results are robust.
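
One mechanical step in a PSHA logic tree is combining the hazard curves of the branches with their weights and reading off ground motions at the target return periods. The sketch below illustrates that step, plus a simple coefficient of variation across branches, with entirely made-up hazard values and weights; it does not reproduce the Spanish model, its GMPEs, or its actual branch structure.

```python
import numpy as np

# Hypothetical hazard curves: annual probability of exceedance of PGA levels
# computed for three logic-tree branches (e.g. different GMPE combinations).
pga = np.array([0.05, 0.10, 0.20, 0.30, 0.40])          # g
branch_curves = np.array([
    [2.0e-2, 6.0e-3, 1.2e-3, 4.0e-4, 1.5e-4],
    [1.5e-2, 4.5e-3, 9.0e-4, 3.0e-4, 1.0e-4],
    [2.5e-2, 8.0e-3, 1.8e-3, 6.0e-4, 2.5e-4],
])
weights = np.array([0.5, 0.3, 0.2])                      # branch weights sum to 1

# Weighted mean hazard curve over the logic tree.
mean_curve = weights @ branch_curves

# PGA for a 475-year return period: annual exceedance probability 1/475.
target = 1.0 / 475.0
pga_475 = np.interp(np.log(target), np.log(mean_curve[::-1]), pga[::-1])
print(f"PGA(475 yr) ~ {pga_475:.3f} g")

# Coefficient of variation across branches, analogous to the COV maps.
cov = branch_curves.std(axis=0) / mean_curve
print("COV per PGA level:", np.round(cov, 2))
```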

Relevance:

60.00%

Publisher:

Abstract:

One of the biggest challenges that software developers face is making an accurate estimate of project effort. In this work, radial basis function neural networks have been applied to software effort estimation using a NASA dataset. This paper evaluates and compares a radial basis function network against a regression model. The results show that the radial basis function neural network obtained a lower mean squared error than the regression method.
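
A minimal sketch of the comparison described above: a Gaussian radial basis function network (hidden RBF layer plus linear output weights) against a simple linear regression, scored by mean squared error. The project data are simulated, not the NASA dataset used in the paper, and the error reported here is in-sample only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical project data: size in KLOC vs. effort in person-months.
kloc = rng.uniform(2, 100, 60)
effort = 5.0 + 0.8 * kloc ** 1.1 + rng.normal(0, 5, 60)

def rbf_design(x, centers, width):
    """Gaussian RBF activations of x for the given centers and width."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# RBF network: fixed Gaussian hidden layer, output weights fit by least squares.
centers = np.linspace(kloc.min(), kloc.max(), 8)
H = np.column_stack([np.ones(len(kloc)), rbf_design(kloc, centers, width=12.0)])
w, *_ = np.linalg.lstsq(H, effort, rcond=None)
rbf_pred = H @ w

# Simple linear regression baseline.
b1, b0 = np.polyfit(kloc, effort, 1)
lin_pred = b0 + b1 * kloc

mse = lambda p: np.mean((effort - p) ** 2)
print(f"RBF MSE: {mse(rbf_pred):.1f}, linear regression MSE: {mse(lin_pred):.1f}")
```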