882 results for non-linear regression
Abstract:
Background Heatwaves can cause excess deaths ranging from tens to thousands within a couple of weeks in a local area. The excess mortality due to a special event (e.g., a heatwave or an epidemic outbreak) is estimated by subtracting the expected mortality under ‘normal’ conditions from the historical daily mortality records. Calculating excess mortality is a scientific challenge because of the stochastic temporal pattern of daily mortality data, which is characterised by (a) long-term changes in the mean level (i.e., non-stationarity) and (b) the non-linear temperature-mortality association. The Hilbert-Huang Transform (HHT) algorithm is a novel method originally developed in signal processing for analysing non-linear and non-stationary time series; however, it has not been applied in public health research. This paper aims to demonstrate the applicability and strengths of the HHT algorithm in analysing health data. Methods Special R functions were developed to implement the HHT algorithm, decomposing the daily mortality time series into trend and non-trend components according to the underlying physical mechanisms. The excess mortality is calculated directly from the resulting non-trend component series. Results The Brisbane (Queensland, Australia) and Chicago (United States) daily mortality time series were used to calculate the excess mortality associated with heatwaves. The HHT algorithm estimated 62 excess deaths related to the February 2004 Brisbane heatwave. To calculate the excess mortality associated with the July 1995 Chicago heatwave, the HHT algorithm needed to handle the mode-mixing issue; it estimated 510 excess deaths for that event.
To exemplify potential applications, the HHT decomposition results were used as input data for a subsequent regression analysis of the Brisbane data, investigating the association between excess mortality and different risk factors. Conclusions The HHT algorithm is a novel and powerful analytical tool for time series data. It has real potential for a wide range of applications in public health research because of its ability to decompose a non-linear and non-stationary time series into trend and non-trend components consistently and efficiently.
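The excess-mortality arithmetic described in the Methods is simple once a trend series is in hand: subtract the trend from the observed daily deaths and sum the residuals over the event window. Below is a minimal sketch on synthetic data, using a centred moving average as a crude stand-in for the HHT-derived trend component (the paper's actual trend comes from the HHT decomposition; all numbers here are invented):

```python
import numpy as np

def moving_average_trend(series, window=31):
    # Centred moving average as a crude trend estimate
    # (a stand-in for the HHT-derived trend component).
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

def excess_deaths(daily_deaths, trend, event_days):
    # Excess mortality = observed minus 'normal' (trend) over the event window.
    residual = daily_deaths - trend
    return residual[event_days].sum()

# Synthetic example: baseline of ~20 deaths/day plus a 5-day heatwave spike.
rng = np.random.default_rng(0)
deaths = rng.poisson(20, size=365).astype(float)
deaths[180:185] += 12          # simulated heatwave adds 12 deaths/day
trend = moving_average_trend(deaths)
excess = excess_deaths(deaths, trend, np.arange(180, 185))
```

With the simulated spike of 12 extra deaths per day for five days, the estimate lands near the injected total of 60, less the portion the moving average absorbs into the trend.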
Abstract:
Food neophobia is a highly heritable trait characterized by the rejection of foods that are novel or unknown. It potentially limits dietary variety, lowering intake of, and preference for, fruits and vegetables in particular. Understanding the non-genetic (environmental) factors that may influence the expression of food neophobia is essential to improving children’s consumption of fruits and vegetables and encouraging the adoption of healthier diets. The aim of this study was to examine whether maternal infant feeding beliefs (at four months) were associated with the expression of food neophobia in toddlers, and whether controlling feeding practices mediated this relationship. Participants were 244 first-time mothers (M = 30.4, SD = 5.1 years) allocated to the control group of the NOURISH randomized controlled trial. The relationships between infant feeding beliefs (Infant Feeding Questionnaire) at four months, controlling child feeding practices (Child Feeding Questionnaire), and food neophobia (Child Food Neophobia Scale) at 24 months were tested using correlational and multiple linear regression models (adjusted for significant covariates). Higher maternal Concern about infant under-eating and becoming underweight at four months was associated with higher child food neophobia at two years. Similarly, lower Awareness of infant hunger and satiety cues was associated with higher child food neophobia. Both associations were significantly mediated by mothers’ use of Pressure to eat. Intervening early to promote positive feeding practices may help reduce mothers' use of controlling practices as children develop. Further research is required to elucidate the bi-directional nature of the mother-child feeding relationship.
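The mediation logic described above (feeding beliefs acting on neophobia through Pressure to eat) can be illustrated with the classic two-regression check: the belief-outcome coefficient should attenuate once the mediator enters the model. A toy sketch on simulated data, not the NOURISH data; the variable names and effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 244  # matches the study's sample size, but the data are synthetic
concern = rng.normal(size=n)                                  # belief at 4 months
pressure = 0.6 * concern + rng.normal(scale=0.5, size=n)      # mediator
neophobia = 0.5 * pressure + rng.normal(scale=0.5, size=n)    # outcome at 24 months

def ols(predictors, y):
    # Ordinary least squares with an intercept; returns coefficients.
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

total_effect = ols([concern], neophobia)[1]            # belief -> neophobia
direct_effect = ols([concern, pressure], neophobia)[1] # same, controlling mediator
# Mediation shows up as the direct effect shrinking toward zero
# once Pressure to eat is controlled for.
```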
Abstract:
We present a distinguishing attack against SOBER-128 with linear masking. We found a linear approximation with a bias of 2^−8.8 for the non-linear filter. The attack applies the observation, made by Ekdahl and Johansson, that there is a sequence of clocks for which the linear combination of some states vanishes. This linear dependency allows the linear masking method to be applied. We also show that the bias of the distinguisher can be improved (or estimated more precisely) by considering quadratic terms of the approximation. The probability bias of the quadratic approximation used in the distinguisher is estimated to be O(2^−51.8), so we claim that SOBER-128 is distinguishable from a truly random cipher by observing O(2^103.6) keystream words.
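The jump from the 2^−51.8 bias to the 2^103.6 keystream requirement follows the standard rule of thumb that distinguishing a biased source from random needs on the order of bias^−2 samples. A quick check of that arithmetic:

```python
def keystream_needed(log2_bias):
    # A distinguisher exploiting a bias of 2**log2_bias needs on the
    # order of bias**-2 samples, i.e. 2**(-2 * log2_bias) keystream words.
    return -2 * log2_bias

words = keystream_needed(-51.8)  # log2 of the required keystream length
```

For the quoted bias this gives 2^103.6 keystream words, matching the abstract's claim.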
Abstract:
Firm-customer digital connectedness for effective sensing and responding is a strategic imperative for contemporary competitive firms. This research-in-progress paper conceptualizes and operationalizes the firm-customer mobile digital connectedness of a smart-mobile customer. The empirical investigation focuses on mobile app users and the impact of mobile apps on customer expectations. Based on pilot data collected from 127 customers, we tested hypotheses pertaining to firm-customer mobile digital connectedness and customer expectations. Our analysis, using linear and non-linear postulations, reveals that customers raise their expectations as they increase their digital interactions with a firm.
Abstract:
Existing crowd counting algorithms rely on holistic, local or histogram-based features to capture crowd properties, with regression then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram-based methods, and to compare various image features and regression models. A K-fold cross-validation protocol is followed to evaluate performance across five public datasets: UCSD, PETS 2009, Fudan, Mall and Grand Central. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are: Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram-based features; that optimal performance is observed using all image features except textures; and that GPR outperforms linear, KNN and NN regression.
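The K-fold protocol used for the comparison is straightforward to reproduce. Below is a minimal sketch with linear regression standing in as one of the compared models; the feature matrix is synthetic, not one of the five datasets:

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    # Shuffle indices once, then split into k roughly equal folds,
    # mirroring the K-fold protocol used to compare regressors.
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, fit, predict, k=5):
    # Train on k-1 folds, test on the held-out fold, and average the error.
    folds = k_fold_indices(len(y), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errors.append(np.mean(np.abs(predict(model, X[test]) - y[test])))
    return float(np.mean(errors))  # mean absolute error across folds

# Linear regression (with intercept) as one of the compared models.
fit_lin = lambda X, y: np.linalg.lstsq(
    np.column_stack([np.ones(len(y)), X]), y, rcond=None)[0]
pred_lin = lambda w, X: np.column_stack([np.ones(len(X)), X]) @ w

# Synthetic stand-ins for crowd features and counts.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=100)
mae = cross_validate(X, y, fit_lin, pred_lin)
```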
Abstract:
Purpose Improved survival for men with prostate cancer has led to increased attention to factors influencing quality of life (QOL). As protein levels of vascular endothelial growth factor (VEGF) and insulin-like growth factor 1 (IGF-1) have been reported to be associated with QOL in people with cancer, we sought to identify whether single-nucleotide polymorphisms (SNPs) of these genes were associated with QOL in men with prostate cancer. Methods Multiple linear regression of two data sets (including approximately 750 men newly diagnosed with prostate cancer and 550 men from the general population) was used to investigate SNPs of VEGF and IGF-1 (10 SNPs in total) for associations with QOL (measured by the SF-36v2 health survey). Results Men with prostate cancer who carried the minor ‘T’ allele for IGF-1 SNP rs35767 had higher mean Role-Physical scale scores (≥0.3 SD) compared to non-carriers (p < 0.05). While this association was not identified in men from the general population, one IGF-1 SNP, rs7965399, was associated with higher mean Bodily Pain scale scores in men from the general population, an association not found in men with prostate cancer: men from the general population who carried the rare ‘C’ allele had higher mean Bodily Pain scale scores (≥0.3 SD) than non-carriers (p < 0.05). Conclusions By identifying SNPs that are associated with QOL in men with prostate cancer and men from the general population, this study adds to the mapping of the complex interrelationships that influence QOL and suggests a role for IGF-1 in physical QOL outcomes. Future research may identify biomarkers associated with increased risk of poor QOL, which could assist in the provision of pre-emptive support for those identified as at risk.
Abstract:
Application of "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures permits direct and accurate determination of ultimate system strengths, without resort to simplified elastic methods of analysis and semi-empirical specification equations. However, the application of advanced analysis methods has previously been restricted to steel frames comprising only compact sections that are not influenced by the effects of local buckling. A concentrated plasticity formulation suitable for practical advanced analysis of steel frame structures comprising non-compact sections is presented in this paper. This formulation, referred to as the refined plastic hinge method, implicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling.
Abstract:
In this paper, we introduce the Stochastic Adams-Bashforth (SAB) and Stochastic Adams-Moulton (SAM) methods, which extend the tau-leaping framework to incorporate past information. Using the theta-trapezoidal tau-leap method of weak order two as a starting procedure, we show that the k-step SAB method with k >= 3 is order three in the mean and correlation, while a predictor-corrector implementation of the SAM method is weak order three in the mean but only order one in the correlation. These convergence results have been derived analytically for linear problems and successfully tested numerically for both linear and non-linear systems. A series of additional examples has been implemented to demonstrate the efficacy of this approach.
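For readers unfamiliar with the multistep structure behind the SAB methods, here is a sketch of the classic 3-step Adams-Bashforth scheme. This is only the deterministic ODE analogue; the stochastic versions layer tau-leap increments on top of this skeleton:

```python
def ab3(f, y0, t0, t1, n):
    # Classic 3-step Adams-Bashforth:
    #   y_{n+1} = y_n + h/12 * (23 f_n - 16 f_{n-1} + 5 f_{n-2})
    h = (t1 - t0) / n
    t, y = t0, float(y0)
    hist = []  # past derivative evaluations the multistep formula reuses
    # Bootstrap the first two steps with RK4 so the formula
    # has the past derivative values it needs.
    for _ in range(2):
        k1 = f(t, y); k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2); k4 = f(t + h, y + h*k3)
        hist.append(k1)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    for _ in range(n - 2):
        fn = f(t, y)
        y += h * (23*fn - 16*hist[-1] + 5*hist[-2]) / 12
        hist.append(fn)
        t += h
    return y

approx = ab3(lambda t, y: -y, 1.0, 0.0, 1.0, 200)  # dy/dt = -y
```

On dy/dt = -y the scheme tracks exp(-t) closely, consistent with its third-order truncation error.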
Abstract:
Background Understanding the relationship between extreme weather events and childhood hand, foot and mouth disease (HFMD) is important in the context of climate change. This study aimed to quantify the relationship between extreme precipitation and childhood HFMD in Hefei, China, and, further, to explore whether the association varied across urban and rural areas. Methods Daily data on HFMD counts among children aged 0–14 years from 1 January 2010 to 31 December 2012 were retrieved from the Hefei Center for Disease Control and Prevention. Daily data on mean temperature, relative humidity and precipitation during the same period were supplied by the Hefei Bureau of Meteorology. We used a Poisson regression model combined with a distributed lag non-linear model to assess the association between extreme precipitation (≥ 90th percentile) and childhood HFMD, controlling for mean temperature, humidity, day of the week, and long-term trend. Results There was a statistically significant association between extreme precipitation and childhood HFMD. The effect of extreme precipitation on childhood HFMD was greatest at a lag of six days, with a 5.12% (95% confidence interval: 2.7–7.57%) increase in childhood HFMD for an extreme precipitation event versus no precipitation. Notably, urban children and children aged 0–4 years were particularly vulnerable to the effects of extreme precipitation. Conclusions Our findings indicate that extreme precipitation may increase the incidence of childhood HFMD in Hefei, highlighting the importance of protecting children from forthcoming extreme precipitation, particularly those who are young and from urban areas.
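The distributed-lag structure in the Methods amounts to regressing today's case count on the exposure at lags 0 through L, so that each lag gets its own coefficient (the abstract reports the largest effect at a six-day lag). A minimal sketch of building the lag matrix such a model is fit on; the exposure series is a toy indicator, not the Hefei data:

```python
import numpy as np

def lag_matrix(exposure, max_lag):
    # Column j holds the exposure lagged by j days; the first j rows of
    # column j are NaN because no earlier exposure is available.
    n = len(exposure)
    out = np.full((n, max_lag + 1), np.nan)
    for lag in range(max_lag + 1):
        out[lag:, lag] = exposure[: n - lag]
    return out

# Toy extreme-precipitation indicator over ten days.
extreme = np.array([0, 0, 1, 0, 0, 0, 0, 0, 1, 0], dtype=float)
lags = lag_matrix(extreme, max_lag=6)
# The day-2 event appears in column 6 on day 8 (a six-day lag).
```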
Abstract:
Background The number of citations received by an article is considered an objective marker of the importance and quality of the research work. The present study aims to identify the determinants of citations for research articles published by Sri Lankan authors. Methods Papers were retrieved from the SciVerse Scopus® (Elsevier Properties S.A, USA) database for the 10 years from 1st January 1997 to 31st December 2006, of which 50% were selected for inclusion by simple random sampling. The primary outcome measure was the citation rate (defined as the number of citations during the two years after publication). Citation data were collected using the SciVerse Scopus® Citation Analyzer, and self-citations were excluded. A linear regression analysis was performed with ‘number of citations’ as the continuous dependent variable and the study characteristics as independent variables. Results The number of publications steadily increased during the study period. Over three quarters of the papers were published in international journals. More than half of the publications were research studies (55.3%), and most of these were descriptive cross-sectional studies (27.1%). The mean number of citations within two years of publication was 1.7, and 52.1% of papers were not cited within the first two years. The mean number of citations for collaborative studies (2.74) was significantly higher than that for non-collaborative studies (0.66). The mean number of citations did not differ significantly depending on whether the publication reported a positive result (2.08) or not (2.92), and was also not influenced by the presence (2.30) or absence (1.99) of the main study conclusion in the title of the article. In the linear regression model, the journal rank, number of authors, conducting the study abroad, being a research study or systematic review/meta-analysis, and having regional and/or international collaboration all significantly increased the number of citations.
Conclusion The journal rank, number of authors, conducting the study abroad, being a research study or systematic review/meta-analysis and having regional and/or international collaboration all significantly increased the number of citations. However, the presence of a positive result in the study did not influence the citation rate.
Abstract:
This study examined the short-term effects of temperature on cardiovascular hospital admissions (CHA) in the largest tropical city in Southern Vietnam. We applied Poisson time-series regression models with a Distributed Lag Non-Linear Model (DLNM) to examine the temperature-CHA association while adjusting for seasonal and long-term trends, day of the week, holidays, and humidity. The threshold temperature and the added effects of heatwaves were also evaluated. The exposure-response curve of the temperature-CHA association reveals a J-shaped relationship with a threshold temperature of 29.6 °C. The delayed effects of temperature on CHA lasted for up to a week (lags 0–5 days). The overall risk of CHA increased by 12.9% (RR, 1.129; 95%CI, 0.972–1.311) during heatwave events, defined as temperatures ≥ the 99th percentile for ≥2 consecutive days. The modifying roles of gender and age were inconsistent and non-significant in this study. Prevention programs that reduce the risk of cardiovascular disease in relation to high temperatures should be developed.
Abstract:
Identifying inequalities in air pollution levels across population groups can help address environmental justice concerns. We were interested in assessing these inequalities across major urban areas in Australia. We used a land-use regression model to predict ambient nitrogen dioxide (NO2) levels and sought the best socio-economic and population predictor variables. We used a generalised least squares model that accounted for spatial correlation in NO2 levels to examine the associations between the variables. We found that the best model included the index of economic resources (IER) score as a non-linear variable and the percentage of non-Indigenous persons as a linear variable. NO2 levels decreased with increasing IER scores (higher scores indicate less disadvantage) in almost all major urban areas, and NO2 also decreased slightly as the percentage of non-Indigenous persons increased. However, the magnitude of differences in NO2 levels was small and may not translate into substantive differences in health.
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks (graphs with many thousands of nodes, where an undirected edge between two nodes does not indicate the direction of influence) and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a Linear Program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold-standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations.
In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational-experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
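The l1-constrained regression at the heart of the Results can be illustrated without the LP machinery. The sketch below solves the same LASSO problem by coordinate descent rather than the paper's linear-programming formulation, on a synthetic profile where only two of twenty "genes" influence the target; it demonstrates the sparsity that makes the recovered network interpretable:

```python
import numpy as np

def soft_threshold(x, t):
    # Shrink toward zero by t; the operator that produces exact zeros.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate-descent LASSO; the paper instead solves the
    # l1-constrained fit as a linear program, but the estimator
    # being approximated is the same.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j's contribution added back.
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return beta

# Synthetic 'transcript profile': 20 genes, only genes 0 and 3 drive the target.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=100)
beta = lasso_cd(X, y, lam=5.0)
```

The recovered coefficient vector is sparse: the two true regulators get large weights and the remaining columns shrink to (near) zero.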
Abstract:
The soluble solids content of intact fruit can be measured non-invasively by near infrared spectroscopy, allowing “sweetness” grading of individual fruit. However, little information is available in the literature with respect to the robustness of such calibrations. We developed calibrations based on a restricted wavelength range (700–1100 nm), suitable for use with low-cost silicon detector systems, using a stepwise multiple linear regression routine. Calibrations for total soluble solids (°Brix) in intact pineapple fruit were not transferable between summer and winter growing seasons. A combined calibration (data of three harvest dates) validated reasonably well against a population set drawn from all harvest dates (r2 = 0.72, SEP = 1.84 °Brix). Calibrations for Brix in melon were transferable between two of the three varieties examined. However, a lack of robustness of calibration was indicated by poor validation within populations of fruit harvested at different times. Further work is planned to investigate the robustness of calibration across varieties, growing districts and seasons.
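The stepwise routine referred to above can be sketched as greedy forward selection over wavelength bands: at each step, add the band that most reduces the residual sum of squares. Everything below is synthetic (invented band indices and coefficients), not the pineapple or melon calibration data:

```python
import numpy as np

def rss(X, y, cols):
    # Residual sum of squares of an OLS fit on the selected columns
    # (plus an intercept).
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(((y - A @ coef) ** 2).sum())

def forward_select(X, y, n_vars):
    # Greedy stepwise MLR: repeatedly add the wavelength (column)
    # that most reduces the residual sum of squares.
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(n_vars):
        best = min(remaining, key=lambda j: rss(X, y, chosen + [j]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Synthetic spectra: 50 fruit, 10 wavelength bands; Brix depends on bands 2 and 7.
rng = np.random.default_rng(3)
spectra = rng.normal(size=(50, 10))
brix = 12 + 3.0 * spectra[:, 2] + 1.0 * spectra[:, 7] + rng.normal(scale=0.2, size=50)
bands = forward_select(spectra, brix, n_vars=2)
```

With this setup the routine recovers the two informative bands; in practice the calibration-transfer problems described in the abstract arise because the selected bands and coefficients drift across seasons and varieties.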
Abstract:
Knowledge of drag force is an important design parameter in aerodynamics. Measurement of aerodynamic forces at hypersonic speed is a challenge, and ground test facilities such as shock tunnels are usually used to carry out such tests. Accelerometer-based force balances are commonly employed for measuring aerodynamic drag around bodies in hypersonic shock tunnels. In this study, we present an analysis of the effect of model material on the performance of an accelerometer balance used for drag measurement in impulse facilities. From the experimental studies performed on models constructed of Bakelite HYLEM and Aluminum, it is clear that the rigid body assumption does not hold during the short testing duration available in shock tunnels. This is notwithstanding the fact that the rubber bush supporting the model allows unconstrained motion of the model during the short testing time available in the shock tunnel. The vibrations induced in the model by impact loading in the shock tunnel are damped out in the metallic model, resulting in a smooth acceleration signal, while the signal becomes noisy and non-linear when non-isotropic materials such as Bakelite HYLEM are used. This also implies that careful analysis and proper data reduction methodologies are necessary for measuring aerodynamic drag on non-metallic models in shock tunnels. Results from the drag measurements carried out on a 60° half-angle blunt cone are given in the present analysis.