88 results for non-linear regression


Relevance: 100.00%

Abstract:

In the finite element modelling of structural frames, external loads such as wind, dead and imposed loads usually act along the elements rather than at the nodes only. Conventionally, when an element is subjected to such general transverse element loads, they are converted to nodal forces acting at the element ends by either the lumped or the consistent load approach. The treatment of element loads is especially important for first- and second-order elastic behaviour, to which steel structures, and thin-walled steel structures in particular, are critically prone; for stocky element sections, inelastic behaviour may instead govern. Accurate first- and second-order elastic displacement solutions for the element load effect along an element are therefore crucial, yet they cannot be obtained by either the nodal or the consistent load method alone when no equilibrium condition is enforced in the finite element formulation, and this shortfall can inevitably impair the structural safety of the steel structure. A formulation that enforces this condition can therefore be regarded as a unique element load method that accounts for element loads non-linearly. If accurate displacement solutions are targeted for simulating first- and second-order elastic behaviour along an element on the basis of a sophisticated non-linear element stiffness formulation, numerous prescribed stiffness matrices must ordinarily be used to cover the plethora of specific transverse element loading patterns encountered. To circumvent this shortcoming, the present paper proposes a numerical technique that includes the transverse element loading in the non-linear stiffness formulation without numerous prescribed stiffness matrices, and that is able to predict structural responses involving the first-order effect of element loads as well as the second-order coupling between the transverse load and the axial force in the element.
This paper shows that the principle of superposition can be applied to derive a generalised stiffness formulation for the element load effect, so that the form of the stiffness matrix remains unchanged across specific loading patterns; only the magnitude of the loading (the element load coefficients) needs to be adjusted in the stiffness formulation, and the non-linear effect of element loading can then be accommodated by updating the element load coefficients through the non-linear solution procedure. In principle, the element loading distribution is converted into a single loading magnitude at mid-span, which provides the initial perturbation that triggers the member bowing effect due to the transverse element loads. This approach sacrifices the effect of the loading distribution away from mid-span, so the load-deflection behaviour elsewhere may not be as accurate as at mid-span, but the discrepancy is shown to be trivial. This novelty allows a very useful generalised stiffness formulation for a single higher-order element with arbitrary transverse loading patterns to be formulated. A further contribution of this paper is the shift from purely nodal response (system analysis) to both nodal and element response (sophisticated element formulation). In conventional finite element methods, such as the cubic element, accurate solutions are found only at the nodes; structural safety within an element therefore cannot be reliably assured, which hinders engineering application. The results of the paper are verified using analytical stability function studies, as well as numerical results reported by independent researchers for several simple frames.

Relevance: 100.00%

Abstract:

The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines, resulting in benefits to plant operators such as shorter downtimes, higher operational reliability, reduced operations and maintenance costs, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex, so there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying them, and their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
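A common concrete instance of such a prognostic technique is to fit a degradation model to a condition-monitoring indicator and extrapolate it to a failure threshold, giving an estimate of remaining useful life (RUL). The sketch below illustrates this on synthetic data; the exponential trend, noise level and threshold value are all illustrative assumptions, not taken from any study in the review.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical condition-monitoring indicator: vibration amplitude that
# grows as the machine degrades (synthetic data; real signals are far
# noisier and often non-stationary).
rng = np.random.default_rng(0)
t = np.arange(0, 61, dtype=float)           # operating hours observed so far
y = 0.5 * np.exp(0.03 * t) + rng.normal(0, 0.05, t.size)

def degradation(t, a, b):
    """Simple exponential degradation model: y = a * exp(b * t)."""
    return a * np.exp(b * t)

(a, b), _ = curve_fit(degradation, t, y, p0=(1.0, 0.01))

# Extrapolate the fitted trend to an assumed failure threshold to
# estimate the remaining useful life.
threshold = 10.0                            # assumed failure level
t_fail = np.log(threshold / a) / b          # invert the fitted model
rul = t_fail - t[-1]
print(f"estimated failure at t = {t_fail:.1f} h, RUL = {rul:.1f} h")
```

In practice the indicator, model form and threshold are machine-specific, and the fit would be updated as new measurements arrive.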

Relevance: 100.00%

Abstract:

This paper presents an approach, based on the Lean production philosophy, for rationalising the processes involved in producing specification documents for construction projects. Current construction literature erroneously depicts the creation of construction specifications as a linear process, and this traditional understanding often culminates in process-wastes. On the contrary, the evidence suggests that, although they can be generalised, the activities involved in producing specification documents are non-linear. Drawing on the outcome of participant observation, this paper presents an optimised approach for representing construction specifications. The actors typically involved in producing specification documents are identified, the processes suitable for automation are highlighted, and the central role of tacit knowledge is integrated into a conceptual template of construction specifications. By applying the transformation-flow-value (TFV) theory of Lean production, the paper argues that value creation can be realised by eliminating the wastes associated with the traditional preparation of specification documents, with a view to integrating specifications into digital models such as Building Information Models (BIM). The paper therefore presents the TFV theory as a method for optimising current approaches to generating construction specifications, based on a revised specification-writing model.

Relevance: 100.00%

Abstract:

Nitrogen fertiliser is a major source of atmospheric N2O, and over recent years evidence has grown for a non-linear, exponential relationship between N fertiliser application rate and N2O emissions. However, there is still high uncertainty around this relationship for many cropping systems. We conducted year-round measurements of N2O emission and lint yield under four N rate treatments (0, 90, 180 and 270 kg N ha-1) in a cotton-fallow rotation on a black vertosol in Australia. We observed a non-linear, exponential response of N2O emissions to increasing N fertiliser rates, with cumulative annual N2O emissions of 0.55, 0.67, 1.07 and 1.89 kg N ha-1 for the four respective N fertiliser rates, while no yield response to N occurred above 180 kg N ha-1. The fertiliser-induced annual N2O emission factors (EFs) increased from 0.13% to 0.29% and 0.50% for the 90, 180 and 270 kg N ha-1 treatments respectively, significantly lower than the IPCC Tier 1 default value (1.0%). This non-linear response suggests that an exponential N2O emissions model may be more appropriate for estimating emissions of N2O from soils cultivated to cotton in Australia. It also demonstrates that improved agricultural N management practices can be adopted in cotton to substantially reduce N2O emissions without affecting yield potential.
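The exponential response reported above can be reproduced directly from the four treatment means given in the abstract. The sketch below fits the commonly used form N2O = a·exp(b·N) by non-linear least squares and recomputes the fertiliser-induced emission factors; the functional form is one standard choice, and the fit is to the four reported means only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Treatment means from the abstract: N fertiliser rate (kg N/ha) vs
# cumulative annual N2O emission (kg N/ha).
n_rate = np.array([0.0, 90.0, 180.0, 270.0])
n2o = np.array([0.55, 0.67, 1.07, 1.89])

def exp_model(x, a, b):
    """Exponential emission response: N2O = a * exp(b * N_rate)."""
    return a * np.exp(b * x)

(a, b), _ = curve_fit(exp_model, n_rate, n2o, p0=(0.5, 0.005))
print(f"fitted: N2O = {a:.3f} * exp({b:.5f} * N)")

# Fertiliser-induced emission factor (EF): extra emission relative to the
# zero-N control, as a percentage of the N applied.
for rate, obs in zip(n_rate[1:], n2o[1:]):
    ef = 100 * (obs - n2o[0]) / rate
    print(f"{rate:>5.0f} kg N/ha: EF = {ef:.2f}%")
```

The recomputed EFs (0.13%, 0.29%, 0.50%) match the values quoted in the abstract, confirming how they were derived.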

Relevance: 100.00%

Abstract:

This work has led to the development of empirical mathematical models to quantitatively predict the changes in morphology of an osteocyte-like cell line (MLO-Y4) in culture. MLO-Y4 cells were cultured at low density and the changes in morphology recorded over 11 hours. Cell area and three dimensionless shape features, namely aspect ratio, circularity and solidity, were then determined using widely accepted image analysis software (ImageJ). Based on the data obtained from the image analysis, mathematical models were developed using non-linear regression. The developed models accurately predict the morphology of MLO-Y4 cells at different culture times and can therefore be used as a reference for analysing MLO-Y4 cell morphology changes within various biological/mechanical studies, as necessary.
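As an illustration of the kind of model the abstract describes, the sketch below fits a saturating-growth curve to cell-area measurements by non-linear regression. The model form, the data values and the time points are all invented for illustration; the study's actual data and fitted equations are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ImageJ-style measurements: mean cell area (um^2) at hourly
# intervals as cells spread after seeding (synthetic values).
hours = np.arange(0, 12, dtype=float)
area = np.array([210, 340, 450, 530, 600, 640, 680, 700,
                 715, 725, 730, 735], dtype=float)

def spreading(t, a_max, k, a0):
    """Saturating-growth model: area rises from a0 toward plateau a_max."""
    return a_max - (a_max - a0) * np.exp(-k * t)

(a_max, k, a0), _ = curve_fit(spreading, hours, area, p0=(700, 0.3, 200))
print(f"plateau area: {a_max:.0f} um^2, rate constant k = {k:.2f}/h")
```

Once fitted, the curve serves as the "reference model" idea in the abstract: a measured area at a given culture time can be compared against the predicted value.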

Relevance: 100.00%

Abstract:

Osteocytes are the most abundant cells in human bone tissue. Due to their unique morphology and location, osteocytes are thought to act as regulators in the bone remodelling process, and are believed to play an important role in astronauts' bone mass loss after long-term space missions. There is increasing evidence that an osteocyte's functions are highly affected by its morphology. However, changes in osteocyte morphology under altered gravity are still not well documented. Several in vitro studies have recently investigated the morphological response of osteocytes to microgravity; in these studies the cells were cultured on a two-dimensional flat surface for at least 24 hours before the microgravity experiments, and morphology changes were then assessed by comparing cell area with 1g control cells. However, osteocytes found in vivo have a more three-dimensional morphology, and both the cell body and the dendritic processes are sensitive to mechanical loading. Round osteocytes have a less stiff cytoskeleton and are more sensitive to mechanical stimulation than cells with a flat morphology. Thus, the relatively flat, spread shape of isolated osteocytes in 2D culture may greatly hamper their sensitivity to mechanical stimuli, and the lack of knowledge of osteocytes' morphological characteristics in culture may lead to subjective and incomplete conclusions about how altered gravity affects osteocyte morphology. In this work, empirical models were developed to quantitatively predict the changes in morphology of an osteocyte cell line (MLO-Y4) in culture, and the response of relatively round osteocytes to hyper-gravity stimulation was also investigated.
The morphology changes of MLO-Y4 cells in culture were quantified by measuring cell area and three dimensionless shape features, namely aspect ratio, circularity and solidity, using widely accepted image analysis software (ImageJ). MLO-Y4 cells were cultured at low density (5×10^3 per well) and the changes in morphology were recorded over 10 hours. Based on the data obtained from the image analysis, empirical models were developed using non-linear regression. The developed models accurately predict the morphology of MLO-Y4 cells at different culture times and can therefore be used as a reference for analysing MLO-Y4 cell morphology changes within various biological/mechanical studies. The morphological response of MLO-Y4 cells with a relatively round morphology to a hyper-gravity environment was investigated using a centrifuge: after 2 hours of culture, MLO-Y4 cells were exposed to 20g for 30 minutes. Changes in morphology were quantitatively analysed by measuring the average cell area and the dimensionless shape factors. In this study, no significant morphology changes were detected in MLO-Y4 cells under the hyper-gravity environment (20g for 30 minutes) compared with 1g control cells.
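The dimensionless shape factors named above have standard ImageJ-style definitions: circularity is 4πA/P² and solidity is the ratio of the outline area to its convex-hull area. The sketch below computes them for a synthetic outline; note that the aspect ratio here is taken from the bounding box rather than ImageJ's fitted ellipse, which is a simplifying assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def shoelace_area(pts):
    """Polygon area via the shoelace formula."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def perimeter(pts):
    """Total edge length of a closed polygon outline."""
    return np.sum(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))

def shape_features(pts):
    """ImageJ-style dimensionless shape factors for a cell outline."""
    a = shoelace_area(pts)
    p = perimeter(pts)
    hull = ConvexHull(pts)                      # hull.volume is area in 2D
    circularity = 4 * np.pi * a / p**2          # 1.0 for a perfect circle
    solidity = a / hull.volume                  # 1.0 for a convex outline
    spans = pts.max(axis=0) - pts.min(axis=0)   # bounding-box aspect ratio
    aspect = spans.max() / spans.min()
    return circularity, solidity, aspect

# A near-circular outline should score close to 1 on all three factors.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
c, s, ar = shape_features(circle)
print(f"circularity={c:.3f}, solidity={s:.3f}, aspect={ar:.3f}")
```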

Relevance: 100.00%

Abstract:

After attending this presentation, attendees will gain awareness of the ontogeny of cranial maturation, specifically: (1) the fusion timings of primary ossification centers in the basicranium; and (2) the temporal pattern of closure of the anterior fontanelle, used to develop new population-specific age standards for medicolegal death investigation of Australian subadults. This presentation will impact the forensic science community by demonstrating the potential of a contemporary forensic subadult Computed Tomography (CT) database of cranial scans and population data to recalibrate existing standards for age estimation and to quantify growth and development of Australian children. This research offers a study design applicable to all countries faced with a paucity of skeletal repositories. Accurate assessment of age-at-death of skeletal remains is a key element of forensic anthropology methodology. In Australian casework, age standards derived from American reference samples are applied owing to the scarcity of documented Australian skeletal collections. Practitioners currently rely on antiquated standards, such as the Scheuer and Black [1] compilation for age estimation, despite the implications of secular trends and population variation. Skeletal maturation standards are population specific and should not be extrapolated from one population to another, while secular changes in skeletal dimensions and accelerated maturation underscore the importance of establishing modern standards to estimate age in modern subadults. Despite CT imaging becoming the gold standard for skeletal analysis in Australia, practitioners caution against applying forensic age standards derived from macroscopic inspection to a CT medium, suggesting a need for revised methodologies. Multi-slice CT scans of subadult crania and cervical vertebrae 1 and 2 were acquired from 350 Australian individuals (males: n=193, females: n=157) aged birth to 12 years.
The CT database, projected at 920 individuals upon completion (January 2014), comprises thin-slice DICOM data (resolution: 0.5/0.3 mm) of patients scanned since 2010 at major Brisbane children's hospitals. DICOM datasets were subject to manual segmentation, followed by the construction of multi-planar and volume-rendered cranial models for subsequent scoring. The unions of the primary ossification centers of the occipital bone were scored as open, partially closed or completely closed, while the fontanelles and vertebrae were scored according to two stages. Transition analysis was applied to elucidate the age at transition between union states for each center, and robust age parameters were established using Bayesian statistics. Closure of the fontanelles and contiguous sutures in Australian infants occurs earlier than reported in the literature, with the anterior fontanelle transitioning from open to closed at 16.7±1.1 months. The metopic suture is closed prior to 10 weeks post-partum and completely obliterated by 6 months of age, independent of sex. Utilizing reverse-engineering capabilities, an alternative method for infant age estimation based on quantification of fontanelle area and non-linear regression with variance-component modeling will be presented. Closure models indicate that the greatest rate of change in anterior fontanelle area occurs prior to 5 months of age. This study complements the work of Scheuer and Black [1], providing more specific age intervals for the union and temporal maturity of each primary ossification center of the occipital bone. For example, dominant fusion of the sutura intra-occipitalis posterior occurs before 9 months of age, followed by persistence of a tongue of hyaline cartilage posterior to the foramen magnum until 2.5 years, with obliteration at 2.9±0.1 years.
Recalibrated age parameters for the atlas and axis are presented, with the anterior arch of the atlas appearing at 2.9 months in females and 6.3 months in males, while the dentoneural, dentocentral and neurocentral junctions of the axis transitioned from non-union to union at 2.1±0.1 years in females and 3.7±0.1 years in males. These results exemplify significant sexual dimorphism in maturation (p<0.05), with girls exhibiting union earlier than boys, justifying the need for sex-specific standards for age estimation. Studies such as this are imperative for providing updated standards for Australian forensic and pediatric practice, and give an insight into the skeletal development of this population. During this presentation, the utility of novel regression models for age estimation of infants will be discussed, with emphasis on three-dimensional modeling of complex structures such as the fontanelles for the development of new age estimation methods.
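The fontanelle-area approach can be illustrated with a toy non-linear regression: fit a closure model to area-versus-age data, then invert it to estimate age from a measured area. The calibration values and the exponential model form below are purely hypothetical; the study's CT-derived data and variance-component model are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: anterior fontanelle area (cm^2) vs age
# (months). Illustrative values only, consistent with closure by roughly
# 16-17 months as reported in the abstract.
age = np.array([1, 3, 5, 8, 11, 14, 16], dtype=float)
area = np.array([6.0, 4.1, 2.8, 1.5, 0.8, 0.4, 0.2])

def closure(t, a0, k):
    """Exponential closure model: area decays toward zero with age."""
    return a0 * np.exp(-k * t)

(a0, k), _ = curve_fit(closure, age, area, p0=(7.0, 0.2))

def estimate_age(measured_area):
    """Invert the fitted model to estimate age from fontanelle area."""
    return np.log(a0 / measured_area) / k

print(f"area 2.0 cm^2 -> estimated age {estimate_age(2.0):.1f} months")
```

A real standard would also report an age interval, not a point estimate, which is where the variance-component modeling mentioned above comes in.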

Relevance: 100.00%

Abstract:

This paper is concerned with the equilibrium exchange of ammonium ions with two natural zeolite samples sourced in Australia from Castle Mountain Zeolites and Zeolite Australia. A range of sorption models, including Langmuir-Vageler, Competitive Langmuir, Freundlich, Temkin, Dubinin-Astakhov and Brouers-Sotolongo, were applied in order to gain insight into the exchange process. In contrast to most previous studies, non-linear regression was used in all instances to determine the best fit to the experimental data. Castle Mountain natural zeolite was found to exhibit higher ammonium capacity than the Zeolite Australia material in the as-received state, and this behaviour was related to the greater amount of sodium ions relative to calcium ions on the zeolite exchange sites. The zeolite capacity for ammonium ions was also found to depend on the solution normality, with a 35-60% increase in uptake noted when increasing the ammonium concentration from 250 to 1000 mg/L. The optimal fit of the equilibrium data was achieved by the Freundlich expression, as confirmed by use of Akaike's Information Criterion. It was emphasized that the chosen bottle-point method influenced the isotherm profile in several ways and could lead to misleading interpretation of experiments, especially if the constant-zeolite-mass approach was followed. Pre-treatment of the natural zeolite with acid and subsequently sodium hydroxide promoted the uptake of ammonium species by at least 90%. This paper highlights the factors which should be taken into account when investigating ammonium ion exchange with natural zeolites.
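The model-comparison step described above can be sketched as follows: fit two isotherms by non-linear regression and rank them with the Akaike Information Criterion computed from the residual sum of squares. The equilibrium data below are invented and deliberately follow a Freundlich-like trend; the paper's measured values are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: ammonium concentration at equilibrium
# Ce (mg/L) vs uptake qe (mg/g) on natural zeolite (illustrative values).
ce = np.array([5, 20, 60, 150, 300, 600], dtype=float)
qe = np.array([4.2, 7.5, 11.8, 16.5, 21.0, 26.8])

def freundlich(c, kf, n):
    """Freundlich isotherm: qe = Kf * Ce^(1/n)."""
    return kf * c ** (1.0 / n)

def langmuir(c, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * c / (1 + kl * c)

def aic(y, y_hat, n_params):
    """Akaike Information Criterion from the residual sum of squares."""
    rss = np.sum((y - y_hat) ** 2)
    return y.size * np.log(rss / y.size) + 2 * n_params

pf, _ = curve_fit(freundlich, ce, qe, p0=(2.0, 2.5))
pl, _ = curve_fit(langmuir, ce, qe, p0=(30.0, 0.01))
aic_f = aic(qe, freundlich(ce, *pf), 2)
aic_l = aic(qe, langmuir(ce, *pl), 2)
print(f"Freundlich AIC: {aic_f:.1f}   Langmuir AIC: {aic_l:.1f}")
```

The model with the lower AIC is preferred; on this synthetic data that is the Freundlich expression, mirroring the paper's conclusion for its real data.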

Relevance: 100.00%

Abstract:

The benefits of applying tree-based methods to the modelling of financial assets, as opposed to linear factor analysis, are increasingly being understood by market practitioners. Tree-based models such as CART (classification and regression trees) are particularly well suited to analysing stock market data, which are noisy and often contain non-linear relationships and high-order interactions. CART was originally developed in the 1980s by researchers disheartened by the stringent assumptions applied by traditional regression analysis (Breiman et al. [1984]). In the intervening years, CART has been successfully applied to many areas of finance, such as the classification of financial distress of firms (see Frydman, Altman and Kao [1985]), asset allocation (see Sorensen, Mezrich and Miller [1996]), equity style timing (see Kao and Shumaker [1999]) and stock selection (see Sorensen, Miller and Ooi [2000])...
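A minimal example of why CART suits such data: a depth-2 regression tree recovers an AND-type interaction between two factors that no linear factor model can represent. The "value" and "momentum" factors and the payoff structure below are invented for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic returns with a non-linear interaction: the payoff is positive
# only when BOTH factors are positive (illustrative, not a replication of
# any study cited in the abstract).
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 2))                  # columns: value, momentum
y = np.where((X[:, 0] > 0) & (X[:, 1] > 0), 1.0, -0.5)
y = y + rng.normal(0, 0.1, size=y.size)         # market noise

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["value", "momentum"]))
```

A linear regression of y on the two factors would assign each a small positive slope and miss the interaction entirely; the tree recovers it with two splits near zero.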

Relevance: 90.00%

Abstract:

Purpose: Television viewing time, independent of leisure-time physical activity, has cross-sectional relationships with the metabolic syndrome and its individual components. We examined whether baseline and five-year changes in self-reported television viewing time are associated with changes in continuous biomarkers of cardio-metabolic risk (waist circumference, triglycerides, high-density lipoprotein cholesterol, systolic and diastolic blood pressure, fasting plasma glucose, and a clustered cardio-metabolic risk score) in Australian adults. Methods: AusDiab is a prospective, population-based cohort study with biological, behavioral, and demographic measures collected in 1999–2000 and 2004–2005. Non-institutionalized adults aged ≥25 years were measured at baseline (11,247; 55% of those completing an initial household interview); 6,400 took part in the five-year follow-up biomedical examination, and 3,846 met the inclusion criteria for this analysis. Multiple linear regression analysis was used, and unstandardized B coefficients (95% CI) are reported. Results: Baseline television viewing time (per 10 hours/week) was not significantly associated with change in any of the biomarkers of cardio-metabolic risk. Increases in television viewing time over five years (per 10 hours/week) were associated with increases in waist circumference (cm) (men: 0.43 (0.08, 0.78), P = 0.02; women: 0.68 (0.30, 1.05), P < 0.001), diastolic blood pressure (mmHg) (women: 0.47 (0.02, 0.92), P = 0.04), and the clustered cardio-metabolic risk score (women: 0.03 (0.01, 0.05), P = 0.007). These associations were independent of baseline television viewing time, of baseline and change in physical activity, and of other potential confounders. Conclusion: These findings indicate that an increase in television viewing time is associated with adverse cardio-metabolic biomarker changes.
Further prospective studies using objective measures of several sedentary behaviors are required to confirm causality of the associations found.
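The change-on-change analysis described above amounts to ordinary least squares on five-year difference scores with baseline adjustment. The sketch below reproduces the structure of that model on synthetic data; the coefficients, sample values and confounder set are illustrative, not the AusDiab estimates.

```python
import numpy as np

# Synthetic cohort: five-year change in waist circumference regressed on
# change in TV time, adjusted for baseline TV time and change in physical
# activity (all values invented for illustration).
rng = np.random.default_rng(1)
n = 3846
tv_base = rng.gamma(4, 3, n)                  # baseline TV time, h/week
tv_change = rng.normal(0, 6, n)               # 5-year change, h/week
pa_change = rng.normal(0, 2, n)               # change in activity, h/week
waist_change = 0.05 * tv_change - 0.3 * pa_change + rng.normal(0, 4, n)

# Design matrix with intercept; solve ordinary least squares.
X = np.column_stack([np.ones(n), tv_change, tv_base, pa_change])
beta, *_ = np.linalg.lstsq(X, waist_change, rcond=None)

# Report the TV-change coefficient per 10 h/week, the unit used above.
print(f"waist change per 10 h/week increase in TV time: {10*beta[1]:.2f} cm")
```

With a true coefficient of 0.05 cm per h/week, the recovered estimate is around 0.5 cm per 10 h/week, the same order as the published associations.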

Relevance: 90.00%

Abstract:

In this thesis we are interested in financial risk, and the instrument we use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task, and it is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness: is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs.
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data generating process underlying the data and helps in choosing a good technique to model them. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. The GLDs also suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
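The historical-simulation benchmark mentioned above is straightforward to state in code: the VaR at confidence level 1 − α is the negated α-quantile of returns over a moving window. The sketch below uses synthetic heavy-tailed (Student-t) returns as a stand-in for an index such as the ASX 200; the thesis's GLD-based estimators are not reproduced.

```python
import numpy as np

# Synthetic daily returns with heavy tails (Student-t, 4 degrees of
# freedom), a stylised stand-in for real index returns.
rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_t(df=4, size=2000)

def historical_var(r, alpha=0.01, window=500):
    """One-day VaR at confidence 1 - alpha from the last `window` returns.

    Historical simulation: take the empirical alpha-quantile of past
    returns and report it as a positive loss figure.
    """
    return -np.quantile(r[-window:], alpha)

var99 = historical_var(returns)
print(f"99% one-day VaR: {var99:.4f} ({100 * var99:.2f}% of portfolio value)")
```

The moving window makes the estimate adapt to changing volatility, at the cost of ignoring any structure beyond the window's empirical distribution, which is exactly the gap that parametric tools such as the GLDs aim to fill.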

Relevance: 90.00%

Abstract:

Hot and cold temperatures significantly increase mortality rates around the world, but which measure of temperature is the best predictor of mortality is not known. We used mortality data from 107 US cities for the years 1987–2000 and examined the association between temperature and mortality using Poisson regression and modelled a non-linear temperature effect and a non-linear lag structure. We examined mean, minimum and maximum temperature with and without humidity, and apparent temperature and the Humidex. The best measure was defined as that with the minimum cross-validated residual. We found large differences in the best temperature measure between age groups, seasons and cities, and there was no one temperature measure that was superior to the others. The strong correlation between different measures of temperature means that, on average, they have the same predictive ability. The best temperature measure for new studies can be chosen based on practical concerns, such as choosing the measure with the least amount of missing data.
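The modelling approach can be sketched with a minimal Poisson regression fitted by iteratively reweighted least squares (IRLS), using a quadratic temperature term to produce the characteristic U-shaped mortality curve. This is a simplification: the study used non-linear lag structures and cross-validated residuals, which are not reproduced here, and the data below are synthetic.

```python
import numpy as np

# Synthetic daily data: log mortality assumed quadratic in temperature,
# with minimum risk near 20 C (illustrative truth, not the study's model).
rng = np.random.default_rng(3)
temp = rng.uniform(-5, 35, 1500)                 # daily mean temperature (C)
eta_true = 3.0 + 0.002 * (temp - 20.0) ** 2
deaths = rng.poisson(np.exp(eta_true))

# Design matrix: intercept, temperature, temperature squared.
X = np.column_stack([np.ones_like(temp), temp, temp ** 2])

# Initialise from a log-linear least-squares fit, then refine with IRLS,
# the standard fitting algorithm for Poisson GLMs.
beta, *_ = np.linalg.lstsq(X, np.log(deaths + 0.5), rcond=None)
for _ in range(25):
    eta = X @ beta
    mu = np.exp(eta)                             # Poisson mean and weight
    z = eta + (deaths - mu) / mu                 # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

# Temperature at which predicted mortality is lowest (quadratic vertex).
t_min = -beta[1] / (2 * beta[2])
print(f"minimum-mortality temperature: {t_min:.1f} C")
```

Swapping `temp` for minimum, maximum or apparent temperature in the design matrix and comparing held-out residuals is the essence of the comparison the study performs.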

Relevance: 90.00%

Abstract:

During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis from the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. 
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections has been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all the 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.

Relevance: 90.00%

Abstract:

Artificial neural network (ANN) learning methods provide a robust, non-linear approach to approximating the target function in many classification, regression and clustering problems, and ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge: the poor comprehensibility of the learned ANN, and its inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explaining the decision process of ANNs in the form of symbolic rules (predicate rules with variables); and (2) providing explanatory capability by mapping the general conceptual knowledge learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules, and subsequently allows user interaction through an interface with a knowledge-based reasoner. The performance of GYAN is demonstrated using a number of real-world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity; and (2) a concise explanation is given (in terms of the rules, facts and predicates activated in a reasoning episode) as to why a particular instance is classified into a certain category.
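One widely used, simplified form of such knowledge extraction is a surrogate model: train a small decision tree to mimic the network's predictions and read its branches as rules, measuring fidelity as agreement with the network. This propositional sketch only illustrates the fidelity and comprehensibility ideas; GYAN's restricted first-order rules and reasoning interface are richer than this.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Train an opaque network on a synthetic classification task.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# Surrogate extraction: fit a shallow tree to the NETWORK'S labels, not
# the true labels, so its branches describe the network's behaviour.
net_labels = net.predict(X)
rules = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net_labels)

# Fidelity: how closely the extracted rules reproduce the ANN's decisions.
fidelity = accuracy_score(net_labels, rules.predict(X))
print(export_text(rules, feature_names=[f"x{i}" for i in range(4)]))
print(f"fidelity to the network: {fidelity:.2f}")
```

The depth limit is the comprehensibility knob: deeper surrogates track the network more faithfully but yield longer, less readable rules.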