884 results for Linear coregionalization model
Abstract:
On-going, high-profile public debate about climate change has focussed attention on how to monitor the soil organic carbon stock (Cs) of rangelands (savannas). Unfortunately, optimal sampling of the rangelands for baseline Cs - the critical first step towards efficient monitoring - has received relatively little attention to date. Moreover, in the rangelands of tropical Australia relatively little is known about how Cs is influenced by the practice of cattle grazing. To address these issues we used linear mixed models to: (i) unravel how grazing pressure (over a 12-year period) and soil type have affected Cs and the stable carbon isotope ratio of soil organic carbon (δ13C) (a measure of the relative contributions of C3 and C4 vegetation to Cs); (ii) examine the spatial covariation of Cs and δ13C; and, (iii) explore the amount of soil sampling required to adequately determine baseline Cs. Modelling was done in the context of the material coordinate system for the soil profile, therefore the depths reported, while conventional, are only nominal. Linear mixed models revealed that soil type and grazing pressure interacted to influence Cs to a depth of 0.3 m in the profile. At a depth of 0.5 m there was no effect of grazing on Cs, but the soil type effect on Cs was significant. Soil type influenced δ13C to a soil depth of 0.5 m but there was no effect of grazing at any depth examined. The linear mixed model also revealed the strong negative correlation of Cs with δ13C, particularly to a depth of 0.1 m in the soil profile. This suggested that increased Cs at the study site was associated with increased input of C from C3 trees and shrubs relative to the C4 perennial grasses; as the latter form the bulk of the cattle diet, we contend that C sequestration may be negatively correlated with forage production.
Our baseline Cs sampling recommendation for cattle-grazing properties of the tropical rangelands of Australia is to: (i) divide the property into units of apparently uniform soil type and grazing management; (ii) use stratified simple random sampling to spread at least 25 soil sampling locations about each unit, with at least two samples collected per stratum. This will be adequate to accurately estimate baseline mean Cs to within 20% of the true mean, to a nominal depth of 0.3 m in the profile.
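The two-step sampling recommendation above can be sketched as code. This is a minimal illustration only: the strata, candidate locations and random seed are hypothetical placeholders, not data from the study.

```python
import random

def stratified_sample(strata, total_min=25, per_stratum_min=2, seed=0):
    """Spread sampling locations over strata: at least `per_stratum_min`
    locations per stratum, then top up round-robin until the
    property-wide minimum of `total_min` locations is met."""
    rng = random.Random(seed)
    # Start with the minimum allocation in every stratum.
    picks = {name: rng.sample(locs, per_stratum_min) for name, locs in strata.items()}
    total = sum(len(p) for p in picks.values())
    names = sorted(strata, key=lambda n: -len(strata[n]))
    i = 0
    while total < total_min:
        name = names[i % len(names)]
        remaining = [loc for loc in strata[name] if loc not in picks[name]]
        if remaining:
            picks[name].append(rng.choice(remaining))
            total += 1
        i += 1
    return picks

# Hypothetical grid of candidate locations within each soil/grazing unit.
strata = {f"stratum_{k}": [(k, j) for j in range(40)] for k in range(10)}
plan = stratified_sample(strata)
```

The allocation satisfies both constraints from the recommendation: every stratum holds at least two locations and the unit as a whole holds at least 25.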
Abstract:
Purpose The aim of this study was to determine alterations to the corneal subbasal nerve plexus (SNP) over four years using in vivo corneal confocal microscopy (IVCM) in participants with type 1 diabetes and to identify significant risk factors associated with these alterations. Methods A cohort of 108 individuals with type 1 diabetes and no evidence of peripheral neuropathy at enrollment underwent laser-scanning IVCM, ocular screening, and health and metabolic assessment at baseline, and the examinations continued for four subsequent annual visits. At each annual visit, eight central corneal images of the SNP were selected and analyzed to quantify corneal nerve fiber density (CNFD), branch density (CNBD) and fiber length (CNFL). Linear mixed model approaches were fitted to examine the relationship between risk factors and corneal nerve parameters. Results A total of 96 participants completed the final visit and 91 participants completed all visits. No significant relationships were found between corneal nerve parameters and time, sex, duration of diabetes, smoking, alcohol consumption, blood pressure or BMI. However, CNFD was negatively associated with HbA1c (β=-0.76, P<0.01) and age (β=-0.13, P<0.01) and positively related to high-density lipoprotein (HDL) (β=2.01, P=0.03). Higher HbA1c (β=-1.58, P=0.04) and age (β=-0.23, P<0.01) also negatively impacted CNBD. CNFL was only affected by higher age (β=-0.06, P<0.01). Conclusions Glycemic control, HDL and age have significant effects on SNP structure. These findings highlight the importance of diabetic management to prevent corneal nerve damage as well as the capability of IVCM for monitoring subclinical alterations in the corneal SNP in diabetes.
Abstract:
Introduction: Extreme heat events (both heat waves and extremely hot days) are increasing in frequency and duration globally and cause more deaths in Australia than any other extreme weather event. Numerous studies have demonstrated a link between extreme heat events and an increased risk of morbidity and death. In this study, the researchers sought to identify whether extreme heat events in the Tasmanian population were associated with any changes in emergency department admissions to the Royal Hobart Hospital (RHH) for the period 2003-2010. Methods: Non-identifiable RHH emergency department data and climate data from the Australian Bureau of Meteorology were obtained for the period 2003-2010. Statistical analyses were conducted using the statistical software ‘R’, with the distributed lag non-linear model (DLNM) package used to fit a quasi-Poisson generalised linear regression model. Results: This study showed that the relative risk (RR) of admission to RHH during 2003-2010 was significant at temperatures above 24 °C, with a lag effect lasting 12 days and the main effect noted one day after the extreme heat event. Discussion: This study demonstrated that extreme heat events have a significant impact on public hospital admissions. Two limitations were identified: admissions data rather than presentations data were used, and further analysis could be done to compare types of admissions and presentations between heat and non-heat events. Conclusion: With the impacts of climate change already being felt in Australia, public health organisations in Tasmania and the rest of Australia need to implement adaptation strategies to enhance resilience and protect the public from the adverse health effects of heat events and climate change.
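The lag structure underlying the DLNM analysis can be illustrated with a minimal design-matrix sketch. A real analysis would use the spline cross-basis of the R dlnm package; this sketch shows only the idea of lagged heat-exceedance predictors (the 24 °C threshold and 12-day maximum lag come from the abstract; the temperature series is invented).

```python
def lagged_heat_matrix(temps, threshold=24.0, max_lag=12):
    """Build a distributed-lag predictor matrix: row t holds the heat
    exceedance (temperature minus threshold, floored at 0) for lags
    0..max_lag. Days without a full lag history are dropped."""
    exceed = [max(t - threshold, 0.0) for t in temps]
    rows = []
    for t in range(max_lag, len(temps)):
        rows.append([exceed[t - lag] for lag in range(max_lag + 1)])
    return rows

temps = [20 + (d % 14) for d in range(60)]  # toy 60-day temperature series
X = lagged_heat_matrix(temps)               # 48 rows x 13 lag columns
```

Each row of X would then enter the quasi-Poisson regression of daily admission counts, with coefficients per lag tracing out the delayed effect.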
Abstract:
Concerns about excessive sediment loads entering the Great Barrier Reef (GBR) lagoon in Australia have led to a focus on improving ground cover in grazing lands. Ground cover has been identified as an important factor in reducing sediment loads, but improving ground cover has been difficult for reef stakeholders in major catchments of the GBR. To provide better information an optimising linear programming model based on paddock scale information in conjunction with land type mapping was developed for the Fitzroy, the largest of the GBR catchments. This identifies at a catchment scale which land types allow the most sediment reduction to be achieved at least cost. The results suggest that from the five land types modelled, the lower productivity land types present the cheapest option for sediment reductions. The study allows more informed decision making for natural resource management organisations to target investments. The analysis highlights the importance of efficient allocation of natural resource management funds in achieving sediment reductions through targeted land type investments.
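The least-cost principle behind the LP model can be sketched with a simple greedy allocation. The land-type names, costs and capacities below are hypothetical placeholders, and the greedy cost-effectiveness ranking is only a stand-in for solving the actual linear program.

```python
def least_cost_reduction(land_types, target):
    """Greedy sketch of the least-cost principle: fund land types in order
    of cost per tonne of sediment avoided until the reduction target is
    met. Returns (plan, total_cost)."""
    plan, total_cost, achieved = [], 0.0, 0.0
    for name, cost_per_t, capacity_t in sorted(land_types, key=lambda x: x[1]):
        if achieved >= target:
            break
        take = min(capacity_t, target - achieved)
        plan.append((name, take))
        total_cost += take * cost_per_t
        achieved += take
    return plan, total_cost

# Hypothetical ($/t avoided, capacity in t) figures for five land types.
land_types = [
    ("brigalow", 120.0, 400.0),
    ("low-productivity woodland", 35.0, 900.0),
    ("open downs", 80.0, 600.0),
    ("frontage", 150.0, 300.0),
    ("ranges", 50.0, 500.0),
]
plan, cost = least_cost_reduction(land_types, target=1200.0)
```

With these toy numbers the two cheapest (lowest-productivity) land types absorb the whole target, mirroring the abstract's finding that lower productivity land presents the cheapest reduction option.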
Abstract:
Fusarium wilt of strawberry, incited by Fusarium oxysporum f. sp. fragariae (Fof), is a major disease of the cultivated strawberry (Fragaria × ananassa) worldwide. An increase in disease outbreaks of the pathogen in Western Australia and Queensland, plus the search for alternative disease management strategies, places emphasis on the development of resistant cultivars. In response, a partial incomplete diallel cross involving four parents was performed for use in glasshouse resistance screenings. The resulting progeny were evaluated for their susceptibility to Fof. The best-performing progeny, and the suitability of progenies as parents, were determined from disease severity ratings analyzed using a linear mixed model incorporating a pedigree to produce best linear unbiased predictions of breeding values. Variation in disease response, ranging from highly susceptible to resistant, indicates a quantitative effect. The estimate of the narrow-sense heritability was 0.49 +/- 0.04 (SE), suggesting the population should be responsive to phenotypic recurrent selection. Several progeny genotypes have predicted breeding values higher than any of the parents. Knowledge of Fof resistance derived from this study can help select the best parents for future crosses in the development of new strawberry cultivars with Fof resistance.
Abstract:
Congenital long QT syndrome (LQTS), with an estimated prevalence of 1:2000-1:10 000, manifests with prolonged QT interval on electrocardiogram and risk for ventricular arrhythmias and sudden death. Several ion channel genes and hundreds of mutations in these genes have been identified to underlie the disorder. In Finland, four LQTS founder mutations of potassium channel genes account for up to 40-70% of the genetic spectrum of LQTS. Acquired LQTS has similar clinical manifestations, but often arises from usage of QT-prolonging medication or electrolyte disturbances. A prolonged QT interval is associated with increased morbidity and mortality not only in clinical LQTS but also in patients with ischemic heart disease and in the general population. The principal aim of this study was to estimate the actual prevalence of LQTS founder mutations in Finland and to calculate their effect on QT interval in the Finnish background population. Using a large population-based sample of over 6000 Finnish individuals from the Health 2000 Survey, we identified LQTS founder mutations KCNQ1 G589D (n=8), KCNQ1 IVS7-2A>G (n=1), KCNH2 L552S (n=2), and KCNH2 R176W (n=16) in 27 study participants. This resulted in a weighted prevalence estimate of 0.4% for LQTS in Finland. Using a linear regression model, the founder mutations resulted in a 22- to 50-ms prolongation of the age-, sex-, and heart rate-adjusted QT interval. Collectively, these data suggest that one of 250 individuals in Finland may be genetically predisposed to ventricular arrhythmias arising from the four LQTS founder mutations. A KCNE1 D85N minor allele with a frequency of 1.4% was associated with a 10-ms prolongation in adjusted QT interval and could thus identify individuals at increased risk of ventricular arrhythmias at the population level. In addition, the previously reported associations of KCNH2 K897T, KCNH2 rs3807375, and NOS1AP rs2880058 with QT interval duration were confirmed in the present study.
In a separate study, LQTS founder mutations were identified in a subgroup of acquired LQTS, providing further evidence that congenital LQTS gene mutations may underlie acquired LQTS. Catecholaminergic polymorphic ventricular tachycardia (CPVT) is characterized by exercise-induced ventricular arrhythmias in a structurally normal heart and results from defects in the cardiac Ca2+ signaling proteins, mainly ryanodine receptor type 2 (RyR2). In a patient population of typical CPVT, RyR2 mutations were identifiable in 25% (4/16) of patients, implying that noncoding variants or other genes are involved in CPVT pathogenesis. A 1.1 kb RyR2 exon 3 deletion was identified in two patients independently, suggesting that this region may provide a new target for RyR2-related molecular genetic studies. Two novel RyR2 mutations showing a gain-of-function defect in vitro were identified in three victims of sudden cardiac death. Extended pedigree analyses revealed some surviving mutation carriers with mild structural abnormalities of the heart and resting ventricular arrhythmias suggesting that not all RyR2 mutations lead to a typical CPVT phenotype, underscoring the relevance of tailored risk stratification of a RyR2 mutation carrier.
Abstract:
This article presents the constitutive model for a magnetostrictive material and its effect on the structural response. The magnetostrictive material considered is TERFENOL-D. Like piezoelectric materials, this material has two constitutive laws, one a sensing law and the other an actuation law, both of which are highly coupled and non-linear. For the purpose of analysis, the constitutive laws can be characterized as coupled or uncoupled and linear or non-linear. The coupled model is studied without assuming any explicit direct relationship with the magnetic field. In the linear coupled model, which is assumed to preserve magnetic flux line continuity, the elastic modulus, the permeability and the magneto-elastic constant are assumed constant. In the nonlinear-coupled model, the nonlinearity is decoupled and solved separately for the magnetic domain and the mechanical domain using two nonlinear curves, namely the stress vs. strain curve and the magnetic flux density vs. magnetic field curve. This is performed by two different methods. In the first, the magnetic flux density is computed iteratively, while in the second an artificial neural network is used, wherein the trained network gives the necessary strain and magnetic flux density for a given magnetic field and stress level. The effect of nonlinearity is demonstrated on a simple magnetostrictive rod.
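The first (iterative) method for the nonlinear-coupled model can be sketched as a fixed-point loop over the two constitutive laws. The coefficient curves d(H) and mu(B) below are smooth toy functions chosen for illustration, not measured TERFENOL-D data, and the elastic modulus is an assumed placeholder.

```python
def solve_flux_density(H, sigma, E=2.65e10, tol=1e-10, max_iter=200):
    """Fixed-point sketch of the iterative method: flux density B and
    strain eps are coupled through state-dependent coefficients, so B is
    updated until it stops changing (toy coefficient curves below)."""
    d = lambda h: 1.5e-8 / (1.0 + abs(h) / 1.0e5)  # magneto-elastic coeff, toy
    mu = lambda b: 4e-6 / (1.0 + b * b)            # saturating permeability, toy
    B = 0.0
    for _ in range(max_iter):
        B_new = d(H) * sigma + mu(B) * H           # sensing law: B = d*sigma + mu*H
        if abs(B_new - B) < tol:
            break
        B = B_new
    eps = sigma / E + d(H) * H                     # actuation law: eps = sigma/E + d*H
    return B, eps

B, eps = solve_flux_density(H=5.0e4, sigma=1.0e7)
```

Because mu depends on the current B, the sensing law cannot be evaluated in one pass; the loop converges quickly here since the toy mu(B) is a contraction over this range.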
Abstract:
The Hybrid approach introduced by the authors for at-site modeling of annual and periodic streamflows in earlier works is extended to simulate multi-site multi-season streamflows. It bears significance in integrated river basin planning studies. This hybrid model involves: (i) partial pre-whitening of standardized multi-season streamflows at each site using a parsimonious linear periodic model; (ii) contemporaneous resampling of the resulting residuals with an appropriate block size, using moving block bootstrap (non-parametric, NP) technique; and (iii) post-blackening the bootstrapped innovation series at each site, by adding the corresponding parametric model component for the site, to obtain generated streamflows at each of the sites. It gains significantly by effectively utilizing the merits of both parametric and NP models. It is able to reproduce various statistics, including the dependence relationships at both spatial and temporal levels without using any normalizing transformations and/or adjustment procedures. The potential of the hybrid model in reproducing a wide variety of statistics including the run characteristics, is demonstrated through an application for multi-site streamflow generation in the Upper Cauvery river basin, Southern India.
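Step (ii), the moving block bootstrap of residuals, can be sketched as follows; the residual series and block size are invented for illustration.

```python
import random

def moving_block_bootstrap(residuals, block_size, seed=0):
    """Resample a residual series in overlapping blocks, so short-range
    temporal dependence inside each block is preserved without any
    distributional assumption."""
    rng = random.Random(seed)
    n = len(residuals)
    blocks = [residuals[i:i + block_size] for i in range(n - block_size + 1)]
    out = []
    while len(out) < n:
        out.extend(rng.choice(blocks))  # draw whole blocks with replacement
    return out[:n]

series = [0.3, -1.1, 0.8, 0.2, -0.5, 1.4, -0.9, 0.1, 0.6, -0.2, 0.9, -1.3]
boot = moving_block_bootstrap(series, block_size=3)
```

For the multi-site, contemporaneous variant of step (ii), the same block start indices would be drawn at every site, which is what preserves the spatial dependence between sites.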
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine which can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch is equal to the longest processing time among all the jobs in the batch. Due to the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize total weighted tardiness. We have formulated the scheduling problem as an integer linear programming model and empirically show its computational intractability. In response, we propose a few simple greedy heuristic algorithms and a meta-heuristic algorithm, simulated annealing (SA). A series of computational experiments is conducted to evaluate the performance of the proposed heuristic algorithms, in comparison with exact solutions on various small-size problem instances and with estimated optimal solutions on various real-life large-size problem instances. The computational results show that the SA algorithm, with an initial solution obtained using our own proposed greedy heuristic algorithm, consistently finds a robust solution in a reasonable amount of computation time.
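The batching constraints of the burn-in machine model can be sketched in code. The earliest-due-date, first-fit packing rule below is a hypothetical stand-in for the authors' greedy heuristics; only the constraints from the abstract (total size within capacity, batch time equal to the longest job in the batch) are taken from the source.

```python
def greedy_batches(jobs, capacity):
    """Toy heuristic: take jobs in earliest-due-date order and pack each
    into the first open batch with room; a batch's processing time is the
    longest processing time among its jobs."""
    batches = []  # each batch: {'jobs': [...], 'size': used capacity}
    for job in sorted(jobs, key=lambda j: j["due"]):
        for b in batches:
            if b["size"] + job["size"] <= capacity:
                b["jobs"].append(job)
                b["size"] += job["size"]
                break
        else:
            batches.append({"jobs": [job], "size": job["size"]})
    for b in batches:
        b["ptime"] = max(j["ptime"] for j in b["jobs"])  # batch time = longest job
    return batches

jobs = [
    {"id": 1, "size": 4, "ptime": 6.0, "due": 10.0},
    {"id": 2, "size": 3, "ptime": 2.0, "due": 8.0},
    {"id": 3, "size": 5, "ptime": 4.0, "due": 12.0},
    {"id": 4, "size": 2, "ptime": 5.0, "due": 9.0},
]
batches = greedy_batches(jobs, capacity=8)
```

A construction like this could seed the SA search, which would then perturb batch assignments to reduce total weighted tardiness.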
Abstract:
We study effective models of chiral fields and the Polyakov loop expected to describe the dynamics responsible for the phase structure of two-flavor QCD at finite temperature and density. We consider a chiral sector described using either the linear sigma model or the Nambu-Jona-Lasinio model, study the phase diagram, and determine the location of the critical point as a function of the explicit chiral symmetry breaking (i.e. the bare quark mass $m_q$). We also discuss the possible emergence of the quarkyonic phase in this model.
Abstract:
During 1990 to 2009, Foreign Direct Investment (FDI henceforth) in Finland fluctuated greatly. This paper focuses on analyzing the overall development and basic characteristics of FDI in Finland, covering the period from 1990 to the present. By comparing FDI in Finland with FDI in other countries, a clearer picture of Finland's FDI position in the world market emerges. Extensive statistical data, tables and figures are used to describe the trend of FDI in Finland. All the data used in this study were obtained from Statistics Finland, UNCTAD, the OECD, the World Bank, the International Labour Office and the Investment Map website, among other sources. There is a large, long-lasting and increasing imbalance between inward and outward FDI in Finland: the performance of outward FDI is stronger than that of inward FDI, and Finland's position in world FDI is rather modest. Based on existing theories, I analyze the factors that might determine the size of FDI inflows into Finland. The econometric model of my thesis is based on time series data ranging from 1990 to 2007. A log-linear regression model is adopted to analyze the impact of each variable. The regression results show that labor cost and investment in education have a negative influence on FDI inflows into Finland. High labor cost is the main impediment to FDI in Finland, explaining the relatively small size of FDI inflows into the country. GDP and economic openness have a significant positive impact on FDI inflows into Finland; other variables do not emerge as significant factors, contrary to expectations. Meanwhile, the impacts of the most recent financial and economic crisis on FDI in the world and in Finland are discussed as well. FDI inflows worldwide and in Finland suffered a large setback from the 2008 global crisis.
The economic crisis has undoubtedly had a significant negative influence on FDI flows in the world and in Finland. Nevertheless, apart from the negative impact, the crisis also gives policymakers a chance to implement more efficient policies in order to create a pro-business and pro-investment climate for the recovery of FDI inflows. The corresponding policies and measures aimed at accelerating the recovery of falling FDI are also discussed.
Abstract:
A linear optimization model was used to calculate seven wood procurement scenarios for the years 1990, 2000 and 2010. Productivity and cost functions for seven cutting methods, five terrain transport methods, three long-distance transport methods and various work supervision and scaling methods were calculated from available work study reports. All methods are based on the Nordic cut-to-length system. Finland was divided into three parts to describe harvesting conditions. Twenty imaginary wood processing points and their wood procurement areas were created for these areas. The procurement systems, which consist of the harvesting conditions and work productivity functions, were described as a simulation model. In the LP model the wood procurement system has to fulfil the volume and wood assortment requirements of the processing points while minimizing procurement cost. The model consists of 862 variables and 560 restrictions. The results show that it is economical to increase the share of mechanized harvesting work. Cost-increase alternatives have only a small effect on the profitability of manual work. The areas of later thinnings and of seed tree and shelterwood cuttings increase at the expense of first thinnings. In mechanized work one method, the 10-tonne single-grip harvester with forwarder, is gaining an advantage over the other methods. Forwarder working hours are decreasing, in contrast to those of the harvester. There is only little need to increase the number of harvesters and trucks, or their drivers, from today's level. Quite large fluctuations in procurement level and cost can be handled with a constant number of machines, by varying the number of seasonal workers and by running machines in two shifts. This is possible if some environmental problems of large-scale summertime harvesting can be solved.
Abstract:
The absorption produced by the audience in concert halls is considered a random variable. Beranek's proposal [L. L. Beranek, Music, Acoustics and Architecture (Wiley, New York, 1962), p. 543] that audience absorption is proportional to the area the audience occupies, and not to its number, is subjected to a statistical hypothesis test. A two-variable linear regression model of the absorption with audience area and residual area as regressor variables is postulated for concert halls without added absorptive materials. Since Beranek's contention amounts to the statement that audience absorption is independent of the seating density, the test of the hypothesis lies in categorizing halls by seating density and examining for significant differences among slopes of regression planes of the different categories. Such a test shows that Beranek's hypothesis can be accepted. It is also shown that the audience area is a better predictor of the absorption than the audience number. The absorption coefficients and their 95% confidence limits are given for the audience and residual areas. A critique of the regression model is presented.
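The two-variable regression can be sketched with the 2x2 normal equations, fitted through the origin so that the slopes play the role of absorption coefficients for the two areas. The hall data below are invented, with total absorption generated exactly from assumed coefficients so the fit recovers them; this is an illustration of the model form, not the paper's data.

```python
def fit_absorption(audience_area, residual_area, absorption):
    """Least-squares fit (through the origin) of A = a*Sa + b*Sr, where a
    and b estimate the audience and residual absorption coefficients.
    Solved directly via the 2x2 normal equations."""
    saa = sum(x * x for x in audience_area)
    srr = sum(y * y for y in residual_area)
    sar = sum(x * y for x, y in zip(audience_area, residual_area))
    sab = sum(x * z for x, z in zip(audience_area, absorption))
    srb = sum(y * z for y, z in zip(residual_area, absorption))
    det = saa * srr - sar * sar
    a = (srr * sab - sar * srb) / det
    b = (saa * srb - sar * sab) / det
    return a, b

# Hypothetical halls: absorption generated from a=0.85, b=0.10 exactly.
Sa = [800.0, 1200.0, 950.0, 1500.0, 700.0]   # audience areas (m^2)
Sr = [1600.0, 2100.0, 1700.0, 2600.0, 1500.0]  # residual areas (m^2)
A = [0.85 * x + 0.10 * y for x, y in zip(Sa, Sr)]
a, b = fit_absorption(Sa, Sr, A)
```

The seating-density test in the abstract then amounts to fitting (a, b) separately per density category and testing whether the slopes differ significantly.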
Abstract:
Given a classical dynamical theory with second-class constraints, it is sometimes possible to construct another theory with first-class constraints, i.e., a gauge-invariant one, which is physically equivalent to the first theory. We identify some conditions under which this may be done, explaining the general principles and working out several examples. Field theoretic applications include the chiral Schwinger model and the non-linear sigma model. An interesting connection with the work of Faddeev and Shatashvili is pointed out.
Abstract:
Background: A nucleosome is the fundamental repeating unit of the eukaryotic chromosome. It has been shown that the positioning of a majority of nucleosomes is primarily controlled by factors other than the intrinsic preference of the DNA sequence. One of the key questions in this context is the role, if any, that can be played by the variability of nucleosomal DNA structure. Results: In this study, we have addressed this question by analysing the variability at the dinucleotide and trinucleotide as well as longer length scales in a dataset of nucleosome X-ray crystal structures. We observe that the nucleosome structure displays remarkable local level structural versatility within the B-DNA family. The nucleosomal DNA also incorporates a large number of kinks. Conclusions: Based on our results, we propose that the local and global level versatility of B-DNA structure may be a significant factor modulating the formation of nucleosomes in the vicinity of high-plasticity genes, and in varying the probability of binding by regulatory proteins. Hence, these factors should be incorporated in the prediction algorithms and there may not be a unique 'template' for predicting putative nucleosome sequences. In addition, the multimodal distribution of dinucleotide parameters for some steps and the presence of a large number of kinks in the nucleosomal DNA structure indicate that the linear elastic model, used by several algorithms to predict the energetic cost of nucleosome formation, may lead to incorrect results.
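The linear elastic model criticized in the conclusions can be written out explicitly: a harmonic penalty 0.5*k*(x - x0)^2 per dinucleotide-step parameter, summed over all steps. The force constants and rest values below are toy numbers, not fitted crystallographic values; the point is only the functional form, which is exactly what a multimodal parameter distribution or a kink violates.

```python
def elastic_energy(steps, params):
    """Harmonic (linear elastic) deformation energy assumed by several
    nucleosome-prediction algorithms: each dinucleotide step pays
    0.5 * k * (x - x0)**2 for every structural parameter, where x0 is the
    rest value and k the force constant for that step type."""
    energy = 0.0
    for step, values in steps:
        for name, x in values.items():
            k, x0 = params[step][name]
            energy += 0.5 * k * (x - x0) ** 2
    return energy

# Toy (force constant, rest value) pairs per step type and parameter.
params = {
    "AA": {"twist": (0.06, 35.6), "roll": (0.02, 0.7)},
    "GC": {"twist": (0.07, 33.7), "roll": (0.02, 4.6)},
}
steps = [("AA", {"twist": 37.0, "roll": 3.0}),
         ("GC", {"twist": 33.0, "roll": 5.0})]
E = elastic_energy(steps, params)
```

A kinked step sits far from x0 in a way a single quadratic well cannot represent cheaply, which is why the abstract argues this energy function can mislead prediction algorithms.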