31 results for Linear coregionalization model

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

Efficiently inducing precise causal models that accurately reflect given data sets is the ultimate goal of causal discovery. The algorithms proposed by Dai et al. have demonstrated the ability of the Minimum Message Length (MML) principle to discover Linear Causal Models from training data. To further improve efficiency, this paper incorporates Hoeffding bounds into the learning process. At each step of causal discovery, if a small number of data items is enough to distinguish the better model from the rest, the remaining data items are ignored, reducing the computation cost. Experiments with data sets from related benchmark models indicate that the new algorithm achieves a speedup over previous work in learning efficiency while preserving discovery accuracy.
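The abstract does not spell out how the Hoeffding bound is applied; the minimal sketch below only illustrates the stopping idea it describes, deciding from per-item score differences when the better of two candidate models is already distinguishable. The score quantity, `value_range` and `delta` are assumptions, not the authors' algorithm.

```python
import numpy as np

def hoeffding_epsilon(value_range, n, delta=1e-6):
    """Hoeffding bound: with probability 1 - delta, the true mean of a quantity
    bounded in a range of width value_range lies within epsilon of the mean of
    n observed samples."""
    return np.sqrt(value_range ** 2 * np.log(1.0 / delta) / (2.0 * n))

def enough_data_to_separate(per_item_score_diff, value_range, delta=1e-6):
    """Scan the per-item score differences between two candidate models and
    return the number of items after which the better model is already
    distinguishable, or None if the whole data set is needed."""
    running_sum = 0.0
    for n, diff in enumerate(per_item_score_diff, start=1):
        running_sum += diff
        if abs(running_sum / n) > hoeffding_epsilon(value_range, n, delta):
            return n  # remaining data items can be ignored at this step
    return None
```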

Relevance: 100.00%

Abstract:

Determining the causal structure of a domain is a key task in the area of Data Mining and Knowledge Discovery. The algorithm proposed by Wallace et al. [15] has demonstrated a strong ability to discover Linear Causal Models from given data sets. However, some experiments showed that this algorithm experiences difficulty in discovering linear relations with small deviations, and it occasionally gives a negative message length, which should not be possible. In this paper, a more efficient and precise MML encoding scheme is proposed to describe the model structure and the nodes in a Linear Causal Model. The estimation of the different parameters is also derived. Empirical results show that the new algorithm outperforms the previous MML-based algorithm in terms of both speed and precision.
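The paper's refined encoding scheme is not reproduced in the abstract; the sketch below only illustrates the two-part MML idea it builds on, with a toy uniform code for the structure and a Gaussian negative log-likelihood standing in for the node and parameter costs. All function names and coding choices here are illustrative assumptions.

```python
import numpy as np
from math import comb, log2

def structure_cost_bits(n_nodes, n_edges):
    """Toy first part of the message: state how many directed edges the causal
    model has, then which of the possible edges they are (uniform codes only;
    not the refined encoding proposed in the paper)."""
    max_edges = n_nodes * (n_nodes - 1)
    return log2(max_edges + 1) + log2(comb(max_edges, n_edges))

def data_cost_bits(residuals, sigma):
    """Toy second part: Gaussian negative log-likelihood (in bits) of the
    regression residuals, standing in for the detailed node and parameter
    encoding."""
    r = np.asarray(residuals, dtype=float)
    nll_nats = 0.5 * r.size * np.log(2 * np.pi * sigma ** 2) + r @ r / (2 * sigma ** 2)
    return nll_nats / np.log(2)

def message_length_bits(n_nodes, n_edges, residuals, sigma):
    """Total two-part message length: the shorter the message, the better the model."""
    return structure_cost_bits(n_nodes, n_edges) + data_cost_bits(residuals, sigma)
```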

Relevance: 100.00%

Abstract:

One common drawback of algorithms for learning Linear Causal Models is that they cannot deal with incomplete data sets. This is unfortunate, since many real problems involve missing data or even hidden variables. In this paper, based on multiple imputation, we propose a three-step process to learn linear causal models from incomplete data sets. Experimental results indicate that this algorithm is better than the single-imputation method (the EM algorithm) and the simple list-deletion method, and for lower missing rates it can even find better models than the greedy learning algorithm MLGS working on the complete data set. In addition, the method is amenable to parallel or distributed processing, which is an important characteristic for data mining in large data sets.
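The three-step process is not detailed in the abstract; a minimal sketch under simple assumptions (columnwise Gaussian imputation and a caller-supplied `learn_model` function) might look like this. The paper's actual imputation model and combination step may differ.

```python
import numpy as np

def impute_once(data, rng):
    """Fill each missing entry (NaN) with a draw from a normal distribution
    fitted to the observed values of that column (a simple stand-in for the
    paper's imputation model)."""
    filled = data.copy()
    for j in range(data.shape[1]):
        col = data[:, j]
        miss = np.isnan(col)
        mu, sd = col[~miss].mean(), col[~miss].std()
        filled[miss, j] = rng.normal(mu, sd, size=miss.sum())
    return filled

def learn_from_incomplete(data, learn_model, m=5, seed=0):
    """Three-step scheme: impute m times, learn a causal model from each
    completed data set, and return the resulting models for combination."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    return [learn_model(impute_once(data, rng)) for _ in range(m)]
```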

Relevance: 100.00%

Abstract:

This thesis made an outstanding contribution to automating the discovery of linear causal models. It introduced a highly efficient discovery algorithm, which implements new encoding, ensemble and accelerating strategies. Theoretical research and experimental work showed that this new discovery algorithm outperforms the previous system in both accuracy and efficiency.

Relevance: 100.00%

Abstract:

The production of carbon fiber, particularly the oxidation/stabilization step, is a complex process. In the present study, a non-linear mathematical model has been developed for the prediction of density of polyacrylonitrile (PAN) and oxidized PAN fiber (OPF), as a key physical property for various applications, such as energy and material optimization, modeling, and design of the stabilization process. The model is based on the available functional groups in PAN and OPF. Expected functional groups, including [Formula presented], [Formula presented], –CH2, [Formula presented], and [Formula presented], were identified and quantified through the full deconvolution analysis of Fourier transform infrared attenuated total reflectance (FT-IR ATR) spectra obtained from fibers. These functional groups form the basis of three stabilization rendering parameters, representing the cyclization, dehydrogenation and oxidation reactions that occur during PAN stabilization, and are used as the independent variables of the non-linear predictive model. The k-fold cross validation approach, with k = 10, has been employed to find the coefficients of the model. This model estimates the density of PAN and OPF independent of operational parameters and can be expanded to all operational parameters. Statistical analysis revealed good agreement between the governing model and experiments. The maximum relative error was less than 1% for the present model.
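The published functional form of the density model is not given in the abstract, so the sketch below uses a hypothetical non-linear expression of the three stabilization parameters purely to illustrate the 10-fold cross-validation fitting step; `density_model`, its coefficients and the data layout are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def density_model(X, a0, a1, a2, a3):
    """Hypothetical non-linear form: density as a saturating function of the
    cyclization, dehydrogenation and oxidation indices (not the published equation)."""
    cyc, dehyd, oxi = X
    return a0 + a1 * (1 - np.exp(-a2 * cyc)) + a3 * dehyd * oxi

def kfold_fit(X, y, k=10, seed=0):
    """10-fold cross-validation: fit on k-1 folds, report held-out RMSE,
    then refit on all data to obtain the final coefficients."""
    X, y = np.asarray(X, float), np.asarray(y, float)   # X has shape (3, n)
    idx = np.random.default_rng(seed).permutation(len(y))
    rmse = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        popt, _ = curve_fit(density_model, X[:, train], y[train], p0=[1, 0.1, 1, 0.1])
        pred = density_model(X[:, fold], *popt)
        rmse.append(np.sqrt(np.mean((pred - y[fold]) ** 2)))
    popt, _ = curve_fit(density_model, X, y, p0=[1, 0.1, 1, 0.1])
    return popt, float(np.mean(rmse))
```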

Relevance: 90.00%

Abstract:

Determining the causal structure of a domain is frequently a key task in the area of Data Mining and Knowledge Discovery. This paper introduces ensemble learning into linear causal model discovery and then examines several algorithms based on different ensemble strategies, including Bagging, Adaboost and GASEN. Experimental results show that (1) an ensemble discovery algorithm can achieve better accuracy than an individual causal discovery algorithm; (2) among all examined ensemble discovery algorithms, the BWV algorithm, which uses a simple Bagging strategy, works excellently compared with other, more sophisticated ensemble strategies; (3) the ensemble method can also improve the stability of parameter estimation. In addition, the ensemble discovery algorithm is amenable to parallel and distributed processing, which is important for data mining in large data sets.
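As a rough illustration of the Bagging-based strategy (the BWV algorithm itself is not specified in the abstract), structure discovery can be wrapped in bootstrap resampling with a per-edge majority vote; `discover` is an assumed base learner returning a set of directed edges.

```python
import numpy as np

def bagged_structure(data, discover, n_rounds=20, threshold=0.5, seed=0):
    """Bagging for structure discovery: run the base discovery algorithm on
    bootstrap resamples of the data and keep the edges that win a majority
    vote.  `discover` is assumed to return a set of (parent, child) edges."""
    data = np.asarray(data)
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_rounds):
        sample = data[rng.integers(0, len(data), size=len(data))]
        for edge in discover(sample):
            votes[edge] = votes.get(edge, 0) + 1
    return {edge for edge, v in votes.items() if v / n_rounds >= threshold}
```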

Relevance: 90.00%

Abstract:

The problem of "model selection" for adequately expressing a wide range of constitutive behaviour using hot torsion test data was considered here using a heuristic approach. A model library including several nested parametric linear and non-linear models was considered and applied to a set of hot torsion test data for API-X 70 micro-alloyed steel over a range of strain rates and temperatures. A cost function comparing the modelled hot strength with the measured data was used in a heuristic model selection scheme to identify the optimum models. It was shown that a non-linear rational model with ten parameters is an optimum model that can accurately express the multiple regimes of hardening and softening over the entire range of the experiment. The parameters of the optimum model were estimated and used to determine the variation of the hot strength of the samples with deformation.
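The thesis' cost function and model library are not given in the abstract; the sketch below only shows the general heuristic-selection pattern, with an AIC-like cost that penalises the number of parameters. `model_library` (mapping a model name to a function and an initial parameter vector) and the penalty weight are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def select_model(model_library, strain, stress, complexity_weight=2.0):
    """Fit each candidate constitutive model to the measured flow-stress data
    and score it with a cost that trades residual error against the number of
    parameters (an AIC-like penalty, not the thesis' exact cost function).
    Returns (name, cost, fitted parameters) of the best-scoring model."""
    strain, stress = np.asarray(strain, float), np.asarray(stress, float)
    best = None
    for name, (model_fn, p0) in model_library.items():
        fit = least_squares(lambda p: model_fn(strain, p) - stress, p0)
        cost = len(stress) * np.log(np.mean(fit.fun ** 2)) + complexity_weight * len(p0)
        if best is None or cost < best[1]:
            best = (name, cost, fit.x)
    return best
```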

Relevance: 80.00%

Abstract:

BACKGROUND: Estimating changes in weight from changes in energy balance is important for predicting the effect of obesity prevention interventions. OBJECTIVE: The objective was to develop and validate an equation for predicting the mean weight of a population of children in response to a change in total energy intake (TEI) or total energy expenditure (TEE). DESIGN: In 963 children with a mean (±SD) age of 8.1 ± 2.8 y (range: 4-18 y) and weight of 31.5 ± 17.6 kg, TEE was measured by using doubly labeled water. Log weight (dependent variable) and log TEE (independent variable) were analyzed in a linear regression model with height, age, and sex as covariates. It was assumed that points of dynamic balance, called "settling points," occur for populations wherein energy is in balance (TEE = TEI), weight is stable (ignoring growth), and energy flux (EnFlux) equals TEE. RESULTS: TEE (or EnFlux) explained 74% of the variance in weight. The unstandardized regression coefficient was 0.45 (95% CI: 0.38, 0.51; R² = 0.86) after including covariates. Conversion into proportional changes (time 1 to time 2) gave the equation weight_2/weight_1 = (EnFlux_2/EnFlux_1)^0.45. In 3 longitudinal studies (n = 212; mean follow-up of 3.4 y), the equation predicted the mean follow-up measured weight to within 0.5%. CONCLUSIONS: The relation of EnFlux with weight was positive, which implied that a high TEI (rather than low physical activity and low TEE) was the main determinant of high body weight. Two populations of children with a 10% difference in mean EnFlux would have a 4.5% difference in mean weight.
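The reported settling-point relation translates directly into a one-line calculation; the sketch below simply evaluates weight_2/weight_1 = (EnFlux_2/EnFlux_1)^0.45 and reproduces the 10%-EnFlux example from the conclusions.

```python
def predicted_weight_ratio(enflux_ratio, exponent=0.45):
    """Settling-point relation from the abstract:
    weight_2 / weight_1 = (EnFlux_2 / EnFlux_1) ** 0.45."""
    return enflux_ratio ** exponent

# A 10% difference in mean energy flux between two populations of children
# predicts roughly a 4.5% difference in mean weight, as stated in the abstract.
print(predicted_weight_ratio(1.10))  # ~1.044
```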

Relevance: 80.00%

Abstract:

Objective: The VNTR polymorphism 5' of the insulin gene has been related to obesity in a previous study on children with early onset of severe obesity. Our purpose was to analyze the association between this polymorphism and adiposity variability in an unselected population of children and adolescents in northern France.

Research Methods and Procedures: In 293 nuclear families from the Fleurbaix Laventie Ville Santé study, we genotyped the INS VNTR polymorphism in 431 children and adolescents (8 to 18 years of age) and their parents. Overweight was defined according to the international definition in both children and adults. A transmission disequilibrium test in families with an overweight offspring was performed. The prevalence of overweight was compared according to genotype. The effect of the genotype on BMI and waist circumference was tested with a linear regression model, adjusting for age, gender, and Tanner stage.

Results: There was an undertransmission of class III alleles from heterozygous parents to their overweight offspring (p < 0.002). Overweight was associated with class I alleles in children and adolescents (12% I/I, I/III vs. 3% III/III; p < 0.08). Those with a class III/III genotype had a 1 kg/m2 lower mean BMI (p = 0.04) and 3 cm lower waist circumference (p = 0.02) than those bearing one or two class I alleles. No association of adiposity or obesity with class I alleles was found in parents.

Discussion: INS VNTR polymorphism seems to contribute to differences in adiposity level in the general population of children and adolescents.
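A minimal stand-in for the adjusted analysis described in the methods (BMI regressed on genotype with age, sex and Tanner stage as covariates) is an ordinary least-squares fit like the one below; the variable names and coding are assumptions, and the published analysis may have used different software and adjustments.

```python
import numpy as np

def genotype_effect(bmi, is_class3_homozygote, age, sex, tanner):
    """OLS of BMI on an indicator for the INS VNTR III/III genotype, adjusting
    for age, sex and Tanner stage (a simplified stand-in for the paper's
    linear regression model).  Returns the adjusted BMI difference for III/III
    carriers, expected to be negative (about -1 kg/m^2 in the abstract)."""
    bmi = np.asarray(bmi, dtype=float)
    X = np.column_stack([np.ones_like(bmi), is_class3_homozygote, age, sex, tanner])
    beta, *_ = np.linalg.lstsq(X, bmi, rcond=None)
    return beta[1]
```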

Relevance: 80.00%

Abstract:

This paper presents an innovative strategy to synchronize all virtual clocks in asynchronous Internet environments. Our model is based on an architecture of one reference clock and many slave clocks communicating with each other over the Internet. The paper makes three major contributions to this research area. Firstly, one-way information transmission is applied to reduce traffic overhead on the Internet for the purpose of clock synchronization. Secondly, the slave nodes use their local virtual time and the arrival timestamps from the reference node to create linear mathematical trend models and to recover the clock precision differences between the reference clock and the slave clocks. Finally, a fault-tolerant and self-adaptive model, executed by each slave node and based on the above linear trend model, is created to ensure that the virtual clock keeps running normally even when the link between the reference node and that slave node has crashed. We also present detailed simulations of this strategy and a mathematical analysis based on real Internet environments.
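The paper's trend model is not detailed in the abstract, but the slave-side idea (fit reference timestamps against local virtual time, then keep extrapolating from the trend when the link fails) can be sketched as a least-squares line; the class and attribute names are assumptions.

```python
import numpy as np

class LinearClockTrend:
    """Slave-side sketch: fit a linear trend between the slave's local virtual
    time and the reference timestamps that arrive via one-way messages, then
    predict the reference time from the trend even if the link has crashed."""

    def fit(self, local_times, reference_timestamps):
        A = np.column_stack([local_times, np.ones(len(local_times))])
        (self.rate, self.offset), *_ = np.linalg.lstsq(A, reference_timestamps, rcond=None)
        return self

    def reference_time(self, local_time):
        # rate captures the clock precision (rate) difference, offset the phase
        return self.rate * local_time + self.offset
```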

Relevance: 80.00%

Abstract:

Efficiency measurement is at the heart of most management accounting functions. Data envelopment analysis (DEA) is a linear programming technique used to measure relative efficiency of organisational units referred in DEA literature as decision making units (DMUs). Universities are complex organisations involving multiple inputs and outputs (Abbott & Doucouliagos, 2008). There is no agreement in identifying and measuring the inputs and outputs of higher education institutes (Avkiran, 2001). Hence, accurate efficiency measurement in such complex institutes needs rigorous research.
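As background for readers unfamiliar with DEA, an input-oriented CCR efficiency score for one DMU can be computed with a small linear program, as sketched below. This is a generic textbook formulation, not the model specification used in the study, and the faculty-level input/output data layout is an assumption.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(inputs, outputs, dmu):
    """Input-oriented CCR efficiency of one decision making unit (e.g. a faculty).
    inputs: (m, n) array of m inputs for n DMUs; outputs: (s, n) array.
    Returns theta in (0, 1]; theta = 1 means the DMU lies on the efficient frontier."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta over (theta, lambda)
    A_in = np.hstack([-X[:, [dmu]], X])         # sum_j lam_j * x_ij <= theta * x_i,dmu
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j lam_j * y_rj >= y_r,dmu
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, dmu]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return float(res.x[0])
```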

Prior DEA studies have investigated the application of the technique at university (Avkiran, 2001; Abbott & Doucouliagos, 2003; Abbott & Doucouliagos, 2008) or department/school (Beasley, 1990; Sinuany-Stern, Mehrez & Barboy, 1994) levels. The organisational unit that has control over, and hence responsibility for, inputs and outputs is the most appropriate decision making unit (DMU) for DEA to provide useful managerial information. In the current study, DEA has been applied at faculty level for two reasons. First, in the case university, as with most other universities, inputs and outputs are more accurately identified with faculties than with departments/schools. Second, efficiency results at university level are highly aggregated and do not provide detailed managerial information.

Prior DEA time series studies have used input and output cost and income data without adjusting for changes in time value of money. This study examines the effects of adjusting financial data for changes in dollar values without proportional changes in the quantity of the inputs and the outputs. The study is carried out mainly from management accounting perspective. It is mainly focused on the use of the DEA efficiency information for managerial decision purposes. It is not intended to contribute to the theoretical development of the linear programming model. It takes the view that one does not need to be a mechanic to be a good car driver.

The results suggest that adjusting financial input and output data in time series analysis changes efficiency values, rankings, reference sets and projection amounts. The findings also suggest that the case university could have saved close to $10 million per year if all faculties had operated efficiently. However, it is also recognised that quantitative performance measures have their own limitations and should be used cautiously.

Relevance: 80.00%

Abstract:

INTRODUCTION: Studies that address sensitive topics, such as female sexual difficulty and dysfunction, often achieve poor response rates that can bias results. Factors that affect response rates to studies in this area are not well characterized.
AIM: To model the response rate in studies investigating the prevalence of female sexual difficulty and dysfunction.
METHODS: Databases were searched for English-language, prevalence studies using the search terms: sexual difficulties/dysfunction, woman/women/female, prevalence, and cross-sectional. Studies that did not report response rates or were clinic-based were excluded. A multiple linear regression model was constructed.
MAIN OUTCOME MEASURES: Published response rates.
RESULTS: A total of 1,380 publications were identified, and 54 of these met our inclusion criteria. Our model explained 58% of the variance in response rates of studies investigating the prevalence of difficulty with desire, arousal, orgasm, or sexual pain (R² = 0.581, P = 0.027). This model was based on study design variables, study year, location, and the reported prevalence of each type of sexual difficulty. More recent studies (beta = -1.05, P = 0.037) and studies that only included women over 50 years of age (beta = -31.11, P = 0.007) had lower response rates. The use of face-to-face interviews was associated with a higher response rate (beta = 20.51, P = 0.036). Studies that did not include questions regarding desire difficulties achieved higher response rates than those that did include questions on desire difficulty (beta = 23.70, P = 0.034).
CONCLUSION: Response rates in prevalence studies addressing female sexual difficulty and dysfunction are frequently low and have decreased by an average of just over 1% per annum since the late 1960s. Participation may be improved by conducting interviews in person. Studies that investigate a broad range of ages may be less representative of older women, owing to a poorer response in older age groups. Lower response rates in studies that investigate desire difficulty suggest that sexual desire is a particularly sensitive topic.
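The fitted model is reported only through a few coefficients, but the kind of multiple linear regression it describes can be sketched as an ordinary least-squares fit; the predictor set below is a simplified assumption (the published model also used location and the reported prevalence of each difficulty).

```python
import numpy as np

def fit_response_rate_model(year, over50_only, face_to_face, asked_desire, response_rate):
    """Multiple linear regression in the spirit of the abstract: published
    response rate regressed on study year and design indicators (illustrative
    only, not the published model)."""
    X = np.column_stack([np.ones(len(year)), year, over50_only, face_to_face, asked_desire])
    beta, *_ = np.linalg.lstsq(X, np.asarray(response_rate, float), rcond=None)
    names = ["intercept", "year", "over50_only", "face_to_face", "asked_desire"]
    return dict(zip(names, beta))
```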

Relevance: 80.00%

Abstract:

This thesis explores the elastic behaviour of the mechanical double-action press and draw die system commonly used to draw sheet metal components in the automotive industry. High process variability in production and excessive time spent in die try-out are significant problems in automotive stamping. It has previously been suggested that the elastic behaviour of the system may contribute to these problems. However, the mechanical principles that cause the press system to affect the forming process have not been documented in detail. Because these problems are poorly understood in industry, the elasticity of the press and tools is currently not considered during die design. The aim of this work was to explore the physical principles of press system elasticity and determine the extent to which it contributes to problems in try-out and production. On the basis of this analysis, methods were developed for controlling or accounting for these problems during the design process.

The application of frictional restraining force to the edges of the blank during forming depends on the distribution and magnitude of the clamping force between the binder surfaces of the draw die; this is an important control parameter for the deep drawing process. It has been demonstrated in this work that the elasticity of the press and draw die can affect the clamping force in two ways. First, the response of the press system to the forces produced in the press during forming causes the magnitude of the clamping force to change during the stroke. This was demonstrated using measured data from a production press. A simple linear elastic model of the press system was developed to illustrate a definite link between the measured force variation and the elasticity of the press and tools. The simple model was then extended into a finite element model of the complete press system, which was used to control a forming simulation, demonstrating that stiffness variation within the system can influence the final strains in a drawn part. At the conclusion of this investigation, a method is proposed for assessing, during die design, the sensitivity of a part to clamping force variation in the press. A means of reducing variation in the press through the addition of a simple linear spring element is also discussed.

The second part of the work assessed the influence of the tool structure on the distribution of frictional restraining forces to the blank. A forming simulation showed that tool stiffness affects the distribution of clamping pressure between the binders, which was also shown to affect the final strains in a drawn part. However, the most significant influence on the restraining force was the tendency of the blank to increase in thickness between the binders during forming. Using a finite element approximation of the try-out process, it was shown that the structure of the tool also contributes to the problems currently experienced in try-out, where uneven contact pressure distributions are addressed by manually adjusting the tool surfaces.

Finally, a generalised approach to designing draw die structures was developed. Simple analysis methods were combined with finite element based topology optimisation techniques to develop a set of basic design guidelines, whose aim was to produce a structure with a uniform stiffness response to a pressure applied at the binder surface. The work concludes with a recommendation for introducing the methods developed in this thesis into the standard production process.
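The thesis' simple linear elastic press model is not reproduced in the abstract; the toy two-spring calculation below only illustrates the kind of mechanism it describes, in which elastic deflection of the press under the forming force feeds through to a clamping-force variation during the stroke. All names and stiffness values are hypothetical.

```python
def clamping_force_during_stroke(initial_clamp, forming_force, frame_stiffness, binder_stiffness):
    """Illustrative two-spring view of the press system: the forming force
    deflects the press frame, the deflection changes the binder gap, and the
    binder spring converts that into a clamping-force variation.  The thesis
    builds its model from measured production-press data; this is only a sketch."""
    frame_deflection = forming_force / frame_stiffness    # elastic frame response
    clamp_change = binder_stiffness * frame_deflection    # resulting binder force change
    return initial_clamp + clamp_change
```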

Relevance: 80.00%

Abstract:

Artificial skins exhibit different mechanical properties compared with natural skin. This drawback makes physical interaction with artificial skins different from interaction with natural skin. The present paper addresses increasing the performance of artificial skins for robotic hands and medical applications. The idea is to add active control within the artificial skin in order to improve its dynamic and static behaviour, which directly results in a more interactive artificial skin. To achieve this goal, a piece-wise linear anisotropic model for artificial skins is derived. A model of a matrix of capacitive MEMS actuators is then coupled with the artificial skin model for control purposes. Next, an active surface-shaping control is applied through the capacitive MEMS actuators, which shapes the skin with zero error within a desired time. A simulation study is presented to validate the idea of using MEMS actuators for active artificial skins. In the simulation, we actively control 128 capacitive micro-actuators for an artificial fingertip. The fingertip achieves the required shape in the required time, which means the dynamics of the skin are improved.
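The coupled skin/MEMS model is not given in the abstract; the toy loop below only illustrates the surface-shaping control idea on a locally linear, spring-like skin patch, where a feedforward term plus a proportional correction drives each actuator's patch to its desired displacement. The first-order skin dynamics and all parameter values are assumptions.

```python
import numpy as np

def shape_skin(desired, stiffness=1.0, gain=5.0, dt=1e-3, steps=2000):
    """Toy surface-shaping loop for an array of actuators on a locally linear
    (spring-like) skin: spring-force feedforward plus a proportional correction
    drives every patch toward its desired displacement (not the paper's coupled
    MEMS/skin model)."""
    desired = np.asarray(desired, dtype=float)
    x = np.zeros_like(desired)                        # current displacement per actuator
    for _ in range(steps):
        force = stiffness * desired + gain * (desired - x)
        x += dt * (force - stiffness * x)             # first-order (overdamped) skin response
    return x                                          # converges toward `desired`
```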

Relevance: 80.00%

Abstract:

Epoetin-δ (Dynepo™, Shire Pharmaceuticals, Basingstoke, UK) is a synthetic form of erythropoietin (EPO) whose resemblance to endogenous EPO makes it hard to identify using the classical identification criteria. Urine samples collected from six healthy volunteers treated with epoetin-δ injections and from a control population were immuno-purified and analyzed with the usual IEF method. On the basis of the integration of the EPO profiles, a linear multivariate model was computed for discriminant analysis. For each sample, a pattern classification algorithm returned a bands distribution and intensity score (bands intensity score) indicating how representative the sample is of one of the two classes, positive or negative. Effort profiles were also integrated into the model. The method yielded a good sensitivity versus specificity relation and was used to determine the detection window of the molecule following multiple injections. The bands intensity score, which can be generalized to epoetin-α and epoetin-β, is proposed as an alternative criterion and supplementary evidence for the identification of EPO abuse.
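The exact definition of the bands intensity score is not given in the abstract; as a rough sketch, a linear discriminant fitted on the integrated band intensities yields a signed score of the same flavour. The function below uses scikit-learn's LinearDiscriminantAnalysis and assumed variable names, not the paper's actual model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bands_intensity_score(train_profiles, train_labels, new_profiles):
    """Fit a linear discriminant on integrated IEF band intensities from known
    positive and negative urine samples, then return a signed score for new
    samples: larger values indicate profiles more representative of the
    positive class (illustrative stand-in for the paper's bands intensity score)."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(np.asarray(train_profiles), np.asarray(train_labels))
    return lda.decision_function(np.asarray(new_profiles))
```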