46 results for Artificial neural network model
Abstract:
Machine learning techniques are used for prediction and for rule extraction from artificial neural network models. The hypothesis that market sentiment and IPO-specific attributes are equally responsible for first-day IPO returns in the US stock market is tested. The machine learning methods used are Bayesian classifiers, support vector machines, decision tree techniques, rule learners and artificial neural networks. The outcomes of the research are predictions and rules associated with first-day returns of technology IPOs. The hypothesis that first-day returns of technology IPOs are equally determined by IPO-specific and market sentiment attributes is rejected. Instead, lower-yielding IPOs are determined by both IPO-specific and market sentiment attributes, while higher-yielding IPOs depend largely on IPO-specific attributes.
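As a minimal, hypothetical illustration of the "rule learner" family named in this abstract (not the study's actual data, attributes or models), a OneR-style learner picks the single attribute whose value-to-majority-label rules classify the training rows most accurately, yielding exactly the kind of human-readable rules the abstract refers to:

```python
import collections

def one_r(rows, features, label):
    """OneR rule learner: for each feature, map each observed value to the
    majority label; keep the feature whose rules fit the rows best."""
    best = None
    for feat in features:
        counts = collections.defaultdict(collections.Counter)
        for row in rows:
            counts[row[feat]][row[label]] += 1
        rules = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        correct = sum(1 for row in rows if rules[row[feat]] == row[label])
        acc = correct / len(rows)
        if best is None or acc > best[2]:
            best = (feat, rules, acc)
    return best

# hypothetical toy rows: IPO attributes and a first-day return class
rows = [
    {"sentiment": "high", "size": "small", "ret": "high"},
    {"sentiment": "high", "size": "large", "ret": "high"},
    {"sentiment": "low",  "size": "small", "ret": "low"},
    {"sentiment": "low",  "size": "large", "ret": "low"},
]
feat, rules, acc = one_r(rows, ["sentiment", "size"], "ret")
```

On this toy data the learner recovers the rule "sentiment = high → return high, sentiment = low → return low"; the study's actual rule sets would come from the classifiers listed above.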
Abstract:
Hysteresis models that eliminate the artificial pumping errors associated with the Kool-Parker (KP) soil moisture hysteresis model, such as the Parker-Lenhard (PL) method, can be computationally demanding in unsaturated transport models since they need to retain the wetting-drying history of the system. The pumping errors in these models need to be eliminated for correct simulation of cyclical systems (e.g. transport above a tidally forced watertable, infiltration and redistribution under periodic irrigation) if the soils exhibit significant hysteresis. A modification is made here to the PL method that allows it to be more readily applied to numerical models by eliminating the need to store a large number of soil moisture reversal points. The modified-PL method largely eliminates any artificial pumping error and so essentially retains the accuracy of the original PL approach. The modified-PL method is implemented in HYDRUS-1D (version 2.0), which is then used to simulate cyclic capillary fringe dynamics to show the influence of removing artificial pumping errors and to demonstrate the ease of implementation. Artificial pumping errors are shown to be significant for the soils and system characteristics used here in numerical experiments of transport above a fluctuating watertable. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This paper presented a novel approach to developing car-following models using reactive agent techniques for mapping perceptions to actions. The results showed that the model outperformed the Gipps and Psychophysical families of car-following models. The standing of this work is highlighted by its acceptance and publication in the proceedings of the International IEEE Conference on Intelligent Transportation Systems (ITS), now recognised as the premier international conference on ITS. The paper acceptance rate for this conference was 67 percent. The standing of this paper is also evidenced by its listing in international databases such as Ei Inspec and IEEE Xplore, as well as in Google Scholar. Dr Dia co-authored this paper with his PhD student Sakda Panwai.
Abstract:
An analytical approach to the stress development in the coherent dendritic network during solidification is proposed. Under the assumption that stresses are developed in the network as a result of the friction resisting shrinkage-induced interdendritic fluid flow, the model predicts the stresses in the solid. The calculations reflect the expected effects of postponed dendrite coherency, slower solidification conditions, and variations of eutectic volume fraction and shrinkage. Comparing the calculated stresses to the measured shear strength of equiaxed mushy zones shows that it is possible for the stresses to exceed the strength, thereby resulting in reorientation or collapse of the dendritic network.
Abstract:
Keratins are the major structural proteins of keratinocytes, which are the most abundant cell type in the mammalian epidermis. Mutations in epidermal keratin genes have been shown to cause severe blistering skin abnormalities. One such disease, epidermolytic hyperkeratosis (EHK), also known as bullous congenital ichthyosiform erythroderma, occurs as a result of mutations in highly conserved regions of keratins K1 and K10. Patients with EHK first exhibit erythroderma with severe blistering, which later is replaced by thick patches of scaly skin. To assess the effect of a mutated K1 gene on skin biology and to produce an animal model for EHK, we removed 60 residues from the 2B segment of HK1 and observed the effects of its expression in the epidermis of transgenic mice. Phenotypes of the resultant mice closely resembled those observed in the human disease, first with epidermal blisters, then later with hyperkeratotic lesions. In neonatal mice homozygous for the transgene, the skin was thicker, with an increased labeling index, and the spinous cells showed a collapse of the keratin filament network around the nuclei, suggesting that a critical concentration of the mutant HK1, over the endogenous MK1, was required to disrupt the structural integrity of the spinous cells. Additionally, footpad epithelium, which is devoid of hair follicles, showed blistering in the spinous layer, suggesting that hair follicles can stabilize or protect the epidermis from trauma. Blisters were not evident in adult mice, but instead they showed a thick, scaly hyperkeratotic skin with increased mitosis, resulting in an increased number of corneocytes and granular cells. Irregularly shaped keratohyalin granules were also observed. To date, this is the only transgenic model to show the typical morphology found in the adult form of EHK.
Abstract:
Ligaments undergo finite strain, displaying hyperelastic behaviour as the initially tangled fibrils straighten out, combined with viscoelastic behaviour (strain rate sensitivity). In the present study the anterior cruciate ligament of the human knee joint is modelled in three dimensions to gain an understanding of the stress distribution over the ligament due to motion imposed on the ends, determined from experimental studies. A three-dimensional, finite strain material model of ligaments has recently been proposed by Pioletti in Ref. [2]. It is attractive as it separates the elastic stress from that due to the present strain rate and that due to the past history of deformation. However, it treats the ligament as isotropic and incompressible. While the second assumption is reasonable, the first is clearly untrue. In the present study an alternative model of the elastic behaviour, due to Bonet and Burton (Ref. [4]), is generalized. Bonet and Burton consider finite strain with constant moduli for the fibres and for the matrix of a transversely isotropic composite. In the present work, the fibre modulus is first made to increase exponentially from zero with an invariant that provides a measure of the stretch in the fibre direction. At 12% strain in the fibre direction, a new reference state is adopted, after which the material modulus is made constant, as in Bonet and Burton's model. The strain rate dependence can be added either using Pioletti's isotropic approximation or by making the effect depend on the strain rate in the fibre direction only. A solid model of a ligament is constructed, based on experimentally measured sections, and the deformation is predicted using explicit integration in time. This approach simplifies the coding of the material model, but has a limitation due to the detrimental effect on stability of integration of the substantial damping implied by the nonlinear dependence of stress on strain rate.
At present, an artificially high density is used to provide stability, while the dynamics are removed from the solution using artificial viscosity. The result is a quasi-static solution incorporating the effect of strain rate. Alternative approaches to material modelling and integration that may result in a better model are discussed.
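The piecewise fibre response described above (a modulus rising exponentially from zero up to the 12% strain reference state, constant thereafter) can be sketched in one dimension. The parameter values below are illustrative assumptions, not taken from the study; only the 12% switch strain comes from the abstract:

```python
import math

# hypothetical parameters; the abstract specifies only the 12% switch strain
E0, A, EPS_STAR = 1.0, 40.0, 0.12

def tangent_modulus(eps):
    """Fibre tangent modulus: increases exponentially from zero with strain,
    then is held constant once the 12% reference state is reached."""
    e = min(eps, EPS_STAR)
    return E0 * (math.exp(A * e) - 1.0)

def fibre_stress(eps):
    """1-D elastic fibre stress, obtained by integrating the tangent modulus:
    exponential toe region below 12% strain, linear response above it."""
    e = min(eps, EPS_STAR)
    sigma = E0 * ((math.exp(A * e) - 1.0) / A - e)
    if eps > EPS_STAR:
        sigma += tangent_modulus(EPS_STAR) * (eps - EPS_STAR)
    return sigma
```

Beyond 12% strain the stress grows linearly with the constant modulus, matching the switch to Bonet and Burton's constant-modulus form; the strain rate contribution discussed in the abstract is omitted here.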
Abstract:
This paper proposed a novel model for short-term load forecasting in the competitive electricity market. The prior electricity demand data are treated as a time series. The forecast model is based on wavelet multi-resolution decomposition, using the autocorrelation shell representation, with neural network (multilayer perceptron, or MLP) modelling of the wavelet coefficients. To minimize the influence of noisy low-level coefficients, we applied the practical Bayesian Automatic Relevance Determination (ARD) method to choose the size of the MLPs, which are then trained to provide forecasts. The individual wavelet-domain forecasts are recombined to form the overall forecast. The proposed method is tested using Queensland electricity demand data from the Australian National Electricity Market. (C) 2001 Elsevier Science B.V. All rights reserved.
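The recombination step works because a redundant multiresolution decomposition is additive: the signal equals the smooth part plus the sum of the detail bands, so per-band forecasts can simply be summed. As a minimal Haar-style stand-in for the autocorrelation shell representation (whose actual filters are not reproduced here), an à trous decomposition shows the idea:

```python
def atrous_decompose(signal, levels):
    """Redundant (a trous) multiresolution decomposition: at each level the
    smooth part is a two-tap average at spacing 2**j (periodic boundary) and
    the wavelet detail band is what that averaging removed."""
    c = list(signal)
    n = len(c)
    details = []
    for j in range(levels):
        step = 2 ** j
        c_next = [(c[t] + c[(t - step) % n]) / 2.0 for t in range(n)]
        details.append([a - b for a, b in zip(c, c_next)])
        c = c_next
    # by construction: signal[t] == c[t] + sum of details[j][t] over j
    return c, details
```

In the scheme the abstract describes, each detail band and the final smooth component would be forecast separately (by an ARD-sized MLP) and the band forecasts summed to give the overall load forecast.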
Abstract:
We constructed a BAC library of the model legume Lotus japonicus with 6- to 7-fold genome coverage. We used the vector PCLD04541, which allows direct plant transformation by BACs. The average insert size is 94 kb. Clones were stable in Escherichia coli and Agrobacterium tumefaciens.
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result which has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows linearly in L.
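For orientation, the rare event in question can be estimated by crude Monte Carlo on the embedded chain of the tandem network. This naive baseline (with illustrative rates; it is not the paper's state-dependent importance sampling scheme) is exactly what the exponential change of measure is designed to improve, since its relative error blows up as L grows:

```python
import random

def hit_probability(lam, mu1, mu2, L, start=(0, 1), runs=100_000, seed=1):
    """Crude Monte Carlo estimate of P(second buffer reaches L before it
    empties), via the uniformized embedded chain of a two-node tandem
    Jackson network. Disabled events become self-loops."""
    rng = random.Random(seed)
    total = lam + mu1 + mu2
    hits = 0
    for _ in range(runs):
        x1, x2 = start
        while 0 < x2 < L:
            u = rng.random() * total
            if u < lam:
                x1 += 1                  # external arrival at node 1
            elif u < lam + mu1:
                if x1 > 0:               # node-1 completion feeds node 2
                    x1 -= 1
                    x2 += 1
            elif x2 > 0:                 # node-2 completion (departure)
                x2 -= 1
        hits += x2 >= L
    return hits / runs
```

An importance sampling version would simulate under tilted rates and multiply each hit by the accumulated likelihood ratio; the paper's contribution is a tilting that depends on the first buffer's content and keeps the relative error bounded in L.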
Abstract:
A simple percolation theory-based method for determining pore network connectivity from liquid phase adsorption isotherm data, combined with a density functional theory (DFT)-based pore size distribution, is presented in this article. The liquid phase adsorption experiments were performed using eight different esters as adsorbates and the microporous-mesoporous activated carbons Filtrasorb-400, Norit ROW 0.8 and Norit ROX 0.8 as adsorbents. The DFT-based pore size distributions of the carbons were obtained from DFT analysis of argon adsorption data. The mean micropore network coordination numbers, Z, of the carbons were determined from DR characteristic plots and fitted saturation capacities using percolation theory. Based on this method, the critical molecular sizes of the model compounds used in this study were also obtained. The incorporation of percolation concepts in the prediction of multicomponent adsorption equilibria is also investigated, and is found to improve the performance of the ideal adsorbed solution theory (IAST) model for the large molecules utilized in this study. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The haploid NK model developed by Kauffman can be extended to diploid genomes and to incorporate gene-by-environment interaction effects in combination with epistasis. To provide the flexibility to include a wide range of forms of gene-by-environment interaction, a target population of environment types (TPE) is defined. The TPE consists of a set of E different environment types, each with its own frequency of occurrence. Each environment type conditions a different NK gene network structure, or a different series of gene effects for a given network structure, providing the framework for defining gene-by-environment interactions. Thus, different NK models can be partially or completely nested within the E environment types of a TPE, giving rise to the E(NK) model for a biological system. With this model it is possible to examine how populations of genotypes evolve in the context of properties of the environment that influence the contributions of genes to the fitness values of genotypes. We are using the E(NK) model to investigate how both epistasis and gene-by-environment interactions influence the genetic improvement of quantitative traits by plant breeding strategies applied to agricultural systems. © 2002 Wiley Periodicals, Inc.
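A minimal haploid sketch of the E(NK) construction: each of E environment types carries its own random NK contribution tables, and a genotype's fitness is the TPE-frequency-weighted average of its NK fitness across environment types. The circular choice of the K interacting loci and the random tables are illustrative assumptions, details the abstract leaves open:

```python
import itertools
import random

def nk_tables(N, K, E, seed=0):
    """One random NK contribution table per environment type: each locus's
    contribution depends on its own allele plus K neighbouring alleles."""
    rng = random.Random(seed)
    return [
        [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
         for _ in range(N)]
        for _ in range(E)
    ]

def nk_fitness(genotype, table, K):
    """NK fitness: mean per-locus contribution, loci interacting circularly."""
    N = len(genotype)
    key = lambda i: tuple(genotype[(i + d) % N] for d in range(K + 1))
    return sum(table[i][key(i)] for i in range(N)) / N

def enk_fitness(genotype, tables, env_freqs, K):
    """E(NK) fitness: NK fitness per environment type, averaged over the
    target population of environments (TPE) by frequency of occurrence."""
    return sum(f * nk_fitness(genotype, t, K) for t, f in zip(tables, env_freqs))
```

With E = 1 this reduces to the ordinary NK model; varying the frequencies in `env_freqs` is what lets gene-by-environment interaction reshape the landscape a breeding population climbs.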
Abstract:
In microarray studies, clustering techniques are often applied to derive meaningful insights into the data. In the past, hierarchical methods have been the primary clustering tool employed for this task. The hierarchical algorithms have mainly been applied heuristically to these cluster analysis problems, and a major limitation of these methods is their inability to determine the number of clusters. Thus there is a need for a model-based approach to these clustering problems. To this end, McLachlan et al. [7] developed a mixture model-based algorithm (EMMIX-GENE) for the clustering of tissue samples. To further investigate the EMMIX-GENE procedure as a model-based approach, we present a case study involving the application of EMMIX-GENE to the breast cancer data studied recently in van 't Veer et al. [10]. Our analysis considers the problem of clustering the tissue samples on the basis of the genes, which is a non-standard problem because the number of genes greatly exceeds the number of tissue samples. We demonstrate how EMMIX-GENE can be useful in reducing the initial set of genes to a more computationally manageable size. The results from this analysis also emphasise the difficulty associated with the task of separating two tissue groups on the basis of a particular subset of genes. These results also shed light on why supervised methods have such a high misallocation error rate for the breast cancer data.
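As a toy illustration of the mixture model-based view of clustering behind EMMIX-GENE (reduced here to a univariate two-component Gaussian mixture, not the EMMIX-GENE software itself), EM alternates computing responsibilities with updating the mixture parameters, and the fitted number of components can in principle be chosen by model selection rather than heuristics:

```python
import math

def em_gmm_1d(xs, iters=50):
    """EM for a two-component 1-D Gaussian mixture: E-step computes each
    point's responsibilities, M-step re-estimates weights, means, variances."""
    mu = [min(xs), max(xs)]          # spread-out initial means
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    n = len(xs)
    for _ in range(iters):
        # E-step: posterior probability of each component for each point
        resp = []
        for x in xs:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: weighted maximum-likelihood updates (variance floored)
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    labels = [0 if r[0] >= r[1] else 1 for r in resp]
    return labels, mu
```

Assigning each point to its highest-responsibility component gives the model-based clustering; EMMIX-GENE applies the same principle to multivariate gene-expression profiles after first screening and grouping the genes.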