984 results for numerical prediction


Relevance: 20.00%

Publisher:

Abstract:

Based on the eigen crack opening displacement (COD) boundary integral equations, a newly developed computational approach is proposed for the analysis of multiple crack problems. The eigen COD refers in particular to a crack in an infinite domain under fictitious traction acting on the crack surface. With the concept of eigen COD, problems with large numbers of cracks can be solved using the conventional displacement discontinuity boundary integral equations in an iterative fashion with a small system matrix. The interactions among cracks are split into two parts according to their distance from the current crack: the strong effects of cracks in the adjacent group are treated with the aid of a local Eshelby matrix derived from the traction BIEs in discrete form, while the relatively weak effects of cracks in the far-field group are handled within the iteration procedure. Numerical examples are provided for the stress intensity factors of multiple cracks, up to several thousand in number, computed with the proposed approach. By comparison with analytical solutions in the literature as well as solutions of the dual boundary integral equations, the effectiveness and efficiency of the proposed approach are verified.
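The iteration the abstract describes, in which each crack is solved against a small local matrix while the weak far-field coupling is lagged between sweeps, can be sketched as a block-iterative solver. This is a minimal illustration under assumed data structures (per-crack local blocks and a far-field matrix), not the authors' implementation:

```python
import numpy as np

def solve_multicrack_iterative(A_local, A_far, b, tol=1e-8, max_iter=200):
    """Illustrative block-iterative solve (hypothetical, not the paper's exact scheme).

    A_local : list of small dense blocks, one per crack, capturing its self- and
              near-field (adjacent group) effects, the role played by the local
              Eshelby matrix in the paper.
    A_far   : matrix of the weak far-field interactions, applied with the previous
              iterate rather than solved simultaneously.
    b       : right-hand side from the applied (fictitious) tractions.
    """
    n_cracks = len(A_local)
    sizes = [blk.shape[0] for blk in A_local]
    offsets = np.cumsum([0] + sizes)
    x = np.zeros(offsets[-1])          # unknown crack opening displacements (CODs)

    for _ in range(max_iter):
        x_new = np.empty_like(x)
        far = A_far @ x                # weak far-field effects from previous iterate
        for i in range(n_cracks):
            s = slice(offsets[i], offsets[i + 1])
            # strong near-field effects solved exactly with the small local block
            x_new[s] = np.linalg.solve(A_local[i], b[s] - far[s])
        if np.linalg.norm(x_new - x) <= tol * (np.linalg.norm(x_new) + 1e-30):
            return x_new
        x = x_new
    return x
```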

Relevance: 20.00%

Publisher:

Abstract:

The recent expansion of prediction markets provides a great opportunity to test the market efficiency hypothesis and the calibration of trader judgements. Using a large database of observed prices, this article studies the calibration of prediction market prices on sporting events using both nonparametric and parametric methods. While only minor bias can be observed during most of the lifetime of the contracts, the calibration of prices deteriorates very significantly in the last moments of the contracts' lives. Traders tend to overestimate the probability that the losing team will reverse the situation in the last minutes of the game.
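A nonparametric calibration check of the kind the abstract mentions can be sketched by binning contract prices and comparing the mean implied probability in each bin with the empirical outcome frequency; the function below is a minimal illustration (the names and equal-width binning are assumptions, not the article's method):

```python
import numpy as np

def calibration_table(prices, outcomes, n_bins=10):
    """Per price bin, return (mean implied probability, empirical frequency, count).

    prices   : observed contract prices in [0, 1], read as implied probabilities
    outcomes : 1 if the event (e.g. the team winning) occurred, else 0
    For a well-calibrated market the first two columns should be close.
    """
    prices = np.asarray(prices, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (prices >= lo) & (prices <= hi) if hi == 1.0 else (prices >= lo) & (prices < hi)
        if mask.any():
            rows.append((prices[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return rows
```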

Relevance: 20.00%

Publisher:

Abstract:

Nano silicon is widely used as the essential element of complementary metal–oxide–semiconductor (CMOS) devices and solar cells. It is recognized that, today, a large portion of the world economy is built on electronics products and related services. As accessible fossil fuels are rapidly running out, research on nano silicon solar cells is increasing. Further improvement of higher-performance nano silicon components requires characterizing the material properties of nano silicon. In particular, as the manufacturing process scales down to the nano level, advanced components become more and more sensitive to the various defects induced by the manufacturing process. It is known that defects in mono-crystalline silicon have a significant influence on its properties under nanoindentation. However, the cost of practical nanoindentation, as well as the complexity of preparing specimens with controlled defects, slows down further experimental research on the mechanical characterization of defected silicon. Therefore, in the current study, molecular dynamics (MD) simulations are employed to investigate the properties of mono-crystalline silicon with different pre-existing defects, especially cavities, under nanoindentation. Parametric studies, including specimen size and loading rate, are first conducted to optimize computational efficiency. The optimized testing parameters are used for all simulations in the defect study. Based on the validated model, different pre-existing defects are introduced into the silicon substrate, and a group of nanoindentation simulations of these defected substrates is then carried out. The simulation results are carefully investigated and compared with a perfect silicon substrate used as a benchmark. It is found that pre-existing cavities in the silicon substrate clearly influence the mechanical properties. Furthermore, pre-existing cavities can absorb part of the strain energy during loading and release it during unloading, which possibly causes less plastic deformation of the substrate. However, when a pre-existing cavity is close enough to the deformation zone, or large enough that the stress around the spherical cavity exceeds what the surrounding crystal structure can bear, larger plastic deformation occurs, leading to collapse of the structure. Meanwhile, the influence exerted on the mechanical properties of the silicon substrate depends on the location and size of the cavity. A substrate with a larger cavity, or with a cavity closer to the top surface, usually exhibits a larger reduction in Young's modulus and hardness.
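Hardness and modulus from a simulated load-displacement curve are commonly extracted with an Oliver-Pharr-type analysis; the sketch below assumes a spherical indenter and is offered only as an illustration of that post-processing, not as the procedure actually used in the study:

```python
import numpy as np

def hardness_and_modulus(P_max, S, h_max, R, beta=1.0, eps=0.75):
    """Oliver-Pharr-style estimates from one nanoindentation unloading curve.

    P_max : peak indentation load
    S     : unloading stiffness dP/dh at P_max
    h_max : indentation depth at P_max
    R     : radius of the spherical indenter (assumed geometry)
    """
    h_c = h_max - eps * P_max / S                              # contact depth
    A_c = np.pi * (2.0 * R * h_c - h_c ** 2)                   # projected contact area
    H = P_max / A_c                                            # hardness
    E_r = np.sqrt(np.pi) * S / (2.0 * beta * np.sqrt(A_c))     # reduced modulus
    return H, E_r
```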

Relevance: 20.00%

Publisher:

Abstract:

Objectives: To compare measures of fat-free mass (FFM) by three different bioelectrical impedance analysis (BIA) devices and to assess the agreement between three different equations validated in older adult and/or overweight populations. Design: Cross-sectional study. Setting: Orthopaedics ward of a Brisbane public hospital, Australia. Participants: Twenty-two overweight, older Australians (age 72 ± 6.4 yr, BMI 34 ± 5.5 kg/m²) with knee osteoarthritis. Measurements: Body composition was measured using three BIA devices: Tanita 300-GS (foot-to-foot), Impedimed DF50 (hand-to-foot) and Impedimed SFB7 (bioelectrical impedance spectroscopy (BIS)). Three equations for predicting FFM were selected based on their applicability to an older adult and/or overweight population. Impedance values were extracted from the hand-to-foot BIA device and entered into the equations to estimate FFM. Results: The mean FFM measured by BIS (57.6 ± 9.1 kg) differed significantly from those measured by foot-to-foot (54.6 ± 8.7 kg) and hand-to-foot BIA (53.2 ± 10.5 kg) (P < 0.001). The mean ± SD FFM predicted by the three equations using raw data from hand-to-foot BIA were 54.7 ± 8.9 kg, 54.7 ± 7.9 kg and 52.9 ± 11.05 kg respectively. These results did not differ from the FFM predicted by the hand-to-foot device (F = 2.66, P = 0.118). Conclusions: Our results suggest that foot-to-foot and hand-to-foot BIA may be used interchangeably in overweight older adults at the group level, but, owing to the large limits of agreement, may lead to unacceptable error in individuals. There was no difference between the three prediction equations; however, these results should be confirmed in a larger sample and against a reference standard.
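The group-level agreement and the "large limits of agreement" caveat are typically quantified with a Bland-Altman analysis; a minimal sketch of that computation (assumed here, as the abstract does not spell out the exact procedure) is:

```python
import numpy as np

def bland_altman(ffm_a, ffm_b):
    """Agreement between two FFM measurement methods.

    Returns the mean bias and the 95% limits of agreement (bias +/- 1.96 SD of
    the paired differences), the quantities behind the abstract's caveat.
    """
    a = np.asarray(ffm_a, dtype=float)
    b = np.asarray(ffm_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```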

Relevance: 20.00%

Publisher:

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and corresponding promoter strength. Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. Some of the features discovered in this preliminary exploration also proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees are constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system, the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the predicted regulatory interactions. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
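The spectrum kernel used in the SVM study is simply the inner product of k-mer count vectors; a minimal self-contained sketch (parameter choices such as k=3 are illustrative, not those of the thesis) is:

```python
from itertools import product
import numpy as np

def spectrum_features(seq, k=3, alphabet="ACGT"):
    """Count vector of all k-mers of `seq` over the given alphabet."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    x = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:            # skip k-mers containing ambiguous characters
            x[index[kmer]] += 1
    return x

def spectrum_kernel(seq_a, seq_b, k=3):
    """Spectrum kernel: inner product of k-mer count vectors of two sequences."""
    return float(spectrum_features(seq_a, k) @ spectrum_features(seq_b, k))

# e.g. spectrum_kernel("TGTGATCTAGATCACA", "TGTGAGCTAGCTCACA")
```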

Relevance: 20.00%

Publisher:

Abstract:

Wheel-rail interaction is one of the most important research topics in railway engineering. It includes track vibration, track impact response and the safety of the track. Track structure failures caused by impact forces can lead to significant economic loss for track owners through damage to rails and to the sleepers beneath. Wheel-rail impact forces occur because of imperfections on the wheels or rails, such as wheel flats, irregular wheel profiles, rail corrugation and differences in the height of rails connected at a welded joint. Vehicle speed and static wheel load are important factors in track design because they are related to the impact forces arising from wheel-rail defects. In this paper, a three-dimensional finite element model for the study of wheel flat impact is developed using the FEA software package ANSYS. The effects of a wheel flat on the impact forces on sleepers, for various speeds and static wheel loads under a critical wheel flat size, are investigated. It is found that both the wheel-rail impact force and the impact force on the sleeper induced by a wheel flat vary nonlinearly with increasing vehicle speed, and that both impact forces increase nonlinearly and monotonically with increasing static wheel load. The relationships between the wheel-flat-induced impact forces and vehicle speed or static load are important to track engineers for improving design and maintenance methods in the railway industry.

Relevance: 20.00%

Publisher:

Abstract:

A numerical simulation method for Red Blood Cell (RBC) deformation is presented in this study. The two-dimensional RBC membrane is modeled by a spring network, in which the elastic stretch/compression energy and the bending energy are considered together with the constraint of constant RBC surface area. The Smoothed Particle Hydrodynamics (SPH) method is used to solve the Navier-Stokes equations coupled with the plasma-RBC membrane and cytoplasm-RBC membrane interactions. To verify the method, the motion of a single RBC is simulated in Poiseuille flow and compared with results reported earlier. Typical motion and deformation mechanisms of the RBC are observed.
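One common way to write the membrane energies the abstract lists (stretch/compression, bending, and a soft area constraint) for a closed 2-D node chain is sketched below; the specific energy forms and coefficients are assumptions for illustration, not necessarily those used in the study:

```python
import numpy as np

def membrane_energy(x, y, k_s, k_b, k_a, l0, A0):
    """Elastic energy of a closed 2-D spring-network membrane (illustrative forms).

    x, y : node coordinates ordered around the closed membrane contour
    l0   : reference (unstretched) spring lengths, A0 : reference enclosed area
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    xn, yn = np.roll(x, -1), np.roll(y, -1)          # next node on the closed loop
    l = np.hypot(xn - x, yn - y)
    e_stretch = 0.5 * k_s * np.sum((l - l0) ** 2)    # stretch/compression energy

    # bending energy from the angle between successive segments at each node
    tx, ty = (xn - x) / l, (yn - y) / l
    cos_t = np.clip(tx * np.roll(tx, 1) + ty * np.roll(ty, 1), -1.0, 1.0)
    theta = np.arccos(cos_t)
    e_bend = 0.5 * k_b * np.sum(theta ** 2)

    # soft penalty keeping the enclosed area (2-D "surface area") near A0
    A = 0.5 * np.abs(np.sum(x * yn - xn * y))        # shoelace formula
    e_area = 0.5 * k_a * ((A - A0) / A0) ** 2
    return e_stretch + e_bend + e_area
```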

Relevance: 20.00%

Publisher:

Abstract:

The micro-circulation of blood plays an important role in the human body by providing oxygen and nutrients to the cells and removing carbon dioxide and wastes from them. This process is greatly affected by the rheological properties of Red Blood Cells (RBCs). Changes in the rheological properties of RBCs are caused by certain human diseases such as malaria and sickle cell disease. It is therefore important to understand the motion and deformation mechanisms of RBCs in order to diagnose and treat these diseases. Although many methods have been developed to explore the behavior of RBCs in micro-channels, they could not explain the deformation mechanism of RBCs properly. Recently developed particle methods are employed to explain the behavior of RBCs in micro-channels more comprehensively. The main objective of this study is to critically analyze the present methods used to model RBC behavior in micro-channels, in order to develop a computationally efficient particle-based model that describes the complete behavior of RBCs in micro-channels accurately and comprehensively.

Relevance: 20.00%

Publisher:

Abstract:

To fumigate grain stored in a silo, phosphine gas is distributed by a combination of diffusion and fan-forced advection. This initial study of the problem focuses mainly on the advection, numerically modelled as fluid flow in a porous medium. We find satisfactory agreement between the flow predictions of two Computational Fluid Dynamics packages, Comsol and Fluent. The flow predictions demonstrate that the highest velocity (>0.1 m/s) occurs less than 0.2 m from the inlet and reduces drastically over one metre of silo height, with the flow elsewhere less than 0.002 m/s, or 1% of the injection velocity. The flow predictions are examined to identify silo regions where phosphine dosage levels are likely to be too low for effective grain fumigation.
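Flow through the grain bulk in such porous-medium models is usually governed by Darcy's law; the snippet below illustrates the superficial velocity it predicts, with permeability, viscosity and pressure-gradient values that are assumed for illustration only and are not taken from the paper:

```python
def darcy_velocity(permeability, viscosity, dp_dz):
    """Superficial (Darcy) velocity u = -(k / mu) * dp/dz for flow through the bulk."""
    return -(permeability / viscosity) * dp_dz

# assumed example values: bulk permeability ~1e-8 m^2, air viscosity ~1.8e-5 Pa.s,
# pressure gradient of -180 Pa/m near the inlet
u = darcy_velocity(1e-8, 1.8e-5, -180.0)
print(f"superficial velocity ~ {u:.3f} m/s")
```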

Relevance: 20.00%

Publisher:

Abstract:

An advanced rule-based Transit Signal Priority (TSP) control method is presented in this paper. An online transit travel time prediction model is the key component of the proposed method, enabling the selection of the most appropriate TSP plan for the prevailing traffic and transit conditions. The new method also adopts a priority plan re-development feature that allows the already implemented priority plan to be modified, or even switched, to accommodate changes in traffic conditions. The proposed method utilizes the conventional green extension and red truncation strategies as well as two new strategies: green truncation and queue clearance. The new method is evaluated in microsimulation against a typical active TSP strategy and a base case scenario with no TSP control. The evaluation results indicate that the proposed method can produce significant benefits in reducing bus delay time and improving service regularity, with negligible adverse impacts on non-transit street traffic.

Relevance: 20.00%

Publisher:

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the underlying theory of the Proportional Hazards Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models neglect to fully utilise the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model to obtain more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data for assets are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories made by the semi-parametric EHM, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM into two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
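As a rough illustration of the structure described above (a baseline hazard that depends on both time and a condition indicator, multiplied by a covariate function of operating-environment indicators), the sketch below uses a Weibull-type baseline; the functional forms, parameter names and the exponential covariate link are assumptions for illustration, not the thesis's actual EHM equations:

```python
import numpy as np

def explicit_hazard(t, condition, environment, shape, scale, alpha, gamma):
    """Illustrative covariate-based hazard in the spirit of EHM's description.

    t           : operating time
    condition   : condition indicator value at time t (e.g. a vibration level)
    environment : vector of operating-environment indicators (e.g. load, stress)
    """
    # Weibull-type baseline, updated by the condition indicator (assumed form)
    baseline = (shape / scale) * (t / scale) ** (shape - 1.0) * np.exp(alpha * condition)
    # operating-environment indicators accelerate or decelerate the hazard
    return baseline * np.exp(np.dot(gamma, np.asarray(environment, dtype=float)))
```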

Relevance: 20.00%

Publisher:

Abstract:

Fire safety has become an important part of structural design due to the ever-increasing loss of property and lives during fires. The fire rating of load-bearing wall systems made of Light gauge Steel Frames (LSF) is determined using fire tests based on the standard time-temperature curve given in ISO 834. However, modern residential buildings make use of thermoplastic materials, which means considerably higher fuel loads. Hence a detailed fire research study into the performance of load-bearing LSF walls was undertaken using a series of realistic design fire curves developed from the Eurocode parametric curves and Barnett's BFD curves. It included both full-scale fire tests and numerical studies of LSF walls without any insulation and of the recently developed externally insulated composite panels. This paper first presents the details of the fire tests, and then the numerical models of the tested LSF wall studs. It shows that suitable finite element models can be developed to predict the fire rating of load-bearing walls under real fire conditions. The paper also describes the structural and fire performance of externally insulated LSF walls in comparison to non-insulated walls under real fires, and highlights the effects of standard and real fire curves on the fire performance of LSF walls.
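For reference, the ISO 834 standard curve mentioned in the abstract, and the heating phase of the Eurocode parametric curve (EN 1991-1-2 form), can be written as follows; the unit compartment factor and the omitted cooling branch are simplifications in this sketch:

```python
import numpy as np

def iso834_temperature(t_min):
    """ISO 834 standard time-temperature curve (t in minutes, result in deg C)."""
    return 20.0 + 345.0 * np.log10(8.0 * t_min + 1.0)

def eurocode_parametric_heating(t_hr, gamma=1.0):
    """Heating phase of the Eurocode parametric fire curve (t in hours, deg C).

    gamma is the compartment factor; gamma = 1 gives a curve close to ISO 834.
    The cooling branch of the parametric curve is not included here.
    """
    ts = gamma * t_hr
    return 20.0 + 1325.0 * (1.0 - 0.324 * np.exp(-0.2 * ts)
                                - 0.204 * np.exp(-1.7 * ts)
                                - 0.472 * np.exp(-19.0 * ts))
```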

Relevance: 20.00%

Publisher:

Abstract:

In this work we discuss the effects of white and coloured noise perturbations on the parameters of a mathematical model of bacteriophage infection introduced by Beretta and Kuang in [Math. Biosci. 149 (1998) 57]. We numerically simulate the strong solutions of the resulting systems of stochastic ordinary differential equations (SDEs), with respect to the global error, by means of numerical methods of both Euler-Taylor expansion and stochastic Runge-Kutta type.
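The simplest member of the Euler-Taylor family referred to above is the Euler-Maruyama method; a generic sketch (the bacteriophage model's drift and diffusion functions are not reproduced here, they are user-supplied callables) is:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, rng=None):
    """Euler-Maruyama for dX = drift(X) dt + diffusion(X) dW (strong order 0.5)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = t_end / n_steps
    x = np.atleast_1d(np.array(x0, dtype=float))
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)   # Brownian increments
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x.copy())
    return np.array(path)
```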

Relevance: 20.00%

Publisher:

Abstract:

This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations. We give a brief survey of the area focusing on a number of application areas where approximations to strong solutions are important, with a particular focus on computational biology applications, and give the necessary analytical tools for understanding some of the important concepts associated with stochastic processes. We present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence and mention the Magnus expansion as a mechanism for designing methods that preserve the underlying structure of the problem. We also present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals and variable-step-size implementations based on various types of control.
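As an example of the stochastic-Taylor-based strong methods such a review surveys, the scalar Milstein scheme adds the next Taylor term beyond Euler-Maruyama and attains strong order 1.0; this sketch uses a fixed step size rather than the variable-step-size control discussed:

```python
import numpy as np

def milstein(drift, diffusion, d_diffusion, x0, t_end, n_steps, rng=None):
    """Scalar Milstein scheme for dX = a(X) dt + b(X) dW; d_diffusion is b'(x)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = t_end / n_steps
    x = float(x0)
    path = [x]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = (x + drift(x) * dt + diffusion(x) * dw
               + 0.5 * diffusion(x) * d_diffusion(x) * (dw * dw - dt))
        path.append(x)
    return np.array(path)
```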