935 results for computational modeling
Abstract:
This paper proposes an analytical Incident Traffic Management framework for freeway incident modeling and traffic re-routing. The proposed framework incorporates an econometric incident duration model and a traffic re-routing optimization module. The incident duration model is used to estimate the expected duration of the incident and thus determine the planning horizon for the re-routing module. The re-routing module is a CTM-based Single Destination System Optimal Dynamic Traffic Assignment model that generates optimal real-time strategies for re-routing freeway traffic to its adjacent arterial network during incidents. The proposed framework has been applied to a case study network including a freeway and its adjacent arterial network in South East Queensland, Australia. The results from different scenarios of freeway demand and incident blockage extent have been analyzed, and the advantages of the proposed framework are demonstrated.
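As a rough illustration of the traffic-flow machinery underlying the re-routing module, the sketch below advances a small cell transmission model (CTM) segment with a capacity-reducing incident. The cell parameters, fundamental-diagram values and incident location are illustrative assumptions, not values from the Queensland case study, and the system-optimal assignment layer is not reproduced here.

```python
# Minimal CTM sketch: a freeway segment with an incident halving capacity in one cell.
import numpy as np

def ctm_step(n, Q, N, delta, inflow):
    """Advance cell occupancies n (vehicles) by one time step.
    Q: per-cell capacity (veh/step), N: jam occupancy (veh),
    delta: ratio of backward to forward wave speed, inflow: demand at cell 0."""
    cells = len(n)
    y = np.zeros(cells + 1)                              # flows across cell boundaries
    y[0] = min(inflow, Q[0], delta[0] * (N[0] - n[0]))
    for i in range(1, cells):
        send = min(n[i - 1], Q[i - 1])                   # what the upstream cell can send
        receive = min(Q[i], delta[i] * (N[i] - n[i]))    # what the downstream cell can take
        y[i] = min(send, receive)
    y[cells] = min(n[-1], Q[-1])                         # free outflow at the last cell
    return n + y[:-1] - y[1:]

n = np.zeros(5)
Q = np.array([20., 20., 20., 10., 20.])                  # incident: reduced capacity in cell 3
N = np.full(5, 60.)
delta = np.full(5, 0.5)
for t in range(30):
    n = ctm_step(n, Q, N, delta, inflow=15.)
print(np.round(n, 1))                                    # queue builds upstream of the incident
```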
Abstract:
The major diabetes autoantigen, glutamic acid decarboxylase (GAD65), contains a region of sequence similarity, including six identical residues PEVKEK, to the P2C protein of coxsackie B virus, suggesting that cross-reactivity between coxsackie B virus and GAD65 can initiate autoimmune diabetes. We used the human islet cell mAbs MICA3 and MICA4 to identify the Ab epitopes of GAD65 by screening phage-displayed random peptide libraries. The identified peptide sequences could be mapped to a homology model of the pyridoxal phosphate (PLP) binding domain of GAD65. For MICA3, a surface loop containing the sequence PEVKEK and two adjacent exposed helices were identified in the PLP binding domain, as well as a region of the C terminus of GAD65 that has previously been identified as critical for MICA3 binding. To confirm that the loop containing the PEVKEK sequence contributes to the MICA3 epitope, this loop was deleted by mutagenesis. This reduced binding of MICA3 by 70%. Peptide sequences selected using MICA4 were rich in basic or hydroxyl-containing amino acids, and the surface of the GAD65 PLP-binding domain surrounding Lys358, which is known to be critical for MICA4 binding, was likewise rich in these amino acids. Also, the two phage most reactive with MICA4 encoded the motif VALxG, and the reverse of this sequence, LAV, was located in this same region. Thus, we have defined the MICA3 and MICA4 epitopes on GAD65 using the combination of phage display, molecular modeling, and mutagenesis and have provided compelling evidence for the involvement of the PEVKEK loop in the MICA3 epitope.
Abstract:
The construction industry is a knowledge-based industry where various actors with diverse expertise create unique information within different phases of a project. The industry has been criticized by researchers and practitioners as being unable to apply newly created knowledge effectively to innovate. The fragmented nature of the construction industry reduces the opportunity for project participants to learn from each other and absorb knowledge. Building Information Modelling (BIM), referring to digital representations of constructed facilities, is a promising technological advance that has been proposed to assist in the sharing of knowledge and creation of linkages between firms. Previous studies have mainly focused on the technical attributes of BIM and there is little evidence of its capability to enhance learning in construction firms. This conceptual paper identifies six ‘functional attributes’ of BIM that act as triggers to stimulate learning: (1) comprehensibility; (2) predictability; (3) accuracy; (4) transparency; (5) mutual understanding; and (6) integration.
Abstract:
Over the last 30 years, numerous research groups have attempted to provide mathematical descriptions of the skin wound healing process. The development of theoretical models of the interlinked processes that underlie the healing mechanism has yielded considerable insight into aspects of this critical phenomenon that remain difficult to investigate empirically. In particular, the mathematical modeling of angiogenesis, i.e., capillary sprout growth, has offered new paradigms for the understanding of this highly complex and crucial step in the healing pathway. With the recent advances in imaging and cell tracking, the time is now ripe for an appraisal of the utility and importance of mathematical modeling in wound healing angiogenesis research. The purpose of this review is to pedagogically elucidate the conceptual principles that have underpinned the development of mathematical descriptions of wound healing angiogenesis, specifically those that have utilized a continuum reaction-transport framework, and highlight the contribution that such models have made toward the advancement of research in this field. We aim to draw attention to the common assumptions made when developing models of this nature, thereby bringing into focus the advantages and limitations of this approach. A deeper integration of mathematical modeling techniques into the practice of wound healing angiogenesis research promises new perspectives for advancing our knowledge in this area. To this end we detail several open problems related to the understanding of wound healing angiogenesis, and outline how these issues could be addressed through closer cross-disciplinary collaboration.
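As a concrete reference point for the class of models discussed, a minimal continuum reaction-transport description couples an endothelial cell density n(x, t) to a chemoattractant concentration c(x, t); the specific terms below are generic choices (random motility, chemotaxis, kinetics, decay, uptake and a wound-derived source) rather than any single published model.

```latex
\begin{align}
\frac{\partial n}{\partial t} &= \nabla \cdot \bigl( D_n \nabla n - \chi(c)\, n\, \nabla c \bigr) + f(n, c), \\
\frac{\partial c}{\partial t} &= D_c \nabla^2 c - \lambda c - \gamma\, n c + s(\mathbf{x}, t).
\end{align}
```

Here D_n and D_c are diffusivities, χ(c) is a chemotactic sensitivity, f(n, c) represents endothelial proliferation and branching kinetics, λ is the attractant decay rate, γ its uptake by cells, and s a source term localised at the wound.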
Abstract:
If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast 6 agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.
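The structural core shared by these models (first-order carbon flows between plant-input and soil pools) can be sketched as follows. The two-pool structure, rate constants and input flux are illustrative assumptions and do not correspond to FullCAM, DayCent, DNDC, APSIM, WNMM or AgMod.

```python
# A minimal two-pool soil carbon model: first-order decay with partial transfer
# of decayed fast-pool carbon into a slow pool; the remainder is respired as CO2.
import numpy as np

def step(pools, inputs, k, transfer, dt=1.0):
    """One time step (e.g. one month). pools: [fast, slow] C stocks;
    k: per-step decay rates; transfer: fraction of decayed fast C stabilised."""
    decay = k * pools * dt
    co2 = decay[0] * (1 - transfer) + decay[1]            # respired carbon
    fast = pools[0] + inputs * dt - decay[0]
    slow = pools[1] + decay[0] * transfer - decay[1]
    return np.array([fast, slow]), co2

pools = np.array([10.0, 50.0])                            # t C/ha, illustrative
for month in range(120):
    pools, co2 = step(pools, inputs=0.3, k=np.array([0.05, 0.002]), transfer=0.3)
print(np.round(pools, 2))                                 # pools drift toward a new equilibrium
```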
Abstract:
Prostate cancer is the most commonly diagnosed malignancy in men and advanced disease is incurable. Model systems are a fundamental tool for research and many in vitro models of prostate cancer use cancer cell lines in monoculture. Although these have yielded significant insight they are inherently limited by virtue of their two-dimensional (2D) growth and inability to include the influence of tumour microenvironment. These major limitations can be overcome with the development of newer systems that more faithfully recreate and mimic the complex in vivo multi-cellular, three-dimensional (3D) microenvironment. This article presents the current state of in vitro models for prostate cancer, with particular emphasis on 3D systems and the challenges that remain before their potential to advance our understanding of prostate disease and aid in the development and testing of new therapeutic agents can be realised.
Abstract:
Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K(+), inward rectifying K(+), L-type Ca(2+), and Na(+)/K(+) pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation.
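The calibration loop described above can be sketched as follows: enumerate scaling factors for the six conductances, evaluate each candidate model, and retain the parameter sets whose action potential duration falls within the experimental range. The surrogate apd() function below is a stand-in for actually simulating the Shannon or Mahajan rabbit ventricular model, and the five scaling levels per conductance (5**6 = 15,625 sets) and the acceptance window are assumptions.

```python
# Population-of-models sweep over six conductance scaling factors with APD filtering.
from itertools import product
import numpy as np

CURRENTS = ["I_to", "I_Kr", "I_Ks", "I_K1", "I_CaL", "I_NaK"]
SCALES = [0.5, 0.75, 1.0, 1.25, 1.5]               # multiplicative scaling of each conductance

def apd(scales):
    """Toy surrogate for APD90 (ms): repolarising currents shorten, I_CaL lengthens."""
    w = np.array([-20., -60., -25., -15., 70., -30.])   # illustrative sensitivities only
    return 200. + float(w @ (np.array(scales) - 1.0))

accepted = []
for combo in product(SCALES, repeat=len(CURRENTS)):
    d = apd(combo)
    if 160. <= d <= 250.:                          # assumed experimental APD range
        accepted.append((combo, d))

print(f"{len(accepted)} of {5**len(CURRENTS)} parameter sets kept")
```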
Abstract:
Computational fluid dynamics (CFD) and particle image velocimetry (PIV) are commonly used techniques to evaluate the flow characteristics in the development stage of blood pumps. The CFD technique allows rapid changes to pump parameters to optimize the pump performance without having to construct a costly prototype model. These techniques are used in the construction of a bi-ventricular assist device (BVAD) which combines the functions of LVAD and RVAD in a compact unit. The BVAD construction consists of two separate chambers with similar impellers, volutes, inlet and outlet sections. To achieve the required flow characteristics of an average flow rate of 5 l/min and different pressure heads (left: 100 mmHg; right: 20 mmHg), the impellers were set at different rotating speeds. From the CFD results, a six-blade impeller design was adopted for the development of the BVAD. It was also observed that the fluid can flow smoothly through the pump with minimum shear stress and area of stagnation, which are related to haemolysis and thrombosis. Based on the compatible Reynolds number, the flow through the model was calculated for the left and the right pumps. As it was not possible to have both the left and right chambers in the experimental model, the left and right pumps were tested separately.
Abstract:
Nondeclarative memory and novelty processing in the brain are actively studied areas of neuroscience, and reduced neural activity with repetition of a stimulus (repetition suppression) is a commonly observed phenomenon. Recent findings of an opposite trend, specifically rising activity for unfamiliar stimuli, question the generality of repetition suppression and stir debate over the underlying neural mechanisms. This letter introduces a theory and computational model that extend existing theories and suggest that both trends are, in principle, the rising and falling parts of an inverted U-shaped dependence of activity on stimulus novelty that may naturally emerge in a neural network with Hebbian learning and lateral inhibition. We further demonstrate that the proposed model is sufficient for the simulation of dissociable forms of repetition priming using real-world stimuli. The results of our simulation also suggest that the novelty of stimuli used in neuroscientific research must be assessed in a particularly cautious way. The potential importance of the inverted-U in stimulus processing and its relationship to the acquisition of knowledge and competencies in humans is also discussed.
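A toy version of this kind of mechanism is sketched below: a small population of rate units with Hebbian feedforward learning and Hebbian lateral inhibition is shown the same stimulus repeatedly, and the summed population response first rises (as the stimulus becomes less novel) and then falls (repetition suppression) once accumulated inhibition overtakes the learned drive. Network size, learning rates and the single-pass inhibition step are illustrative assumptions, not the letter's actual model.

```python
# Inverted-U of population activity across repetitions of one stimulus,
# from Hebbian feedforward learning plus Hebbian lateral inhibition.
import numpy as np

rng = np.random.default_rng(0)
D, K = 50, 10                                      # input dimensionality, number of units
x = np.zeros(D)
x[rng.choice(D, 25, replace=False)] = 1.0          # one fixed binary stimulus
W_ff = rng.uniform(0.0, 0.02, size=(K, D))         # weak initial weights: stimulus is novel
W_inh = np.zeros((K, K))                           # lateral inhibitory weights
eta_ff, eta_inh = 0.01, 0.002

totals = []
for rep in range(40):
    drive = W_ff @ x
    y = np.maximum(0.0, drive - W_inh @ drive)     # single-pass subtractive lateral inhibition
    W_ff += eta_ff * np.outer(y, x)                # Hebbian strengthening of feedforward drive
    W_inh += eta_inh * np.outer(y, y)              # Hebbian strengthening of mutual inhibition
    np.fill_diagonal(W_inh, 0.0)
    totals.append(y.sum())

print(np.round(totals, 1))                         # response rises, peaks, then falls with repetition
```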
Abstract:
Large sized power transformers are important parts of the power supply chain. These very critical networks of engineering assets are an essential base of a nation’s energy resource infrastructure. This research identifies the key factors influencing transformer normal operating conditions and predicts the asset management lifespan. Engineering asset research has developed few lifespan forecasting methods combining real-time monitoring solutions for transformer maintenance and replacement. Utilizing the rich data source from a remote terminal unit (RTU) system for sensor-data driven analysis, this research develops an innovative real-time lifespan forecasting approach applying logistic regression based on the Weibull distribution. The methodology and the implementation prototype are verified using a data series from 161 kV transformers to evaluate the efficiency and accuracy for energy sector applications. The asset stakeholders and suppliers significantly benefit from the real-time power transformer lifespan evaluation for maintenance and replacement decision support.
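An illustrative sketch of the Weibull component of such a lifespan forecast is given below: fit a Weibull model to historical transformer lifetimes and compute the conditional probability that an asset of a given age survives a further planning horizon. The synthetic lifetimes, parameters and horizon are assumptions, and the paper's coupling of this with logistic regression on RTU sensor features is not reproduced here.

```python
# Weibull lifetime fit and conditional survival for an aged asset (illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lifetimes = stats.weibull_min.rvs(c=3.5, scale=45.0, size=200, random_state=rng)  # years

shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)     # fix location at zero

def conditional_survival(age, horizon):
    """P(asset survives to age + horizon | it has survived to age)."""
    s_now = stats.weibull_min.sf(age, shape, loc=0, scale=scale)
    s_later = stats.weibull_min.sf(age + horizon, shape, loc=0, scale=scale)
    return s_later / s_now

print(f"Fitted shape = {shape:.2f}, scale = {scale:.1f} years")
print(f"P(survive 5 more years | age 30) = {conditional_survival(30, 5):.2f}")
```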
Abstract:
Statistical comparison of oil samples is an integral part of oil spill identification, which deals with the process of linking an oil spill with its source of origin. In current practice, a frequentist hypothesis test is often used to evaluate evidence in support of a match between a spill and a source sample. As frequentist tests are only able to evaluate evidence against a hypothesis but not in support of it, we argue that this leads to unsound statistical reasoning. Moreover, currently only verbal conclusions on a very coarse scale can be made about the match between two samples, whereas a finer quantitative assessment would often be preferred. To address these issues, we propose a Bayesian predictive approach for evaluating the similarity between the chemical compositions of two oil samples. We derive the underlying statistical model from some basic assumptions on modeling assays in analytical chemistry, and to further facilitate and improve numerical evaluations, we develop analytical expressions for the key elements of Bayesian inference for this model. The approach is illustrated with both simulated and real data and is shown to have appealing properties in comparison with both standard frequentist and Bayesian approaches.
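A simplified univariate illustration of the same-source versus different-source comparison is sketched below: marginal likelihoods of two measurement sets are compared under a common mean versus separate means, using a conjugate normal prior and known measurement noise. The prior settings, noise level and example values are assumptions; the paper's model for multi-compound chemical assay data is richer than this sketch.

```python
# Log Bayes factor for "same source" vs "different sources" under a conjugate normal model.
import numpy as np
from scipy.stats import multivariate_normal

def log_marginal(x, mu0=0.0, tau=1.0, sigma=0.1):
    """log p(x) when x_i ~ N(theta, sigma^2) and theta ~ N(mu0, tau^2)."""
    n = len(x)
    cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))   # marginal covariance after integrating theta
    return multivariate_normal.logpdf(x, mean=np.full(n, mu0), cov=cov)

spill = np.array([0.52, 0.55, 0.50])      # e.g. normalised biomarker ratios, illustrative
source = np.array([0.53, 0.51, 0.54])

log_bf = log_marginal(np.concatenate([spill, source])) - (log_marginal(spill) + log_marginal(source))
print(f"log Bayes factor (same source vs different sources): {log_bf:.2f}")
```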
Abstract:
The focus of this paper is two-dimensional computational modelling of water flow in unsaturated soils consisting of weakly conductive disconnected inclusions embedded in a highly conductive connected matrix. When the inclusions are small, a two-scale Richards’ equation-based model has been proposed in the literature taking the form of an equation with effective parameters governing the macroscopic flow coupled with a microscopic equation, defined at each point in the macroscopic domain, governing the flow in the inclusions. This paper is devoted to a number of advances in the numerical implementation of this model. Namely, by treating the micro-scale as a two-dimensional problem, our solution approach based on a control volume finite element method can be applied to irregular inclusion geometries, and, if necessary, modified to account for additional phenomena (e.g. imposing the macroscopic gradient on the micro-scale via a linear approximation of the macroscopic variable along the microscopic boundary). This is achieved with the help of an exponential integrator for advancing the solution in time. This time integration method completely avoids generation of the Jacobian matrix of the system and hence eases the computation when solving the two-scale model in a completely coupled manner. Numerical simulations are presented for a two-dimensional infiltration problem.
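A minimal sketch of the exponential-integrator idea is given below, on a small 1-D diffusion test problem rather than the paper's two-scale control volume finite element discretisation of Richards' equation. For the linear semi-discrete system du/dt = A u, one exponential Euler step is u(t + h) = expm(h A) u(t), which remains stable for step sizes that would break explicit Euler; the paper's Jacobian-free variant is not reproduced, and the grid and parameters here are illustrative.

```python
# Exponential Euler time stepping for a stiff semi-discrete diffusion problem.
import numpy as np
from scipy.linalg import expm

nx, dx, D = 50, 0.02, 1e-3
main = -2.0 * np.ones(nx)
off = np.ones(nx - 1)
A = (D / dx**2) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))   # 1-D diffusion operator

u = np.zeros(nx)
u[nx // 2] = 1.0                        # initial moisture pulse

h = 0.5                                 # well above the explicit Euler stability limit dx**2 / (2 D) = 0.2
propagator = expm(h * A)                # one-step propagator: u(t + h) = expm(h A) u(t)
for _ in range(20):
    u = propagator @ u
print(f"max = {u.max():.4f}, min = {u.min():.2e}")   # solution stays smooth and non-oscillatory
```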
Abstract:
Structural equation modeling (SEM) is a powerful statistical approach for the testing of networks of direct and indirect theoretical causal relationships in complex data sets with intercorrelated dependent and independent variables. SEM is commonly applied in ecology, but the spatial information commonly found in ecological data remains difficult to model in a SEM framework. Here we propose a simple method for spatially explicit SEM (SE-SEM) based on the analysis of variance/covariance matrices calculated across a range of lag distances. This method provides readily interpretable plots of the change in path coefficients across scale and can be implemented using any standard SEM software package. We demonstrate the application of this method using three studies examining the relationships between environmental factors, plant community structure, nitrogen fixation, and plant competition. By design, these data sets had a spatial component, but were previously analyzed using standard SEM models. Using these data sets, we demonstrate the application of SE-SEM to regularly spaced, irregularly spaced, and ad hoc spatial sampling designs and discuss the increased inferential capability of this approach compared with standard SEM. We provide an R package, sesem, to easily implement spatial structural equation modeling.
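The core ingredient of the SE-SEM procedure is a series of variance/covariance matrices computed from sample pairs falling in successive lag-distance classes, each of which is then passed to a standard SEM fit. The sketch below illustrates only that binning step on synthetic coordinates and variables; the published implementation is the R package sesem, whose interface and exact calculations are not mirrored here.

```python
# Lag-distance binning of sample pairs and per-bin covariance matrices (illustrative).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(60, 2))           # sample locations
X = rng.normal(size=(60, 3))                         # three observed variables per sample

d = pdist(coords)                                    # condensed pairwise distances
pairs = np.array([(i, j) for i in range(60) for j in range(i + 1, 60)])   # same ordering as pdist
bins = [(0, 20), (20, 40), (40, 60)]                 # lag-distance classes (map units)

for lo, hi in bins:
    sel = pairs[(d >= lo) & (d < hi)]
    # Stack both members of every pair in the bin and compute a covariance matrix;
    # one such matrix per lag class is what a lag-specific SEM would be fitted to.
    data = np.vstack([X[sel[:, 0]], X[sel[:, 1]]])
    cov = np.cov(data, rowvar=False)
    print(f"lag {lo}-{hi}: {len(sel)} pairs, cov diag = {np.round(np.diag(cov), 2)}")
```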
Abstract:
A single-generation dataset consisting of 1,730 records from a selection program for high growth rate in giant freshwater prawn (GFP, Macrobrachium rosenbergii) was used to derive prediction equations for meat weight and meat yield. Models were based on body traits [body weight, total length and abdominal width (AW)] and carcass measurements (tail weight and exoskeleton-off weight). Lengths and width were adjusted for the systematic effects of selection line, male morphotypes and female reproductive status, and for the covariables of age at slaughter within sex and body weight. Body and meat weights adjusted for the same effects (except body weight) were used to calculate meat yield (expressed as percentage of tail weight/body weight and exoskeleton-off weight/body weight). The edible meat weight and yield in this GFP population ranged from 12 to 15 g and 37 to 45%, respectively. The simple (Pearson) correlation coefficients between body traits (body weight, total length and AW) and meat weight were moderate to very high and positive (0.75–0.94), but the correlations between body traits and meat yield were negative (−0.47 to −0.74). There were strong linear positive relationships between measurements of body traits and meat weight, whereas relationships of body traits with meat yield were moderate and negative. Step-wise multiple regression analysis showed that the best model to predict meat weight included all body traits, with a coefficient of determination (R²) of 0.99 and a correlation between observed and predicted values of meat weight of 0.99. The corresponding figures for meat yield were 0.91 and 0.95, respectively. Body weight or length was the best predictor of meat weight, explaining 91–94% of observed variance when it was fitted alone in the model. By contrast, tail width explained a lower proportion (69–82%) of total variance in the single trait models. It is concluded that in practical breeding programs, improvement of meat weight can be easily made through indirect selection for body trait combinations. The improvement of meat yield, albeit being more difficult, is possible by genetic means, with 91% of the variation in the trait explained by the body and carcass traits examined in this study.
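A prediction equation of the kind described above is a multiple linear regression of meat weight on body traits; the sketch below fits one to synthetic data generated to show strong positive trait correlations, since the GFP selection-line records themselves are not reproduced here.

```python
# Multiple linear regression of meat weight on body traits (synthetic illustrative data).
import numpy as np

rng = np.random.default_rng(42)
n = 200
body_weight = rng.normal(35, 5, n)                              # g, illustrative
total_length = 0.2 * body_weight + rng.normal(10, 0.5, n)       # cm
abdom_width = 0.05 * body_weight + rng.normal(2, 0.2, n)        # cm
meat_weight = 0.35 * body_weight + 0.3 * total_length + 0.5 * abdom_width + rng.normal(0, 0.4, n)

X = np.column_stack([np.ones(n), body_weight, total_length, abdom_width])
coef, *_ = np.linalg.lstsq(X, meat_weight, rcond=None)           # least-squares fit
pred = X @ coef
r2 = 1 - np.sum((meat_weight - pred) ** 2) / np.sum((meat_weight - meat_weight.mean()) ** 2)
print(f"intercept and coefficients: {np.round(coef, 3)}; R^2 = {r2:.3f}")
```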
Abstract:
This paper demonstrates the procedures for probabilistic assessment of a pesticide fate and transport model, PCPF-1, to elucidate the modeling uncertainty using the Monte Carlo technique. Sensitivity analyses are performed to investigate the influence of herbicide characteristics and related soil properties on model outputs using four popular rice herbicides: mefenacet, pretilachlor, bensulfuron-methyl and imazosulfuron. Uncertainty quantification showed that the simulated concentrations in paddy water varied more than those in paddy soil. This tendency decreased as the simulation proceeded to a later period but remained important for herbicides having either high solubility or a high 1st-order dissolution rate. The sensitivity analysis indicated that the PCPF-1 parameters requiring careful determination are primarily those involved in herbicide adsorption (the organic carbon content, the bulk density and the volumetric saturated water content), secondarily parameters related to herbicide mass distribution between paddy water and soil (1st-order desorption and dissolution rates) and, lastly, those involved in herbicide degradation. © Pesticide Science Society of Japan.
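The Monte Carlo uncertainty and sensitivity workflow described above can be sketched as follows, applied to a deliberately simple water-soil partitioning and first-order dissipation model rather than PCPF-1 itself. The parameter ranges, the toy mass balance and the use of Spearman rank correlations as a sensitivity measure are illustrative assumptions.

```python
# Monte Carlo sensitivity analysis of a toy paddy-water herbicide concentration model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 5000
foc = rng.uniform(0.01, 0.05, n)             # soil organic carbon fraction
bulk_density = rng.uniform(0.8, 1.3, n)      # g/cm^3
koc = rng.lognormal(np.log(200), 0.4, n)     # sorption coefficient, L/kg OC
k_dissip = rng.uniform(0.05, 0.3, n)         # 1/day first-order dissipation in paddy water

# Toy mass balance: the dose partitions between 5 cm of ponded water and a 1 cm soil
# mixing layer, then dissipates first-order in the water phase; evaluated at day 7.
dose = 100.0                                 # mg/m^2
water_depth, soil_depth = 0.05, 0.01         # m
kd = koc * foc                               # L/kg
soil_mass = bulk_density * 1000 * soil_depth              # kg/m^2
c0 = dose / (water_depth * 1000 + kd * soil_mass)         # mg/L in paddy water at day 0
c7 = c0 * np.exp(-k_dissip * 7)

for name, p in [("foc", foc), ("bulk density", bulk_density), ("Koc", koc), ("dissipation rate", k_dissip)]:
    rho, _ = spearmanr(p, c7)                # rank correlation as a simple sensitivity index
    print(f"{name:>16}: Spearman rho = {rho:+.2f}")
```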