868 results for Hazard-Based Models
Abstract:
Designing the smart grid requires combining varied models. As their number increases, so does the complexity of the software, and a well-thought-out software architecture becomes crucial. This paper presents MODAM, a framework designed to combine agent-based models in a flexible and extensible manner using well-known software engineering design solutions (the OSGi specification [1] and Eclipse plugins [2]). Details on how to build a modular agent-based model for the smart grid are given, illustrated by an example for a small network.
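MODAM itself is built on OSGi/Eclipse plug-ins (Java); the abstract does not describe its interfaces, so the following is only a rough Python sketch of the plugin-registry idea behind composing agent models flexibly. All class and parameter names are hypothetical.

```python
# Illustrative sketch only: MODAM is built on OSGi/Eclipse plug-ins (Java).
# This shows the general plugin-registry idea; all names are hypothetical.

from abc import ABC, abstractmethod


class AgentModel(ABC):
    """Common interface every pluggable agent model implements."""

    @abstractmethod
    def step(self, t: int) -> None:
        """Advance the model by one simulation tick."""


class HouseholdLoadModel(AgentModel):
    def __init__(self, base_kw: float) -> None:
        self.base_kw = base_kw
        self.demand_kw = 0.0

    def step(self, t: int) -> None:
        # Toy behaviour: demand peaks in the evening hours.
        self.demand_kw = self.base_kw * (1.5 if 17 <= t % 24 <= 21 else 1.0)


class SolarPanelModel(AgentModel):
    def step(self, t: int) -> None:
        # Toy behaviour: generation only during daylight hours.
        self.output_kw = 2.0 if 8 <= t % 24 <= 16 else 0.0


# A registry plays the role of the plugin framework: new agent models can be
# added without changing the simulation loop.
REGISTRY: list[AgentModel] = [HouseholdLoadModel(base_kw=1.2), SolarPanelModel()]

for t in range(48):  # two simulated days, hourly ticks
    for model in REGISTRY:
        model.step(t)
```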
Abstract:
Topic modelling has been widely used in the fields of information retrieval, text mining, and machine learning. In this paper, we propose a novel model, the Pattern Enhanced Topic Model (PETM), which improves topic modelling by semantically representing topics with discriminative patterns, and contributes to information filtering by using the PETM to determine document relevance based on topic distributions and the maximum matched patterns proposed in this paper. Extensive experiments are conducted to evaluate the effectiveness of the PETM using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption of these approaches is that the documents in the collection are all about one topic. In reality, however, users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models representing multiple topics in a collection of documents, and it has been widely used in fields such as machine learning and information retrieval; its effectiveness in information filtering, however, has not been well explored. Patterns are generally considered more discriminative than single terms for describing documents, but the enormous number of discovered patterns hinders their effective and efficient use in real applications. Selecting the most discriminative and representative patterns from this large set therefore becomes crucial. To deal with these limitations, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model are: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are proposed to estimate document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
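The abstract does not give the MPBTM ranking function, so the following is only a rough Python sketch of the general idea of weighting each user-profile topic by the largest ("maximum matched") pattern it matches in a document. The weighting scheme, data, and all names below are hypothetical.

```python
# Rough sketch of pattern-based topic relevance scoring; the exact MPBTM
# formulation is not given in the abstract, so this is illustrative only.

def score_document(doc_terms: set[str],
                   topic_weights: dict[str, float],
                   topic_patterns: dict[str, list[frozenset[str]]]) -> float:
    """Weight each topic by the longest pattern of that topic matched in the document."""
    score = 0.0
    for topic, weight in topic_weights.items():
        matched = [p for p in topic_patterns[topic] if p <= doc_terms]
        if matched:
            longest = max(matched, key=len)           # the "maximum matched" pattern
            score += weight * len(longest)
    return score


# Toy example: a user profile with two topics and their discriminative patterns.
topics = {"databases": 0.7, "sport": 0.3}
patterns = {
    "databases": [frozenset({"query"}), frozenset({"query", "index", "join"})],
    "sport": [frozenset({"match", "score"})],
}
doc = {"query", "index", "join", "optimizer"}
print(score_document(doc, topics, patterns))   # higher score -> more relevant
```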
Abstract:
Modern cancer research requires physiological, three-dimensional (3-D) cell culture platforms in which the physical and chemical characteristics of the extracellular matrix (ECM) can be modified. In this study, gelatine methacrylamide (GelMA)-based hydrogels were characterized and established as in vitro and in vivo spheroid-based models for ovarian cancer, reflecting the advanced disease stage of patients, with accumulation of multicellular spheroids in the tumour fluid (ascites). Polymer concentration (2.5-7% w/v) strongly influenced hydrogel stiffness (0.5 ± 0.2 kPa to 9.0 ± 1.8 kPa) but had little effect on solute diffusion: the diffusion coefficient of 70 kDa fluorescein isothiocyanate (FITC)-labelled dextran in 7% GelMA-based hydrogels was only 2.3-fold lower than in water. Hydrogels of medium concentration (5% w/v GelMA) and stiffness (3.4 kPa) allowed spheroid formation and high proliferation and metabolic rates. Inhibiting matrix metalloproteinases, and consequently ECM degradability, reduced spheroid formation and proliferation rates. The incorporation of the ECM components laminin-411 and hyaluronic acid further stimulated spheroid growth within GelMA-based hydrogels. The feasibility of pre-cultured GelMA-based hydrogels as spheroid carriers within an ovarian cancer animal model was demonstrated and led to tumour development and metastasis. These tumours were sensitive to treatment with the anti-cancer drug paclitaxel, but not the integrin antagonist ATN-161. While paclitaxel and its combination with ATN-161 resulted in a treatment response of 33-37.8%, ATN-161 alone had no effect on tumour growth and peritoneal spread. The semi-synthetic biomaterial GelMA combines relevant natural cues with tunable properties, providing an alternative, bioengineered 3-D cancer cell culture for in vitro and in vivo model systems.
Abstract:
We construct a two-scale mathematical model for modern, high-rate LiFePO4 cathodes. We attempt to validate it against experimental data using two recently developed forms of the phase-field model to represent the concentration of Li+ in nano-sized LiFePO4 crystals, and we also compare this with the shrinking-core-based model we developed previously. Validating against high-rate experimental data, in which electronic and electrolytic resistances have been reduced, is an excellent test of the validity of the crystal-scale model used to represent the phase change that may occur in LiFePO4 material. We obtain poor fits with the shrinking-core-based model, even when fitting with "effective" parameter values. Surprisingly, using the more sophisticated phase-field models at the crystal scale results in poorer fits, though a significant parameter regime could not be investigated due to numerical difficulties. Separately from the fits obtained, using phase-field-based models embedded in a two-scale cathodic model results in "many-particle" effects consistent with those reported recently.
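The abstract does not reproduce the crystal-scale formulation used in the paper; for orientation only, a generic Cahn-Hilliard-type phase-field evolution for the normalised Li concentration c takes the form below. The paper's specific free energy, mobility and boundary conditions may differ.

```latex
% Generic Cahn--Hilliard-type phase-field evolution for the normalised Li
% concentration c (for orientation only; the paper's crystal-scale model may differ).
\begin{align}
  \frac{\partial c}{\partial t} &= \nabla \cdot \left( M \, \nabla \mu \right), \\
  \mu &= \frac{\partial f_{\mathrm{hom}}(c)}{\partial c} - \kappa \, \nabla^{2} c,
\end{align}
```

where M is a mobility, f_hom is a non-convex homogeneous free-energy density whose double-well structure drives the phase separation, and kappa is the gradient-energy coefficient.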
Abstract:
Traffic incidents are key contributors to non-recurrent congestion, potentially generating significant delay. Factors that influence the duration of incidents are important to understand so that effective mitigation strategies can be implemented. To identify and quantify the effects of influential factors, a methodology for studying total incident duration based on historical data from an ‘integrated database’ is proposed. Incident duration models are developed using a selected freeway segment in the Southeast Queensland (Australia) network. The models include incident detection and recovery time as components of incident duration. A hazard-based duration modelling approach is applied to model incident duration as a function of a variety of factors that influence it. Parametric accelerated failure time survival models are developed to capture heterogeneity as a function of explanatory variables, with both fixed- and random-parameter specifications. The analysis reveals that factors affecting incident duration include incident characteristics (severity, type, injury, medical requirements, etc.), infrastructure characteristics (roadway shoulder availability), time of day, and traffic characteristics. The results indicate that event-type durations differ distinctly, thus requiring different responses to clear them effectively. Furthermore, the results highlight the presence of unobserved incident duration heterogeneity as captured by the random-parameter models, suggesting that additional factors need to be considered in future modelling efforts.
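For reference, the standard accelerated failure time (AFT) specification expresses the log of incident duration as a linear function of covariates; the particular error distribution (e.g. Weibull, log-logistic, log-normal) and the random-parameter treatment used in the paper are not stated in the abstract.

```latex
% Standard accelerated failure time (AFT) specification for incident duration T_i.
\begin{equation}
  \ln T_i = \beta' \mathbf{x}_i + \epsilon_i,
  \qquad
  S(t \mid \mathbf{x}_i) = S_0\!\left( t \, e^{-\beta' \mathbf{x}_i} \right),
\end{equation}
```

where x_i collects the incident, infrastructure, time-of-day and traffic covariates, beta the fixed or random coefficients, and S_0 the baseline survival function; covariates thus accelerate or decelerate the time to clearance multiplicatively.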
Abstract:
Process improvement and innovation are risky endeavors, like swimming in unknown waters. In this chapter, I will discuss how process innovation through BPM can benefit from Research-as-a-Service, that is, from the application of research concepts in the processes of BPM projects. A further subject will be how innovations can be converted from confidence-based to evidence-based models due to affordances of digital infrastructures such as large-scale enterprise software or social media. I will introduce the relevant concepts, provide illustrations of digital capabilities that allow for innovation, and share a number of key takeaway lessons on how organizations can innovate on the basis of digital opportunities and the principle of evidence-based BPM: founding all process decisions on facts rather than fiction.
Abstract:
As a new research method supplementing existing qualitative and quantitative approaches, agent-based modelling and simulation (ABMS) may fit well within the entrepreneurship field because the core concepts and basic premises of entrepreneurship coincide with the characteristics of ABMS (McKelvey, 2004; Yang & Chandra, 2013). Agent-based simulation is a simulation method based on agent-based models, which are composed of heterogeneous agents and their behavioural rules. By repeatedly running agent-based simulations on a computer, the simulations reproduce each agent's behaviour, their interactions, and the macroscopic phenomena that emerge over time. Using agent-based simulations, researchers can investigate the temporal or dynamic effects of each agent's behaviour.
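A minimal Python skeleton of the structure described above (heterogeneous agents, simple behavioural rules, repeated time steps from which a macroscopic pattern emerges) might look as follows. The domain and all parameters are placeholders, not taken from the cited entrepreneurship studies.

```python
# Minimal agent-based simulation skeleton: heterogeneous agents, a behavioural
# rule, and repeated time steps. All parameters are hypothetical placeholders.

import random


class Agent:
    def __init__(self, risk_tolerance: float) -> None:
        self.risk_tolerance = risk_tolerance   # heterogeneity across agents
        self.is_entrepreneur = False

    def step(self, peer_share: float) -> None:
        # Behavioural rule: entry becomes more likely as more peers have entered.
        if not self.is_entrepreneur:
            if random.random() < self.risk_tolerance * (0.05 + peer_share):
                self.is_entrepreneur = True


random.seed(1)
agents = [Agent(risk_tolerance=random.uniform(0.0, 1.0)) for _ in range(1000)]

for t in range(50):                                   # repeated simulation steps
    share = sum(a.is_entrepreneur for a in agents) / len(agents)
    for a in agents:
        a.step(share)
    # The time series of `share` is the emergent macroscopic phenomenon.

print(sum(a.is_entrepreneur for a in agents) / len(agents))
```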
Abstract:
Bone metastasis is a complication that occurs in 80 % of women with advanced breast cancer. Despite the prevalence of bone metastatic disease, the avenues for its clinical management are still restricted to palliative treatment options. In fact, the underlying mechanisms of breast cancer osteotropism have not yet been fully elucidated due to a lack of suitable in vivo models that are able to recapitulate the human disease. In this work, we review the current transplantation-based models to investigate breast cancer-induced bone metastasis and delineate the strengths and limitations of the use of different grafting techniques, tissue sources, and hosts. We further show that humanized xenograft models incorporating human cells or tissue grafts at the primary tumor site or the metastatic site mimic more closely the human disease. Tissue-engineered constructs are emerging as a reproducible alternative to recapitulate functional humanized tissues in these murine models. The development of advanced humanized animal models may provide better platforms to investigate the mutual interactions between human cancer cells and their microenvironment and ultimately improve the translation of preclinical drug trials to the clinic.
Abstract:
In this article, we describe and compare two individual-based models constructed to investigate how genetic factors influence the development of phosphine resistance in the lesser grain borer (R. dominica). One model is based on the simplifying assumption that resistance is conferred by alleles at a single locus, while the other is based on the more realistic assumption that resistance is conferred by alleles at two separate loci. We simulated the population dynamics of R. dominica in the absence of phosphine fumigation, and under high- and low-dose phosphine treatments, and found important differences between the predictions of the two models in all three cases. In the absence of fumigation, starting from the same initial genotype frequencies, the two models tended to different stable frequencies, although both reached Hardy-Weinberg equilibrium. The one-locus model overestimated the equilibrium proportion of strongly resistant beetles by a factor of 3.6 compared to the aggregated predictions of the two-locus model. Under a low-dose treatment, the one-locus model overestimated the proportion of strongly resistant individuals within the population and underestimated the total population numbers compared to the two-locus model. These results show the importance of basing resistance-evolution models on realistic genetics, and that using oversimplified one-locus models to develop pest control strategies risks failing to identify tactics that minimise the incidence of pest infestation.
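As a minimal illustration of the one-locus bookkeeping underlying the simpler of the two models, the sketch below computes Hardy-Weinberg genotype frequencies for a resistance allele at frequency p. The fitnesses, fumigation mortalities and two-locus interactions actually used in the paper are not given in the abstract.

```python
# One-locus Hardy-Weinberg genotype frequencies for a resistance allele at
# frequency p; the paper's fitness and mortality parameters are not shown here.

def hardy_weinberg(p: float) -> dict[str, float]:
    """Genotype frequencies under random mating for resistance-allele frequency p."""
    q = 1.0 - p
    return {
        "SS (susceptible)": q * q,
        "RS (heterozygote)": 2.0 * p * q,
        "RR (strongly resistant)": p * p,
    }


print(hardy_weinberg(p=0.01))
# With a rare allele (p = 0.01), strongly resistant RR homozygotes occur at
# p**2 = 1e-4, which is why predictions for rare strong resistance are so
# sensitive to how many loci the model assumes resistance depends on.
```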
Abstract:
Multi- and intralake datasets of fossil midge assemblages in surface sediments of small shallow lakes in Finland were studied to determine the most important environmental factors explaining trends in midge distribution and abundance. The aim was to develop palaeoenvironmental calibration models for the most important environmental variables for the purpose of reconstructing past environmental conditions. The developed models were applied to three high-resolution fossil midge stratigraphies from southern and eastern Finland to interpret environmental variability over the past 2000 years, with special focus on the Medieval Climate Anomaly (MCA), the Little Ice Age (LIA) and recent anthropogenic changes. The midge-based results were compared with physical properties of the sediment, historical evidence and environmental reconstructions based on diatoms (Bacillariophyta), cladocerans (Crustacea: Cladocera) and tree rings. The results showed that the most important environmental factor controlling midge distribution and abundance along a latitudinal gradient in Finland was the mean July air temperature (TJul). However, when the dataset was environmentally screened to include only pristine lakes, water depth at the sampling site became more important. Furthermore, when the dataset was geographically scaled to southern Finland, hypolimnetic oxygen conditions became the dominant environmental factor. The results from an intralake dataset from eastern Finland showed that the most important environmental factors controlling midge distribution within a lake basin were river contribution, water depth and submerged vegetation patterns. In addition, the results of the intralake dataset showed that the fossil midge assemblages represent fauna that lived in close proximity to the sampling sites, thus enabling the exploration of within-lake gradients in midge assemblages. Importantly, this within-lake heterogeneity in midge assemblages may have effects on midge-based temperature estimations, because samples taken from the deepest point of a lake basin may infer considerably colder temperatures than expected, as shown by the present test results. Therefore, it is suggested here that the samples in fossil midge studies involving shallow boreal lakes should be taken from the sublittoral, where the assemblages are most representative of the whole lake fauna. Transfer functions between midge assemblages and the environmental forcing factors that were significantly related to the assemblages, including mean July air temperature (TJul), water depth, hypolimnetic oxygen, stream flow and distance to littoral vegetation, were developed using weighted averaging (WA) and weighted averaging-partial least squares (WA-PLS) techniques, which outperformed all the other tested numerical approaches. Application of the models in downcore studies showed mostly consistent trends. Based on the present results, which agreed with previous studies and historical evidence, the Medieval Climate Anomaly between ca. 800 and 1300 AD in eastern Finland was characterized by warm temperature conditions and dry summers, but probably humid winters. The LIA prevailed in southern Finland from ca. 1550 to 1850 AD, with the coldest conditions occurring at ca. 1700 AD, whereas in eastern Finland the cold conditions prevailed over a longer time period, from ca. 1300 until 1900 AD. The recent climatic warming was clearly represented in all of the temperature reconstructions.
In terms of long-term climatology, the present results support the concept that the North Atlantic Oscillation (NAO) index has a positive correlation with winter precipitation and annual temperature and a negative correlation with summer precipitation in eastern Finland. In general, the results indicate a relatively warm climate with dry summers but snowy winters during the MCA, and a cool climate with rainy summers and dry winters during the LIA. The results of the present reconstructions, and the forthcoming applications of the models, can be used in assessments of long-term environmental dynamics to refine the understanding of past environmental reference conditions and natural variability required by environmental scientists, ecologists and policy makers to make decisions concerning the global, regional and local changes presently occurring. The midge-based models developed in this thesis for temperature, hypolimnetic oxygen, water depth, littoral vegetation shift and stream flow are open for scientific use on request.
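For reference, the weighted-averaging (WA) transfer functions mentioned above rest on the standard two-step estimate shown below; the deshrinking step and the WA-PLS component selection used in the thesis are not detailed in this abstract.

```latex
% Standard weighted-averaging (WA) transfer-function equations.
\begin{align}
  \hat{u}_k &= \frac{\sum_{i} y_{ik}\, x_i}{\sum_{i} y_{ik}}
  && \text{(optimum of taxon $k$: abundance-weighted mean of variable $x$)} \\
  \hat{x}_0 &= \frac{\sum_{k} y_{0k}\, \hat{u}_k}{\sum_{k} y_{0k}}
  && \text{(reconstruction for fossil sample $0$ from its assemblage $y_{0k}$)}
\end{align}
```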
Abstract:
In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. The sensitivity of numerical predictions to grid refinement, different convective differencing schemes and the choice of (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as model predictions proved to be significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced recirculation zone (IRZ) predicted by the models over an extended range of inlet conditions. Our main findings are: (i) the standard k-epsilon model performed best compared with experiment; (ii) no one inlet specification can simultaneously optimize the performance of the models considered; (iii) the RNG models predict both single-cell and double-cell IRZ characteristics, the latter both with and without additional internal stagnation points. The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not result in an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models a priori for optimal performance in swirl flows is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.
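Both the standard and RNG k-epsilon closures referenced above are eddy-viscosity models sharing the constitutive relation below; the variants differ chiefly in the model constants (C_mu is approximately 0.09 in the standard model and approximately 0.0845 in the RNG derivation) and in an additional strain-dependent term in the RNG dissipation-rate equation.

```latex
% Eddy-viscosity relation shared by the standard and RNG k-epsilon closures.
\begin{equation}
  \nu_t = C_\mu \, \frac{k^2}{\varepsilon}
\end{equation}
```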
Abstract:
A health-monitoring and life-estimation strategy for composite rotor blades is developed in this work. The cross-sectional stiffness reduction obtained from physics-based models is expressed as a function of the life of the structure using a recent phenomenological damage model. This stiffness reduction is then used to study the behavior of measurable system parameters such as blade deflections, loads, and strains of a composite rotor blade in static analysis and forward flight. The simulated measurements are obtained using an aeroelastic analysis of the composite rotor blade based on finite elements in space and time, with physics-based damage modes that are then linked to the life consumption of the blade. The model-based measurements are contaminated with noise to simulate real data. Genetic fuzzy systems are developed for global online prediction of physical damage and life consumption using displacement- and force-based measurement deviations between damaged and undamaged conditions. Furthermore, local online prediction of physical damage and life consumption is done using strains measured along the blade length. It is observed that the life consumption in the matrix-cracking zone is about 12-15% and in the debonding/delamination zone about 45-55% of the total life of the blade. It is also observed that the success rate of the genetic fuzzy systems depends upon the number and type of measurements and on the training and testing noise levels. The genetic fuzzy systems work quite well with noisy data and are recommended for online structural health monitoring of composite helicopter rotor blades.
Abstract:
This paper deals with the quasi-static and dynamic mechanical analysis of montmorillonite-filled polypropylene composites. Nanocomposites were prepared by blending montmorillonite (nanoclay), varying from 3 to 9% by weight, with polypropylene. The dynamic mechanical properties of PP and the nanocomposites, such as storage modulus, loss modulus and mechanical loss factor, were investigated over a range of temperatures and frequencies. Results showed better mechanical and thermomechanical properties at higher concentrations of nanoclay. Regression-based models for the storage modulus were developed through design of experiments (DOE), and the measured storage modulus was compared with the predictions of theoretical models and the DOE-based models.
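A minimal sketch of a DOE-style regression of the kind described above, fitting storage modulus as a polynomial function of nanoclay content, temperature and frequency, might look as follows. The design points and response values below are placeholders, not the paper's measurements.

```python
# Sketch of a response-surface regression over a two-level factorial design:
# storage modulus vs. nanoclay content, temperature and frequency.
# All numbers are placeholders, not the paper's data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Columns: nanoclay wt%, temperature (deg C), frequency (Hz) -- a 2^3 factorial design.
X = np.array([
    [3.0, 30.0, 1.0], [3.0, 30.0, 10.0], [3.0, 90.0, 1.0], [3.0, 90.0, 10.0],
    [9.0, 30.0, 1.0], [9.0, 30.0, 10.0], [9.0, 90.0, 1.0], [9.0, 90.0, 10.0],
])
y = np.array([1.9, 2.1, 1.1, 1.3, 2.8, 3.0, 1.6, 1.8])  # storage modulus (GPa), placeholders

# Main effects plus two-factor interactions, as in a typical DOE regression model.
model = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LinearRegression(),
)
model.fit(X, y)
print(model.predict(np.array([[6.0, 60.0, 5.0]])))  # predicted modulus at a new setting
```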
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and its lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two software packages on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in software more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two software packages in every aspect of each analysis through to instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three software packages and the results compared. The pushover curves from each package were compared, and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.