15 results for Conceptual site models

in Aston University Research Archive


Relevance: 40.00%

Abstract:

The soil-plant-moisture subsystem is an important component of the hydrological cycle. Over the last 20 or so years a number of computer models of varying complexity have represented this subsystem with differing degrees of success. The aim of this present work has been to improve and extend an existing model. The new model is less site specific, thus allowing for the simulation of a wide range of soil types and profiles. Several processes not included in the original model are simulated by the inclusion of new algorithms, including macropore flow, hysteresis and plant growth. Changes have also been made to the infiltration, water uptake and water flow algorithms. Using field data from various sources, regression equations have been derived which relate parameters in the suction-conductivity-moisture content relationships to easily measured soil properties such as particle-size distribution data. Independent tests have been performed on laboratory data produced by Hedges (1989). The parameters found by regression for the suction relationships were then used in equations describing the infiltration and macropore processes. An extensive literature review produced a new model for calculating plant growth from actual transpiration, which was itself partly determined by the root densities and leaf area indices derived by the plant growth model. The new infiltration model uses intensity/duration curves to disaggregate daily rainfall inputs into hourly amounts. The final model has been calibrated and tested against field data, and its performance compared to that of the original model. Simulations have also been carried out to investigate the effects of various parameters on infiltration, macropore flow, actual transpiration and plant growth. Qualitative comparisons have been made between these results and data given in the literature.
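
To illustrate the kind of relationship the regression step produces, the sketch below assumes a Campbell-style suction-moisture curve and a hypothetical regression from particle-size data to its exponent; the functional form, names and coefficients are ours, not the thesis's fitted values.

```python
# A minimal sketch, assuming a Campbell-style suction-moisture relation;
# all coefficients below are placeholders, not the thesis's fitted values.

def campbell_suction(theta, theta_sat, psi_e, b):
    """Matric suction (kPa) as a function of volumetric moisture content."""
    return psi_e * (theta / theta_sat) ** (-b)

def b_from_texture(percent_clay, percent_sand):
    """Hypothetical regression from particle-size data to the exponent b."""
    return 2.0 + 0.1 * percent_clay - 0.01 * percent_sand

b = b_from_texture(percent_clay=25, percent_sand=40)
print(campbell_suction(theta=0.25, theta_sat=0.45, psi_e=3.0, b=b))
```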

Relevance: 30.00%

Abstract:

The 10th anniversary of the workshop Models@run.time was held at the 18th International Conference on Model Driven Engineering Languages and Systems. The workshop took place in the city of Ottawa, Canada, on the 29th of September 2015. The workshop was organized by Sebastian Götz, Nelly Bencomo, Gordon Blair and Hui Song. Here, we present a summary of the discussions and a synopsis of the topics highlighted during the workshop. The workshop received the award for the best workshop at the MODELS 2015 conference out of 18 workshops in total. The award was based upon the organization, program, web site, timing, and the feedback provided by the workshop participants.

Relevance: 30.00%

Abstract:

We introduce models of heterogeneous systems with finite connectivity defined on random graphs to capture finite-coordination effects on the low-temperature behaviour of finite-dimensional systems. Our models use a description in terms of small deviations of particle coordinates from a set of reference positions, particularly appropriate for the description of low-temperature phenomena. A Born-von Karman-type expansion with random coefficients is used to model effects of frozen heterogeneities. The key quantity appearing in the theoretical description is a full distribution of effective single-site potentials which needs to be determined self-consistently. If microscopic interactions are harmonic, the effective single-site potentials turn out to be harmonic as well, and the distribution of these single-site potentials is equivalent to a distribution of localization lengths used earlier in the description of chemical gels. For structural glasses characterized by frustration and anharmonicities in the microscopic interactions, the distribution of single-site potentials involves anharmonicities of all orders, and both single-well and double-well potentials are observed, the latter with a broad spectrum of barrier heights. The appearance of glassy phases at low temperatures is marked by the appearance of asymmetries in the distribution of single-site potentials, as previously observed for fully connected systems. Double-well potentials with a broad spectrum of barrier heights and asymmetries would give rise to the well-known universal glassy low-temperature anomalies when quantum effects are taken into account. © 2007 IOP Publishing Ltd.
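
Schematically, and in our own notation rather than the paper's, the two regimes described above can be written as follows; this is only an orienting sketch of the structure of the effective single-site potentials.

```latex
% Harmonic microscopic interactions: each effective single-site potential
% is a random spring, with k_i drawn from a self-consistent distribution P(k).
V_i^{\mathrm{eff}}(u) = \tfrac{1}{2}\, k_i\, u^2
% Glassy case: anharmonic terms of all orders, producing both single- and
% double-well shapes with a broad spectrum of barrier heights.
V_i^{\mathrm{eff}}(u) = \sum_{n \ge 2} a_n^{(i)}\, u^n
```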

Relevance: 30.00%

Abstract:

Foley [J. Opt. Soc. Am. A 11 (1994) 1710] has proposed an influential psychophysical model of masking in which mask components in a contrast gain pool are raised to an exponent before summation and divisive inhibition. We tested this summation rule in experiments in which contrast detection thresholds were measured for a vertical 1 c/deg (or 2 c/deg) sine-wave component in the presence of a 3 c/deg (or 6 c/deg) mask that had either a single component oriented at -45° or a pair of components oriented at ±45°. Contrary to the predictions of Foley's model 3, we found that for masks of moderate contrast and above, threshold elevation was predicted by linear summation of the mask components in the inhibitory stage of the contrast gain pool. We built this feature into two new models, referred to as the early adaptation model and the hybrid model. In the early adaptation model, contrast adaptation controls a threshold-like nonlinearity on the output of otherwise linear pathways that provide the excitatory and inhibitory inputs to a gain control stage. The hybrid model involves nonlinear and nonadaptable routes to excitatory and inhibitory stages as well as an adaptable linear route. With only six free parameters, both models provide excellent fits to the masking and adaptation data of Foley and Chen [Vision Res. 37 (1997) 2779] but unlike Foley and Chen's model, are able to do so with only one adaptation parameter. However, only the hybrid model is able to capture the features of Foley's (1994) pedestal plus orthogonal fixed mask data. We conclude that (1) linear summation of inhibitory components is a feature of contrast masking, and (2) that the main aftereffect of spatial adaptation on contrast increment thresholds can be assigned to a single site. © 2002 Elsevier Science Ltd. All rights reserved.
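
In schematic form (our notation, simplified from Foley's formulation), the two summation rules being contrasted are exponent-before-summation in the inhibitory gain pool versus the linear summation favoured by the data:

```latex
% Each mask component raised to an exponent before pooling (Foley-style):
R = \frac{(w_e C_t)^{p}}{Z + \sum_j (w_j C_j)^{q}}
% Linear summation of mask components before the exponent
% (the rule supported by the thresholds reported above):
R = \frac{(w_e C_t)^{p}}{Z + \bigl(\sum_j w_j C_j\bigr)^{q}}
```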

Relevance: 30.00%

Abstract:

The design and implementation of data bases involve, firstly, the formulation of a conceptual data model by systematic analysis of the structure and information requirements of the organisation for which the system is being designed; secondly, the logical mapping of this conceptual model onto the data structure of the target data base management system (DBMS); and thirdly, the physical mapping of this structured model into storage structures of the target DBMS. The accuracy of both the logical and physical mapping determines the performance of the resulting systems. This thesis describes research which develops software tools to facilitate the implementation of data bases. A conceptual model describing the information structure of a hospital is derived using the Entity-Relationship (E-R) approach, and this model forms the basis for the mapping onto the logical model. Rules are derived for automatically mapping the conceptual model onto relational and CODASYL types of data structures. Further algorithms are developed for partly automating the implementation of these models on the INGRES, MIMER and VAX-11 DBMSs.
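
One classical mapping rule of the kind described (each entity becomes a table; each many-to-many relationship becomes a table keyed on the participating entities' keys) can be sketched as follows. This is illustrative only; the function names and hospital attributes are ours, not the thesis software's.

```python
# Illustrative sketch of a classical E-R -> relational mapping rule;
# not the thesis tools. Table names and attributes are invented, integer
# keys are assumed, and foreign-key constraints are omitted for brevity.

def entity_to_table(name, key, attributes):
    """An entity becomes a table keyed on its identifier."""
    cols = ", ".join([f"{key} INT PRIMARY KEY"] + attributes)
    return f"CREATE TABLE {name} ({cols});"

def relationship_to_table(name, entity_keys):
    """A many-to-many relationship becomes a table whose key combines
    the keys of the participating entities."""
    cols = ", ".join(f"{col} {sqltype}" for col, sqltype in entity_keys)
    key = ", ".join(col for col, _ in entity_keys)
    return f"CREATE TABLE {name} ({cols}, PRIMARY KEY ({key}));"

print(entity_to_table("patient", "patient_no", ["name TEXT", "ward_no INT"]))
print(relationship_to_table("treats", [("doctor_no", "INT"), ("patient_no", "INT")]))
```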

Relevance: 30.00%

Abstract:

This thesis describes the procedure and results from four years' research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique VERT (Venture Evaluation and Review Technique) was used to model the pre-tender costs of public health, heating ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which previously had defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data was updated and adjusted using mechanical and electrical pre-tender cost indices and location, selection of contractor, contract sum, height and site condition factors. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From this data alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost significant items were isolated for closer examination. The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
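
The Monte Carlo step can be illustrated as below; VERT itself is a richer network technique layering time, cost and performance nodes on top of this idea, and every distribution and figure here is invented for the sketch.

```python
# Minimal Monte Carlo sketch of cost-range estimation. VERT adds networks
# of interrelated time/cost/performance nodes; all figures are invented.
import random

def simulate_services_cost(n_runs=10_000):
    totals = []
    for _ in range(n_runs):
        heating = random.normalvariate(120_000, 15_000)  # normal cost
        electrical = random.uniform(80_000, 110_000)     # uniform cost
        lifts = 60_000                                   # constant cost
        totals.append(heating + electrical + lifts)
    totals.sort()
    return totals[int(0.10 * n_runs)], totals[int(0.90 * n_runs)]

low, high = simulate_services_cost()
print(f"80% of simulated totals lie between {low:,.0f} and {high:,.0f}")
```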

Relevance: 30.00%

Abstract:

Existing theories of semantic cognition propose models of cognitive processing occurring in a conceptual space, where ‘meaning’ is derived from the spatial relationships between concepts’ mapped locations within the space. Information visualisation is a growing area of research within the field of information retrieval, and methods for presenting database contents visually in the form of spatial data management systems (SDMSs) are being developed. This thesis combined these two areas of research to investigate the benefits of employing spatial-semantic mapping (documents represented as objects in two- and three-dimensional virtual environments are mapped proximally according to the semantic similarity of their content) as a tool for improving retrieval performance and navigational efficiency when browsing for information within such systems. Positive effects associated with the quality of document mapping were observed; improved retrieval performance and browsing behaviour were witnessed when mapping was optimal. It was also shown that using a third dimension for virtual environment (VE) presentation provides sufficient additional information regarding the semantic structure of the environment that performance is increased in comparison to using two dimensions for mapping. A model that describes the relationship between retrieval performance and browsing behaviour was proposed on the basis of these findings. Individual differences were not found to have any observable influence on retrieval performance or browsing behaviour when mapping quality was good. The findings from this work have implications both for cognitive modelling of semantic information and for designing and testing information visualisation systems. These implications are discussed in the conclusions of this work.
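
A common way to realise such spatial-semantic mapping, given only pairwise document similarities, is classical multidimensional scaling; the pipeline below is a sketch of that general idea, not the thesis's implementation, and the toy term vectors are random.

```python
# Sketch: embed pairwise document dissimilarities into a 2-D or 3-D
# virtual environment with classical MDS. Not the thesis's pipeline.
import numpy as np

def classical_mds(dissim, dims=3):
    """Coordinates whose distances approximate the dissimilarity matrix."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (dissim ** 2) @ J         # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dims]      # largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

docs = np.random.rand(10, 50)                # toy document term vectors
unit = docs / np.linalg.norm(docs, axis=1, keepdims=True)
dissim = 1.0 - unit @ unit.T                 # cosine dissimilarity
positions = classical_mds(dissim, dims=3)    # object positions in the VE
```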

Relevance: 30.00%

Abstract:

Disturbances in electrolyte homeostasis are a frequent adverse side-effect of the administration of aminoglycoside antibiotics such as gentamicin, and the antineoplastic agent cis-platinum. The aims of this work were to further elucidate the site(s) and mechanism(s) by which these drugs may produce disturbances in the renal reabsorption of calcium and magnesium. These investigations were undertaken using a range of in vivo and in vitro techniques and models. Initially, a series of in vivo studies was conducted to delineate aspects of the acute and chronic effects of both drugs on renal electrolyte handling and to select and evaluate an appropriate animal model; subsequent investigations were focused on gentamicin. In a study of the acute and chronic effects of cis-platinum administration, there were pronounced acute changes in a variety of indices of nephrotoxic injury, including electrolyte excretion. Most effects resolved, but there were chronic increases in the urinary excretion of calcium and magnesium. The renal response of three strains of rat (Fischer 344, Sprague-Dawley (SD), and Wistar) to a range of doses of gentamicin was also investigated. Drug administration produced substantially different responses between strains, in particular marked differences in calcium and magnesium excretion. The results suggested that the SD rat was an appropriately sensitive strain for use in further investigations. Acute infusion of gentamicin in the anaesthetised SD rat produced rapid, substantial increases in the fractional excretion of calcium and magnesium, while sodium and potassium output were unaffected, confirming previous results of similar experiments using F344 rats. Studies using lithium clearance measurements in the anaesthetised SD rat were undertaken to investigate the effects of gentamicin on proximal tubular calcium reabsorption. Lithium clearance was unaffected by acute gentamicin infusion, suggesting that the site of acute gentamicin-induced hypercalciuria may not be located in the proximal tubule. Inhibition of Ca2+ ATPase activity was investigated as a potential mechanism by which calcium reabsorption could be affected after aminoglycoside administration. In vitro, both Ca2+ ATPase and Na+/K+ ATPase activity could be similarly inhibited by the presence of aminoglycosides, in a dose-related manner. Whilst inhibition of Na+/K+ ATPase could be demonstrated biochemically after in vivo administration of gentamicin, there were no concurrent effects on Ca2+ ATPase activity, suggesting that inhibition of Ca2+ ATPase activity is unlikely to be a primary mechanism of aminoglycoside-induced reductions of calcium reabsorption. Histochemical studies could not discern inhibition of either Na+/K+ ATPase or Ca2+ ATPase activity after in vivo administration of gentamicin. Selection of renal cell lines for further investigative in vitro studies on the mechanisms of altered cation reabsorption was considered using MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) and Neutral Red cytotoxicity assays. The ability of LLC-PK1 and LLC-RK1 cell lines to correctly rank a series of nephrotoxic compounds according to their known nephrotoxic potency in vivo was studied. Using these cell lines grown on semi-permeable inserts, alterations in the paracellular transport of 45Ca were investigated as a possible mechanism by which gentamicin could alter calcium reabsorption in vivo.
Short-term exposure (1 h) of LLC-RK1 cells to gentamicin, via both cell surfaces, resulted in a reduction in paracellular permeability to both transepithelial 3H-mannitol and 45Ca fluxes. When LLC-RK1 cells were exposed via the apical surface only, similar dose-related reductions were seen to those observed when cells were exposed to the drug from both sides. Short-term basal exposure to gentamicin appeared to contribute less to the observed reductions in 3H-mannitol and 45Ca fluxes. Experiments investigating transepithelial movement of 45Ca and 3H-mannitol in LLC-PK1 cells after acute gentamicin exposure were inconclusive. Longer exposure (48 h) to gentamicin caused an increase in the permeability of the monolayer and a consequent increase in transepithelial 45Ca flux in the LLC-RK1 cell line; increases in permeability of LLC-PK1 cells to 45Ca and 3H-mannitol were not apparent under the same conditions. The site and mechanism by which gentamicin, in particular, alters calcium reabsorption cannot be definitively described from these studies. However, indirect evidence from the lithium clearance studies suggests that the site of the lesion is unlikely to be located in the proximal tubule. The mechanism by which gentamicin exposure alters calcium reabsorption may be by reducing paracellular permeability to calcium rather than by altering active calcium transport processes.
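
For reference, the fractional excretion index reported above is standard renal physiology rather than anything specific to this thesis: the clearance of the cation of interest expressed relative to creatinine clearance.

```latex
% Fractional excretion of a cation x, with U and P the urinary and plasma
% concentrations and creatinine (cr) used as the filtration marker:
FE_x = \frac{C_x}{C_{cr}} = \frac{U_x \, P_{cr}}{P_x \, U_{cr}} \times 100\%
```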

Relevance: 30.00%

Abstract:

The control needed in the management of a project was analysed with particular reference to the unique needs of the construction industry within the context of site management. This was explored further by analysing the various problems facing managers within the overall system and determining to what extent the organisation would benefit from an integrated management information system. Integration and management of information within the organisational units and the cycles of events that make up the main sub-system were suggested as the means of achieving this objective. A conceptual model of the flow of information was constructed within the whole process of project management by examining the type of information and documents which are generated for the production cycle of a project. This model was analysed with respect to the site managers' needs and the minimum requirements for an overall integrated system. The most tedious and time-consuming tasks facing the site manager are the determination of weekly production costs, the calculation and preparation of interim certificates, the valuation of variations occurring during the production stage and, finally, the settlement and preparation of supplier and sub-contractors' accounts. These areas, where microcomputers could be of most help, were identified, and a number of packages were designed and implemented for various contractors. The gradual integration of stand-alone packages within the whole of the construction industry is a logical route towards an integrated management system. The methods of doing this were analysed together with the resulting advantages and disadvantages.
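
In its simplest form, the interim-certificate arithmetic mentioned above reduces to the value of work done less retention and previously certified amounts; the sketch below is ours, with an invented retention rate and figures.

```python
# Toy sketch of an interim certificate calculation; the retention rate and
# figures are invented, and real valuations also handle variations,
# materials on site, fluctuations, etc.
def interim_certificate(work_done_to_date, retention_rate, previously_certified):
    retention = work_done_to_date * retention_rate
    return work_done_to_date - retention - previously_certified

print(interim_certificate(250_000, 0.05, 180_000))  # 57500.0 due this period
```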

Relevance: 30.00%

Abstract:

This paper presents a causal explanation of formative variables that unpacks and clarifies the generally accepted idea that formative indicators are ‘causes’ of the focal formative variable. In doing this, we explore the recent paper by Diamantopoulos and Temme (AMS Review, 3(3), 160-171, 2013) and show that the latter misunderstand the stance of Lee, Cadogan, and Chamberlain (AMS Review, 3(1), 3-17, 2013; see also Cadogan, Lee, and Chamberlain, AMS Review, 3(1), 38-49, 2013). By drawing on the multiple ways that one can interpret the idea of causality within the MIMIC model, we then demonstrate how the continued defense of the MIMIC model as a tool to validate formative indicators and to identify formative variables in structural models is misguided. We also present unambiguous recommendations on how formative variables can be modelled in lieu of the formative MIMIC model.
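
For orientation, the standard MIMIC specification under discussion (textbook notation, not the authors') models a latent variable with formative 'causes' x and reflective indicators y:

```latex
% Latent variable eta with formative 'causes' x and reflective indicators y;
% the paper argues this set-up cannot validate the formative indicators.
\eta = \boldsymbol{\gamma}'\mathbf{x} + \zeta, \qquad
\mathbf{y} = \boldsymbol{\lambda}\,\eta + \boldsymbol{\varepsilon}
```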

Relevance: 30.00%

Abstract:

A cross-country pipeline construction project is exposed to an uncertain environment due to its enormous size (physical, manpower requirement and financial value), complexity in design technology and involvement of external factors. These uncertainties can lead to several changes in project scope during the process of project execution. Unless the changes are properly controlled, the time, cost and quality goals of the project may never be achieved. A methodology is proposed for project control through risk analysis, contingency allocation and hierarchical planning models. Risk analysis is carried out through the analytic hierarchy process (AHP) due to the subjective nature of risks in construction projects. The results of the risk analysis are used to determine the logical contingency for project control with the application of probability theory. Ultimate project control is carried out by a hierarchical planning model which enables decision makers to take vital decisions in the changing environment of the construction period. Goal programming (GP), a multiple criteria decision-making technique, is proposed for model formulation because of its flexibility and priority-based structure. The project is planned hierarchically in three levels—project, work package and activity. GP is applied separately at each level. The decision variables of each model are different planning parameters of the project. In this study, models are formulated from the owner's perspective and the methodology's effectiveness in project control is demonstrated.
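
The AHP step can be sketched as below using the standard eigenvector method (this is not the authors' code, and the comparison matrix is a toy example): risk sources are compared pairwise, and the principal eigenvector of the comparison matrix gives the relative risk weights feeding the contingency calculation.

```python
# Sketch of the AHP priority calculation: the weight vector is the
# principal eigenvector of the pairwise comparison matrix. Toy data only.
import numpy as np

def ahp_weights(pairwise):
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Hypothetical comparison of three risk sources
# (design, external, construction) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A))  # roughly [0.65, 0.23, 0.12]
```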

Relevance: 30.00%

Abstract:

The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide−MHC binding affinity. The ISC−PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide−MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method was applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistics (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistics (r2 and SEE) ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
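
The additive idea underlying the method can be sketched as follows; the per-position coefficients here are invented, whereas the real ones are fitted by PLS and refined over the ISC iterations described above.

```python
# Sketch of additive peptide-MHC scoring: the predicted affinity is a
# constant plus a fitted contribution for the amino acid at each core
# position. Coefficient values below are invented, not the published ones.
def additive_affinity(core, contributions, const=0.0):
    return const + sum(
        contributions.get((pos, aa), 0.0)
        for pos, aa in enumerate(core, start=1)
    )

contrib = {(1, "Y"): 0.42, (9, "L"): 0.31}  # hypothetical anchor terms
print(additive_affinity("YAAAAAAAL", contrib, const=5.0))  # 5.73
```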

Relevance: 30.00%

Abstract:

Purpose – The purpose of this paper is to outline a seven-phase simulation conceptual modelling procedure that incorporates existing practice and embeds a process reference model (i.e. SCOR). Design/methodology/approach – An extensive review of the simulation and SCM literature identifies a set of requirements for a domain-specific conceptual modelling procedure. The associated design issues for each requirement are discussed, and the utility of SCOR in the process of conceptual modelling is demonstrated using two development cases. Ten key concepts are synthesised and aligned to a general process for conceptual modelling. Further work is outlined to detail, refine and test the procedure with different process reference models in different industrial contexts. Findings – Simulation conceptual modelling is often regarded as the most important yet least understood aspect of a simulation project (Robinson, 2008a). Even today, there has been little research into guidelines to aid in the creation of a conceptual model. Design issues are discussed for building an ‘effective’ conceptual model, and the domain-specific requirements for modelling supply chains are addressed. The ten key concepts are incorporated to aid in describing the supply chain problem (i.e. the components and relationships that need to be included in the model), model content (i.e. rules for determining the simplest model boundary and level of detail to implement the model) and model validation. Originality/value – The paper addresses Robinson's (2008a) call for research in defining and developing new approaches for conceptual modelling and Manuj et al.'s (2009) discussion on improving the rigour of simulation studies in SCM. It is expected that more detailed guidelines will yield benefits to both expert modellers (i.e. averting typical modelling failures) and novice modellers (i.e. guided practice; less reliance on hopeful intuition).

Relevance: 30.00%

Abstract:

In recent decades, a number of sustainable strategies and policies have been created to protect and preserve our water environments from the impacts of growing communities. The Australian approach, Water Sensitive Urban Design (WSUD), defined as the integration of urban planning and design with urban water cycle management, has made considerable advances in design guidelines since 2000. WSUD stormwater management systems (e.g. wetlands, bioretention systems, porous pavements), also known as Best Management Practices (BMPs) or Low Impact Development (LID), are slowly gaining popularity across Australia, the USA and Europe. There have also been significant improvements in how to model the performance of WSUD technologies (e.g. the MUSIC software). However, the implementation issues of these WSUD practices are mainly related to ongoing institutional capacity. Some of the key problems are associated with the limited awareness of urban planners and designers; in general, they have very little knowledge of these systems and their benefits to urban environments. At the same time, hydrological engineers should have a better understanding of building codes and master plans. Land use regulations are equally as important as the physical site conditions for determining opportunities and constraints for implementing WSUD techniques. There is a need for procedures that better link urban planners with WSUD engineering practices. Thus, this paper aims to present the development of a general framework for incorporating WSUD technologies into the site planning process. The study was applied at the lot scale in the Melbourne region, Australia. Results show the potential space available for fitting WSUD elements, according to building requirements and different types of housing densities. © 2011 WIT Press.

Relevance: 30.00%

Abstract:

Radio Frequency Identification (RFID) technology adoption in healthcare settings has the potential to reduce errors, improve patient safety, streamline operational processes and enable the sharing of information throughout supply chains. RFID adoption in the English NHS is limited to isolated pilot studies. Firstly, this study investigates the drivers of and inhibitors to RFID adoption in the English NHS from the perspective of the GS1 Healthcare User Group (HUG), tasked with coordinating adoption across the private and public sectors. Secondly, a conceptual model has been developed and deployed, combining two of foresight's most popular methods: scenario planning and technology roadmapping. The model addresses the weaknesses of each foresight technique as well as capitalizing on their individual, inherent strengths. Semi-structured interviews, scenario planning workshops and a technology roadmapping exercise were conducted with the members of the HUG over an 18-month period. An action research mode of enquiry was utilized with a thematic analysis approach for the identification and discussion of the drivers and inhibitors of RFID adoption. The results of the conceptual model are analysed in comparison to other similar models. There are implications for managers responsible for RFID adoption in both the NHS and its commercial partners, and for foresight practitioners. Managers can leverage the insights gained from identifying the drivers and inhibitors to RFID adoption by making efforts to influence the removal of inhibitors and supporting the continuation of the drivers. The academic contribution of this aspect of the thesis is in the field of RFID adoption in healthcare settings. Drivers and inhibitors to RFID adoption in the English NHS are compared to those found in other settings. The implication for technology foresight practitioners is a proof of concept of a model combining scenario planning and technology roadmapping using a novel process. The academic contribution to the field of technology foresight is the conceptual development of a foresight model that combines two popular techniques, and a deployment of that model in a healthcare setting exploring the future of RFID technology.