16 results for Non-polarizable Water Models

in Aston University Research Archive


Relevance: 100.00%

Abstract:

This paper draws attention to the fact that traditional Data Envelopment Analysis (DEA) models do not provide the closest possible targets (or peers) to inefficient units, and presents a procedure to obtain such targets. It focuses on non-oriented efficiency measures (which assume that production units are able to control, and thus change, inputs and outputs simultaneously), measured both in relation to a Free Disposal Hull (FDH) technology and in relation to a convex technology. The approaches developed for finding close targets are applied to a sample of Portuguese bank branches.
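Although the paper's own procedure is optimisation-based, the FDH part of the idea can be sketched in a few lines: under free disposability alone, the attainable targets for an inefficient unit are the observed units that dominate it, and the closest target is the dominating unit requiring the smallest total change in inputs and outputs. The sketch below is illustrative only; the L1 distance, the function names and the toy data are assumptions, not the paper's specification.

```python
def dominates(a, b):
    """True if unit a dominates unit b: no larger inputs, no smaller outputs."""
    ax, ay = a
    bx, by = b
    return all(i <= j for i, j in zip(ax, bx)) and \
           all(o >= p for o, p in zip(ay, by))

def closest_fdh_target(unit, observed):
    """Among observed dominating units, pick the one requiring the
    smallest total (L1) adjustment of inputs and outputs."""
    candidates = [u for u in observed if dominates(u, unit)]
    if not candidates:
        return unit  # unit is FDH-efficient: it is its own target

    def l1(u):
        ux, uy = u
        x, y = unit
        return sum(abs(i - j) for i, j in zip(ux, x)) + \
               sum(abs(o - p) for o, p in zip(uy, y))

    return min(candidates, key=l1)

# Toy data: units as (inputs, outputs)
units = [((4.0,), (6.0,)), ((3.0,), (7.0,)), ((2.0,), (5.0,))]
target = closest_fdh_target(((5.0,), (5.0,)), units)
```

Here the unit with input 5 and output 5 is projected onto ((4.0,), (6.0,)), the dominating observed unit needing the least total adjustment.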

Relevance: 100.00%

Abstract:

This thesis presents an investigation of a two-dimensional water model and the development of a multiscale method for modelling large systems, such as a virus in water or a peptide immersed in solvent. We have implemented a two-dimensional 'Mercedes Benz' (MB), or BN2D, water model using Molecular Dynamics. We have studied how its dynamical and structural properties depend on the model's parameters. For the first time, we have derived formulas to calculate thermodynamic properties of the MB model in the microcanonical (NVE) ensemble. We have also derived equations of motion in the isothermal–isobaric (NPT) ensemble, and have analysed the rotational degree of freedom of the model in both ensembles. We have developed and implemented a self-consistent multiscale method that couples the micro- and macroscales. This multiscale method assumes that matter consists of two phases, one related to the microscale and the other to the macroscale. We simulate the macroscale using Landau–Lifshitz fluctuating hydrodynamics, while we describe the microscale using Molecular Dynamics. We have demonstrated that communication between the disparate scales is possible without introducing a fictitious interface or approximations that reduce the accuracy of the information exchange between the scales. We have investigated the control parameters introduced to control the contribution of each phase to the behaviour of the matter. We have shown that the microscale inherits dynamical properties of the macroscale, and vice versa, depending on the concentration of each phase. We have shown that the Radial Distribution Function is not altered, and that velocity autocorrelation functions are gradually transformed from the Molecular Dynamics to the Fluctuating Hydrodynamics description, as the phase balance is changed. In this work we test our multiscale method on liquid argon and on the BN2D and SPC/E water models. For the SPC/E water model we investigate microscale fluctuations, which are computed using an advanced technique for mapping the small scales to the large scales, developed by Voulgarakis et al.
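One of the diagnostics mentioned above, the velocity autocorrelation function (VACF), has a standard estimator that can be sketched directly. This is a generic sketch, not the thesis code; the trajectory layout (frames x particles x dimensions) and the toy data are assumptions.

```python
import numpy as np

def vacf(vel):
    """Normalised VACF C(t) = <v(0).v(t)> / <v(0).v(0)>,
    averaged over particles and over time origins."""
    n_frames = vel.shape[0]
    c = np.zeros(n_frames)
    for lag in range(n_frames):
        # dot products v(t0) . v(t0 + lag), averaged over origins and particles
        dots = np.sum(vel[: n_frames - lag] * vel[lag:], axis=-1)
        c[lag] = dots.mean()
    return c / c[0]

rng = np.random.default_rng(0)
velocities = rng.normal(size=(200, 50, 2))   # uncorrelated toy 2-D data
c = vacf(velocities)
```

For uncorrelated toy velocities the VACF drops to near zero after lag 0; in an MD or hydrodynamic simulation its decay shape is what distinguishes the two descriptions.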

Relevance: 100.00%

Abstract:

We study the dynamics of a growing crystalline facet where the growth mechanism is controlled by the geometry of the local curvature. A continuum model in (2+1) dimensions, developed in analogy with the Kardar-Parisi-Zhang (KPZ) model, is considered for the purpose. Following standard coarse-graining procedures, it is shown that in the large-time, long-distance limit the continuum model predicts a curvature-independent KPZ phase, thereby suppressing all explicit effects of curvature and local pinning in the system in the "perturbative" limit. A direct numerical integration of this growth equation, in 1+1 dimensions, supports this observation below a critical parametric range, above which generic instabilities, in the form of isolated pillared structures, lead to deviations from standard scaling behaviour. Possibilities of controlling this instability by introducing statistically "irrelevant" (in the sense of renormalisation groups) higher-order nonlinearities are also discussed.
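The kind of direct numerical integration mentioned above can be sketched with a simple Euler scheme for the plain KPZ equation in 1+1 dimensions, dh/dt = nu * d2h/dx2 + (lambda/2) * (dh/dx)^2 + noise. This is a minimal stand-in, not the paper's curvature-dependent growth equation; the discretisation, parameter values and noise strength are illustrative assumptions.

```python
import numpy as np

def kpz_step(h, nu, lam, dt, dx, rng):
    """One Euler step of the 1+1-D KPZ equation with periodic boundaries."""
    lap = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx**2   # d2h/dx2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx)       # dh/dx
    noise = rng.normal(scale=np.sqrt(dt), size=h.shape)
    return h + dt * (nu * lap + 0.5 * lam * grad**2) + noise

rng = np.random.default_rng(1)
h = np.zeros(256)                    # flat initial interface
for _ in range(1000):
    h = kpz_step(h, nu=1.0, lam=1.0, dt=0.01, dx=1.0, rng=rng)
width = np.std(h)                    # interface width grows with time
```

In the stable (perturbative) regime the interface roughens smoothly; pushing the nonlinearity strength up is the kind of change that triggers the pillar-like instabilities the paper describes.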

Relevance: 100.00%

Abstract:

The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In conventional DEA, all data, including input and/or output ratios, are assumed to take the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.

Relevance: 100.00%

Abstract:

There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, to train such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
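The paper's training procedure is built on the EM algorithm. As a toy illustration of the EM idea itself (not the paper's latent variable density model), here is EM for a two-component one-dimensional Gaussian mixture with fixed unit variances and equal weights; the data and the initial means are synthetic assumptions.

```python
import numpy as np

def em_gmm(x, mu, n_iter=50):
    """EM for a 2-component 1-D Gaussian mixture, unit variances,
    equal mixing weights; mu holds the initial mean guesses."""
    mu = np.array(mu, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        d = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2)
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: means become responsibility-weighted averages
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return mu

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu = em_gmm(x, mu=[-1.0, 1.0])     # converges near the true means (-3, 3)
```

Each iteration alternates a soft assignment of points to 'causes' (E-step) with a re-estimation of those causes (M-step), the same two-phase structure the paper exploits for its non-linear model.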

Relevance: 100.00%

Abstract:

Linear models reach their limitations in applications with nonlinearities in the data. In this paper new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NNs) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed, to a certain extent, to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.

Relevance: 100.00%

Abstract:

This paper re-assesses three independently developed approaches aimed at solving the problem of zero weights or non-zero slacks in Data Envelopment Analysis (DEA): weight-restricted, non-radial and extended facet DEA models. Weight-restricted DEA models are dual to envelopment DEA models with restrictions on the dual variables (DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models that avoid non-zero slacks in the input-output constraints. Finally, extended facet DEA models recognize that only projections on facets of full dimension correspond to well-defined rates of substitution/transformation between all inputs/outputs, which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate how these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further, we propose a new approach that uses weight restrictions to extend existing facets. This approach has some computational advantages, because extended facet models normally make use of mixed integer programming, which is computationally demanding.

Relevance: 100.00%

Abstract:

The primary aim of this research has been the investigation of the role of water structuring effects in the widely different extents of irritancy displayed by certain antibiotics. The compounds involved were members of the lincomycin group of antibiotics. The aqueous solution behaviour of these compounds was studied using techniques such as vapour pressure osmometry and differential scanning calorimetry (DSC). The effects of the antibiotics on water structure in hydrogel membrane preparations, in which the equilibrium water content (EWC) and the constituent amounts of freezing and non-freezing water were varied, were also investigated using DSC. The permeability of water-swollen hydrogel preparations to aqueous antibiotic solutions, as well as to other solutes, was studied. A series of hydrogel preparations into which the antibiotics had been incorporated during polymerisation was developed and used in studies of the effects of the antibiotics and their water structure modifications on the permeation of a range of solutes.

Relevance: 100.00%

Abstract:

The aim of this work was to synthesise a series of hydrophilic derivatives of cis-1,2-dihydroxy-3,5-cyclohexadiene (cis-DHCD) and copolymerise them with 2-hydroxyethyl methacrylate (HEMA) to produce a completely new range of hydrogel materials. It is theorised that hydrogels incorporating such derivatives of cis-DHCD will exhibit good strength and elasticity in addition to good water-binding ability. The synthesis of derivatives was attempted by both enzymatic and chemical methods. Enzymatic synthesis involved the transesterification of cis-DHCD with a number of trichloro- and trifluoroethyl esters, using lipase from porcine pancreas to catalyse the reaction in organic solvent. Cyclohexanol was used in initial studies to assess the viability of enzyme-catalysed reactions. Chemical synthesis involved the epoxidation of a number of unsaturated carboxylic acids and the subsequent reaction of these epoxy acids with cis-DHCD in DCC/DMAP-catalysed esterifications. The silylation of cis-DHCD using TBDCS and BSA was also studied. The rate of aromatisation of cis-DHCD at room temperature was studied in order to assess its stability, and 1H NMR studies were undertaken to determine the conformations adopted by derivatives of cis-DHCD. The copolymerisation with HEMA of diepoxybutanoate, diepoxyundecanoate, dibutenoate and silyl-protected derivatives of cis-DHCD, to produce a new group of hydrogels, was investigated. The EWC and mechanical properties of these hydrogels were measured, and DSC was used to determine the amounts of freezing and non-freezing water in the membranes. The effect on EWC of opening the epoxide rings of the comonomers was also investigated.

Relevance: 100.00%

Abstract:

This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the EU over a 21-year period (not all member states of the EU could be included, due to data inadequacy). Three non-members, namely Switzerland, Japan and the US, were also included for comparison. This research was intended to provide an understanding of how telecoms in the European Union (EU) have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; and (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision as to whether harmonisation in telecoms regulation has been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research.
The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed, alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has reached or has not reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted mainly a proceduralist model, some have implemented more of a substantive model, and others have adopted a mix between both. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix between both approaches.

Relevance: 100.00%

Abstract:

This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely, recurrent neural networks and kernel recursive least squares regression - techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
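The benchmarking logic above, a kernel-based forecaster compared against the naive random walk (next value = current value), can be sketched as follows. The predictor below is plain kernel ridge regression on two lagged values, a deliberate simplification of the paper's recursive kernel method; the RBF kernel, the regularisation constant and the synthetic series are all assumptions.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_forecast(series, lam=1e-3):
    """One-step-ahead kernel ridge regression on (y_{t-2}, y_{t-1})."""
    X = np.column_stack([series[:-2], series[1:-1]])
    y = series[2:]
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    x_new = np.array([[series[-2], series[-1]]])
    return (rbf(x_new, X) @ alpha)[0]

t = np.arange(200)
series = np.sin(0.3 * t)        # predictable toy series, not real inflation data
pred = kernel_forecast(series)
naive = series[-1]              # naive random-walk forecast
truth = np.sin(0.3 * 200)
```

On a predictable series the kernel forecaster beats the random walk easily; the paper's finding is that on US inflation, adding monetary aggregates to such models did not help much.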

Relevance: 100.00%

Abstract:

Background - Vaccine development in the post-genomic era often begins with the in silico screening of genome information, with the most probable protective antigens being predicted rather than requiring causative microorganisms to be grown. Despite the obvious advantages of this approach, such as speed and cost efficiency, its success remains dependent on the accuracy of antigen prediction. Most approaches use sequence alignment to identify antigens. This is problematic for several reasons. Some proteins lack obvious sequence similarity, although they may share similar structures and biological properties. The antigenicity of a sequence may be encoded in a subtle and recondite manner not amenable to direct identification by sequence alignment. The discovery of truly novel antigens will be frustrated by their lack of similarity to antigens of known provenance. To overcome the limitations of alignment-dependent methods, we propose a new alignment-free approach for antigen prediction, which is based on auto cross covariance (ACC) transformation of protein sequences into uniform vectors of principal amino acid properties. Results - Bacterial, viral and tumour protein datasets were used to derive models for the prediction of whole-protein antigenicity. Each set consisted of 100 known antigens and 100 non-antigens. The derived models were tested by internal leave-one-out cross-validation and external validation using test sets. An additional five training sets for each class of antigens were used to test the stability of the discrimination between antigens and non-antigens. The models performed well in both validations, showing prediction accuracies of 70% to 89%. The models were implemented in a server, which we call VaxiJen. Conclusion - VaxiJen is the first server for alignment-independent prediction of protective antigens. It was developed to allow antigen classification solely based on the physicochemical properties of proteins, without recourse to sequence alignment.
The server can be used on its own or in combination with alignment-based prediction methods.
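A common form of the ACC transform can be sketched as follows: each residue is mapped to z amino acid property values, the properties are centred, and the variable-length sequence becomes a fixed-length vector of lagged (cross-)covariances. This is an illustrative sketch only; the two-property table below is hypothetical and is not VaxiJen's actual amino acid scales.

```python
import numpy as np

PROPS = {                        # two toy property values per residue (assumed)
    'A': (0.62, -0.5), 'C': (0.29, -1.0), 'D': (-0.90, 3.0),
    'E': (-0.74, 3.0), 'G': (0.48, 0.0),  'K': (-1.50, 3.0),
    'L': (1.06, -1.8), 'S': (-0.18, 0.3),
}

def acc(seq, max_lag=3):
    """Auto cross covariance vector of a protein sequence:
    length = max_lag * z * z, independent of sequence length."""
    E = np.array([PROPS[r] for r in seq], dtype=float)
    E -= E.mean(axis=0)                  # centre each property column
    n, z = E.shape
    out = []
    for lag in range(1, max_lag + 1):
        for j in range(z):
            for k in range(z):
                out.append(np.dot(E[:-lag, j], E[lag:, k]) / (n - lag))
    return np.array(out)

v = acc("ACDEGKLSAGE")           # 3 lags x 2 x 2 properties -> 12 numbers
```

The key point for alignment-free prediction is that sequences of any length map to vectors of the same dimension, which standard classifiers can then consume.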

Relevance: 100.00%

Abstract:

This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear, artificial intelligence techniques, namely, recurrent neural networks, evolution strategies and kernel methods in our forecasting experiment. In the experiment, these three methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; hence, our future work should combine the evolutionary and kernel methods to obtain the benefits of both.

Relevance: 100.00%

Abstract:

We survey articles covering how hedge fund returns are explained, using largely non-linear multifactor models that examine the non-linear pay-offs and exposures of hedge funds. We provide an integrated view of the implicit factor and statistical factor models that are largely able to explain the hedge fund return-generating process. We present their evolution through time by discussing pioneering studies that made a significant contribution to knowledge, and also recent innovative studies that examine hedge fund exposures using advanced econometric methods. This is the first review that analyzes very recent studies that explain a large part of hedge fund variation. We conclude by presenting some gaps for future research.

Relevance: 50.00%

Abstract:

Molecular transport in phase space is crucial for chemical reactions because it defines how pre-reactive molecular configurations are found during the time evolution of the system. Using atomistic trajectories from Molecular Dynamics (MD) simulations, we test the assumption of normal diffusion in phase space for bulk water at ambient conditions by checking the equivalence of the transport to a random walk model. Contrary to common expectations, we have found that some statistical features of the transport in phase space differ from those of normal diffusion models. This implies a non-random character of the path-search process by the reacting complexes in water solutions. Our further numerical experiments show that a significant period of non-stationarity in the transition probabilities of the segments of molecular trajectories can account for the observed non-uniform filling of the phase space. Surprisingly, the characteristic periods of this non-stationarity are hundreds of nanoseconds, a much longer time scale than the typical lifetimes of known liquid water molecular structures (several picoseconds).
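The random-walk consistency check described above rests on a standard property: for normal diffusion, the mean squared displacement (MSD) grows linearly with lag time. A minimal version of such a test, run on a synthetic random walk rather than the paper's MD trajectories, looks like this; the trajectory layout (frames x walkers x xyz) is an assumption.

```python
import numpy as np

def msd(pos, lags):
    """Mean squared displacement <|r(t0+lag) - r(t0)|^2>,
    averaged over time origins and over walkers."""
    return np.array([
        np.mean(np.sum((pos[lag:] - pos[:-lag]) ** 2, axis=-1))
        for lag in lags
    ])

rng = np.random.default_rng(3)
steps = rng.normal(size=(5000, 200, 3))   # frames x walkers x xyz, unit-variance steps
pos = np.cumsum(steps, axis=0)            # synthetic unbiased 3-D random walk
lags = np.array([1, 2, 4, 8, 16])
m = msd(pos, lags)
ratio = m / (3 * lags)                    # ~1 for normal diffusion with unit steps
```

For a true random walk the ratio stays flat across lags; systematic deviations from linear MSD growth are the kind of signature of anomalous, non-random transport the paper reports for water.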