25 results for PM3 semi-empirical method
in Aston University Research Archive
Abstract:
Aluminium (Al) is known to be neurotoxic and has been associated with the aetiology of Alzheimer's Disease. To date, only desferrioxamine (DFO), a trihydroxamic acid siderophore, has been used in the clinical environment for the removal of Al from the body. However, this drug is expensive, orally inactive and associated with many side effects. These studies employed a theoretical approach, with the use of quantum mechanics (QM) via semi-empirical molecular orbital (MO) calculations, and a practical approach using U87-MG glioblastoma cells as a model for evaluating the influence of potential chelators on the passage of aluminium into cells. Preliminary studies involving the Cambridge Structural Database (CSD) identified that Al prefers binding to bidentate ligands in a 3:1 manner, in which oxygen was the exclusive donating atom. Statistically significant differences in M-O bond lengths compared with other trivalent metal ions such as Fe3+ were established and used as an acceptance criterion for subsequent MO calculations. Of the semi-empirical methods parameterised for Al, the PM3 Hamiltonian was found to give the most reliable final optimised geometries of simple 3:1 Al complexes. Consequently, the PM3 Hamiltonian was used for evaluating the heats of formation (Hf) of 3:1 complexes with more complicated ligands. No correlation exists between published stability constants and individual parameters calculated via PM3 optimisations, although investigation of the dicarboxylates reveals a correlation of 0.961, showing promise for affinity prediction of closely related ligands. A simple and inexpensive morin spectrofluorescence assay has been developed and optimised, producing results comparable to atomic absorption spectroscopy methods for the quantitative analysis of Al. This assay was used in subsequent in vitro models, initially on E. coli, which indicated that Al inhibits the antimicrobial action of ciprofloxacin, a potent quinolone antibiotic. Ensuing studies using the second model, U87-MG cells, investigated the influence of chelators on the transmembrane transport of Al, identifying 1,2-diethylhydroxypyridin-4-one as the ligand showing the greatest potential for chelating Al in the clinical situation. In conclusion, these studies have explored semi-empirical MO Hamiltonians and an in vitro U87-MG cell line, both as possible methods for predicting effective chelators of Al.
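As a hedged illustration of the affinity-screening idea described above (not the thesis data), the sketch below computes the Pearson correlation between PM3 heats of formation and published stability constants for a small set of dicarboxylate ligands; every number and ligand name here is an invented placeholder purely to show the calculation.

```python
# Illustrative only: correlating hypothetical PM3 heats of formation of
# 3:1 Al complexes with hypothetical published log(beta) stability
# constants for closely related dicarboxylate ligands.
from scipy.stats import pearsonr

ligands     = ["oxalate", "malonate", "succinate", "glutarate"]  # hypothetical set
hf_kcal_mol = [-310.0, -295.0, -282.0, -270.0]  # hypothetical PM3 heats of formation
log_beta    = [16.3, 14.9, 13.8, 12.7]          # hypothetical stability constants

r, p = pearsonr(hf_kcal_mol, log_beta)
print(f"{len(ligands)} ligands: Pearson r = {r:.3f} (p = {p:.3f})")
```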
Abstract:
This thesis comprises two main objectives. The first objective involved the stereochemical studies of chiral 4,6-diamino-1-aryl-1,2-dihydro-s-triazines and an investigation of how the different conformations of these stereoisomers may affect their binding affinity to the enzyme dihydrofolate reductase (DHFR). The ortho-substituted 1-aryl-1,2-dihydro-s-triazines were synthesised by the three-component method. An ortho-substitution at the C6' position was observed when meta-azidocycloguanil was decomposed in acid. The ortho-substituent restricts free rotation, and this gives rise to atropisomerism. Ortho-substituted 4,6-diamino-1-aryl-2-ethyl-1,2-dihydro-2-methyl-s-triazine contains two elements of chirality and therefore exists as four stereoisomers: (S,aR), (R,aS), (R,aR) and (S,aS). The energy barriers to rotation of these compounds were calculated with the semi-empirical molecular orbital program MOPAC and were found to be in excess of 23 kcal/mol. The diastereoisomers were resolved and enriched by C18 reversed-phase HPLC. Nuclear Overhauser effect experiments revealed that (S,aR) and (R,aS) were the more stable pair of stereoisomers and therefore existed as the major component. The minor diastereoisomers showed greater binding affinity for the rat liver DHFR in an in vitro assay. The second objective entailed an investigation into the possibility of retaining DHFR inhibitory activity by replacing the classical diamino heterocyclic moiety with an amidinyl group. 4-Benzylamino-3-nitro-N,N-dimethyl-phenylamidine was synthesised in two steps. One of the two phenylamidines indicated weak inhibition against the rat liver DHFR. This weak activity may be due to the failure of the inhibitor molecule to form strong hydrogen bonds with residue Glu-30 at the active site of the enzyme.
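For orientation, a rotational barrier in excess of 23 kcal/mol implies very slow interconversion at room temperature, which is why the atropisomers can be resolved by HPLC. The sketch below uses the standard Eyring equation to translate the quoted barrier into an approximate half-life; the temperature and the assumption that the MOPAC barrier approximates the free energy of activation are illustrative simplifications, not statements from the thesis.

```python
# Eyring equation: k = (k_B*T/h) * exp(-dG/(R*T)); half-life = ln(2)/k.
import math

k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J*s
R   = 1.987204e-3     # kcal/(mol*K)
T   = 298.15          # K (room temperature, assumed)

dG_barrier = 23.0     # kcal/mol, barrier to rotation quoted in the abstract

k_rot = (k_B * T / h) * math.exp(-dG_barrier / (R * T))  # rotation rate, s^-1
half_life_h = math.log(2) / k_rot / 3600.0

print(f"rotation rate ~ {k_rot:.2e} s^-1, half-life ~ {half_life_h:.1f} h")
```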
Abstract:
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT
• The cytotoxic effects of 6-mercaptopurine (6-MP) were found to be due to drug-derived intracellular metabolites (mainly 6-thioguanine nucleotides and to some extent 6-methylmercaptopurine nucleotides) rather than the drug itself.
• Current empirical dosing methods for oral 6-MP result in highly variable drug and metabolite concentrations and hence variability in treatment outcome.
WHAT THIS STUDY ADDS
• The first population pharmacokinetic model has been developed for 6-MP active metabolites in paediatric patients with acute lymphoblastic leukaemia, and the potential demographic and genetically controlled factors that could lead to interpatient pharmacokinetic variability in this population have been assessed.
• The model shows a large reduction in interindividual variability of pharmacokinetic parameters when body surface area and thiopurine methyltransferase polymorphism are incorporated into the model as covariates.
• The developed model offers a more rational dosing approach for 6-MP than the traditional empirical method (based on body surface area) by combining it with pharmacogenetically guided dosing based on thiopurine methyltransferase genotype.
AIMS - To investigate the population pharmacokinetics of 6-mercaptopurine (6-MP) active metabolites in paediatric patients with acute lymphoblastic leukaemia (ALL) and examine the effects of various genetic polymorphisms on the disposition of these metabolites.
METHODS - Data were collected prospectively from 19 paediatric patients with ALL (n = 75 samples, 150 concentrations) who received 6-MP maintenance chemotherapy (titrated to a target dose of 75 mg/m² per day). All patients were genotyped for polymorphisms in three enzymes involved in 6-MP metabolism. Population pharmacokinetic analysis was performed with the nonlinear mixed-effects modelling program NONMEM to determine the population mean parameter estimate of clearance for the active metabolites.
RESULTS - The developed model revealed considerable interindividual variability (IIV) in the clearance of 6-MP active metabolites [6-thioguanine nucleotides (6-TGNs) and 6-methylmercaptopurine nucleotides (6-mMPNs)]. Body surface area explained a significant part of the IIV in 6-TGNs clearance when incorporated in the model (IIV reduced from 69.9% to 29.3%). The most influential covariate examined, however, was thiopurine methyltransferase (TPMT) genotype, which resulted in the greatest reduction in the model's objective function (P < 0.005) when incorporated as a covariate affecting the fractional metabolic transformation of 6-MP into 6-TGNs. The other genetic covariates tested were not statistically significant and therefore were not included in the final model.
CONCLUSIONS - The developed pharmacokinetic model (if successful at external validation) would offer a more rational dosing approach for 6-MP than the traditional empirical method, since it combines the current practice of using body surface area in 6-MP dosing with pharmacogenetically guided dosing based on TPMT genotype.
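A minimal sketch of the kind of covariate model described, forward-simulated rather than fitted: clearance scaled by body surface area with log-normal interindividual variability, and the fraction of 6-MP converted to 6-TGNs shifted by TPMT genotype. All parameter values and functional forms are assumptions for illustration; this is not the published NONMEM model.

```python
# Toy covariate model for 6-TGNs clearance and metabolic fraction.
import numpy as np

rng = np.random.default_rng(0)

theta_cl   = 5.0    # typical clearance (L/h), hypothetical
theta_bsa  = 1.0    # exponent on BSA, hypothetical
omega_cl   = 0.3    # SD of eta (log-normal IIV), hypothetical
fm_wild    = 0.20   # fraction metabolised to 6-TGNs, TPMT wild type (hypothetical)
fm_variant = 0.45   # fraction metabolised to 6-TGNs, TPMT variant (hypothetical)

def simulate_clearance(bsa, tpmt_variant):
    eta = rng.normal(0.0, omega_cl)                      # interindividual variability
    cl = theta_cl * (bsa / 1.0) ** theta_bsa * np.exp(eta)
    fm = fm_variant if tpmt_variant else fm_wild
    return cl, fm

for bsa, variant in [(0.8, False), (1.1, False), (1.0, True)]:
    cl, fm = simulate_clearance(bsa, variant)
    print(f"BSA={bsa:.1f} m^2, TPMT variant={variant}: CL={cl:.2f} L/h, f_m={fm:.2f}")
```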
Abstract:
We present a semi-analytical method for dimensioning Reed-Solomon codes for coherent DQPSK systems with laser phase noise and cycle slips. We evaluate the accuracy of our method for a 28 Gbaud system using numerical simulations.
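For context, the conventional baseline against which such dimensioning is done assumes independent symbol errors; the sketch below computes the post-FEC word error rate of an RS(n, k) code under that assumption. The paper's semi-analytical method goes beyond this by accounting for laser phase noise and cycle slips, which make symbol errors correlated; the example code and error probability below are generic, not taken from the paper.

```python
# Probability that an RS(n, k) codeword is uncorrectable, assuming
# independent symbol errors with probability p_s.
from math import comb

def rs_word_error_rate(n, k, p_s):
    t = (n - k) // 2                      # number of correctable symbol errors
    return sum(comb(n, i) * p_s**i * (1 - p_s)**(n - i)
               for i in range(t + 1, n + 1))

# Example: RS(255, 239) at a 1e-3 pre-FEC symbol error probability.
print(rs_word_error_rate(255, 239, 1e-3))
```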
Abstract:
We consider a Cauchy problem for the Laplace equation in a two-dimensional semi-infinite region with a bounded inclusion, i.e. the region is the intersection between a half-plane and the exterior of a bounded closed curve contained in the half-plane. The Cauchy data are given on the unbounded part of the boundary of the region and the aim is to construct the solution on the boundary of the inclusion. In 1989, Kozlov and Maz'ya [10] proposed an alternating iterative method for solving Cauchy problems for general strongly elliptic and formally self-adjoint systems in bounded domains. We extend their approach to our setting; in each iteration step, mixed boundary value problems for the Laplace equation in the semi-infinite region are solved. Well-posedness of these mixed problems is investigated and convergence of the alternating procedure is examined. For the numerical implementation an efficient boundary integral equation method is proposed, based on the indirect variant of the boundary integral equation approach. The mixed problems are reduced to integral equations over the (bounded) boundary of the inclusion. Numerical examples are included showing the feasibility of the proposed method.
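A structural sketch of the alternating procedure, under the assumption that the two mixed boundary value problems are available as black-box solvers (the two callables below are placeholders standing in for the boundary integral equation solvers of the paper, not its actual interface):

```python
# Kozlov-Maz'ya style alternating iteration: Cauchy data (u = f, du/dn = g)
# are given on the accessible (unbounded) boundary; the Dirichlet and
# Neumann traces on the inclusion boundary are recovered by alternating
# between two well-posed mixed problems.
def alternating_method(f, g, solve_with_dirichlet_outer, solve_with_neumann_outer,
                       eta0, n_iter=50):
    """eta0: initial guess for the Neumann data on the inclusion boundary."""
    eta, phi = eta0, None
    for _ in range(n_iter):
        # Mixed problem 1: Dirichlet data f on the outer boundary, Neumann
        # data eta on the inclusion; returns the Dirichlet trace there.
        phi = solve_with_dirichlet_outer(f, eta)
        # Mixed problem 2: Neumann data g on the outer boundary, Dirichlet
        # data phi on the inclusion; returns the updated Neumann trace.
        eta = solve_with_neumann_outer(g, phi)
    return phi, eta
```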
Abstract:
In this study, we investigate the problem of reconstructing a stationary temperature field from given temperature and heat flux on a part of the boundary of a semi-infinite region containing an inclusion. This situation can be modelled as a Cauchy problem for the Laplace operator and it is an ill-posed problem in the sense of Hadamard. We propose and investigate a Landweber-Fridman type iterative method, which preserves the (stationary) heat operator, for the stable reconstruction of the temperature field on the boundary of the inclusion. In each iteration step, mixed boundary value problems for the Laplace operator are solved in the semi-infinite region. Well-posedness of these problems is investigated and convergence of the procedure is discussed. For the numerical implementation of these mixed problems, an efficient boundary integral method is proposed, based on the indirect variant of the boundary integral approach. Using this approach, the mixed problems are reduced to integral equations over the (bounded) boundary of the inclusion. Numerical examples are included showing that stable and accurate reconstructions of the temperature field on the boundary of the inclusion can be obtained even in the case of noisy data. These results are compared with those obtained with the alternating iterative method.
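A structural sketch of a Landweber-Fridman iteration for this setting, assuming the forward map K (from unknown data on the inclusion boundary to measurable data on the accessible boundary) and its adjoint are available as black-box solvers; the relaxation parameter and fixed iteration count below are illustrative choices, not the paper's.

```python
# Landweber iteration eta_{k+1} = eta_k - gamma * K^*(K eta_k - g),
# where each application of K and K^* requires solving a well-posed
# mixed boundary value problem (here hidden behind the two callables).
import numpy as np

def landweber(apply_K, apply_K_adjoint, g, eta0, gamma=0.5, n_iter=200):
    """gamma should satisfy 0 < gamma < 1/||K||^2; a stopping rule based on
    the discrepancy principle would normally replace the fixed n_iter."""
    eta = np.asarray(eta0, dtype=float)
    for _ in range(n_iter):
        residual = apply_K(eta) - g            # mismatch in the measured data
        eta = eta - gamma * apply_K_adjoint(residual)
    return eta
```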
Abstract:
In developed countries, travel time savings can account for as much as 80% of the overall benefits arising from transport infrastructure and service improvements. In developing countries they are generally ignored in transport project appraisals, notwithstanding their importance. One of the reasons for ignoring these benefits in developing countries is that there is insufficient empirical evidence to support the conventional models for valuing travel time where work patterns, particularly of the poor, are diverse and it is difficult to distinguish between work and non-work activities. The exclusion of time saving benefits may lead to a bias against investment decisions that benefit the poor and understate the poverty reduction potential of transport investments in Least Developed Countries (LDCs). This is because the poor undertake most travel and transport by walking and headloading on local roads, tracks and paths, and improvements of local infrastructure and services bring large time saving benefits for them through modal shifts. The paper reports on an empirical study to develop a methodology for valuing rural travel time savings in the LDCs. Apart from identifying the theoretical and empirical issues in valuing travel time savings in the LDCs, the paper presents and discusses the results of an analysis of data from Bangladesh. Some of the study findings challenge the conventional wisdom concerning time saving values. The Bangladesh study suggests that the western concept of dividing travel time savings into working and non-working time savings is broadly valid in the developing country context. The study validates the use of preference methods in valuing non-working time savings. However, the stated preference (SP) method is more appropriate than the revealed preference (RP) method.
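For readers unfamiliar with how preference methods yield time values: in a linear-in-parameters logit choice model, the value of travel time is the ratio of the time and cost coefficients. The sketch below shows only this final calculation with hypothetical coefficients; it is not taken from the Bangladesh data or the study's estimation.

```python
# Value of time as the marginal rate of substitution between time and cost
# in a logit utility V = beta_time*time + beta_cost*cost (coefficients invented).
beta_time = -0.045   # utility per minute of travel time (hypothetical)
beta_cost = -0.009   # utility per taka of travel cost (hypothetical)

value_of_time_per_minute = beta_time / beta_cost        # taka per minute
value_of_time_per_hour = value_of_time_per_minute * 60
print(f"implied value of time ~ {value_of_time_per_hour:.1f} taka/hour")
```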
Abstract:
Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required. WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and overgeneration, minimised by rule reformulation and restricting monosyllabic output. Rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Multiple rules applicable to an input suffix need their precedence established. The resistance of prefixations to segmentation has been addressed by identifying linking vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with morphological rules into a hybrid model, fed only with empirical data, collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than the wordnet component of the model because the lexicon provides the optimal clustering of word senses. Both links and analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. Failure of all experiments to outperform disambiguation by frequency reflects on WordNet sense distinctions.
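A toy illustration (not the thesis implementation) of a morphological rule expressed as a character substitution subject to a lexical validity requirement, showing how substitution avoids the segmentation fallacy; the rules and the tiny lexicon below are invented for the example.

```python
# A suffixation rule only fires if the proposed root is itself in the
# lexicon, e.g. "happiness" -> "happy" via the substitution "iness" -> "y".
LEXICON = {"happy", "decide", "relate", "nation"}   # tiny stand-in lexicon

# (suffix to strip, string to substitute) pairs
RULES = [("iness", "y"), ("ision", "ide"), ("ation", "ate")]

def derive_root(word):
    for suffix, replacement in RULES:
        if word.endswith(suffix):
            candidate = word[: -len(suffix)] + replacement
            if candidate in LEXICON:                # lexical validity check
                return candidate, (suffix, replacement)
    return None, None

for w in ["happiness", "decision", "relation", "tension"]:
    print(w, "->", derive_root(w))    # "tension" is correctly left unanalysed
```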
Abstract:
Data Envelopment Analysis (DEA) is a nonparametric method for measuring the efficiency of a set of decision making units such as firms or public sector agencies, first introduced into the operational research and management science literature by Charnes, Cooper, and Rhodes (CCR) [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The original DEA models were applicable only to technologies characterized by positive inputs/outputs. In subsequent literature there have been various approaches to enable DEA to deal with negative data. In this paper, we propose a semi-oriented radial measure, which permits the presence of variables which can take both negative and positive values. The model is applied to data on a notional effluent processing system to compare the results with those yielded by two alternative methods for dealing with negative data in DEA: The modified slacks-based model suggested by Sharp et al. [Sharp, J.A., Liu, W.B., Meng, W., 2006. A modified slacks-based measure model for data envelopment analysis with ‘natural’ negative outputs and inputs. Journal of Operational Research Society 57 (11) 1–6] and the range directional model developed by Portela et al. [Portela, M.C.A.S., Thanassoulis, E., Simpson, G., 2004. A directional distance approach to deal with negative data in DEA: An application to bank branches. Journal of Operational Research Society 55 (10) 1111–1121]. A further example explores the advantages of using the new model.
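For reference, the sketch below solves the classical input-oriented CCR envelopment model, which requires positive data; it is the baseline that the proposed semi-oriented radial measure generalises, not the new model itself. The toy inputs and outputs are invented.

```python
# Input-oriented CCR efficiency of DMU o:
#   min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs), o: index of the DMU."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    A_in = np.hstack([-X[:, [o]], X])               # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])       # -Y@lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2.0, 3.0, 4.0], [4.0, 2.0, 5.0]])    # toy inputs (2 inputs, 3 DMUs)
Y = np.array([[1.0, 1.0, 1.0]])                     # toy single output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```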
Abstract:
The use of Diagnosis Related Groups (DRG) as a mechanism for hospital financing is a currently debated topic in Portugal. The DRG system was scheduled to be initiated by the Health Ministry of Portugal on January 1, 1990 as an instrument for the allocation of public hospital budgets funded by the National Health Service (NHS), and as a method of payment for other third party payers (e.g., Public Employees (ADSE), private insurers, etc.). Based on experience from other countries such as the United States, it was expected that implementation of this system would result in more efficient hospital resource utilisation and a more equitable distribution of hospital budgets. However, in order to minimise the potentially adverse financial impact on hospitals, the Portuguese Health Ministry decided to gradually phase in the use of the DRG system for budget allocation by using blended hospital-specific and national DRG casemix rates. Since implementation in 1990, the percentage of each hospital's budget based on hospital-specific costs was to decrease, while the percentage based on DRG casemix was to increase. This was scheduled to continue until 1995, when the plan called for allocating yearly budgets on a 50% national and 50% hospital-specific cost basis. While all other non-NHS third party payers are currently paying based on DRGs, the adoption of DRG casemix as a National Health Service budget setting tool has been slower than anticipated. There is now some argument in both the political and academic communities as to the appropriateness of DRGs as a budget setting criterion as well as to their impact on hospital efficiency in Portugal. This paper uses a two-stage procedure to assess the impact of actual DRG payment on the productivity (through its components, i.e., technological change and technical efficiency change) of diagnostic technology in Portuguese hospitals during the years 1992–1994, using both parametric and nonparametric frontier models. We find evidence that the DRG payment system does appear to have had a positive impact on productivity and technical efficiency of some commonly employed diagnostic technologies in Portugal during this time span.
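The decomposition of productivity change into technical efficiency change and technological change referred to above is conventionally expressed through the Malmquist index. Whether this exact output-distance-function form was used in the paper is an assumption, but it is the standard formulation for such two-stage frontier studies:

\[
M_o(x^{t+1},y^{t+1},x^{t},y^{t}) =
\underbrace{\frac{D_o^{t+1}(x^{t+1},y^{t+1})}{D_o^{t}(x^{t},y^{t})}}_{\text{efficiency change}}
\times
\underbrace{\left[\frac{D_o^{t}(x^{t+1},y^{t+1})}{D_o^{t+1}(x^{t+1},y^{t+1})}\cdot
\frac{D_o^{t}(x^{t},y^{t})}{D_o^{t+1}(x^{t},y^{t})}\right]^{1/2}}_{\text{technical change}}
\]

where \(D_o^{t}\) denotes the distance function of unit \(o\) relative to the period-\(t\) frontier.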
Abstract:
Measurements of the sea surface obtained by satellite borne radar altimetry are irregularly spaced and contaminated with various modelling and correction errors. The largest source of uncertainty for low Earth orbiting satellites such as ERS-1 and Geosat may be attributed to orbital modelling errors. The empirical correction of such errors is investigated by examination of single and dual satellite crossovers, with a view to identifying the extent of any signal aliasing: either by removal of long wavelength ocean signals or introduction of additional error signals. From these studies, it was concluded that sinusoidal approximation of the dominant one cycle per revolution orbit error over arc lengths of 11,500 km did not remove a significant mesoscale ocean signal. The use of TOPEX/Poseidon dual crossovers with ERS-1 was shown to substantially improve the radial accuracy of ERS-1, except for some absorption of small TOPEX/Poseidon errors. The extraction of marine geoid information is of great interest to the oceanographic community and was the subject of the second half of this thesis. Firstly through determination of regional mean sea surfaces using Geosat data, it was demonstrated that a dataset with 70cm orbit error contamination could produce a marine geoid map which compares to better than 12cm with an accurate regional high resolution gravimetric geoid. This study was then developed into Optimal Fourier Transform Interpolation, a technique capable of analysing complete altimeter datasets for the determination of consistent global high resolution geoid maps. This method exploits the regular nature of ascending and descending data subsets thus making possible the application of fast Fourier transform algorithms. Quantitative assessment of this method was limited by the lack of global ground truth gravity data, but qualitative results indicate good signal recovery from a single 35-day cycle.
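A hedged sketch of the kind of empirical orbit error correction described: fitting a bias plus a one-cycle-per-revolution sinusoid to crossover height differences by linear least squares. The orbital period, arc geometry and data below are synthetic placeholders, not ERS-1 or Geosat values taken from the thesis.

```python
# Fit a + b*cos(omega*t) + c*sin(omega*t) to crossover differences.
import numpy as np

period_s = 6030.0                       # ~100 min orbital period (approximate)
omega = 2 * np.pi / period_s
arc_duration = 11500.0 / 6.7            # seconds to traverse ~11,500 km at ~6.7 km/s

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, arc_duration, 200))          # crossover times along the arc
truth = 0.10 + 0.45 * np.cos(omega * t) - 0.30 * np.sin(omega * t)
obs = truth + rng.normal(0.0, 0.05, t.size)                # noisy crossover differences (m)

A = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
coeffs, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("recovered bias, cos and sin amplitudes:", np.round(coeffs, 3))
```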
Abstract:
This thesis examines the effect of rights issue announcements on stock prices by companies listed on the Kuala Lumpur Stock Exchange (KLSE) between 1987 and 1996. The emphasis is to report whether the KLSE is semi-strongly efficient with respect to the announcement of rights issues and to check whether the implications of corporate finance theories on the effect of an event can be supported in the context of an emerging market. Once the effect is established, potential determinants of abnormal returns identified by previous empirical work and corporate financial theory are analysed. By examining 70 companies making clean rights issue announcements, this thesis will hopefully shed light on some important issues in long term corporate financing. Event study analysis is used to check on the efficiency of the Malaysian stock market, while cross-sectional regression analysis is executed to identify possible explanatory variables for the rights issue announcement effect. To ensure the results presented are not contaminated, econometric and statistical issues raised in both analyses have been taken into account. Given the small amount of empirical research conducted in this part of the world, the results of this study will hopefully be of use to investors, security analysts, corporate financial managers, regulators and policy makers as well as those who are interested in capital market based research of an emerging market. It is found that the Malaysian stock market is not semi-strongly efficient since there exists a persistent non-zero abnormal return. This finding is not consistent with the hypothesis that security returns adjust rapidly to reflect new information. It may be possible that the result is influenced by the sample, consisting mainly of below-average-size companies which tend to be thinly traded. Nevertheless, these issues have been addressed. Another important issue which has emerged from the study is that there is some evidence to suggest that insider trading activity existed in this market. In addition to these findings, when the rights issue announcements' effect is compared to the implications of corporate finance theories in predicting the sign of abnormal returns, the signalling model, asymmetric information model, perfect substitution hypothesis and Scholes' information hypothesis cannot be supported.
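For reference, a minimal sketch of the market-model event study machinery referred to above: estimate alpha and beta over an estimation window, form abnormal returns in the event window, then cumulate them. The return series, window lengths and parameters below are synthetic; this is not the KLSE dataset and may differ from the exact specification used in the thesis.

```python
# Market-model event study: AR_t = R_t - (alpha + beta * R_m,t); CAR = sum of AR.
import numpy as np

rng = np.random.default_rng(2)
r_m = rng.normal(0.0005, 0.01, 160)                      # market returns (synthetic)
r_i = 0.0002 + 1.2 * r_m + rng.normal(0.0, 0.008, 160)   # stock returns (synthetic)

est, evt = slice(0, 120), slice(120, 160)                # estimation / event windows
beta, alpha = np.polyfit(r_m[est], r_i[est], 1)          # OLS market model
ar = r_i[evt] - (alpha + beta * r_m[evt])                # abnormal returns
car = np.cumsum(ar)                                      # cumulative abnormal returns
print(f"alpha={alpha:.5f}, beta={beta:.3f}, CAR over event window={car[-1]:.4f}")
```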
Abstract:
This thesis presents the results of a multi-method investigation of employee perceptions of fairness in relation to their career management experiences. Organisational justice theory (OJT) was developed as a theoretical framework and data were gathered via 325 quantitative questionnaires, 20 semi-structured interviews and the analysis of a variety of company documents and materials. The results of the questionnaire survey provided strong support for the salience of employee perceptions of justice in regard to their evaluations of organisational career management (OCM) practices, with statistical support emerging for both an agent-systems and an interaction model of organisational justice. The qualitative semi-structured interviews provided more detailed analysis of how fairness was experienced in practice, and confirmed the importance of the OJT constructs of fairness within this career management context. Fairness themes to emerge from this analysis included equity, needs, voice, bias suppression, consistency, ethicality, respect and feedback, drawing on many of the central tenets of distributive, procedural, interpersonal and informational justice. For the career management literature, there is empirical confirmation of a new theoretical framework for understanding employee evaluations of, and reactions to, OCM practices. For the justice literature, a new contextual domain is explored and confirmed, thus extending further the influence and applicability of the theory. For practitioners, a new framework for developing, delivering and evaluating their own OCM policies and systems is presented.
Abstract:
Corporate restructuring is perceived as a challenge to research. Prior studies do not provide conclusive evidence regarding the effects of restructuring. Given this inconclusive evidence, this research attempts to examine the effects of restructuring events amongst UK listed firms. The sample firms are listed on the LSE and the London AIM. Only completed restructuring transactions are included in the study. The time horizon extends from 1999 to 2003. A three-year floating window is assigned to examine the sample firms. The key enquiry is to scrutinise the ex post effects of restructuring on performance and value measures of firms in contrast to a criteria-matched non-restructured sample. A cross-sectional study employing logit estimation is undertaken to examine the firm characteristics of the restructuring samples. Further, additional parameters, i.e. Conditional Volatility and Asymmetry, are generated under the GJR-GARCH estimate and included in the logit models to capture time-varying heteroscedasticity of the samples. This research incorporates most forms of restructuring, while prior studies have examined only certain forms. In particular, these studies have made limited attempts to examine different restructuring events simultaneously. In addition to the logit analysis, an event study is adopted to evaluate the announcement effect of restructuring under both the OLS and GJR-GARCH estimates, supplementing our prior results. By employing a composite empirical framework, our estimation method allows a full appreciation of the restructuring effect. The study provides evidence that restructurings have a non-trivial, significant positive effect. There is some evidence that the response differs according to the type of restructuring, particularly when the event study is applied. The results establish that performance measures, i.e. Operating Profit Margin, Return on Equity, Return on Assets, Growth, Size, Profit Margin and Shareholders' Ownership, indicate a consistent and significant increase. However, Leverage and Asset Turnover suggest only a reasonable influence of restructuring across the sample period. Similarly, value measures, i.e. Abnormal Returns, Return on Equity and Cash Flow Margin, suggest sizeable improvement. A notable characteristic seen consistently throughout the analysis is the decreasing proportion of Systematic Risk. Consistent with these findings, Conditional Volatility and Asymmetry exhibit a similar trend. The event study analysis suggests that, on average, the market perceives restructuring favourably and shareholders experience significant and systematic positive gains.
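The conditional volatility and asymmetry parameters mentioned above come from a GJR-GARCH specification. In its standard (1,1) form, assumed here since the abstract does not state the lag order, the conditional variance is

\[
\sigma_t^2 \;=\; \omega + \alpha\,\varepsilon_{t-1}^2 + \gamma\,\varepsilon_{t-1}^2\, I_{[\varepsilon_{t-1}<0]} + \beta\,\sigma_{t-1}^2
\]

where the indicator term with coefficient \(\gamma\) captures the asymmetric (leverage) response of volatility to negative shocks.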
Abstract:
The thesis presents new methodology and algorithms that can be used to analyse and measure the hand tremor and fatigue of surgeons while performing surgery. This will assist them in deriving useful information about their fatigue levels, and make them aware of the changes in their tool point accuracies. This thesis proposes that muscular changes of surgeons, which occur through a day of operating, can be monitored using Electromyography (EMG) signals. The multi-channel EMG signals are measured at different muscles in the upper arm of surgeons. The dependence between EMG signals has been examined to test the hypothesis that EMG signals are coupled with and dependent on each other. The results demonstrated that EMG signals collected from different channels while mimicking an operating posture are independent. Consequently, single-channel fatigue analysis has been performed. In measuring hand tremor, a new method for determining the maximum tremor amplitude using Principal Component Analysis (PCA) and a new technique to detrend acceleration signals using the Empirical Mode Decomposition (EMD) algorithm were introduced. This tremor determination method is more representative for surgeons and is suggested as an alternative fatigue measure. This was combined with the complexity analysis method, and applied to surgically captured data to determine if operating has an effect on a surgeon's fatigue and tremor levels. It was found that surgical tremor and fatigue develop throughout a day of operating and that this could be determined based solely on their initial values. Finally, several Nonlinear AutoRegressive with eXogenous inputs (NARX) neural networks were evaluated. The results suggest that it is possible to monitor surgeon tremor variations during surgery from their EMG fatigue measurements.
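A hedged sketch of a PCA-based maximum tremor amplitude in the spirit of the method described: project the three-axis acceleration onto its first principal component (the dominant tremor direction) and take the peak-to-peak excursion. The exact definition used in the thesis may differ, and the signal below is synthetic.

```python
# PCA via SVD of the centred 3-axis acceleration; peak-to-peak along PC1.
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 200.0, 5.0                                  # 200 Hz sampling, 5 s of data
t = np.arange(0.0, dur, 1.0 / fs)
tremor = 0.02 * np.sin(2 * np.pi * 9.0 * t)           # ~9 Hz physiological tremor (synthetic)
acc = np.column_stack([0.9 * tremor, 0.4 * tremor, 0.1 * tremor])
acc += rng.normal(0.0, 0.002, acc.shape)              # sensor noise

acc_centred = acc - acc.mean(axis=0)
_, _, vt = np.linalg.svd(acc_centred, full_matrices=False)
projection = acc_centred @ vt[0]                      # scores along the first PC
max_tremor_amplitude = projection.max() - projection.min()
print(f"peak-to-peak tremor amplitude ~ {max_tremor_amplitude:.4f} (acceleration units)")
```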