34 results for Runs of homozygosity

in Aston University Research Archive


Relevance:

90.00%

Publisher:

Abstract:

Background & Aims: Current models of visceral pain processing derived from metabolic brain imaging techniques fail to differentiate between exogenous (stimulus-dependent) and endogenous (non-stimulus-specific) neural activity. The aim of this study was to determine the spatiotemporal correlates of exogenous neural activity evoked by painful esophageal stimulation. Methods: In 16 healthy subjects (8 men; mean age, 30.2 ± 2.2 years), we recorded magnetoencephalographic responses to 2 runs of 50 painful esophageal electrical stimuli and localized the evoked activity to 8 brain subregions. Subsequently, 11 subjects (6 men; mean age, 31.2 ± 1.8 years) had esophageal cortical evoked potentials recorded on a separate occasion by using similar experimental parameters. Results: The earliest cortical activity (P1) was recorded in parallel in the primary/secondary somatosensory cortex and posterior insula (∼85 ms). Significantly later activity was seen in the anterior insula (∼103 ms) and cingulate cortex (∼106 ms; P = .0001). There was no difference between the P1 latency for magnetoencephalography and cortical evoked potential (P = .16); however, neural activity lasted longer when recorded with cortical evoked potential than with magnetoencephalography (P = .001). No sex differences were seen for psychophysical or neurophysiological measures. Conclusions: This study shows that exogenous cortical neural activity evoked by experimental esophageal pain is processed simultaneously in the somatosensory and posterior insula regions. Activity in the anterior insula and cingulate, brain regions that process the affective aspects of esophageal pain, occurs significantly later than in the somatosensory regions, and no sex differences were observed with this experimental paradigm. The cortical evoked potential reflects the summation of cortical activity from these brain regions and has sufficient temporal resolution to separate exogenous and endogenous neural activity. © 2005 by the American Gastroenterological Association.

Relevance:

90.00%

Publisher:

Abstract:

The imidazotetrazinones are clinically active antitumour agents, temozolomide currently proving successful in the treatment of melanomas and gliomas. The exact nature of the biological processes underlying response is as yet unclear. This thesis attempts to identify the cellular targets important to the cytotoxicity of imidazotetrazinones, to elucidate the pathways by which this damage leads to cell death, and to identify mechanisms by which tumour cells may circumvent this action. The levels of the DNA repair enzymes O6-alkylguanine-DNA-alkyltransferase (O6-AGAT) and 3-methyladenine-DNA-glycosylase (3MAG) have been examined in a range of murine and human cell lines with differential sensitivity to temozolomide. All the cell lines were proficient in 3MAG despite there being a 40-fold difference in sensitivity to temozolomide. This suggests that while 3-methyladenine is a major product of temozolomide alkylation of DNA, it is unlikely to be a cytotoxic lesion. In contrast, there was a 20-fold variation in O6-AGAT levels, and the concentration of this repair enzyme correlated with variations in cytotoxicity. Furthermore, depletion of this enzyme in a resistant, O6-AGAT-proficient cell line (Raji) by pre-treatment with the free base O6-methylguanine resulted in 54% sensitisation to the effects of temozolomide. These observations have been extended to 3 glioma cell lines; the results support the view that the cytotoxicity of temozolomide is related to alkylation at the O6-position of guanine and that resistance to this drug is determined by efficient repair of this lesion. It is clear, however, that other factors may influence tumour response, since temozolomide showed little differential activity towards 3 established solid murine tumours in vivo, despite different tumour O6-AGAT levels. Unlike mitozolomide, temozolomide is incapable of cross-linking DNA, and the mechanism by which O6-methylguanine may exert lethality is unclear. The cytotoxicity of the methyl group may be due to its disruption of DNA-protein interactions; alternatively, cell death may not be a direct result of the alkyl group itself but may be manifested by DNA single-strand breaks. Enhanced alkaline elution rates were found for the DNA of Raji cells treated with temozolomide following alkyltransferase depletion, suggesting a relationship between O6-methylguanine and the induction of single-strand breaks. Such breaks can activate poly(ADP-ribose) synthetase (ADPRT), an enzyme capable of rapid and lethal depletion of cellular NAD levels. However, at concentrations of temozolomide relevant in vivo, little change in adenine nucleotides was detected in cell lines, although this enzyme would appear important in modulating DNA repair, since inhibition of ADPRT potentiated temozolomide cytotoxicity in Raji cells but not in O6-AGAT-deficient GM892A cells. Cell lines have been reported that are O6-AGAT deficient yet resistant to methylating agents. Thus, resistance to temozolomide may arise not only from removal of the methyl group from the O6-position of guanine, but also from another mechanism involving caffeine-sensitive post-replication repair or mismatch repair activity. A modification of the standard Maxam-Gilbert sequencing technique was used to determine the sequence specificity of guanine-N7 alkylation. Temozolomide preferentially alkylated runs of guanines, with the intensity of reaction increasing with the number of adjacent guanines in the DNA sequence. Comparable results were obtained with a polymerase-stop assay, although neither technique elucidates the sequence specificity of O6-guanine alkylation. The importance of such specificity to cytotoxicity is uncertain, although guanine-rich sequences are common in the promoter regions of oncogenes. Expression of a plasmid reporter gene under the control of the Ha-ras proto-oncogene promoter was inhibited by alkylation with temozolomide when transfected into cancer cell lines. However, this inhibition did not appear to be related to O6-guanine alkylation and therefore would seem unimportant to the chemotherapeutic activity of temozolomide.

Relevance:

90.00%

Publisher:

Abstract:

Relevant carbon-based materials, home-made carbon-silica hybrids, commercial activated carbon, and nanostructured multi-walled carbon nanotubes (MWCNT) were tested in the oxidative dehydrogenation (ODH) of ethylbenzene (EB). Special attention was given to the reaction conditions, using a relatively concentrated EB feed (10 vol.% EB) and a limited excess of O2 (O2:EB = 0.6) in order to work at full oxygen conversion and consequently avoid O2 in the downstream processing and recycle streams. The temperature was varied between 425 and 475 °C, which is about 150-200 °C lower than that of the commercial steam dehydrogenation process. The stability was evaluated from runs of 60 h time on stream. Under the applied reaction conditions, all the carbon-based materials are apparently stable for the first 15 h time on stream. The effect of gasification/burning became significant only after this period, when most of the materials fully decompose: the carbon of the hybrids decomposes completely, leaving behind the silica matrix, and the activated carbon bed is fully consumed. Nanostructured MWCNT is the most stable; its structure resists the demanding reaction conditions, showing an EB conversion of ∼30% (but deactivating) with a steady selectivity of ∼80%. The catalyst stability under the ODH reaction conditions is predicted from the combustion apparent activation energies. © 2014 Elsevier Ltd. All rights reserved.
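
The stability prediction in the final sentence rests on a standard Arrhenius analysis. A minimal sketch of that calculation follows, using invented combustion rate constants (not data from this study) over the paper's 425-475 °C window:

```python
import numpy as np

# Sketch of extracting an apparent activation energy from an Arrhenius fit:
# ln k = ln A - Ea/(R T), so the slope of ln k against 1/T gives -Ea/R.
# The rate constants below are hypothetical, for illustration only.
R = 8.314                                      # gas constant, J/(mol K)
T = np.array([698.0, 723.0, 748.0])            # 425, 450, 475 C in kelvin
k = np.array([2.1e-4, 5.8e-4, 1.5e-3])         # invented combustion rate constants

slope, _ = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
print(f"apparent activation energy ~ {Ea / 1000:.0f} kJ/mol")
```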

Relevance:

30.00%

Publisher:

Abstract:

A formalism for describing the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics is applied to the problem of generalization in a perceptron with binary weights. The dynamics are solved for the case where a new batch of training patterns is presented to each population member each generation, which considerably simplifies the calculation. The theory is shown to agree closely with simulations of a real GA averaged over many runs, accurately predicting the mean best solution found. For weak selection and large problem size, the difference equations describing the dynamics can be expressed analytically, and we find that the effects of noise due to the finite size of each training batch can be removed by increasing the population size appropriately. If this population resizing is used, one can deduce the most computationally efficient size of training batch each generation. For independent patterns this choice also gives the minimum total number of training patterns used. Although using independent patterns is a very inefficient use of training patterns in general, this work may also prove useful for determining the optimum batch size in the case where patterns are recycled.
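
As a rough illustration of the setup described above, the sketch below evolves a population of binary-weight perceptrons toward a teacher rule, drawing a new batch of random training patterns every generation; the problem size, population size, batch size, selection strength and mutation rate are all illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

# Illustrative GA for generalization in a binary perceptron: each generation
# every population member is scored on a *fresh* batch of random patterns,
# as in the analysis above. All parameters are assumptions for the sketch.
rng = np.random.default_rng(0)
N, POP, BATCH, GENS = 63, 80, 40, 200          # problem size, population, batch, generations
teacher = rng.choice([-1, 1], size=N)          # target binary weight vector

def fitness(pop, patterns):
    """Fraction of batch patterns classified as the teacher classifies them."""
    labels = np.sign(patterns @ teacher)       # N odd, so never zero
    outputs = np.sign(patterns @ pop.T)        # shape (BATCH, POP)
    return (outputs == labels[:, None]).mean(axis=0)

pop = rng.choice([-1, 1], size=(POP, N))
for g in range(GENS):
    batch = rng.choice([-1, 1], size=(BATCH, N))   # new patterns each generation
    f = fitness(pop, batch)
    p = np.exp(2.0 * f)                        # weak Boltzmann selection
    p /= p.sum()
    idx = rng.choice(POP, size=(POP, 2), p=p)  # pick parent pairs
    parents = pop[idx]                         # shape (POP, 2, N)
    mask = rng.random((POP, N)) < 0.5          # uniform crossover
    pop = np.where(mask, parents[:, 0], parents[:, 1])
    flip = rng.random((POP, N)) < 1.0 / N      # mutate ~1 allele per genome
    pop = np.where(flip, -pop, pop)

print("best overlap with teacher:", np.max(pop @ teacher) / N)
```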

Relevance:

30.00%

Publisher:

Abstract:

A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. The problems most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strong NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
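
The macroscopics named above are easy to compute for any concrete population. The sketch below does so for a toy population, with an additive (onemax-style) fitness standing in for the problems actually analysed:

```python
import numpy as np
from itertools import combinations

# The macroscopics tracked by the formalism: low-order cumulants of a
# population quantity (here an additive fitness) and the mean pairwise
# correlation q, from which the mean Hamming distance is N(1 - q)/2.
rng = np.random.default_rng(1)
POP, N = 50, 100
pop = rng.choice([-1, 1], size=(POP, N))       # population of genotypes
f = pop.sum(axis=1).astype(float)              # additive (onemax-like) fitness

k1 = f.mean()                                  # first cumulant: mean
k2 = f.var()                                   # second cumulant: variance
k3 = ((f - k1) ** 3).mean()                    # third cumulant

overlaps = [pop[a] @ pop[b] / N for a, b in combinations(range(POP), 2)]
q = float(np.mean(overlaps))                   # mean correlation within population
print(f"k1={k1:.2f}  k2={k2:.2f}  k3={k3:.2f}  q={q:.3f}")
```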

Relevance:

30.00%

Publisher:

Abstract:

The operator hairpin ahead of the replicase gene in RNA bacteriophage MS2 contains overlapping signals for binding the coat protein and ribosomes. Coat protein binding inhibits further translation of the gene and forms the first step in capsid formation. The hairpin sequence was partially randomized to assess the importance of this structural element for the bacteriophage and to monitor alternative solutions that would evolve on passaging of the mutant phages. The evolutionary reconstruction of the operator failed in the majority of mutants. Instead, a poor imitation developed containing only some of the recognition signals for the coat protein. Three mutants were of particular interest in that they contained double nonsense codons in the lysis reading frame that runs through the operator hairpin. The simultaneous reversion of two stop codons into sense codons has a very low probability of occurring. The phage therefore solved the problem by deleting the nonsense signals and, in fact, the complete operator, except for the initiation codon of the replicase gene. Several revertants were isolated with activities ranging from 1% to 20% of wild type. The operator, long thought to be a critical regulator, now appears to be a dispensable element. In addition, the results indicate how RNA viruses can be forced to step back to an attenuated form.

Relevance:

30.00%

Publisher:

Abstract:

Purpose - This paper provides a deeper examination of the fundamentals of commonly used techniques, such as coefficient alpha and factor analysis, in order to more strongly link the techniques used by marketing and social researchers to their underlying psychometric and statistical rationale. Design/methodology/approach - A wide-ranging review and synthesis of psychometric and other measurement literature, both within and outside the marketing field, is used to illuminate and reconsider a number of misconceptions which seem to have evolved in marketing research. Findings - The research finds that marketing scholars have generally concentrated on reporting what are essentially arbitrary figures such as coefficient alpha, without fully understanding what these figures imply. It is argued that, if the link between theory and technique is not clearly understood, use of psychometric measure development tools actually runs the risk of detracting from the validity of the measures rather than enhancing it. Research limitations/implications - The focus on one stage of a particular form of measure development could be seen as rather specialised. The paper also runs the risk of increasing the amount of dogma surrounding measurement, which is contrary to its own spirit. Practical implications - This paper shows that researchers may need to spend more time interpreting measurement results. Rather than simply referring to precedence, one needs to understand the link between measurement theory and actual technique. Originality/value - This paper presents psychometric measurement and item analysis theory in an easily understandable format, and offers an important set of conceptual tools for researchers in many fields. © Emerald Group Publishing Limited.
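
For concreteness, the sketch below computes coefficient alpha directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), on synthetic item data; the paper's point is precisely that such a figure should be interpreted, not just reported:

```python
import numpy as np

# Coefficient alpha from first principles on synthetic data:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
rng = np.random.default_rng(2)
true_score = rng.normal(size=(200, 1))                 # latent construct
items = true_score + rng.normal(size=(200, 6))         # 6 noisy items, 200 respondents

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"coefficient alpha = {alpha:.3f}")
```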

Relevance:

30.00%

Publisher:

Abstract:

A fundamental tenet of Leader–Member Exchange (LMX) theory is that leaders develop different quality relationships with their employees; however, little research has investigated the impact of LMX differentiation on employee reactions. The current research investigates whether perceptions of LMX variability (the extent to which LMX relationships are perceived to vary within a team) affect employee job satisfaction and wellbeing beyond the effects of personal LMX quality. As LMX variability runs counter to principles of equality and consistency, which are important for maintaining social harmony in groups, it is hypothesized that perceptions of LMX variability will have a negative effect on employee reactions via their negative impact on perceived team relations. Two samples of employed individuals were used to investigate the hypothesized relationships. In both samples, an individual's perception of LMX variability in their team was negatively related to employee job satisfaction and wellbeing (above the effects of LMX), and this relationship was mediated by reports of relational team conflict.
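
The mediation claim in the final sentence can be illustrated with a simple regression-based decomposition into total, direct and indirect effects; the synthetic data and Baron-Kenny-style check below are an assumption-laden sketch, not the paper's actual analysis:

```python
import numpy as np

# Synthetic mediation check: LMX variability -> team conflict -> satisfaction,
# controlling for personal LMX quality. Effect sizes are invented.
rng = np.random.default_rng(3)
n = 300
lmx = rng.normal(size=n)                               # personal LMX quality
variability = rng.normal(size=n)                       # perceived LMX variability
conflict = 0.5 * variability + rng.normal(size=n)      # mediator: team conflict
satisfaction = 0.4 * lmx - 0.5 * conflict + rng.normal(size=n)

def ols_coefs(y, *xs):
    """Least-squares slopes (intercept dropped)."""
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = ols_coefs(satisfaction, variability, lmx)[0]             # total effect
direct = ols_coefs(satisfaction, variability, lmx, conflict)[0]  # direct effect
print(f"total={total:.2f}  direct={direct:.2f}  indirect={total - direct:.2f}")
```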

Relevance:

30.00%

Publisher:

Abstract:

The literature on the potential use of liquid ammonia as a solvent for the extraction of aromatic hydrocarbons from mixtures with paraffins, and the application of reflux, has been reviewed. Reference is made to extractors suited to this application. A pilot-scale extraction plant was designed comprising a 5 cm diameter by 125 cm high, 50-stage Rotating Disc Contactor with 2 external settlers. Provision was made for operation with, or without, reflux at a pressure of 10 bar and ambient temperature. The solvent recovery unit consisted of an evaporator, compressor and condenser in a refrigeration cycle. Two systems were selected for study, Cumene-n-Heptane-Ammonia and Toluene-Methylcyclohexane-Ammonia. Equilibrium data for the first system were determined experimentally in a specially designed equilibrium bomb. A technique was developed to withdraw samples under pressure for analysis by chromatography and titration. The extraction plant was commissioned with a kerosine-water system; detailed operating procedures were developed based on a Hazard and Operability Study. Experimental runs were carried out with both ternary ammonia systems. With the system Toluene-Methylcyclohexane-Ammonia, the extraction plant and the solvent recovery facility operated satisfactorily, and safely, in accordance with the operating procedures. Experimental data gave reasonable agreement with theory. Recommendations are made for further work with the plant.

Relevance:

30.00%

Publisher:

Abstract:

This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split in two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
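
A minimal sketch of the experiment in the second part is given below: a Gaussian-process emulator is fitted to single time steps of the Lorenz 63 simulator and then iterated for sequential prediction. The RK4 integrator, RBF kernel, length-scale and training design are illustrative assumptions, not the report's actual choices:

```python
import numpy as np

# Emulate one time step of Lorenz 63 with a zero-mean GP (RBF kernel),
# then iterate the emulator for sequential prediction.
def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One RK4 step of the Lorenz 63 simulator."""
    f = lambda v: np.array([s * (v[1] - v[0]),
                            v[0] * (r - v[2]) - v[1],
                            v[0] * v[1] - b * v[2]])
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rbf(A, B, ell=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Training design: states collected along a simulator trajectory
X = np.empty((300, 3))
x = np.array([1.0, 1.0, 20.0])
for i in range(300):
    x = lorenz_step(x)
    X[i] = x
Y = np.array([lorenz_step(xi) for xi in X])      # one-step simulator outputs

K = rbf(X, X) + 1e-6 * np.eye(len(X))            # jitter for conditioning
w = np.linalg.solve(K, Y)                        # one GP per output dimension

def emulate_step(x):
    return (rbf(x[None, :], X) @ w).ravel()      # GP posterior mean

x_em = x_sim = X[-1]                             # sequential prediction
for t in range(50):
    x_em, x_sim = emulate_step(x_em), lorenz_step(x_sim)
print("max |emulator - simulator| after 50 steps:", np.abs(x_em - x_sim).max())
```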

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work was to design, construct, test and operate a novel circulating fluid bed (CFB) fast pyrolysis reactor system for the production of liquids from biomass. The novelty lies in incorporating an integral char combustor to provide autothermal operation. A reactor design methodology was devised which correlated input parameters to process variables, namely temperature, heat transfer and gas/vapour residence time, for both the char combustor and the biomass pyrolyser. From this methodology a CFB reactor with integral char combustion was designed for a 10 kg/h biomass throughput. A full-scale cold model of the CFB unit was constructed and tested to derive suitable hydrodynamic relationships and performance constraints. Early difficulties encountered with poor solids circulation and inefficient product recovery were overcome by a series of modifications. A total of 11 runs in pyrolysis mode were carried out, with a maximum total liquids yield of 61.50% wt on a moisture-and-ash-free (maf) biomass basis, obtained at 500 °C and with a 0.46 s gas/vapour residence time. This could be raised to an anticipated 75% wt on a maf biomass basis through improved vapour recovery by direct quenching. The reactor provides a very high specific throughput of 1.12-1.48 kg/h m², and the lowest gas-to-feed ratio of 1.3-1.9 kg gas/kg feed compared to other fast pyrolysis processes based on pneumatic reactors, and has good scale-up potential. These features should provide a significant capital cost reduction. Results to date suggest that the process is limited by the extent of char combustion. Future work will address resizing of the char combustor to increase overall system capacity, improvement in solids separation and substantially better liquid recovery. Extended testing will provide better evaluation of steady-state operation and provide data for process simulation and reactor modelling.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk-and-doughnut quench column combined with a wet-walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry-wood-fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset the product losses occurring due to the difficulties in collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient, and modelling determined a liquid collection efficiency of above 99% on a mass basis. This was validated by a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurement of the condensable product exiting the unit and confirmed that the collection efficiency was in excess of 99% on a mass basis.

Relevance:

30.00%

Publisher:

Abstract:

A method has been constructed for the solution of a wide range of chemical plant simulation models, including differential equations and optimization. Double orthogonal collocation on finite elements is applied to convert the model into an NLP problem that is solved either by the VF13AD package, based on successive quadratic programming, or by the GRG2 package, based on the generalized reduced gradient method. This approach is termed the simultaneous optimization and solution strategy. The objective functional can contain integral terms. The state and control variables can have time delays. Equalities and inequalities containing state and control variables can be included in the model, as well as algebraic equations and inequalities. The maximum number of independent variables is 2. Problems containing 3 independent variables can be transformed into problems having 2 independent variables using finite differencing. The maximum number of NLP variables and constraints is 1500. The method is also suitable for solving ordinary and partial differential equations. The state functions are approximated by a linear combination of Lagrange interpolation polynomials. The control function can either be approximated by a linear combination of Lagrange interpolation polynomials or by a piecewise constant function over finite elements. The number of internal collocation points can vary by finite element. The residual error is evaluated at arbitrarily chosen equidistant grid-points, thus enabling the user to check the accuracy of the solution between collocation points, where the solution is exact. The solution functions can be tabulated. There is an option to use control vector parameterization to solve optimization problems containing initial value ordinary differential equations; when there are many differential equations, or when the upper integration limit should be selected optimally, this approach should be used. The portability of the package has been addressed by converting it from VAX FORTRAN 77 into IBM PC FORTRAN 77 and into SUN SPARC 2000 FORTRAN 77. Computer runs have shown that the method can reproduce optimization problems published in the literature. The GRG2 and VF13AD packages, integrated into the optimization package, proved to be robust and reliable. The package contains an executive module, a module performing control vector parameterization, and 2 nonlinear problem solver modules, GRG2 and VF13AD. There is a stand-alone module that converts the differential-algebraic optimization problem into a nonlinear programming problem.
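
The core mechanism, approximating the state by Lagrange interpolation polynomials on finite elements and enforcing the model equations at collocation points, can be sketched on a toy initial value problem (y' = -y, y(0) = 1). The element count and Radau collocation points below are illustrative, and the optimization layer is omitted:

```python
import numpy as np
from scipy.optimize import fsolve

# Orthogonal collocation on finite elements for y' = -y, y(0) = 1 on [0, 2]:
# the state is a Lagrange polynomial on each element, the ODE is enforced at
# Radau collocation points, and continuity conditions link the elements.
def diff_matrix(t):
    """D[i, j] = derivative of the j-th Lagrange basis polynomial at node t[i]."""
    n = len(t)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                D[i, i] = sum(1.0 / (t[i] - t[m]) for m in range(n) if m != i)
            else:
                D[i, j] = 1.0 / (t[j] - t[i])
                for m in range(n):
                    if m != i and m != j:
                        D[i, j] *= (t[i] - t[m]) / (t[j] - t[m])
    return D

elems = np.linspace(0.0, 2.0, 5)                 # 4 finite elements
tau = np.array([0.0, 0.1550510257, 0.6449489743, 1.0])  # node 0 + 3 Radau points

def residuals(yflat):
    Y = yflat.reshape(len(elems) - 1, len(tau))
    res = [Y[0, 0] - 1.0]                        # initial condition y(0) = 1
    for e in range(len(elems) - 1):
        h = elems[e + 1] - elems[e]
        D = diff_matrix(elems[e] + h * tau)
        res.extend(D[1:] @ Y[e] + Y[e, 1:])      # collocation: y' + y = 0
        if e > 0:
            res.append(Y[e, 0] - Y[e - 1, -1])   # continuity between elements
    return np.array(res)

sol = fsolve(residuals, np.ones((len(elems) - 1) * len(tau)))
y_end = sol.reshape(-1, len(tau))[-1, -1]
print(f"y(2) = {y_end:.6f}  (exact {np.exp(-2.0):.6f})")
```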

Relevance:

30.00%

Publisher:

Abstract:

The objectives of this research were to investigate the performance of a rubberwood gasifier and engine with electricity generation, and to identify opportunities for the implementation of such a system in Malaysia. The experimental work included the design, fabrication and commissioning of a throated downdraft gasifier in Malaysia. The gasifier was subsequently used to investigate the effect of moisture content, dry wood capacity and particle size of rubberwood on gasifier performance. Additional experiments were also conducted to investigate the influence of two different nozzle numbers and two different throat diameters on tar cracking. A total of 101 runs were completed during the research. From the experimental data, the average mass balance was found to be 92.65%. The average energy balance over the gasifier to hot raw gas was 98.7%, to cold clean gas was 102.4%, and over the complete system was 101.9%. The heat loss from the gasifier was estimated to range from 10-26% of the chemical energy of the feedstock. From the downstream operation, the heat loss was estimated to range from 17-37% of the chemical energy of the rubberwood feedstock. The maximum throughput for stable operation was found to be 60-70% of the maximum dry wood capacity. The gasifier was found to have a maximum turndown ratio of 5:1. It is also postulated that the turndown phenomenon of the gasifier is explained by a 'bubble theory' at the gasification zone, and this hypothesis is set out. For stable power output, the working range of the engine was found to be 5-33.5 kWe. The thermal efficiency and diesel displacement of the engine were found to be 17-18% and 65-70% respectively. The research also showed that rubberwood gasification in Malaysia is feasible if the price of diesel is above MR35/l and the price of wood is below MR120/tonne.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to design, construct, commission and operate a laboratory-scale gasifier system that could be used to investigate the parameters that influence the gasification process. The gasifier is of the open-core variety and is fabricated from 7.5 cm bore quartz glass tubing. Gas cleaning is by a centrifugal contacting scrubber, with the product gas being flared. The system employs an on-line dedicated gas analysis system, monitoring the levels of H2, CO, CO2 and CH4 in the product gas. The gas composition data, as well as the gas flowrate, temperatures throughout the system and pressure data, are recorded using a BBC microcomputer-based data-logging system. Ten runs have been performed using the system, of which six were predominantly commissioning runs. The main emphasis in the commissioning runs was placed on the gas clean-up, the product gas cleaning and the reactor bed temperature measurement. The reaction was observed to occur in a narrow band, about 3 to 5 particle diameters thick. Initially the fuel was pyrolysed, with the volatiles produced being combusted and providing the energy to drive the process, and then the char product was gasified by reaction with the pyrolysis gases. Normally the gasifier is operated with the reaction zone supported on a bed of char, although it has been operated for short periods without a char bed. At steady state the depth of char remains constant, but by adjusting the air inlet rate it has been shown that the depth of char can be increased or decreased. It has also been shown that increasing the depth of the char bed effects some improvement in the product gas quality.