42 results for Large Size
in Aston University Research Archive
Abstract:
Advances in the area of industrial metrology have generated new technologies that are capable of measuring components with complex geometry and large dimensions. However, no standard or best-practice guides are available for the majority of such systems. Therefore, these new systems require appropriate testing and verification in order for the users to understand their full potential prior to their deployment in a real manufacturing environment. This is a crucial stage, especially when more than one system can be used for a specific measurement task. In this paper, two relatively new large-volume measurement systems, the mobile spatial co-ordinate measuring system (MScMS) and the indoor global positioning system (iGPS), are reviewed. These two systems utilize different technologies: the MScMS is based on ultrasound and radiofrequency signal transmission and the iGPS uses laser technology. Both systems have components with small dimensions that are distributed around the measuring area to form a network of sensors allowing rapid dimensional measurements to be performed in relation to large-size objects, with typical dimensions of several decametres. The portability, reconfigurability, and ease of installation make these systems attractive for many industries that manufacture large-scale products. In this paper, the major technical aspects of the two systems are briefly described and compared. Initial results of the tests performed to establish the repeatability and reproducibility of these systems are also presented. © IMechE 2009.
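As a rough illustration of how repeatability and reproducibility might be quantified from repeated coordinate measurements, here is a minimal Python sketch. The data values, and the choice of one-sigma standard deviations as the summary statistic, are illustrative assumptions, not the paper's actual test protocol.

```python
import numpy as np

# Hypothetical repeated measurements of one coordinate of a target point (mm),
# e.g. from an MScMS- or iGPS-style sensor network; values are invented.
runs_same_setup = np.array([1000.12, 1000.08, 1000.15, 1000.10, 1000.11])
runs_after_redeploy = np.array([1000.21, 1000.05, 1000.18, 1000.02, 1000.14])

# Repeatability: spread of back-to-back measurements under identical conditions.
repeatability = runs_same_setup.std(ddof=1)

# Reproducibility: spread once the network is torn down and reinstalled
# (changed conditions). Pooling both sessions is a simplification; a full
# ISO 5725-style treatment would separate the variance components.
pooled = np.concatenate([runs_same_setup, runs_after_redeploy])
reproducibility = pooled.std(ddof=1)

print(f"repeatability   (1 sigma): {repeatability:.3f} mm")
print(f"reproducibility (1 sigma): {reproducibility:.3f} mm")
```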
Abstract:
The purpose of this work is to gain knowledge of the kinetics of biomass decomposition under oxidative atmospheres, mainly examining the effect of heating rate on different biomass species. Two sets of experiments are carried out: the first is the thermal decomposition of four different wood particles, namely aspen, birch, oak and pine, under an oxidative atmosphere with analysis by TGA; the second uses large-size samples of wood under different heat fluxes in a purpose-built furnace, where the temperature distribution, mass loss and ignition characteristics are recorded and analyzed by a data post-processing system. The experimental data are then used to develop a two-step reaction kinetic scheme with low- and high-temperature regions, and the activation energy for the reactions of the species under different heating rates is calculated. It is found that the activation energy of the second-stage reaction for species with similar constituent fractions tends to converge to a similar value at high heating rates.
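A minimal sketch of how an activation energy can be extracted from rate data via an Arrhenius fit, the standard route for TGA-derived kinetics. The temperatures and rate constants below are invented, and the single-reaction first-order framing is a simplifying assumption, not the paper's two-step scheme.

```python
import numpy as np

# Illustrative TGA-style data: temperatures (K) and apparent first-order
# rate constants k (1/s) inferred from mass-loss curves. Values invented.
T = np.array([550.0, 575.0, 600.0, 625.0, 650.0])
k = np.array([2.1e-4, 6.8e-4, 1.9e-3, 4.9e-3, 1.1e-2])

# Arrhenius: k = A * exp(-Ea / (R*T))  =>  ln k = ln A - (Ea/R) * (1/T),
# so a straight-line fit of ln k against 1/T yields Ea from the slope.
R = 8.314  # gas constant, J/(mol K)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R        # activation energy, J/mol
A = np.exp(intercept)  # pre-exponential factor, 1/s

print(f"Ea ~ {Ea/1000:.1f} kJ/mol, A ~ {A:.2e} 1/s")
```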
Abstract:
Compared to packings, trays are more cost-effective column internals because they create a large interfacial area for mass transfer through the interaction of the vapour with the liquid. The tray supports a mass of froth or spray which on most trays (including the most widely used sieve trays) is not in any way controlled. The two important results of the gas/liquid interaction are the tray efficiency and the tray throughput or capacity. After many years of practical experience, both may be predicted by empirical correlations, despite the lack of understanding. It is known that the tray efficiency is in part determined by the liquid flow pattern, and the throughput by the liquid froth height, which in turn depends on the liquid hold-up and vapour velocity. This thesis describes experimental work on sieve trays in an air-water simulator, 2.44 m in diameter. The liquid flow pattern, for flow rates similar to those used in commercial-scale distillation, was observed experimentally by direct observation; by water-cooling, to simulate mass transfer; by use of potassium permanganate dye to observe areas of longer residence time; and by height-of-clear-liquid measurements across the tray and in the downcomer using manometers. This work presents experiments designed to evaluate flow-control devices proposed to improve the gas-liquid interaction and hence improve the tray efficiency and throughput. These are (a) the use of intermediate weirs to redirect liquid to the sides of the tray so as to remove slow-moving/stagnant liquid, and (b) the use of vapour-directing slots designed to use the vapour to direct liquid towards the outlet weir, thus reducing the liquid hold-up at a given rate, i.e. increased throughput. This method also has the advantage of removing slow-moving/stagnant liquid. In the experiments using intermediate weirs, which were placed in the centre of the tray, it was found that in general the effect of an intermediate weir depends on the depth of liquid downstream of the weir. If the weir is deeper than the downstream depth, it will cause the upstream liquid to be deeper than the downstream liquid. If the weir is not as deep as the downstream depth, it may have little or no effect on the upstream depth. An intermediate weir placed at an angle to the direction of flow increases the flow of liquid towards the sides of the tray without causing an increase in liquid hold-up/froth height. The maximum proportion of liquid caused to flow sideways by the weir is between 5% and 10%. Experimental work using vapour-directing slots on a rectangular sieve tray has shown that the horizontal momentum imparted to the liquid depends on the size of the slot. If too much momentum is transferred to the liquid, hydraulic jumps occur at the mouth of the slot, coupled with liquid being entrained. The use of slots also helps to eliminate the hydraulic gradient across sieve trays and provides a more uniform froth height on the tray. By comparing the tray and point efficiencies obtained, it is shown that a slotted tray reduces both values by approximately 10%. This reduction is due to the fact that with a slotted tray the liquid has a reduced residence time on the tray, coupled with the fact that large bubbles pass through the slots. The effectiveness of using vapour-directing slots on a full circular tray was investigated by using dye to completely colour the biphase.
The removal of the dye by clear liquid entering the tray was monitored using an overhead camera. Results obtained show that the slots are successful in their aim of removing slow-moving liquid from the sides of the tray. The net effect of this is an increase in tray efficiency. Measurements of the slot vapour velocity found it to be approximately equal to the hole velocity.
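For context, the depth arguments around weirs in tray hydraulics are commonly framed with the sharp-crested (Francis-type) weir relation sketched below; whether the thesis uses exactly this correlation is an assumption.

```latex
% Sharp-crested (Francis-type) weir relation in SI units:
% Q    = liquid flow rate over the weir (m^3/s)
% L_w  = weir length (m), h_ow = height of liquid crest over the weir (m)
Q = 1.84\, L_w\, h_{ow}^{3/2}
\qquad\Longrightarrow\qquad
h_{ow} = \left( \frac{Q}{1.84\, L_w} \right)^{2/3}
% The clear liquid height on the tray is then often estimated as
% h_c = h_w + h_ow, with h_w the outlet weir height.
```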
Abstract:
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
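To make the multimodality issue concrete, here is a generic Metropolis-Hastings sampler on a toy bimodal posterior. It is not the paper's enhanced MCMC scheme or its scatterometer likelihood, just a sketch of why naive samplers struggle on multimodal targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Toy bimodal posterior (equal-weight mixture of two unit Gaussians),
    # standing in for the multimodal wind-vector posterior.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def metropolis(n_samples, step=1.0, x0=0.0):
    x, lp = x0, log_post(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

samples = metropolis(20000)
# With a small step size the chain tends to get stuck in one mode --
# the motivation for enhanced MCMC schemes such as the paper's.
print("fraction of mass near +2:", np.mean(samples > 0))
```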
Abstract:
Ribozymes are short strands of RNA that possess a huge potential as biological tools for studying gene expression and as therapeutic agents to down-regulate undesirable gene expression. Successful application of ribozymes requires delivery to the target site in sufficient amounts for an adequate duration. However, due to their large size and polyanionic character, ribozymes are not amenable to transport across biological membranes. In this study a chemically modified ribozyme with enhanced biological stability, targeted against the EGFR mRNA, has been evaluated for cellular delivery to cultured glial and neuronal cells with a view to developing treatments for brain tumours. Cellular delivery of free ribozyme was characterised in cultured glial and neuronal cells from the human and rat. Delivery was very limited and time-dependent, with no consistent difference observed between glial and neuronal cells in either species. Cellular association was largely temperature- and energy-dependent, with a small component of non-energy-dependent association. Further studies showed that ribozyme cellular association was inhibited by self- and cross-competition with nucleic and non-nucleic acid polyanions, indicating the presence of cell-surface ribozyme-binding molecules. Trypsin washing experiments further implied that the ribozyme-binding surface molecules were protein in nature. Dependence of cellular association on pH indicated that the interaction of ribozyme with cell-surface molecules was based on ionic interactions. Fluorescence studies indicated that, following cell association, ribozymes were sequestered in sub-cellular vesicles. South-Western blots identified several cell-surface proteins which bind to ribozymes and could facilitate cellular association. The limited cellular association observed with free ribozyme prompted the development and evaluation of polylactide-co-glycolide (PLGA) microspheres incorporating ribozyme for enhanced cellular delivery. Characterisation of microsphere-mediated delivery of ribozyme in cultured glial and neuronal cells showed that association increased by 18- to 27-fold in all cell types, with no differences observed between cell lines and species. Microsphere-mediated delivery was temperature- and energy-dependent and independent of pH. In order to assess the potential of PLGA microspheres for the CNS delivery of ribozyme, the distribution of ribozyme-entrapping microspheres was investigated in the rat CNS after intracerebroventricular injection. Distribution studies demonstrated that after 24 hours there was no free ribozyme present in the brain parenchyma; however, microsphere-entrapped ribozyme was found in the CNS. Microspheres remained in the ventricular system after deposition and passed from the lateral ventricles to the third and fourth ventricles and into the subarachnoid space. Investigation of the influence of microsphere size on the distribution in the CNS demonstrated that particles of 2.5 and 0.5 μm remained in the ventricles around the choroid plexus and ependymal lining.
Abstract:
University students encounter difficulties with academic English because of its vocabulary, phraseology, and variability, and also because academic English differs in many respects from general English, the language which they have experienced before starting their university studies. Although students have been provided with many dictionaries that contain some helpful information on words used in academic English, these dictionaries remain focused on the uses of words in general English. There is therefore a gap in the dictionary market for a dictionary for university students, and this thesis provides a proposal for such a dictionary (called the Dictionary of Academic English; DOAE) in the form of a model which depicts how the dictionary should be designed, compiled, and offered to students. The model draws on state-of-the-art techniques in lexicography, dictionary-use research, and corpus linguistics. The model demanded the creation of a completely new corpus of academic language (Corpus of Academic Journal Articles; CAJA). The main advantages of the corpus are its large size (83.5 million words) and balance. Having access to a large corpus of academic language was essential for a corpus-driven approach to data analysis. A good corpus balance in terms of domains enabled detailed domain-labelling of senses, patterns, collocates, etc. in the dictionary database, which was then used to tailor the output according to the needs of different types of student. The model proposes a dictionary that is designed as an online dictionary from the outset. The proposed dictionary is revolutionary in the way it addresses the needs of different types of student: it presents students with a dynamic dictionary whose contents can be customised according to the user's native language, subject of study, variant spelling preferences, and/or visual preferences (e.g. black and white).
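As an illustration of the kind of domain-labelled, corpus-driven analysis such a dictionary database relies on, here is a minimal collocate-counting sketch. The sentences, domain labels, and function are invented stand-ins, not the CAJA pipeline or its tokenization.

```python
from collections import Counter, defaultdict

# Tiny stand-in for domain-labelled corpus sentences (the real CAJA corpus
# is 83.5 million words); the (domain, tokens) pairs are invented examples.
corpus = [
    ("medicine", "the significant effect of the treatment".split()),
    ("economics", "a significant effect on market growth".split()),
    ("medicine", "a significant increase in risk".split()),
]

def collocates_by_domain(corpus, node, window=2):
    """Count words within +/-window of each occurrence of `node`, per domain."""
    counts = defaultdict(Counter)
    for domain, tokens in corpus:
        for i, tok in enumerate(tokens):
            if tok == node:
                lo, hi = max(0, i - window), i + window + 1
                counts[domain].update(tokens[lo:i] + tokens[i + 1:hi])
    return counts

for domain, ctr in collocates_by_domain(corpus, "significant").items():
    print(domain, ctr.most_common(3))
```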
Abstract:
We obtained an analytical expression for the computational complexity of many-layered committee machines with a finite number of hidden layers (L < 8), using the generalization complexity measure introduced by Franco et al (2006) IEEE Trans. Neural Netw. 17 578. Although our result is valid in the large-size limit and for an overlap synaptic matrix that is ultrametric, it provides a useful tool for inferring the appropriate architecture a network must have to reproduce an arbitrary realizable Boolean function.
Abstract:
We consider a variation of the prototype combinatorial optimization problem known as graph colouring. Our optimization goal is to colour the vertices of a graph with a fixed number of colours, in a way to maximize the number of different colours present in the set of nearest neighbours of each given vertex. This problem, which we pictorially call palette-colouring, has been recently addressed as a basic example of a problem arising in the context of distributed data storage. Even though it has not been proved to be NP-complete, random search algorithms find the problem hard to solve. Heuristics based on a naive belief propagation algorithm are observed to work quite well in certain conditions. In this paper, we build upon the mentioned result, working out the correct belief propagation algorithm, which needs to take into account the many-body nature of the constraints present in this problem. This method improves the naive belief propagation approach at the cost of increased computational effort. We also investigate the emergence of a satisfiable-to-unsatisfiable 'phase transition' as a function of the vertex mean degree, for different ensembles of sparse random graphs in the large size ('thermodynamic') limit.
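To make the objective concrete, here is a naive greedy local-search baseline for palette-colouring, not the paper's belief propagation: each vertex repeatedly adopts the colour that maximizes the total number of distinct colours seen across all neighbourhoods. Summing the per-vertex counts into one score is our scalarisation of the stated goal.

```python
import random

def neighbourhood_score(colouring, adj):
    # Objective proxy: total number of distinct colours present in the
    # neighbour set of each vertex, summed over vertices.
    return sum(len({colouring[u] for u in adj[v]}) for v in adj)

def greedy_palette(adj, q, sweeps=50, seed=0):
    rng = random.Random(seed)
    colouring = {v: rng.randrange(q) for v in adj}
    for _ in range(sweeps):
        for v in adj:
            # Try every colour at v, keep the one with the best global score.
            best = max(range(q), key=lambda c: neighbourhood_score(
                {**colouring, v: c}, adj))
            colouring[v] = best
    return colouring

# Toy instance: a 5-cycle coloured with q = 2 colours.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
col = greedy_palette(adj, q=2)
print(col, neighbourhood_score(col, adj))
```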
Abstract:
Code division multiple access (CDMA) in which the spreading code assignment to users contains a random element has recently become a cornerstone of CDMA research. The random element in the construction is particularly attractive as it provides robustness and flexibility in utilizing multi-access channels, whilst not making significant sacrifices in terms of transmission power. Random codes are generated from some ensemble; here we consider the possibility of combining two standard paradigms, sparsely and densely spread codes, in a single composite code ensemble. The composite code analysis includes a replica symmetric calculation of performance in the large system limit, and investigation of finite systems through a composite belief propagation algorithm. A variety of codes are examined with a focus on the high multi-access interference regime. We demonstrate scenarios both in the large size limit and for finite systems in which the composite code has typical performance exceeding that of sparse and dense codes at equivalent signal-to-noise ratio.
Abstract:
Integer-valued data envelopment analysis (DEA) with alternative returns-to-scale technology has been introduced and developed recently by Kuosmanen and Kazemi Matin. The proportionality assumption of their "natural augmentability" axiom in constant and non-decreasing returns-to-scale technologies makes it possible to achieve feasible decision-making units (DMUs) of arbitrarily large size. In many real-world applications it is not possible to achieve such production plans, since some of the input and output variables are bounded above. In this paper, we extend the axiomatic foundation of integer-valued DEA models to include bounded output variables. Some model variants are achieved by introducing a new axiom of "boundedness" over the selected output variables. A mixed integer linear programming (MILP) formulation is also introduced for computing efficiency scores in the associated production set. © 2011 The Authors. International Transactions in Operational Research © 2011 International Federation of Operational Research Societies.
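A simplified sketch of the kind of MILP involved, written in an input-oriented form with integer-valued reference targets and an upper bound on selected outputs. The notation and exact constraint set are assumptions for illustration, not the authors' precise axiomatization.

```latex
% Input-oriented efficiency of DMU o. x_{ij}, y_{rj} are the (integer)
% inputs/outputs of DMU j; \lambda_j are intensity weights; \bar{y}_r is
% the upper bound on a selected output r. Notation assumed.
\begin{aligned}
\min_{\theta,\,\lambda,\,\tilde{x},\,\tilde{y}}\quad & \theta \\
\text{s.t.}\quad
& \sum_j \lambda_j x_{ij} \;\le\; \tilde{x}_i \;\le\; \theta\, x_{io}, \\
& y_{ro} \;\le\; \tilde{y}_r \;\le\; \sum_j \lambda_j y_{rj},
  \qquad \tilde{y}_r \le \bar{y}_r \ \ (r \text{ bounded}), \\
& \lambda_j \ge 0, \qquad \tilde{x}_i,\ \tilde{y}_r \in \mathbb{Z}_{+}.
\end{aligned}
```

The integrality of the targets $(\tilde{x}, \tilde{y})$ is what distinguishes integer-valued DEA from the classical LP models, and the added bound $\tilde{y}_r \le \bar{y}_r$ reflects the new "boundedness" axiom.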
Abstract:
ABC (ATP-binding-cassette) transporters carry out many vital functions and are involved in numerous diseases, but study of the structure and function of these proteins is often hampered by their large size and membrane location. Membrane protein purification usually utilizes detergents to solubilize the protein from the membrane, effectively removing it from its native lipid environment. Subsequently, lipids have to be added back and detergent removed to reconstitute the protein into a lipid bilayer. In the present study, we present the application of a new methodology for the extraction and purification of ABC transporters without the use of detergent, using instead a copolymer, SMA (polystyrene-co-maleic acid). SMA inserts into a bilayer and assembles into discrete particles, essentially solubilizing the membrane into small discs of bilayer encircled by a polymer, termed SMALPs (SMA lipid particles). We show that this polymer can extract several eukaryotic ABC transporters, P-glycoprotein (ABCB1), MRP1 (multidrug-resistance protein 1; ABCC1), MRP4 (ABCC4), ABCG2 and CFTR (cystic fibrosis transmembrane conductance regulator; ABCC7), from a range of different expression systems. The SMALP-encapsulated ABC transporters can be purified by affinity chromatography, and are able to bind ligands comparably with those in native membranes or detergent micelles. A greater degree of purity and enhanced stability is seen compared with detergent solubilization. The present study demonstrates that eukaryotic ABC transporters can be extracted and purified without ever being removed from their lipid bilayer environment, opening up a wide range of possibilities for the future study of their structure and function. © The Authors Journal compilation © 2014 Biochemical Society.
Abstract:
Synchronous reluctance motors (SynRMs) are gaining in popularity in industrial drives owing to their permanent-magnet-free design, competitive performance, and robustness. This paper studies the power losses in a 90-kW converter-fed SynRM drive by a calorimetric method, in comparison with the traditional input-output method. After the converter and the motor were measured simultaneously in separate chambers, the converter was installed inside the large-size chamber next to the motor and the total drive-system losses were obtained using one chamber. The uncertainty of both measurement methods is analyzed and discussed.
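The calorimetric method rests on a steady-state energy balance: the device's losses are recovered from the enthalpy rise of the chamber coolant, as sketched below. The symbols are assumed, not taken from the paper.

```latex
% Steady-state calorimetric balance: device losses equal the heat carried
% away by the chamber coolant flow,
P_{\mathrm{loss}} = \dot{m}\, c_p \,\bigl(T_{\mathrm{out}} - T_{\mathrm{in}}\bigr)
% where \dot{m} is the coolant mass flow, c_p its specific heat capacity,
% and T_{in}, T_{out} the coolant inlet/outlet temperatures. The
% input-output method instead uses P_loss = P_in - P_out, a small
% difference of two large measured powers, which is why its relative
% uncertainty grows at high efficiency.
```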
Abstract:
The re-entrant flow shop scheduling problem (RFSP) is regarded as an NP-hard problem and has attracted the attention of both researchers and industry. Current approaches attempt to minimize the makespan of the RFSP without considering the interdependency between the resource constraints and the re-entrant probability. This paper proposes a multi-level genetic algorithm (GA) that includes the correlated re-entrant possibility and production mode in a multi-level chromosome encoding. A repair operator is incorporated in the algorithm to revise infeasible solutions by resolving resource conflicts. With the objective of minimizing the makespan, ANOVA is used to fine-tune the parameter settings of the GA. Experiments show that the proposed approach is more effective at finding near-optimal schedules than a simulated annealing algorithm for both small-size and large-size problems. © 2013 Published by Elsevier Ltd.
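A minimal GA skeleton illustrating the crossover-then-repair pattern the abstract describes. The two-machine flow-shop objective, the permutation encoding, and all parameters below are simplifying stand-ins, not the paper's multi-level encoding or its re-entrant problem data.

```python
import random

rng = random.Random(1)

def makespan(perm, p):
    # Two-machine flow-shop makespan (p[j] = (t1, t2) for job j); a simple
    # stand-in for the re-entrant flow-shop objective.
    c1 = c2 = 0
    for j in perm:
        c1 += p[j][0]
        c2 = max(c1, c2) + p[j][1]
    return c2

def repair(child, n):
    # Repair operator: rebuild a valid permutation (each job exactly once),
    # analogous in spirit to the paper's resource-conflict repair.
    seen, out = set(), []
    for j in child:
        if j not in seen:
            out.append(j); seen.add(j)
    out += [j for j in range(n) if j not in seen]
    return out

def ga(p, pop_size=30, gens=200):
    n = len(p)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: makespan(s, p))   # rank by fitness
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = repair(a[:cut] + b[cut:], n)  # crossover, then repair
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda s: makespan(s, p))

jobs = [(3, 2), (2, 4), (4, 1), (1, 3), (2, 2)]  # invented processing times
best = ga(jobs)
print(best, makespan(best, jobs))
```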
Abstract:
Circulating low density lipoproteins (LDL) are thought to play a crucial role in the onset and development of atherosclerosis, though the detailed molecular mechanisms responsible for their biological effects remain controversial. The complexity of biomolecules (lipids, glycans and protein) and structural features (isoforms and chemical modifications) found in LDL particles hampers the complete understanding of the mechanism underlying its atherogenicity. For this reason the screening of LDL for features discriminative of a particular pathology in search of biomarkers is of high importance. Three major biomolecule classes (lipids, protein and glycans) in LDL particles were screened using mass spectrometry coupled to liquid chromatography. Dual-polarity screening resulted in good lipidome coverage, identifying over 300 lipid species from 12 lipid sub-classes. Multivariate analysis was used to investigate potential discriminators in the individual lipid sub-classes for different study groups (age, gender, pathology). Additionally, the high protein sequence coverage of ApoB-100 routinely achieved (≥70%) assisted in the search for protein modifications correlating to aging and pathology. The large size and complexity of the datasets required the use of chemometric methods (Partial Least Squares-Discriminant Analysis, PLS-DA) for their analysis and for the identification of ions that discriminate between study groups. The peptide profile from enzymatically digested ApoB-100 can be correlated with the high structural complexity of lipids associated with ApoB-100 using exploratory data analysis. In addition, using targeted scanning modes, glycosylation sites within neutral and acidic sugar residues in ApoB-100 are also being explored. Together or individually, knowledge of the profiles and modifications of the major biomolecules in LDL particles will contribute towards an in-depth understanding, will help to map the structural features that contribute to the atherogenicity of LDL, and may allow identification of reliable, pathology-specific biomarkers. This research was supported by a Marie Curie Intra-European Fellowship within the 7th European Community Framework Program (IEF 255076). Work of A. Rudnitskaya was supported by Portuguese Science and Technology Foundation, through the European Social Fund (ESF) and "Programa Operacional Potencial Humano - POPH".
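A minimal PLS-DA sketch using scikit-learn's PLSRegression on one-hot class labels, which is the usual way PLS-DA is implemented. The data are synthetic and the feature-ranking step is illustrative, not the study's workflow.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Invented stand-in for an LDL lipidomics matrix: 40 samples x 300 lipid
# features, two study groups with a small systematic shift in 10 features.
X = rng.normal(size=(40, 300))
y = np.repeat([0, 1], 20)
X[y == 1, :10] += 1.0  # group-discriminating features

# PLS-DA: regress one-hot class membership on the feature matrix.
Y = np.eye(2)[y]
pls = PLSRegression(n_components=2)
pls.fit(X, Y)

pred = pls.predict(X).argmax(axis=1)
print("training accuracy:", (pred == y).mean())

# Features with large weights on the first latent component are candidate
# discriminators between the study groups.
top = np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:5]
print("top candidate features:", top)
```

In practice the discriminators would be validated on held-out samples (cross-validation) before being proposed as biomarkers.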
Abstract:
In this paper new architectural approaches that improve the energy efficiency of a cellular radio access network (RAN) are investigated. The aim of the paper is to characterize both the energy consumption ratio (ECR) and the energy consumption gain (ECG) of a cellular RAN when the cell size is reduced for a given user density and service area. The paper affirms that reducing the cell size reduces the cell ECR as desired while increasing the capacity density, but the overall RAN energy consumption remains unchanged. In order to trade the increase in capacity density against RAN energy consumption, without degrading the cell capacity provision, a sleep mode is introduced. In sleep mode, cells without active users are powered off, thereby saving energy. By combining a sleep mode with a small-cell deployment architecture, the paper shows that the ECG can be increased by the factor n = (R/r), where R and r denote the original and reduced cell radii, while the cell ECR continues to decrease with decreasing cell size.
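A back-of-envelope sketch of the sleep-mode argument: shrinking cells multiplies their count while per-cell power falls, leaving total RAN power roughly unchanged, but powering off empty cells then yields a net gain. All numbers and the Poisson-user activity model below are invented for illustration, not the paper's system model.

```python
import math

service_area = 100.0  # km^2 (invented)
user_density = 0.2    # active users per km^2 (sparse; invented)

def ran_power(cell_radius_km, p_cell_w, sleep_mode):
    """Total RAN power for a uniform deployment of circular cells."""
    cell_area = math.pi * cell_radius_km ** 2
    n_cells = service_area / cell_area
    if sleep_mode:
        # With Poisson-distributed users, a cell has at least one active
        # user with probability 1 - exp(-density * area); empty cells sleep.
        n_cells *= 1.0 - math.exp(-user_density * cell_area)
    return n_cells * p_cell_w

p_macro = ran_power(2.0, 1500.0, sleep_mode=False)  # macro-cell reference
p_small = ran_power(0.5, 100.0, sleep_mode=True)    # small cells + sleep
print("ECG ~", p_macro / p_small)
```

Without sleep mode the small-cell deployment here consumes about the same total power as the macro reference, matching the abstract's observation; the gain appears only once empty cells are switched off.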