42 results for Network scale-up method


Relevance:

40.00%

Publisher:

Abstract:

A study of the hydrodynamics and mass transfer characteristics of a liquid-liquid extraction process in a 450 mm diameter, 4.30 m high Rotating Disc Contactor (R.D.C.) has been undertaken. The literature relating to this type of extractor and the relevant phenomena, such as droplet break-up and coalescence, drop mass transfer and axial mixing, has been reviewed. Experiments were performed using the system Clairsol-350-acetone-water and the effects of drop size, drop size distribution and dispersed phase hold-up on the performance of the R.D.C. established. The results obtained for the two-phase system Clairsol-water have been compared with published correlations: since most of these correlations are based on data obtained from laboratory-scale R.D.C.s, a wide divergence was found. The hydrodynamic data from this study have therefore been correlated to predict the drop size and the dispersed phase hold-up, and agreement with the experimental data has been obtained to within ±8% for the drop size and ±9% for the dispersed phase hold-up. The correlations obtained were modified to include terms involving column dimensions and the data have been correlated with the results obtained from this study together with published data; agreement was generally within ±17% for drop size and within ±14% for the dispersed phase hold-up. The experimental drop size distributions obtained were in excellent agreement with the upper-limit log-normal distributions, which should therefore be used in preference to other distribution functions. In the calculation of the overall experimental mass transfer coefficient, the mean driving force was determined from the concentration profile along the column using Simpson's Rule, and a novel method was developed to calculate the overall theoretical mass transfer coefficient Kca, involving the drop size distribution diagram to determine the volume percentage of stagnant, circulating and oscillating drops in the sample population. Individual mass transfer coefficients were determined for the corresponding droplet state using different single-drop mass transfer models. Kca was then calculated as the fractional sum of these individual coefficients and their proportions in the drop sample population. Very good agreement was found between the experimental and theoretical overall mass transfer coefficients. Drop sizes under mass transfer conditions were strongly dependent upon the direction of mass transfer. Drop sizes in the absence of mass transfer were generally larger than those with solute transfer from the continuous to the dispersed phase, but smaller than those with solute transfer in the opposite direction at corresponding phase flowrates and rotor speeds. Under similar operating conditions hold-up was also affected by mass transfer; it was higher when solute transferred from the continuous to the dispersed phase and lower when the direction was reversed, compared with non-mass-transfer operation.
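
As a concrete illustration of the fractional-sum step described above, the Python sketch below shows how an overall coefficient Kca can be assembled from stagnant, circulating and oscillating contributions; the volume fractions and single-drop coefficients are purely illustrative placeholders, not data from the study.

```python
# Minimal sketch (not the thesis code): the overall coefficient Kca is computed as
# the fractional sum of single-drop coefficients, weighted by the volume fraction of
# stagnant, circulating and oscillating drops in the sample population.
# All numbers below are illustrative placeholders.

def overall_kca(fractions, coefficients):
    """fractions and coefficients are dicts keyed by droplet state."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-6, "volume fractions must sum to 1"
    return sum(fractions[state] * coefficients[state] for state in fractions)

volume_fractions = {"stagnant": 0.15, "circulating": 0.55, "oscillating": 0.30}
k_single_drop   = {"stagnant": 1.2e-6, "circulating": 4.8e-6, "oscillating": 9.5e-6}  # m/s

print(f"Kca ~ {overall_kca(volume_fractions, k_single_drop):.3e} m/s")
```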

Relevance:

40.00%

Publisher:

Abstract:

The subject of this thesis is the n-tuple network (RAMnet). The major advantage of RAMnets is their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, this method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to understand better the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance. It gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification based on memorisation of random features can be a powerful alternative to slower cost-driven models. The speed of the method comes at the expense of its optimality. RAMnets will fail for certain datasets, but the cases when they do so are relatively easy to determine with the analytical tools we provide.
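
For readers unfamiliar with the model, the minimal Python sketch below illustrates the general idea of a binary n-tuple (RAM) classifier: training proceeds by memorising randomly sampled bit-tuples rather than by minimising a cost function. The tuple size, number of tuples and toy patterns are illustrative choices, not the configuration benchmarked in the thesis.

```python
import random

# Minimal sketch of a binary n-tuple (RAM) classifier in the spirit described above.
class NTupleClassifier:
    def __init__(self, n_bits, tuple_size=4, n_tuples=64, n_classes=2, seed=0):
        rng = random.Random(seed)
        bits = list(range(n_bits))
        # each "RAM" samples tuple_size random bit positions from the input
        self.tuples = [rng.sample(bits, tuple_size) for _ in range(n_tuples)]
        self.memory = [[set() for _ in range(n_tuples)] for _ in range(n_classes)]

    def _addresses(self, x):
        # address of each RAM cell = the bit pattern seen at its tuple positions
        return [tuple(x[i] for i in t) for t in self.tuples]

    def train(self, x, label):
        for ram, addr in enumerate(self._addresses(x)):
            self.memory[label][ram].add(addr)   # memorise the feature, no cost function

    def predict(self, x):
        addrs = self._addresses(x)
        scores = [sum(addr in self.memory[c][ram] for ram, addr in enumerate(addrs))
                  for c in range(len(self.memory))]
        return scores.index(max(scores))

# toy usage: two 16-bit patterns, one per class
clf = NTupleClassifier(n_bits=16)
clf.train([1]*8 + [0]*8, label=0)
clf.train([0]*8 + [1]*8, label=1)
print(clf.predict([1]*8 + [0]*8))  # -> 0
```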

Relevance:

40.00%

Publisher:

Abstract:

This research concerns the development of coordination and co-governance in three different regeneration programmes within one Midlands city over the period from 1999 to 2002. The New Labour government, in office since 1997, had an agenda for ‘joining-up’ government, part of which has had considerable impact in the area of regeneration policy. Joining-up government encompasses a set of related activities which can include the coordination of policy-making and service delivery. In regeneration, it also includes a commitment to operate through co-governance. Central government and local and regional organisations have sought to put this idea into practice by using what may be referred to as network management processes. Many characteristics of new policies are designed to address the management of networks. Network management is not new in this area; it has developed at least since the early 1990s with the City Challenge and Single Regeneration Budget (SRB) programmes as a way of encouraging more inclusive and effective regeneration interventions. Network management theory suggests that better management can improve decision-making outcomes in complex networks. The theories and concepts are utilised in three case studies as a way of understanding how and why regeneration attempts demonstrate real advances in inter-organisational working at certain times whilst faltering at others. Current cases are compared to the historical case of the original SRB programme as a method of assessing change. The findings suggest that: The use of network management can be identified at all levels of governance. As previous literature has highlighted, central government is the most important actor regarding network structuring. However, it can be argued that network structuring and game management are both practised by central and local actors; Furthermore, all three of the theoretical perspectives within network management (Instrumental, Institutional and Interactive) have been identified within UK regeneration networks. All may have a role to play, with no single perspective likely to succeed on its own. Therefore, all could make an important contribution to the understanding of how groups can be brought together to work jointly; The findings support Klijn’s (1997) assertion that the institutional perspective is dominant for understanding network management processes; Instrumentalism continues on all sides, as the acquisition of resources remains the major driver for partnership activity; The level of interaction appears to be low despite the intentions for interactive decision-making; Overall, network management remains partial. Little attention is paid to the issues of accountability or to the institutional structures which can prevent networks from implementing the policies designed by central government and/or the regional tier.

Relevance:

40.00%

Publisher:

Abstract:

Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm for segmenting non-textured regions; and the Granlund method for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on the issues related to the engineering of concurrent image processing applications.
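
The row-column approach to the two-dimensional Fourier transform, and its use for frequency-domain convolution, can be summarised by the single-process Python sketch below; the distribution of the row and column passes over a Transputer array is not reproduced here, and the image and kernel are placeholders.

```python
import numpy as np

# Single-process sketch of the Row-Column 2-D FFT and frequency-domain convolution.
def fft2_row_column(a):
    rows = np.fft.fft(a, axis=1)          # 1-D FFT along each row
    return np.fft.fft(rows, axis=0)       # then 1-D FFT along each column

def convolve_freq(image, kernel):
    K = np.zeros_like(image, dtype=complex)
    K[:kernel.shape[0], :kernel.shape[1]] = kernel   # zero-pad kernel to image size
    return np.real(np.fft.ifft2(fft2_row_column(image) * fft2_row_column(K)))

image  = np.random.rand(64, 64)           # placeholder image
kernel = np.ones((3, 3)) / 9.0            # simple smoothing kernel
smoothed = convolve_freq(image, kernel)   # note: circular convolution
print(smoothed.shape)
```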

Relevance:

40.00%

Publisher:

Abstract:

This paper surveys the literature on scale and scope economies in the water and sewerage industry. The magnitude of scale and scope economies determines the cost-efficient configuration of any industry. In the case of a regulated sector, reliable estimates of these economies are relevant to inform reform proposals that promote vertical (un)bundling and mergers. The empirical evidence allows some general conclusions. First, there is considerable evidence for the existence of vertical scope economies between upstream water production and distribution. Second, there is only mixed evidence on the existence of (dis)economies of scope between water and sewerage activities. Third, economies of scale exist up to a certain output level, and diseconomies of scale arise if the company increases its size beyond this level. However, the optimal scale of utilities also appears to vary considerably between countries. Finally, we briefly consider the implications of our findings for water pricing and point to several directions for necessary future empirical research on the measurement of these economies and the explanation of their cross-country variation.
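
For reference, the standard textbook measures of scope and scale economies that underlie this literature (illustrative notation, not necessarily the exact specifications of the surveyed studies) can be written as follows, where C is the cost function and y_1, y_2 denote the water and sewerage outputs:

```latex
\[
SC = \frac{C(y_1, 0) + C(0, y_2) - C(y_1, y_2)}{C(y_1, y_2)}, \qquad
SE = \frac{C(y)}{\sum_i y_i \, \partial C(y)/\partial y_i}
\]
```

Here SC > 0 indicates economies of scope between the two activities, while SE > 1 (SE < 1) indicates economies (diseconomies) of scale at the observed output level.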

Relevance:

40.00%

Publisher:

Abstract:

Microfluidics has recently emerged as a new method of manufacturing liposomes, which allows for reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters in a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiments and multivariate data analysis were used for increased process understanding and development of predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes, with a strong correlation between FRR and vesicle size, demonstrating the ability to control liposome size in-process; the resulting liposome size correlated with the FRR in the microfluidics process, with liposomes of 50 nm being reproducibly manufactured. Furthermore, we demonstrate the potential of high-throughput manufacturing of liposomes using microfluidics with a four-fold increase in the volumetric flow rate, maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and was modelled using predictive modelling. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows for the generation of a design space for controlled particle characteristics.
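
The two process parameters varied in the study can be illustrated with the short Python sketch below; the flow-rate values and the aqueous-to-solvent definition of FRR are assumptions for illustration, not the experimental settings.

```python
# Illustrative sketch of the two microfluidic process parameters (TFR and FRR);
# the flow-rate values below are placeholders, not the study's settings.
def process_parameters(aqueous_flow_ul_min, solvent_flow_ul_min):
    tfr = aqueous_flow_ul_min + solvent_flow_ul_min   # total flow rate
    frr = aqueous_flow_ul_min / solvent_flow_ul_min   # aqueous:solvent flow rate ratio
    return tfr, frr

tfr, frr = process_parameters(aqueous_flow_ul_min=1500, solvent_flow_ul_min=500)
print(f"TFR = {tfr} uL/min, FRR = {frr:.0f}:1")       # e.g. an FRR of 3:1
```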

Relevance:

40.00%

Publisher:

Abstract:

A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modelling robustness compared with methods based on partial least squares regression, artificial neural networks and standard support vector machines.
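
As a hedged sketch of the regression step, the snippet below uses scikit-learn's ARDRegression, a sparse Bayesian model closely related to the RVM (scikit-learn ships no RVM), on synthetic line intensities standing in for the selected analytical lines and certified chromium concentrations.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Stand-in for the RVM regression step: ARDRegression is used because scikit-learn
# has no RVM. X would hold the intensities of the adaptively selected analytical
# lines for the certified standards and y their certified Cr concentrations;
# the arrays below are synthetic placeholders.
rng = np.random.default_rng(0)
X_train = rng.random((23, 5))                      # 23 standards x 5 selected lines
y_train = X_train @ np.array([2.0, 0.5, 1.0, 0.0, 0.3]) + 0.05 * rng.standard_normal(23)

model = ARDRegression()
model.fit(X_train, y_train)

X_test = rng.random((3, 5))
y_mean, y_std = model.predict(X_test, return_std=True)
for m, s in zip(y_mean, y_std):
    print(f"predicted concentration: {m:.3f} +/- {1.96*s:.3f} (95% interval)")
```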

Relevance:

40.00%

Publisher:

Abstract:

This paper presents an effective decision-making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system is obtained by setting up an experimental, fully operational pipeline distribution system. The system is also equipped with data logging for three variables, namely inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed such that multi-operational conditions of the distribution system, including multiple pressures and flows, can be obtained. We then statistically tested and showed that the pressure and flow variables can be used as a signature of a leak under the designed multi-operational conditions. It is then shown that the detection of leakage based on training and testing the proposed multi-model decision system with prior data clustering, under multi-operational conditions, produces better recognition rates than training based on the single-model approach. This decision system is then equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leakage recognition results.
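
A minimal sketch of the "cluster first, then one model per operating condition" idea is given below; the synthetic data, the number of clusters and the use of logistic regression as the generalized linear model are illustrative assumptions, not the authors' exact models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: cluster the operating conditions, then fit one GLM per cluster.
rng = np.random.default_rng(1)
X = rng.random((600, 3))                    # columns: inlet pressure, outlet pressure, outlet flow
y = (X[:, 0] - X[:, 1] > 0.4).astype(int)   # synthetic leak label

clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
models = {}
for c in range(3):
    mask = clusters.labels_ == c
    models[c] = LogisticRegression().fit(X[mask], y[mask])   # one GLM per cluster

# decision for new samples: route each to the model of its nearest cluster
X_new = rng.random((5, 3))
labels = clusters.predict(X_new)
probs = np.array([models[c].predict_proba(x.reshape(1, -1))[0, 1]
                  for c, x in zip(labels, X_new)])
print(np.round(probs, 3))                   # leak probabilities; thresholds act as confidence limits
```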

Relevance:

40.00%

Publisher:

Abstract:

The use of hMSCs for allogeneic therapies requiring lot sizes of billions of cells will necessitate large-scale culture techniques such as the expansion of cells on microcarriers in bioreactors. Whilst much research investigating hMSC culture on microcarriers has focused on growth, much less involves their harvesting for passaging or as a step towards cryopreservation and storage. A successful new harvesting method has recently been outlined for cells grown on SoloHill microcarriers in a 5 L bioreactor [1]. Here, this new method is set out in detail, harvesting being defined as a two-step process involving cell 'detachment' from the microcarriers' surface followed by the 'separation' of the two entities. The new detachment method is based on theoretical concepts originally developed for secondary nucleation due to agitation. Based on this theory, it is suggested that a short period (here 7 min) of intense agitation in the presence of a suitable enzyme should detach the cells from the relatively large microcarriers. In addition, once detached, the cells should not be damaged because they are smaller than the Kolmogorov microscale. Detachment was then successfully achieved for hMSCs from two different donors using microcarrier/cell suspensions of up to 100 mL in a spinner flask. In both cases, harvesting was completed by separating cells from microcarriers using a Steriflip® vacuum filter. The overall harvesting efficiency was >95% and after harvesting, the cells maintained all the attributes expected of hMSCs. The underlying theoretical concepts suggest that the method is scalable and this aspect is discussed too. © 2014 The Authors.
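
The scale argument behind the detachment step can be illustrated numerically; the dissipation rate, viscosity and sizes in the Python sketch below are typical illustrative values, not measurements from the study.

```python
# Illustrative check of the scale argument: cells should sit below the Kolmogorov
# microscale while microcarriers sit above it. All values below are placeholders.
kinematic_viscosity = 1.0e-6     # m^2/s, water-like medium
epsilon = 0.1                    # W/kg, mean specific energy dissipation during intense agitation

kolmogorov_length = (kinematic_viscosity**3 / epsilon) ** 0.25   # m
cell_diameter = 20e-6            # m, typical hMSC
microcarrier_diameter = 180e-6   # m, typical microcarrier

print(f"Kolmogorov microscale ~ {kolmogorov_length*1e6:.0f} um")
print(f"cell smaller than the smallest eddies: {cell_diameter < kolmogorov_length}")
print(f"microcarrier larger than the smallest eddies: {microcarrier_diameter > kolmogorov_length}")
```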

Relevance:

40.00%

Publisher:

Abstract:

When machining a large-scale aerospace part, the part is normally located and clamped firmly until a set of features has been machined. When the part is released, its size and shape may deform beyond the tolerance limits due to stress release. This paper presents the design of a new fixing method and flexible fixtures that automatically respond to workpiece deformation during machining. Deformation is inspected and monitored on-line, and the part location and orientation can be adjusted in a timely manner to ensure that follow-up operations are carried out under low stress and with respect to the related datum defined in the design models.
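
A purely illustrative sketch of the monitor-and-adjust cycle is given below; the functions named in it are hypothetical placeholders rather than any real fixture-controller API.

```python
# Hypothetical sketch of the inspect-then-adjust idea described above.
# measure_deformation(), within_tolerance() and adjust_fixture() are placeholders.
def adaptive_fixturing_cycle(operations, measure_deformation, within_tolerance, adjust_fixture):
    for op in operations:
        op()                                         # run one machining operation
        deformation = measure_deformation()          # on-line inspection of the workpiece
        if not within_tolerance(deformation):
            adjust_fixture(deformation)              # re-locate/re-orient the part so the next
                                                     # operation starts from a low-stress state
```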

Relevance:

40.00%

Publisher:

Abstract:

GitHub is the most popular repository for open source code (Finley 2011). It has more than 3.5 million users, as the company declared in April 2013, and more than 10 million repositories, as of December 2013. It has a publicly accessible API and, since March 2012, it also publishes a stream of all the events occurring on public projects. Interactions among GitHub users are of a complex nature and take place in different forms. Developers create and fork repositories, push code, approve code pushed by others, bookmark their favorite projects and follow other developers to keep track of their activities. In this paper we present a characterization of GitHub, as both a social network and a collaborative platform. To the best of our knowledge, this is the first quantitative study about the interactions happening on GitHub. We analyze the logs from the service over 18 months (between March 11, 2012 and September 11, 2013), describing 183.54 million events, and we obtain information about 2.19 million users and 5.68 million repositories, both growing linearly in time. We show that the distributions of the number of contributors per project, watchers per project and followers per user have a power-law-like shape. We analyze social ties and repository-mediated collaboration patterns, and we observe a remarkably low level of reciprocity of the social connections. We also measure the activity of each user in terms of authored events and we observe that very active users do not necessarily have a large number of followers. Finally, we provide a geographic characterization of the centers of activity and we investigate how distance influences collaboration.
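
The reciprocity measurement can be illustrated with a toy follower graph using networkx; the edges below are placeholders for the "A follows B" relations extracted from the event stream, not GitHub data.

```python
import networkx as nx

# Toy illustration of reciprocity on a directed follower graph.
follows = [("alice", "bob"), ("bob", "alice"),      # one reciprocated pair
           ("alice", "carol"), ("dave", "carol"),   # unreciprocated follows
           ("carol", "bob")]

G = nx.DiGraph(follows)
# fraction of directed edges whose reverse edge also exists
print(f"reciprocity = {nx.reciprocity(G):.2f}")     # low values indicate asymmetric attention
```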

Relevance:

40.00%

Publisher:

Abstract:

As one of the most popular deep learning models, the convolutional neural network (CNN) has achieved huge success in image information extraction. Traditionally a CNN is trained by supervised learning with labeled data and used as a classifier by adding a classification layer at the end. Its capability of extracting image features is largely limited by the difficulty of setting up a large training dataset. In this paper, we propose a new unsupervised learning CNN model, which uses a so-called convolutional sparse auto-encoder (CSAE) algorithm to pre-train the CNN. Instead of using labeled natural images for CNN training, the CSAE algorithm can be used to train the CNN with unlabeled artificial images, which enables easy expansion of training data and unsupervised learning. The CSAE algorithm is especially designed for extracting complex features from specific objects such as Chinese characters. After the features of artificial images are extracted by the CSAE algorithm, the learned parameters are used to initialize the first CNN convolutional layer, and then the CNN model is fine-tuned with scene image patches and a linear classifier. The new CNN model is applied to Chinese scene text detection and is evaluated with a multilingual image dataset, which labels Chinese, English and numeral texts separately. More than 10% detection precision gain is observed over two CNN models.
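
A hedged PyTorch sketch of the pre-training idea is given below: a convolutional auto-encoder with an L1 sparsity penalty is trained on unlabeled patches and its encoder weights are copied into the first convolutional layer of the classifier. The layer sizes, penalty weight and random patches are illustrative, not the paper's exact CSAE configuration.

```python
import torch
import torch.nn as nn

# Illustrative sketch: unsupervised pre-training of a sparse convolutional auto-encoder,
# then initialising the classifier's first conv layer from the learned encoder.
class ConvSparseAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(1, 16, kernel_size=5, padding=2)
        self.decoder = nn.ConvTranspose2d(16, 1, kernel_size=5, padding=2)

    def forward(self, x):
        code = torch.relu(self.encoder(x))
        return self.decoder(code), code

ae = ConvSparseAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
patches = torch.rand(64, 1, 32, 32)            # unlabeled (e.g. artificial) image patches

for _ in range(10):                            # a few unsupervised pre-training steps
    recon, code = ae(patches)
    loss = nn.functional.mse_loss(recon, patches) + 1e-3 * code.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# initialise the first conv layer of the classifier CNN from the learned encoder
cnn_first_conv = nn.Conv2d(1, 16, kernel_size=5, padding=2)
cnn_first_conv.load_state_dict(ae.encoder.state_dict())
```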