904 results for Optimisation of methods


Relevance:

100.00%

Publisher:

Abstract:

Supplemented by changes adopted at the annual meetings.

Relevance:

100.00%

Publisher:

Abstract:

The power required to operate large gyratory mills often exceeds 10 MW. Hence, optimisation of the power consumption will have a significant impact on the overall economic performance and environmental impact of the mineral processing plant. In most of the published models of tumbling mills (e.g. [Morrell, S., 1996. Power draw of wet tumbling mills and its relationship to charge dynamics, Part 2: An empirical approach to modelling of mill power draw. Trans. Inst. Mining Metall. (Section C: Mineral Processing Ext. Metall.) 105, C54-C62; Austin, L.G., 1990. A mill power equation for SAG mills. Miner. Metall. Process. 57-62]), the effect of lifter design and its interaction with mill speed and filling are not incorporated. Recent experience suggests that there is an opportunity for improving grinding efficiency by choosing the appropriate combination of these variables. However, it is difficult to experimentally determine the interactions of these variables in a full-scale mill. Although some work has recently been published using DEM simulations, it was basically limited to 2D. The discrete element code, Particle Flow Code 3D (PFC3D), has been used in this work to model the effects of lifter height (5-25 cm) and mill speed (50-90% of critical) on the power draw and frequency distribution of specific energy (J/kg) of normal impacts in a 5 m diameter autogenous (AG) mill. It was found that the distribution of the impact energy is affected by the number of lifters, lifter height, mill speed and mill filling. Interactions of lifter design, mill speed and mill filling are demonstrated through three-dimensional distinct element method (3D DEM) modelling. The intensity of the induced stresses (shear and normal) on the lifters, and hence the lifter wear, is also simulated. (C) 2004 Elsevier Ltd. All rights reserved.
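
As a rough companion to the operating-speed range quoted above (50-90% of critical), the sketch below computes the conventional critical speed of a tumbling mill, N_c ≈ 42.3/√D rpm for a mill of internal diameter D in metres, and the corresponding operating speeds for a 5 m mill. The formula and values are a textbook approximation, not part of the paper's DEM model.

```python
import math

def critical_speed_rpm(diameter_m: float) -> float:
    """Approximate critical speed of a tumbling mill in rpm.

    Uses the textbook relation N_c = 42.3 / sqrt(D), with D the internal
    mill diameter in metres (media diameter neglected).
    """
    return 42.3 / math.sqrt(diameter_m)

if __name__ == "__main__":
    D = 5.0  # mill diameter in metres, as in the abstract
    n_c = critical_speed_rpm(D)
    print(f"Critical speed of a {D:.1f} m mill: {n_c:.1f} rpm")
    for frac in (0.5, 0.6, 0.7, 0.8, 0.9):  # the 50-90% range studied
        print(f"  {frac:.0%} of critical: {frac * n_c:.1f} rpm")
```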

Relevance:

100.00%

Publisher:

Abstract:

The successful development and optimisation of optically-driven micromachines will be greatly enhanced by the ability to computationally model the optical forces and torques applied to such devices. In principle, this can be done by calculating the light-scattering properties of such devices. However, while fast methods exist for scattering calculations for spheres and axisymmetric particles, optically-driven micromachines will almost always be more geometrically complex. Fortunately, such micromachines will typically possess a high degree of symmetry, most often discrete rotational symmetry. Many current designs for optically-driven micromachines are also mirror-symmetric about a plane. We show how such symmetries can be used to reduce the required computational time by orders of magnitude. Similar improvements are also possible for other highly symmetric objects such as crystals. We demonstrate the efficacy of such methods by modelling the optical trapping of a cube, and show that even simple shapes can function as optically-driven micromachines.
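
One consequence of discrete rotational symmetry in scattering calculations is a selection rule: for an N-fold symmetric scatterer, azimuthal orders m and m' couple only when m' - m is a multiple of N, so the scattering matrix becomes block-diagonal and both storage and solve time drop sharply. The sketch below is a generic illustration of that bookkeeping, assuming a simple azimuthal-mode basis and an O(n^3) solve cost; it is not the scattering code used in the paper.

```python
import numpy as np

def coupling_mask(m_max: int, n_fold: int) -> np.ndarray:
    """Allowed couplings between azimuthal orders for an N-fold symmetric scatterer.

    Orders m and m' couple only when (m' - m) is a multiple of n_fold, so the
    scattering matrix is block-diagonal in this basis.
    """
    m = np.arange(-m_max, m_max + 1)
    return (m[:, None] - m[None, :]) % n_fold == 0

def relative_solve_cost(m_max: int, n_fold: int) -> float:
    """Rough O(n^3) linear-solve cost of the symmetry blocks vs. the dense matrix."""
    m = np.arange(-m_max, m_max + 1)
    block_sizes = [int(np.sum(m % n_fold == r)) for r in range(n_fold)]
    return sum(s ** 3 for s in block_sizes) / len(m) ** 3

if __name__ == "__main__":
    m_max, n_fold = 30, 4  # e.g. a four-fold symmetric rotor
    mask = coupling_mask(m_max, n_fold)
    print(f"non-zero couplings: {mask.mean():.1%} of the dense matrix")
    print(f"relative solve cost: {relative_solve_cost(m_max, n_fold):.3f}")
```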

Relevance:

100.00%

Publisher:

Abstract:

Investment in mining projects, like most business investment, is susceptible to risk and uncertainty. The ability to effectively identify, assess and manage risk may enable strategic investments to be sheltered and operations to perform closer to their potential. In mining, geological uncertainty is seen as the major contributor to not meeting project expectations. The need to assess and manage geological risk for project valuation and decision-making translates to the need to assess and manage risk in any pertinent parameter of open pit design and production scheduling. This is achieved by taking geological uncertainty into account in the mine optimisation process. This thesis develops methods that enable geological uncertainty to be effectively modelled and the resulting risk in long-term production scheduling to be quantified and managed. One of the main accomplishments of this thesis is the development of a new, risk-based method for the optimisation of long-term production scheduling. In addition to maximising economic returns, the new method minimises the risk of deviating from production forecasts, given the understanding of the orebody. This ability represents a major advance in the risk management of open pit mining.
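
As a toy illustration of the kind of risk-based scheduling objective described above (maximising expected economic return while penalising the risk of falling short of production forecasts across multiple simulated orebody models), the sketch below scores a candidate schedule against a set of orebody simulations. The penalty form, discount rate and data are invented for illustration and do not reproduce the thesis's formulation.

```python
import numpy as np

def schedule_score(period_values: np.ndarray, targets: np.ndarray,
                   discount: float = 0.10, risk_weight: float = 0.5) -> float:
    """Score one schedule against several simulated orebody models.

    period_values: array of shape (n_simulations, n_periods) with the value
                   realised in each period under each orebody simulation.
    targets:       planned production value per period (n_periods,).
    Returns expected discounted value minus a penalty on expected shortfall.
    """
    n_sims, n_periods = period_values.shape
    disc = (1.0 + discount) ** -np.arange(n_periods)
    expected_npv = float(np.mean(period_values @ disc))
    shortfall = np.clip(targets - period_values, 0.0, None)  # deviation below target
    risk = float(np.mean(shortfall @ disc))
    return expected_npv - risk_weight * risk

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sims = rng.normal(loc=100.0, scale=15.0, size=(50, 10))  # 50 orebody simulations, 10 periods
    plan = np.full(10, 95.0)
    print(f"risk-adjusted score: {schedule_score(sims, plan):.1f}")
```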

Relevance:

100.00%

Publisher:

Abstract:

Purpose - The purpose of this paper is to examine consumer emotions and the social science and observation measures that can be utilised to capture the emotional experiences of consumers. The paper does not set out to solve the theoretical debate surrounding emotion research; rather, it provides an assessment of the methodological options available to researchers to aid their investigation into both the structure and content of the consumer emotional experience, acknowledging both the conscious and subconscious elements of that experience. Design/methodology/approach - A wide range of prior research from the fields of marketing, consumer behaviour, psychology and neuroscience is reviewed to identify the different observation methods available to marketing researchers in the study of consumer emotion. This review also considers the self-report measures available to researchers and identifies the main theoretical debates concerning emotion, to provide a comprehensive overview of the issues surrounding the capture of emotional responses in a marketing context and to highlight the benefits that observation methods offer this area of research. Findings - This paper evaluates three observation methods and four widely used self-report measures of emotion used in a marketing context. Whilst it is recognised that marketers have shown a preference for self-report measures in prior research, mainly due to ease of implementation, it is posited that the benefits of observation methodology and the wealth of data that can be obtained using such methods can complement prior research. In addition, the use of observation methods can not only enhance our understanding of the consumer emotion experience but also enable us to collaborate with researchers from other fields in order to make progress in understanding emotion. Originality/value - This paper brings perspectives and methods together to provide an up-to-date consideration of emotion research for marketers. In order to generate valuable research in this area, there is an identified need for discussion and implementation of the observation techniques available to marketing researchers working in this field. An evaluation of a variety of methods is undertaken as a starting point for discussion and consideration of the different observation techniques and how they can be utilised.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents details on both theoretical and experimental aspects of UV-written fibre gratings. The main body of the thesis deals with the design, fabrication and testing of telecommunication optical fibre grating devices, but an accurate theoretical analysis of intra-core fibre gratings is also presented. For more than a decade, fibre gratings have been extensively used in the telecommunication field (as filters, dispersion compensators and add/drop multiplexers, for instance). Gratings for telecommunication should conform to very high fabrication standards, as the presence of any imperfection raises the noise level in the transmission system, compromising its ability to transmit an intelligible sequence of bits to the receiver. Strong side-lobe suppression and a high, sharp reflection profile are therefore necessary characteristics. A fundamental part of the theoretical and experimental work reported in this thesis concerns apodisation. The physical principle of apodisation is introduced, and a number of apodisation techniques, experimental results and the numerical optimisation of the shading functions and all the practical parameters involved in the fabrication are detailed. The measurement of chromatic dispersion in fibres and FBGs is detailed and an estimate of its accuracy is given. An overview of the possible methods for the fabrication of tunable fibre gratings is given before a new dispersion compensator device, based on the action of a distributed strain on a linearly chirped FBG, is detailed. It is shown that tuning of the second- and third-order dispersion of the grating can be obtained by the use of a specially designed multipoint bending rig. Experiments on the recompression of optical pulses travelling long distances are detailed for 10 Gb/s and 40 Gb/s. The characterisation of a new kind of double-section LPG fabricated on a metal-clad coated fibre is reported. The fabrication of the device is made easier by writing the grating directly through the metal coating. This device may be used to overcome the recoating problems associated with standard LPGs written in step-index fibre. It can also be used as a sensor for simultaneous measurement of temperature and the refractive index of the surrounding medium.
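
To make the role of apodisation concrete, the sketch below computes the reflection spectrum of a fibre Bragg grating with a piecewise-uniform transfer-matrix model, using a Gaussian shading of the 'ac' coupling coefficient and neglecting the 'dc' index change. The grating length, coupling strength and shading parameter are illustrative values; this is a textbook-style sketch rather than the design or optimisation code developed in the thesis.

```python
import numpy as np

def fbg_reflectivity(wavelengths, length=0.01, n_eff=1.45, lambda_bragg=1550e-9,
                     kappa_max=300.0, gaussian_a=4.0, n_sections=200):
    """Reflection spectrum of an apodised FBG via a piecewise-uniform transfer matrix.

    A Gaussian 'ac' coupling profile kappa(z) models the apodisation; the 'dc'
    index change is neglected, so the self-coupling term is just the detuning
    from the Bragg wavelength.
    """
    z = (np.arange(n_sections) + 0.5) / n_sections                     # normalised position
    kappa = kappa_max * np.exp(-gaussian_a * (z - 0.5) ** 2 / 0.25)    # coupling, 1/m
    dz = length / n_sections
    reflectivity = np.empty_like(wavelengths)
    for i, lam in enumerate(wavelengths):
        delta = 2.0 * np.pi * n_eff * (1.0 / lam - 1.0 / lambda_bragg)  # detuning, 1/m
        F = np.eye(2, dtype=complex)
        for k in kappa:
            gamma = np.sqrt(complex(k ** 2 - delta ** 2))
            ch, sh = np.cosh(gamma * dz), np.sinh(gamma * dz)
            section = np.array([[ch - 1j * (delta / gamma) * sh, -1j * (k / gamma) * sh],
                                [1j * (k / gamma) * sh, ch + 1j * (delta / gamma) * sh]])
            F = section @ F
        reflectivity[i] = abs(F[1, 0] / F[0, 0]) ** 2
    return reflectivity

if __name__ == "__main__":
    lams = np.linspace(1549e-9, 1551e-9, 201)
    R = fbg_reflectivity(lams)
    print(f"peak reflectivity: {R.max():.3f} at {lams[R.argmax()] * 1e9:.3f} nm")
```

Re-running with a constant kappa profile (no apodisation) makes the side lobes around the main reflection peak reappear, which is the effect the apodisation techniques discussed in the thesis are designed to suppress.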

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a theoretical investigation of the application of advanced modelling formats in high-speed fibre lightwave systems. The first part of this work focuses on numerical optimisation of dense wavelength division multiplexing (DWDM) system design. We employ advanced spectral domain filtering techniques and carrier pulse reshaping. We then apply these optimisation methods to investigate spectral and temporal domain characteristics of advanced modulation formats in fibre optic telecommunication systems. Next we investigate numerical methods used in detecting and measuring the system performance of advanced modulation formats. We then numerically study the combination of return-to-zero differential phase-shift keying (RZ-DPSK) with advanced photonic devices. Finally we analyse the dispersion management of Nx40 Gbit/s RZ-DPSK transmission applied to a commercial terrestrial lightwave system.
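
As an illustration of the RZ-DPSK format mentioned above, the sketch below differentially encodes a bit pattern onto the optical phase, carves return-to-zero pulses, and recovers the data with a delay-and-multiply decision. The pulse shape, duty cycle and sampling are illustrative choices, not the transmission models used in the thesis.

```python
import numpy as np

def rz_dpsk_field(bits, samples_per_bit=32, duty_cycle=0.5):
    """Baseband optical field for an RZ-DPSK sequence.

    Data are differentially encoded onto the phase (a '1' toggles the phase by
    pi); a simple sin^2 RZ carver shapes each bit slot.
    """
    bits = np.asarray(bits, dtype=int)
    phase = (np.pi * np.cumsum(bits)) % (2 * np.pi)            # phi_k = phi_{k-1} + pi*b_k
    t = (np.arange(samples_per_bit) + 0.5) / samples_per_bit   # normalised time in a slot
    carver = np.sin(np.pi * np.clip((t - 0.5) / duty_cycle + 0.5, 0.0, 1.0)) ** 2
    return np.concatenate([np.sqrt(carver) * np.exp(1j * p) for p in phase])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.integers(0, 2, size=16)
    sps = 32
    e = rz_dpsk_field(data, samples_per_bit=sps)
    # delay-and-multiply detection recovers the data from the phase differences
    centres = e[sps // 2::sps]
    decided = (np.real(centres[1:] * np.conj(centres[:-1])) < 0).astype(int)
    print("sent:     ", data[1:].tolist())
    print("recovered:", decided.tolist())
```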

Relevance:

100.00%

Publisher:

Abstract:

In this thesis various mathematical methods of studying the transient and dynamic stability of practical power systems are presented. Certain long-established methods are reviewed and refinements of some are proposed. New methods are presented which remove some of the difficulties encountered in applying the powerful stability theories based on the concepts of Liapunov. Chapter 1 is concerned with numerical solution of the transient stability problem. Following a review and comparison of synchronous machine models, the superiority of a particular model from the point of view of combined computing time and accuracy is demonstrated. A digital computer program incorporating all the synchronous machine models discussed, and an induction machine model, is described, and results of a practical multi-machine transient stability study are presented. Chapter 2 reviews certain concepts and theorems due to Liapunov. In Chapter 3 transient stability regions of single-, two- and multi-machine systems are investigated through the use of energy-type Liapunov functions. The treatment removes several mathematical difficulties encountered in earlier applications of the method. In Chapter 4 a simple criterion for the steady-state stability of a multi-machine system is developed and compared with established criteria and a state space approach. In Chapters 5, 6 and 7 dynamic stability and small-signal dynamic response are studied through a state space representation of the system. In Chapter 5 the state space equations are derived for single-machine systems. An example is provided in which the dynamic stability limit curves are plotted for various synchronous machine representations. In Chapter 6 the state space approach is extended to multi-machine systems. To draw conclusions concerning dynamic stability or dynamic response the system eigenvalues must be properly interpreted, and a discussion concerning correct interpretation is included. Chapter 7 presents a discussion of the optimisation of power system small-signal performance through the use of Liapunov functions.
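
As a minimal companion to the transient stability material described above, the sketch below integrates the classical swing equation for a single machine connected to an infinite bus through a fault-then-clearance sequence and applies a crude first-swing stability check. All machine and network parameters are illustrative, and the model is the textbook classical one rather than the detailed synchronous machine models compared in the thesis.

```python
import numpy as np

def swing_response(p_m=0.8, p_max_pre=1.8, p_max_fault=0.4, p_max_post=1.6,
                   t_clear=0.15, h=4.0, damping=0.02, f0=50.0,
                   t_end=3.0, dt=1e-3):
    """Integrate the classical swing equation for a single machine-infinite bus.

    (2H / omega_s) * d2(delta)/dt2 = P_m - P_max(t)*sin(delta) - D*d(delta)/dt,
    with P_max switching from the pre-fault to the faulted and then to the
    post-fault value at t = 0 and t = t_clear respectively (per-unit powers).
    """
    omega_s = 2.0 * np.pi * f0
    m = 2.0 * h / omega_s
    delta = np.arcsin(p_m / p_max_pre)   # pre-fault operating angle (rad)
    omega = 0.0                          # rotor speed deviation (rad/s)
    t, trace = 0.0, []
    while t < t_end:
        p_max = p_max_fault if t < t_clear else p_max_post
        accel = (p_m - p_max * np.sin(delta) - damping * omega) / m
        omega += accel * dt
        delta += omega * dt
        trace.append(delta)
        t += dt
    return np.array(trace)

if __name__ == "__main__":
    angles = swing_response(t_clear=0.15)
    stable = np.max(np.abs(angles)) < np.pi   # crude first-swing criterion
    print(f"max rotor angle: {np.degrees(np.max(angles)):.1f} deg, "
          f"first-swing stable: {stable}")
```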

Relevance:

100.00%

Publisher:

Abstract:

This thesis was concerned with investigating methods of improving the IOP pulse's potential as a measure of clinical utility. There were three principal sections to the work. 1. Optimisation of measurement and analysis of the IOP pulse. A literature review, covering the years 1960-2002 and other relevant scientific publications, provided a knowledge base on the IOP pulse. Initial studies investigated suitable instrumentation and measurement techniques. Fourier transformation was identified as a promising method of analysing the IOP pulse and this technique was developed. 2. Investigation of ocular and systemic variables that affect IOP pulse measurements. In order to recognise clinically important changes in IOP pulse measurement, studies were performed to identify influencing factors. Fourier analysis was tested against traditional parameters in order to assess its ability to detect differences in the IOP pulse. In addition, it had been speculated that the waveform components of the IOP pulse contain vascular characteristics analogous to those found in arterial pulse waves. Validation studies to test this hypothesis were attempted. 3. The nature of the intraocular pressure pulse in health and disease and its relation to systemic cardiovascular variables. Fourier analysis and traditional parameters were applied to IOP pulse measurements taken on diseased and healthy eyes. Only the derived parameter, pulsatile ocular blood flow (POBF), detected differences in the diseased groups. The use of an ocular pressure-volume relationship may have improved the POBF measure's variance in comparison to the measurement of the pulse's amplitude or Fourier components. Finally, the importance of the driving force of pulsatile blood flow, the arterial pressure pulse, is highlighted. A method of combining the measurements of pulsatile blood flow and pulsatile blood pressure to create a measure of ocular vascular impedance is described, along with its advantages for future studies.
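
To illustrate the kind of Fourier analysis referred to above, the sketch below extracts the pulse rate and the amplitudes of the first few harmonics from a sampled pulsatile waveform. The synthetic IOP trace, sampling rate and harmonic content are invented for the example and are not data from the thesis.

```python
import numpy as np

def pulse_harmonics(signal, sample_rate, n_harmonics=3):
    """Fundamental frequency and harmonic amplitudes of a pulsatile waveform.

    The (detrended) signal is Fourier transformed, the largest spectral peak is
    taken as the pulse rate, and the amplitudes of its first few harmonics are
    returned.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                             # remove the steady (mean) pressure level
    spectrum = np.fft.rfft(x) / (len(x) / 2.0)   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    k0 = np.argmax(np.abs(spectrum[1:])) + 1     # index of the fundamental
    amps = [abs(spectrum[min(k0 * n, len(spectrum) - 1)]) for n in range(1, n_harmonics + 1)]
    return freqs[k0], amps

if __name__ == "__main__":
    fs = 200.0
    t = np.arange(0, 10, 1 / fs)
    # synthetic ocular pulse: 72 beats/min fundamental plus two smaller harmonics
    iop = (15 + 0.8 * np.sin(2 * np.pi * 1.2 * t)
           + 0.3 * np.sin(2 * np.pi * 2.4 * t)
           + 0.1 * np.sin(2 * np.pi * 3.6 * t))
    f0, amps = pulse_harmonics(iop, fs)
    print(f"pulse rate: {f0 * 60:.0f} beats/min, harmonic amplitudes (mmHg): "
          + ", ".join(f"{a:.2f}" for a in amps))
```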

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To assess the validity and repeatability of objective compared to subjective contact lens fit analysis. Methods: Thirty-five subjects (aged 22.0 ± 3.0 years) wore two different soft contact lens designs. Four lens fit variables: centration, horizontal lag, post-blink movement in up-gaze and push-up recovery speed were assessed subjectively (four observers) and objectively from slit-lamp biomicroscopy captured images and video. The analysis was repeated a week later. Results: The average of the four experienced observers was compared to objective measures, but centration, movement on blink, lag and push-up recovery speed all varied significantly between them (p < 0.001). Horizontal lens centration was on average close to central as assessed both objectively and subjectively (p > 0.05). The 95% confidence interval of subjective repeatability was better than objective assessment (±0.128 mm versus ±0.168 mm, p = 0.417), but utilised only 78% of the objective range. Vertical centration assessed objectively showed a slight inferior decentration (0.371 ± 0.381 mm) with good inter- and intrasession repeatability (p > 0.05). Movement-on-blink was estimated to be lower subjectively than measured objectively (0.269 ± 0.179 mm versus 0.352 ± 0.355 mm; p = 0.035), but had better repeatability (±0.124 mm versus ±0.314 mm 95% confidence interval) unless correcting for the smaller range (47%). Horizontal lag was estimated to be lower subjectively (0.562 ± 0.259 mm) than measured objectively (0.708 ± 0.374 mm, p < 0.001), had poorer repeatability (±0.132 mm versus ±0.089 mm 95% confidence interval) and had a smaller range (63%). Subjective categorisation of push-up speed of recovery showed reasonable differentiation relative to objective measurement (p < 0.001). Conclusions: The objective image analysis allows an accurate, reliable and repeatable assessment of soft contact lens fit characteristics, being a useful tool for research and optimisation of lens fit in clinical practice.
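
For reference, the sketch below shows one common way a 95% limit of repeatability between two sessions can be computed (1.96 times the standard deviation of the within-subject differences, a Bland-Altman-style coefficient of repeatability). The data are simulated for illustration and the calculation is not necessarily the exact statistic used in the study.

```python
import numpy as np

def coefficient_of_repeatability(session1, session2):
    """95% limit of repeatability between two repeated measurements.

    Computed Bland-Altman style as 1.96 times the standard deviation of the
    within-subject differences between the two sessions.
    """
    d = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    return 1.96 * np.std(d, ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_decentration = rng.normal(0.37, 0.3, size=35)     # mm, 35 hypothetical subjects
    visit1 = true_decentration + rng.normal(0, 0.06, 35)   # measurement noise, session 1
    visit2 = true_decentration + rng.normal(0, 0.06, 35)   # measurement noise, session 2
    print(f"repeatability: ±{coefficient_of_repeatability(visit1, visit2):.3f} mm")
```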

Relevance:

100.00%

Publisher:

Abstract:

This research is focused on the optimisation of resource utilisation in wireless mobile networks with consideration of the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment have been used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation property is content-independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach addresses both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and the different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism for the user's data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum.
The associated offloading mechanism can properly control the number of users within the coverage areas of the macro-cell base station and each of the WiFi access points involved. The performance of non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
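
As a toy illustration of a QoE-aware scheduling policy of the general kind discussed above, the sketch below computes a per-slot priority as a proportional-fair rate ratio weighted by each user's QoE deficit. The weighting form, parameters and data are invented for illustration and are not the thesis's algorithm or its Pause Intensity formulation.

```python
import numpy as np

def schedule_slot(instant_rate, avg_rate, qoe, qoe_target=0.8, alpha=1.0):
    """Pick the user to serve in one scheduling slot.

    Priority is the proportional-fair ratio (instantaneous over average rate)
    multiplied by a weight that grows as the user's QoE falls below a target.
    """
    deficit = np.clip(qoe_target - np.asarray(qoe), 0.0, None)
    priority = (np.asarray(instant_rate) / np.asarray(avg_rate)) * (1.0 + alpha * deficit)
    return int(np.argmax(priority))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    avg = np.ones(4)                       # running average throughput per user (Mbit/s)
    qoe = np.array([0.9, 0.7, 0.5, 0.85])  # current per-user QoE score in [0, 1]
    for slot in range(5):
        rates = rng.uniform(0.5, 2.0, size=4)           # achievable rates this slot
        u = schedule_slot(rates, avg, qoe)
        avg = 0.9 * avg + 0.1 * np.where(np.arange(4) == u, rates, 0.0)
        print(f"slot {slot}: serve user {u}, rates={np.round(rates, 2)}")
```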

Relevance:

100.00%

Publisher:

Abstract:

Current state-of-the-art techniques for landmine detection in ground penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. This research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR-specific augmentation is used to mitigate issues caused by insufficient training sets. This work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters. These learned convolutional features outperform hand-designed features in GPR detection tasks. This work presents a number of methods, both borrowed from and motivated by the substantial work in visual image processing. The methods developed and presented in this work show an improvement in overall detection performance and introduce a method to improve the robustness of statistical classification.
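
The hyperbolic shape mentioned above arises from the two-way travel time between a moving antenna and a buried point target. The sketch below generates that travel-time curve and a jittered variant as a simple stand-in for physics-consistent data augmentation; the propagation velocity, geometry and jitter scheme are illustrative assumptions and do not reproduce the paper's augmentation method.

```python
import numpy as np

def hyperbolic_response(x_positions, target_x, target_depth, velocity=0.1):
    """Two-way travel time (ns) of a point target seen by a moving GPR antenna.

    The travel-time curve t(x) = 2*sqrt(depth^2 + (x - x0)^2) / v is the
    hyperbola that subsurface point scatterers (such as buried landmines)
    trace out in a 2-D GPR slice (a B-scan). Velocity is in m/ns.
    """
    x = np.asarray(x_positions, dtype=float)
    return 2.0 * np.sqrt(target_depth ** 2 + (x - target_x) ** 2) / velocity

def augment(target_x, target_depth, rng, dx=0.05, dd=0.03):
    """Jitter the target position and depth to create a physics-consistent variant."""
    return target_x + rng.normal(0, dx), max(0.01, target_depth + rng.normal(0, dd))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    scan_x = np.linspace(-0.5, 0.5, 41)            # antenna positions along the scan (m)
    base = hyperbolic_response(scan_x, 0.0, 0.15)  # target at 15 cm depth
    x_a, d_a = augment(0.0, 0.15, rng)
    aug = hyperbolic_response(scan_x, x_a, d_a)
    print(f"apex delay: {base.min():.2f} ns -> augmented {aug.min():.2f} ns")
```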

Relevance:

100.00%

Publisher:

Abstract:

Upgrading hydrogen to a valuable fuel is a central topic in modern research due to its high availability and low price. Because of the difficulties of hydrogen storage, different pathways are still under investigation. A promising route is liquid-phase chemical hydrogen storage materials, because they can lead to greener transformation processes with the on-line generation of hydrogen for fuel cells. The aim of my work was the optimisation of catalysts, prepared by the sol-immobilisation method (a typical colloidal method), for the decomposition of formic acid. Formic acid was selected because it is a versatile, renewable reagent for green synthesis studies. The first aim of my research was the synthesis and optimisation of Pd nanoparticles by sol-immobilisation to achieve better catalytic performance and to investigate the effects of particle size, oxidation state, the role of the stabiliser and the nature of the support. Palladium was chosen because it is a well-known active metal for the catalytic decomposition of formic acid. Noble metal nanoparticles of palladium were immobilised on carbon (charcoal) and on titania. In the second part, the catalytic performance of the "homemade" Pd/C catalyst was compared with that of a commercial Pd/C, and the effect of different monometallic and bimetallic systems (AuxPdy) on the catalytic decomposition of formic acid was investigated. The training period for the production of this work was carried out at the University of Cardiff (Group of Dr. N. Dimitratos).

Relevance:

100.00%

Publisher:

Abstract:

Habitat fragmentation and the consequent loss of connectivity between populations can reduce individual interchange and gene flow, increasing the chances of inbreeding and the risk of local extinction. Landscape genetics is providing more and better tools to identify genetic barriers. To our knowledge, no comparison of methods in terms of consistency has been made with observed data and species with low dispersal ability. The aim of this study is to examine the consistency of the results of five methods to detect barriers to gene flow in a Mediterranean pine vole (Microtus duodecimcostatus) population: F-statistics estimation, non-Bayesian clustering, Bayesian clustering, boundary detection and simple/partial Mantel tests. All methods were consistent in finding that the stream is not a genetic barrier. However, no consistency among the methods was found regarding the role of the highway as a genetic barrier. Fst, the Bayesian clustering assignment test and the partial Mantel test identified the highway as a filter to individual interchange. The Mantel tests were the most sensitive method. The boundary detection method (Monmonier's algorithm) and the non-Bayesian approaches did not detect any genetic differentiation of the pine vole due to the highway. Based on our findings, we recommend that genetic barrier detection in populations with low dispersal ability be analysed with multiple methods, such as Mantel tests and Bayesian clustering approaches, because they show more sensitivity in those scenarios, and with boundary detection methods when the aim is to detect drastic changes in a variable of interest between neighbouring individuals. Although simulation studies highlight the weaknesses and strengths of each method and the factors that drive particular results, tests with real data are needed to increase the effectiveness of genetic barrier detection.
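
For reference, a simple Mantel test of the kind listed above can be implemented as a permutation test on the correlation between two distance matrices. The sketch below is a generic implementation with made-up genetic and geographic distances; it is not the software or data used in the study, and a partial Mantel test would additionally control for a third matrix.

```python
import numpy as np

def mantel_test(dist_a, dist_b, n_perm=999, seed=0):
    """Simple Mantel test between two distance matrices.

    The correlation between the upper-triangle entries of the two matrices is
    compared with correlations obtained after randomly permuting the rows and
    columns of one matrix, giving a one-sided permutation p-value.
    """
    a, b = np.asarray(dist_a, float), np.asarray(dist_b, float)
    iu = np.triu_indices(a.shape[0], k=1)
    r_obs = np.corrcoef(a[iu], b[iu])[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(a.shape[0])
        r_perm = np.corrcoef(a[np.ix_(p, p)][iu], b[iu])[0, 1]
        if r_perm >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    xy = rng.uniform(0, 10, size=(20, 2))                        # sampling locations
    geo = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    gen = geo + rng.normal(0, 1.0, size=geo.shape)               # genetic distance ~ geography + noise
    gen = (gen + gen.T) / 2.0
    np.fill_diagonal(gen, 0.0)
    r, p = mantel_test(gen, geo)
    print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```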

Relevance:

100.00%

Publisher:

Abstract:

Background: The use of artificial endoprostheses has become a routine procedure for knee and hip joints, while ankle arthritis has traditionally been treated by means of arthrodesis. Due to its advantages, the implantation of endoprostheses is constantly increasing. While finite element analyses (FEA) of strain-adaptive bone remodelling have been carried out for the hip joint in previous studies, to our knowledge there are no investigations that have considered remodelling processes of the ankle joint. In order to evaluate and optimise new-generation implants of the ankle joint, as well as to gain additional knowledge regarding the biomechanics, strain-adaptive bone remodelling has been calculated separately for the tibia and the talus after providing them with an implant. Methods: FE models of the bone-implant assembly for both the tibia and the talus have been developed. Bone characteristics such as the density distribution have been assigned on the basis of CT scans. A force of 5,200 N, which corresponds to the compression force during normal walking of a person with a weight of 100 kg according to Stauffer et al., has been used in the simulation. The bone adaptation law previously developed by our research team has been used for the calculation of the remodelling processes. Results: A total bone mass loss of 2% in the tibia and 13% in the talus was calculated. The greater decline in density in the talus is due to its small size relative to the comparatively large implant, which causes remodelling processes throughout the bone tissue. In the tibia, bone remodelling processes are only calculated in areas adjacent to the implant; thus, a smaller bone mass loss than in the talus can be expected. The simulation results in the distal tibia show a high level of agreement with the literature. Conclusions: In this study, strain-adaptive bone remodelling processes are simulated using the FE method. The results contribute to a better understanding of the biomechanical behaviour of the ankle joint and hence are useful for the optimisation of implant geometry in the future.
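
The abstract refers to a bone adaptation law developed by the authors' team, which is not reproduced here. As a generic illustration of strain-adaptive remodelling, the sketch below iterates the classic strain-energy-density rule with a "lazy zone": element density changes only when the mechanical stimulus deviates sufficiently from a homeostatic reference. The parameters, stimulus values and the fixed (non-updated) stimulus are simplifying assumptions for the example.

```python
import numpy as np

def remodel(density, stimulus, reference, rate=1.0, lazy=0.35, dt=1.0,
            rho_min=0.05, rho_max=1.8):
    """One time step of a generic strain-adaptive bone remodelling rule.

    density:   apparent bone density per element (g/cm^3)
    stimulus:  strain energy density per unit mass, U / rho
    reference: homeostatic stimulus; inside the 'lazy zone' (± lazy * reference)
               no adaptation occurs, outside it density grows or resorbs.
    """
    density = np.asarray(density, float).copy()
    s, ref = np.asarray(stimulus, float), float(reference)
    upper, lower = ref * (1 + lazy), ref * (1 - lazy)
    density += rate * dt * np.where(
        s > upper, s - upper,
        np.where(s < lower, s - lower, 0.0))
    return np.clip(density, rho_min, rho_max)

if __name__ == "__main__":
    rho = np.full(5, 1.0)
    # elements shielded by a stiff implant see a reduced stimulus and resorb;
    # in a full FE simulation the stimulus would be recomputed from the FE
    # solution after each density update, here it is held fixed for simplicity
    stimulus = np.array([0.002, 0.004, 0.008, 0.012, 0.020])
    for step in range(50):
        rho = remodel(rho, stimulus, reference=0.010)
    print("final densities:", np.round(rho, 2))
```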