26 results for dynamic methods
Abstract:
This case describes a qualitative social science research project that was conducted in 2009 and that examined the experiences of recent migrants to Northern Ireland. While background to the research and key findings are presented, the topic forms a backdrop to the case. The following aspects of the study are presented: the theoretical context; formulating the research question, design and methodology; key methodological issues; data collection and analysis; project dissemination; and research funding and reporting. The case pays particular attention to the needs and impact of different groups, including the researcher, the funding body, the researcher’s employer and the researched. The significance of access, language and ethics to this study is examined. Finally, the way in which the research unfolded in an often unpredictable way throughout the implementation process is highlighted in the narrative.
Abstract:
Traditional internal combustion engine vehicles are a major contributor to global greenhouse gas emissions and other air pollutants, such as particulate matter and nitrogen oxides. If tailpipe point emissions could be managed centrally without reducing commercial and personal user functionality, then one of the most attractive solutions for achieving a significant reduction of emissions in the transport sector would be the mass deployment of electric vehicles. Although electric vehicle sales are still hindered by battery performance, cost and a few other technological bottlenecks, focused commercialisation and supportive government policies are encouraging large-scale electric vehicle adoption. The mass proliferation of plug-in electric vehicles is likely to bring a significant additional electric load onto the grid, creating a highly complex operational problem for power system operators. Electric vehicle batteries can also act as energy storage points on the distribution system. This combined charging and storage impact of many uncontrollable small-kW loads, driven by consumers who will want maximum flexibility, on a distribution system that was not originally designed for such operation has the potential to be detrimental to grid balancing. Intelligent scheduling methods, if established correctly, could integrate electric vehicles onto the grid smoothly: they can help to avoid cycling of large combustion plants, reduce reliance on expensive fossil-fuel peaking plant, match renewable generation to electric vehicle charging, and avoid overloading the distribution system in a way that degrades power quality. In this paper, scheduling methods for integrating plug-in electric vehicles are reviewed, examined and categorised based on their computational techniques. In addition to existing approaches covering analytical scheduling, conventional optimisation methods (e.g. linear and non-linear mixed-integer programming and dynamic programming) and game theory, meta-heuristic algorithms including the genetic algorithm and particle swarm optimisation are comprehensively surveyed, offering a systematic reference for grid scheduling with intelligent electric vehicle integration.
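As a minimal illustration of the conventional optimisation methods surveyed in this abstract, the sketch below formulates overnight charging of two electric vehicles as a linear program that fills the overnight demand valley while respecting a shared feeder limit. The vehicle energy requirements, charger limit, feeder headroom and baseline load are invented for the example and are not taken from the paper.

```python
# Illustrative linear-programming charging schedule (toy data, not from the paper).
# Decision variables: charging power p[v, t] for each vehicle v and hourly slot t.
import numpy as np
from scipy.optimize import linprog

T = 8                                 # overnight hourly slots
vehicles = {"EV1": 16.0, "EV2": 10.0} # energy required per vehicle (kWh), assumed
p_max = 7.0                           # per-vehicle charger limit (kW), assumed
feeder_limit = 20.0                   # shared feeder capacity (kW), assumed
base_load = np.array([6, 5, 4, 3, 3, 4, 6, 8], dtype=float)  # assumed baseline (kW)

n_v = len(vehicles)
n = n_v * T                           # one variable per (vehicle, slot)

# Objective: follow the valley -- penalise charging in hours where baseline load is high.
c = np.tile(base_load, n_v)

# Equality constraints: each vehicle must receive its required energy (1 h slots).
A_eq = np.zeros((n_v, n))
for i, need in enumerate(vehicles.values()):
    A_eq[i, i * T:(i + 1) * T] = 1.0
b_eq = np.array(list(vehicles.values()))

# Inequality constraints: total charging in each slot stays within feeder headroom.
A_ub = np.zeros((T, n))
for t in range(T):
    A_ub[t, t::T] = 1.0
b_ub = feeder_limit - base_load

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, p_max)] * n, method="highs")
schedule = res.x.reshape(n_v, T)
print(np.round(schedule, 2))          # kW per vehicle per slot
```

The same structure extends to mixed-integer or dynamic-programming formulations by adding on/off charging decisions or stage-wise state-of-charge dynamics.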
Abstract:
The introduction of the Tesla in 2008 demonstrated to the public the potential of electric vehicles for reducing fuel consumption and greenhouse gas emissions from the transport sector. It brought electric vehicles back into the spotlight worldwide at a moment when fossil fuel prices were reaching unexpected highs due to increased demand and strong economic growth. The energy storage capability of fleets of electric vehicles, as well as their potentially random discharging and charging, poses challenges to the grid in terms of operation and control. Optimal scheduling strategies are key to integrating large numbers of electric vehicles into the smart grid. In this paper, state-of-the-art optimization methods are reviewed for scheduling the grid integration of electric vehicles. The paper starts with a concise introduction to analytical charging strategies, followed by a review of a number of classical numerical optimization methods, including linear programming, non-linear programming and dynamic programming, as well as other means such as queuing theory. Meta-heuristic techniques are then discussed to deal with the complex, high-dimensional and multi-objective scheduling problem associated with stochastic charging and discharging of electric vehicles. Finally, future research directions are suggested.
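As a small, self-contained illustration of the meta-heuristic techniques this review discusses, the sketch below uses particle swarm optimisation to spread a single vehicle's charging over time-varying electricity prices, with the energy requirement enforced through a quadratic penalty. The price profile, battery parameters and PSO settings are assumptions made for the example, not values from the paper.

```python
# Illustrative particle swarm optimisation of a single EV charging profile (toy data).
import numpy as np

rng = np.random.default_rng(0)
T = 12                                        # half-hourly slots, assumed
price = 0.10 + 0.08 * np.sin(np.linspace(0, np.pi, T))  # assumed price profile (GBP/kWh)
energy_needed = 14.0                          # kWh to deliver, assumed
p_max = 7.0                                   # charger limit (kW), 0.5 h slots

def cost(p):
    """Energy cost plus penalty for missing the required energy."""
    energy = 0.5 * p.sum()                    # kWh delivered in 0.5 h slots
    return 0.5 * float(price @ p) + 100.0 * (energy - energy_needed) ** 2

n_particles, n_iter = 30, 200
x = rng.uniform(0.0, p_max, size=(n_particles, T))   # candidate charging profiles
v = np.zeros_like(x)
p_best = x.copy()
p_best_cost = np.array([cost(xi) for xi in x])
g_best = p_best[p_best_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                     # standard PSO coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = np.clip(x + v, 0.0, p_max)            # respect charger limits
    costs = np.array([cost(xi) for xi in x])
    improved = costs < p_best_cost
    p_best[improved], p_best_cost[improved] = x[improved], costs[improved]
    g_best = p_best[p_best_cost.argmin()].copy()

print(np.round(g_best, 2), round(cost(g_best), 3))   # best profile (kW) and its cost
```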
Abstract:
Complex collaboration in rapidly changing business environments creates challenges for management capability in Utility Horizontal Supply Chains (UHSCs), including the deployment and evolution of performance measures. The aim of the study is twofold. First, to explore how management capability can be developed and used to deploy and evolve Performance Measurement (PM), both across a UHSC and within its constituent organisations, drawing upon a theoretical nexus of Dynamic Capability (DC) theory and complementary Goal Theory. Second, to contribute to knowledge by empirically building theory using these constructs to show the management motivations and behaviours within PM-based DCs. The methodology uses an interpretive, theory-building, multiple-case-based approach (n=3) within a UHSC. The data collection methods include interviews (n=54), focus groups (n=10), document analysis and participant observation (reflective learning logs) over a five-year period, giving longitudinal data. The empirical findings lead to the development of a conceptual framework showing that management capabilities in driving PM deployment and evolution can be represented as multilevel renewal and incremental Dynamic Capabilities, which can be further understood in terms of motivation and behaviour through Goal-Theoretic constructs. In addition, three interrelated cross-cutting themes of management capabilities were identified: consensus building, goal setting and resource change. These management capabilities require carefully planned development and nurturing within the UHSC.
Abstract:
Dynamic economic load dispatch (DELD) is one of the most important steps in power system operation. Various optimisation algorithms for solving the problem have been developed; however, due to the non-convex characteristics and large dimensionality of the problem, it is necessary to explore new methods to further improve the dispatch results and minimise the costs. This article proposes a hybrid differential evolution (DE) algorithm, namely clonal selection-based differential evolution (CSDE), to solve the problem. CSDE is an artificial intelligence technique that can be applied to complex optimisation problems which are, for example, nonlinear, large-scale, non-convex and discontinuous. This hybrid algorithm incorporates the clonal selection algorithm (CSA) as a local search technique to update the best individual in the population, which enhances the diversity of the solutions and prevents premature convergence in DE. Furthermore, we investigate four mutation operations that are used in CSA as hyper-mutation operations. Finally, an efficient solution repair method is designed for DELD to satisfy the complicated equality and inequality constraints of the power system and guarantee the feasibility of the solutions. Two benchmark power systems are used to evaluate the performance of the proposed method. The experimental results show that the proposed CSDE/best/1 approach significantly outperforms nine other variants of CSDE and DE, as well as most other published methods, in terms of the quality of the solution and the convergence characteristics.
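To make the DE/best/1 strategy referenced in this abstract concrete, the sketch below applies a plain DE/best/1/bin loop to a toy three-unit dispatch cost with the power-balance constraint handled by a penalty term. The unit cost coefficients, demand and DE control parameters are invented for illustration; the paper's clonal-selection hybridisation and constraint-repair method are not reproduced here.

```python
# Illustrative DE/best/1/bin applied to a toy 3-unit economic dispatch problem.
import numpy as np

rng = np.random.default_rng(1)
# Assumed quadratic cost coefficients a*P^2 + b*P + c for three units.
a = np.array([0.008, 0.010, 0.012])
b = np.array([7.0, 8.0, 9.0])
c = np.array([200.0, 180.0, 140.0])
p_min = np.array([50.0, 40.0, 30.0])
p_max = np.array([300.0, 250.0, 200.0])
demand = 450.0                                # assumed load (MW)

def cost(P):
    """Generation cost plus penalty for power-balance violation."""
    return float(a @ P**2 + b @ P + c.sum()) + 1e4 * (P.sum() - demand) ** 2

NP, F, CR, n_gen = 30, 0.6, 0.9, 300          # DE control parameters (assumed)
dim = 3
pop = rng.uniform(p_min, p_max, size=(NP, dim))
fit = np.array([cost(x) for x in pop])

for _ in range(n_gen):
    best = pop[fit.argmin()]
    for i in range(NP):
        r1, r2 = rng.choice([j for j in range(NP) if j != i], 2, replace=False)
        mutant = best + F * (pop[r1] - pop[r2])          # DE/best/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                  # ensure at least one gene crosses
        trial = np.clip(np.where(cross, mutant, pop[i]), p_min, p_max)
        f_trial = cost(trial)
        if f_trial < fit[i]:                             # greedy selection
            pop[i], fit[i] = trial, f_trial

best = pop[fit.argmin()]
print(np.round(best, 2), round(fit.min(), 2))            # dispatch (MW) and cost
```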
Abstract:
Modern cancer research on prognostic and predictive biomarkers demands the integration of established and emerging high-throughput technologies. However, these data are meaningless unless carefully integrated with patient clinical outcome and epidemiological information. Integrated datasets hold the key to discovering new biomarkers and therapeutic targets in cancer. We have developed a novel approach and set of methods, PICan, for integrating and interrogating phenomic, genomic and clinical data sets to facilitate cancer biomarker discovery and patient stratification. Applied to a known paradigm, the biological and clinical relevance of TP53, PICan was able to recapitulate the known biomarker status and prognostic significance at the DNA, RNA and protein levels.
Abstract:
Extrusion is one of the major methods for processing polymeric materials, and the thermal homogeneity of the process output is a major concern for the manufacture of high-quality extruded products. Therefore, accurate process thermal monitoring and control are important for product quality control. However, most industrial extruders use single-point thermocouples for temperature monitoring/control, although their measurements are highly affected by the barrel metal wall temperature. Currently, no industrially established thermal profile measurement technique is available. Furthermore, it has been shown that the melt temperature changes considerably with the die radial position, and hence point/bulk measurements are not sufficient for monitoring and control of the temperature across the melt flow. The majority of process thermal control methods are based on linear models which are not capable of dealing with process nonlinearities. In this work, the die melt temperature profile of a single screw extruder was monitored by a thermocouple mesh technique. The data obtained were used to develop a novel approach to modelling the extruder die melt temperature profile under dynamic conditions (i.e. for predicting the die melt temperature profile in real time). The newly proposed models were in good agreement with unseen measured data. They were then used to explore the effects of process settings, material and screw geometry on the die melt temperature profile. The results showed that the process thermal homogeneity was affected in a complex manner by changes to the process settings, screw geometry and material.
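As a loose illustration of dynamic (real-time) prediction of a melt-temperature-like signal from a process input, the sketch below fits a simple ARX model by least squares to synthetic data. It is not the modelling approach developed in the paper (which the abstract does not specify); the signal, input and model orders are all assumed.

```python
# Illustrative ARX model fit for a temperature-like signal (synthetic data only).
import numpy as np

rng = np.random.default_rng(2)
N = 500
u = rng.uniform(40.0, 60.0, N)             # assumed input, e.g. screw speed (rpm)
y = np.zeros(N)                            # synthetic "melt temperature" signal
for k in range(2, N):
    y[k] = 0.7 * y[k - 1] + 0.2 * y[k - 2] + 0.5 * u[k - 1] + rng.normal(0, 0.2)

# Build the ARX regressor: y[k] ~ y[k-1], y[k-2], u[k-1]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
target = y[2:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # least-squares parameters

# One-step-ahead prediction with the identified model
y_hat = Phi @ theta
rmse = np.sqrt(np.mean((y_hat - target) ** 2))
print(np.round(theta, 3), round(rmse, 3))
```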
Abstract:
Purpose: It has previously been reported that molecular mobility determines the rate of molecular approach to crystal surfaces, while entropy relates to the probability of that approaching molecule having the desirable configuration for further growth of the existing crystal, and the free energy dictates the probability of that molecule not returning to the liquid phase [1]. If the crystal growth rate and viscosity of a supercooled liquid are plotted in a log-log format, the relationship between the two is linear, indicating the influence viscosity has upon crystal growth rate. However, this approximation has been derived from pure drug compounds, and further understanding of crystallisation from drug-polymer solid dispersions is required in order to stabilise drugs embedded within amorphous polymeric solid dispersions.
Methods: Mixtures of felodipine (FD) and polymer (HPMCAS-HF, PVPK15 and Soluplus®) at specified compositions were prepared using a Retsch MM200 ball mill. To examine crystal growth within amorphous solid dispersions, samples were prepared by melting 5-10 mg of ball-milled mixture at 150°C for 3-5 minutes on a glass slip pre-cleaned with methanol and acetone. All prepared samples were confirmed to be crystal-free by visual observation using a polarised light microscope (Olympus BX50). Prepared samples were stored at 0% RH (P2O5), inside desiccators, maintained in ovens at 80°C. For the dynamic viscosity measurement, approximately 100-200 mg of ball-milled mixture was heated on the base plate of a rotational rheometer at 150°C for 5 minutes and the top plate was lowered to a defined gap to form good contact with the material. The sandwiched amorphous material was then brought to 80°C and the viscosity was measured.
Results: The power-law relation between crystal growth rate and viscosity (linear in log-log coordinates) was used to probe the correlation of viscosity with crystal growth rate. In comparison to the value of the exponent ξ in the log-log equation reported for pure drug compounds, a value of 1.63 was obtained for FD-polymer solid dispersions irrespective of the polymer involved.
Conclusion: The high ξ value suggests that a stronger viscosity dependence may exist for amorphous FD once incorporated with amorphous polymer.
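A compact statement of the power-law relation referred to in the Results, with ξ as the exponent fitted from the log-log plot. The symbols U for crystal growth rate and η for viscosity are assumed notation for this note, not necessarily the paper's:

```latex
U \propto \eta^{-\xi}
\qquad\Longleftrightarrow\qquad
\log U = \text{const} - \xi \log \eta
```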
Abstract:
Damage detection in bridges using vibration-based methods is an area of growing research interest. Improved assessment methodologies combined with state-of-the-art sensor technology are rapidly making these approaches applicable for real-world structures. Applying these techniques to the detection and monitoring of scour around bridge foundations has remained challenging; however, this area has gained traction in recent years. Several authors have investigated a range of methods, but significant work is still required to achieve a rounded and widely applicable methodology to detect and monitor scour. This paper presents a novel Vehicle-Bridge-Soil Dynamic Interaction (VBSDI) model which can be used to simulate the effect of scour on an integral bridge. The model outputs dynamic signals which can be analysed to determine modal parameters, and the variation of these parameters with respect to scour can be examined. The key novelty of this model is that it is the first numerical model for simulating scour that combines a realistic vehicle loading model with a robust foundation soil response model. This paper provides a description of the model development and explains the mathematical theory underlying the model. Finally, a case study application of the model using typical bridge, soil and vehicle properties is provided.
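As a highly simplified illustration of the idea that scour-induced loss of foundation stiffness shifts modal parameters, the sketch below computes the natural frequencies of a two-degree-of-freedom mass-spring idealisation of a deck-pier system on a soil spring while the soil stiffness is progressively reduced. It is not the VBSDI model from the paper; the masses and stiffnesses are invented for the example.

```python
# Illustrative 2-DOF mass-spring model: natural frequencies vs. reduced soil stiffness.
import numpy as np

m_deck, m_pier = 200e3, 80e3          # assumed lumped masses (kg)
k_pier = 4.0e8                        # assumed pier/bearing stiffness (N/m)
k_soil_0 = 6.0e8                      # assumed intact foundation soil stiffness (N/m)

M = np.diag([m_deck, m_pier])

def natural_frequencies(k_soil):
    """Natural frequencies (Hz) of the deck-pier-soil chain for a given soil stiffness."""
    K = np.array([[k_pier, -k_pier],
                  [-k_pier, k_pier + k_soil]])
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(np.real(eigvals))) / (2 * np.pi)

for loss in (0.0, 0.2, 0.4, 0.6):     # fraction of soil stiffness lost to scour
    f = natural_frequencies((1.0 - loss) * k_soil_0)
    print(f"{int(loss*100):>2d}% stiffness loss -> f1 = {f[0]:.2f} Hz, f2 = {f[1]:.2f} Hz")
```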
Abstract:
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantization described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound for the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation that simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system subject to such uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on the IEEE benchmark power system to demonstrate its effectiveness.
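For readers unfamiliar with the logarithmic quantizer mentioned above, the sketch below implements a standard logarithmic quantizer with density parameter ρ and checks the usual sector-bound property |q(v) − v| ≤ δ|v| with δ = (1 − ρ)/(1 + ρ). The parameter values are arbitrary; this is background illustration, not the filter design from the paper.

```python
# Illustrative logarithmic quantizer with its sector-bound check (background example).
import numpy as np

rho = 0.8                    # quantizer density parameter, assumed (0 < rho < 1)
u0 = 1.0                     # reference quantization level, assumed
delta = (1 - rho) / (1 + rho)

def log_quantize(v):
    """Map v to level sign(v) * u0 * rho**i, where the i-th level covers the interval
    (u0*rho**i/(1+delta), u0*rho**i/(1-delta)] of the magnitude axis."""
    if v == 0.0:
        return 0.0
    sign, mag = np.sign(v), abs(v)
    i = np.floor(np.log(mag * (1 - delta) / u0) / np.log(rho))   # unique level index
    return float(sign * u0 * rho ** i)

# Check the sector bound |q(v) - v| <= delta * |v| on random measurements.
rng = np.random.default_rng(3)
v = rng.uniform(-10.0, 10.0, 1000)
q = np.array([log_quantize(x) for x in v])
print(bool(np.all(np.abs(q - v) <= delta * np.abs(v) + 1e-12)))
```

The sector bound is what allows the quantization error to be absorbed as a norm-bounded uncertainty, which is the treatment the abstract describes.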
Abstract:
In this study, the authors propose simple methods to evaluate the achievable rates and outage probability of a cognitive radio (CR) link, taking into account imperfect spectrum sensing. In the considered system, the CR transmitter and receiver correlatively sense and dynamically exploit the spectrum pool via dynamic frequency hopping. Under imperfect spectrum sensing, false alarms and missed detections occur, which cause impulsive interference arising from collisions due to the simultaneous spectrum access of primary and cognitive users. This makes it very challenging to evaluate the achievable rates. By first examining the static link, where the channel is assumed to be constant over time, they show that the achievable rate using a Gaussian input can be calculated accurately through a simple series representation. In the second part of this study, they extend the calculation of the achievable rate to wireless fading environments. To take into account the effect of fading, they introduce a piecewise-linear curve-fitting-based method to approximate the instantaneous achievable rate curve as a combination of linear segments. It is then demonstrated that the ergodic achievable rate in fast fading and the outage probability in slow fading can be calculated to any given accuracy level.
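As a rough numerical illustration of the piecewise-linear curve-fitting idea described above, the sketch below approximates the Shannon rate curve log2(1 + γ) by linear segments and compares the resulting ergodic-rate estimate under Rayleigh fading with a direct Monte Carlo average. The segment placement, fading model and outage threshold are simplifications for the example, not the analysis in the paper.

```python
# Illustrative piecewise-linear approximation of log2(1 + SNR) for ergodic-rate estimation.
import numpy as np

rng = np.random.default_rng(4)
avg_snr = 10.0                              # assumed average SNR (linear scale)

# Knots of the piecewise-linear approximation of the rate curve (assumed placement).
knots = np.linspace(0.0, 8.0 * avg_snr, 25)
rate_at_knots = np.log2(1.0 + knots)

def pwl_rate(snr):
    """Rate curve approximated by linear interpolation between the knots.
    Samples beyond the last knot are clipped to its rate value."""
    return np.interp(snr, knots, rate_at_knots)

# Rayleigh fading: instantaneous SNR is exponentially distributed with mean avg_snr.
snr_samples = rng.exponential(avg_snr, 200_000)

ergodic_exact = np.mean(np.log2(1.0 + snr_samples))       # Monte Carlo reference
ergodic_pwl = np.mean(pwl_rate(snr_samples))               # piecewise-linear estimate
outage_prob = np.mean(np.log2(1.0 + snr_samples) < 1.0)    # outage at 1 bit/s/Hz (assumed)

print(round(ergodic_exact, 4), round(ergodic_pwl, 4), round(outage_prob, 4))
```

Adding more knots tightens the approximation, which mirrors the paper's claim that any given accuracy level can be reached.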