572 results for multiple objective programming


Relevance: 20.00%

Abstract:

This thematic issue on education and the politics of becoming focuses on how a Multiple Literacies Theory (MLT) plugs into practice in education. MLT does this by creating an assemblage between discourse, text, resonance and sensations. What does this produce? Becoming AND how one might live are the product of an assemblage (May, 2005; Semetsky, 2003). In this paper, MLT is the approach that explores the connection between educational theory and practice through the lens of an empirical study of multilingual children acquiring multiple writing systems simultaneously. The introduction explicates discourse, text, resonance, sensation and becoming. The second section introduces certain Deleuzian concepts that plug into MLT. The third section serves as an introduction to MLT. The fourth section is devoted to the study by way of a rhizoanalysis. Finally, drawing on the concept of the rhizome, this article exits with potential lines of flight opened by MLT. These are becomings which highlight the significance of this work in terms of transforming not only how literacies are conceptualized, especially in minority language contexts, but also how one might live.

Relevance: 20.00%

Abstract:

Several tests have been devised in an attempt to detect behaviour modification due to training, supplements or diet in horses. These tests rely on subjective observations in combination with physiological measures, such as heart rate (HR) and plasma cortisol concentrations, but these measures do not definitively identify behavioural changes. The aim of the present studies was to develop an objective and relevant measure of horse reactivity. In Study 1, the HR responses of six geldings confined to individual stalls to auditory stimuli, delivered over 6 days and designed to safely startle them, were studied to determine whether peak HR, unconfounded by physical exertion, was a reliable measure of reactivity. Both mean (±SEM) resting HR (39.5 ± 1.9 bpm) and peak HR (82 ± 5.5 bpm) in response to being startled were found to be consistent over the 6 days in all horses. In Study 2, the HR, plasma cortisol concentrations and speed of departure from an enclosure (reaction speed, RS) of six mares in response to a single stimulus were measured when it was presented daily over 6 days. Peak HR response (133 ± 4 bpm) was consistent over days for all horses, but RS increased (from 3.02 ± 0.72 m/s on Day 1 to 4.45 ± 0.53 m/s on Day 6; P = 0.005). There was no effect on plasma cortisol, so this variable was not studied further. In Study 3, using the six geldings from Study 1, the RS test was refined and a different startle stimulus was used each day. Again, there was no change in peak HR (97.2 ± 5.8 bpm) or RS (2.9 ± 0.2 m/s on Day 1 versus 3.0 ± 0.7 m/s on Day 6) over time. In the final study, mild sedation using acepromazine maleate (0.04 mg/kg BW i.v.) decreased peak HR in response to a startle stimulus when the horses (n = 8) were confined to a stall (P = 0.006), but not in an outdoor environment when the RS test was performed. However, RS was reduced by the mild sedation (P = 0.02). In conclusion, RS may be used as a practical and objective test to measure both reactivity and changes in reactivity in horses.
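
The consistency and trend results above (means ± SEM, and a P-value for the rise in RS across days) come from standard repeated-measures summaries. As a minimal, purely illustrative sketch, the Python below computes mean ± SEM of peak HR by day and a simple regression test for a day trend in RS; the data values are invented, and the simple linear regression is an assumption, not the statistical model the authors used.

```python
# Minimal sketch: summarising repeated startle-response measures.
# All numbers below are made up for illustration.
import numpy as np
from scipy import stats

# peak heart rate (bpm): rows = horses, columns = days 1..6 (hypothetical)
peak_hr = np.array([
    [80, 84, 79, 85, 81, 83],
    [90, 88, 92, 87, 91, 89],
    [75, 78, 74, 77, 76, 79],
])
hr_means = peak_hr.mean(axis=0)                                # daily means
hr_sems = peak_hr.std(axis=0, ddof=1) / np.sqrt(peak_hr.shape[0])
print("peak HR by day:", [f"{m:.1f}±{s:.1f}" for m, s in zip(hr_means, hr_sems)])

# reaction speed (m/s) per horse per day (hypothetical); test for a day trend
rs = np.array([
    [3.0, 3.2, 3.5, 3.9, 4.1, 4.4],
    [2.8, 3.1, 3.4, 3.8, 4.2, 4.5],
    [3.2, 3.3, 3.7, 4.0, 4.3, 4.5],
])
days = np.tile(np.arange(1, 7), rs.shape[0])
result = stats.linregress(days, rs.ravel())
print(f"RS trend: slope={result.slope:.2f} m/s per day, P={result.pvalue:.4f}")
```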

Relevance: 20.00%

Abstract:

LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into lithium-rich and lithium-poor phases, with intercalation proceeding by advancing an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that are difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (used previously in the literature) to describe the assumed phase change. LiFePO4 crystals have been observed agglomerating in cathodes to form porous collections of crystals, and this morphology motivates the use of three size scales in the model. The multi-scale model validates well against experimental data, and the validated model is then used to examine the effect of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis investigates phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been applied to LiFePO4 and represent experimentally observed crystal-scale behaviour far more accurately. They are based around the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares-based Finite Volume Method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under these constraints. This least-squares-based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. The discharge is subject to isotropic Li+ diffusion, based on experimental evidence suggesting that the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate even at very high discharge rates. This differs markedly from results in the literature, where phase separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed, with the Stefan problem replaced by the phase-field models examined in earlier chapters. The results from this model fit experimental data poorly, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, matching the conclusions of recent literature. These effects subject crystals to local currents very different from the discharge rate applied to the cathode, which affects the phase-separating behaviour of the crystals and raises questions about the validity of using cathodic-scale experimental measurements to determine crystal-scale behaviour.
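
The thesis's actual models (the CHR IBVP with flux boundary conditions and a least-squares FVM) are beyond a short example, but the phase-separating behaviour that phase-field models capture can be illustrated with a minimal explicit 1D Cahn-Hilliard solver. This sketch assumes a simple double-well free energy f(c) = c^2(1-c)^2, unit mobility and gradient-energy coefficients, and periodic boundaries; none of these choices are taken from the thesis.

```python
# Toy 1D Cahn-Hilliard solver: dc/dt = div(M grad mu), mu = f'(c) - kappa lap(c).
# Explicit time stepping on a periodic domain; parameters are illustrative only.
import numpy as np

N, dx, dt = 128, 1.0, 0.02
kappa, M = 1.0, 1.0                        # gradient-energy and mobility coefficients
rng = np.random.default_rng(0)
c = 0.5 + 0.05 * rng.standard_normal(N)    # near-uniform initial lithiation fraction

def lap(u):
    """Periodic second difference (compact finite-volume Laplacian)."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

for step in range(50_000):
    # f(c) = c^2 (1-c)^2  =>  f'(c) = 2 c (1-c) (1-2c)
    mu = 2.0 * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * lap(c)
    c += dt * M * lap(mu)                  # conservative update

# after spinodal decomposition, c splits into lithium-poor and lithium-rich domains
print("Li-poor fraction:", (c < 0.5).mean(), " Li-rich fraction:", (c > 0.5).mean())
```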

Relevance: 20.00%

Abstract:

Objective: Effective management of multi-resistant organisms is an important issue for hospitals both in Australia and overseas. This study investigates the utility of Bayesian Network (BN) analysis in examining relationships between risk factors and colonization with Vancomycin Resistant Enterococcus (VRE). Design: Bayesian Network analysis was performed using infection control data collected over a period of 36 months (2008-2010). Setting: Princess Alexandra Hospital (PAH), Brisbane. Outcome of interest: Number of new VRE isolates. Methods: A BN is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). A BN enables multiple interacting agents to be studied simultaneously. The initial BN model was constructed based on the infectious disease physician's expert knowledge and current literature. Continuous variables were dichotomised using the third-quartile values of the 2008 data. The BN was used to examine the probabilistic relationships between VRE isolates and risk factors, and to establish which factors were associated with an increased probability of a high number of VRE isolates. Software: Netica (version 4.16). Results: Preliminary analysis revealed that VRE transmission and VRE prevalence were the most influential factors in predicting a high number of VRE isolates. Interestingly, several factors (hand hygiene and cleaning) known from the literature to be associated with VRE prevalence did not appear to be as influential as expected in this BN model. Conclusions: This preliminary work has shown that Bayesian Network analysis is a useful tool for examining clinical infection prevention issues, where there is often a web of factors influencing outcomes. The BN model can be restructured easily, enabling various combinations of agents to be studied.
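
The study built its network in Netica; purely as an illustration of the same modelling pattern, the sketch below assembles a toy BN with the open-source pgmpy library. The structure, state definitions and all probabilities are invented for the example and are not the study's model.

```python
# Illustrative toy BN (not the study's Netica model); all CPDs are made up.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([
    ("HandHygiene", "Transmission"),
    ("Cleaning", "Transmission"),
    ("Transmission", "HighVRE"),
    ("Prevalence", "HighVRE"),
])

# All variables binary (0 = low/poor, 1 = high/good); probabilities invented.
cpd_hh = TabularCPD("HandHygiene", 2, [[0.3], [0.7]])
cpd_cl = TabularCPD("Cleaning", 2, [[0.4], [0.6]])
cpd_pr = TabularCPD("Prevalence", 2, [[0.75], [0.25]])
cpd_tr = TabularCPD(
    "Transmission", 2,
    [[0.5, 0.7, 0.7, 0.9],   # P(Transmission=0 | HandHygiene, Cleaning)
     [0.5, 0.3, 0.3, 0.1]],
    evidence=["HandHygiene", "Cleaning"], evidence_card=[2, 2],
)
cpd_hv = TabularCPD(
    "HighVRE", 2,
    [[0.95, 0.6, 0.7, 0.2],  # P(HighVRE=0 | Transmission, Prevalence)
     [0.05, 0.4, 0.3, 0.8]],
    evidence=["Transmission", "Prevalence"], evidence_card=[2, 2],
)
model.add_cpds(cpd_hh, cpd_cl, cpd_pr, cpd_tr, cpd_hv)
assert model.check_model()

# Query: probability of a high number of VRE isolates given observed transmission.
infer = VariableElimination(model)
print(infer.query(["HighVRE"], evidence={"Transmission": 1}))
```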

Relevance: 20.00%

Abstract:

In developed countries the relationship between socioeconomic position (SEP) and health is unequivocal. Those who are socioeconomically disadvantaged experience higher morbidity and mortality from a range of chronic diet-related conditions than those of higher SEP. Socioeconomic inequalities in diet are well established: compared to their more advantaged counterparts, those of low SEP are consistently found to consume diets less in line with dietary guidelines (i.e. higher in fat, salt and sugar and lower in fibre, fruit and vegetables). Although the reasons for dietary inequalities remain unclear, understanding how such differences arise is important for the development of strategies to reduce health inequalities. Both environmental (e.g. proximity of supermarkets, price and availability of foods) and psychosocial (e.g. taste preference, nutrition knowledge) influences are proposed to account for inequalities in food choices. Although in the United States (US), United Kingdom (UK) and parts of Australia environmental factors are associated with socioeconomic differences in food choices, these factors do not completely account for the observed inequalities. Internationally, this context has prompted calls for further exploration of the role of psychological and social factors in relation to inequalities in food choices. It is this task that forms the primary goal of this PhD research. In the small body of research examining the contribution of psychosocial factors to inequalities in food choices, studies have focussed on food cost concerns, nutrition knowledge or health concerns. These factors are generally found to be influential. However, since a range of psychosocial factors are known determinants of food choices in the general population, it is likely that a range of factors also contribute to inequalities in food choices. Identification of additional psychosocial factors relevant to inequalities in food choices would provide new opportunities for health promotion, including the adaptation of existing strategies. The methodological features of previous research have also hindered the advancement of knowledge in this area, and a lack of qualitative studies has resulted in a dearth of descriptive information on this topic. This PhD investigation extends previous research by assessing a range of psychosocial factors in relation to inequalities in food choices using both quantitative and qualitative techniques. Secondary data analyses were undertaken using data obtained from two Brisbane-based studies, the Brisbane Food Study (N=1003, conducted in 2000) and the Sixty Families Study (N=60, conducted in 1998). Both studies involved main household food purchasers completing an interviewer-administered survey within their own home. Data pertaining to food purchasing, and to psychosocial, socioeconomic and demographic characteristics, were collected in each study. The mutual goals of the qualitative and quantitative phases of this investigation were to assess socioeconomic differences in food purchasing and to identify psychosocial factors relevant to any observed differences. The quantitative methods additionally considered whether the associations examined differed according to the socioeconomic indicator used (i.e. income or education). The qualitative analyses made a unique contribution to this project by generating detailed descriptions of socioeconomic differences in psychosocial factors.
Those with lower levels of income and education were found to make food-purchasing choices less consistent with dietary guidelines than those of high SEP. The psychosocial factors identified as relevant to food-purchasing inequalities were: taste preferences, health concerns, health beliefs, nutrition knowledge, nutrition concerns, weight concerns, nutrition label use, and several other values and beliefs unique to particular socioeconomic groups. Factors more tenuously or inconsistently related to socioeconomic differences in food purchasing were cost concerns and perceived adequacy of the family diet. Evidence from both the quantitative and qualitative analyses suggested that psychosocial factors contribute to inequalities in food purchasing in a collective manner. The quantitative analyses revealed that considerable overlap in the socioeconomic variation in food purchasing was accounted for by key psychosocial factors, including taste preference, nutrition concerns, nutrition knowledge and health concerns. Consistent with these findings, the qualitative transcripts demonstrated the interplay between such influential psychosocial factors in determining food-purchasing choices. The qualitative analyses found socioeconomic differences in the prioritisation of psychosocial factors in relation to food choices. This is suggestive of complex cultural factors that distinguish advantaged and disadvantaged groups and result in socioeconomically distinct schemas related to health and food choices. Compared to those of high SEP, those of lower SEP were less likely to indicate that health concerns, nutrition concerns or food labels influenced food choices, and exhibited lower levels of nutrition knowledge. In the absence of health- or nutrition-related concerns, taste preferences tended to dominate the food-purchasing choices of those of low SEP. Overall, while cost concerns did not appear to be a main determinant of socioeconomic differences in food purchasing, this factor had a dominant influence on the food choices of some of the most disadvantaged respondents included in this research. The findings of this study have several implications for health promotion. The integrated operation of psychosocial factors on food-purchasing inequalities indicates that multiple psychosocial factors may be appropriate targets for health promotion. It also seems possible that the inter-relatedness of psychosocial factors would allow health promotion targeting a single psychosocial factor to have a flow-on effect, altering other influential psychosocial factors. This research also suggests that current mass-marketing approaches to health promotion may not be effective across all socioeconomic groups, owing to differences between groups in the priorities and main factors influencing food-purchasing decisions. In addition to the practical recommendations for health promotion, this investigation, through the critique of previous research and through the substantive study findings, has highlighted important methodological considerations for future research. Of particular note are the recommendations pertaining to the selection of socioeconomic indicators, measurement of relevant constructs, consideration of confounders, and development of an analytical approach. Addressing inequalities in health has been noted as a main objective by many health authorities and governments internationally.
It is envisaged that the substantive and methodological findings of this thesis will make a useful contribution towards this important goal.

Relevance: 20.00%

Abstract:

A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. Damage detection by these methods is in fact a result of updating the analytical structural model in line with the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures such as beams or plates. When applied to a complex structure, such as a steel truss bridge, a traditional optimization process consumes massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome the problem. Unlike the tedious convergence process in a conventional damage optimization process, in each layer the proposed algorithm divides the GA's population into groups, each with a smaller number of damage candidates; the converged population of each group then serves as part of the initial population of the next layer, where the groups merge into larger groups. Because parallel computation can be implemented in a damage detection process featuring ML-GA, both optimization performance and computational efficiency can be enhanced. To assess the proposed algorithm, the modal strain energy correlation (MSEC) is used as the objective function. Several damage scenarios of a complex steel truss bridge's finite element model are employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index achieves excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA converges only to a local solution.
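
As a hedged illustration of the layering idea, the sketch below evolves two small groups of damage candidates independently and then merges the converged groups into a single larger population. The synthetic quadratic objective stands in for the MSEC index, and all sizes and rates are arbitrary choices, not values from the study.

```python
# Toy sketch of the multi-layer GA idea: optimise subsets ("groups") of damage
# parameters first, then merge converged groups. The objective is a synthetic
# stand-in for the modal strain energy correlation (MSEC); all values invented.
import numpy as np

rng = np.random.default_rng(1)
TRUE_DAMAGE = rng.uniform(0.0, 0.3, size=8)   # hypothetical element damage ratios

def objective(x, idx):
    """Synthetic fitness: negative squared error on the selected elements."""
    return -np.sum((x - TRUE_DAMAGE[idx]) ** 2)

def run_ga(idx, pop, generations=200, mut=0.05):
    for _ in range(generations):
        fit = np.array([objective(p, idx) for p in pop])
        a, b = rng.integers(0, len(pop), (2, len(pop)))       # tournament pairs
        parents = pop[np.where(fit[a] > fit[b], a, b)]        # tournament selection
        mask = rng.random(parents.shape) < 0.5                # uniform crossover
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        children += mut * rng.standard_normal(children.shape) # Gaussian mutation
        pop = np.clip(children, 0.0, 1.0)
    return pop

# Layer 1: two groups of 4 candidate elements each, evolved independently
groups = [np.arange(0, 4), np.arange(4, 8)]
pops = [run_ga(g, rng.random((40, 4))) for g in groups]

# Layer 2: merge the converged groups into a single 8-variable population
merged = np.hstack(pops)
final = run_ga(np.arange(8), merged)
best = final[np.argmax([objective(p, np.arange(8)) for p in final])]
print("identified damage:", np.round(best, 2))
```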

Relevance: 20.00%

Abstract:

Global Navigation Satellite Systems (GNSS)-based observation systems can provide high-precision positioning and navigation solutions in real time, at the subcentimetre level if carrier phase measurements are used in differential mode and all bias and noise terms are handled well. However, these carrier phase measurements are ambiguous due to unknown integer numbers of cycles. One key challenge in the differential carrier phase mode is to fix the integer ambiguities correctly. On the other hand, in safety-of-life or liability-critical applications, such as vehicle safety positioning and aviation, not only is high accuracy required, but the reliability requirement is also important. This PhD research studies how to achieve high reliability for ambiguity resolution (AR) in a multi-GNSS environment. GNSS ambiguity estimation and validation problems are the focus of the research effort. In particular, we study the case of multiple constellations, covering initial to full operation of the foreseeable Galileo, GLONASS, Compass and QZSS navigation systems from the next few years to the end of the decade. Since real observation data is only available from the GPS and GLONASS systems, a simulation method named Virtual Galileo Constellation (VGC) is applied to generate observational data from another constellation in the data analysis. In addition, both full ambiguity resolution (FAR) and partial ambiguity resolution (PAR) algorithms are used in processing single- and dual-constellation data. Firstly, a brief overview of related work on AR methods and reliability theory is given. Next, a modified inverse integer Cholesky decorrelation method and its performance on AR are presented. Subsequently, a new measure of decorrelation performance called the orthogonality defect is introduced and compared with other measures. Furthermore, a new AR scheme that considers the ambiguity validation requirement in the control of the search-space size is proposed to improve search efficiency. With respect to the reliability of AR, we also discuss the computation of the ambiguity success rate (ASR) and confirm that the success rate computed with the integer bootstrapping method is quite a sharp approximation to the actual integer least-squares (ILS) success rate. The advantages of multi-GNSS constellations are examined in terms of the PAR technique involving a predefined ASR. Finally, a novel satellite selection algorithm for reliable ambiguity resolution, called SARA, is developed. In summary, the study demonstrates that when the ASR is close to one, the reliability of AR can be guaranteed and the ambiguity validation is effective. The work then focuses on new strategies to improve the ASR, including a partial ambiguity resolution procedure with a predefined success rate and a novel satellite selection strategy with a high success rate. The proposed strategies bring the significant benefits of multi-GNSS signals to real-time high-precision and high-reliability positioning services.
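
The bootstrapped ASR mentioned above has a well-known closed form: ASR = prod_i (2*Phi(1/(2*sigma_{i|I})) - 1), where the sigma_{i|I} are conditional standard deviations obtained from an LDL-type factorisation of the (ideally decorrelated) ambiguity covariance matrix. A minimal sketch follows; the covariance matrix is invented, and the conditioning order implied by scipy's LDL factorisation is an assumption of this example.

```python
# Sketch: integer bootstrapping ambiguity success rate from an ambiguity
# covariance matrix. The 3x3 matrix below (cycles^2) is invented.
import numpy as np
from scipy.stats import norm
from scipy.linalg import ldl

def bootstrap_success_rate(Q):
    """ASR from the ambiguity covariance matrix Q (ideally after decorrelation)."""
    _, D, _ = ldl(Q, lower=True)
    cond_sd = np.sqrt(np.diag(D))          # conditional standard deviations
    return np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_sd)) - 1.0)

Q = np.array([[0.090, 0.015, 0.010],
              [0.015, 0.070, 0.012],
              [0.010, 0.012, 0.080]])
print(f"bootstrapped ASR: {bootstrap_success_rate(Q):.4f}")
```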

Relevance: 20.00%

Abstract:

Universities are increasingly challenged by the emerging global higher education market, facilitated by advances in Information and Communication Technologies (ICT). This requires them to reconsider their mission and direction in order to function effectively and efficiently, and to be responsive to changes in their environment. In the face of increasing demands and competitive pressures, universities, like other organizations, seek to continuously innovate and improve their performance. Universities are considering co-operating or sharing, both internally and externally, in a wide range of areas to achieve cost-effectiveness and improvements in performance. Shared services are an effective model for re-organizing to reduce costs, increase quality and create new capabilities. Shared services are not limited to the Higher Education (HE) sector: organizations across different sectors are adopting shared services, in particular for support functions such as Finance, Accounting, Human Resources and Information Technology. While shared services have been around for more than three decades, commencing in the 1970s in the banking sector and subsequently adopted by other sectors, they remain an under-researched domain, with little consensus on even the most fundamental issues, as basic as defining what shared services are. Moreover, interest in shared services within Higher Education is a global phenomenon. This study of shared services is situated within the Higher Education sector of Malaysia, and originated as an outcome of a national project (2005-2007) conducted by the Ministry of Higher Education (MOHE) entitled "Knowledge, Information Communication Technology Strategic Plan (KICTSP) for Malaysian Public Higher Education", in which progress towards more collaboration via shared services was a key recommendation. The study's primary objective was to understand the nature and potential of ICT shared services, in particular in the Malaysian HE sector, by laying a foundation in terms of definitions, typologies and a research agenda, and by deriving theoretically based conceptualisations of the potential benefits of shared services, the success factors, and the issues of pursuing shared services. The study embarked on this objective with a literature review and a pilot case study as a means to further define the context of the study, given the under-researched status of ICT shared services and of shared services in Higher Education. This context-definition phase illustrated a range of unaddressed issues, including a lack of common understanding of what shared services are, how they are formed, what objectives they fulfil, and who is involved. The study thus embarked on a further investigation of a more foundational nature, with an exploratory phase in which a detailed archival analysis of the shared services literature within the IS context was conducted to better understand shared services from an IS perspective. The IS literature on shared services was analysed in depth to report on the current status of shared services research in the IS domain; in particular, definitions, objectives, stakeholders, the notion of sharing, theories used and research methods applied were analysed, which provided a firmer base for this study's design. The study also conducted a detailed content analysis of 36 cases (globally) of shared services implementations in the HE sector, to better understand how shared services are structured within the HE sector and what is being shared.
The results of the context-definition and exploratory phases formed a firm basis for the multiple-case-studies phase, which was designed to address the primary goals of this study (as presented above). Three case sites within the Malaysian HE sector were included in this analysis, resulting in empirically supported theoretical conceptualizations of shared services success factors, issues and benefits. A range of contributions is made through this study. First, the detailed archival analysis of shared services in Information Systems (IS) demonstrated the dearth of research on shared services within Information Systems. While the existing literature was synthesised to contribute towards an improved understanding of shared services in the IS domain, the areas that remain under-developed and require further exploration are identified and presented as a proposed research agenda for the field. The study also provides theoretical considerations and methodological guidelines supporting the research agenda, to enable better empirical research in this domain. A number of literature-based a priori frameworks (e.g. on the forms of sharing and on shared services stakeholders) are derived in this phase, contributing to practice and research with early conceptualisations of critical aspects of shared services. Furthermore, the comprehensive archival analysis design presented and executed here exemplifies a systematic, pre-defined and tool-supported method to extract, analyse and report literature, and is documented as a set of guidelines that can be applied to other, similar literature analyses, with particular attention to supporting novice researchers. Second, the content analysis of 36 shared services initiatives in the Higher Education sector identified eight different types of structural arrangements for shared services, as observed in practice, and the salient dimensions along which those types can be usefully differentiated. Each of the eight structural arrangement types is defined and demonstrated through case examples, with further descriptive details and insights into what is shared and how the sharing occurs. This typology, grounded in secondary empirical evidence, can serve as a useful analytical tool for researchers investigating the shared services phenomenon further, and for practitioners considering the introduction or further development of shared services. Finally, the multiple case studies conducted in the Malaysian Higher Education sector provided a further empirical basis to instantiate the conceptual frameworks and typology derived from the prior phases, and develop an empirically supported (i) framework of issues and challenges, (ii) preliminary theory of shared services success, and (iii) benefits framework, for shared services in the Higher Education sector.

Relevance: 20.00%

Abstract:

The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high-performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that combine computational hardware such as CPUs and GPUs are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time-stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time-stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
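
The key point that makes a matrix-free implicit solve possible is that a Krylov method only ever needs the product of the Jacobian with a vector, and that product can be approximated from two residual evaluations. A minimal sketch of this Jacobian-free Newton-Krylov pattern follows; the residual function is a toy stand-in, not the CV-FE discretisation described in the thesis.

```python
# Jacobian-free Newton-Krylov sketch: J*v ~ (F(u + eps*v) - F(u)) / eps,
# so the Krylov solver never needs an assembled Jacobian matrix.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    """Toy nonlinear residual (placeholder for a spatial discretisation)."""
    return u**3 + 2.0 * u - 1.0 - 0.1 * np.roll(u, 1)

def newton_krylov(u, tol=1e-10, eps=1e-7):
    for _ in range(50):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # matrix-free Jacobian action via finite-difference of the residual
        Jv = LinearOperator((u.size, u.size),
                            matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(Jv, -r, atol=1e-8)   # inexact linear solve
        u = u + du
    return u

u = newton_krylov(np.zeros(16))
print("final residual norm:", np.linalg.norm(F(u)))
```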

Relevance: 20.00%

Abstract:

A procedure is described for the evaluation of multiple scattering contributions in deep inelastic neutron scattering (DINS) studies using an inverse-geometry time-of-flight spectrometer. The accuracy of a Monte Carlo code, DINSMS, used to calculate the multiple scattering, is tested by comparison with analytic expressions and with experimental data collected from polythene, polycrystalline graphite and tin samples. It is shown that the Monte Carlo code gives an accurate representation of the measured data and can therefore be used to reliably correct DINS data.
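
As a rough illustration of the general technique (not of DINSMS itself), the sketch below estimates the fraction of multiply scattered neutrons in a 1D slab by sampling free-flight lengths and interaction types. The geometry, cross-section and scattering probability are invented for the example.

```python
# Toy 1D Monte Carlo: count neutrons that scatter more than once in a slab.
# Drastically simplified stand-in for a multiple-scattering code; all numbers invented.
import numpy as np

rng = np.random.default_rng(0)
SIGMA_TOT = 0.5      # total macroscopic cross-section (1/cm), hypothetical
P_SCATTER = 0.9      # scattering probability per interaction (vs. absorption)
THICKNESS = 2.0      # slab thickness (cm)

def count_scatters(n_neutrons=20_000):
    scatters = np.zeros(n_neutrons, dtype=int)
    for i in range(n_neutrons):
        x, direction = 0.0, 1.0
        while True:
            x += direction * rng.exponential(1.0 / SIGMA_TOT)  # free flight
            if x < 0.0 or x > THICKNESS:        # escaped the slab
                break
            if rng.random() < P_SCATTER:        # scattered: new direction (1D)
                scatters[i] += 1
                direction = rng.choice([-1.0, 1.0])
            else:                               # absorbed
                break
    return scatters

s = count_scatters()
print(f"single-scatter fraction: {(s == 1).mean():.3f}, "
      f"multiple-scatter fraction: {(s >= 2).mean():.3f}")
```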

Relevance: 20.00%

Abstract:

Welcome to the Quality assessment matrix. This matrix is designed for highly qualified discipline experts to evaluate their course, major or unit in a systematic manner. The primary purpose of the Quality assessment matrix is to provide a tool with which a group of academic staff at a university can collaboratively review the assessment within a course, major or unit annually. The annual review will leave you ready for an external curriculum review at any point in time. The tool is designed for use in a workshop format with one, two or more academic staff, and leads to an action plan for implementation.

Relevance: 20.00%

Abstract:

User interfaces for source code editing are a crucial component in any software development environment, and in many editors visual annotations (overlaid on the textual source code) are used to provide important contextual information to the programmer. This paper focuses on the real-time programming activity of ‘cyberphysical’ programming, and considers the type of visual annotations which may be helpful in this programming context.

Relevance: 20.00%

Abstract:

Plug-in electric vehicles (PEVs) are increasingly popular in the global trend of energy saving and environmental protection. However, the uncoordinated charging of numerous PEVs can have significant negative impacts on the secure and economic operation of the power system concerned. In this context, a hierarchical decomposition approach is presented to coordinate the charging/discharging behaviors of PEVs. The major objective of the upper-level model is to minimize the total cost of system operation by jointly dispatching generators and electric vehicle aggregators (EVAs). The lower-level model, on the other hand, aims at strictly following the dispatching instructions from the upper-level decision-maker by designing appropriate charging/discharging strategies for each individual PEV in a specified dispatching period. Two highly efficient commercial solvers, AMPL/IPOPT and AMPL/CPLEX, are used to solve the developed hierarchical decomposition model. Finally, a modified IEEE 118-bus test system including 6 EVAs is employed to demonstrate the performance of the developed model and method.
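
A toy version of the upper-level dispatch can be written as a small linear program: choose generator outputs and an aggregator's charging profile to minimise cost, subject to power balance and an energy requirement. The sketch below uses scipy.optimize.linprog with invented numbers; the paper's actual model, solved with AMPL/IPOPT and AMPL/CPLEX, is far richer.

```python
# Toy joint dispatch of two generators and one EV aggregator over three periods.
# All demands, costs and limits are invented for illustration.
import numpy as np
from scipy.optimize import linprog

T = 3
demand = np.array([60.0, 120.0, 100.0])   # MW base load per period (hypothetical)
c_gen = [20.0, 35.0]                      # $/MWh for generators 1 and 2
ev_energy = 45.0                          # MWh the aggregator must receive in total

# Variables: g1_0..g1_2, g2_0..g2_2, ev_0..ev_2 (EV charging adds load, zero cost)
cost = np.concatenate([np.full(T, c_gen[0]), np.full(T, c_gen[1]), np.zeros(T)])

# Power balance per period: g1_t + g2_t - ev_t = demand_t
A_eq = np.zeros((T + 1, 3 * T))
for t in range(T):
    A_eq[t, t] = 1.0           # g1_t
    A_eq[t, T + t] = 1.0       # g2_t
    A_eq[t, 2 * T + t] = -1.0  # ev_t (extra load)
A_eq[T, 2 * T:] = 1.0          # total EV charging energy over the horizon
b_eq = np.concatenate([demand, [ev_energy]])

bounds = [(0, 80)] * T + [(0, 100)] * T + [(0, 30)] * T  # capacity limits
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
ev = res.x[2 * T:]
print("EV charging schedule (MW):", np.round(ev, 1), "  total cost: $%.0f" % res.fun)
```

The optimum concentrates charging in periods where the cheap generator still has spare capacity, which is the coordination effect the hierarchical model exploits.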

Relevance: 20.00%

Abstract:

In the electricity market environment, coordinating the reliability and economics of a power system is of great significance in determining the available transfer capability (ATC). In addition, the risks associated with uncertainties should be properly addressed in the ATC determination process for risk-benefit maximization. Against this background, it is necessary that the ATC be optimally allocated and utilized within the relevant security constraints. First of all, non-sequential Monte Carlo simulation is employed to derive the probability density distribution of the ATC of designated areas, incorporating uncertainty factors. Second, on that basis, a multi-objective optimization model is formulated to determine the multi-area ATC so as to maximize the risk-benefits. Then, the developed model is solved with the fast non-dominated sorting genetic algorithm (NSGA-II), which decreases the risk caused by uncertainties while coordinating the ATCs of different areas. Finally, the IEEE 118-bus test system is used to demonstrate the essential features of the developed model and the employed algorithm.
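
The core of NSGA-II is the fast non-dominated sorting step, which partitions a population into successive Pareto fronts. A minimal sketch follows, with an invented two-objective population (both objectives minimised, e.g. negative benefit and risk); a full NSGA-II additionally needs crowding distance, selection, crossover and mutation.

```python
# Fast non-dominated sorting (the core of NSGA-II) on an invented population.
import numpy as np

def fast_non_dominated_sort(F):
    """F: (n, m) objective matrix, all objectives minimised.
    Returns a list of fronts, each a list of row indices."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    n_dom = np.zeros(n, dtype=int)          # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].append(j)
            elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                n_dom[i] += 1
    fronts = [[i for i in range(n) if n_dom[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

rng = np.random.default_rng(0)
objs = rng.random((12, 2))          # columns: e.g. [-benefit, risk], both minimised
for k, front in enumerate(fast_non_dominated_sort(objs)):
    print(f"front {k}: {front}")
```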

Relevance: 20.00%

Abstract:

We introduce the use of Ingenuity Pathway Analysis for analyzing global metabonomics data, in order to characterize phenotypical biochemical perturbations and the potential mechanisms of gentamicin-induced toxicity in multiple organs. A single dose of gentamicin was administered to Sprague Dawley rats (200 mg/kg, n = 6), and urine samples were collected at -24 to 0 h pre-dosage and at 0-24, 24-48, 48-72 and 72-96 h post-dosage. The urine metabonomics analysis was performed by UPLC/MS, and the mass spectral signals of the detected metabolites were systematically deconvoluted and analyzed by pattern recognition analyses (heatmap, PCA and PLS-DA), revealing a time dependency of the biochemical perturbations induced by gentamicin toxicity. As a result, the holistic metabolome change induced by gentamicin toxicity in the animals was characterized. Several metabolites involved in amino acid metabolism were identified in urine, and it was confirmed that the biochemical perturbations caused by gentamicin can be foreseen from these biomarkers. Notably, gentamicin was found to induce toxicity in multiple organ systems in the laboratory rats. The knowledge-based Ingenuity Pathway Analysis revealed gentamicin-induced liver and heart toxicity, along with the previously known toxicity in kidney. The metabolites creatine, nicotinic acid, prostaglandin E2 and cholic acid were identified and validated as phenotypic biomarkers of gentamicin-induced toxicity. Altogether, the significance of metabonomics analyses in the assessment of drug toxicity is highlighted once more; furthermore, this work demonstrates the powerful predictive potential of Ingenuity Pathway Analysis for the study of drug toxicity, and its valuable complementarity to metabonomics-based assessment of drug toxicity.
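
As an illustration of the pattern-recognition step (not of the study's actual pipeline), the sketch below runs PCA on a synthetic samples-by-metabolites intensity matrix in which a toxicity-related shift grows with time post-dose, so the principal component scores separate the collection windows. All data and dimensions are invented.

```python
# Toy PCA on a synthetic (samples x metabolites) intensity matrix, mimicking the
# time-dependent separation seen in metabonomics score plots. Data is invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
time_points = ["pre", "0-24h", "24-48h", "48-72h", "72-96h"]
n_rats, n_metabolites = 6, 50

# A fixed "toxicity direction" whose magnitude grows with time post-dose
direction = rng.standard_normal(n_metabolites)
X, labels = [], []
for t, tp in enumerate(time_points):
    X.append(rng.standard_normal((n_rats, n_metabolites)) + 0.8 * t * direction)
    labels += [tp] * n_rats
X = np.vstack(X)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for tp in time_points:
    m = [i for i, lab in enumerate(labels) if lab == tp]
    print(f"{tp:>7}: PC1 mean = {scores[m, 0].mean():+.2f}")
```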