895 results for System verification and analysis
Abstract:
There has been increased use of the Doubly-Fed Induction Machine (DFIM) in ac drive applications in recent times, particularly in renewable energy systems and other high-power variable-speed drives. The DFIM is widely regarded as the optimal generation system for both onshore and offshore wind turbines and has also been considered for wave power applications. Wind power generation is the most mature renewable technology; however, wave energy has attracted considerable interest recently, as its potential for power extraction is very significant. Various wave energy converter (WEC) technologies currently exist, with the oscillating water column (OWC) type converter being one of the most advanced. There are fundamental differences between the power profile of the pneumatic power supplied by the OWC WEC and that of a wind turbine, and this causes significant challenges in the selection and rating of electrical generators for OWC devices. The thesis initially aims to provide an accurate per-phase equivalent-circuit model of the DFIM by investigating various characterisation testing procedures. A novel testing methodology based on series-coupling tests is employed and is found to provide a more accurate representation of the DFIM than the standard IEEE testing methods, because the series-coupling tests provide a direct method of determining the equivalent-circuit resistances and inductances of the machine. A second novel method, known as the extended short-circuit test, is also presented and investigated as an alternative characterisation method. Experimental results on a 1.1 kW DFIM and a 30 kW DFIM utilising the various characterisation procedures are presented in the thesis. The various test methods are analysed and validated through comparison of model predictions and torque-versus-speed curves for each induction machine.
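Once the equivalent-circuit resistances and inductances are known, the torque-versus-speed curve used for validation follows directly from the per-phase model. As a rough illustration only (not the thesis' code; all parameter values below are hypothetical), a Thevenin-simplified torque calculation for an induction machine might look like:

```python
import math

def torque_speed(V, f, poles, R1, X1, R2, X2, Xm, slip):
    """Electromagnetic torque [N*m] of an induction machine from its
    per-phase equivalent circuit, via the Thevenin simplification of
    the stator side seen from the air gap."""
    w_sync = 4 * math.pi * f / poles                 # synchronous speed, rad/s
    Zth = complex(R1, X1) * complex(0, Xm) / complex(R1, X1 + Xm)
    Vth = V * Xm / abs(complex(R1, X1 + Xm))         # Thevenin voltage magnitude
    Rth, Xth = Zth.real, Zth.imag
    denom = (Rth + R2 / slip) ** 2 + (Xth + X2) ** 2
    return 3 * Vth ** 2 * (R2 / slip) / (w_sync * denom)

# Hypothetical parameters loosely representative of a small machine
curve = [(s, torque_speed(230, 50, 4, 1.5, 2.0, 1.2, 2.0, 60.0, s))
         for s in (0.05, 0.1, 0.2)]
```

Comparing such a predicted curve against measured torque-speed points is the validation step the abstract describes.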
Sensitivity analysis is also used as a means of quantifying the effect of experimental error on the results taken from each of the testing procedures, and to determine the suitability of the test procedures for characterising each of the devices. The series-coupling differential test is demonstrated to be the optimum test. The research then focuses on the OWC WEC and the modelling of this device. A software model is implemented based on data obtained from a scaled prototype device situated at the Irish test site. Test data from the electrical system of the device is analysed and used to develop a performance curve for the air turbine utilised in the WEC. This performance curve was applied in a software model to represent the turbine in the electro-mechanical system, and the software results are validated against the measured electrical output data from the prototype test device. Finally, once both the DFIM and the OWC WEC power take-off system have been modelled successfully, an investigation of the application of the DFIM to the OWC WEC model is carried out to determine the electrical machine rating required for the pulsating power derived from the OWC WEC device. Thermal analysis of a 30 kW induction machine is carried out using a first-order thermal model. The simulations quantify the limits of operation of the machine and enable the development of rating requirements for the electrical generation system of the OWC WEC. The thesis can be considered to have three sections. The first section contains Chapters 2 and 3 and focuses on the accurate characterisation of the doubly-fed induction machine using various testing procedures. The second section, containing Chapter 4, concentrates on the modelling of the OWC WEC power take-off, with particular focus on the Wells turbine. Validation of this model is carried out through comparison of simulations and experimental measurements.
The third section of the thesis utilises the OWC WEC model from Chapter 4 with a 30 kW induction machine model to determine the optimum device rating for the specified machine. Simulations are carried out to perform thermal analysis of the machine to give a general insight into electrical machine rating for an OWC WEC device.
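A first-order thermal model of the kind mentioned above reduces the machine to a single lumped thermal resistance and capacitance, C·dT/dt = P − (T − T_amb)/R. A minimal forward-Euler sketch of that idea, with entirely hypothetical parameter values (this is an illustration of the model class, not the thesis' simulation):

```python
import math

def simulate_temperature(losses, dt, R_th, C_th, T_amb=25.0):
    """First-order lumped thermal model: C * dT/dt = P - (T - T_amb) / R.
    `losses` is a sequence of loss-power samples [W] at spacing dt [s];
    returns the temperature history [deg C]."""
    T = T_amb
    history = []
    for P in losses:
        dT = (P - (T - T_amb) / R_th) / C_th
        T += dT * dt
        history.append(T)
    return history

# Hypothetical pulsating loss profile, loosely OWC-like
losses = [400 + 400 * abs(math.sin(0.1 * k)) for k in range(2000)]
temps = simulate_temperature(losses, dt=1.0, R_th=0.05, C_th=2000.0)
```

Running such a model against a pulsating loss profile, and checking whether the temperature settles below the insulation limit, is exactly the kind of rating question the thesis investigates.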
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to grow by a factor of 10 to 44 zettabytes by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question is therefore asked: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner?
The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
Abstract:
Dopamine is an important central nervous system transmitter that functions through two classes of receptors (D1 and D2) to influence a diverse range of biological processes in vertebrates. With roles in regulating neural activity, behavior, and gene expression, there has been great interest in understanding the function and evolution of dopamine and its receptors. In this study, we use a combination of sequence analyses, microsynteny analyses, and phylogenetic relationships to identify and characterize both the D1 (DRD1A, DRD1B, DRD1C, and DRD1E) and D2 (DRD2, DRD3, and DRD4) dopamine receptor gene families in 43 recently sequenced bird genomes representing the major ordinal lineages across the avian family tree. We show that the common ancestor of all birds possessed at least seven D1 and D2 receptors, followed by subsequent independent losses in some lineages of modern birds. Through comparisons with other vertebrate and invertebrate species we show that two of the D1 receptors, DRD1A and DRD1B, and two of the D2 receptors, DRD2 and DRD3, originated from a whole-genome duplication event early in the vertebrate lineage, providing the first conclusive evidence of the origin of these highly conserved receptors. Our findings provide insight into the evolutionary development of an important modulatory component of the central nervous system in vertebrates, and will help further unravel the complex evolutionary and functional relationships among dopamine receptors.
Abstract:
An extensive literature base worldwide demonstrates how spatial differences in estuarine fish assemblages are related to those in the environment at (bio)regional, estuary-wide or local (within-estuary) scales. Few studies, however, have examined all three scales, and those including more than one have often focused at the level of individual environmental variables rather than scales as a whole. This study has identified those spatial scales of environmental differences, across regional, estuary-wide and local levels, that are most important in structuring ichthyofaunal composition throughout south-western Australian estuaries. It is the first to adopt this approach for temperate microtidal waters. To achieve this, we have employed a novel approach to the BIOENV routine in PRIMER v6 and a modified global BEST test in an alpha version of PRIMER v7. A combination of all three scales best matched the pattern of ichthyofaunal differences across the study area (rho = 0.59; P = 0.001), with estuary-wide and regional scales accounting for about twice the variability of local scales. A shade plot analysis showed these broader-scale ichthyofaunal differences were driven by a greater diversity of marine and estuarine species in the permanently-open west coast estuaries and higher numbers of several small estuarine species in the periodically-open south coast estuaries. When interaction effects were explored, strong but contrasting influences of local environmental scales were revealed within each region and estuary type. A quantitative decision tree for predicting the fish fauna at any nearshore estuarine site in south-western Australia has also been produced. The estuarine management implications of the above findings are highlighted.
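The BIOENV/BEST procedure referred to above searches subsets of environmental variables for the one whose between-sample distance matrix best rank-correlates with the biotic (Bray-Curtis) dissimilarity matrix. A stripped-down stdlib sketch of that idea follows; it is an illustration of the principle only, not the PRIMER implementation, and omits tie corrections and permutation testing:

```python
from itertools import combinations

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    den = sum(x + y for x, y in zip(a, b))
    return sum(abs(x - y) for x, y in zip(a, b)) / den if den else 0.0

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def spearman(u, v):
    """Spearman rank correlation (no tie correction; sketch only)."""
    def ranks(s):
        order = sorted(range(len(s)), key=lambda i: s[i])
        r = [0] * len(s)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ru, rv = ranks(u), ranks(v)
    mu = (len(u) - 1) / 2
    cov = sum((a - mu) * (b - mu) for a, b in zip(ru, rv))
    den = (sum((a - mu) ** 2 for a in ru) *
           sum((b - mu) ** 2 for b in rv)) ** 0.5
    return cov / den if den else 0.0

def bioenv(species, env):
    """Return (rho, subset): the env-variable subset whose distance
    matrix best rank-matches the biotic Bray-Curtis matrix."""
    n = len(species)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    bio = [bray_curtis(species[i], species[j]) for i, j in pairs]
    best = (-2.0, ())
    for k in range(1, len(env[0]) + 1):
        for subset in combinations(range(len(env[0])), k):
            d = [euclid([env[i][v] for v in subset],
                        [env[j][v] for v in subset]) for i, j in pairs]
            best = max(best, (spearman(bio, d), subset))
    return best
```

The rho = 0.59 reported in the abstract is the output of exactly this kind of matching, computed over the full set of regional, estuary-wide and local variables.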
Abstract:
Query processing over the Internet involving autonomous data sources is a major task in data integration. It requires estimates of the costs of possible queries in order to select the one with the minimum cost. In this context, the cost of a query is affected by three factors: network congestion, server contention state, and the complexity of the query. In this paper, we study the effects of both network congestion and server contention state on the cost of a query; we refer to these two factors together as system contention states. We present a new approach to determining the system contention states by clustering the costs of a sample query. For each system contention state, we construct two cost formulas, for unary and join queries respectively, using multiple regression. When a new query is submitted, its system contention state is first estimated using either the time-slides method or the statistical method. The cost of the query is then calculated using the corresponding cost formulas, and the estimate is further adjusted to improve its accuracy. Our experiments show that our methods produce quite accurate cost estimates for queries submitted to remote data sources over the Internet.
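The two core steps described here (clustering sample-query costs into contention states, then fitting a per-state regression cost formula) can be sketched in a few lines. This is a simplified illustration, with only two states and a single explanatory variable, whereas the paper's method uses multiple regression over several variables:

```python
import statistics

def two_state_clusters(costs, iters=50):
    """1-D 2-means clustering of observed sample-query costs, to
    separate a 'low' and a 'high' system contention state."""
    lo, hi = min(costs), max(costs)
    for _ in range(iters):
        groups = ([], [])
        for c in costs:
            # index 1 when c is closer to the high centre
            groups[abs(c - lo) > abs(c - hi)].append(c)
        lo = statistics.mean(groups[0]) if groups[0] else lo
        hi = statistics.mean(groups[1]) if groups[1] else hi
    return lo, hi  # cluster centres = typical cost per state

def fit_cost_formula(sizes, costs):
    """Least-squares fit of cost ~ a + b*size for one contention state."""
    n = len(sizes)
    mx, my = sum(sizes) / n, sum(costs) / n
    b = sum((x - mx) * (y - my) for x, y in zip(sizes, costs)) / \
        sum((x - mx) ** 2 for x in sizes)
    return my - b * mx, b
```

At query time, the estimated contention state would select which fitted formula to apply.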
Abstract:
While WiFi monitoring networks have been deployed in previous research, to date none have assessed live network data from an open access, public environment. In this paper we describe the construction of a replicable, independent WLAN monitoring system and address some of the challenges in analysing the resultant traffic. Analysis of traffic from the system demonstrates that basic traffic information from open-access networks varies over time (temporal inconsistency). The results also show that arbitrary selection of Request-Reply intervals can have a significant effect on Probe and Association frame exchange calculations, which can impact on the ability to detect flooding attacks.
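The sensitivity to the chosen Request-Reply interval can be illustrated with a toy matcher: pair each Probe Request with the first unmatched response that follows within a threshold, and watch the count change with the threshold. This is a hypothetical sketch with timestamps in seconds, not the paper's actual matching code:

```python
def count_exchanges(requests, responses, max_interval):
    """Pair each probe-request timestamp with the first unmatched
    response that follows within `max_interval` seconds; returns the
    number of matched Request-Reply exchanges."""
    responses = sorted(responses)
    used = [False] * len(responses)
    matched = 0
    for t in sorted(requests):
        for i, r in enumerate(responses):
            if not used[i] and t <= r <= t + max_interval:
                used[i] = True
                matched += 1
                break
    return matched

reqs = [0.0, 1.0, 2.0]
resps = [0.05, 1.4, 5.0]
# The arbitrary threshold changes the result:
# 1 exchange at a 100 ms window, 2 exchanges at a 500 ms window.
```

An attack detector built on such counts inherits this sensitivity, which is the effect the paper measures.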
Abstract:
Does the use of HRM practices by multinational companies (MNCs) reflect their national origins or are practices similar regardless of context? To the extent that practices are similar, is there any evidence of global best standards? The authors use the system, societal, and dominance framework to address these questions through analysis of 1,100 MNC subsidiaries in Canada, Ireland, Spain, and the United Kingdom. They argue that this framework offers a richer account than alternatives such as varieties of capitalism. The study moves beyond previous research by differentiating between system effects at the global level and dominance effects arising from the diffusion of practices from a dominant economy. It shows that both effects are present, as are some differences at the societal level. Results suggest that MNCs configure their HRM practices in response to all three forces rather than to some uniform global best practices or to their national institutional contexts.
Abstract:
As a newly invented parallel kinematic machine (PKM), the Exechon has attracted intensive attention from both academic and industrial fields due to its conceptually high performance. Nevertheless, the dynamic behaviors of the Exechon PKM have not been thoroughly investigated because of its structural and kinematic complexity. To identify the dynamic characteristics of the Exechon PKM, an elastodynamic model is proposed in this paper using the substructure synthesis technique. The Exechon PKM is divided into a moving platform subsystem, a fixed base subsystem and three limb subsystems according to its structural features. Differential equations of motion for the limb subsystem are derived through finite element (FE) formulations by modeling the complex limb structure as a spatial beam with corresponding geometric cross sections. Meanwhile, revolute, universal, and spherical joints are simplified into virtual lumped springs with equivalent stiffnesses and masses at their geometric centers. Differential equations of motion for the moving platform are derived with Newton's second law after treating the platform as a rigid body due to its comparatively high rigidity. After introducing the deformation compatibility conditions between the platform and the limbs, the governing differential equations of motion for the Exechon PKM are derived. The solution of the characteristic equations leads to the natural frequencies and corresponding mode shapes of the PKM at any typical configuration. To predict the dynamic behaviors quickly, an algorithm is proposed to numerically compute the distributions of natural frequencies throughout the workspace. Simulation results reveal that the lower natural frequencies are strongly position-dependent and distributed axisymmetrically due to the structural symmetry of the limbs.
At the last stage, a parametric analysis is carried out to identify the effects of structural, dimensional, and stiffness parameters on the system's dynamic characteristics with the purpose of providing useful information for optimal design and performance improvement of the Exechon PKM. The elastodynamic modeling methodology and dynamic analysis procedure can be well extended to other overconstrained PKMs with minor modifications.
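The characteristic equation behind the natural frequencies above, det(K − ω²M) = 0, can be solved in closed form for a 2-DOF toy system, which illustrates the principle; the paper's FE model has far more degrees of freedom and is solved numerically:

```python
import math

def natural_frequencies_2dof(K, M):
    """Natural frequencies [rad/s] from det(K - w^2 * M) = 0 for a
    2-DOF system; K and M are 2x2 stiffness and mass matrices given
    as nested lists. Expands the determinant into a quadratic in
    l = w^2 and solves it directly."""
    a = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    b = -(K[0][0] * M[1][1] + K[1][1] * M[0][0]
          - K[0][1] * M[1][0] - K[1][0] * M[0][1])
    c = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    disc = math.sqrt(b * b - 4 * a * c)
    return sorted(math.sqrt((-b + s * disc) / (2 * a)) for s in (-1, 1))
```

Sweeping the platform pose, rebuilding K and M, and recording these frequencies at each configuration is the workspace-distribution computation the paper automates.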
Abstract:
A good verification strategy should bring the simulation and real operating environments closer together. In this paper we describe a system-level co-verification strategy that uses a common flow for functional simulation, timing simulation and functional debug. This last step requires a BST (boundary-scan test) infrastructure, now widely available on commercial devices, especially on FPGAs with medium/large pin counts.
Abstract:
An overwhelming problem in mathematics curricula at Higher Education Institutions (HEI), one we have faced daily over the last decade, is the substantial difference in the mathematical background of our students. It is frustrating to try to transmit, engage with and teach subjects whose content your "audience" is unable to respond to, or even to understand what you are trying to convey. In this sense, the Math projects and other didactic strategies developed through the Learning Management System Moodle, which include an array of activities combining higher-order thinking skills with mathematical subjects and technology, appear for HE students as remedial but important, proactive and innovative measures to face, and try to overcome, these considerable problems. In this paper we present some of these strategies, developed in several organic units of the Polytechnic Institute of Porto (IPP). But how "fruitful" are the endless hours teachers spend developing and implementing these platforms? Do students react to them as we would expect? Do they embrace this opportunity to overcome their difficulties? How do they interact individually with LMS platforms? Can this environment, which provides the teacher with many interesting tools to improve the teaching-learning process, encourage students to reinforce their abilities and knowledge? In what way do they use each available material: videos, interactive tasks, texts, among others? What is the best way to assess students' performance in these online learning environments? Learning Analytics tools provide us with a huge amount of data, but how can we extract "good" and helpful information from it?
These and many other questions remain unanswered, but we look forward to getting some help in at least drafting answers to them, because we feel that this "learning analysis", which tackles the path from the objectives to the actual results, is perhaps the only way we have to move forward in the "best" learning and teaching direction.
Abstract:
BACKGROUND: To date, there is no quality assurance program that correlates patient outcome to the perfusion service provided during cardiopulmonary bypass (CPB). A score was devised, incorporating objective parameters likely to influence patient outcome. The purpose was to create a new method for evaluating the quality of care the perfusionist provides during CPB procedures and to determine whether it predicts patient morbidity and mortality. METHODS: We analysed 295 consecutive elective patients. We chose 10 parameters: fluid balance, blood transfused, Hct, ACT, PaO2, PaCO2, pH, BE, potassium and CPB time. Distribution analysis was performed using the Shapiro-Wilk test. These parameters made up the PerfSCORE, and we sought correlations with mortality rate, patient stay in the ICU and length of mechanical ventilation. Univariate analysis (UA) using linear regression was performed for each parameter; statistical significance was established when p < 0.05. Multivariate analysis (MA) was performed with the same parameters. RESULTS: The mean age was 63.8 +/- 12.6 years with 70% males. There were 180 CABG, 88 valve, and 27 combined CABG/valve procedures. A PerfSCORE of 6.6 +/- 2.4 (0-20), mortality of 2.7% (8/295), CPB time of 100 +/- 41 min (19-313), ICU stay of 52 +/- 62 hrs (7-564) and mechanical ventilation of 10.5 +/- 14.8 hrs (0-564) were calculated. CPB time, fluid balance, PaO2, PerfSCORE and blood transfused were significantly correlated with mortality (UA, p < 0.05). CPB time, blood transfused and PaO2 were also parameters predicting mortality (MA, p < 0.01). Only pH was significantly correlated with ICU stay (UA). Ultrafiltration (UF) and CPB time were significantly correlated (UA, p < 0.01), while UF (p < 0.05) was the only parameter predicting mechanical ventilation duration (MA). CONCLUSIONS: CPB time, blood transfused and PaO2 are independent risk factors for mortality.
Fluid balance, blood transfusion, PaO2, PerfSCORE and CPB time are independent parameters for predicting morbidity. PerfSCORE is a quality of perfusion measure that objectively quantifies perfusion performance.
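A composite score of the kind described can be illustrated with a toy scoring scheme. The target ranges and the 0/1/2 penalty bands below are entirely hypothetical; the abstract does not specify how the ten parameters are weighted or banded:

```python
def perf_score(values, targets):
    """Hypothetical composite score: per parameter, add 0 if the
    value lies within its target range, 1 if within twice the range
    half-width of the range centre, and 2 otherwise. Lower is better."""
    score = 0
    for name, v in values.items():
        lo, hi = targets[name]
        half = (hi - lo) / 2
        dev = abs(v - (lo + hi) / 2)      # deviation from range centre
        if dev <= half:
            pass                           # in range: no penalty
        elif dev <= 2 * half:
            score += 1                     # mildly out of range
        else:
            score += 2                     # grossly out of range
    return score
```

Summing such per-parameter penalties over a whole CPB run, and regressing the total against outcomes, mirrors the structure of the analysis the abstract reports.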
Abstract:
The thesis is the outcome of experimental and theoretical investigations on a new compact drum-shaped microstrip antenna. A new compact antenna suitable for personal communication system (PCS), Global Positioning System (GPS) and array applications is developed and analysed. The generalised cavity model and the spatial Fourier transform technique are suitably modified for the analysis of the antenna. The predicted results are compared with experimental results and excellent agreement is observed. The experimental work done by the author in related fields is incorporated as three appendices in this thesis. A single-feed dual-frequency microstrip antenna is presented in Appendix A. Appendix B describes a new broadband dual-frequency microstrip antenna. The bandwidth enhancement effect of microstrip antennas through dielectric resonator loading is demonstrated in Appendix C.
Abstract:
This thesis describes the development and analysis of an Isosceles Trapezoidal Dielectric Resonator Antenna (ITDRA), realising different DR orientations with suitable feed configurations that enable its use in multiband, dual-band dual-polarised and wideband applications. The motivation for this work was the need for a compact, highly efficient, low-cost antenna suitable for multiband, dual-band dual-polarised and broadband operation, with the possibility of use with MICs, so as to ensure less expensive, more efficient and higher-quality wireless communication systems. To satisfy these challenging demands, a novel-shaped Dielectric Resonator (DR) is fabricated and investigated for the required properties by trying out different orientations of the DR on a simple microstrip feed, and with a slotted ground plane as well. The thesis initially discusses and evaluates recent and past developments within the microwave industry on this topic through a concise review of the literature. Then the theoretical aspects of DRAs and different feeding techniques are described. Following this, the fabrication and characterisation of the DRA are explained. To achieve the desired requirements, both simulations and experimental measurements were undertaken. A 3-D finite element method (FEM) electromagnetic simulation tool, HFSS™ by Agilent, is used to determine the optimum geometry of the dielectric resonator. It was found to be useful in producing approximate results, although it had some limitations. A numerical analysis technique, the finite-difference time-domain (FDTD) method, is used for validating the results of the wideband design at the end. MATLAB is used for modelling the ITDR and implementing the FDTD analysis. In conclusion, this work offers a new, efficient and relatively simple alternative for antennas to be used for multiple requirements in wireless communication systems.
Abstract:
A data centre is a centralised repository, either physical or virtual, for the storage, management and dissemination of data and information organised around a particular body, and it is the nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year, and to perform their functions they consume enormous energy in the present scenario. Tremendous growth in demand from the IT industry has made it customary to develop newer technologies for the better operation of data centres. Energy conservation activities in data centres mainly concentrate on the air conditioning system, since it is the major mechanical sub-system, consuming a considerable share of the total power of the data centre. The data centre energy metric is best represented by power usage effectiveness (PUE), defined as the ratio of the total facility power to the IT equipment power. Its value will be greater than one, and a large PUE value indicates that the sub-systems draw more power from the facility and that the performance of the data centre will be poor from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimising the air conditioning system brings enormous opportunities for bringing down the PUE value. The air conditioning system can be optimised by two approaches, namely thermal management and air flow management. Thermal management systems are now offered by some companies, but they are highly sophisticated and costly and have not yet attracted much attention.
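The PUE definition above is a single ratio, which the following lines make concrete (the kW figures in the example are hypothetical):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by IT
    equipment power. Always >= 1 in practice; lower values mean less
    overhead (cooling, lighting, power distribution losses)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: an 800 kW facility draw supporting a 500 kW IT load
ratio = pue(800, 500)   # 1.6, at the upper end of the 1.4-1.6 band cited
```

Cutting air-conditioning power from 300 kW to 200 kW in this example would lower the PUE from 1.6 to 1.4, which is exactly the optimisation lever the abstract discusses.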
Abstract:
This paper describes a trainable system capable of tracking faces and facial features like eyes and nostrils and estimating basic mouth features such as degree of openness and smile in real time. In developing this system, we have addressed the twin issues of image representation and algorithms for learning. We have used the invariance properties of image representations based on Haar wavelets to robustly capture various facial features. Similarly, unlike previous approaches, this system is entirely trained using examples and does not rely on a priori (hand-crafted) models of facial features based on optical flow or facial musculature. The system works in several stages that begin with face detection, followed by localization of facial features and estimation of mouth parameters. Each of these stages is formulated as a problem in supervised learning from examples. We apply the new and robust technique of support vector machines (SVM) for classification in the stages of skin segmentation, face detection and eye detection. Estimation of mouth parameters is modeled as a regression from a sparse subset of coefficients (basis functions) of an overcomplete dictionary of Haar wavelets.
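The SVM classifiers used in stages such as skin segmentation and face detection learn a separating decision function from labelled examples. A minimal from-scratch linear-SVM trainer (stochastic sub-gradient descent on the hinge loss, Pegasos-style) conveys the idea; it is a sketch only, and the paper's system additionally works on Haar-wavelet features rather than raw coordinates:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=500, lr=0.1):
    """Train a linear SVM by stochastic sub-gradient descent on the
    L2-regularised hinge loss. X: list of feature vectors; y: labels
    in {+1, -1}. Returns weights w and bias b."""
    w = [0.0] * len(X[0])
    b = 0.0
    rng = random.Random(0)                 # fixed seed for repeatability
    for _ in range(epochs):
        i = rng.randrange(len(X))
        margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
        w = [wj * (1 - lr * lam) for wj in w]   # L2 shrinkage every step
        if margin < 1:                      # inside margin: hinge gradient
            w = [wj + lr * y[i] * xj for wj, xj in zip(w, X[i])]
            b += lr * y[i]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 or -1."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

In the paper's pipeline, one such classifier per stage (skin, face, eyes) filters candidate regions before the mouth-parameter regression runs.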