952 results for Instantaneous angular speed analysis
Abstract:
Analysis of the peak-to-peak output current ripple amplitude in multiphase and multilevel inverters is presented in this PhD thesis. The current ripple is calculated from the alternating voltage component, and its peak-to-peak value is determined by the current slopes and the application times of the voltage levels within a switching period. Detailed analytical expressions for the distribution of the peak-to-peak current ripple over a fundamental period are given as a function of the modulation index. In all cases, reference is made to centered and symmetrical switching patterns, generated either by carrier-based or space vector PWM. Starting from the definition and analysis of the output current ripple in three-phase two-level inverters, the theoretical developments are extended to multiphase inverters, with emphasis on the five- and seven-phase cases. The instantaneous current ripple is introduced for a generic balanced multiphase load consisting of a series RL impedance and an ac back-emf (RLE). Simplified and effective expressions for the maximum of the output current ripple are derived, and the peak-to-peak current ripple diagrams are presented and discussed. The analysis of the output current ripple is also extended to multilevel inverters, specifically three-phase three-level inverters. In this case too, the analysis is carried out for a balanced three-phase RLE system, representing both motor loads and grid-connected applications, and the corresponding peak-to-peak current ripple diagrams are presented and discussed. In addition, simulation and experimental results are reported to verify the analytical developments in all cases. Configurations with different numbers of phases and different numbers of levels are compared, some useful conclusions are drawn, and several application examples are given.
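As a minimal illustration of the quantity being analyzed, the sketch below reconstructs the ripple waveform in one switching period from the voltage levels and their application times and returns its peak-to-peak value; the pure-inductance load model (R and back-emf variation neglected within the period) and all names are illustrative assumptions, not the thesis' exact formulation.

```python
import numpy as np

def peak_to_peak_ripple(v_levels, durations, L):
    """Peak-to-peak current ripple over one switching period (sketch).

    v_levels:  inverter output voltage applied in each sub-interval (V)
    durations: application time of each level (s), summing to the period
    L:         load inductance (H); within the period, the ripple slope
               of each sub-interval is (v - v_avg) / L
    """
    v = np.asarray(v_levels, dtype=float)
    dt = np.asarray(durations, dtype=float)
    v_avg = np.sum(v * dt) / np.sum(dt)              # average (fundamental) voltage
    di = (v - v_avg) / L * dt                        # ripple increment per sub-interval
    ripple = np.concatenate(([0.0], np.cumsum(di)))  # piecewise-linear waveform
    return ripple.max() - ripple.min()
```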
Abstract:
The beta decay of the free neutron is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables, some of which are sensitive to physics beyond the SM; among them are the correlation coefficients of the particles involved. The spectrometer aSPECT was designed to measure precisely the shape of the proton energy spectrum and to extract from it the electron-antineutrino angular correlation coefficient "a". A first test period (2005/2006) provided a proof of principle, but uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient "a" (Baessler et al., 2008, Europhys. Journ. A, 38, p. 17-26). A second measurement cycle (2007/2008) aimed to improve on the relative accuracy of previous experiments (Stratowa et al. (1978), Byrne et al. (2002)), da/a = 5%. The analysis of the data taken in that cycle is the focus of this doctoral thesis. A central point is the study of background: the systematic impact of background on "a" was reduced to da/a(syst.) = 0.61%, while the statistical accuracy of the analyzed measurements is da/a(stat.) = 1.4%. In addition, saturation effects of the detector electronics, observed early on, were investigated; these turned out not to be correctable at a sufficient level. A practicable way to avoid these saturation effects is discussed in the last chapter.
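For context, the coefficient $a$ enters the unpolarized neutron decay rate through the standard Jackson-Treiman-Wyld parametrization; aSPECT accesses it via the shape of the proton recoil spectrum rather than by measuring the electron-antineutrino angle directly:

\[
\frac{d^3\Gamma}{dE_e\, d\Omega_e\, d\Omega_{\bar\nu}} \;\propto\; F(E_e)\left[\,1 + a\,\frac{\vec p_e \cdot \vec p_{\bar\nu}}{E_e E_{\bar\nu}} + b\,\frac{m_e}{E_e}\,\right],
\]

where $F(E_e)$ is the allowed electron spectrum and $b$ the Fierz interference term.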
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation may involve sizing the pipes of the water distribution network (WDN), optimising specific components of the network such as pumps and tanks, or analysing and improving the reliability of the WDN. In this thesis, the author analyses two different WDNs (the Anytown and Cabrera city networks), formulating and solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision support system generator for multi-objective optimisation, GANetXL, developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL calls the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto fronts of each configuration. The first experiment concerned the Anytown network, a large network whose pumping station comprises four fixed-speed parallel pumps. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. This achieved substantial energy and cost savings along with a reduction in the number of pump switches. The results are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same: minimisation of energy consumption together with minimisation of TNps, again using GANetXL. The main scope was to carry out several experiments covering a wide variety of configurations: different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. These different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result indicates that a good optimisation point has been reached.
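The two objectives can be evaluated along the following lines (a minimal sketch under assumed data layouts; in the actual workflow GANetXL obtains pump energy from EPANET's hydraulic simulation rather than from a fixed power profile):

```python
import numpy as np

def total_pump_switches(schedule):
    """Total number of pump switches (TNps) over a day.

    schedule: array (n_pumps, n_timesteps) of on/off states (1/0);
    a switch is any state change between consecutive timesteps.
    """
    return int(np.abs(np.diff(np.asarray(schedule), axis=1)).sum())

def energy_cost(power_kw, tariff_eur_per_kwh, dt_h=1.0):
    """Energy cost (EUR) of a pump power profile under a time-of-use tariff."""
    return float(np.sum(np.asarray(power_kw) *
                        np.asarray(tariff_eur_per_kwh) * dt_h))
```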
Abstract:
The aim of this research is the development and validation of a comprehensive multibody motorcycle model featuring rigid-ring tires, taking into account both the slope and the roughness of road surfaces. A novel parametrization of the general kinematics of the motorcycle is proposed, using a mixed reference-point and relative-coordinates approach. The resulting description, developed in terms of dependent coordinates, makes it possible to efficiently include rigid-ring kinematics as well as road elevation and slope. The equations of motion for the multibody system are derived symbolically, and the constraint equations arising from the dependent-coordinate formulation are handled with a projection technique, so that the resulting system of equations can be integrated in the time domain using a standard ODE algorithm. The model is validated against maneuvers experimentally measured on the race track, showing consistent results and excellent computational efficiency. In particular, it is also capable of reproducing the chatter vibration of racing motorcycles. The chatter phenomenon, appearing during high-speed cornering maneuvers, consists of a self-excited vertical oscillation of both the front and rear unsprung masses in the frequency range between 17 and 22 Hz. A critical maneuver is numerically simulated, and a self-excited vibration appears, consistent with the experimentally measured chatter. Finally, the driving mechanism of the self-excitation is highlighted and a physical interpretation is proposed.
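One standard form of such a null-space projection (a generic rendering, not necessarily the thesis' exact formulation) is the following: with dependent coordinates $q$, constraints $\Phi(q)=0$, and a matrix $R(q)$ spanning the null space of the constraint Jacobian $\Phi_q$ (so that $\Phi_q R = 0$ and $\dot q = R\,z$), the Lagrange multipliers drop out and an ODE in the independent velocities $z$ remains:

\[
M(q)\,\ddot q = Q(q,\dot q) - \Phi_q^{T}\lambda
\quad\Longrightarrow\quad
\big(R^{T} M R\big)\,\dot z = R^{T}\big(Q - M\,\dot R\,z\big),
\]

which can then be integrated with a standard ODE solver, as stated above.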
Abstract:
The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks: the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of the thesis presents cluster-state computation for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that accords a central role to entanglement, namely the Everett interpretation. It is argued that, while cluster-state quantum computation does not expose an Everettian failure in accounting for the computational processes, it threatens to render that interpretation non-explanatory. The analysis presented here should be integrated into a more general work that also covers further frameworks of quantum computation, e.g. topological quantum computation. What this work reveals, however, is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. The existence of alternative, equivalent quantum computational models then suggests that the ultimate question should move from the speed-up to a sort of "representation theorem" for quantum computation, understood as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled "quantum computation".
Modelling, diagnostics and experimental analysis of plasma assisted processes for material treatment
Abstract:
This work presents results from experimental investigations of several atmospheric pressure plasma applications, such as Metal Inert Gas (MIG) welding, Plasma Arc Cutting (PAC) and Welding (PAW) sources, and Inductively Coupled Plasma (ICP) torches. The main diagnostic tool is High Speed Imaging (HSI), often assisted by Schlieren imaging to analyse phenomena that are not directly visible. Furthermore, starting from thermo-fluid-dynamic models developed by the University of Bologna group, these plasma processes have also been studied with new advanced models, focusing for instance on the interaction between a melting metal wire and a plasma, or considering non-equilibrium phenomena in the diagnostics of plasma arcs. Additionally, the experimental diagnostic tools developed for industrial thermal plasmas have also been used to characterize innovative low-temperature atmospheric pressure non-equilibrium plasmas, such as dielectric barrier discharges (DBDs) and plasma jets. These sources are driven by voltage pulses of a few kV with rise times of a few nanoseconds, to avoid the formation of a plasma arc, and have interesting applications in the surface functionalization of thermosensitive materials. To also investigate bio-medical applications of thermal plasmas, a self-developed quenching device was connected to an ICP torch. This device allowed the inactivation of several kinds of bacteria spread on Petri dishes while keeping the substrate temperature below 40 °C, a strict requirement for the treatment of living tissues.
Abstract:
In this thesis, we study the accretion of mass and angular momentum onto the discs of spiral galaxies from a global and a local perspective, comparing theoretical predictions with several observational datasets. First, we propose a method to measure the specific mass and radial growth rates of stellar discs, based on their star formation rate density profiles, and we apply it to a sample of nearby spiral galaxies. We find a positive radial growth rate for almost all galaxies in our sample. Our galaxies grow in size, on average, at one third of the rate at which they grow in mass. These results agree with theoretical expectations if the known scaling relations of disc galaxies do not evolve with time. We also propose a novel method to reconstruct accretion profiles and the local angular momentum of the accreting material from the observed structural and chemical properties of spiral galaxies. Applied to the Milky Way and to one external galaxy, our analysis indicates that accretion occurs at relatively large radii and has a local deficit of angular momentum with respect to the disc. Finally, we show how the structure and kinematics of hot gaseous coronae, which are believed to be the source of mass and angular momentum of massive spiral galaxies, can be reconstructed from their angular momentum and entropy distributions. We find that isothermal models with cosmologically motivated angular momentum distributions are compatible with several independent observational constraints. We also consider more complex baroclinic equilibria: we describe a new parametrization of these states, a new self-similar family of solutions, and a method to reconstruct structure and kinematics from the joint angular momentum/entropy distribution.
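Schematically (the precise definitions, e.g. the scale length used for the disc radius, follow the thesis), the two specific growth rates compared above can be written as

\[
\nu_M \equiv \frac{\dot M_\star}{M_\star}, \qquad
\nu_R \equiv \frac{\dot R_\star}{R_\star}, \qquad
\nu_R \simeq \tfrac{1}{3}\,\nu_M \ \ \text{(sample average)},
\]

with $\dot M_\star$ and $\dot R_\star$ inferred from the star formation rate density profile.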
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in computing the matrix function itself, but only its product with a vector, the problem becomes simpler, and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus on matrix powers and examine how well-known techniques can be adapted to the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by characteristics of the input matrices. Our results suggest that a few elements have a bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
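For concreteness, one common way to realize the Krylov approach for the pencil case is to run Arnoldi on $M = A^{-1}B$ (applied implicitly through solves with $A$) and evaluate the function on the small Hessenberg matrix. The sketch below, using the weight $t$ so that $(A \,\#_t\, B)\,v = A\,(A^{-1}B)^t v$, is an illustrative dense-matrix rendering, not the thesis' algorithm:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, lu_factor, lu_solve

def f_pencil_times_vector(A, B, v, t, m=30):
    """Approximate (A^{-1} B)^t v with m Arnoldi steps on M = A^{-1} B."""
    n = len(v)
    lu = lu_factor(A)                       # factor A once, reuse per matvec
    matvec = lambda x: lu_solve(lu, B @ x)  # implicit application of A^{-1} B

    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):                      # Arnoldi with modified Gram-Schmidt
        w = matvec(V[:, j])
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]

    fH = fractional_matrix_power(H[:m, :m], t)  # small m x m dense problem
    return beta * (V[:, :m] @ fH[:, 0])         # f(M) v ~ beta * V_m f(H_m) e_1

# (A #_t B) v is then recovered as A @ f_pencil_times_vector(A, B, v, t).
```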
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the discrepancies mentioned above. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to obtain a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Furthermore, the singular value analysis of the structure and its approximation allows the crucial interaction parameters to be determined, that is, the modeling of interactions to be simplified. Our results thus build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
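A minimal sketch of the two ingredients named above, a damped (regularizing) Levenberg-Marquardt update and a singular value analysis of the sensitivities, under generic residual/Jacobian callables (the actual structural observables and parametrization follow the thesis):

```python
import numpy as np

def lm_step(residual, jacobian, p, lam):
    """One regularized Levenberg-Marquardt update of the parameters p.

    residual(p): misfit between simulated and target structure (e.g. RDF bins)
    jacobian(p): sensitivity of the structure to the interaction parameters
    lam:         damping; larger lam gives smaller, more stable steps
    """
    r, J = residual(p), jacobian(p)
    delta = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
    return p + delta

def weak_directions(J, tol=1e-6):
    """Parameter combinations with negligible influence on the structure:
    right singular vectors belonging to small singular values of J."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt[s < tol * s.max()]
```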
Abstract:
In many cases, it is not possible to hold motorists to account for considerably exceeding the speed limit, because they deny being the driver in the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that method is that the comparison photographs of the presumed driver are taken with a different camera or lens, and from a different angle, than the speed-check photo; taking a comparison photograph with exactly the same camera setup is almost impossible. Therefore, only an imprecise comparison of the individual facial features is possible, and the geometry and position of each facial feature, for example the distance between the eyes or the positions of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization, and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. Thus, the influence of the focal length and of the distortion of the lens are eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even for low-quality images, or when the face of the driver is partly hidden, this method delivers good results. This new method, Geometric Comparison, is evaluated and validated in a prepared study described in this article.
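In photogrammetric terms, the resection recovers the exterior orientation (pose $R$, $t$) and the interior parameters of the speed-check camera from the pinhole model (a schematic form; the study's actual camera model, including lens distortion terms, may differ):

\[
\lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= K\,[\,R \mid t\,]
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
K = \begin{pmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{pmatrix},
\]

so that, once $K$, $R$ and $t$ are known, the 3D-scanned head of the presumed driver can be projected into the geometry of the speed-check photo for comparison.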
Abstract:
The Default Mode Network (DMN) is a higher-order functional neural network that displays activation during passive rest and deactivation during many types of cognitive tasks. Accordingly, the DMN is viewed as the neural correlate of internally generated, self-referential cognition. This hypothesis implies that the DMN requires the involvement of cognitive processes such as declarative memory. The present study therefore examines the spatial and functional convergence of the DMN and the semantic memory system. Using an active block-design functional Magnetic Resonance Imaging (fMRI) paradigm and Independent Component Analysis (ICA), we trace the DMN and the fMRI signal changes evoked by semantic, phonological and perceptual decision tasks on visually presented words. Our findings show less deactivation during the semantic task than during the two non-semantic tasks, both for the DMN as a whole and within left-hemispheric DMN regions, i.e., the dorsal medial prefrontal cortex, the anterior cingulate cortex, the retrosplenial cortex, the angular gyrus, the middle temporal gyrus and the anterior temporal region, as well as the right cerebellum. These results demonstrate that well-known semantic regions are spatially and functionally involved in the DMN. The present study further supports the hypothesis of the DMN as an internal mentation system that involves declarative memory functions.
Abstract:
Motivation: Array CGH technologies enable the simultaneous measurement of DNA copy number at thousands of sites on a genome. We developed the circular binary segmentation (CBS) algorithm to divide the genome into regions of equal copy number (Olshen {\it et~al}, 2004). The algorithm tests for change-points using a maximal $t$-statistic with a permutation reference distribution to obtain the corresponding $p$-value. The number of computations required for the maximal test statistic is $O(N^2)$, where $N$ is the number of markers. This makes the full permutation approach computationally prohibitive for the newer arrays that contain tens of thousands of markers, and highlights the need for a faster algorithm. Results: We present a hybrid approach that obtains the $p$-value of the test statistic in linear time. We also introduce a rule for stopping early when there is strong evidence for the presence of a change. We show through simulations that the hybrid approach provides a substantial gain in speed with only a negligible loss in accuracy, and that the stopping rule further increases speed. We also present the analysis of array CGH data from a breast cancer cell line to show the impact of the new approaches on the analysis of real data. Availability: An R (R Development Core Team, 2006) version of the CBS algorithm has been implemented in the ``DNAcopy'' package of the Bioconductor project (Gentleman {\it et~al}, 2004). The proposed hybrid method for the $p$-value is available in version 1.2.1 or higher, and the stopping rule for declaring a change early is available in version 1.5.1 or higher.
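A naive rendering of the $O(N^2)$ scan that the hybrid approach avoids, assuming unit variance for simplicity (the DNAcopy implementation additionally estimates the variance and applies the hybrid $p$-value and early-stopping rules described above):

```python
import numpy as np

def max_cbs_statistic(x):
    """Maximal circular binary segmentation statistic (naive O(N^2) scan).

    For every arc (i, j] of the circularized sequence, compares the mean
    inside the arc with the mean outside it and returns the largest |t|.
    Arcs that wrap around give the same |t| as their complements, so
    scanning contiguous segments suffices.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    S = np.concatenate(([0.0], np.cumsum(x)))   # S[k] = x[0] + ... + x[k-1]
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):
            k = j - i                           # arc length
            if k == n:
                continue                        # complement would be empty
            mean_in = (S[j] - S[i]) / k
            mean_out = (S[n] - S[j] + S[i]) / (n - k)
            t = (mean_in - mean_out) * np.sqrt(k * (n - k) / n)
            best = max(best, abs(t))
    return best

# Permutation reference distribution: re-run max_cbs_statistic on
# np.random.permutation(x) to obtain the p-value of the observed maximum.
```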
Abstract:
Among the many applications of microarray technology, one of the most popular is the identification of genes that are differentially expressed in two conditions. A common statistical approach is to quantify the interest of each gene with a p-value, adjust these p-values for multiple comparisons, choose an appropriate cut-off, and create a list of candidate genes. This approach has been criticized for ignoring biological knowledge regarding how genes work together. Recently, a series of methods that do incorporate biological knowledge have been proposed. However, many of these methods seem overly complicated. Furthermore, the most popular method, Gene Set Enrichment Analysis (GSEA), is based on a statistical test known for its lack of sensitivity. In this paper we compare the performance of a simple alternative to GSEA. We find that this simple solution clearly outperforms GSEA, and we demonstrate this with eight different microarray datasets.
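For illustration, a minimal example of one such simple set-level test (a z-score on the set-averaged per-gene statistic; shown as a generic sketch, not necessarily the authors' exact procedure):

```python
import numpy as np

def gene_set_z(t_stats, gene_set):
    """Set-level z-score from per-gene t-like statistics.

    If the per-gene statistics are roughly standard normal under the null,
    their mean over a set of n genes, scaled by sqrt(n), is again roughly
    standard normal, giving a simple and sensitive set-level test.
    """
    t = np.asarray(t_stats, dtype=float)[list(gene_set)]
    return np.sqrt(t.size) * t.mean()
```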
Abstract:
OBJECTIVE: Maintenance of a good walking speed is essential to independent living, and people with musculoskeletal disease often have reduced walking speed. We investigated determinants of slower walking other than musculoskeletal disease that might provide valuable additional targets for therapy. METHODS: We analyzed data from the Somerset and Avon Survey of Health, a community-based survey of people aged over 35 years. A total of 2703 participants who reported hip or knee pain at baseline (1994/1995) were studied and reassessed in 2002-2003; 1696 were available for followup, and walking speed was tested in 1074. Walking speed (m/s) was used as the outcome measure. Baseline characteristics, including comorbidities and socioeconomic factors, were tested for their ability to predict reduced walking speed using multiple linear regression analysis. RESULTS: Age, female sex, and immobility at baseline were predictive of slower walking speed. Other independent risk factors included the presence of cataract, low socioeconomic status, intermittent claudication, and other cardiovascular conditions. Having a cataract was associated with a decrease of 0.10 m/s (95% CI 0.03, 0.16). Those in social class V walked 0.22 m/s (95% CI 0.126, 0.31) slower than those in social class I. CONCLUSION: Comorbidities, age, female sex, and lower socioeconomic position determine walking speed in people with joint pain. Issues such as poor vision and socioeconomic disadvantage may add to the effect of musculoskeletal disease, suggesting the need for a holistic approach to the management of these patients.
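A sketch of the kind of model used in the analysis (the column names and file are hypothetical placeholders, not the survey's actual coding):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per participant with baseline predictors
# and the measured walking speed (m/s) at follow-up.
df = pd.read_csv("somerset_avon_followup.csv")  # placeholder file name
model = smf.ols(
    "walking_speed ~ age + female + immobile_baseline + cataract"
    " + claudication + C(social_class)",
    data=df,
).fit()
print(model.summary())  # coefficients are differences in walking speed (m/s)
```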
Abstract:
When a single brushless dc motor is fed by an inverter with a sensorless algorithm embedded in the switching controller, the system exhibits a linear and stable output in terms of speed and torque. However, with two motors modulated by the same inverter, the system is unstable and rendered useless for steady applications unless some resistive damping is provided on the supply lines. The project discusses and analyses the stability of such a system through simulations and hardware demonstrations, and also presents a method to derive the values of this damping resistance.
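A schematic of how such damping values can be derived (the small-signal model A_of_R of the two-motor/single-inverter system is an assumed user-supplied ingredient; only the stability criterion and the sweep are shown):

```python
import numpy as np

def stable(A):
    """Small-signal stability: all eigenvalues strictly in the left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def minimum_damping(A_of_R, R_grid):
    """Return the smallest damping resistance in R_grid that stabilizes the
    linearized model; A_of_R(R) builds the state matrix for resistance R."""
    for R in sorted(R_grid):
        if stable(A_of_R(R)):
            return R
    return None  # no value in the grid stabilizes the system
```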