Abstract:
Primary voice production occurs in the larynx through the vibrational movement of the vocal folds. Many disorders, however, can affect this complex system, resulting in voice pathologies. In this context, time-frequency-shape analysis based on embedded phase-space plots and nonlinear dynamics methods has been used to evaluate vocal fold dynamics during phonation. For this purpose, the present work used high-speed video to record the vocal fold movements of three subjects and extracted the glottal area time series using an image segmentation algorithm. This signal drives an optimization method that combines genetic algorithms and a quasi-Newton method to fit the parameters of a biomechanical model of the vocal folds based on lumped elements (masses, springs, and dampers). After optimization, the model is capable of simulating the dynamics of the recorded vocal folds and their glottal pulse. Bifurcation diagrams and phase-space analysis were used to evaluate the behavior of this deterministic system under different conditions. The results showed that this methodology can be used to extract physiological parameters of the vocal folds and to reproduce some of the complex behaviors of these structures, contributing to the scientific and clinical evaluation of voice production.
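The forward model at the core of this procedure can be sketched with a single lumped element; the following is a minimal illustration, with assumed parameter values, of how a mass-spring-damper fold driven by subglottal pressure yields a glottal-area time series (the paper's multi-element model, unlike this toy, sustains self-oscillation):

```python
# A minimal lumped-element (mass-spring-damper) sketch of one vocal fold.
# All parameter values and the driving-force form are illustrative, not the
# paper's: a single-mass model with constant driving settles to equilibrium,
# whereas the paper's optimized multi-element model self-oscillates.
import numpy as np
from scipy.integrate import solve_ivp

m, r, k = 1e-4, 0.02, 80.0   # mass [kg], damping [N s/m], stiffness [N/m] (assumed)
x0, L   = 2e-4, 14e-3        # rest half-width [m], fold length [m] (assumed)
Ps, d   = 800.0, 3e-3        # subglottal pressure [Pa], fold depth [m] (assumed)

def rhs(t, y):
    x, v = y
    F = Ps * L * d if x0 + x > 0 else 0.0   # pressure drives the fold only while open
    return [v, (F - r * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 0.05), [0.0, 0.0], max_step=1e-5, dense_output=True)
t = np.linspace(0.0, 0.05, 2000)
x = sol.sol(t)[0]
glottal_area = np.maximum(0.0, 2 * L * (x0 + x))  # simulated glottal-area series [m^2]
```

The fitting stage could then be approximated by minimising the squared error between such a simulated series and the segmented glottal area, for instance with scipy.optimize.differential_evolution as a population-based stand-in for the genetic algorithm, refined by a quasi-Newton method such as L-BFGS-B.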
Abstract:
OBJECTIVE: Sepsis is a common condition encountered in hospital environments. There is no effective treatment for sepsis, and it remains an important cause of death in intensive care units. This study aimed to discuss some methods that are available in clinics, and tests that have been developed recently for the diagnosis of sepsis. METHODS: A systematic review was performed through the analysis of the following descriptors: sepsis, diagnostic methods, biological markers, and cytokines. RESULTS: The deleterious effects of sepsis are caused by an imbalance between the invasiveness of the pathogen and the ability of the host to mount an effective immune response. Consequently, the host's immune surveillance fails to eliminate the pathogen, allowing it to spread. Moreover, pro-inflammatory mediators are released and the coagulation and complement cascades are inappropriately activated, leading to dysfunction of multiple organs and systems. The difficulty of achieving total recovery of the patient is thus explainable. The incidence of sepsis is increasing worldwide due to factors such as the aging of the population, the larger number of surgeries performed, and the growing number of microorganisms resistant to existing antibiotics. CONCLUSION: The search for new diagnostic markers associated with an increased risk of developing sepsis, and for molecules that can be correlated with particular stages of sepsis, is becoming necessary. This would allow earlier diagnosis, facilitate the characterization of patient prognosis, and help predict the possible evolution of each case. All other markers are, regrettably, still confined to research units.
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned a feasibility study for an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, increasing the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application-Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board will be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As regards the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations intended for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line at CERN, Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter, I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 µm showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/√12 formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
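For reference, the binary-readout resolution formula quoted above, and the quadrature subtraction used to extract the intrinsic resolution from the residual width, can be sketched as follows; the pitch and width values are illustrative, not the measured ones:

```python
# Sketch: binary-readout pixel resolution and quadrature subtraction.
# All numbers are illustrative, not the test-beam measurements.
import math

pitch = 50.0                      # pixel pitch [um] (assumed)
expected = pitch / math.sqrt(12)  # ~14.4 um for a 50 um pitch

sigma_residual = 16.0   # width of the measured residual distribution [um] (assumed)
sigma_telescope = 7.0   # telescope pointing + multiple scattering term [um] (assumed)

# Intrinsic resolution: subtract the known contributions in quadrature.
sigma_intrinsic = math.sqrt(sigma_residual**2 - sigma_telescope**2)
print(f"expected {expected:.1f} um, intrinsic {sigma_intrinsic:.1f} um")
```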
Abstract:
Several diagnostic techniques are presented for the detection of electrical faults in induction motor variable-speed drives. These techniques are developed taking into account the impact of the control system on the machine variables and non-stationary operating conditions.
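As an illustration of one classical technique in this family (not necessarily one of those presented here), motor current signature analysis looks for fault sidebands around the supply frequency in the stator current spectrum; for a broken rotor bar these appear at (1 ± 2s)f for slip s. A sketch on synthetic data, with assumed signal parameters:

```python
# Sketch of motor current signature analysis (MCSA) on a synthetic stator
# current: a broken-rotor-bar fault produces sidebands at (1 +/- 2s)*f.
# Signal parameters are illustrative.
import numpy as np

fs, f, slip = 10_000.0, 50.0, 0.03          # sample rate, supply freq, slip (assumed)
t = np.arange(0, 10.0, 1 / fs)
i = (np.sin(2 * np.pi * f * t)              # fundamental
     + 0.02 * np.sin(2 * np.pi * (1 - 2 * slip) * f * t)   # lower sideband
     + 0.02 * np.sin(2 * np.pi * (1 + 2 * slip) * f * t))  # upper sideband

spec = np.abs(np.fft.rfft(i * np.hanning(len(i)))) / len(i)
freqs = np.fft.rfftfreq(len(i), 1 / fs)

for target in [(1 - 2 * slip) * f, (1 + 2 * slip) * f]:
    k = np.argmin(np.abs(freqs - target))
    print(f"sideband at {freqs[k]:.1f} Hz, amplitude {spec[k]:.4f}")
```

Under the non-stationary operating conditions considered in this work, a plain FFT of this kind is no longer adequate and time-frequency methods are required.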
Abstract:
This book is dedicated to a law and economics analysis of the civil liability of securities underwriters for damage caused by material misstatements of corporate information by securities issuers. It seeks to answer a series of important questions. Who are the underwriters and what is their main role in a securities offering? Why is there a need for legal intervention in the underwriting market? What is so special about civil liability as an enforcement tool? How is civil liability used in the real world, and does it really reach its goals? Finally, is there a need for change and, if so, by what means?
Abstract:
The subject of the presented thesis is the accurate measurement of time dilation, aiming at a quantitative test of special relativity. By means of laser spectroscopy, the relativistic Doppler shifts of a clock transition in the metastable triplet spectrum of ⁷Li⁺ are measured simultaneously with and against the direction of motion of the ions. By employing saturation or optical double-resonance spectroscopy, the Doppler broadening caused by the ions' velocity distribution is eliminated. From these shifts, both the time dilation and the ion velocity can be extracted with high accuracy, allowing a test of the predictions of special relativity. A diode laser and a frequency-doubled titanium-sapphire laser were set up for antiparallel and parallel excitation of the ions, respectively. To achieve the robust control of the laser frequencies required for the beam times, a redundant system of frequency standards was developed, consisting of a rubidium spectrometer, an iodine spectrometer, and a frequency comb. At the experimental section of the ESR, an automated laser beam guiding system for exact control of polarisation, beam profile, and overlap with the ion beam was built up, together with a fluorescence detection system. During the first experiments, the production, acceleration, and lifetime of the metastable ions at the GSI heavy ion facility were investigated for the first time. The characterisation of the ion beam made it possible for the first time to measure its velocity directly via the Doppler effect, which resulted in a new, improved calibration of the electron cooler. In the following step, the first sub-Doppler spectroscopy signals from an ion beam at 33.8% of the speed of light could be recorded. The unprecedented accuracy of these experiments allowed a new upper bound for possible higher-order deviations from special relativity to be derived. Moreover, future measurements with the experimental setup developed in this thesis have the potential to improve the sensitivity to low-order deviations by at least one order of magnitude compared to previous experiments, and will thus further contribute to the test of the standard model.
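The consistency relation being tested can be sketched numerically: for resonant excitation parallel and antiparallel to the beam, special relativity predicts that the product of the two laboratory laser frequencies equals the squared rest-frame frequency, ν_p ν_a = ν_0², independent of the beam velocity. A minimal check, with an assumed rest frequency:

```python
# Sketch of the Ives-Stilwell consistency relation nu_p * nu_a = nu_0**2:
# lab frequencies of the co- and counter-propagating lasers resonant with a
# transition of rest-frame frequency nu_0 in an ion beam at velocity beta*c.
# nu_0 is an assumed illustrative value.
import math

beta = 0.338                      # ion velocity in units of c (from the abstract)
gamma = 1 / math.sqrt(1 - beta**2)
nu_0 = 5.47e14                    # rest-frame transition frequency [Hz] (assumed)

nu_parallel = nu_0 / (gamma * (1 - beta))      # copropagating laser, lab frame
nu_antiparallel = nu_0 / (gamma * (1 + beta))  # counterpropagating laser, lab frame

# In special relativity the product reproduces nu_0**2 exactly, for any beta.
print(nu_parallel * nu_antiparallel / nu_0**2)  # -> 1.0 (up to rounding)
```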
Abstract:
This thesis is a collection of essays about the instrumental use of commitment decisions to facilitate the completion of the European internal electricity market. European policy can shape markets in many ways, the two most evident being regulation and competition enforcement, and the interplay between these two instruments attracts a great deal of scholarly attention. One of the major concerns in the competition-versus-regulation debate is the instrumental use of competition rules. It has been observed that competition enforcement is triggered not only as a response to anticompetitive harm occurring in the market, but that it sometimes becomes a powerful tool in the European Commission's hands to pursue regulatory goals. This thesis looks for examples of such instrumentalisation in the context of electricity markets and finds that the Commission is very pragmatic in using all the instruments it has at hand to push forward its project of creating the internal electricity market. This includes regulation, competition enforcement and all sorts of political pressure. To the extent that commitment decisions accelerate sector-specific regulation and overcome political deadlocks, they contribute to the Commission's energy policy goals. However, the instrumentalisation of competition rules comes at a certain cost to competition policy, to energy policy and, most importantly, to the electricity markets themselves. Markets might be negatively affected either indirectly, through the application of sector-specific regulation or competition policy building on previous commitment decisions, or directly, through the implementation of inadequate commitments in individual cases. In conclusion, commitment decisions have generally contributed to achieving the policy objectives of the internal electricity market, but their use for that purpose does not come without cost. Given that this cost is ultimately borne by the internal electricity market, the Commission should take a more balanced approach to the instrumental use of commitment decisions, so that it does not do more harm than good.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is divided accordingly, in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous works are achieved through the more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper laminates is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
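The one-dimensional power-balance idea at the core of the first model can be sketched as follows: neglecting conduction losses, the absorbed beam power must supply the enthalpy flux of the material removed along the cut. The material values below approximate aluminium, and the beam and geometry figures are illustrative, not taken from the experiments:

```python
# Sketch of a steady-state power balance for laser cutting: absorbed power
# must supply the enthalpy flux of material removed by vaporisation.
# Material values approximate aluminium; all numbers are illustrative.
rho    = 2700.0          # density [kg/m^3]
cp     = 900.0           # specific heat [J/(kg K)]
Lm     = 4.0e5           # latent heat of melting [J/kg]
Lv     = 1.05e7          # latent heat of vaporisation [J/kg]
T0, Tv = 300.0, 2740.0   # ambient and boiling temperatures [K]
A      = 0.1             # optical absorptivity (assumed)

v = 1.0                  # cutting speed [m/s] (assumed)
d = 20e-6                # film thickness [m] (assumed)
w = 30e-6                # kerf width [m] (assumed)

# Enthalpy per kg to heat from T0 to Tv, melt, and vaporise.
h = cp * (Tv - T0) + Lm + Lv
P_laser = rho * v * d * w * h / A   # required incident beam power [W]
print(f"required beam power ~ {P_laser:.0f} W")
```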
Abstract:
Model helicopter flying is a passion that brings together an ever-growing number of people: new events are organized all over the world and new disciplines are continually being proposed. This is the case of the speed discipline, in which pilots compete to fly their model helicopters at the highest possible speeds. The company SAB Heli Division s.r.l., as a manufacturer of model helicopter blades and of the Goblin series of helicopters, has an interest in supporting its pilots with its own machines, making them fast and competitive. It therefore wanted to develop a blade that, mounted on its helicopter dedicated to this discipline, could beat the competition, with the ambition of setting an international speed record. The problem was therefore to develop a blade that best exploits the characteristics of the Goblin Speed model helicopter, so as to make the most of the power installed on board. Because of the limited means available, the optimization was carried out using blade element theory. The computation was set up by determining the mean power over one rotor revolution in forward flight at 270 km/h; the global optimization algorithms available in MATLAB were then used to search for the rotor that permits flight at that speed, varying the rotor disc radius, the blade twist, and the chord distribution along the blade. To obtain more accurate results, models were used to estimate the induced velocity field and the effects of dynamic stall. In addition, other quantities were estimated for which no real data are known, or for which obtaining a precise value would be too complex given the available knowledge; realistic estimates were nevertheless sought. Among these quantities are the aerodynamic characteristics of the NACA 0012 airfoil used, obtained by two-dimensional CFD analysis, the collective and cyclic pitch commands that trim the aircraft, and the aerodynamic drag of the whole model helicopter. The results of the computation were compared first of all with the solutions already adopted by the company. The blade was then manufactured, and flight tests were carried out to evaluate the performance of the machine equipped with it. Despite the approximations adopted, it was observed that the blade designed from the optimization results reflects the intended philosophy: at speeds comparable to those obtained with the blades produced by SAB Heli Division, the required power is indeed lower. However, it was not possible to obtain an actual improvement in flight speed, presumably because of the estimates of the aerodynamic characteristics of the various parts of the Goblin Speed.
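The power evaluation at the heart of the optimization can be sketched with a stripped-down blade-element computation, here in Python for illustration (the thesis used MATLAB): the tangential velocity is evaluated over a grid of radial and azimuth stations, a NACA 0012-like drag polar gives the section drag, and the profile power is integrated over the span and averaged over one revolution. Induced velocity, dynamic stall and trim, which the thesis modelled explicitly, are neglected, and all parameter values are assumed:

```python
# Minimal blade-element sketch of the mean rotor profile power over one
# revolution in forward flight. All parameter values are assumed.
import numpy as np
from scipy.integrate import trapezoid

rho   = 1.225                    # air density [kg/m^3]
Nb    = 2                        # number of blades (assumed)
R     = 0.40                     # rotor radius [m] (assumed)
chord = 0.05                     # constant chord [m] (assumed)
theta = np.radians(6.0)          # constant pitch, no twist [rad] (assumed)
omega = 2300.0 * 2 * np.pi / 60  # rotor speed [rad/s] (assumed)
V     = 270.0 / 3.6              # flight speed [m/s] (270 km/h)

cd0, k = 0.010, 1.1              # NACA 0012-like polar: cd = cd0 + k*alpha^2 (assumed)

r   = np.linspace(0.15 * R, R, 60)        # radial stations (root cut-out assumed)
psi = np.linspace(0.0, 2 * np.pi, 240)    # azimuth stations
Rr, Psi = np.meshgrid(r, psi)

Ut = omega * Rr + V * np.sin(Psi)         # tangential velocity at each element
cd = cd0 + k * theta**2                   # crude: AoA taken equal to pitch
dP = 0.5 * rho * np.abs(Ut) ** 3 * chord * cd   # profile power per unit span

P_mean = Nb * trapezoid(dP, r, axis=1).mean()   # span integral, azimuth average
print(f"mean profile power ~ {P_mean:.0f} W")
```

The actual optimization would wrap a computation of this kind (extended with inflow, stall and trim models) in a global search over disc radius, twist and chord distribution.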
Abstract:
The aim of this study was to examine whether a real high-speed, short-term competition influences clinicopathological data, focusing on muscle enzymes, the iron profile and acute phase proteins. Thirty Thoroughbred racing horses (15 geldings and 15 females), aged 4-12 years (mean 7 years), were used for the study. All the animals performed a high-speed, short-term competition over a total distance of 154 m in about 12 seconds, repeated 8 times within approximately one hour (Niballo Horse Race). Blood samples were obtained 24 hours before and within 30 minutes after the end of the races. A complete blood count (CBC) and biochemical and haemostatic profiles were performed on all samples. The post-race concentration of each parameter was corrected using an estimate of the plasma volume contraction based on the individual albumin (Alb) concentration. Data were analysed with descriptive statistics, and the percentage variation from baseline values was recorded. Pre- and post-race results were compared with non-parametric statistics (Mann-Whitney U test); a difference was considered significant at p<0.05. A significant plasma volume contraction after the race was detected (Hct, Alb; p<0.01). Other relevant findings were increased concentrations of muscle enzymes (CK, LDH; p<0.01) and creatinine (Crt; p<0.01), significantly increased uric acid (p<0.01), a significant decrease in haptoglobin (p<0.01) associated with an increase in ferritin concentrations (p<0.01), and a significant decrease in fibrinogen (p<0.05) accompanied by a non-significant increase in D-dimer concentrations (p=0.08). This competition produced relevant clinical pathology abnormalities in galloping horses. This study confirms significant muscle damage, oxidative stress, intravascular haemolysis and subclinical haemostatic alterations. Further studies are needed to better understand the pathogenesis, the medical relevance and the impact on performance of these alterations in equine sports medicine.
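A common form of such a plasma-volume correction scales each post-race value by the pre-/post-race albumin ratio before testing; since the abstract does not give the exact formula used, the albumin-ratio scaling below is an assumption, and all values are made-up examples:

```python
# Sketch: albumin-ratio correction for plasma volume contraction, followed by
# a Mann-Whitney U comparison. Values are made-up examples, and the simple
# pre/post albumin scaling is an assumed form of the correction.
import numpy as np
from scipy.stats import mannwhitneyu

alb_pre  = np.array([30.0, 31.5, 29.8, 32.1])   # g/L, pre-race (example)
alb_post = np.array([34.5, 35.2, 33.9, 36.0])   # g/L, post-race (example)

ck_pre  = np.array([180.0, 210.0, 195.0, 205.0])  # U/L (example)
ck_post = np.array([420.0, 510.0, 460.0, 495.0])  # U/L (example)

# Scale post-race values back to the pre-race plasma volume.
ck_post_corrected = ck_post * alb_pre / alb_post

u, p = mannwhitneyu(ck_pre, ck_post_corrected, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```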
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation governs the sizing of the pipes in the water distribution network (WDN) and/or optimises specific parts of the network, such as pumps and tanks, or analyses and optimises the reliability of the WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision support system generator for multi-objective optimisation, GANetXL, developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis has to be performed. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto fronts of each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps. The main intervention was to replace these pumps with variable-speed driven pumps (VSDPs), by installing inverters capable of varying their speed during the day. In this way, substantial energy and cost savings were achieved, together with a reduction in the number of pump switches. The results of this experiment are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same: the minimisation of energy consumption in parallel with the minimisation of TNps, using the same optimisation tool (GANetXL). The main scope was to carry out several different experiments over a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options to deal with: a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
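The two competing objectives can be made concrete with a toy pump-scheduling model: a candidate solution is a 24-hour on/off schedule, energy cost is summed against an hourly tariff, switches are counted, and the non-dominated schedules form the Pareto front. Random search plus Pareto filtering stands in for NSGA-II, the tariff, pump power and demand constraint are invented, and a real study would evaluate each schedule hydraulically through EPANET, as GANetXL does:

```python
# Toy two-objective pump-scheduling problem: minimise daily energy cost and
# the number of pump switches. Random search plus Pareto filtering stands in
# for NSGA-II; the tariff, pump power and demand constraint are invented.
import random

random.seed(1)
tariff = [0.06] * 7 + [0.12] * 10 + [0.18] * 4 + [0.06] * 3   # EUR/kWh per hour
P_PUMP_KW = 75.0      # pump electrical power [kW] (assumed)
MIN_ON_HOURS = 12     # pumping hours needed to meet daily demand (assumed)

def objectives(schedule):  # schedule: 24 on/off flags, one per hour
    cost = sum(P_PUMP_KW * c for on, c in zip(schedule, tariff) if on)
    switches = sum(a != b for a, b in zip(schedule, schedule[1:]))
    return cost, switches

pool = [tuple(random.randint(0, 1) for _ in range(24)) for _ in range(5000)]
feasible = [s for s in pool if sum(s) >= MIN_ON_HOURS]
fronts = [(objectives(s), s) for s in feasible]

def is_dominated(f):
    return any(g[0] <= f[0] and g[1] <= f[1] and g != f for g, _ in fronts)

pareto = sorted({f for f, _ in fronts if not is_dominated(f)})
for cost, switches in pareto:
    print(f"cost {cost:7.2f} EUR   switches {switches:2d}")
```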
Abstract:
The recent financial crisis triggered an increasing demand for financial regulation to counteract the potential negative economic effects of the ever more complex operations and instruments available on financial markets. As a result, insider trading regulation counts among the relatively recent but particularly active regulatory battles in Europe and overseas. Claims for more transparency and more equitable securities markets proliferate, ranging from concerns about investor protection to global market stability. The internationalization of the world's securities markets has challenged traditional notions of regulation and enforcement. Considering that insider trading is currently forbidden all over Europe, this study follows a law and economics approach in identifying how this prohibition should be enforced. More precisely, the study investigates, first, whether criminal law is necessary under all circumstances to enforce the insider trading prohibition and, second, whether it should be introduced at EU level. The study sets out the law and economics reasoning underlying the legal mechanisms that guide the sanctioning and public enforcement of the insider trading prohibition, identifying the optimal forms, natures and types of sanctions that effectively deter insider trading. The analysis further aims to reveal the economic rationale that drives the potential need for harmonization of the criminal enforcement of insider trading laws within the European environment, through a comparative analysis of the current legislation of eight selected Member States. This work also assesses the European Union's most recent initiative through a critical analysis of the proposal for a Directive on criminal sanctions for market abuse. Based on the conclusions drawn from this close analysis, the study takes on the challenge of determining whether the actual European public enforcement of the laws prohibiting insider trading is coherent with the theoretical law and economics recommendations, and how these enforcement practices could be improved.
Abstract:
The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks, the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of this thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that actually accords a central role to entanglement, i.e. the Everett interpretation. It is argued that, while cluster-state quantum computation does not expose an Everettian failure in accounting for the computational processes, it does threaten to render that interpretation non-explanatory. The analysis presented here should be integrated into a more general work that also includes further frameworks of quantum computation, e.g. topological quantum computation. What this work reveals, however, is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question. Moreover, the existence of alternative, equivalent quantum computational models suggests that the ultimate question should be moved from the speed-up to a sort of "representation theorem" for quantum computation, understood as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled "quantum computation".
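To make the structural contrast concrete, here is a minimal NumPy sketch (not from the thesis) of how a cluster-state computer enacts a one-qubit operation: the input qubit is entangled with a |+⟩ qubit via a controlled-Z gate, and a single measurement at angle α then drives the computation, leaving the rotated state, up to a known Pauli byproduct, on the remaining qubit.

```python
# Measurement-based (cluster-state) realisation of a one-qubit operation:
# entangle |psi> with |+> via CZ, measure qubit 1 at angle alpha, and the
# output qubit carries X^s H P(-alpha)|psi>, where s is the outcome and
# P(t) = diag(1, e^{i t}). Two-qubit state-vector simulation; alpha arbitrary.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1]).astype(complex)

alpha = 0.7                                   # measurement angle (arbitrary)
psi_in = np.array([0.6, 0.8], dtype=complex)  # input qubit |psi>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

state = CZ @ np.kron(psi_in, plus)            # entangle input with a |+> qubit

for s in (0, 1):                              # the two measurement outcomes
    # <m_s| for the basis state (|0> + (-1)^s e^{i alpha}|1>)/sqrt(2).
    bra = np.array([1.0, (-1) ** s * np.exp(-1j * alpha)]) / np.sqrt(2)
    out = np.kron(bra, np.eye(2)) @ state     # unnormalised output qubit
    out /= np.linalg.norm(out)

    # Up to the known byproduct X^s, the output is H P(-alpha)|psi>.
    expected = (np.linalg.matrix_power(X, s) @ H
                @ np.diag([1, np.exp(-1j * alpha)]) @ psi_in)
    print(f"outcome s={s}: |overlap| = {abs(np.vdot(expected, out)):.6f}")  # -> 1.0
```

In the circuit model the same operation would simply be the unitary H P(−α); that the random outcome only introduces a correctable X byproduct is what makes the measurement-driven scheme deterministic.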