938 results for Linux (Operating system)
Abstract:
This work presents the application of Linear Matrix Inequalities (LMIs) to the robust and optimal adjustment of Power System Stabilizers with pre-defined structure. Test results show that adjusting the gain and zeros is sufficient to guarantee robust stability and performance over various operating points. Making use of the flexible structure of LMIs, we propose an algorithm that minimizes the norm of the controller's gain matrix while guaranteeing the damping factor specified for the closed-loop system, always using a controller with flexible structure. The technique used here is pole placement, whose objective is to place the poles of the closed-loop system in a specific region of the complex plane. Results of tests with a nine-machine system are presented and discussed in order to validate the proposed algorithm.
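As a minimal sketch of the pole-placement-in-an-LMI-region idea, the following checks feasibility of the Chilali-Gahinet damping-cone LMI for a small illustrative plant using the cvxpy modeling library; the matrix, damping target and tolerances are assumptions, not the paper's power system model (for synthesis, A would be the closed-loop matrix with the stabilizer gains as decision variables via the usual change of variables).

```python
import numpy as np
import cvxpy as cp

# Illustrative 2-state plant with a lightly damped mode (zeta = 0.2).
A = np.array([[0.0, 1.0],
              [-25.0, -2.0]])

zeta_min = 0.1                   # damping factor to guarantee
theta = np.arccos(zeta_min)      # half-angle of the conic LMI region

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
H = A @ P + P @ A.T              # symmetric part of the Lyapunov map
S = A @ P - P @ A.T              # skew part

# All eigenvalues of A lie in the damping cone
# {s : Re(s) < 0, |Im(s)| <= tan(theta)|Re(s)|} iff this LMI holds for some P > 0.
region = cp.bmat([[np.sin(theta) * H,  np.cos(theta) * S],
                  [-np.cos(theta) * S, np.sin(theta) * H]])

prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(n),            # P positive definite
                   region << -1e-9 * np.eye(2 * n)]) # strict-inequality proxy
prob.solve(solver=cp.SCS)
print(prob.status)  # "optimal" => the damping-factor constraint is certified
```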
Abstract:
This article presents a practical case of a pervasive multimedia guidance system for public transport passengers. To provide useful information to passengers, the system is capable of operating and adapting spontaneously to the different parts of a public transport network, using local data communication technologies. The multimedia data provided by the system are highly accessible and adapt to passengers' preferences, making them suitable for passengers with special needs. To this end, a pervasive computing paradigm has been applied.
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The quality of the voltage supplied by utilities and influenced by customers at the various points of a network emerged as an issue only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy has been associated mostly with its continuity; hence, reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can also be used to improve system reliability. Given the wide range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages. The outcome of this study is the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art of methods to detect and locate faults in distribution networks is then presented. Finally, attention turns to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, together with the results obtained case by case.
In this way, the performance of the location procedure is tested first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty of the estimated position of the fault in the network under test. Finally, this parameter is computed by means of a numerical procedure, according to the Guide to the Expression of Uncertainty in Measurement. The last chapter describes a device, designed and built during the PhD activity, intended to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study aimed to provide an alternative to the transducer in use, offering equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making application of the method much more feasible.
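As a concrete illustration of the time-of-occurrence approach, the sketch below estimates the position of a fault from the arrival times of its traveling-wave transient at two synchronized stations at the ends of a line; the propagation speed, line length and timestamps are hypothetical values, not data from the thesis.

```python
# Two-ended traveling-wave fault location from GPS-synchronized timestamps.
v = 2.9e8          # transient propagation speed along the line, m/s
L = 12_000.0       # monitored line length between stations A and B, m

# Arrival times of the transient wavefront detected at each station,
# e.g. the threshold crossing of a high-frequency detail signal.
t_a = 1.000_010_0  # s
t_b = 1.000_030_0  # s

# A wave launched by a fault at distance d from A reaches A at t0 + d/v
# and B at t0 + (L - d)/v, so t_a - t_b = (2d - L)/v, which gives:
d = (L + v * (t_a - t_b)) / 2.0
print(f"estimated fault position: {d:.0f} m from station A")
```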
Abstract:
The need to synchronize one's data arises in a multitude of situations: the number of computing devices at our disposal keeps growing and, as their number increases, so does the need to keep the multiple copies of the data stored on them up to date. Several factors complicate this situation, among them the ever greater variety of operating systems used on the different devices: Microsoft Windows, the many Linux distributions, Mac OS X, Solaris and other UNIX operating systems, not to mention operating systems oriented toward the mobile sector, such as Android. Each operating system also handles data in its own particular way; consider the different handling of file permissions or of case sensitivity. One should also consider that if data updates occurred on only one of these devices, a simple copy of the updated data to the other devices would suffice, but this approach is not always possible. In fact, data are often updated independently on more than one device, possibly at the same time; it is therefore necessary that the applications responsible for synchronizing such data recognize conflict situations, in which the same data have been updated in more than one copy and in different ways, and allow them to be resolved, making the state of the replicas uniform. Considering the importance and value that data can have, both professionally and personally, such applications must be able to guarantee the data's safety and avoid damaging them in any case, because more and more often the value of a device depends more on the data it contains than on the cost of the hardware. This thesis illustrates some alternative ideas on how data sharing and synchronization can take place between different operating systems, both when they are installed on the same device and across different devices. The first part of the thesis describes the Unison application in detail. This application keeps replicas of data, stored on different devices that may run different operating systems, synchronized with each other. Unison works at the user level, analyzing the state of the replicas separately at execution time, i.e. without keeping track of the operations performed on the data to bring them from their previous state to the current one. Unison allows synchronization even when the data have been modified independently on more than one device, resolving any conflicts that may arise while respecting the user's wishes. The strategies used by its designers to guarantee the safety of the data entrusted to it, and how these take effect under the most diverse conditions, are highlighted. A detailed analysis of how the application can be used is then provided, with an accurate description of its functionality and various examples to clarify its operation. The second part of the thesis instead describes how to share file systems between different operating systems within the same machine, an approach diametrically opposed to the previous one: instead of keeping one replica of the data per device, a single copy is shared.
Focusing on the Linux and Microsoft Windows operating systems, the tools used are described in depth and the underlying technical characteristics are illustrated.
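To make the state-based (rather than operation-based) approach concrete, here is a minimal sketch of Unison-style conflict detection, assuming each replica is summarized as a mapping from path to content hash; the function and data names are hypothetical, not Unison's actual internals.

```python
# State-based sync planning: compare both replicas to the last
# synchronized state (the "archive") and classify each path.
def plan_sync(archive, replica_a, replica_b):
    actions = {}
    for path in set(archive) | set(replica_a) | set(replica_b):
        base = archive.get(path)
        a, b = replica_a.get(path), replica_b.get(path)
        if a == b:                      # replicas already agree
            continue
        if a == base:                   # only B changed -> propagate B
            actions[path] = "copy B->A"
        elif b == base:                 # only A changed -> propagate A
            actions[path] = "copy A->B"
        else:                           # both changed differently
            actions[path] = "conflict: ask the user"
    return actions

print(plan_sync({"f": "h1"}, {"f": "h2"}, {"f": "h1"}))  # {'f': 'copy A->B'}
```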
Abstract:
In this thesis we focus on optimization and simulation techniques applied to solve strategic, tactical and operational problems arising in the healthcare sector. First, we present three applications for the Emilia-Romagna Public Health System (SSR) developed in collaboration with the Agenzia Sanitaria e Sociale dell'Emilia-Romagna (ASSR), a regional center for innovation and improvement in health. The Agenzia launched a strategic campaign aimed at introducing Operations Research techniques as decision-making tools to support technological and organizational innovation. The three applications focus on the forecasting and fund allocation of medical specialty positions, the extension of the breast screening program, and operating theater planning. The case studies exploit the potential of combinatorial optimization, discrete event simulation and system dynamics techniques to solve resource-constrained problems arising within the Emilia-Romagna territory. We then present an application, in collaboration with the Dipartimento di Epidemiologia del Lazio, that focuses on allocating the population's demand for services to regional emergency departments. Finally, a simulation-optimization approach, developed in collaboration with the INESC TEC center of Porto, to evaluate matching policies for the kidney exchange problem is discussed.
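As a toy illustration of the last application, the sketch below treats a kidney exchange restricted to two-way swaps as a maximum-weight matching problem; the pair names and compatibility data are invented, and real matching policies also consider longer cycles and chains.

```python
# Toy kidney exchange limited to 2-cycles (mutually compatible pairs).
import networkx as nx

# (donor of pair u -> patient of pair v) compatibility arcs
compat = {("p1", "p2"), ("p2", "p1"), ("p1", "p3"),
          ("p3", "p1"), ("p2", "p4")}

G = nx.Graph()
for u, v in compat:
    if (v, u) in compat and u < v:   # a 2-cycle needs both directions
        G.add_edge(u, v, weight=1)

print(nx.max_weight_matching(G))     # disjoint swaps to perform
```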
Abstract:
After an initial theoretical introduction to Business Process Re-engineering (BPR), the possible options found in the literature regarding the following three macro-elements are considered: the methodologies, the modelling notations and the tools employed for process mapping. The theoretical section forms the basis for the analysis of the same elements in the specific case of Rosetti Marino S.p.A., an EPC contractor operating in the Oil&Gas industry. Rosetti Marino implemented a tool developed internally to satisfy its needs in the most suitable way possible and built a map of all business processes, navigable on the company intranet. Moreover, it adopted a methodology based upon participation, interfunctional communication and sharing. The introduction of GIGA is analysed from structural, human-resources, political and symbolic points of view.
Fault detection, diagnosis and active fault tolerant control for a satellite attitude control system
Abstract:
Modern control systems are becoming increasingly complex and control algorithms increasingly sophisticated. Consequently, Fault Detection and Diagnosis (FDD) and Fault Tolerant Control (FTC) have gained central importance over the past decades, due to increasing requirements of availability, cost efficiency, reliability and operating safety. This thesis deals with the FDD and FTC problems in a spacecraft Attitude Determination and Control System (ADCS). First, the detailed nonlinear models of the spacecraft attitude dynamics and kinematics are described, along with the dynamic models of the actuators and of the main external disturbance sources. The considered ADCS is composed of an array of four redundant reaction wheels. A set of sensors provides satellite angular velocity, attitude and flywheel spin rate information. Then, general overviews of the Fault Detection and Isolation (FDI), Fault Estimation (FE) and Fault Tolerant Control (FTC) problems are presented, and the design and implementation of a novel diagnosis system is described. The system consists of an FDI module composed of properly organized model-based residual filters, exploiting the available input and output information for the detection and localization of faults. A proper fault mapping procedure and the nonlinear geometric approach are exploited to design residual filters explicitly decoupled from the external aerodynamic disturbance and sensitive to specific sets of faults. The subsequent use of suitable adaptive FE algorithms, based on radial basis function neural networks, makes it possible to obtain accurate fault estimates. Finally, this estimation is actively exploited in an FTC scheme to achieve suitable fault accommodation and guarantee the desired control performance. A standard sliding mode controller is implemented for attitude stabilization and control. Several simulation results are given to highlight the performance of the overall designed system for different types of faults affecting the ADCS actuators and sensors.
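To make the residual-filter idea concrete, here is a minimal sketch of model-based residual generation with a Luenberger observer; the plant matrices, observer gain, fault profile and threshold are illustrative placeholders, not the thesis' spacecraft model or its nonlinear geometric design.

```python
import numpy as np

# Illustrative linear(ized) plant and observer.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Lg = np.array([[1.5], [2.0]])          # observer gain, assumed precomputed

dt, steps, threshold = 0.01, 1500, 0.05
x = np.zeros((2, 1))                   # true plant state
xh = np.zeros((2, 1))                  # observer estimate

for k in range(steps):
    u = np.array([[np.sin(0.05 * k * dt)]])
    f = np.array([[0.5]]) if k > 900 else np.zeros((1, 1))   # actuator fault
    x = x + dt * (A @ x + B @ (u + f))                       # faulty plant
    y = C @ x                                                # measurement
    xh = xh + dt * (A @ xh + B @ u + Lg @ (y - C @ xh))      # observer
    r = (y - C @ xh).item()            # residual: ~0 while fault-free
    if abs(r) > threshold:
        print(f"fault detected at step {k}")
        break
```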
Night Vision Imaging System (NVIS) certification requirements analysis of an Airbus Helicopters H135
Abstract:
The safe operation of nighttime flight missions is enhanced by the use of Night Vision Imaging System (NVIS) equipment. This has been clear to the military since the 1970s and to civil helicopter operators since the 1990s. In recent months, Italian Emergency Medical Service (EMS) operators have also been requesting Night Vision Goggles (NVG), devices that amplify the ambient light. To fly with this technology, helicopters have to be NVIS-approved. The author has supported a company, through a feasibility study, in quantifying the potential of undertaking the certification activity. Beforehand, the NVG working principles are described, and the specifications governing the processes to make a helicopter NVIS-approved are analysed. The noteworthy difference between the military specifications and the civilian ones highlights non-negligible gaps in the latter. The NVIS certification activity could be a good investment because the following targets have been achieved: reductions in the certification cost, the operating time and the number of non-compliances.
Abstract:
In this paper we propose a new system that allows reliable acetabular cup placement when total hip arthroplasty (THA) is performed through a lateral approach. Conceptually, it combines the accuracy of computer-generated patient-specific morphology information with an easy-to-use mechanical guide, which effectively uses natural gravity as the angular reference. The former is achieved by using a statistical shape model-based 2D-3D reconstruction technique that can generate a scaled, patient-specific 3D shape model of the pelvis from a single conventional anteroposterior (AP) pelvic X-ray radiograph. The reconstructed 3D shape model facilitates reliable and accurate co-registration of the mechanical guide with the patient's anatomy in the operating theater. We validated the accuracy of our system by conducting experiments on placing seven cups in four pelvises with different morphologies. Taking the measurements from an image-free navigation system as the ground truth, our system showed an average accuracy of 2.1 ± 0.7° for inclination and an average accuracy of 1.2 ± 1.4° for anteversion.
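For context on the two reported angles, the sketch below computes radiographic inclination and anteversion from a cup axis direction, assuming Murray's radiographic definitions and a patient frame with x = lateral, y = anterior, z = superior; the axis vector is a hypothetical example, not data from the paper.

```python
import numpy as np

n = np.array([0.68, 0.34, 0.65])   # hypothetical cup axis direction
n = n / np.linalg.norm(n)          # normalize to a unit vector

# Radiographic anteversion: angle between the cup axis and the coronal plane.
anteversion = np.degrees(np.arcsin(n[1]))
# Radiographic inclination: angle between the longitudinal axis and the
# cup axis projected onto the coronal (x-z) plane.
inclination = np.degrees(np.arctan2(n[0], n[2]))

print(f"inclination {inclination:.1f} deg, anteversion {anteversion:.1f} deg")
```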
Abstract:
Studying liquid fuel combustion is necessary to better design combustion systems. Through more efficient combustors and alternative fuels, it is possible to reduce greenhouse gases and harmful emissions. In particular, coal-derived and Fischer-Tropsch liquid fuels are of interest because, in addition to producing fewer emissions, they have the potential to drastically reduce the United States' dependence on foreign oil. Major academic research institutions like the Pennsylvania State University perform cutting-edge research in many areas of combustion. The Combustion Research Laboratory (CRL) at Bucknell University is striving to develop the equipment necessary for both independent and collaborative research efforts with Penn State and, in the process, to advance the CRL to the forefront of combustion studies. The focus of this thesis is to advance the capabilities of the Combustion Research Lab at Bucknell. Specifically, this was accomplished through a revision of a previously designed liquid fuel injector, and through the design and installation of a laser extinction system for the measurement of soot produced during combustion. The previous liquid fuel injector, with a 0.005" hole, did not behave as expected. Spray testing the 0.005" injector with water showed that experimental errors had been made in the original pressure testing of the injector. Using data from the spray testing experiment, new theoretical hole sizes for the injector were calculated. New injectors with 0.007" and 0.0085" orifices were fabricated and subsequently tested to qualitatively validate their behavior. The injectors were installed in the combustion rig in the CRL and hot-fire tested with liquid heptane. The 0.0085" injector yielded a manageable fuel pressure and produced a broad flame. A laser extinction system was designed and installed in the CRL. This involved the fabrication of a number of custom-designed parts and the specification of laser extinction equipment for purchase. A standard operating procedure for the laser extinction system was developed to provide a consistent, safe method for measuring soot formation during combustion.
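For reference, laser extinction measurements typically recover a soot volume fraction from the Beer-Lambert law; the sketch below shows that calculation with illustrative numbers (the wavelength, path length and dimensionless extinction coefficient are assumptions, not values from this thesis).

```python
import numpy as np

I0 = 1.00        # transmitted intensity with no soot (reference), a.u.
I = 0.82         # transmitted intensity through the sooting flame, a.u.
lam = 632.8e-9   # laser wavelength, m (e.g. a HeNe laser)
L = 0.05         # optical path length through the sooting region, m
K_e = 8.6        # dimensionless extinction coefficient (assumed)

# Beer-Lambert: I/I0 = exp(-K_e * f_v * L / lambda)
f_v = -np.log(I / I0) * lam / (K_e * L)
print(f"soot volume fraction f_v = {f_v:.2e}")   # ~ parts-per-million scale
```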
Abstract:
OBJECTIVE: To assess the relationship between early laboratory parameters, disease severity, type of management (surgical or conservative) and outcome in necrotizing enterocolitis (NEC). STUDY DESIGN: Retrospective collection and analysis of data from infants treated in a single tertiary care center (1980 to 2002). Data were collected on disease severity (Bell stage), birth weight (BW), gestational age (GA) and pre-intervention laboratory parameters (leukocyte and platelet counts, hemoglobin, lactate, C-reactive protein). RESULTS: Data from 128 infants were sufficient for analysis. Factors significantly associated with survival were Bell stage (P<0.05), lactate (P<0.05), BW and GA (P<0.01 and P<0.001, respectively). From receiver operating characteristic (ROC) curves, the highest predictive value was obtained with a score of 0 to 8 points combining BW, Bell stage, lactate and platelet count (P<0.001). At a cutoff level of 4.5, sensitivity and specificity for predicting survival were 0.71 and 0.72, respectively. CONCLUSION: Some single parameters were associated with poor outcome in NEC. Optimal risk stratification was achieved by combining several parameters in a score.
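As a small worked example of the cutoff analysis reported above, the sketch below computes sensitivity and specificity of a 0-8 point score at a cutoff of 4.5; the scores and outcomes are synthetic, since the study's individual point assignments are not given here.

```python
import numpy as np

score = np.array([2.0, 3.5, 5.0, 6.0, 4.0, 7.5, 1.5, 5.5])  # 0-8 points
survived = np.array([1, 1, 0, 0, 1, 0, 1, 1], dtype=bool)   # outcomes

cutoff = 4.5
predicted_death = score > cutoff       # high score predicts poor outcome

sens = np.mean(predicted_death[~survived])   # true positive rate
spec = np.mean(~predicted_death[survived])   # true negative rate
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```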
Abstract:
CONCLUSION: Our self-developed planning and navigation system has proven its capacity for accurate surgery on the anterior and lateral skull base. With the incorporation of augmented reality, image-guided surgery will evolve into 'information-guided surgery'. OBJECTIVE: Microscopic or endoscopic skull base surgery is technically demanding and its outcome has a great impact on a patient's quality of life. The goal of the project was to develop and evaluate enabling tools for navigated surgery covering simulation, planning, training, education, and performance. This clinically applied technological research was complemented by a series of patients (n=406) treated with anterior and lateral skull base procedures between 1997 and 2006. MATERIALS AND METHODS: Optical tracking technology was used for positional sensing of instruments. A newly designed dynamic reference base with specific registration techniques, using a fine needle pointer or ultrasound, enables the surgeon to work with a target error of < 1 mm. An automatic registration assessment method, which provides the user with a color-coded fused representation of CT and MR images, indicates to the surgeon the location and extent of registration (in)accuracy. Integration of a small tracker camera mounted directly on the microscope permits an ergonomically advantageous way of working in the operating room. Additionally, guidance information (augmented reality) from multimodal datasets (CT, MRI, angiography) can be overlaid directly onto the surgical microscope view. The virtual simulator, as a training tool in endonasal and otological skull base surgery, provides an understanding of the anatomy as well as preoperative practice using real patient data. RESULTS: Using our navigation system, no major complications occurred, in spite of the fact that the series included difficult skull base procedures. An improved quality of surgical outcome was identified compared with our control group without navigation and compared with the literature. Surgical time was reduced and more minimally invasive approaches were possible. According to the participants' questionnaires, the educational effect of the virtual simulator in our residency program received a high ranking.
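The registration step such systems rely on is commonly paired-point rigid registration; below is a minimal sketch of the standard Kabsch/Procrustes solution with synthetic fiducials (this is the generic algorithm, not necessarily the exact method used in the system described above).

```python
import numpy as np

def rigid_register(P, Q):
    """Find rotation R and translation t minimizing ||R @ P + t - Q||."""
    cP = P.mean(axis=1, keepdims=True)
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

P = np.random.rand(3, 6)                       # fiducials in image space
t_true = np.array([[1.0], [2.0], [0.5]])
Q = P + t_true                                 # same fiducials, patient space

R, t = rigid_register(P, Q)
fre = np.linalg.norm(R @ P + t - Q, axis=0).mean()   # fiducial reg. error
print(f"mean FRE = {fre:.2e}")
```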
Abstract:
During endoscopic surgery it is difficult to ascertain anatomical landmarks once the anatomy has been altered or the operating field is filled with blood. An augmented reality system can enhance the endoscopic view and further enable surgeons to view hidden critical structures or the results of preoperative planning.
Abstract:
In the current market system, power systems are operated at higher loads for economic reasons, and power system stability becomes a genuine concern under such operating conditions. If any large component fails, the system may become stressed. Such events may trigger cascading failures, which may lead to blackouts. One of the main causes of the major recorded blackout events has been the unavailability of system-wide information. Synchrophasor technology has the capability to provide system-wide real-time information. Phasor Measurement Units (PMUs) are the basic building block of this technology; they provide Global Positioning System (GPS) time-stamped voltage and current phasor values along with the frequency. It is assumed that synchrophasor data are available for all buses and thus that the whole system is fully observable. This information can be used to initiate islanding or system separation to avoid blackouts. A system separation strategy using synchrophasor data has been developed to answer the three main questions of system separation. (1) When to separate: one-class support vector machines (OC-SVM) are primarily used for anomaly detection; here, OC-SVM was used to detect wide-area instability. OC-SVM was tested on different stable and unstable cases, and it was found to be capable of detecting wide-area instability and thus of answering the question of when the system should be separated. (2) Where to separate: an agglomerative clustering technique was used to find groups of coherent buses; the lines connecting different groups of coherent buses form the separation surface. The rate of change of the bus voltage phase angles was used as the input to this technique, which can exactly identify the lines to be tripped for system separation. (3) What to do after separation: load shedding approximately equal to the sum of the power flows along the candidate separation lines should be initiated before tripping these lines; it is therefore recommended that load shedding be initiated before tripping the lines for system separation.
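The sketch below illustrates the two data-driven steps with scikit-learn; the synchrophasor angle data are synthetic placeholders, and the feature choices are simplified assumptions, not the strategy's actual preprocessing.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# (1) "When to separate": train OC-SVM on feature windows from stable
# operation, then flag windows that fall outside the learned region.
stable_windows = rng.normal(0.0, 0.1, size=(200, 4))   # per-window features
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(stable_windows)
new_window = np.array([[1.2, -0.8, 0.9, -1.1]])
print("anomaly" if ocsvm.predict(new_window)[0] == -1 else "normal")

# (2) "Where to separate": group buses whose voltage phase angles move
# together, using the rate of change of angle as the clustering feature.
dtheta = np.vstack([rng.normal(0.5, 0.05, (4, 20)),    # coherent group 1
                    rng.normal(-0.5, 0.05, (5, 20))])  # coherent group 2
labels = AgglomerativeClustering(n_clusters=2).fit_predict(dtheta)
print("bus groups:", labels)   # lines between groups form the cut set
```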
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized access, different organizations choose and use certain security schemes in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies, which focus on one or more of the security principles: confidentiality, integrity and availability. A security policy is not only about "who" can access an object, but also about "how" a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE) and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly, by allowing only trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and on the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model, the file system firewall: an adaptation of the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., time of day. Access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain name in DTE. In the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable, with predicates on the attributes of the subject, the object and the system; the file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with TE in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model: when the firewall system is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This fact, together with the model's familiarity to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux and the file system firewall confirmed this: beginner users found the firewall easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
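To illustrate the situation-based decision idea, here is a minimal sketch of first-match rule evaluation over predicates on subject, object and system attributes; the rule syntax, attribute names and examples are hypothetical, not the SUSE Linux prototype's actual interface.

```python
from datetime import datetime

# Ordered rules, first match wins, like a network firewall ruleset.
# Each predicate may combine subject, object and system attributes.
RULES = [
    (lambda sub, obj, sys: obj.startswith("/etc/") and sub != "root",
     "DENY"),
    (lambda sub, obj, sys: sys["hour"] not in range(8, 18)
                           and obj.startswith("/payroll/"),
     "DENY"),   # payroll files only reachable during office hours
    (lambda sub, obj, sys: True, "ALLOW"),   # default rule
]

def check(subject, obj):
    system = {"hour": datetime.now().hour}   # system-generated attribute
    for predicate, action in RULES:
        if predicate(subject, obj, system):
            return action

print(check("alice", "/etc/shadow"))     # DENY
print(check("alice", "/home/alice/a"))   # ALLOW
```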