922 results for Code compression
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach.
Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
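The gradient-type minimization mentioned above can be illustrated on a toy problem. The sketch below replaces the MHD optimality system with a scalar linear state equation and a quadratic tracking cost; all names and values (a, alpha, y_target) are illustrative assumptions, not the thesis's actual formulation.

```python
# Minimal sketch of a gradient-type optimal control iteration.
# The scalar "state equation" y = a*u stands in for the MHD system;
# J is a quadratic tracking cost with Tikhonov regularization.
# All parameters (a, alpha, y_target) are illustrative assumptions.

def solve_state(u, a=2.0):
    return a * u                      # forward (state) solve

def solve_adjoint(y, y_target):
    return y - y_target               # adjoint variable for this toy cost

def cost(u, y, y_target, alpha=0.1):
    return 0.5 * (y - y_target) ** 2 + 0.5 * alpha * u ** 2

def gradient_descent(y_target=1.0, a=2.0, alpha=0.1, step=0.1, iters=200):
    u = 0.0                           # initial control guess
    for _ in range(iters):
        y = solve_state(u, a)
        p = solve_adjoint(y, y_target)
        grad = a * p + alpha * u      # dJ/du from the optimality system
        u -= step * grad
    return u, cost(u, solve_state(u, a), y_target, alpha)
```

In this toy setting the iteration converges to the analytic minimizer of the regularized cost; in the thesis, the state and adjoint solves are full finite element MHD computations.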
Abstract:
The evaluation of the structural performance of existing concrete buildings, built according to standards and with materials quite different from those available today, requires procedures and methods able to compensate for the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, in particular tests on drilled cores; on the other hand, it is generally accepted that non-destructive testing (NDT) cannot be used as the only means to obtain structural information, but it can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity and compressive strength (SonReb method). To this end, a large number of DT and NDT tests have been performed on many school buildings located in Cesena (Italy). The above relationships have been assessed on site by correlating NDT results with the strength of cores drilled in adjacent locations. Concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage of being much simpler to implement and use in future applications than other methods, even if its accuracy is strictly limited to the analysis of concretes having the same characteristics as those used for calibration. This limitation warranted a search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology for the neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen, taking into account the developments presented in the literature in this field. The networks were trained and tested in order to arrive at a more reliable strength identification methodology.
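SonReb correlations are commonly expressed as a power law combining the rebound index R and the ultrasonic pulse velocity V. A minimal sketch follows; the coefficients a, b, c are made-up placeholders, not the calibrated values discussed in the study.

```python
# Illustrative SonReb-type power-law correlation:
#   fc = a * R**b * V**c
# with R the rebound index and V the ultrasonic pulse velocity (m/s).
# The coefficients a, b, c are made-up placeholders; in practice they
# are calibrated against the strength of cores drilled on site.

def sonreb_strength(R, V, a=7.7e-11, b=1.4, c=2.6):
    """Estimated compressive strength (MPa) from the two NDT parameters."""
    return a * R ** b * V ** c
```

As the abstract notes, such a formula is only as good as its calibration: the same (R, V) pair maps to different strengths for concretes unlike those used to fit a, b, c, which motivates the neural-network alternative.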
Abstract:
The objective of this work is to characterize chromosome 1 of the genome of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenic regions, untranslated regions (UTR) and regulatory sequences. To accomplish this task, I transformed the nucleotide sequences into binary sequences based on the definition of three different dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful to recognize them. I then assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes, using three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, i.e. Mutual Information (MI) and Sρ, a normalized version of the Bhattacharyya-Hellinger-Matusita distance. The results show that there is a significant short-range dependence structure only for the coding sequences, whose existence is a clue to an underlying error detection and correction mechanism. No doubt, further studies are needed to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and, therefore, contribute to unveiling the role of this mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the presented approach for understanding the management of genetic information.
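The binary encoding and the entropy-based dependence measure can be sketched as follows. The purine/pyrimidine split used here is only a stand-in for the thesis's dichotomic classes, and mutual information is computed in bits from empirical joint frequencies.

```python
# Sketch: turn a nucleotide sequence into a binary sequence via a simple
# dichotomy (here purine vs. pyrimidine, a stand-in for the thesis's
# dichotomic classes) and measure the dependence between two binary
# sequences with empirical mutual information (MI), in bits.
from math import log2
from collections import Counter

def to_binary(seq, ones=frozenset("AG")):
    """Purines (A, G) -> 1, pyrimidines (C, T) -> 0."""
    return [1 if b in ones else 0 for b in seq.upper()]

def mutual_information(x, y):
    """Empirical MI between two equal-length binary sequences, in bits."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        pj = c / n
        mi += pj * log2(pj / ((px[a] / n) * (py[b] / n)))
    return mi
```

Perfectly dependent sequences give MI equal to their entropy (1 bit for a balanced binary string), while independent ones give values near zero, which is the contrast exploited when comparing coding and non-coding regions.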
Abstract:
In the present work, a multiphysics simulation of an innovative safety system for light water nuclear reactors is performed, with the aim of increasing the reliability of the main decay heat removal system. The system studied, denoted by the acronym PERSEO (In-Pool Energy Removal System for Emergency Operation), is able to remove the decay power from the primary side of a light water nuclear reactor through a heat suppression pool. The experimental facility, located at the SIET laboratories (Piacenza), is an evolution of the Thermal Valve concept, in which the triggering valve is installed on the liquid side, on a line connecting the two pools at the bottom. During normal operation the valve is closed, while in emergency conditions it opens and the heat exchanger is flooded, with consequent heat transfer from the primary side to the pool side. In order to verify the correct system behavior during long-term accidental transients, two main PERSEO experimental tests are analyzed. For this purpose, a coupling between the one-dimensional system code CATHARE, which reproduces the system-scale behavior, and the three-dimensional CFD code NEPTUNE CFD, allowing a full investigation of the pools and the injector, is implemented. The coupling between the two codes is realized through the boundary conditions. In a first analysis, the facility is simulated with the system code CATHARE V2.5 in order to validate the results against the experimental data. The comparison of the numerical results shows a different void distribution under boiling conditions inside the heat suppression pool for the single-volume and three-volume nodalization schemes of the pool. Finally, to improve the investigation of the void distribution inside the pool and of the temperature stratification phenomena below the injector, two- and three-dimensional CFD models with a simplified geometry of the system are adopted.
Abstract:
It is usual to hear a strange short sentence: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, all of them related to the topic considered. In this thesis I discuss two crucial topics that take advantage of randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are guaranteed by shaping the second-order statistics of antipodal sequences involved in intermediate signal processing stages. The first topic is in the area of analog-to-digital conversion, and it is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that tries to merge signal acquisition and compression, allowing a signal to be acquired directly in a compressed form. In this thesis, after an ample description of the CS methodology and its related architectures, I present a new approach that tries to achieve high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. An option to produce and decode UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple access interference (MAI) and narrowband interference (NBI) on a simple matched filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences minimizing MAI only.
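The CS acquisition stage can be sketched with random antipodal waveforms: each measurement is the inner product of the signal with a random ±1 sequence. The toy example below recovers a 1-sparse signal by a matched-filter search, purely to illustrate why few such measurements suffice; the dimensions, the sparse position, and the i.i.d. statistics are all illustrative assumptions (the thesis instead designs the sequences' second-order statistics).

```python
# Sketch of compressive sensing acquisition with antipodal waveforms:
# each of the m measurements is the inner product of the signal with a
# random +/-1 sequence (y = A x). For a 1-sparse signal, the nonzero
# position can be recovered by a simple matched-filter search over the
# columns of A. Purely illustrative: here A's rows are plain i.i.d.
# random, with no shaping of their second-order statistics.
import random

random.seed(7)
n, m = 256, 32                       # signal length, number of measurements
A = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

x = [0.0] * n
x[97] = 3.5                          # 1-sparse signal (assumed position/value)

# Acquisition: m inner products instead of n Nyquist samples.
y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Recovery: correlate the measurements with each column of A.
scores = [abs(sum(A[i][j] * y[i] for i in range(m))) for j in range(n)]
estimate = max(range(n), key=scores.__getitem__)
```

With 32 measurements of a length-256 signal, the correlation peak at the true support position stands far above the ±1-sequence cross-correlation floor, which is the mechanism general CS reconstruction algorithms exploit for richer sparse signals.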
Abstract:
This thesis studies the zbar.h library, whose task is to decode the barcodes present in images. The images are acquired using functions from the OpenCV library. An interface between OpenCV and ZBar is then created, and several tests are carried out to verify the efficiency of both ZBar and the interface. Finally, a new library is created that incorporates the ZBar functions and the OpenCV-ZBar interface.
Abstract:
This thesis collects the outcomes of a Ph.D. course in Telecommunications engineering and is focused on enabling techniques for Spread Spectrum (SS) navigation and communication satellite systems. It provides innovations in both interference management and code synchronization techniques. These two aspects are critical for modern navigation and communication systems and constitute the common denominator of the work. The thesis is organized in two parts: the first deals with interference management. We have proposed a novel technique for enhancing the sensitivity of an advanced interference detection and localization system operating in the Global Navigation Satellite System (GNSS) bands, which allows the identification of interfering signals received with power even lower than that of the GNSS signals. Moreover, we have introduced an effective cancellation technique for signals transmitted by jammers that exploits their repetitive characteristics and strongly reduces the interference level at the receiver. The second part deals with code synchronization. In more detail, we have designed the code synchronization circuit for a Telemetry, Tracking and Control system operating during the Launch and Early Orbit Phase; the proposed solution copes with the very large frequency uncertainty and dynamics characterizing this scenario, and estimates the code epoch, the carrier frequency and the carrier frequency variation rate. Furthermore, considering a generic pair of circuits performing code acquisition, we have proposed a comprehensive framework for the design and analysis of the optimal cooperation procedure, which minimizes the time required to accomplish synchronization. This study is particularly interesting since it enables the reduction of the code acquisition time without increasing the computational complexity.
Finally, considering a network of collaborating navigation receivers, we have proposed an innovative cooperative code acquisition scheme, which exploits the code epoch information shared between neighboring nodes according to the Peer-to-Peer paradigm.
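The code epoch search that underlies code acquisition can be illustrated by a noiseless sliding-correlation toy example: the receiver correlates the incoming samples against every cyclic shift of a local code replica and picks the peak. The code length, the delay, and the absence of noise and frequency offset are simplifying assumptions, far from the LEOP scenario treated in the thesis.

```python
# Toy sketch of code acquisition: slide a local replica of the spreading
# code over the received samples and pick the code phase (epoch) with
# the highest circular correlation. Noiseless and frequency-offset-free,
# purely to illustrate the epoch search.
import random

random.seed(1)
N = 127                              # code length (illustrative)
code = [random.choice((-1, 1)) for _ in range(N)]

true_epoch = 41                      # unknown delay to be estimated
received = code[-true_epoch:] + code[:-true_epoch]   # cyclically shifted code

def acquire(rx, local):
    """Return the lag with maximum circular correlation."""
    n = len(local)
    def corr(tau):
        return sum(rx[(i + tau) % n] * local[i] for i in range(n))
    return max(range(n), key=corr)

epoch = acquire(received, code)
```

Each lag test here costs N multiply-accumulates; cooperation schemes like those proposed in the thesis aim precisely at reducing how many such tests are needed before the peak is found.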
Abstract:
We have extended the Boltzmann code CLASS and studied a specific scalar-tensor dark energy model: Induced Gravity.
Abstract:
This thesis reports a study on the seismic response of two-dimensional squat elements and their effect on the behavior of building structures. Part A is devoted to the study of unreinforced masonry infills, while Part B is focused on reinforced concrete sandwich walls. Part A begins with a comprehensive review of modelling techniques and code provisions for infilled frame structures. State-of-the-practice techniques are then applied to a real case to test the ability of current modelling techniques to reproduce observed behaviors. The first developments towards a seismic-resistant masonry infill system are presented, and preliminary recommendations for its seismic design are finally provided. Part B is focused on the seismic behavior of a specific reinforced concrete sandwich panel system. First, the results of in-plane pseudostatic cyclic tests are described. Refinements to the conventional modified compression field theory are introduced in order to better simulate the monotonic envelope of the cyclic response; the refinements deal with the constitutive models for the shotcrete in tension and for the embedded bars. The hysteretic response of the panels is then studied according to a continuum damage model, and damage state limits are identified. Design recommendations for the seismic design of the studied reinforced concrete sandwich walls are finally provided.
Abstract:
Determining the Young's modulus is fundamental to the study of fracture propagation prior to avalanche release and to the development of reliable snow stability models. Comparison between numerical simulations of the Young's modulus and experimental values shows that the latter are three orders of magnitude lower than the simulated ones (Reuter et al. 2013). The aim of this work is to estimate the elastic modulus by studying the frequency dependence of the response of different types of low-density snow (140-280 kg m⁻³). This was done by applying dynamic uniaxial compression at -15°C in the 1-250 Hz range using the Young's modulus device (YMD), a cyclic loading device prototype designed at the Institute for Snow and Avalanche Research (SLF). A viscoelastic response of the snow was identified at all the frequencies considered, and the theory of viscoelasticity was applied under the assumption of a linear response of the snow. The value of the storage modulus, E', at 100 Hz was identified as reasonably representative of the Young's modulus of each snow sample. The viscous behavior was evaluated by considering the loss tangent and the viscosities derived from the Voigt and Maxwell models. The transition from a more viscous to a more elastic behavior was found at 40 Hz (~1.1·10⁻² s⁻¹). The largest contribution to dissipation lies in the 1-10 Hz range. Finally, numerical simulations of the Young's modulus were obtained in the same way as in Reuter et al. The difference between the simulations and the experimental values of E' is at most a factor of 5, whereas in Reuter et al. it was three orders of magnitude. Our experimental and numerical values therefore agree much better, indicating that the method used here led to a significant improvement.
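The Voigt-model quantities used above (storage modulus, loss tangent) follow from simple relations under sinusoidal loading. A minimal sketch follows, with placeholder parameters chosen only so that the viscous-to-elastic crossover (tan δ = 1) falls near the 40 Hz transition reported here; they are not measured snow values.

```python
# Sketch of the Voigt (Kelvin-Voigt) relations used to separate elastic
# and viscous behavior under sinusoidal loading at frequency f (Hz):
#   storage modulus  E'  = E            (the spring)
#   loss modulus     E'' = w * eta      (the dashpot, w = 2*pi*f)
#   loss tangent     tan(delta) = E'' / E'
# E and eta below are illustrative placeholders, not measured values.
from math import pi

def voigt_moduli(E, eta, f):
    """Return (storage, loss, loss_tangent) at frequency f in Hz."""
    w = 2 * pi * f
    return E, w * eta, w * eta / E
```

With E = 1 MPa and eta ≈ 3979 Pa·s, tan δ crosses 1 at 40 Hz: below that frequency the dashpot term dominates (viscous regime), above it the spring term does (elastic regime), mirroring the transition described in the abstract.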
Abstract:
This thesis, set in a period of strong transition from on-premises to cloud systems, addresses several problems related to the definition of infrastructures: how can resources be scaled on demand while recreating identical environments, monitoring them, and securing critical application data? The thesis answers this question by examining a new paradigm for conceiving infrastructure, called Infrastructure as Code. It explores the practices and methodologies most closely tied to Infrastructure as Code, including Version Control, Configuration Management, Continuous Integration and Continuous Delivery. The thesis also includes the development of a final prototype, based on a study of the company's software development workflow, which defines the environments in accordance with the Version Control and Configuration Management systems and applies continuous integration practices to arrive at a working deployment pipeline.
Abstract:
Trauma or degenerative diseases such as osteonecrosis may cause bone loss, whose recovery is promised by a "tissue engineering" approach. This strategy involves the use of stem cells, grown on adequate biocompatible/bioresorbable hosting templates (usually called scaffolds) and cultured in specific dynamic environments provided by differentiation-inducing actuators (usually called bioreactors), to produce implantable tissue constructs. The purpose of this thesis is to evaluate, by finite element modeling of flow/compression-induced deformation, alginate scaffolds intended for bone tissue engineering. This work was conducted at the Biomechanics Laboratory of the Institute of Biomedical and Neural Engineering of Reykjavik University, Iceland. Comsol Multiphysics 5.1 simulations were carried out to approximate the loads on alginate 3D matrices under perfusion, compression and perfusion+compression, while varying the alginate pore size and the flow/compression regimen. The results of the simulations show that the shear forces in the matrix of the scaffold increase consistently with increasing flow and load, and decrease with increasing pore size. Flow and load rates suggested for proper osteogenic cell differentiation are reported.
Abstract:
OBJECTIVE: To evaluate the ease of application of two-piece, graduated, compression systems for the treatment of venous ulcers. METHODS: Four kits used to provide limb compression in the management of venous ulcers were evaluated. These have been proven to be non-inferior to various types of bandages in clinical trials. The interface pressure exerted above the ankle by the under-stocking and the complete compression system and the force required to pull the over-stocking off were assessed in vitro. Ease of application of the four kits was evaluated in four sessions by five nurses who put stockings on their own legs in a blinded manner. They expressed their assessment of the stockings using a series of visual analogue scales (VASs). RESULTS: The Sigvaris Ulcer X® kit provided a mean interface pressure of 46 mmHg and required a force in the range of 60-90 N to remove it. The Mediven® ulcer kit exerted the same pressure but required a force in the range of 150-190 N to remove it. Two kits (SurePress® Comfort and VenoTrain® Ulcertec) exerted a mean pressure of only 25 mmHg and needed a force in the range of 100-160 N to remove them. Nurses judged the Ulcer X and SurePress kits easiest to apply. Application of the VenoTrain kit was found slightly more difficult. The Mediven kit was judged to be difficult to use. CONCLUSIONS: Comparison of the ease of application of compression-stocking kits on normal legs revealed marked differences between them. Only one system exerted a high pressure and was easy to apply. Direct comparison of these compression kits in leg-ulcer patients is required to assess whether our laboratory findings correlate with patient compliance and ulcer healing.
Abstract:
OBJECTIVE: To compare the proportion and rate of healing, pain, and quality of life of low-strength medical compression stockings (MCS) with traditional bandages applied for the treatment of recalcitrant venous leg ulcers. METHODS: A single-center, randomized, open-label study was performed with consecutive patients. Sigvaris prototype MCS providing 15 mm Hg-25 mm Hg at the ankle were compared with multi-layer short-stretch bandages. In both groups, pads were placed above incompetent perforating veins in the ulcer area. The initial static pressure between the dressing-covered ulcer and the pad was 29 mm Hg and 49 mm Hg with MCS and bandages, respectively. Dynamic pressure measurements showed no difference. Compression was maintained day and night and changed every week. The primary endpoint was healing within 90 days. Secondary endpoints were healing within 180 days, time to healing, pain (weekly Likert scales), and monthly quality of life (ChronIc Venous Insufficiency Quality of Life [CIVIQ] questionnaire). RESULTS: Of 74 patients screened, 60 fulfilled the selection criteria and 55 completed the study; 28 in the MCS and 27 in the bandage group. Ulcers were recurrent (48%), long lasting (mean, 27 months), and large (mean, 13 cm²). All but one patient had deep venous reflux and/or incompetent perforating veins in addition to trunk varices. Characteristics of patients and ulcers were evenly distributed (exception: more edema in the MCS group; P = .019). Healing within 90 days was observed in 36% with MCS and in 48% with bandages (P = .350). Healing within 180 days was documented in 50% with MCS and in 67% with bandages (P = .210). Time to healing was identical. Pain scored 44 and 46 initially (on a scale in which 100 referred to maximum and 0 to no pain) and decreased within the first week to 20 and 28 in the MCS and bandage groups, respectively (P < .001 vs .010). Quality of life showed no difference between the treatment groups.
In both groups, pain at 90 days had decreased by half, independent of completion of healing. Physical, social, and psychic impairment improved significantly in patients with healed ulcers only. CONCLUSION: Our study illustrates the difficulty of bringing large and long-standing venous ulcers to heal. The effect of compression with MCS was not different from that of compression with bandages. Both treatments alleviated pain promptly. Quality of life was improved only in patients whose ulcers had healed.
Abstract:
The purpose was to investigate the in vivo effects of unloading and compression on T1-Gd relaxation times in healthy articular knee cartilage.