941 results for standard package software


Relevance:

30.00%

Publisher:

Abstract:

This study investigated the biomechanical behavior of screw-retained partial fixed prostheses supported by implants of different diameters (2.5 mm, 3.3 mm and 3.75 mm) by means of photoelastic analysis. Six photoelastic models were fabricated in PL-2 resin, as single crowns or as a splinted 3-unit piece. The models were positioned in a circular polariscope, and 100-N axial and oblique (45 degrees) loads were applied to the occlusal surfaces of the crowns using a universal testing machine (EMIC). The stresses were photographically recorded and qualitatively analyzed using software (Adobe Photoshop). Under axial loading, the number of fringes was inversely proportional to the diameter of the implants in the single-crown models. In the splinted 3-unit piece, the 3.75-mm implant produced the lowest number of fringes regardless of the load application area. Under oblique loading, a slight increase in the number of fringes was observed for all groups. The standard implant diameter promoted better stress distribution than the narrow- and mini-diameter implants. Additionally, the splinted crowns showed a more uniform stress distribution.
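For reference, the qualitative fringe reading in such an analysis rests on the stress-optic law, which relates the observed fringe order N to the in-plane principal stress difference through the material fringe value f_σ of the PL-2 resin and the model thickness t; the relation below is the textbook formula, not one quoted by the study:

$$\sigma_1 - \sigma_2 = \frac{N\, f_\sigma}{t}$$

A higher fringe count at the same load therefore indicates a larger stress difference, which is why fewer fringes around the wider implants signal a more favorable stress distribution.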

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Computer Science - IBILCE

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Computer Science - IBILCE

Relevance:

30.00%

Publisher:

Abstract:

The new Brazilian ABNT NBR 15575 Standard (the "Standard") recommends two methods for analyzing housing thermal performance: a simplified method and a computational simulation method. The aim of this paper is to evaluate both methods and the coherence between them. For this, the thermal performance of a low-cost single-family house was evaluated by applying the procedures prescribed by the Standard. To accomplish this study, the EnergyPlus software was selected. Comparative analyses of the house with varying envelope U-values and solar absorptances of the external walls were performed in order to evaluate the influence of these parameters on the results. The results show limitations in the Standard's current computational simulation method, due to several aspects: weather files, lack of consideration of passive strategies, and inconsistency with the simplified method. Therefore, this research indicates that there are aspects to be improved in the Standard so that it can better represent the real thermal performance of social housing in Brazil.
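A comparative study of this kind can be scripted as a parametric sweep over the envelope parameters; the Python sketch below illustrates the idea under stated assumptions (the IDF template, its placeholders, the weather file name and the parameter ranges are all hypothetical, and the paper does not publish its scripts):

```python
# Minimal sketch of a parametric sweep: substitute wall properties
# into a templated EnergyPlus input file and run each variant.
# "@U_VALUE@" and "@ABSORPTANCE@" are assumed placeholders standing
# in for the material properties that determine the wall U-value.
import itertools
import pathlib
import subprocess

TEMPLATE = pathlib.Path("house_template.idf").read_text()  # assumed template
U_VALUES = [0.5, 1.0, 2.0, 3.0]        # W/(m2 K), illustrative range
ABSORPTANCES = [0.3, 0.5, 0.7]         # external-wall solar absorptance

for u, alpha in itertools.product(U_VALUES, ABSORPTANCES):
    idf = (TEMPLATE.replace("@U_VALUE@", str(u))
                   .replace("@ABSORPTANCE@", str(alpha)))
    case = f"u{u}_a{alpha}"
    pathlib.Path(f"{case}.idf").write_text(idf)
    # EnergyPlus command-line run: -w weather file, -d output directory
    subprocess.run(["energyplus", "-w", "city.epw", "-d", case,
                    f"{case}.idf"], check=True)
```

Post-processing would then compare the simulated indoor temperatures of each case against the Standard's performance criteria.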

Relevance:

30.00%

Publisher:

Abstract:

The research activity described in this thesis is focused mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components and on the study of dynamic simulation techniques applied to integrated building design, in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on second-law analysis of thermodynamic processes, with the purpose of framing the issue of the energy efficiency of buildings within a wider cultural context which is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then clarified. For hydrogen, carbon dioxide, and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of molar exergy and molar flow exergy when the temperature and pressure of the fuel are different from those of the environment. As an application of second-law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented. Data obtained by means of in situ thermal response tests are reported and evaluated by means of a finite-element simulation method, implemented through the software package COMSOL Multiphysics. The simulation method allows the determination of the precise values of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. Moving from the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a complete process for the plant design of a zero-carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to match 70% of the domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. This chapter includes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS for evaluating its energy requirements; 2) ground-coupled heat pumps modelled by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchangers by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy losses during the operation of the plants.
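The first-order evaluation of a thermal response test is commonly based on the infinite line-source approximation, in which the mean fluid temperature grows linearly with the logarithm of time and the slope yields the effective ground conductivity. The sketch below illustrates this classical evaluation under stated assumptions (the file name, column layout and plant parameters are hypothetical); the thesis itself refines such estimates with a COMSOL finite-element model rather than this simplified method:

```python
# Line-source evaluation of a thermal response test (TRT): fit
# T_f(t) = a*ln(t) + b to the late-time data; the effective ground
# conductivity then follows from the slope as lambda = Q / (4*pi*a*H),
# with Q the injected heat rate and H the borehole length.
import numpy as np
from scipy.optimize import curve_fit

t, T_f = np.loadtxt("trt_data.txt", unpack=True)  # time [s], fluid temp [degC]
late = t > 10 * 3600                              # discard the early transient

def line_source(t, a, b):
    return a * np.log(t) + b

(a, b), _ = curve_fit(line_source, t[late], T_f[late])
Q, H = 6000.0, 100.0             # heat injection [W], borehole length [m]
lam = Q / (4 * np.pi * a * H)    # effective thermal conductivity [W/(m K)]
print(f"effective ground conductivity: {lam:.2f} W/(m K)")
```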

Relevance:

30.00%

Publisher:

Abstract:

This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulations has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, includes the possibility of simulating the engine alone, the transmission and vehicle dynamics alone, or the engine together with the transmission and vehicle dynamics; in the latter case it allows the evaluation of the performance and operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of the higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured for both off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
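At its core, a zero-dimensional engine-plus-driveline co-simulation of this kind reduces to a fixed-step integration of the crankshaft dynamics driven by a torque map; the Python sketch below is purely illustrative (the torque map, inertia and load values are invented, and the actual tool is implemented in Simulink):

```python
# Fixed-step simulation loop: a zero-dimensional engine torque map
# drives a lumped rotational inertia against a resistant load torque.
import numpy as np

def engine_torque(throttle, rpm):
    """Hypothetical zero-dimensional torque map [Nm]."""
    return throttle * 150.0 * np.exp(-((rpm - 3500.0) / 2500.0) ** 2)

J = 0.25                  # lumped engine-side inertia [kg m^2]
load = 40.0               # resistant torque at the crankshaft [Nm]
dt, omega = 1e-3, 300.0   # time step [s], initial speed [rad/s]

for step in range(5000):
    rpm = omega * 60.0 / (2.0 * np.pi)
    T_e = engine_torque(throttle=0.6, rpm=rpm)
    omega += dt * (T_e - load) / J    # J * d(omega)/dt = T_e - T_load
print(f"steady-state speed of about {omega * 60 / (2 * np.pi):.0f} rpm")
```

In an HIL configuration, the throttle command would come from the real ECU outputs and the computed rpm would be fed back as an emulated sensor signal, all within the fixed real-time step.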

Relevance:

30.00%

Publisher:

Abstract:

Modern embedded systems are equipped with hardware resources that enable the execution of very complex applications such as audio and video decoding. The design of such systems must satisfy two opposing requirements. On the one hand, it is necessary to provide high computational power; on the other, stringent constraints on energy consumption must be respected. One of the most widespread trends in responding to these opposing needs is to integrate on a single chip a large number of processors characterized by a simplified design and low power consumption. However, to actually exploit the computational potential offered by an array of processors, application development methodologies must be heavily revisited. With the advent of multi-processor systems-on-chip (MPSoC), parallel programming has spread widely in the embedded domain as well. However, progress in parallel programming has not kept pace with the ability to integrate parallel hardware on a single chip. Besides the introduction of multiple processors, the need to reduce MPSoC power consumption leads to other architectural solutions that have the direct effect of complicating application development. The design of the memory subsystem, in particular, is a critical problem. Integrating memory banks on chip allows very short access times and very low power consumption. Unfortunately, the amount of on-chip memory that can be integrated in an MPSoC is very limited. For this reason it is necessary to add off-chip memory banks, which have a much greater capacity but also higher power consumption and access times. Most MPSoCs currently on the market devote part of the area budget to the implementation of cache and/or scratchpad memories. Scratchpad memories (SPM) are often preferred to caches in embedded MPSoC systems for reasons of greater predictability, smaller area occupation and, above all, lower power consumption. On the other hand, while the use of caches is completely transparent to the programmer, SPMs must be explicitly managed by the application. Exposing the organization of the memory hierarchy to the application allows its advantages (reduced access times and power consumption) to be exploited efficiently. In return, to obtain these benefits the applications must be written so that data are suitably partitioned and allocated on the various memories. The burden of this complex task obviously falls on the programmer. This scenario clearly illustrates the need for programming models and support tools that simplify the development of parallel applications. This thesis presents a framework for software development for embedded MPSoCs based on OpenMP. OpenMP is a de facto standard for programming shared-memory multiprocessors, characterized by a simple annotation-based approach to parallelization (compiler directives). Its programming interface makes it possible to express loop-level parallelism, which is widespread among embedded signal-processing and multimedia applications, in a natural and very efficient way. OpenMP is an excellent starting point for defining a programming model for MPSoCs, above all because of its ease of use.
On the other hand, to efficiently exploit the computational potential of an MPSoC, the implementation of the OpenMP support must be deeply revisited, both in the compiler and in the runtime environment. All the constructs for managing parallelism, work sharing and inter-processor synchronization entail an overhead cost that must be minimized so as not to compromise the benefits of parallelization. This can only be achieved through a careful analysis of the hardware characteristics and the identification of the potential bottlenecks in the architecture. An implementation of task management, barrier synchronization and data sharing that efficiently exploits the hardware resources yields high performance and scalability. Data sharing, in the OpenMP model, deserves particular attention. In a shared-memory model, the data structures (arrays, matrices) accessed by the program are physically allocated on a single memory resource reachable by all processors. As the number of processors in a system grows, concurrent access to a single memory resource constitutes an evident bottleneck. To relieve the pressure on the memories and on the interconnect, we study and propose techniques for partitioning data structures. These techniques require that a single array entity be treated in the program as a collection of many sub-arrays, each of which can be physically allocated on a different memory resource. From the program's point of view, addressing a partitioned array requires that, at every access, instructions be executed to recompute the physical destination address. Doing this by hand is clearly a long, complex and error-prone task. For this reason, our partitioning techniques have been integrated into the OpenMP programming interface, which has been significantly extended. Specifically, new directives and clauses allow the programmer to annotate the array data to be partitioned and allocated in a distributed fashion across the memory hierarchy. Support tools have also been developed to collect profiling information on the array access patterns. This information is exploited by our compiler to allocate the partitions on the various memory resources while respecting an affinity relation between tasks and data. More precisely, the allocation passes in our compiler assign a given partition to the scratchpad memory local to the processor hosting the task that performs the largest number of accesses to it.
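The address recomputation just described boils down to a tile lookup plus a modulo operation; the Python sketch below illustrates the translation the extended compiler would insert at each access (the affinity map and names are hypothetical, and the real implementation emits this logic as compiled code, not Python):

```python
# Hypothetical sketch of the address translation for a partitioned
# (tiled) array: a global index is mapped to the memory bank hosting
# its tile and the offset within that tile.

def translate(index, tile_size, bank_of_tile):
    """Map a global array index to (bank, local offset)."""
    tile = index // tile_size      # which sub-array the element falls in
    offset = index % tile_size     # position inside the sub-array
    return bank_of_tile[tile], offset

# Example: a 12-element array split into tiles of 4, with tiles pinned
# to processor scratchpads by an (assumed) profiling-driven affinity map.
bank_of_tile = {0: "SPM0", 1: "SPM2", 2: "SPM1"}
print(translate(7, 4, bank_of_tile))   # -> ("SPM2", 3)
```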

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work was to investigate the impact of different hybridization concepts and levels of hybridization on the fuel economy of a standard road vehicle, where both conventional and non-conventional hybrid architectures are treated in exactly the same way from the point of view of overall energy flow optimization. Hybrid component models were developed and presented in detail, as well as the simulation results, mainly for the NEDC cycle. The analysis was performed on four different parallel hybrid powertrain concepts: Hybrid Electric Vehicle (HEV), High Speed Flywheel Hybrid Vehicle (HSF-HV), Hydraulic Hybrid Vehicle (HHV) and Pneumatic Hybrid Vehicle (PHV). In order to perform an equitable analysis of the different hybrid systems, the comparison was also performed on the basis of the same usable system energy storage capacity (i.e., 625 kJ for the HEV, HSF-HV and HHV); in the case of the pneumatic hybrid system, however, the maximal storage capacity was limited by the size of the system in order to comply with the packaging requirements of the vehicle. The simulations were performed within the IAV GmbH VeLoDyn software simulator, based on the Matlab/Simulink software package. An advanced cycle-independent control strategy (ECMS) was implemented in the hybrid supervisory control unit in order to solve the power management problem for all hybrid powertrain solutions. In order to maintain the State of Charge within desired boundaries during different cycles, and to facilitate easy implementation and recalibration of the control strategy for very different hybrid systems, a Charge Sustaining Algorithm was added to the ECMS framework. Also, a Variable Shift Pattern VSP-ECMS algorithm was proposed as an extension of the ECMS capabilities, so as to include gear selection in the determination of the minimal (energy) cost function of the hybrid system. Further, a cycle-based energetic analysis was performed in all the simulated cases, and the results are reported in the corresponding chapters.
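ECMS reduces the instantaneous power-split decision to the minimization of an equivalent fuel power, P_eq = P_fuel + s * P_batt, where s is the equivalence factor pricing stored energy in fuel terms. The toy Python sketch below shows that inner minimization under stated assumptions (the fuel map, equivalence factor and candidate grid are invented, not the VeLoDyn implementation):

```python
# Toy ECMS inner loop: at a given demanded power, pick the battery
# power minimizing the equivalent fuel power P_eq = P_fuel + s*P_batt.
import numpy as np

def fuel_power(p_eng):
    """Hypothetical engine fuel power [W] vs. mechanical power [W]."""
    return np.where(p_eng > 0, 2000.0 + 2.4 * p_eng + 2e-5 * p_eng**2, 0.0)

def ecms_split(p_dem, s=2.7, p_batt_max=20e3):
    """Battery power [W] (positive = discharge) minimizing P_eq."""
    p_batt = np.linspace(-p_batt_max, p_batt_max, 801)  # candidate splits
    p_eng = np.clip(p_dem - p_batt, 0.0, None)          # engine covers the rest
    p_eq = fuel_power(p_eng) + s * p_batt               # equivalent fuel power
    return p_batt[np.argmin(p_eq)]

print(ecms_split(15e3))  # battery power chosen at a 15 kW power demand
```

In the full strategy, a charge-sustaining correction adapts s online so that the State of Charge stays within its boundaries over the cycle.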

Relevance:

30.00%

Publisher:

Abstract:

The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, for tight control of systematic uncertainties.

To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.

To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations. The results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E that was seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of the slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication for a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
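For reference, the standard Rosenbluth separation used here as a cross-check exploits the linear dependence of the reduced cross section on the virtual-photon polarization ε at fixed Q²; in the usual textbook notation (with τ = Q²/4m_p², not spelled out in the abstract):

$$\sigma_R \;=\; \frac{d\sigma/d\Omega}{\left(d\sigma/d\Omega\right)_{\mathrm{Mott}}}\,\varepsilon\,(1+\tau) \;=\; \varepsilon\, G_E^2(Q^2) + \tau\, G_M^2(Q^2), \qquad \varepsilon = \left[1 + 2(1+\tau)\tan^2(\theta/2)\right]^{-1},$$

so that, at fixed Q², a linear fit of σ_R versus ε yields G_E² as the slope and τ G_M² as the intercept.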

Relevance:

30.00%

Publisher:

Abstract:

The thesis describes the implementation of a calibration, format-translation and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulations, performance and software implementations. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by employing specific subroutines. Specific attention has been reserved for the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both the sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables proved to be an essential operation to perform on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous and all-around propagation-media calibration software is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described software is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC and TSAC products and the ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
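GNSS-based tropospheric calibrations of this kind conventionally decompose the slant path delay into hydrostatic and wet zenith components scaled by elevation-dependent mapping functions; in the usual notation (the formulas below are the standard ones, e.g. Saastamoinen's hydrostatic zenith delay, not expressions quoted from the thesis):

$$\Delta L(e) = \mathrm{ZHD}\cdot m_h(e) + \mathrm{ZWD}\cdot m_w(e), \qquad \mathrm{ZHD} = \frac{0.0022768\,P_0}{1 - 0.00266\cos 2\varphi - 0.28\times 10^{-6}\,h}\ \mathrm{m},$$

where P_0 is the surface pressure in hPa, φ the site latitude and h its height in metres; the wet zenith delay ZWD is the quantity estimated within the GNSS processing and then mapped to the spacecraft line of sight.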

Relevance:

30.00%

Publisher:

Abstract:

Digital evidence requires the same precautions as any other scientific examination. An overview is given of the methodological and practical aspects of digital forensics in light of the recent ISO/IEC 27037:2012 standard on the handling of digital evidence during the identification, collection, acquisition and preservation phases. These methodologies scrupulously comply with the integrity and authenticity requirements imposed by the legislation on digital forensics, in particular Law 48/2008 ratifying the Budapest Convention on Cybercrime. Regarding the crime of child pornography, a review of EU and national legislation is offered, with emphasis on the aspects relevant to forensic analysis. Since file sharing over peer-to-peer networks is the channel on which the exchange of illicit material is most concentrated, an overview of the most widespread protocols and systems is provided, with emphasis on the eDonkey network and the eMule software, which are widely used among Italian users. The problems encountered in the investigation and repression of the phenomenon, which fall within the competence of the police forces, are touched upon, before concentrating on the main contribution, concerning the forensic analysis of computer systems seized from suspects (or defendants) in child pornography cases: the design and implementation of eMuleForensic makes it possible to carry out extremely precise and fast analyses of the events that occur when the eMule file-sharing software is used; the software is available both online at http://www.emuleforensic.com and as a tool within the DEFT forensic distribution. Finally, a proposal for an operational protocol for the forensic analysis of computer systems involved in child pornography investigations is provided.
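One operation such an analysis must perform reliably is matching seized files against known ed2k links. The chunked MD4 scheme below (9,728,000-byte parts, MD4 per part, MD4 of the concatenated part hashes for multi-part files) is the documented ed2k hashing method; the code itself is a minimal illustrative sketch, not eMuleForensic's actual implementation, and MD4 availability in hashlib depends on the local OpenSSL build:

```python
# Minimal sketch of eMule's ed2k file hash (the eMule variant, which
# appends an empty trailing part when the size is an exact multiple
# of the chunk size).
import hashlib

CHUNK = 9728000  # ed2k part size in bytes

def ed2k_hash(path):
    part_hashes = []
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            part_hashes.append(hashlib.new("md4", data).digest())
            if len(data) < CHUNK:
                break
    if len(part_hashes) == 1:
        return part_hashes[0].hex()
    # Multi-part files: hash the concatenation of the part hashes.
    return hashlib.new("md4", b"".join(part_hashes)).digest().hex()
```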

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work is to provide an operational methodology, presented in the form of an organizational model structured by cases, that companies can use to define the immediate response actions to be taken when an information security event occurs, which, as we shall see, may turn into an information security incident. The structure of this model is mainly based on two standards produced by ISO/IEC and belonging to the 27000 family, which outlines the information security management system in a company and whose main purpose is the protection of the confidentiality, integrity and availability of corporate data. The content of these standards cannot, however, disregard the legal systems of the countries in which they are applied, which is why references to the legislation of relevant interest have been integrated into this work, especially the provisions related to privacy and to the cases examined within the developed model. The reference standards are therefore introduced first, in Chapter 1, followed by the description of concepts fundamental to the structuring of the organizational model, such as information security, incident response and digital forensics, which are presented in Chapter 2. Chapter 3 describes the regulatory aspects concerning the privacy of corporate data, also detailing the motivations that lead to the creation of the organizational model that is the objective of this work. Chapter 4 illustrates the proposed organizational model, which has a case-based structure and contains an analysis of the most relevant cases from the point of view of the company business. Finally, Chapter 5 describes the characteristics and functionality of a software application developed as a Windows Service, which arose from considerations based on the risk analyses carried out in Chapter 4.

Relevance:

30.00%

Publisher:

Abstract:

Resource management is of paramount importance in network scenarios and is a long-standing and still open issue. Unfortunately, while technology and innovation continue to evolve, our network infrastructure has been maintained in almost the same shape for decades; this phenomenon is known as “Internet ossification”. Software-Defined Networking (SDN) is an emerging paradigm in computer networking that allows a logically centralized software program to control the behavior of an entire network. This is done by decoupling the network control logic from the underlying physical routers and switches that forward traffic to the selected destination. One mechanism that allows the control plane to communicate with the data plane is OpenFlow. Network operators can write high-level control programs that specify the behavior of an entire network. Moreover, the centralized control makes it possible to define more specific and complex tasks that may involve many network functionalities, e.g., security, resource management and control, within a single framework. Nowadays, the explosive growth of real-time applications requiring stringent Quality of Service (QoS) guarantees leads network programmers to design network protocols that deliver certain performance guarantees. This thesis exploits the use of SDN in conjunction with OpenFlow to manage differentiated network services with high QoS. Initially, we define a QoS Management and Orchestration architecture that allows us to manage the network in a modular way. Then, we provide a seamless integration between the architecture and the standard SDN paradigm, following the separation between the control and data planes. This work is a first step towards the deployment of our proposal in the University of California, Los Angeles (UCLA) campus network with differentiated services and stringent QoS requirements. We also plan to exploit our solution to manage the handoff between different network technologies, e.g., Wi-Fi and WiMAX. Indeed, the model can be run with different parameters, depending on the communication protocol, and can provide optimal results to be implemented on the campus network.
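A purely illustrative sketch of the kind of high-level policy a centralized controller could compile into OpenFlow rules is shown below: service classes are mapped to switch output queues that enforce their rate guarantees. The rule structure mirrors OpenFlow's match/action model, but the class names, DSCP values and queue IDs are hypothetical, not the thesis's actual architecture:

```python
# Map service classes to OpenFlow match/action rules that steer
# traffic into per-class switch queues (queues themselves would be
# configured out of band, e.g. with rate limits on each port).

SERVICE_CLASSES = {
    # name: (DSCP value to match, switch queue id, nominal guarantee)
    "voice":       (46, 0, "10 Mbps"),
    "video":       (34, 1, "50 Mbps"),
    "best-effort": (0,  2, None),
}

def compile_flow_rules(dpid):
    """Build abstract match/action rules for one switch (dpid)."""
    rules = []
    for name, (dscp, queue, _) in SERVICE_CLASSES.items():
        rules.append({
            "switch": dpid,
            "match": {"eth_type": 0x0800, "ip_dscp": dscp},  # IPv4 + DSCP
            "actions": [{"set_queue": queue}, {"output": "NORMAL"}],
            "priority": 100 if dscp else 1,
        })
    return rules

for rule in compile_flow_rules(dpid=1):
    print(rule)
```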

Relevance:

30.00%

Publisher:

Abstract:

Magnetic Resonance Spectroscopy (MRS) is an advanced clinical and research application which provides a specific biochemical and metabolic characterization of tissues through the detection and quantification of key metabolites for diagnosis and disease staging. The Associazione Italiana di Fisica Medica (AIFM) has promoted the activity of the "Interconfronto di spettroscopia in RM" working group. The purpose of the study is to compare and analyze results obtained by performing MRS on scanners from different manufacturers, in order to compile a robust protocol for spectroscopic examinations in clinical routine. This thesis contributes to this project using the GE Signa HDxt 1.5 T scanner at Pavilion no. 11 of the S.Orsola-Malpighi hospital in Bologna. The spectral analyses have been performed with the jMRUI package, which includes a wide range of preprocessing and quantification algorithms for signal analysis in the time domain. After quality assurance on the scanner with standard and innovative methods, spectra both with and without suppression of the water peak were acquired on the GE test phantom. The comparison of the ratios of the metabolite amplitudes over Creatine computed by the workstation software, which works in the frequency domain, and by jMRUI shows good agreement, suggesting that quantifications in both domains may lead to consistent results. The characterization of an in-house phantom provided by the working group achieved its goal of assessing the solution content and the metabolite concentrations with good accuracy. The soundness of the experimental procedure and data analysis has been demonstrated by the correct estimation of the T2 of water, the observed biexponential relaxation curve of Creatine and the correct TE value at which the modulation by J coupling causes the Lactate doublet to be inverted in the spectrum. The work of this thesis has demonstrated that it is possible to perform measurements and establish protocols for data analysis, based on the physical principles of NMR, which are able to provide robust values for the spectral parameters of clinical use.
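The water T2 estimation mentioned above amounts to fitting a monoexponential decay S(TE) = S0 exp(-TE/T2) to peak amplitudes acquired at several echo times; the Python sketch below illustrates the fit with invented amplitudes (the thesis quantifies the actual amplitudes with jMRUI in the time domain, and Creatine required a biexponential model instead):

```python
# Monoexponential T2 fit to multi-TE peak amplitudes (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

TE = np.array([30., 60., 100., 150., 200., 300.])  # echo times [ms]
S  = np.array([95., 78., 60., 43., 30., 16.])      # peak amplitudes (a.u.)

def mono_exp(te, s0, t2):
    return s0 * np.exp(-te / t2)

(s0, t2), _ = curve_fit(mono_exp, TE, S, p0=(100.0, 150.0))
print(f"estimated water T2 of about {t2:.0f} ms")
```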

Relevance:

30.00%

Publisher:

Abstract:

Security problems in software are on the rise, and the analysis tools adopted on GNU/Linux systems do not make it possible to highlight the vulnerability windows to which a package has been exposed. The objective of this thesis is to develop a computer forensics tool capable of reconstructing, by cross-referencing information obtained from the package manager with official security advisories, the security problems that may have caused a compromise of the system under examination.
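A minimal sketch of the cross-referencing idea, under stated assumptions, is given below: package install/upgrade events are read from a dpkg-style log, and each advisory is modeled as a (package, fixed_version, published_date) record. The advisory format and the helper names are hypothetical; real advisories (e.g. DSAs) and Debian version ordering need dedicated handling:

```python
# Reconstruct the window in which a package was installed in a
# version known to be vulnerable, by crossing package-manager
# history with a security advisory.
from datetime import datetime

def parse_dpkg_log(lines):
    """Yield (timestamp, package, version) for install/upgrade events."""
    for line in lines:
        parts = line.split()
        if len(parts) >= 5 and parts[2] in ("install", "upgrade"):
            ts = datetime.strptime(parts[0] + " " + parts[1],
                                   "%Y-%m-%d %H:%M:%S")
            yield ts, parts[3].split(":")[0], parts[-1]

def vulnerability_window(events, advisory):
    """Return (start, end) of the exposure window, or None.
    end is None if no fixed version was ever installed."""
    pkg, fixed_version, published = advisory
    start = end = None
    for ts, name, version in sorted(events):
        if name != pkg:
            continue
        # NOTE: plain string comparison is a stand-in; real Debian
        # version ordering needs e.g. python-apt.
        if version < fixed_version and start is None:
            start = max(ts, published)   # window opens at disclosure
        elif version >= fixed_version and start is not None:
            end = ts                     # fixed package installed
            break
    return (start, end) if start else None
```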