894 results for Scaling Criteria
Abstract:
Constructing ontology networks typically occurs at design time at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without tampering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks. In many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, this should occur only for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and can reduce the number of erroneously interpreted axioms, under certain raw statement distributions across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of a pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components we put on top of traditional ontology networks, and the framework's caching capabilities.
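A minimal sketch of the 3-tier layering idea described above, assuming hypothetical tier names (OntologySpace, Scope, Session) and representing ontologies as plain axiom lists; the actual implementation is Java and RESTful, so this Python fragment only illustrates how several virtual networks can share stored ontologies without mutating them.

```python
# Hypothetical sketch of a 3-tier virtual ontology network: stored
# ontologies stay immutable; each tier only holds references, so many
# virtual networks can share the same underlying ontologies.

class OntologySpace:          # bottom tier: immutable stored ontologies
    def __init__(self, ontologies):
        self._ontologies = tuple(ontologies)   # never mutated

    def axioms(self):
        for onto in self._ontologies:
            yield from onto

class Scope:                  # middle tier: groups spaces per application
    def __init__(self, name, spaces):
        self.name, self.spaces = name, list(spaces)

class Session:                # top tier: a per-request virtual network
    def __init__(self, scopes):
        self.scopes = list(scopes)

    def axioms(self):
        # A session sees the union of its scopes' spaces without copying
        # or altering any stored ontology ("multiplexing").
        for scope in self.scopes:
            for space in scope.spaces:
                yield from space.axioms()

# usage: two sessions share the same stored ontologies
shared = OntologySpace([["A subClassOf B"], ["hasPart domain A"]])
s1 = Session([Scope("app1", [shared])])
s2 = Session([Scope("app2", [shared])])
assert list(s1.axioms()) == list(s2.axioms())
```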
Abstract:
Since the percentage of separate waste collection, taken on its own, is an insufficient indicator of a municipality's virtuosity in waste management, a multi-criteria indicator was developed that broadens the analysis to the other aspects of waste management. Eight criteria were identified: percentage of separate waste collection; per capita production of residual (unsorted) waste; per capita production of total waste; environmental impact of the waste collection and treatment system; cost of the service; traceability of household waste; involvement of the population; and convenience for the citizen. Each municipality analyzed (the case study is the Unione Terre di Castelli) is assigned a score for each criterion, which is then multiplied by the weight attributed to that criterion. The scores for each criterion were normalized on a scale from 0 to 1 with the help of experts in each field; the weights were determined with the Pairwise Comparison methodology (T. Saaty, 1980) by the mayors and administrators of all the municipalities in the case study. The indicator thus constructed was then applied to the case-study municipalities, showing results, in terms of virtuosity, that differ from those produced by the separate-collection indicator alone, thereby highlighting the importance of a multidisciplinary approach to waste management. By showing the scores and the margin for improvement on each criterion, the indicator also proved to be an effective decision-support tool for municipalities in directing investments in waste management.
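As an illustration of the scoring scheme just described, the following sketch derives criterion weights from a Saaty-style pairwise comparison matrix via the principal eigenvector and combines them with normalized scores into a composite indicator; the matrix, the reduction to three criteria, and the scores are invented for brevity, not taken from the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal eigenvector of the pairwise comparison matrix, normalized."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# 3 criteria only, for brevity (the study uses 8); entry [i, j] is the
# judged relative importance of criterion i over criterion j.
pairwise = np.array([[1,   3,   5],
                     [1/3, 1,   2],
                     [1/5, 1/2, 1]])
weights = ahp_weights(pairwise)

scores = np.array([0.8, 0.4, 0.6])   # one municipality's normalized scores
composite = float(weights @ scores)  # weighted sum = composite indicator
print(weights.round(3), round(composite, 3))
```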
Abstract:
This study focuses on the radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized. Then, results for the characterization of the plasma source and for the investigation of the nanoparticle synthesis phenomenon are presented, aiming at highlighting fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy-probe measurements to validate the temperature field predicted by the model, which is used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles, and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed; by employing models to describe particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized Si solid precursor in a laboratory-scale ICP system is investigated. Finally, a discussion of the role of thermo-fluid-dynamic fields in nanoparticle formation is presented, as well as a study of the effect of the reaction-chamber geometry on the produced nanoparticle characteristics and process yield.
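A minimal sketch of the calorimetric energy balance mentioned above, under the standard assumption that the power removed by a cooling-water circuit is Q = m_dot * c_p * dT; the flow rates, temperature rises, and plate power below are placeholders, not measurements from the study.

```python
# Calorimetric energy balance: power carried away by each cooling-water
# circuit, and torch efficiency as the fraction of input power not lost
# to the torch coolant. All numbers are illustrative placeholders.

CP_WATER = 4186.0  # specific heat of water, J/(kg K)

def coolant_power(m_dot_kg_s, dT_K):
    """Power removed by a water circuit, in W."""
    return m_dot_kg_s * CP_WATER * dT_K

p_input = 30e3                              # plate power, W (assumed)
q_torch = coolant_power(0.20, 12.0)         # torch body circuit
q_chamber = coolant_power(0.35, 8.0)        # reaction chamber circuit

efficiency = (p_input - q_torch) / p_input
print(f"torch losses {q_torch/1e3:.1f} kW, "
      f"chamber losses {q_chamber/1e3:.1f} kW, "
      f"torch efficiency {efficiency:.2f}")
```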
Abstract:
Semiconductor nanowires (NWs) are one-dimensional or quasi one-dimensional systems whose physical properties are unique compared to bulk materials because of their nanoscale size. They bring together the quantum world and semiconductor devices. NW-based technologies may achieve an impact comparable to that of current microelectronic devices if new challenges are faced. This thesis primarily focuses on two different, cutting-edge aspects of research on semiconductor NW arrays as pivotal components of NW-based devices. The first part deals with the characterization of electrically active defects in NWs. A general procedure was developed that enables Deep Level Transient Spectroscopy (DLTS) to probe the defects of NW arrays. This procedure was applied to characterize a specific system, namely Schottky barrier diodes based on Reactive Ion Etched (RIE) silicon NW arrays. This study has shed light on how, and whether, growth conditions introduce defects in RIE-processed silicon NWs. The second part of this thesis concerns the bowing induced by electron beams and the subsequent clustering of gallium arsenide NWs. After a justified rejection of the mechanisms previously reported in the literature, an original interpretation of the electron-beam-induced bending is illustrated. Moreover, this thesis successfully interprets the formation of NW clusters in the framework of the lateral collapse of fibrillar structures. The latter are both idealized models and actual artificial structures used to study and mimic the adhesion properties of natural surfaces in lizards and insects (gecko effect). Our conclusions are that the mechanical and surface properties of the NWs, together with the geometry of the NW arrays, play a key role in their post-growth alignment. The same parameters then open up the benign possibility of locally engineering NW arrays in micro- and macro-templates.
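For context, the sketch below shows the standard Arrhenius analysis underlying DLTS, in which the emission rate of a deep level, e_n = gamma * sigma * T^2 * exp(-E_a / kT), yields the trap's activation energy and capture cross-section from a linear fit of ln(e_n / T^2) versus 1/(kT); the constant gamma and the data are synthetic, and this is not the thesis' own procedure for NW arrays.

```python
import numpy as np

K_B = 8.617e-5          # Boltzmann constant, eV/K
GAMMA = 1.0e21          # material constant, cm^-2 s^-1 K^-2 (assumed)

# Synthetic "measurements": emission rates at four temperatures for a
# trap with known parameters, generated from the emission-rate formula.
T = np.array([180.0, 200.0, 220.0, 240.0])         # K
Ea_true, sigma_true = 0.40, 1e-15                  # eV, cm^2
e_n = GAMMA * sigma_true * T**2 * np.exp(-Ea_true / (K_B * T))

# Linear fit of ln(e_n / T^2) vs 1/(kT): slope = -Ea, intercept = ln(gamma*sigma)
slope, intercept = np.polyfit(1.0 / (K_B * T), np.log(e_n / T**2), 1)
Ea = -slope
sigma = np.exp(intercept) / GAMMA
print(f"Ea = {Ea:.2f} eV, sigma = {sigma:.1e} cm^2")
```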
Abstract:
This work analyzes aspects of game theory and of multi-criteria decision-making. The discussion lays the groundwork for a new routing-protocol model for Mobile Ad-hoc Networks. The prototype aims to generate a network capable of managing itself optimally through a refined clustering technique, while also pursuing energy savings and the collaborative participation of all its members.
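A toy sketch of the kind of energy-aware cluster-head election such a protocol could build on; the fitness function, its weights, and the node fields are invented for illustration and are not the thesis' actual model.

```python
# Each node gets a fitness combining residual energy and connectivity;
# the fittest node in each one-hop neighbourhood becomes a cluster head.
# Weights and fields are illustrative assumptions.

def fitness(node, w_energy=0.7, w_degree=0.3):
    return w_energy * node["energy"] + w_degree * node["degree"]

def elect_cluster_heads(nodes, neighbours):
    heads = set()
    for n_id in nodes:
        local = [n_id] + neighbours[n_id]          # node plus its neighbours
        best = max(local, key=lambda i: fitness(nodes[i]))
        heads.add(best)                            # local maximum wins
    return heads

nodes = {
    1: {"energy": 0.9, "degree": 3},
    2: {"energy": 0.5, "degree": 4},
    3: {"energy": 0.7, "degree": 2},
}
neighbours = {1: [2], 2: [1, 3], 3: [2]}
print(elect_cluster_heads(nodes, neighbours))      # -> {2}
```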
Abstract:
A method for the automatic scaling of oblique ionograms has been introduced. The method also provides a rejection procedure for ionograms considered to lack sufficient information, and it shows a very good success rate. Examining the Kp index associated with each autoscaled ionogram shows that the behavior of the autoscaling program does not depend on geomagnetic conditions. The comparison between the MUF values provided by the presented software and those obtained by an experienced operator indicates that the procedure developed for detecting the nose of oblique ionogram traces is sufficiently efficient, and becomes much more efficient as the quality of the ionograms improves. These results demonstrate that the program allows the real-time evaluation of the MUF values associated with a particular radio link through an oblique radio sounding. The automatic recognition of part of the trace allows determining, for certain frequencies, the time taken by the radio wave to travel the path between the transmitter and the receiver. The reconstruction of the ionogram traces suggests the possibility of estimating the electron density between the transmitter and the receiver from an oblique ionogram. The results shown were obtained with a ray-tracing procedure based on the integration of the eikonal equation, using an analytical ionospheric model with free parameters. This indicates the possibility of applying an adaptive model and a ray-tracing algorithm to estimate the electron density in the ionosphere between the transmitter and the receiver. An additional study was conducted on a data set of high-quality ionospheric soundings, and another algorithm was designed for the conversion of an oblique ionogram into a vertical one using Martyn's theorem. This allows a further analysis of oblique soundings through the use of the INGV Autoscala program for the automatic scaling of vertical ionograms.
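A simplified sketch of the oblique-to-vertical conversion via Martyn's theorem under the flat-Earth secant law: the equivalent virtual height follows from the group path and the ground range, and the equivalent vertical frequency is the oblique frequency scaled by the cosine of the incidence angle. Curvature and correction factors used in practice are omitted; the input values are illustrative.

```python
import math

def oblique_to_vertical(f_oblique_mhz, group_path_km, ground_range_km):
    """Return (equivalent vertical frequency in MHz, virtual height in km)."""
    half_path = group_path_km / 2.0
    half_range = ground_range_km / 2.0
    # Martyn's theorem: reflection at the apex of the equivalent triangle
    h_virtual = math.sqrt(half_path**2 - half_range**2)
    cos_incidence = h_virtual / half_path          # flat-Earth geometry
    return f_oblique_mhz * cos_incidence, h_virtual  # secant law

f_v, h = oblique_to_vertical(12.0, 1200.0, 1000.0)
print(f"f_v = {f_v:.2f} MHz at h' = {h:.0f} km")
```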
Abstract:
The aim of this research study is to explore the opportunity to set up Performance Objective (PO) parameters for specific risks in Ready-to-Eat (RTE) products, to be proposed to food industries and food authorities. In fact, even though microbiological criteria for Salmonella and Listeria monocytogenes in RTE products are included in the European Regulation, these parameters are not risk based, and no microbiological criterion for Bacillus cereus in RTE products exists. For these reasons, the behaviour of Salmonella enterica in RTE mixed salad, the microbiological characteristics of RTE spelt salad, and the definition of POs for Bacillus cereus and Listeria monocytogenes in RTE spelt salad have been assessed. Based on the data produced, the following conclusions can be drawn: 1. Rapid growth of Salmonella enterica may occur in mixed-ingredient salads, so strict temperature control during the production chain of the product is critical. 2. Spelt salad is characterized by the presence of high numbers of Lactic Acid Bacteria (LAB). Listeria spp. and Enterobacteriaceae, on the contrary, did not grow during the shelf life, probably due to the relevant metabolic activity of the LAB. 3. The use of spelt and cheese compliant with the suggested POs might significantly reduce the incidence of foodborne intoxications due to Bacillus cereus and Listeria monocytogenes, as well as the proportion of recalls, which cause huge economic losses for food companies commercializing RTE products. 4. The approach to calculating the PO values reported in this work can easily be adapted to different food/risk combinations, as well as to any changes in the formulation of the same food products. 5. Optimized sampling plans, in terms of the number of samples to collect, can be derived in order to verify compliance with the selected PO values (see the sketch below).
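As a worked example of point 5, the sketch below derives the number of samples needed to verify compliance with a PO under a generic zero-acceptance sampling assumption (reject the lot if any of n samples exceeds the PO), giving n = log(1 - confidence) / log(1 - p); this is a textbook acceptance-sampling formula, not necessarily the exact derivation used in the study.

```python
import math

def samples_needed(p_noncompliant, confidence=0.95):
    """Samples required to detect, with the given confidence, a lot in
    which a proportion p_noncompliant of units exceeds the PO."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_noncompliant))

for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: n = {samples_needed(p)}")
# p = 0.01: n = 299 ; p = 0.05: n = 59 ; p = 0.10: n = 29
```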
Abstract:
This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism, based on the grand canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that the Hamiltonian and density matrices are sparse due to localization. This reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000 - 8000 K). In the simulation, methane dissociates at temperatures above 4000 K. The formation of sp²-bonded polymeric carbon is observed. The simulations provide no evidence for the formation of diamond and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also addressed. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing the zeros of functions. It is proven that the order of convergence is always at least quadratic, and that adaptive tuning of a parameter q leads to better results in all cases.
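For illustration, the sketch below implements a textbook Newton iteration for the inverse p-th root of a symmetric positive definite matrix, which reduces to the classical Newton-Schulz iteration for p = 1; it is a simple fixed-order variant, not the thesis' generalized formula with its adaptive parameter q.

```python
import numpy as np

def inverse_pth_root(A, p, iters=50):
    """Newton iteration X <- X((p+1)I - X^p A)/p for A^(-1/p), A SPD."""
    n = A.shape[0]
    I = np.eye(n)
    # Scalar starting guess X0 = c*I keeps all iterates polynomials in A
    # and places every eigenvalue inside the convergence region.
    X = I / np.linalg.norm(A, 2) ** (1.0 / p)
    for _ in range(iters):
        X = X @ ((p + 1) * I - np.linalg.matrix_power(X, p) @ A) / p
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)              # SPD test matrix
X = inverse_pth_root(A, p=2)             # X should satisfy X^2 A = I
print(np.allclose(np.linalg.matrix_power(X, 2) @ A, np.eye(4), atol=1e-8))
```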
Abstract:
Symptom development during the prodromal phase of psychosis was explored retrospectively in first-episode psychosis patients, with special emphasis on the assumed time-related syndromic sequence of "unspecific symptoms (UN)-predictive basic symptoms (BS)-attenuated psychotic symptoms (APS)-(transient) psychotic symptoms (PS)." Onset of syndromes was defined by the first occurrence of any of their respective symptoms. Group means were inspected for time differences between syndromes and for the influence of sociodemographic and clinical characteristics on the recalled sequence. The sequence "UN-BS/APS-PS" was clearly supported, and both BS and, though slightly less so, APS were highly sensitive. However, the onsets of BS and APS did not show a significant time difference in the whole sample (N = 126; 90% schizophrenia), although, when each symptom was considered independently, APS tended to occur later than the first predictive BS. At the descriptive level, about one-third each recalled an earlier, equal, or later onset of BS compared with APS. Level of education showed the greatest impact on the recall of the hypothesized sequence: those with a higher school-leaving certificate supported the assumed sequence, whereas those with a low educational background retrospectively dated APS before BS. These findings point to recognition and recall biases inherent in the retrospective design rather than to true group characteristics. Future long-term prospective studies will have to explore this conclusively. However, as regards the criteria, the results support the notion of BS as at least a complementary approach to the ultra-high-risk criteria, which may also allow for an earlier detection of psychosis.
Abstract:
The Pulmonary Embolism Rule-out Criteria (PERC) rule is a clinical diagnostic rule designed to exclude pulmonary embolism (PE) without further testing. We sought to externally validate the diagnostic performance of the PERC rule alone and combined with clinical probability assessment based on the revised Geneva score.
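For reference, a hedged sketch of the eight PERC criteria as commonly stated (age < 50 years, pulse < 100/min, SaO2 >= 95%, no hemoptysis, no estrogen use, no prior venous thromboembolism, no unilateral leg swelling, no recent surgery or trauma requiring hospitalization); the field names are illustrative, and the rule is meant to apply only to patients with low clinical probability.

```python
# PE can be ruled out without further testing only when all eight
# criteria are met in a low-probability patient. Field names are
# illustrative assumptions, not a published data schema.

def perc_negative(pt):
    return all([
        pt["age"] < 50,
        pt["heart_rate"] < 100,
        pt["sao2_percent"] >= 95,
        not pt["hemoptysis"],
        not pt["estrogen_use"],
        not pt["prior_vte"],
        not pt["unilateral_leg_swelling"],
        not pt["recent_surgery_or_trauma"],
    ])

patient = dict(age=34, heart_rate=88, sao2_percent=98, hemoptysis=False,
               estrogen_use=False, prior_vte=False,
               unilateral_leg_swelling=False, recent_surgery_or_trauma=False)
print(perc_negative(patient))   # True -> PERC rule negative
```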
Abstract:
Raltegravir (RAL) achieved remarkable virologic suppression rates in randomized clinical trials, but efficacy data and factors associated with treatment failure in a routine clinical care setting remain limited.
Abstract:
The aim of the study was to evaluate the impact of smoking on a prolonged chlorhexidine digluconate regimen after scaling and root planing. Forty-two smokers (test group) and 85 nonsmoking patients (control group) with generalized chronic periodontitis were examined for clinical attachment level (CAL), probing depth (PD), bleeding on probing (BoP), and Plaque Index (Pl) at baseline and after 1 and 3 months. During scaling and root planing, a 0.2% chlorhexidine digluconate solution and a 1% chlorhexidine digluconate gel were used. The subjects used a 0.2% chlorhexidine digluconate solution twice daily for 3 months. The Mann-Whitney U and Wilcoxon tests were used for statistical analysis. There were significant improvements in all studied variables after 1 and 3 months in both groups. After 3 months, the mean improvement in the test group was 1.62 mm for CAL, 2.85 mm for PD, and 48% for BoP; in the control group, the values were 2.18 mm for CAL, 2.81 mm for PD, and 47% for BoP. Only the maximum changes between 1 and 3 months in CAL (test group, 0.32 mm vs 0.69 mm in the control group) and in PD (test group, 0.47 mm vs 0.76 mm in the control group) were significantly different between the groups (P < .05 and P = .05, respectively). The present data appear to suggest that the use of chlorhexidine digluconate twice daily during a period of 3 months following nonsurgical periodontal therapy may result in significant clinical improvements in smokers and nonsmokers.
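Purely as an illustration of the two tests named above, the sketch below applies the Mann-Whitney U test to a between-group comparison (smokers vs nonsmokers, independent samples) and the Wilcoxon signed-rank test to a paired within-group comparison (baseline vs 3 months); all values are made up, not the study's data.

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Between-group comparison: CAL improvement, independent samples
cal_change_smokers = [1.4, 1.7, 1.5, 1.9, 1.6]      # mm, test group
cal_change_nonsmokers = [2.0, 2.3, 2.1, 2.4, 2.2]   # mm, control group
u_stat, p_between = mannwhitneyu(cal_change_smokers, cal_change_nonsmokers)

# Within-group comparison: CAL at baseline vs after 3 months, paired
cal_baseline = [6.1, 5.8, 6.4, 6.0, 6.3]
cal_3months = [4.5, 4.2, 4.6, 4.4, 4.7]
w_stat, p_within = wilcoxon(cal_baseline, cal_3months)

print(f"between groups p = {p_between:.3f}, within group p = {p_within:.3f}")
```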
Abstract:
To investigate the impact of full-mouth scaling (FMS) and conventional scaling and root planing (cSRP) on microbiologic variables after 12 months.