933 results for Operator Error
Abstract:
The thesis presents the Wiener regularity criterion in the classical setting of the Laplace operator, and then some notions of potential theory and the proof of the criterion for the heat operator; in this second part, particular attention is devoted to mean value formulas and to a strong Harnack inequality, which are fundamental in the treatment of the central topic.
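For reference, a hedged sketch of the classical criterion the thesis starts from, stated for the Laplace operator in $\mathbb{R}^n$, $n \ge 3$ (the dyadic normalization of the capacity series is a conventional choice of this sketch, not quoted from the thesis):

```latex
% Classical Wiener criterion (sketch): a boundary point x_0 of a bounded open
% set \Omega is regular for the Dirichlet problem for the Laplacian if and
% only if the capacity series below diverges (cap = Newtonian capacity).
\[
  x_0 \in \partial\Omega \ \text{is regular}
  \iff
  \sum_{k=1}^{\infty} 2^{k(n-2)}\,
  \mathrm{cap}\!\left( \overline{B}(x_0, 2^{-k}) \setminus \Omega \right) = \infty .
\]
```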
Abstract:
There is hardly a more precise description of nature than that provided by the Standard Model of elementary particles (SM). With few exceptions, it is able to describe the physics of matter and gauge fields. Nevertheless, there is interest in a more comprehensive theory that, for example, also incorporates gravity, describes neutrino oscillations, and resolves further open questions. To come a step closer to such a theory, this work deals with an effective power-series ansatz for describing the physics of the Standard Model and of new phenomena. New Physics is parametrized by means of a mass parameter and a set of new coupling constants. At lowest order one recovers the familiar SM; higher-order terms in the coupling constants describe effects beyond the SM. Certain symmetry requirements yield a definite number of effective operators of mass dimension six, which underlie the calculations presented here. We first compute, for a selected set of processes, the corresponding decay widths and cross sections in a model that extends the SM by a single new effective operator. Under the assumption that the additional contribution to an observable lies within the experimental measurement error, we use available experimental results from leptonic and semileptonic precision measurements to derive exclusion limits on the new couplings as a function of the mass parameter. The results presented here enable physicists, on the one hand, to judge for which measured observables an increase in precision would be worthwhile in order to obtain better exclusion limits, and, on the other hand, to identify which processes are interesting with regard to discoveries of New Physics.
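A minimal sketch of the power-series ansatz described above, in standard effective-field-theory notation (the symbols $\Lambda$, $c_i$ and $O_i^{(6)}$ are conventional placeholders, not taken from the abstract):

```latex
% The SM is recovered at lowest order; dimension-six operators O_i^{(6)},
% weighted by new couplings c_i and suppressed by a mass scale \Lambda,
% parametrize the effects of New Physics.
\[
  \mathcal{L}_{\mathrm{eff}}
  = \mathcal{L}_{\mathrm{SM}}
  + \frac{1}{\Lambda^{2}} \sum_i c_i\, O_i^{(6)}
  + \mathcal{O}\!\left(\frac{1}{\Lambda^{4}}\right).
\]
```

At leading order, the shift of an observable then scales as $c_i/\Lambda^2$, which is why exclusion limits on the new couplings are quoted as a function of the mass parameter.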
Abstract:
The space environment has always been one of the most challenging for communications, both at the physical and at the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays, and relatively frequent losses. Because of these problems, the standard TCP/IP suite protocols are hardly applicable. Moreover, in space scenarios reliability is fundamental: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols such as TCP, reliability is obtained by means of ARQ (Automatic Repeat reQuest), which, however, does not perform well when the transmission channel has long delays. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, when single bits are flipped by channel noise, the redundancy bits can be exploited to recover the original information. On binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information; FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at the upper layers, i.e., applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol designed to add reliability to DTN networks is the Licklider Transmission Protocol (LTP), intended to achieve better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
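As an illustration of how packet-level redundancy can replace retransmissions, here is a minimal single-parity erasure-code sketch in Python (function names and the toy framing are hypothetical; practical ECs for LTP, such as LDPC or Raptor codes, are far more elaborate):

```python
# One XOR parity packet protects a block of k equal-length data packets and
# can repair exactly one erased packet without any retransmission (ARQ).
from functools import reduce

def make_parity(packets: list[bytes]) -> bytes:
    """XOR all data packets of a block into a single parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Recover at most one missing packet (indices 0..k-1) from the parity."""
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        repaired = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                          received.values(), parity)
        received[missing[0]] = repaired
    return received

if __name__ == "__main__":
    data = [b"pkt0----", b"pkt1----", b"pkt2----"]
    parity = make_parity(data)
    got = {0: data[0], 2: data[2]}            # packet 1 erased by the channel
    print(recover(got, parity, k=3)[1])       # b'pkt1----', no retransmission needed
```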
Abstract:
In technical design processes in the automotive industry, digital prototypes rapidly gain importance because they allow design errors to be detected in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With an explicit construction of the swept volume, an engineer gets evidence on how the shape of components that come too close has to be modified. In this thesis a concept for approximating the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation be conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. We show that the one-sided Hausdorff distance is the adequate error measure for the approximation when the intended uses are clearance checks, continuous collision detection, and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms have two phases: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while preserving conservativeness. The benchmarks for our tests include, among others, real-world scenarios from the automotive industry. Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize the meshes resulting from our implementations.
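For reference, a sketch of the error measure mentioned above: the one-sided Hausdorff distance from the approximation to the exact swept volume boundary (the set names $A$ and $S$ are ours, chosen for this sketch):

```latex
% One-sided Hausdorff distance from A (approximated boundary) to S (exact
% boundary): the largest distance by which a point of A deviates from S.
% For a conservative approximation, a small value guarantees tight clearance
% checks and collision tests.
\[
  d_{H}^{\rightarrow}(A, S) \;=\; \sup_{a \in A}\, \inf_{s \in S}\, \lVert a - s \rVert .
\]
```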
Abstract:
The uncertainty in the determination of the stratigraphic profile of natural soils is one of the main problems in geotechnics, in particular for landslide characterization and modeling. This study deals with a new approach to geotechnical modeling that relies on the stochastic generation of different soil layer distributions following a boolean logic – the method has therefore been called BoSG (Boolean Stochastic Generation). In this way, it is possible to randomize the presence of a specific material interdigitated in a uniform matrix. When building a geotechnical model, it is common to discard some stratigraphic data in order to simplify the model itself, assuming that the significance of the modeling results will not be affected. With the proposed technique it is possible to quantify the error associated with this simplification. Moreover, the technique can be used to determine the most significant zones, where possible further investigations and surveys would be most effective for building the geotechnical model of the slope. The commercial software FLAC was used for the 2D and 3D geotechnical models. The distribution of the materials was randomized through a specifically coded MatLab program that automatically generates text files, each representing a specific soil configuration. In addition, a routine was designed to automate the FLAC computations with the different data files in order to maximize the sample size. The methodology is applied to a simplified slope in 2D, a simplified slope in 3D, and an actual landslide, namely the Mortisa mudslide (Cortina d’Ampezzo, BL, Italy). However, it could be extended to numerous different cases, especially for hydrogeological analyses and landslide stability assessments, in different geological and geomorphological contexts.
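A minimal sketch of the boolean randomization idea, written in Python only for illustration (the thesis uses a MATLAB program that writes FLAC input files; grid size, inclusion probability and file format below are assumptions of this sketch):

```python
# Boolean stochastic generation sketch: cells of a uniform matrix material (0)
# are switched to an interdigitated material (1) at random, and each
# realization is written to its own text file so that an external solver
# (e.g. FLAC) can be run in batch over many configurations.
import random

def generate_realization(rows: int, cols: int, p: float, seed: int) -> list[list[int]]:
    """Return a rows x cols grid where each cell is material 1 with probability p."""
    rng = random.Random(seed)
    return [[1 if rng.random() < p else 0 for _ in range(cols)] for _ in range(rows)]

def write_realization(grid: list[list[int]], path: str) -> None:
    """Dump the grid as whitespace-separated integers, one row per line."""
    with open(path, "w") as f:
        for row in grid:
            f.write(" ".join(str(cell) for cell in row) + "\n")

if __name__ == "__main__":
    for n in range(10):                       # 10 stochastic soil configurations
        grid = generate_realization(rows=20, cols=50, p=0.15, seed=n)
        write_realization(grid, f"soil_config_{n:03d}.txt")
```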
Abstract:
The thesis describes the Network Diffusion Model, i.e., the model by A. Ray, A. Kuceyeski, M. Weiner concerning the mechanisms of progression of senile dementia. In this model the healthy brain is approximated by a brain network (i.e., a weighted graph), a generic disease factor is identified, and its propagation, which occurs through mechanisms analogous to those of a prion infection, is analyzed. The progression of the disease factor and the macroscopic consequences of this process (chiefly cortical atrophy) are then described through a mathematical approach. The theoretical results are compared with what is observed experimentally in patients affected by senile dementia. The thesis also provides an overview of recent studies on neurodegenerative processes and builds the mathematical framework of reference for the model under examination. An overview of finite graphs is presented, the Laplace operator on graphs is introduced, and upper and lower bounds for its eigenvalues are given. In order to build a complete mathematical framework, the relation between the discrete and the continuous case is analyzed: the Laplace-Beltrami operator on compact Riemannian manifolds is described, and upper bounds for the eigenvalues of the Laplace-Beltrami operator associated with such manifolds are derived from the upper bounds for the eigenvalues of the Laplacian on finite graphs.
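For orientation, a hedged sketch of the diffusion dynamics on the brain graph in the spirit of the Network Diffusion Model (the symbols $H$, $\beta$ and $\phi$ are conventional notation, not quoted from the thesis):

```latex
% Disease factor \phi(t) diffusing on a weighted brain graph with graph
% Laplacian H = D - A (degree matrix minus adjacency matrix); \beta > 0 is a
% diffusivity constant.
\[
  \frac{d\phi(t)}{dt} = -\beta\, H\, \phi(t)
  \qquad\Longrightarrow\qquad
  \phi(t) = e^{-\beta H t}\, \phi(0).
\]
```

The long-time behaviour is governed by the smallest eigenvalues and corresponding eigenvectors of $H$, which is one reason eigenvalue bounds for the graph Laplacian play a central role in the mathematical framework.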
Abstract:
This work describes the design, implementation, and experimental testing of a mechanism, integrated into the Linux 4.0 kernel, dedicated to the detection of Wi-Fi frame losses.
Abstract:
Polar Codes are the first class of error-correcting codes proven to achieve capacity for every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "Channel Polarization". In this thesis the main encoding and decoding algorithms are described in detail. In particular, the performance of the simulators developed for the "Successive Cancellation Decoder" and the "Successive Cancellation List Decoder" is compared with the results reported in the literature. In order to improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as inner code and a CRC as outer code. We also propose a new technique to analyze channel polarization in the case of transmission over the AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
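As a pointer to what channel polarization means, a minimal sketch of a single polarization step for the binary erasure channel (the thesis works on the AWGN channel, so the BEC formulas below are only the standard textbook illustration; $\epsilon$, $W^{\pm}$ and $I(\cdot)$ are conventional symbols):

```latex
% One polarization step applied to a BEC with erasure probability \epsilon:
% the synthesized "minus" channel is worse, the "plus" channel is better,
% and the total mutual information is preserved.
\[
  \epsilon^{-} = 2\epsilon - \epsilon^{2},
  \qquad
  \epsilon^{+} = \epsilon^{2},
  \qquad
  I(W^{+}) + I(W^{-}) = 2\, I(W).
\]
```

Iterating the step $n$ times yields $N = 2^n$ synthetic channels that tend to become either almost noiseless or almost useless; information bits are assigned to the good ones, which is the basis of Successive Cancellation decoding.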
Abstract:
Modern imaging technologies, such as computed tomography (CT) techniques, represent a great challenge in forensic pathology. The field of forensics has experienced a rapid increase in the use of these new techniques to support investigations of critical cases, as indicated by the implementation of CT scanning by different forensic institutions worldwide. Advances in CT imaging techniques over the past few decades have finally led some authors to propose that virtual autopsy, a radiological method applied to post-mortem analysis, is a reliable alternative to traditional autopsy, at least in certain cases. We investigate the occurrence and the causes of errors and mistakes in diagnostic imaging applied to virtual autopsy. A case of suicide by gunshot wound was submitted to full-body CT scanning before autopsy. We compared the first examination of the sectional images with the autopsy findings and found a preliminary misdiagnosis in the detection of a peritoneal lesion caused by the gunshot wound, due to a radiologist's error. We then discuss an emerging issue related to the risk of diagnostic failure in virtual autopsy due to radiologist error, similar to what occurs in clinical radiology practice.
Abstract:
Patients can contribute to the safety of chemotherapy administration, but little is known about their motivation to participate in safety-enhancing strategies. The theory of planned behavior was applied to analyze attitudes, norms, behavioral control, and chemotherapy patients' intentions to participate in medical error prevention.
Abstract:
This study evaluated the operator variability of different finishing and polishing techniques. After placement of 120 composite restorations (Tetric EvoCeram) in plexiglass molds, the surface of the specimens was roughened in a standardized manner. Twelve operators with different experience levels polished the specimens using the following finishing/polishing procedures: method 1 (40 µm diamond [40D], 15 µm diamond [15D], 42 µm silicon carbide polisher [42S], 6 µm silicon carbide polisher [6S] and Occlubrush [O]); method 2 (40D, 42S, 6S and O); method 3 (40D, 42S, 6S and PoGo); method 4 (40D, 42S and PoGo); and method 5 (40D, 42S and O). The mean surface roughness (Ra) was measured with a profilometer. Differences between the methods were analyzed with non-parametric ANOVA and pairwise Wilcoxon signed rank tests (α = 0.05). All restorations were qualitatively assessed using SEM. Methods 3 and 4 showed the best polishing results and method 5 the poorest. Method 5 was also the most dependent on the skills of the operator. Except for method 5, all of the tested procedures reached a clinically acceptable surface polish of Ra ≤ 0.2 µm. Polishing procedures can be simplified without increasing variability between operators and without jeopardizing the polishing results.