913 results for Deterministic Expander
Abstract:
The aim of this study was to evaluate the short- and long-term treatment effects of rapid maxillary expansion (RME) on the soft tissue facial profile of subjects treated with a modified acrylic-hyrax device. The sample comprised 10 males and 10 females in the mixed dentition, with a mean age of 9.3 years +/- 10 months at pre-treatment (T1), all presenting a narrow maxilla and posterior crossbite and treated with a modified fixed maxillary expander with an occlusal splint. Lateral cephalometric radiographs obtained at T1, immediately post-expansion (T2), and after retention (T3) were used to determine possible changes in the soft tissue facial profile. The means and standard deviations for linear and angular cephalometric measurements were analysed statistically using analysis of variance and Tukey's test (alpha = 0.05). The measurements at T2 differed significantly from those at T1 and T3. However, RME did not produce any statistically significant alteration (P > 0.05) in the soft tissue profile for any of the cephalometric landmarks evaluated when T1 and T3 were compared. The use of a fixed expander associated with an occlusal splint did not cause significant alterations in the soft tissue facial profile at T3. This modified device is effective for preventing the adverse vertical effects of RME, such as an increase in anterior face height, in patients with a crossbite.
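As a purely illustrative aside, the analysis described above (analysis of variance followed by Tukey's test at alpha = 0.05 across the three time points) can be reproduced with standard Python statistics libraries. The sketch below uses made-up measurement values; the scipy/statsmodels calls and the sample sizes are assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical cephalometric measurements (one landmark, in mm) at the three
# time points; real values would come from the study sample.
t1 = np.random.normal(100.0, 5.0, 20)
t2 = np.random.normal(104.0, 5.0, 20)
t3 = np.random.normal(100.5, 5.0, 20)

# One-way ANOVA across T1, T2, T3.
print(f_oneway(t1, t2, t3))

# Tukey's post-hoc test at alpha = 0.05.
values = np.concatenate([t1, t2, t3])
groups = ["T1"] * 20 + ["T2"] * 20 + ["T3"] * 20
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```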
Abstract:
Texture image analysis is an important field of investigation that has attracted the attention of the computer vision community over the last decades. In this paper, a novel approach for texture image analysis is proposed by combining graph theory and partially self-avoiding deterministic walks. From the image, we build a regular graph in which each vertex represents a pixel and is connected to neighboring pixels (pixels whose spatial distance is less than a given radius). Transformations on the regular graph are applied to emphasize different image features. To characterize the transformed graphs, partially self-avoiding deterministic walks are performed to compose the feature vector. Experimental results on three databases indicate that the proposed method significantly improves the correct classification rate compared to the state of the art, e.g. from 89.37% (original tourist walk) to 94.32% on the Brodatz database, from 84.86% (Gabor filter) to 85.07% on the Vistex database, and from 92.60% (original tourist walk) to 98.00% on the plant leaves database. In view of these results, it is expected that this method could also provide good results in other applications such as texture synthesis and texture segmentation. (C) 2012 Elsevier Ltd. All rights reserved.
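As a minimal sketch of the graph-construction step described above (not the authors' implementation), the snippet below builds the radius-based pixel graph for a grey-level image; the function name build_pixel_graph and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def build_pixel_graph(image, radius=1.5):
    """Adjacency list connecting pixels whose spatial distance is <= radius."""
    h, w = image.shape
    offsets = [(dy, dx)
               for dy in range(-int(radius), int(radius) + 1)
               for dx in range(-int(radius), int(radius) + 1)
               if (dy, dx) != (0, 0) and np.hypot(dy, dx) <= radius]
    graph = {}
    for y in range(h):
        for x in range(w):
            graph[(y, x)] = [(y + dy, x + dx) for dy, dx in offsets
                             if 0 <= y + dy < h and 0 <= x + dx < w]
    return graph

# Example: an 8-connected graph over a small random grey-level image.
img = np.random.randint(0, 256, (8, 8))
g = build_pixel_graph(img, radius=1.5)
```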
Abstract:
In this paper, we present a novel texture analysis method based on deterministic partially self-avoiding walks and fractal dimension theory. After the attractors of the image (sets of pixels) are found using deterministic partially self-avoiding walks, they are dilated toward the whole image by adding pixels according to their relevance. The relevance of each pixel is calculated as the shortest path between the pixel and the pixels that belong to the attractors. The proposed texture analysis method is demonstrated to outperform popular and state-of-the-art methods (e.g. Fourier descriptors, co-occurrence matrices, Gabor filters and local binary patterns), as well as the deterministic tourist walk method and recent fractal methods, on well-known texture image datasets.
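For illustration only, the following sketch shows one common formulation of a deterministic partially self-avoiding (tourist) walk on a grey-level image: the walker repeatedly moves to the most similar neighboring pixel not visited in its last mu steps, and stops once a (pixel, memory) state repeats, i.e. when it has fallen into an attractor. The function name, stopping criterion and use of NumPy are assumptions, not the authors' code.

```python
import numpy as np

def tourist_walk(image, start, mu=1, max_steps=10_000):
    """Deterministic partially self-avoiding walk on a 2-D grey-level image."""
    h, w = image.shape
    path = [start]
    seen_states = set()
    for _ in range(max_steps):
        y, x = path[-1]
        recent = set(path[-mu:])  # pixels excluded by the walker's memory
        candidates = [(abs(int(image[ny, nx]) - int(image[y, x])), (ny, nx))
                      for ny in range(y - 1, y + 2)
                      for nx in range(x - 1, x + 2)
                      if 0 <= ny < h and 0 <= nx < w
                      and (ny, nx) != (y, x) and (ny, nx) not in recent]
        if not candidates:
            break                          # walker is trapped
        _, nxt = min(candidates)           # deterministic: most similar neighbour
        state = (nxt, tuple(path[-mu:]))
        if state in seen_states:
            break                          # attractor (cycle) reached
        seen_states.add(state)
        path.append(nxt)
    return path

# Example on a random image; a real feature vector would be built from the
# distribution of transient and attractor lengths over many starting pixels.
img = np.random.randint(0, 256, (16, 16))
trajectory = tourist_walk(img, start=(8, 8), mu=2)
```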
Abstract:
Recently there has been considerable interest in dynamic textures due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it very important in applications concerned with modelling physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static, or spatial, textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. These feature vectors are then clustered by the well-known k-means algorithm. Although the k-means algorithm has shown interesting results, it only guarantees convergence to a local minimum, which affects the final segmentation result. To overcome this drawback, we compare six initialization methods for k-means. The experimental results demonstrate the effectiveness of the proposed approach compared to state-of-the-art segmentation methods.
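Purely as an illustration of the clustering step mentioned above, the sketch below runs k-means on stand-in per-pixel feature vectors with two common initialization strategies (random points and k-means++). scikit-learn, the array shapes and the parameter values are assumptions and do not reproduce the six initializations compared in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for per-pixel descriptors obtained from the walks
# (5000 pixels, 12-dimensional features).
features = np.random.rand(5000, 12)

for init in ("random", "k-means++"):
    km = KMeans(n_clusters=4, init=init, n_init=10, random_state=0).fit(features)
    print(init, "inertia:", km.inertia_)   # lower inertia = tighter clusters
```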
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. These patterns are moving textures in which the concept of self-similarity for static textures is extended to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed on three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering and segmentation. Experimental results on these applications indicate that the proposed method improves dynamic texture representation compared to the state of the art.
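As a small, hedged sketch of the "three orthogonal planes" idea referred to above, the snippet below slices a video volume indexed as (t, y, x) into its XY, XT and YT planes, on which walks such as those described here could then be run; the array shapes and indexing convention are assumptions.

```python
import numpy as np

video = np.random.randint(0, 256, (30, 64, 64))   # (frames, height, width)
t0, y0, x0 = 15, 32, 32                           # an arbitrary reference point

xy_plane = video[t0, :, :]   # spatial appearance at one instant
xt_plane = video[:, y0, :]   # horizontal motion pattern over time
yt_plane = video[:, :, x0]   # vertical motion pattern over time
```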
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines is controlled using some type of device. In motion control, the position, velocity, force, pressure, etc., profiles are designed so that the different mechanical parts work as a harmonious whole in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators, drive controllers, etc., and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. This thesis addresses the problem of trajectory reconstruction in the case of an event-triggered communication system. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency and spatial consistency. Starting from the basic system composed of one master and one slave, and moving on to systems made up of many slaves and one master, or many masters and one slave, the problems of profile reconstruction, preservation of temporal properties, and ultimately synchronization of different profiles in networks adopting an event-triggered communication system are examined. These networks are characterized by the fact that common knowledge of a global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the solution based on phase-locked loops proposed for the basic master-slave case is extended to deal with the other configurations.
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part motivated by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem on the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we address Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
Abstract:
CP-ESFR is an integrated European cooperative project on sodium-cooled fast reactors (SFR), carried out under the EURATOM 7th Framework Programme and bringing together the contributions of twenty-five European partners. CP-ESFR has the ambition of contributing to the establishment of a "solid scientific and technical basis for the sodium-cooled fast reactor, in order to accelerate practical developments for the safe management of long-lived radioactive waste, to improve safety performance, resource efficiency and the cost-effectiveness of nuclear energy, and to guarantee a robust and socially acceptable system of protection of the population and the environment against the effects of ionizing radiation." This thesis is a contribution to the development of models and methods, based on the use of thermal-hydraulic system codes, for the safety analysis of Generation IV liquid-metal-cooled reactors. The activity was carried out within the FP-7 PELGRIMM project and in synergy with the MSE-ENEA Programme Agreement (PAR-2013). The FP7 PELGRIMM project aims at developing fuels containing minor actinides (1) by studying two different forms, pellet (the subject of this thesis) and spherepac, and (2) by evaluating their impact on the design of the CP-ESFR reactor. The thesis proposes the development of a thermal-hydraulic system model of the primary and intermediate circuits of the reactor with the RELAP5-3D© code (INL, US). This code, qualified for the licensing of water-cooled nuclear reactors, was used to evaluate how the safety-relevant core parameters (e.g. cladding and fuel centreline temperatures, coolant temperature, etc.) vary when the fuel is used to "burn" minor actinides (long-lived radioactive isotopes contained in nuclear waste). This required a training phase on the code, its models and its capabilities, followed by the development of the nodalization of the CP-ESFR plant, its qualification, and the analysis of the results obtained as the core configuration, the burn-up and the type of fuel employed (i.e. different enrichment in minor actinides) were varied. The text is divided into six sections. The first provides an introduction to the technological development of fast reactors, highlights the context in which this thesis was carried out, and defines its objectives and structure. In the second section, the CP-ESFR plant is described, with attention to the core configuration and the primary system. The third section introduces the thermal-hydraulic system code used for the analyses and the model developed to reproduce the plant. Section four describes the tests and checks carried out to assess the performance of the model, the qualification of the nodalization, the main models and the most relevant correlations for the simulation, and the core configurations considered for the analysis of the results. The results obtained for the core safety parameters under normal operating conditions and for a selected transient are described in the fifth section. Finally, the conclusions of the activity are reported.
Abstract:
Epileptic seizures typically reveal a high degree of stereotypy; that is, for an individual patient they are characterized by an ordered and predictable sequence of symptoms and signs with typically little variability. Stereotypy implies that ictal neuronal dynamics might have deterministic characteristics, presumably most pronounced in the ictogenic parts of the brain, which may provide diagnostically and therapeutically important information. The goal of our study was therefore to search for indications of determinism in periictal intracranial electroencephalography (EEG) recordings obtained from patients with pharmacoresistant epilepsy.
Abstract:
Abstract is not available.
Abstract:
This paper presents a deterministic continuous model of proliferative cell activity. The classical series of connected compartments is revisited, along with a simple mathematical treatment of two hypotheses: constant transit times and harmonic Ts. Several examples are presented to support these ideas, drawn both from the previous literature and from recent experiments with the fish Carassius auratus carried out at the Junta de Energía Nuclear, Madrid, Spain.
Abstract:
In this paper, we examine the issue of memory management in the parallel execution of logic programs. We concentrate on non-deterministic and-parallel schemes, which we believe present a relatively general set of problems to be solved, including most of those encountered in the memory management of or-parallel systems. We present a distributed stack memory management model which allows flexible scheduling of goals. Previously proposed models (based on the "Marker model") are lacking in that they either impose restrictions on the selection of goals to be executed or may consume a large amount of virtual memory. This paper first presents results which imply that the above-mentioned shortcomings can have a significant performance impact. An extension of the Marker Model is then proposed which allows flexible scheduling of goals while keeping (virtual) memory consumption down. Measurements are presented which show the advantage of this solution. Methods for handling forward and backward execution, cut, and rollback are discussed in the context of the proposed scheme. In addition, the paper shows how the same mechanism for flexible scheduling can be applied to allow the efficient handling of the very general form of suspension that can occur in systems which combine several types of and-parallelism with more sophisticated methods of executing logic programs. We believe that the results are applicable to many and- and or-parallel systems.
Abstract:
This paper proposes a novel combination of artificial intelligence planning and other techniques for improving decision-making in the context of multi-step multimedia content adaptation. In particular, it describes a method that allows decision-making (selecting the adaptation to perform) in situations where third-party pluggable multimedia conversion modules are involved and the multimedia adaptation planner does not know their exact adaptation capabilities. In this approach, the multimedia adaptation planner module is only responsible for a part of the required decisions; the pluggable modules make additional decisions based on different criteria. We demonstrate that partial decision-making is not only attainable, but also introduces advantages with respect to a system in which these conversion modules are not capable of providing additional decisions. This means that transferring decisions from the multi-step multimedia adaptation planner to the pluggable conversion modules increases the flexibility of the adaptation. Moreover, by allowing conversion modules to be only partially described, the range of problems that these modules can address increases, while significantly decreasing both the description length of the adaptation capabilities and the planning decision time. Finally, we specify the conditions under which knowing the partial adaptation capabilities of a set of conversion modules will be enough to compute a proper adaptation plan.