Abstract:
Chain topology, including branch node, chain link and cross-link dynamics that contribute to the number of elastically active strands and junctions, is calculated using purely deterministic derivations. Solutions are not coupled to population density distributions. An eigenzeit transformation assists in the conversion of expressions derived by chemical reaction principles from time to conversion space, yielding transport-phenomena-type expressions in which the rate of change in the molar concentrations of branch nodes with respect to conversion is expressed as a function of the fraction of reactive sites on precursors and reactants. Analogies are hypothesized to exist in cross-linking space that effectively distribute branch nodes with i reacted moieties between cross-links having j bonds extending to the gel. To obtain solutions, reacted sites on nodes or links with finite chain extensions are examined in terms of the stoichiometry associated with covalent bonding. Solutions replicate published results based on Miller and Macosko's recursive procedure and results obtained from truncated weighted sums of population density distributions as suggested by Flory.
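For context, both Flory's population-density treatment and Miller and Macosko's recursive procedure start from the same ideal-reaction statistics. An illustrative, standard relation (not this paper's derivation) distributes f-functional precursors over their number of reacted sites binomially in conversion:

```latex
% Assuming ideal random reaction (equal reactivity, no substitution effects)
% of an f-functional precursor A_f at conversion p of its reactive groups:
\[
  [\mathrm{A}_{f,i}](p) = \binom{f}{i}\, p^{\,i} (1-p)^{\,f-i}\, [\mathrm{A}_f]_0,
  \qquad i = 0, 1, \dots, f,
\]
% so rates of change of branch-node concentrations can be written directly
% in conversion space, d[A_{f,i}]/dp, rather than in time.
```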
Abstract:
Complex networks have attracted increasing interest from various fields of science. It has been demonstrated that each complex network model presents specific topological structures which characterize its connectivity and dynamics. Complex network classification relies on the use of representative measurements that describe topological structures. Although there are a large number of measurements, most of them are correlated. To overcome this limitation, this paper presents a new measurement for complex network classification based on partially self-avoiding walks. We validate the measurement on a data set composed of 40,000 complex networks from four well-known models. Our results indicate that the proposed measurement improves the correct classification rate of networks compared to traditional measurements. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4737515]
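A minimal sketch of the kind of partially self-avoiding deterministic walk such a measurement builds on, assuming an unweighted graph with integer-labeled nodes and a smallest-label movement rule (the paper's actual rule and feature extraction may differ); statistics of transient and attractor lengths over many starting nodes and memory sizes would then compose the classification features:

```python
from collections import deque

def tourist_walk(adj, start, mu):
    # Partially self-avoiding deterministic walk: move to the lowest-labeled
    # neighbour not visited in the last `mu` steps; stop when the walk enters
    # a cycle (attractor) or is trapped. adj: node -> list of int neighbours.
    memory = deque(maxlen=mu)        # sliding window of forbidden nodes
    path, current = [start], start
    seen = set()                     # (node, memory) states for cycle detection
    while True:
        memory.append(current)
        state = (current, tuple(memory))
        if state in seen:            # deterministic dynamics entered a cycle
            return path
        seen.add(state)
        choices = [v for v in adj[current] if v not in memory]
        if not choices:              # trapped: no allowed neighbour
            return path
        current = min(choices)       # deterministic movement rule
        path.append(current)
```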
Abstract:
In this paper, the effects of uncertainty and expected costs of failure on optimum structural design are investigated by comparing three distinct formulations of structural optimization problems. Deterministic Design Optimization (DDO) allows one to find the shape or configuration of a structure that is optimum in terms of mechanics, but the formulation grossly neglects parameter uncertainty and its effects on structural safety. Reliability-based Design Optimization (RBDO) has emerged as an alternative to properly model the safety-under-uncertainty part of the problem. With RBDO, one can ensure that a minimum (and measurable) level of safety is achieved by the optimum structure. However, results are dependent on the failure probabilities used as constraints in the analysis. Risk optimization (RO) increases the scope of the problem by addressing the competing goals of economy and safety. This is accomplished by quantifying the monetary consequences of failure, as well as the costs associated with construction, operation and maintenance. RO yields the optimum topology and the optimum point of balance between economy and safety. Results are compared for some example problems. The broader RO solution is found first, and optimum results are used as constraints in DDO and RBDO. Results show that even when optimum safety coefficients are used as constraints in DDO, the formulation leads to configurations which respect these design constraints and reduce manufacturing costs, but increase total expected costs (including expected costs of failure). When (optimum) system failure probability is used as a constraint in RBDO, the solution also reduces manufacturing costs, but at the expense of increasing total expected costs. This happens when the costs associated with different failure modes are distinct. Hence, a general equivalence between the formulations cannot be established. Optimum structural design considering expected costs of failure cannot be controlled solely by safety factors or by failure probability constraints, but depends on the actual structural configuration. (c) 2011 Elsevier Ltd. All rights reserved.
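In generic form, the three formulations compared above can be stated as follows (illustrative notation, not the paper's):

```latex
\begin{align*}
\text{DDO:}  &\; \min_{\mathbf{d}} C_{\text{man}}(\mathbf{d})
  \ \text{s.t.}\ g_i(\mathbf{d}, \lambda_i) \ge 0,\\
\text{RBDO:} &\; \min_{\mathbf{d}} C_{\text{man}}(\mathbf{d})
  \ \text{s.t.}\ P[g_i(\mathbf{d}, \mathbf{X}) \le 0] \le p_i^{\mathrm{t}},\\
\text{RO:}   &\; \min_{\mathbf{d}} \; C_{\text{man}}(\mathbf{d})
  + C_{\text{oper}}(\mathbf{d}) + \sum_i P_{f,i}(\mathbf{d})\, C_{f,i},
\end{align*}
% d: design variables; lambda_i: safety coefficients; X: random parameters;
% p_i^t: target failure probabilities; C_{f,i}: cost of failure mode i.
```

RO thus internalizes the expected cost of each failure mode in the objective rather than bounding failure probabilities as constraints, which is why the two narrower formulations cannot reproduce it in general.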
Abstract:
Texture image analysis is an important field of investigation that has attracted attention from the computer vision community over the last decades. In this paper, a novel approach for texture image analysis is proposed by using a combination of graph theory and partially self-avoiding deterministic walks. From the image, we build a regular graph where each vertex represents a pixel and is connected to neighboring pixels (pixels whose spatial distance is less than a given radius). Transformations on the regular graph are applied to emphasize different image features. To characterize the transformed graphs, partially self-avoiding deterministic walks are performed to compose the feature vector. Experimental results on three databases indicate that the proposed method significantly improves the correct classification rate compared to the state-of-the-art, e.g. from 89.37% (original tourist walk) to 94.32% on the Brodatz database, from 84.86% (Gabor filter) to 85.07% on the Vistex database, and from 92.60% (original tourist walk) to 98.00% on the plant leaves database. In view of these results, it is expected that this method could provide good results in other applications such as texture synthesis and texture segmentation. (C) 2012 Elsevier Ltd. All rights reserved.
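A literal sketch of the graph-construction step described above, with one vertex per pixel and edges to every pixel within a given spatial radius; the graph transformations and the walks themselves are omitted, and `build_pixel_graph` is a hypothetical helper name:

```python
import numpy as np

def build_pixel_graph(img, radius):
    # One vertex per pixel; edge to every pixel within Euclidean distance
    # <= radius. Literal O(h*w*r^2) construction; real code would vectorise.
    h, w = img.shape
    r = int(np.ceil(radius))
    adj = {}
    for y in range(h):
        for x in range(w):
            neigh = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if ((dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w
                            and dy * dy + dx * dx <= radius * radius):
                        neigh.append((ny, nx))
            adj[(y, x)] = neigh
    return adj
```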
Abstract:
In this paper, we present a novel texture analysis method based on deterministic partially self-avoiding walks and fractal dimension theory. After finding the attractors of the image (sets of pixels) using deterministic partially self-avoiding walks, they are dilated toward the whole image by adding pixels according to their relevance. The relevance of each pixel is calculated as the shortest path between the pixel and the pixels that belong to the attractors. The proposed texture analysis method is demonstrated to outperform popular and state-of-the-art methods (e.g. Fourier descriptors, co-occurrence matrices, Gabor filters and local binary patterns) as well as the deterministic tourist walk method and recent fractal methods on well-known texture image datasets.
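A minimal sketch of the relevance computation described above, assuming 4-connected, unit-cost paths (the paper may use a different connectivity or path cost); `relevance_map` is an illustrative helper, and the dilation step would then add pixels to the attractors in increasing order of relevance:

```python
from collections import deque
import numpy as np

def relevance_map(shape, attractor_pixels):
    # Multi-source BFS: dist[y, x] = length of the shortest 4-connected
    # path from pixel (y, x) to the attractor set.
    h, w = shape
    dist = np.full((h, w), -1, dtype=int)
    queue = deque()
    for y, x in attractor_pixels:        # all attractor pixels start at 0
        dist[y, x] = 0
        queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny, nx] < 0:
                dist[ny, nx] = dist[y, x] + 1
                queue.append((ny, nx))
    return dist  # dilation adds pixels in increasing order of this relevance
```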
Abstract:
Recently there has been considerable interest in dynamic textures due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it very important in applications concerned with modeling physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static or spatial textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. Then, these feature vectors are clustered by the well-known k-means algorithm. Although the k-means algorithm has shown interesting results, it only ensures convergence to a local minimum, which affects the final segmentation result. In order to overcome this drawback, we compare six initialization methods for k-means. The experimental results demonstrate the effectiveness of our proposed approach compared to state-of-the-art segmentation methods.
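A plain-numpy sketch of Lloyd's k-means with two of the many possible initialization strategies, illustrating why the choice of initialization affects the local minimum reached; this is a generic illustration, not the six methods compared in the paper:

```python
import numpy as np

def kmeans(X, k, init="random", iters=100, seed=0):
    # Plain Lloyd's k-means; `init` selects 'random' seeding or a crude
    # farthest-point rule (k-means++-like). Sketch only, not the paper's code.
    rng = np.random.default_rng(seed)
    if init == "random":
        centers = X[rng.choice(len(X), size=k, replace=False)]
    else:
        centers = [X[rng.integers(len(X))]]
        for _ in range(k - 1):
            d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
            centers.append(X[np.argmax(d)])   # farthest point from current seeds
        centers = np.asarray(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):         # converged to a (local) minimum
            break
        centers = new
    return labels, centers
```

Running the same data with different seeds or strategies can land in different local minima, hence the value of comparing seeding strategies before clustering the per-pixel feature vectors.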
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. These patterns are moving textures in which the concept of self-similarity for static textures is extended to the spatiotemporal domain. In this paper, we propose a novel approach for dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed in three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering and segmentation. Experimental results on these applications indicate that the proposed method improves the dynamic texture representation compared to the state of the art.
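A minimal sketch of the three-orthogonal-planes decomposition on which the walks are performed, for a grayscale video array V[t, y, x]; the choice of slicing point and the way plane features are combined are assumptions here:

```python
import numpy as np

def orthogonal_planes(video, t, y, x):
    # XY is the pure-appearance plane; XT and YT mix appearance with motion,
    # since one of their axes is time.
    xy = video[t, :, :]    # spatial plane at frame t
    xt = video[:, y, :]    # horizontal image row evolving through time
    yt = video[:, :, x]    # vertical image column evolving through time
    return xy, xt, yt

# Usage: walks are run independently on each plane and the resulting
# features concatenated to combine appearance and motion information.
video = np.random.rand(50, 64, 64)           # toy 50-frame grayscale clip
xy, xt, yt = orthogonal_planes(video, 25, 32, 32)
```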
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc. profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced, or else the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of temporal properties, and subsequently in the synchronization of different profiles, are shown for networks adopting an event-triggered communication system. These networks are characterized by the fact that a common knowledge of the global time is not available; therefore they are non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops, adopted for the basic master-slave case, is improved to cope with the other configurations.
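A toy discrete-time phase-locked loop in the spirit of the proposed solution: the slave corrects its estimate of the master's time base with a proportional-integral law at each event-triggered message arrival. The gains, the message model and the loop structure are illustrative assumptions, not the thesis design:

```python
def pll_track(events, kp=0.4, ki=0.05):
    # events: (local_arrival_time, master_timestamp) pairs from aperiodic,
    # event-triggered messages. The slave applies a PI correction to its
    # clock offset at each arrival; gains are illustrative only.
    offset, integ = 0.0, 0.0
    reconstructed = []
    for local_t, master_t in events:
        err = master_t - (local_t + offset)   # phase error at this event
        integ += err                          # integral action removes drift
        offset += kp * err + ki * integ       # PI update of the offset
        reconstructed.append(local_t + offset)
    return reconstructed
```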
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part due to the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
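For orientation, a greedy list-scheduling baseline for the problem class described above (precedence-constrained tasks on identical units); the thesis itself develops exact hybrid CP/OR methods, which this sketch does not attempt to reproduce:

```python
import heapq

def list_schedule(tasks, deps, durations, n_units):
    # Greedy list scheduling of precedence-constrained tasks on n_units
    # identical units. deps[t] is the set of predecessors of task t.
    succ = {t: [] for t in tasks}
    indeg = {t: len(deps[t]) for t in tasks}
    for t in tasks:
        for p in deps[t]:
            succ[p].append(t)
    ready_at = {t: 0.0 for t in tasks}         # earliest start from precedence
    free = [(0.0, u) for u in range(n_units)]  # (time the unit frees up, unit)
    heapq.heapify(free)
    ready = [t for t in tasks if indeg[t] == 0]
    schedule = {}
    while ready:
        t = min(ready, key=lambda x: ready_at[x])  # earliest-ready task first
        ready.remove(t)
        free_time, u = heapq.heappop(free)
        start = max(free_time, ready_at[t])
        schedule[t] = (start, u)
        end = start + durations[t]
        heapq.heappush(free, (end, u))
        for s in succ[t]:
            ready_at[s] = max(ready_at[s], end)
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return schedule  # task -> (start time, assigned unit)
```

Such heuristics give no optimality guarantee; off-line exact optimization is attractive precisely because embedded workloads are fixed, so the higher solving cost is paid once.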
Abstract:
CP-ESFR is an integrated European cooperative project on sodium-cooled fast reactors (SFR) carried out under the EURATOM 7th Framework Programme, bringing together the contributions of twenty-five European partners. CP-ESFR has the ambition of contributing to the establishment of a "solid scientific and technical basis for the sodium-cooled fast reactor, in order to accelerate practical developments for the safe management of long-lived radioactive waste, to improve safety performance, resource efficiency and the cost-effectiveness of nuclear energy, so as to ensure a robust and socially acceptable system of protection of the population and the environment against the effects of ionizing radiation." This thesis is a contribution to the development of models and methods, based on the use of thermal-hydraulic system codes, for the safety analysis of Generation IV liquid-metal-cooled reactors. The activity was carried out within the FP-7 PELGRIMM project and in synergy with the MSE-ENEA Programme Agreement (PAR-2013). The FP7 PELGRIMM project aims at developing minor-actinide-bearing fuels, (1) through the study of two different forms, pellet (the subject of this thesis) and spherepac, and (2) by evaluating their impact on the design of the CP-ESFR reactor. The thesis develops a thermal-hydraulic system model of the primary and intermediate circuits of the reactor with the RELAP5-3D© code (INL, US). This code, qualified for the licensing of water-cooled nuclear reactors, was used to evaluate how the core parameters relevant to safety (e.g., cladding and fuel centerline temperature, coolant temperature, etc.) vary when the fuel is used to "burn" minor actinides (long-lived radioactive isotopes contained in nuclear waste). This involved a training phase on the code, its models and its capabilities; subsequently, the development of the nodalization of the CP-ESFR plant, its qualification, and the analysis of the results obtained as the core configuration, the burnup and the type of fuel employed (i.e., different minor-actinide enrichment) were varied. The text is divided into six sections. The first provides an introduction to the technological development of fast reactors, highlights the context in which this thesis was carried out, and defines its objectives and structure. The second section describes the CP-ESFR plant, with attention to the core configuration and the primary system. The third section introduces the thermal-hydraulic system code used for the analyses and the model developed to reproduce the plant. Section four describes the tests and verifications performed to assess the performance of the model, the qualification of the nodalization, the main models and the correlations most relevant to the simulation, and the core configurations considered for the analysis of the results. The results obtained for the core safety parameters under normal operating conditions and for a selected transient are described in the fifth section. Finally, the conclusions of the activity are reported.
Abstract:
Epileptic seizures typically reveal a high degree of stereotypy; that is, for an individual patient they are characterized by an ordered and predictable sequence of symptoms and signs with typically little variability. Stereotypy implies that ictal neuronal dynamics might have deterministic characteristics, presumably most pronounced in the ictogenic parts of the brain, which may provide diagnostically and therapeutically important information. The goal of our study was therefore to search for indications of determinism in periictal intracranial electroencephalographic (EEG) recordings from patients with pharmacoresistant epilepsy.
Abstract:
A silicon-based microcell was fabricated with the potential for use in in-situ transmission electron microscopy (TEM) of materials under plasma processing. The microcell consisted of a 50 nm-thick silicon nitride film serving as the observation window, with a 60 μm gap between the two electrodes. An e-beam scattering Monte Carlo simulation showed that the silicon nitride thin film would have a very low scattering effect on the TEM primary electron beam accelerated at 200 keV: only 4.7% of primary electrons were scattered by the silicon nitride thin film and the Ar gas (60 μm thick at 1 atm pressure) filling the space between the silicon nitride films. Theoretical calculation also showed low absorption of high-energy e-beam electrons. Because the plasma cell needs to survive the high-vacuum TEM chamber while holding 1 atm internal pressure, a finite element analysis was performed to find the maximum stress the low-stress silicon nitride thin film experiences under pressure. Considering the maximum burst stress of low-stress silicon nitride thin film, the simulation results showed that the 50 nm silicon nitride thin film can be used in TEM under 1 atm pressure as the observation window. An ex-situ plasma generation experiment demonstrated that air plasma can be ignited at a DC voltage of 570 V. Scanning electron microscopy (SEM) analysis showed that etching and deposition occurred during the plasma process and that larger dendrites formed on the positive electrode.
Abstract:
The integration of block copolymers and nanoimprint lithography presents a novel and cost-effective approach to achieving nanoscale patterning capabilities. The authors demonstrate the fabrication of a surface-enhanced Raman scattering device using templates created by the integrated block-copolymer nanoimprint lithography method.