849 results for Distributed Calculations


Relevância:

20.00%

Publicador:

Resumo:

Structural properties of model membranes, such as lipid vesicles, may be investigated through the addition of fluorescent probes. After incorporation, the fluorescent molecules are excited with linearly polarized light, and the fluorescence emission is depolarized by translational as well as rotational diffusion during the lifetime of the excited state. The emitted light is monitored with the technique of time-resolved fluorescence: the intensity of the emitted light yields fluorescence decay times, and the decay of its polarized components yields rotational correlation times, which report on the fluidity of the medium. The fluorescent molecule DPH, of uniaxial symmetry, is rather hydrophobic and has collinear absorption and emission transition moments. It has frequently been used as a probe for monitoring the fluidity of the lipid bilayer along the phase transition of the chains. The interpretation of experimental data requires models for the localization of the fluorescent molecules as well as for possible restrictions on their movement. In this study, we develop calculations for two models of uniaxial diffusion of fluorescent molecules such as DPH, suggested in several articles in the literature, together with a zeroth-order test model. The test model consists of a dipole rotating freely and randomly in a homogeneous solution, and serves as the basis for the study of diffusion in anisotropic media. In the first restricted model, we consider random rotations of emitting dipoles distributed within cones whose axes are perpendicular to the spherical surface of the vesicle. In the second, the dipole rotates in the plane of the bilayer's spherical geometry, a motion that might occur between the two monolayers forming the bilayer.
For each of the models analysed, we use two methods to analyse the rotational diffusion: (I) solution of the corresponding rotational diffusion equation for a single molecule, subject to the boundary conditions imposed by the model, for the probability of finding the fluorescent molecule in a given configuration at time t. Considering the distribution of molecules in the proposed geometry, we obtain an analytical expression for the fluorescence anisotropy, except for the cone geometry, for which the solution is obtained numerically; (II) numerical simulation of a restricted rotational random walk in the two geometries corresponding to the two models. The latter method may be very useful for low-symmetry or composite geometries.
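Method (II) can be illustrated for the zeroth-order test case, a dipole rotating freely in homogeneous solution, where the simulated anisotropy should start at r(0) = 0.4 (collinear moments) and decay toward zero. The sketch below is a minimal illustration, not the thesis code; the step size, ensemble size, and tangent-plane-kick scheme are arbitrary choices of this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_anisotropy(n_mol=2000, n_steps=200, sigma=0.05):
    """Free rotational random walk of emission dipoles on the unit sphere
    (the zeroth-order test model: isotropic rotation in homogeneous solution).
    Returns r(t) = 0.4 * <P2(u(0) . u(t))>, the fluorescence anisotropy for
    collinear absorption and emission moments, as for DPH."""
    u0 = rng.normal(size=(n_mol, 3))
    u0 /= np.linalg.norm(u0, axis=1, keepdims=True)   # random initial dipoles
    u = u0.copy()
    r = []
    for _ in range(n_steps + 1):
        cos = np.sum(u0 * u, axis=1)
        p2 = 0.5 * (3.0 * cos**2 - 1.0)               # second Legendre polynomial
        r.append(0.4 * p2.mean())
        # small random kick in the tangent plane, then renormalise:
        # for small sigma this approximates rotational diffusion
        kick = rng.normal(scale=sigma, size=(n_mol, 3))
        kick -= np.sum(kick * u, axis=1, keepdims=True) * u
        u = u + kick
        u /= np.linalg.norm(u, axis=1, keepdims=True)
    return np.array(r)

r = simulate_anisotropy()
```

Restricting the kicks (e.g. projecting onto a cone or a plane) turns the same loop into the two restricted models.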

Relevância:

20.00%

Publicador:

Resumo:

The recent advances and promises in nanoscience and nanotechnology have focused on hexagonal materials, mainly carbon-based nanostructures. Recently, new candidates have been raised; the greatest efforts are devoted to a new hexagonal, buckled material made of silicon, named silicene. This material presents an energy gap of approximately 1.5 meV due to spin-orbit interaction, which makes an experimental measurement of the quantum spin Hall effect (QSHE) possible. Some investigations also show that the QSHE is present in 2D low-buckled hexagonal structures of germanium. The similarities, and at the same time the differences, between Si and Ge have motivated many investigations of these materials over the years. In this work we performed systematic investigations of the electronic structure and band topology of both ordered and disordered SixGe1-x alloy monolayers with 2D honeycomb geometry by first-principles calculations. We show that an applied electric field can tune the gap size for both alloys. However, as a function of the electric field, the disordered alloy presents a W-shaped gap behavior, similar to pure Si or Ge, whereas for the ordered alloy a V-shaped behavior is observed.
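The V-shaped gap versus field can be illustrated with a standard low-energy effective model for buckled honeycomb lattices, in which a perpendicular field competes with spin-orbit coupling and closes the gap at a critical field. This is a heavily hedged toy model: the parameters below are placeholders, and it does not capture the W-shaped behavior reported for the disordered alloy.

```python
import numpy as np

# Illustrative effective model (Ezawa-type) for a buckled honeycomb lattice.
# lambda_so is chosen so the zero-field gap is 1.5 meV, as quoted for
# silicene; the buckling parameter ell is an assumed placeholder value.
LAMBDA_SO = 0.75e-3   # eV, spin-orbit coupling strength
ELL = 0.23e-10        # m, half the vertical buckling distance (assumed)

def gap(E_z):
    """Band gap at the K point vs perpendicular field E_z (V/m):
    2 * |lambda_so - e*E_z*ell|, V-shaped and closing at
    E_c = lambda_so / (e*ell). (e*E_z*ell in eV equals E_z*ell numerically.)"""
    return 2.0 * np.abs(LAMBDA_SO - ELL * E_z)

E = np.linspace(0.0, 2.0 * LAMBDA_SO / ELL, 201)
g = gap(E)   # decreases to zero at the critical field, then reopens
```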

Relevância:

20.00%

Publicador:

Resumo:

Graphene has received great attention due to its exceptional properties, which include charge carriers with zero effective mass and extremely large mobilities; these could render it the template for the next generation of electronic devices. Furthermore, it has a weak spin-orbit interaction because of the low atomic number of carbon, which in turn results in long spin coherence lengths. Therefore, graphene is also a promising material for future applications in spintronic devices, which use the electron's spin degrees of freedom instead of its charge. Graphene can be engineered to form a number of different structures. In particular, by appropriately cutting it one can obtain 1-D systems, only a few nanometers in width, known as graphene nanoribbons (GNRs), whose properties strongly depend on the width of the ribbon and on the atomic structure along the edges. GNR-based systems have been shown to have great potential applications, especially as connectors for integrated circuits. Impurities and defects might play an important role in the coherence of these systems. In particular, the presence of transition-metal atoms can lead to significant spin-flip processes of conduction electrons. Understanding this effect is of utmost importance for applied spintronics design. In this work, we focus on the electronic transport properties of armchair graphene nanoribbons with adsorbed transition-metal atoms as impurities, taking the spin-orbit effect into account. Our calculations were performed using a combination of density functional theory and non-equilibrium Green's functions. Also, employing a recursive method, we consider a large number of impurities randomly distributed along the nanoribbon in order to infer the spin-coherence length for different concentrations of defects.
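The quantity at the heart of such DFT+NEGF transport calculations is the Landauer transmission T(E) = Tr[Γ_L G Γ_R G†]. The toy sketch below evaluates it for a short 1-D tight-binding chain with wide-band-limit leads, standing in for the full nanoribbon Hamiltonian; the site energy, hopping, and coupling values are assumptions of this example, not parameters from the work.

```python
import numpy as np

def transmission(E, eps=0.0, t=-1.0, n=5, gamma=0.5):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a
    short 1-D tight-binding chain coupled to wide-band leads (a toy
    stand-in for a DFT+NEGF setup; eps, t, gamma are illustrative)."""
    # chain Hamiltonian: on-site energy eps, nearest-neighbour hopping t
    H = eps * np.eye(n) + t * (np.eye(n, k=1) + np.eye(n, k=-1))
    # wide-band-limit lead self-energies on the two end sites
    sigma_L = np.zeros((n, n), complex); sigma_L[0, 0] = -1j * gamma / 2
    sigma_R = np.zeros((n, n), complex); sigma_R[-1, -1] = -1j * gamma / 2
    # retarded Green's function of the coupled device
    G = np.linalg.inv((E + 1e-12j) * np.eye(n) - H - sigma_L - sigma_R)
    Gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    Gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.real(np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T))
```

In a recursive implementation the inversion is built up slice by slice along the ribbon, which is what makes long disordered systems with many random impurities tractable.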

Relevância:

20.00%

Publicador:

Resumo:

The energetic stability and the electronic properties of vacancies (V_X) and antisites (X_Y) in PbSe and PbTe are investigated. PbSe and PbTe are narrow-band-gap semiconductors with potential uses in infrared detectors, lasers, and diodes. They are also of special interest for thermoelectric (TE) devices. The calculations are based on Density Functional Theory (DFT) with the Generalized Gradient Approximation (GGA) for the exchange-correlation term, as implemented in the VASP code. The core and valence electrons are described by the Projector Augmented Wave (PAW) and Plane Wave (PW) methods, respectively. The defects are studied in bulk and nanowire (NW) systems. Our results show that intrinsic defects (vacancies and antisites) in PbTe have lower formation energies in the NW than in the bulk and tend to migrate to the surface of the NW. For PbSe we obtain similar formation energies in the bulk and the NW; however, the Pb vacancy and the antisites are more stable in the core of the NW. The intrinsic defects are shallow defects in the bulk: for both PbSe and PbTe, V_Pb is a shallow acceptor, while V_Se and V_Te are shallow donors in PbSe and PbTe, respectively. Similar electronic properties are observed for the antisites: Pb on the anion site yields an n-type semiconductor for both PbSe and PbTe, Se_Pb is p-type in PbSe, and Te_Pb is n-type in PbTe. Due to the quantum confinement effects present in the NW (the band gap opens), these defects have different electronic properties in the NW than in the bulk: they give rise to electronic levels in the band gap of the PbTe NW, and V_Te presents a metallic character. For the PbSe NW, p-type and n-type semiconductors are obtained for V_Pb and Pb_Se, respectively; on the other hand, deep electronic levels appear in the band gap for V_Se and Se_Pb.
These results show that, due to an enhanced electronic density of states (DOS) near the Fermi energy, defective PbSe and PbTe are candidates for efficient TE devices.
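The defect stabilities above are compared via formation energies, computed in the standard supercell formalism used with DFT codes such as VASP: E_f = E(defect) − E(pristine) + Σ n_i μ_i, where n_i atoms of chemical potential μ_i are exchanged with a reservoir. A minimal sketch, with placeholder total energies rather than the values computed in this work:

```python
def formation_energy(e_defect, e_pristine, removed=None, added=None, mu=None):
    """Formation energy of a neutral defect in the supercell formalism:
    E_f = E(defect) - E(pristine) + sum_removed n*mu - sum_added n*mu.
    `removed`/`added` map species -> atom count; `mu` maps species ->
    chemical potential (eV)."""
    mu = mu or {}
    e_f = e_defect - e_pristine
    for species, n in (removed or {}).items():
        e_f += n * mu[species]          # atoms taken out go to a reservoir
    for species, n in (added or {}).items():
        e_f -= n * mu[species]          # atoms put in come from a reservoir
    return e_f

# Example: a Pb vacancy (V_Pb) with illustrative total energies in eV
ef = formation_energy(e_defect=-351.20, e_pristine=-355.00,
                      removed={"Pb": 1}, mu={"Pb": -3.57})
```

Comparing E_f for the same defect placed in the core and at the surface of the NW is what reveals the migration trend reported above.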

Relevância:

20.00%

Publicador:

Resumo:

[EN] This paper presents an interpretation of a classic optical flow method by Nagel and Enkelmann as a tensor-driven anisotropic diffusion approach in digital image analysis. We introduce an improvement into the model formulation, and we establish well-posedness results for the resulting system of parabolic partial differential equations. Our method avoids linearizations in the optical flow constraint, and it can recover displacement fields which are far beyond the typical one-pixel limits that are characteristic for many differential methods for optical flow recovery. A robust numerical scheme is presented in detail. We avoid convergence to irrelevant local minima by embedding our method into a linear scale-space framework and using a focusing strategy from coarse to fine scales. The high accuracy of the proposed method is demonstrated by means of a synthetic and a real-world image sequence.
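The tensor that drives the anisotropic diffusion in the Nagel–Enkelmann model smooths the flow field along image edges but not across them. A minimal sketch of the regularized tensor for one pixel, given the image gradient (the function name and scalar-input interface are choices of this example):

```python
import numpy as np

def nagel_enkelmann_tensor(Ix, Iy, lam=1.0):
    """Regularised Nagel-Enkelmann diffusion tensor
        D(grad I) = (perp perp^T + lam^2 Id) / (|grad I|^2 + 2 lam^2),
    where perp = (-Iy, Ix) is the direction along the image edge.
    Large gradients suppress smoothing across the edge; in flat regions
    (grad I ~ 0) the tensor becomes isotropic."""
    perp = np.array([-Iy, Ix])                     # edge direction
    g2 = Ix**2 + Iy**2
    return (np.outer(perp, perp) + lam**2 * np.eye(2)) / (g2 + 2.0 * lam**2)
```

The tensor always has unit trace, so the total amount of diffusion is constant and only its orientation adapts to the image structure.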

Relevância:

20.00%

Publicador:

Resumo:

[EN] Here we present experimental data on different properties for a set of binary mixtures composed of water or alkanols (methanol to butanol) with an ionic liquid (IL), butylpyridinium tetrafluoroborate [bpy][BF4]. Solubility data (x_IL, T) are presented for each of the mixtures, including water, which is found to have a small interval of compositions x_IL with immiscibility. In each case, the upper critical solubility temperature (UCST) is determined, and a correlation is observed between the UCST and the nature of the compounds in the mixtures. Miscibility curves establish the composition and temperature intervals where thermodynamic properties of the mixtures, such as excess enthalpies H_m^E and excess volumes V_m^E, can be determined.

Relevância:

20.00%

Publicador:

Resumo:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers, at the various points of a network became a concern only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast scenario of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system, and they must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks to which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure, first in ideal and then in realistic operating conditions, is tested.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity with the aim of replacing the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
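The idea of locating a fault from time-stamped transient arrivals can be sketched with the classical two-ended travelling-wave formula: synchronized stations at the two line terminals record when the fault-generated transient arrives, and the arrival-time difference pins down the fault position. This is a standard textbook technique consistent with the approach described, not the thesis' exact algorithm; the speed and line length below are illustrative.

```python
V = 2.9e8           # propagation speed of the transient along the line, m/s
LINE_LENGTH = 12e3  # monitored line length, m (illustrative)

def fault_distance(t_a, t_b, length=LINE_LENGTH, v=V):
    """Distance of the fault from terminal A, given the transient arrival
    times t_a and t_b (s) registered by synchronised stations at the two
    line ends:  d_A = (length + v * (t_a - t_b)) / 2."""
    return 0.5 * (length + v * (t_a - t_b))

# a fault 4 km from A: the wave reaches A after 4 km / v and B after 8 km / v
t_a, t_b = 4e3 / V, 8e3 / V
d = fault_distance(t_a, t_b)
```

The accuracy of d is dominated by the time-stamping uncertainty of the stations, which is why the combined uncertainty analysis of chapter 5 propagates the non-idealities of every device in the measurement chain.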

Relevância:

20.00%

Publicador:

Resumo:

Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information to the data retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as near a volcano; in a hospital they could be used to monitor the physical conditions of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems that respects this trade-off. The first scenario considers a network with a high number of nodes deployed over a given geographical area without detailed planning, which have to transmit data toward a coordinator node, named the sink, assumed to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. Each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise.
The receiver onboard the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. Cooperative diversity is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim we move the computational complexity to the receiver onboard the UAV), and the importance of guaranteeing high energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are the theoretical limit on the maximum amount of data that can be collected by the receiver, and the error probability with a given modulation scheme. Since we deal with a WSN, both metrics are evaluated taking the energy efficiency of the network into consideration. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision on the absence/presence of the target (fire). The second is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the result to the next node. The chain ends at the sink node, which transmits the received decision to the user.
In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical distributed detection scenario. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experimentation.
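A serial (tandem) distributed-detection chain of this kind can be sketched with one simple fusion rule: each node biases its own detection threshold according to the decision it receives from the previous node. This is an illustrative rule with arbitrary thresholds and signal model, not the thesis' specific fusion rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def chain_decision(n_nodes=10, target=True, mu=1.0,
                   thr_low=0.2, thr_high=0.8):
    """One pass along the sensor chain. Each node measures a noisy sample
    (temperature excess mu under 'fire present', 0 otherwise, unit-variance
    Gaussian noise) and fuses it with the incoming decision by shifting its
    threshold: an incoming '1' lowers it, an incoming '0' raises it.
    Only one bit travels on each sensor-to-sensor link."""
    x = rng.normal(loc=mu if target else 0.0, size=n_nodes)
    decision = 0
    for i in range(n_nodes):
        thr = thr_low if decision == 1 else thr_high
        decision = 1 if x[i] > thr else 0
    return decision

# Monte Carlo estimate of the error probabilities at the end of the chain
trials = 2000
miss = sum(chain_decision(target=True) == 0 for _ in range(trials)) / trials
false_alarm = sum(chain_decision(target=False) == 1 for _ in range(trials)) / trials
```

Optimizing the per-node thresholds (rather than fixing them as here) is exactly the fusion-rule design problem the thesis addresses.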

Relevância:

20.00%

Publicador:

Resumo:

Monte Carlo (MC) simulation techniques are becoming very common in the Medical Physics community. MC can be used for modeling Single Photon Emission Computed Tomography (SPECT) and for dosimetry calculations. 188Re is a promising radiotherapeutic candidate, and understanding the mechanisms of the radioresponse of tumor cells "in vitro" is of crucial importance as a first step before "in vivo" studies. The dosimetry of 188Re, used to target different cancer cell lines, has been evaluated with the MC code GEANT4. The simulations estimate the average energy deposition per event in the biological samples. The development of prototypes for medical imaging, based on LaBr3:Ce scintillation crystals coupled with a position-sensitive photomultiplier, has been studied using GEANT4 simulations. Having tested in simulation surface treatments different from the one applied to the crystal used in our experimental measurements, we found that the energy resolution (ER) and the spatial resolution (SR) could, in principle, be improved by machining the lateral surfaces of the crystal differently. We then studied a system able to acquire both echographic and scintigraphic images, letting the medical operator obtain complete anatomical and functional information for tumor diagnosis. The scintigraphic part of the detector is simulated with GEANT4, and first attempts to reconstruct tomographic images have been made using a standard back-projection algorithm. The proposed camera is based on slant collimators and LaBr3:Ce crystals. Within the field of view (FOV) of the camera, it is possible to distinguish point sources located in air about 2 cm apart. Under particular conditions of uptake, tumor depth, and dimension, the preliminary results show that the signal-to-noise ratio (SNR) values obtained are higher than the standard detection limit.
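The "average energy deposition per event" estimated by the GEANT4 runs can be illustrated with a deliberately crude toy Monte Carlo: mono-energetic particles enter a slab, the first interaction depth is sampled from an exponential free path, and the particle either deposits all its energy locally or escapes. All numbers below are placeholders, not 188Re or sample data, and full local absorption is a strong simplification of real particle transport.

```python
import numpy as np

rng = np.random.default_rng(42)

E0 = 0.155          # MeV, particle energy (placeholder)
MU = 0.15           # 1/mm, linear attenuation coefficient (placeholder)
THICKNESS = 5.0     # mm, sample thickness (placeholder)

def mean_deposit(n_events=100_000):
    """Average energy deposited per event in the slab, assuming full local
    absorption at the first interaction point (a crude approximation):
    the analytic answer is E0 * (1 - exp(-MU * THICKNESS))."""
    depth = rng.exponential(scale=1.0 / MU, size=n_events)
    deposited = np.where(depth < THICKNESS, E0, 0.0)
    return deposited.mean()

e_mean = mean_deposit()
```

A real GEANT4 run replaces this single-interaction model with full physics lists (secondary particles, scattering, detector geometry), but the per-event averaging of deposited energy is the same.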

Relevância:

20.00%

Publicador:

Resumo:

This dissertation examines the challenges and limits that graph-analysis algorithms encounter on distributed architectures built from personal computers. In particular, it analyzes the behavior of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel programming model is ill-suited to an efficient implementation of PageRank on clusters of personal computers. The implementation analyzed in fact exhibited negative scalability: the running time of the algorithm increases linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank on a cluster of 43 dual-core PCs with 2 GB of RAM each, using several graphs chosen to ease the identification of the variables that influence scalability. Graphs representing different models gave different results, showing that there is a relation between the clustering coefficient and the slope of the line representing running time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, represented the worst case in the PageRank tests, whereas Small-World graphs, which have a high clustering coefficient, represented the best case. The size of the graph also had a particularly interesting influence on the running time: the relation between the number of nodes and the number of edges was shown to determine the total time.
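For reference, the computation being benchmarked is plain power-iteration PageRank; the sketch below is a sequential dense-matrix version in Python (the Parallel BGL implementation distributes the graph and synchronizes in BSP supersteps, which is where the scalability problems arise).

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank. `adj` is a dense 0/1 adjacency matrix
    with adj[i, j] = 1 for an edge i -> j; d is the damping factor."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    r = np.full(n, 1.0 / n)                        # uniform initial ranks
    for _ in range(max_iter):
        # dangling nodes (no outgoing edges) spread their rank uniformly
        dangling = r[out_deg == 0].sum() / n
        contrib = (r / np.where(out_deg == 0, 1, out_deg)) @ adj
        new_r = (1 - d) / n + d * (contrib + dangling)
        if np.abs(new_r - r).sum() < tol:          # L1 convergence test
            return new_r
        r = new_r
    return r
```

Each iteration touches every edge, so in a BSP distribution every superstep ends with an all-to-all exchange of rank contributions followed by a global barrier; on commodity Ethernet clusters that communication can easily dominate the computation.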