987 results for Organic distributed feedback laser
Abstract:
A hexagonal wireless sensor network is a network topology in which a subset of nodes has six peer neighbours. These nodes form a backbone for multi-hop communications. In a previous work, we proposed the use of the hexagonal topology in wireless sensor networks and discussed its properties in relation to real-time (bounded latency) multi-hop communications in large-scale deployments. In that work, we did not consider the problem of forming the hexagonal topology in practice, which is the subject of this research. In this paper, we present a decentralized algorithm that forms the hexagonal topology backbone in an arbitrary but sufficiently dense network deployment. We implemented a prototype of our algorithm in NesC for TinyOS-based platforms. We present data from field tests of our implementation, collected using a deployment of fifty wireless sensor nodes.
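The abstract does not detail the formation procedure, so the following is only a hypothetical sketch of how a node might pick backbone neighbours in a dense deployment, one per ideal hexagonal bearing; the function and parameters are illustrative and are not the paper's algorithm.

    import math

    # Hypothetical illustration only: choose up to six backbone neighbours,
    # one per ideal hexagonal bearing (0, 60, ..., 300 degrees).
    def hex_backbone_neighbours(node, neighbours, max_angle_error=20.0):
        """node and each neighbour carry 'id', 'x', 'y' positions from localization."""
        chosen = {}
        for nb in neighbours:
            bearing = math.degrees(math.atan2(nb["y"] - node["y"], nb["x"] - node["x"])) % 360
            sector = round(bearing / 60.0) % 6          # nearest ideal hexagonal direction
            error = min(abs(bearing - sector * 60.0), 360.0 - abs(bearing - sector * 60.0))
            if error <= max_angle_error and (sector not in chosen or error < chosen[sector][0]):
                chosen[sector] = (error, nb["id"])
        return [nid for _, nid in chosen.values()]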
Abstract:
In distributed soft real-time systems, maximizing the aggregate quality-of-service (QoS) is a typical system-wide goal, and addressing the problem through distributed optimization is challenging. Subtasks are subject to unpredictable failures in many practical environments, and this makes the problem much harder. In this paper, we present a robust optimization framework for maximizing the aggregate QoS in the presence of random failures. We introduce the notion of K-failure to bound the effect of random failures on schedulability. Using this notion, we define the concept of K-robustness, which quantifies the degree of robustness of the QoS guarantee in a probabilistic sense. The parameter K helps to trade off achievable QoS against robustness. The proposed robust framework produces optimal solutions through distributed computations on the basis of Lagrangian duality, and we present some implementation techniques. Our simulation results show that the proposed framework can probabilistically guarantee a sub-optimal QoS that remains feasible even in the presence of random failures.
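As a hedged illustration of the distributed computation pattern the abstract refers to (Lagrangian dual decomposition with a subgradient price update), the sketch below assumes a toy utility model, costs and step size that are not from the paper.

    # Illustrative dual decomposition: maximize sum_i w_i*log(1+q_i)
    # subject to sum_i a_i*q_i <= C; each subtask solves its own subproblem
    # for a given price lam, and a coordinator updates the price by subgradient.
    def local_best_quality(w, a, lam):
        # argmax over q >= 0 of w*log(1+q) - lam*a*q  ->  q = w/(lam*a) - 1, clamped at 0
        return max(0.0, w / (lam * a) - 1.0)

    def dual_qos(weights, costs, capacity, steps=200, step_size=0.05):
        lam = 1.0
        q = [0.0] * len(weights)
        for _ in range(steps):
            q = [local_best_quality(w, a, lam) for w, a in zip(weights, costs)]
            excess = sum(a * qi for a, qi in zip(costs, q)) - capacity
            lam = max(1e-6, lam + step_size * excess)   # dual (price) subgradient step
        return q, lam

    qualities, price = dual_qos([3.0, 1.0, 2.0], [1.0, 1.0, 2.0], capacity=4.0)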
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach: not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. This paper proposes heuristics to dynamically determine which components to replicate based on their significance to the system as a whole, the consequent number of passive replicas for each, and where to place those replicas in the network. The activation of passive replicas is coordinated through a fast convergence protocol that reduces the complexity of the interactions needed among nodes until a new collective global service solution is determined.
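As a purely illustrative picture of such a heuristic (the significance scores, budget and per-component cap below are assumptions, not the paper's method), a coordinator could size each component's passive-replica count roughly in proportion to its significance:

    # Illustrative only: passive-replica counts roughly proportional to
    # component significance, within a global replica budget (rounding may
    # slightly over- or under-shoot the budget).
    def allocate_passive_replicas(significance, budget, max_per_component=3):
        total = sum(significance.values()) or 1.0
        return {c: min(max_per_component, round(budget * s / total))
                for c, s in significance.items()}

    print(allocate_passive_replicas({"controller": 0.6, "logger": 0.1, "ui": 0.3}, budget=5))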
Abstract:
Due to the growing complexity and adaptability requirements of real-time embedded systems, which often exhibit unrestricted inter-dependencies among supported services and user-imposed quality constraints, it is increasingly difficult to optimise the level of service of a dynamic task set within a useful and bounded time. This is even more difficult when intending to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand. This paper proposes an iterative refinement approach for a service's QoS configuration that takes into account services' inter-dependencies and quality constraints, trading off the achieved solution's quality for the cost of computation. Extensive simulations demonstrate that the proposed anytime algorithm is able to quickly find a good initial solution and effectively optimises the rate at which the quality of the current solution improves as the algorithm is given more time to run. The added benefits of the proposed approach clearly surpass its reduced overhead.
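The anytime pattern itself can be sketched independently of the paper's specific refinement heuristics; the loop below is a generic illustration in which the time budget, neighbourhood function and quality metric are assumptions:

    import time

    # Generic anytime loop: keep the best configuration found so far and
    # return it whenever the time budget expires or no neighbour improves it.
    def anytime_optimise(initial, neighbours, quality, budget_s=0.05):
        best, best_q = initial, quality(initial)
        deadline = time.monotonic() + budget_s
        while time.monotonic() < deadline:
            improved = False
            for cand in neighbours(best):           # one local refinement step
                q = quality(cand)
                if q > best_q:
                    best, best_q, improved = cand, q, True
            if not improved:
                break                               # local optimum reached early
        return best, best_q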
Abstract:
The scarcity and diversity of resources among the devices of heterogeneous computing environments may affect their ability to perform services with specific Quality of Service constraints, particularly in dynamic distributed environments where the characteristics of the computational load cannot always be predicted in advance. Our work addresses this problem by allowing resource-constrained devices to cooperate with more powerful neighbour nodes, opportunistically taking advantage of global distributed resources and processing power. Rather than assuming that the dynamic configuration of this cooperative service executes until it computes its optimal output, the paper proposes an anytime approach that has the ability to trade off deliberation time for the quality of the solution. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves at each iteration, with an overhead that can be considered negligible.
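One way to visualise the cooperation step (purely illustrative; the greedy rule and data shapes are assumptions, not the paper's algorithms):

    # Illustrative cooperative offloading: a constrained node assigns each task
    # to the neighbour with the most spare capacity that can still host it.
    def offload_tasks(tasks, neighbours):
        """tasks: {name: demand}; neighbours: {node: spare_capacity}."""
        placement, spare = {}, dict(neighbours)
        for name, demand in sorted(tasks.items(), key=lambda t: -t[1]):  # biggest first
            host = max(spare, key=spare.get)
            if spare[host] >= demand:
                placement[name] = host
                spare[host] -= demand
            else:
                placement[name] = None              # keep locally or reject
        return placement

    print(offload_tasks({"video": 3, "sensing": 1}, {"n1": 2, "n2": 4}))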
Abstract:
In global scientific experiments with collaborative scenarios involving multinational teams, there are big challenges related to data access; namely, data movements to other regions or clouds are precluded by constraints on latency costs, data privacy and data ownership. Furthermore, each site processes local data sets using specialized algorithms and produces intermediate results that are helpful as inputs to applications running on remote sites. This paper shows how to model such collaborative scenarios as a scientific workflow implemented with AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic), a decentralized framework offering a feasible solution to run the workflow activities on distributed data centers in different regions without the need for large data movements. The AWARD workflow activities are independently monitored and can be dynamically reconfigured and steered by different users, namely by hot-swapping the algorithms to enhance the computation results or by changing the workflow structure to support feedback dependencies, where an activity receives feedback output from a successor activity. A real implementation of one practical scenario and its execution on multiple data centers of the Amazon Cloud is presented, including experimental results with steering by multiple users.
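Not the AWARD API, but a minimal sketch of the two reconfiguration ideas mentioned above (hot-swapping an activity's algorithm and accepting feedback from a successor); all names are illustrative:

    # Illustrative activity loop: the algorithm can be hot-swapped between
    # iterations, and a successor activity may send feedback tokens back.
    class Activity:
        def __init__(self, algorithm):
            self.algorithm = algorithm
            self.feedback = None

        def swap_algorithm(self, new_algorithm):    # dynamic reconfiguration
            self.algorithm = new_algorithm

        def receive_feedback(self, token):          # feedback dependency
            self.feedback = token

        def run_iteration(self, data):
            return self.algorithm(data, self.feedback)

    act = Activity(lambda d, fb: d * (fb or 1))
    print(act.run_iteration(10))                    # 10
    act.receive_feedback(2)
    act.swap_algorithm(lambda d, fb: d + fb)
    print(act.run_iteration(10))                    # 12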
Abstract:
In video communication systems, the video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes by drawing inspiration from principles and tools of an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) the design of an improved compressed-domain perceptual classification mechanism; (ii) the design of an improved transcoding tool for the DSC-based protection mechanism; and (iii) the integration of a perceptual classification mechanism in an H.264/AVC compliant codec with a DSC-based error protection mechanism. The performance results obtained show that the proposed PDEP video codec provides a better performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes.
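The unequal-protection idea can be illustrated with a toy parity-allocation rule (the classes and rates below are assumptions, not the paper's perceptual classifier or transcoding tool):

    # Toy unequal error protection: spend more parity budget on perceptually
    # important regions; classes and rates are illustrative assumptions.
    PROTECTION_RATE = {"high": 0.30, "medium": 0.15, "low": 0.05}   # parity fraction

    def parity_budget(regions):
        """regions: list of (perceptual_class, payload_bytes) pairs."""
        return [int(size * PROTECTION_RATE[cls]) for cls, size in regions]

    print(parity_budget([("high", 1200), ("low", 800)]))            # [360, 40]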
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Electrical Engineering.
Abstract:
The process of resources systems selection plays an important part in the integration of Distributed/Agile/Virtual Enterprises (D/A/VEs). However, resources systems selection is still a difficult problem to solve in a D/A/VE, as this paper points out. Globally, we can say that the selection problem has been approached from different perspectives, originating different kinds of models and algorithms to solve it. To assist the development of an intelligent and flexible web prototype (broker) tool that integrates all the selection model activities and tools and can adapt to each D/A/VE project or instance (the major goal of our final project), this paper presents a formulation of a class of resources selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected show the need to apply other kinds of algorithms (approximate solution algorithms) outside the domain of applicability found for the simulated algorithms. However, for a broker tool, knowledge of the algorithms' limitations is very important in order to develop and select, based on problem features, the most suitable algorithm that guarantees good performance.
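A minimal formulation in this spirit (the symbols are assumptions, not the paper's exact model): with T processing tasks, R_i pre-selected resources for task i, cost c_{ij}, and binary variable x_{ij} equal to 1 when resource j is selected for task i,

    \min \sum_{i=1}^{T} \sum_{j=1}^{R_i} c_{ij}\, x_{ij}
    \quad \text{s.t.} \quad \sum_{j=1}^{R_i} x_{ij} = 1 \;\; (i = 1, \dots, T),
    \qquad x_{ij} \in \{0, 1\},

where the simplex algorithm solves the linear relaxation and a branch-and-bound search then enforces the integrality constraints.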
Abstract:
Two new metal-organic compounds, {[Cu3(μ3-4-ptz)4(μ2-N3)2(DMF)2](DMF)2}n (1) and {[Cu(4-ptz)2(H2O)2]}n (2) {4-ptz = 5-(4-pyridyl)tetrazolate}, with 3D and 2D coordination networks, respectively, have been synthesized while studying the effect of reaction conditions on the coordination modes of 4-ptz, by employing the [2 + 3] cycloaddition as a tool for generating in situ the 5-substituted tetrazole ligands from 4-pyridinecarbonitrile and NaN3 in the presence of a copper(II) salt. The obtained compounds have been structurally characterized, and the topological analysis of 1 discloses a topologically unique trinodal 3,5,6-connected 3D network which, upon further simplification, results in a uninodal 8-connected underlying net with the bcu (body-centred cubic) topology driven by the [Cu3(μ2-N3)2] cluster nodes and μ3-4-ptz linkers. In contrast, the 2D metal-organic network in 2 has been classified as a uninodal 4-connected underlying net with the sql [Shubnikov tetragonal plane net] topology assembled from the Cu nodes and μ2-4-ptz linkers. The catalytic investigations disclosed that 1 and 2 act as active catalyst precursors towards the microwave-assisted homogeneous oxidation of secondary alcohols (1-phenylethanol, cyclohexanol, 2-hexanol, 3-hexanol, 2-octanol and 3-octanol) with tert-butylhydroperoxide, leading to yields of the corresponding ketones of up to 86% (TOF = 430 h⁻¹) and 58% (TOF = 290 h⁻¹) in the oxidation of 1-phenylethanol and cyclohexanol, respectively, after 1 h under low-power (10 W) microwave irradiation and in the absence of any added solvent or additive.
Abstract:
The International Electrotechnical Commission (IEC) 61499 architecture incorporates several function blocks with which distributed control applications may be developed, and defines how these are interpreted and executed. However, due to the distributed nature of the control applications, many issues also need to be taken into account. Most of these arise from the new error model and failure modes of the distributed hardware on which the distributed application is executed, and also from the incomplete definition of the execution models in the standard. IEC 61499 frameworks do not clarify how to handle the replication of software and hardware components. In this paper we propose a replication model for IEC 61499 applications and identify which mechanisms and protocols may be used to support it.
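One common mechanism such a replication model could rely on is majority voting over the outputs of replicated function block instances; the sketch below is a generic illustration (the names are not from the standard or from the paper):

    from collections import Counter

    # Illustrative voter: resolve the outputs of replicated function block
    # instances by majority, tolerating f faulty replicas out of 2f + 1.
    def vote(outputs):
        value, count = Counter(outputs).most_common(1)[0]
        if count <= len(outputs) // 2:
            raise RuntimeError("no majority among replica outputs")
        return value

    print(vote([42, 42, 41]))   # 42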
Abstract:
The aim of the present work was to characterize the internal structure of nanogratings generated inside bulk fused silica by ultrafast laser processing and to study the influence of diluted hydrofluoric acid etching on their structure. The nanogratings were inscribed at a depth of 100 μm within fused silica wafers by a direct writing method, using 1030 nm radiation wavelength and the following processing parameters: E = 5 μJ, τ = 560 fs, f = 10 kHz, and v = 100 μm/s. The results achieved show that the laser-affected regions are elongated ellipsoids with a typical major diameter of about 30 μm and a minor diameter of about 6 μm. The nanogratings within these regions are composed of alternating nanoplanes of damaged and undamaged material, with an average periodicity of 351 ± 21 nm. The damaged nanoplanes contain nanopores randomly dispersed in a material containing a large density of defects. These nanopores present a roughly bimodal size distribution, with average dimensions for each class of pores of 65 ± 20 × 16 ± 8 × 69 ± 16 nm³ and 367 ± 239 × 16 ± 8 × 360 ± 194 nm³, respectively. The number and size of the nanopores increase drastically when a hydrofluoric acid treatment is performed, leading to the coalescence of these voids into large planar discontinuities parallel to the nanoplanes. The preferential etching of the damaged material by the hydrofluoric acid solution, which is responsible for the pore growth and coalescence, confirms its high defect density.
Abstract:
The morphological and structural modifications induced in sapphire by surface treatment with femtosecond laser radiation were studied. Single-crystal sapphire wafers cut parallel to the (0 1 2) planes were treated with 560 fs, 1030 nm wavelength laser radiation using wide ranges of pulse energy and repetition rate. Self-ordered periodic structures with an average spatial periodicity of about 300 nm were observed for fluences slightly higher than the ablation threshold. For higher fluences the interaction was more disruptive, and extensive fracture, exfoliation, and ejection of ablation debris occurred. Four types of particles were found in the ablation debris: (a) spherical nanoparticles about 50 nm in diameter; (b) composite particles between 150 and 400 nm in size; (c) rounded resolidified particles about 100-500 nm in size; and (d) angular particles presenting a lamellar structure and deformation twins. The study of those particles by selected area electron diffraction showed that the spherical nanoparticles and the composite particles are amorphous, while the resolidified droplets and the angular particles present a crystalline α-alumina structure, the same as the original material. Taking into consideration the existing ablation theories, it is proposed that the spherical nanoparticles are directly emitted from the surface in the ablation plume, while resolidified droplets are emitted as a result of the ablation process in the liquid phase, in the low-intensity regime, and by exfoliation, in the high-intensity regime. Nanoparticle clusters are formed by nanoparticle coalescence in the cooling ablation plume.
Abstract:
Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
Master's in Electrical Engineering – Electrical Power Systems