792 results for cloud-based computing


Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática).

Relevance: 30.00%

Publisher:

Abstract:

This paper is about a PV system linked to the electric grid through power converters under cloud scope. The PV system is modeled by the five-parameter equivalent circuit, and a maximum power point tracking (MPPT) procedure is integrated into the model. The converter modeling covers the association of a DC-DC boost converter with a three-level inverter. PI controllers are used with PWM by sliding mode control, associated with space vector modulation, to control the boost converter and the inverter. A case study provides a simulation to assess the performance of a PV system linked to the electric grid. Conclusions regarding the integration of the PV system into the electric grid are presented. © IFIP International Federation for Information Processing 2015.
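
The abstract does not say which MPPT procedure is integrated into the model; a common choice is perturb-and-observe, sketched below as an illustrative stand-in (the function name and step size are ours). Each control period, the converter measures panel voltage and current and feeds the previous pair back in:

    def perturb_and_observe(v, i, v_prev, p_prev, step=0.5):
        """One perturb-and-observe MPPT iteration: keep perturbing the
        voltage reference in the direction that increased power, and
        reverse otherwise. Returns the next reference and current power."""
        p = v * i
        if p > p_prev:
            v_next = v + step if v > v_prev else v - step  # keep direction
        else:
            v_next = v - step if v > v_prev else v + step  # reverse direction
        return v_next, p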

Relevance: 30.00%

Publisher:

Abstract:

Experimental optoelectronic characterization of a p-i'(a-SiC:H)-n/p-i(a-Si:H)-n heterostructure with low-conductivity doped layers shows the feasibility of tailoring channel bandwidth and wavelength by optical bias through back- and front-side illumination. A front background enhances the light-to-dark sensitivity of the long and medium wavelength ranges and strongly quenches the others, whereas a back violet background enhances the magnitude in the short wavelength range and reduces the others. The experiments have three distinct programmed time slots: control, hibernation and data. Throughout the control time slot, steady light wavelengths illuminate either or both sides of the device, followed by hibernation without any background illumination. The third time slot allows a programmable sequence of different wavelengths, with an impulse frequency of 6000 Hz, to shine upon the sensor. Results show that the control time slot illumination influences the data time slot, which is used as a volatile memory with set and reset logical functions. © IFIP International Federation for Information Processing 2015.

Relevance: 30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum, extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7].

Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and the interaction among distinct endmembers is negligible [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]).
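
For concreteness, the linear mixing model discussed above is commonly written as follows (a standard formulation stated here for reference, not quoted from the chapter):

\[
\mathbf{x} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
\qquad \alpha_i \ge 0, \qquad \sum_{i=1}^{p} \alpha_i = 1,
\]

where \(\mathbf{x}\) is the observed spectral vector, the columns of \(\mathbf{M} = [\mathbf{m}_1, \ldots, \mathbf{m}_p]\) are the \(p\) endmember signatures, \(\boldsymbol{\alpha}\) holds the abundance fractions, and \(\mathbf{n}\) is noise. The full-additivity constraint on the abundances is precisely the statistical dependence that compromises ICA, as discussed below.
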
In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm it uses must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] also find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module selects the spectral vectors that best represent the smaller convex cone containing the data; the remaining pixels are rejected when their spectral angle distance (SAD) is below a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
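
To make the skewer idea concrete, here is a minimal NumPy sketch of the PPI scoring step described above (the MNF preprocessing is omitted, and the number of skewers is an arbitrary choice, not a value from the text):

    import numpy as np

    def ppi_scores(X, num_skewers=1000, seed=0):
        """Pixel purity index scores via random skewer projections.
        X: (n_pixels, n_bands) spectral vectors. Returns, per pixel, the
        number of times it was an extreme along a random direction; the
        highest-scoring pixels are taken as the purest ones."""
        rng = np.random.default_rng(seed)
        scores = np.zeros(X.shape[0], dtype=int)
        for _ in range(num_skewers):
            skewer = rng.standard_normal(X.shape[1])  # random direction
            proj = X @ skewer                         # project every pixel
            scores[np.argmin(proj)] += 1              # record both extremes
            scores[np.argmax(proj)] += 1
        return scores
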
In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised.

To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the noise estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data.

The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
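
The iterative orthogonal-projection step used by VCA can be sketched as follows (a simplified illustration that assumes the data have already been reduced to the signal subspace; names are ours, not the chapter's):

    import numpy as np

    def vca_extract(X, p, seed=0):
        """Pick p endmember pixels by iterative orthogonal projection:
        at each step, project the data onto a direction orthogonal to
        the span of the endmembers found so far and keep the extreme."""
        rng = np.random.default_rng(seed)
        E = np.zeros((X.shape[1], p))            # endmember signatures so far
        indices = []
        for k in range(p):
            w = rng.standard_normal(X.shape[1])
            if k > 0:                            # remove span(E[:, :k]) component
                A = E[:, :k]
                w = w - A @ np.linalg.lstsq(A, w, rcond=None)[0]
            f = w / np.linalg.norm(w)
            idx = int(np.argmax(np.abs(X @ f)))  # extreme of the projection
            indices.append(idx)
            E[:, k] = X[idx]
        return indices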

Relevance: 30.00%

Publisher:

Abstract:

Media content personalisation is a major challenge involving viewers as well as media content producer and distributor businesses. The goal is to provide viewers with media items aligned with their interests. Producers and distributors engage in item negotiations to establish the corresponding service level agreements (SLA). In order to address automated partner lookup and item SLA negotiation, this paper proposes the MultiMedia Brokerage (MMB) platform, a multiagent system that negotiates SLA regarding media items on behalf of media content producer and distributor businesses. The MMB platform is structured in four service layers: interface, agreement management, business modelling and market. In this context, there are: (i) brokerage SLA (bSLA), which are established between individual businesses and the platform regarding the provision of brokerage services; and (ii) item SLA (iSLA), which are established between producer and distributor businesses regarding the provision of media items. In particular, this paper describes the negotiation, establishment and enforcement of bSLA and iSLA, which occur at the agreement and negotiation layers, respectively. The platform adopts a pay-per-use business model in which the bSLA define the general conditions that apply to the related iSLA. To illustrate this process, we present a case study describing the negotiation of a bSLA instance and several related iSLA instances, the latter corresponding to the negotiation of the Electronic Program Guide (EPG) for a specific end viewer.
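
The bSLA/iSLA relationship described above lends itself to a small data-model sketch (field names are hypothetical; the abstract does not give the platform's actual schema):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ItemSLA:           # iSLA: producer <-> distributor, one media item
        producer_id: str
        distributor_id: str
        item_id: str
        price: float

    @dataclass
    class BrokerageSLA:      # bSLA: business <-> platform brokerage terms
        business_id: str
        fee_per_use: float   # pay-per-use business model
        # the general conditions of the bSLA apply to these related iSLA
        item_slas: List[ItemSLA] = field(default_factory=list)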

Relevance: 30.00%

Publisher:

Abstract:

This paper presents the applicability of a reinforcement learning algorithm based on Bayes' theorem. The proposed reinforcement learning algorithm is an advantageous and indispensable tool for ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system whose purpose is to provide decision support to electricity market negotiating players. ALBidS uses a set of different strategies for providing decision support to market players, and these strategies are employed according to their probability of success in each context. The approach proposed in this paper uses a Bayesian network to decide, at each time, on the action most likely to succeed, depending on past events. The performance of the proposed methodology is tested using electricity market simulations in MASCEM (Multi-Agent Simulator of Competitive Electricity Markets). MASCEM provides the means for simulating a realistic electricity market environment, based on real data from electricity market operators.
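
As a deliberately simplified stand-in for the Bayesian reasoning described above (the paper uses a Bayesian network; this sketch uses independent Beta-Bernoulli posteriors per context, and all names are ours):

    class BayesianStrategySelector:
        """Choose, per context, the strategy with the highest posterior
        probability of success, updated from past outcomes."""

        def __init__(self, strategies, contexts):
            # Beta(1, 1) prior for every (context, strategy) pair
            self.a = {(c, s): 1.0 for c in contexts for s in strategies}
            self.b = {(c, s): 1.0 for c in contexts for s in strategies}
            self.strategies = list(strategies)

        def choose(self, context):
            # posterior mean success probability is a / (a + b)
            return max(self.strategies,
                       key=lambda s: self.a[(context, s)]
                       / (self.a[(context, s)] + self.b[(context, s)]))

        def update(self, context, strategy, success):
            if success:
                self.a[(context, strategy)] += 1.0
            else:
                self.b[(context, strategy)] += 1.0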

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Informatics (Informática) from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance: 30.00%

Publisher:

Abstract:

Thesis submitted to the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master of Science in Biotechnology.

Relevance: 30.00%

Publisher:

Abstract:

Near real time media content personalisation is nowadays a major challenge involving media content sources, distributors and viewers. This paper describes an approach to seamless recommendation, negotiation and transaction of personalised media content. It adopts an integrated view of the problem by proposing, on the business-to-business (B2B) side, a brokerage platform to negotiate the media items on behalf of the media content distributors and sources, providing viewers, on the business-to-consumer (B2C) side, with a personalised electronic programme guide (EPG) containing the set of recommended items after negotiation. In this setup, when a viewer connects, the distributor looks up and invites sources to negotiate the contents of the viewer's personal EPG. The proposed multi-agent brokerage platform is structured in four layers, modelling the registration, service agreement, partner lookup and invitation stages as well as the item recommendation, negotiation and transaction stages of the B2B processes. The recommendation service is a rule-based switch hybrid filter comprising six collaborative and two content-based filters. The rule-based system selects, at runtime, the filter(s) to apply as well as the final set of recommendations to present. The filter selection is based on the data available, ranging from the history of items watched to the ratings and/or tags assigned to the items by the viewer. Additionally, this module implements (i) a novel item stereotype to represent newly arrived items, (ii) a standard user stereotype for new users, (iii) a novel passive user tag cloud stereotype for socially passive users, and (iv) a new content-based filter named the collinearity and proximity similarity (CPS). At the end of the paper, we present off-line results and a case study describing how the recommendation service works. The proposed system provides, to our knowledge, an excellent holistic solution to the problem of recommending multimedia contents.
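
The runtime filter switch described above can be pictured with a short rule-based sketch (the actual selection rules, filter names and viewer fields are not given in the abstract, so everything below is illustrative):

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ViewerData:
        history: List[str] = field(default_factory=list)   # items watched
        ratings: Dict[str, float] = field(default_factory=dict)
        tags: Dict[str, List[str]] = field(default_factory=dict)

    def select_filters(v: ViewerData) -> List[str]:
        """Pick filters according to the data available for the viewer."""
        if not v.history:
            return ["standard-user-stereotype"]            # new user
        chosen = ["collaborative-on-history"]
        if v.ratings:
            chosen.append("collaborative-on-ratings")
        if v.tags:
            chosen.append("collaborative-on-tags")
        else:
            chosen.append("passive-user-tag-cloud-stereotype")
        chosen.append("CPS")                               # content-based filter
        return chosen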

Relevance: 30.00%

Publisher:

Abstract:

IEEE International Conference on Pervasive Computing and Communications (PerCom), PhD Forum, 23–26 March 2015, St. Louis, USA.

Relevance: 30.00%

Publisher:

Abstract:

It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter them. Passive replication minimises resource consumption by activating redundant replicas only in case of failure, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and thus lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with significantly lower overhead than a strict active redundancy-based approach, achieving high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the interactions needed among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.
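
A minimal sketch of the passive-replication idea referenced above (state updates only, activation on suspected primary failure; the heartbeat-timeout policy is an assumption, not the paper's coordination model):

    import time

    class PassiveReplica:
        """Applies cheap state updates while the primary is alive and
        promotes itself to active only when heartbeats stop arriving."""

        def __init__(self, heartbeat_timeout=1.0):
            self.state = {}
            self.last_heartbeat = time.monotonic()
            self.timeout = heartbeat_timeout
            self.active = False

        def apply_update(self, key, value):
            # receiving state is less resource-demanding than executing
            self.state[key] = value
            self.last_heartbeat = time.monotonic()

        def tick(self):
            # periodic failure check: activate once the primary goes silent
            if not self.active and time.monotonic() - self.last_heartbeat > self.timeout:
                self.active = True
            return self.active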

Relevance: 30.00%

Publisher:

Abstract:

This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function ('erf') is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
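
The shape of such a repulsion merit function can be sketched as follows (an illustrative construction: the squared residual norm plus erf-family bumps around roots already found; the constants and the use of erfc are our assumptions, not the paper's exact function):

    import math
    import numpy as np
    from scipy.optimize import minimize

    def repulsion_merit(F, found_roots, rho=1.0, beta=10.0):
        """Merit function: ||F(x)||^2 plus a penalty near known roots,
        so that Nelder-Mead searches are repelled toward new roots."""
        def merit(x):
            base = float(np.sum(np.asarray(F(x)) ** 2))
            penalty = sum(rho * math.erfc(beta * np.linalg.norm(np.asarray(x) - r))
                          for r in found_roots)  # ~rho at a known root, ->0 away
            return base + penalty
        return merit

    F = lambda x: [x[0] ** 2 - 1.0]              # roots at +1 and -1
    roots = []
    for x0 in (0.5, -0.5):                       # multistart Nelder-Mead
        res = minimize(repulsion_merit(F, roots), [x0], method="Nelder-Mead")
        if res.fun < 1e-6:
            roots.append(res.x)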

Relevance: 30.00%

Publisher:

Abstract:

The Internet of Things (IoT) has emerged as a paradigm over the last few years as a result of the tight integration of the computing and the physical worlds. The requirement of remote sensing makes low-power wireless sensor networks one of the key enabling technologies of IoT. These networks encompass several challenges, especially in communication and networking, due to their inherent constraints of low-power operation, deployment in harsh and lossy environments, and limited computing and storage resources. The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) [1] was proposed by the IETF ROLL (Routing Over Low-power and Lossy links) working group and has been adopted as an IETF standard in RFC 6550 since March 2012. Although RPL largely satisfies the requirements of low-power and lossy sensor networks, several issues remain open for improvement and specification, in particular with respect to Quality of Service (QoS) guarantees and support for mobility. In this paper, we focus mainly on the RPL routing protocol and propose enhancements to the standard specification in order to provide QoS guarantees for static as well as mobile LLNs. For this purpose, we propose OF-FL (Objective Function based on Fuzzy Logic), a new objective function that overcomes the limitations of the objective functions standardized for RPL by considering important link and node metrics, namely end-to-end delay, number of hops, ETX (expected transmission count) and LQL (link quality level). In addition, we present the design of Co-RPL, an extension to RPL based on the corona mechanism that supports mobility in order to overcome the problem of slow reactivity to frequent topology changes, thus providing better quality of service, mainly in dynamic network applications. Performance evaluation results show that both OF-FL and Co-RPL provide considerable improvement over the standard specification, mainly in terms of packet loss ratio and average network latency. © 2015 Elsevier B.V. All rights reserved.
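
A toy rendering of the fuzzy combination of the four metrics named above (the membership breakpoints and the min-aggregation are assumptions for illustration, not the OF-FL rule base):

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def parent_quality(delay_ms, hops, etx, lql):
        """Fuzzy score of a candidate RPL parent; higher is better."""
        good_delay = tri(delay_ms, -1.0, 0.0, 200.0)  # low delay preferred
        good_hops = tri(hops, -1.0, 0.0, 10.0)        # few hops preferred
        good_etx = tri(etx, 0.0, 1.0, 5.0)            # ETX ~ 1 is ideal
        good_lql = tri(lql, 0.0, 3.0, 3.01)           # assuming LQL in 0..3
        return min(good_delay, good_hops, good_etx, good_lql)  # fuzzy AND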

Relevance: 30.00%

Publisher:

Abstract:

The complexity of systems is considered an obstacle to the progress of the IT industry. Autonomic computing is presented as an alternative to cope with this growing complexity. It is a holistic approach in which systems are able to configure, heal, optimize, and protect themselves. Web-based applications are an example of systems where the complexity is high: the number of components, their interoperability, and workload variations are factors that may lead to performance failures or unavailability scenarios. The occurrence of these scenarios affects the revenue and reputation of businesses that rely on these types of applications. In this article, we present a self-healing framework for Web-based applications (SHõWA). SHõWA is composed of several modules, which monitor the application, analyze the data to detect and pinpoint anomalies, and execute recovery actions autonomously. The monitoring is done by a small aspect-oriented programming agent. This agent does not require changes to the application source code and includes adaptive and selective algorithms to regulate the level of monitoring. Anomalies are detected and pinpointed by means of statistical correlation: the data analysis detects changes in the server response time and determines whether those changes are correlated with the workload or are due to a performance anomaly. In the presence of a performance anomaly, the data analysis pinpoints it, and SHõWA executes a recovery procedure. We also present a study of the detection and localization of anomalies, the accuracy of the data analysis, and the performance impact induced by SHõWA. Two benchmarking applications, exercised through dynamic workloads, and different types of anomaly were considered in the study. The results reveal that (1) SHõWA detects and pinpoints anomalies while the number of end users affected is still low; (2) SHõWA was able to detect anomalies without raising any false alarms; and (3) SHõWA does not induce a significant performance overhead (throughput was affected by less than 1%, and the added response time delay was no more than 2 milliseconds).
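
The workload-correlation test at the heart of the analysis can be pictured with a short sketch (Pearson correlation over a window of samples; the threshold and the change test are assumptions, not SHõWA's exact statistics):

    import numpy as np

    def performance_anomaly(workload, response_time, corr_threshold=0.5):
        """Flag an anomaly when response time is rising but the rise is
        not explained by (i.e., not correlated with) the workload."""
        w = np.asarray(workload, dtype=float)
        r = np.asarray(response_time, dtype=float)
        rising = r[-1] > r.mean() + 2.0 * r.std()   # response-time change?
        corr = np.corrcoef(w, r)[0, 1]              # workload correlation
        return bool(rising and corr < corr_threshold)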