Abstract:
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
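To make the model concrete, the following is a minimal Python sketch of the linear mixing model with Dirichlet-distributed abundances that DECA builds on; the dimensions, Dirichlet parameters, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: B spectral bands, p endmembers, N pixels.
B, p, N = 224, 3, 1000

# Endmember signatures (columns of the mixing matrix M), random here for illustration.
M = rng.uniform(0.0, 1.0, size=(B, p))

# Abundance fractions drawn from a Dirichlet density: non-negative and summing
# to one, the two acquisition-process constraints DECA enforces.
alpha = np.array([2.0, 5.0, 3.0])      # hypothetical Dirichlet parameters
A = rng.dirichlet(alpha, size=N).T     # p x N, each column sums to 1

# Linear mixing model: each observed spectrum is M @ a plus noise.
X = M @ A + 0.001 * rng.standard_normal((B, N))

assert np.allclose(A.sum(axis=0), 1.0) and (A >= 0).all()
```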
Abstract:
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
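The geometric idea translates into a short iterative procedure. The Python sketch below captures the core VCA loop under simplifying assumptions (no SNR-dependent dimensionality reduction, data already projected); it is an illustration of the vertex-hunting step, not a full reimplementation of the published algorithm.

```python
import numpy as np

def vca_sketch(X, p, rng=np.random.default_rng(0)):
    """Simplified VCA-style loop: repeatedly pick the pixel with the largest
    projection onto a direction orthogonal to the subspace spanned by the
    endmembers found so far (the vertices of the data simplex)."""
    B, N = X.shape
    E = np.zeros((B, p))                  # extracted endmember signatures
    for i in range(p):
        if i == 0:
            P = np.eye(B)                 # no endmembers yet: whole space
        else:
            U = E[:, :i]
            P = np.eye(B) - U @ np.linalg.pinv(U)   # orthogonal complement
        f = P @ rng.standard_normal(B)    # random direction in that complement
        idx = np.argmax(np.abs(f @ X))    # a vertex is an extreme pixel along f
        E[:, i] = X[:, idx]
    return E

# Usage with a synthetic X from a linear mixing model: E_hat = vca_sketch(X, p=3)
```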
Abstract:
International conference with peer review. 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 22-27 July 2012, Munich, Germany.
Abstract:
Understanding the interactions between the oceans, the coastline, air quality, and forests will only be possible through the recording and analysis of geo-temporally referenced information. Monitoring large areas, however, raises the problem of spatial and temporal coverage, and of the costs involved, given the impossibility of deploying the number of monitoring stations needed to understand the phenomenon. It is therefore necessary to define methodologies for sensor placement and data collection that are robust, economical, and timely. In this dissertation we present an environmental monitoring strategy for aquatic (or large-scale) environments which, based on mobile systems and some principles of geostatistics, provides a more economical monitoring tool without sacrificing information quality. The models used in geostatistics rest on the idea that nearby measurements tend to be more similar than values observed at distant locations, and geostatistics provides methods to quantify this spatial correlation and incorporate it into estimation. The results obtained support the case for using mobile vehicles in sensor networks, and contribute to answering the question: which technique allows us to monitor large areas with few sensors? The solution lies in the quantity-estimation models used in geostatistics combined with mobile systems.
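The estimation principle described above is what ordinary kriging formalises. Below is a minimal Python sketch with a spherical variogram; the variogram model and its parameters are assumptions for illustration, since in practice they would be fitted to the collected data.

```python
import numpy as np

def ordinary_kriging(xy, z, target, sill=1.0, vrange=10.0):
    """Minimal ordinary-kriging sketch with a spherical variogram.
    Encodes the geostatistical idea that nearby measurements are more alike."""
    def gamma(h):
        hs = np.minimum(h / vrange, 1.0)
        return sill * (1.5 * hs - 0.5 * hs ** 3)   # spherical variogram model
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # Kriging system with a Lagrange multiplier enforcing unbiased weights.
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = gamma(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return w @ z                                    # estimate at the new location

# Hypothetical usage: three measured stations, one unsampled target point.
xy = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0]])
z = np.array([1.0, 2.0, 1.5])
print(ordinary_kriging(xy, z, target=np.array([5.0, 5.0])))
```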
Abstract:
A methodology based on microwave-assisted extraction (MAE) and LC with fluorescence detection (FLD) was investigated for the efficient determination of 15 polycyclic aromatic hydrocarbons (PAHs) regarded as priority pollutants by the US Environmental Protection Agency and dibenzo(a,l)pyrene in atmospheric particulate samples. PAHs were successfully extracted from real outdoor particulate matter (PM) samples with recoveries ranging from 81.4±8.8 to 112.0±1.1%, for all compounds except naphthalene (62.3±18.0%) and anthracene (67.3±5.7%), under the optimum MAE conditions (30.0 mL of ACN for 20 min at 110 ºC). No clean-up steps were necessary prior to LC analysis. LOQs ranging from 0.0054 ng/m³ for benzo(a)anthracene to 0.089 ng/m³ for naphthalene were reached. The validated MAE methodology was applied to the determination of PAHs in a set of real-world PM samples collected in Oporto (north of Portugal). The sum of particulate-bound PAHs in outdoor PM ranged from 2.5 to 28 ng/m³.
Abstract:
A flow injection analysis (FIA) system comprising a cysteine-selective electrode as the detection system was developed for the determination of this amino acid in pharmaceuticals. Several electrodes were constructed for this purpose, having PVC membranes with different ion exchangers and mediator solvents. The best working characteristics were attained with membranes comprising o-nitrophenyl octyl ether as mediator solvent and a tetraphenylborate-based ion sensor. Injection of 500 µL standard solutions into an ionic strength adjuster carrier (3×10⁻³ M) of barium chloride flowing at 2.4 mL min⁻¹ showed linearity ranges from 5.0×10⁻⁵ to 5.0×10⁻³ M, with slopes of 76.4±0.6 mV decade⁻¹ and R²>0.9935. The slope decreased significantly when a pH adjustment was required, selected at 4.5. Interference from several compounds (sodium, potassium, magnesium, barium, glucose, fructose, and sucrose) was estimated by potentiometric selectivity coefficients and considered negligible. Analyses of real samples were performed and considered accurate, with a relative error of +2.7% against an independent method.
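For illustration, the reported slope can be used to invert the logarithmic calibration line and recover a concentration from a measured potential. In the sketch below only the 76.4 mV/decade slope comes from the abstract; the reference point on the calibration line is a hypothetical assumption.

```python
# Hypothetical worked example: converting a measured potential back to a
# cysteine concentration via the reported calibration slope (76.4 mV/decade).
slope = 76.4                  # mV per decade of concentration (from the abstract)
E_ref, C_ref = 0.0, 5.0e-5    # assumed reference point on the calibration line

def concentration(E_mV):
    """Invert the calibration line E = E_ref + slope * log10(C / C_ref)."""
    return C_ref * 10 ** ((E_mV - E_ref) / slope)

# Two decades above C_ref (within the reported linearity range) -> 5.0e-3 M.
print(f"{concentration(152.8):.1e} M")
```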
Abstract:
A procedure for the determination of seven indicator PCBs in soils and sediments using microwave-assisted extraction (MAE) and headspace solid-phase microextraction (HS-SPME) prior to GC-MS/MS is described. Optimization of the HS-SPME was carried out for the most important parameters such as extraction time, sample volume and temperature. The adopted methodology has reduced consumption of organic solvents and analysis runtime. Under the optimized conditions, the method detection limit ranged from 0.6 to 1 ng/g when 5 g of sample was extracted, the precision on real samples ranged from 4 to 21% and the recovery from 69 to 104%. The proposed method, which included the analysis of a certified reference material in its validation procedure, can be extended to several other PCBs and used in the monitoring of soil or sediments for the presence of PCBs.
Abstract:
Electricity markets are complex environments, involving a large number of different entities with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments, so its application in electricity markets can prove to be a high-potential tool. This paper proposes a new scenario analysis algorithm, which includes the application of game theory, to evaluate and preview different scenarios and provide players with the ability to react strategically so as to exhibit the behavior that best fits their objectives. The model includes forecasts of competitor players' actions, used to build models of their behavior and thereby define the most probable scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed. Our use of game theory is intended to support one specific agent, not to achieve equilibrium in the market. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multi-agent electricity market simulator that models market players and simulates their operation in the market. The scenario analysis algorithm has been tested within MASCEM, and our experimental findings from a case study based on real data from the Iberian Electricity Market are presented and discussed.
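As a toy illustration of the scenario-analysis step, the Python sketch below selects the action that maximises expected payoff over forecast competitor scenarios. The actions, scenarios, probabilities, and payoffs are all hypothetical, and the actual algorithm's game-theoretic machinery is considerably richer.

```python
import numpy as np

# Hypothetical bid strategies for the supported player and forecast scenarios
# of competitor behaviour (the forecasts would come from behaviour models).
actions = ["low_bid", "mid_bid", "high_bid"]
scenario_prob = np.array([0.4, 0.6])     # P(rivals aggressive), P(rivals passive)

# payoff[i, j]: expected profit of action i under scenario j (hypothetical values).
payoff = np.array([[12.0,  8.0],
                   [15.0, 11.0],
                   [ 5.0, 20.0]])

expected = payoff @ scenario_prob        # expectation over forecast scenarios
best = actions[int(np.argmax(expected))] # support one agent, not an equilibrium
print(best, expected)
```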
Abstract:
Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution for understanding and analyzing the timing behaviour of actual systems. However, care must be taken with the obtained outputs, at the risk of producing results that lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions from simulation output, and the confidence which can be attributed to their mean values. This is the basis for a discussion on the applicability of such approaches to deriving confidence on the tail of distributions, where the worst case is expected to lie.
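The distinction between confidence on mean values and interest in the tail can be shown in a few lines of Python. In the sketch below the output data are synthetic (lognormal response times and a hypothetical deadline); a classical confidence interval for the mean says nothing about the deadline-miss probability estimated from the tail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulation output: response times from n independent replications.
n = 10_000
resp = rng.lognormal(mean=1.0, sigma=0.5, size=n)
deadline = 10.0

# Confidence on the mean (classical simulation output analysis)...
m, s = resp.mean(), resp.std(ddof=1)
ci = (m - 1.96 * s / np.sqrt(n), m + 1.96 * s / np.sqrt(n))

# ...versus the quantity of interest here: a near worst-case tail probability.
p_miss = np.mean(resp > deadline)   # estimated P(response time > deadline)
print(f"mean CI = ({ci[0]:.3f}, {ci[1]:.3f}), "
      f"estimated deadline-miss probability = {p_miss:.4f}")
```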
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building. But will Ethernet technologies really manage to replace traditional Fieldbus networks? To this question, Fieldbus fundamentalists often argue that the two technologies are not comparable. In fact, Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. Where are the higher layers that permit building real industrial applications? And, taking for granted that they are available, what is the impact of those protocols, mechanisms and application models on the overall performance of Ethernet-based distributed factory-floor applications? In this paper we provide some contributions that may pave the way towards reasonable answers to these questions.
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building. But will Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. Over the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. However, the majority of those works are restricted to the analysis of subsets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few interlinked research topics with the purpose of setting up a framework for the development of tools suitable for extracting temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
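To give a flavour of such discrete-event models, the Python sketch below simulates frames queueing at a single switch output port and reports a tail percentile rather than just the mean. The link rate, frame size, and arrival rate are illustrative assumptions; a real Ethernet/IP model would cover far more of the stack.

```python
import random

# Minimal discrete-event sketch: frames queueing at one switch output port.
random.seed(0)
LINK_RATE = 100e6                       # 100 Mbit/s full-duplex link (assumed)
service = (1000 * 8) / LINK_RATE        # fixed transmission time per 1000-B frame

t, busy_until, delays = 0.0, 0.0, []
for _ in range(100_000):
    t += random.expovariate(8000.0)     # Poisson frame arrivals, 8000 frames/s
    start = max(t, busy_until)          # wait if the port is still transmitting
    busy_until = start + service
    delays.append(busy_until - t)       # queueing + transmission delay

delays.sort()
print(f"mean = {sum(delays)/len(delays)*1e6:.1f} us, "
      f"99.9th percentile = {delays[int(0.999*len(delays))]*1e6:.1f} us")
```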
Abstract:
Profibus networks are widely used as the communication infrastructure for supporting distributed computer-controlled applications. Most of the time, these applications impose strict real-time requirements. Profibus-DP has gradually become the preferred Profibus application profile. It is usually implemented as a mono-master Profibus network, and is optimised for speed and efficiency. The aim of this paper is to analyse the real-time behaviour of this class of Profibus networks. Importantly, we develop a new methodology for evaluating the worst-case message response time in systems where high-priority and cyclic low-priority Profibus traffic coexist. The proposed analysis constitutes a powerful tool for guaranteeing, prior to runtime, the real-time behaviour of a distributed computer-controlled system based on a Profibus network, where the real-time traffic is supported either by high-priority or by cyclic poll Profibus messages.
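While the analysis in the paper is specific to Profibus and the coexistence of high-priority and cyclic low-priority traffic, its overall shape is a worst-case response-time bound. The Python sketch below shows the generic fixed-point iteration such analyses build on; it is a textbook schema, not the paper's Profibus-specific formulation, and the parameter values are hypothetical.

```python
from math import ceil

def worst_case_response(C, hp, blocking=0.0, limit=1e6):
    """Generic fixed-point iteration for the worst-case response time of a
    message with transmission time C, interfered by higher-priority cyclic
    streams hp = [(Ci, Ti), ...]."""
    R = C + blocking
    while R <= limit:
        R_new = C + blocking + sum(ceil(R / Ti) * Ci for Ci, Ti in hp)
        if R_new == R:
            return R            # converged on the worst-case response time
        R = R_new
    return None                 # no convergence within the search limit

# Hypothetical example (all times in ms): one message, two higher-priority streams.
print(worst_case_response(C=2.0, hp=[(1.0, 10.0), (3.0, 25.0)]))   # -> 6.0
```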
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within process control applications. They therefore constitute the foundation upon which real-time distributed computer-controlled systems can be implemented. P-NET is a fieldbus communication standard that uses a virtual token-passing medium access control mechanism. In this paper, pre-run-time schedulability conditions for supporting real-time traffic over P-NET networks are established. Essentially, formulae are provided to evaluate the upper bound of the end-to-end communication delay of P-NET messages. Using this upper bound, a feasibility test is then provided to check the timing requirements for accessing remote process variables. The paper also shows how P-NET network segmentation can significantly reduce the end-to-end communication delays for messages with stringent timing requirements.
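As a rough illustration of what such an upper bound looks like, the Python sketch below bounds the end-to-end delay by the worst-case wait for the virtual token plus the request and response transfer times. This is a deliberately simplified schema under the assumption that each of n masters holds the bus at most H per token visit; the paper derives tighter, protocol-specific bounds.

```python
# Simplified illustration, not the paper's bound: worst-case token wait for a
# P-NET master, assuming n masters each holding the bus at most max_hold per visit.
def token_wait_bound(n_masters, max_hold):
    return (n_masters - 1) * max_hold

def end_to_end_bound(n_masters, max_hold, request_time, response_time):
    """Token queueing + request transfer + remote response transfer."""
    return token_wait_bound(n_masters, max_hold) + request_time + response_time

# Hypothetical example (all times in seconds).
print(end_to_end_bound(n_masters=5, max_hold=2.0e-3,
                       request_time=0.5e-3, response_time=0.5e-3))
```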
Abstract:
Controller area network (CAN) is a fieldbus network suitable for small-scale distributed computer-controlled systems (DCCS), being appropriate for sending and receiving short real-time messages at speeds up to 1 Mbit/s. Several studies are available on how to guarantee the real-time requirements of CAN messages, providing pre-run-time schedulability conditions to guarantee the real-time communication requirements of DCCS traffic. It is usually considered that CAN guarantees atomic multicast properties by means of its extensive error detection/signaling mechanisms. However, there are some error situations in which messages can be delivered in duplicate, or delivered only by a subset of the receivers, leading to inconsistencies in the supported applications. In order to prevent such inconsistencies, a middleware for reliable communication in CAN is proposed, taking advantage of CAN's synchronous properties to minimize the runtime overhead. This middleware comprises a set of atomic multicast and consolidation protocols, upon which the reliable communication properties are guaranteed. The related timing analysis demonstrates that, in spite of the extra stack of protocols, the real-time properties of CAN are preserved, since the predictability of message transfer is guaranteed.
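Such timing analyses rest on bounding each frame's transmission time. The sketch below shows the standard worst-case CAN frame length formula including bit stuffing, a textbook result from CAN schedulability analysis rather than the middleware-specific analysis of the paper.

```python
from math import floor

def can_frame_bits(data_bytes, extended=False):
    """Worst-case length in bits of one CAN frame, including worst-case bit
    stuffing (standard formula from CAN response-time analysis)."""
    g = 54 if extended else 34      # header/trailer bits subject to stuffing
    return g + 8 * data_bytes + 13 + floor((g + 8 * data_bytes - 1) / 4)

# At 1 Mbit/s, an 8-byte standard-format frame occupies the bus for at most
# 135 bit times, i.e. 135 us -- the figure worst-case CAN analyses build on.
bits = can_frame_bits(8)
print(bits, "bits ->", bits, "us at 1 Mbit/s")
```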
Abstract:
LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with a smaller laxity, is known to be an optimal preemptive scheduling algorithm on a single-processor platform. However, little work has been done to illuminate its characteristics on multiprocessor platforms. In this paper, we identify the dynamics of laxity from the system's viewpoint and translate those dynamics into an LLF multiprocessor schedulability analysis. More specifically, we first characterize laxity properties under LLF scheduling, focusing on the laxity dynamics associated with a deadline miss. These laxity dynamics describe a lower bound, which leads to the deadline miss, on the number of tasks with certain laxity values at certain time instants. This lower bound is significant because it represents invariants for highly dynamic system parameters (laxity values). Since the laxity of a task depends on the amount of interference from higher-priority tasks, we can then derive a set of conditions to check whether a given task system can enter the laxity dynamics that lead towards a deadline miss. In this way, to the authors' best knowledge, we propose the first LLF multiprocessor schedulability test based on LLF's own laxity properties. We also develop an improved schedulability test that exploits slack values. We mathematically prove that the proposed LLF tests dominate the state-of-the-art EDZL tests. We also present simulation results to evaluate the schedulability performance of both the original and improved LLF tests in a quantitative manner.
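As a minimal illustration of the quantity the analysis revolves around, the Python sketch below computes laxities and applies the LLF rule on m processors. The task parameters are hypothetical, and the schedulability tests themselves are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: float    # absolute deadline
    remaining: float   # remaining execution time

def laxity(job, t):
    """Laxity at time t: slack left before the job can no longer meet its deadline."""
    return job.deadline - t - job.remaining

def llf_pick(jobs, t, m):
    """LLF on m processors: run the m jobs with the smallest laxity."""
    return sorted(jobs, key=lambda j: laxity(j, t))[:m]

# Hypothetical jobs: at t=0, laxities are a->6, b->2, c->10, so LLF runs b and a.
jobs = [Job("a", 10, 4), Job("b", 7, 5), Job("c", 12, 2)]
print([j.name for j in llf_pick(jobs, t=0, m=2)])
```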