915 results for Case-Based Reasoning Shells


Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is one to two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
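
To make the geometric idea concrete, the following Python sketch illustrates the kind of iterative projection described above: each new endmember is taken as the pixel that is extreme along a direction orthogonal to the subspace of the endmembers already found. This is only a minimal illustration assuming the linear mixing model and the presence of pure pixels; it omits VCA's subspace identification and SNR-dependent projection steps, and the function names and toy data are invented for the example.

```python
# Minimal sketch of the simplex-projection idea behind VCA-style endmember
# extraction (NOT the authors' full VCA implementation).
import numpy as np

def extract_endmembers(X, p, seed=0):
    """X: (bands, pixels) matrix of spectral vectors; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    bands, _ = X.shape
    E = np.zeros((bands, p))                       # estimated endmember signatures
    indices = []
    f = rng.standard_normal(bands)                 # initial random direction
    for i in range(p):
        if i > 0:
            Q, _ = np.linalg.qr(E[:, :i])          # basis of the endmembers found so far
            f = rng.standard_normal(bands)
            f = f - Q @ (Q.T @ f)                  # keep only the orthogonal component
        f = f / np.linalg.norm(f)
        proj = f @ X                               # project every pixel onto direction f
        k = int(np.argmax(np.abs(proj)))           # extreme of the projection = new endmember
        E[:, i] = X[:, k]
        indices.append(k)
    return E, indices

# Toy usage: 3 synthetic endmembers mixed with abundances drawn on the simplex.
true_E = np.abs(np.random.default_rng(1).standard_normal((50, 3)))
A = np.random.default_rng(2).dirichlet(np.ones(3), size=1000).T   # abundances sum to 1
X = true_E @ A
E_hat, idx = extract_endmembers(X, p=3)
print("selected pixel indices:", idx)
```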

Relevance:

30.00%

Publisher:

Abstract:

In the present study we report the results of an analysis, based on ribotyping, of Corynebacterium diphtheriae intermedius strains isolated from a 9-year-old child with clinical diphtheria and his 5 contacts. Quantitative analysis of RFLPs of rRNA was used to determine the relatedness of these 7 C. diphtheriae strains, providing supporting data for diphtheria epidemiology. We also tested those strains for toxigenicity, in vitro using Elek's gel diffusion method and in vivo using a cell culture method on cultured monkey kidney cells (VERO cells). The hybridization results revealed that the 5 C. diphtheriae strains isolated from contacts and the one isolated from the clinical case (nose case strain) had identical RFLP patterns with all 4 restriction endonucleases used, ribotype B. The genetic distance between this ribotype and ribotype A (throat case strain), which we initially assumed to be responsible for the illness of the patient, was 0.450, showing poor genetic correlation between these two ribotypes. We found no significant differences concerning toxin production using the cell culture method. In conclusion, the use of RFLPs of the rRNA gene was successful in detecting minor differences in closely related toxigenic C. diphtheriae intermedius strains and in providing information about the genetic relationships among them.

Relevance:

30.00%

Publisher:

Abstract:

Media content personalisation is a major challenge involving viewers as well as media content producer and distributor businesses. The goal is to provide viewers with media items aligned with their interests. Producers and distributors engage in item negotiations to establish the corresponding service level agreements (SLA). In order to address automated partner lookup and item SLA negotiation, this paper proposes the MultiMedia Brokerage (MMB) platform, which is a multiagent system that negotiates SLA regarding media items on behalf of media content producer and distributor businesses. The MMB platform is structured in four service layers: interface, agreement management, business modelling and market. In this context, there are: (i) brokerage SLA (bSLA), which are established between individual businesses and the platform regarding the provision of brokerage services; and (ii) item SLA (iSLA), which are established between producer and distributor businesses regarding the provision of media items. In particular, this paper describes the negotiation, establishment and enforcement of bSLA and iSLA, which occur at the agreement and negotiation layers, respectively. The platform adopts a pay-per-use business model where the bSLA define the general conditions that apply to the related iSLA. To illustrate this process, we present a case study describing the negotiation of a bSLA instance and several related iSLA instances. The latter correspond to the negotiation of the Electronic Program Guide (EPG) for a specific end viewer.
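
As an illustration of the bSLA/iSLA relationship and the pay-per-use model described above, the sketch below defines minimal data structures in which item agreements are registered under the general conditions of a brokerage agreement. This is not the MMB platform's actual data model; all class and field names (fee, daily item limit, etc.) are assumptions made for the example.

```python
# Illustrative sketch: a bSLA frames the iSLA negotiated under it (pay-per-use).
from dataclasses import dataclass, field
from typing import List

@dataclass
class BrokerageSLA:                 # bSLA: business <-> platform
    business_id: str
    fee_per_negotiation: float      # pay-per-use brokerage fee (assumed condition)
    max_items_per_day: int          # example of a general condition (assumed)

@dataclass
class ItemSLA:                      # iSLA: producer <-> distributor
    producer_id: str
    distributor_id: str
    item_id: str                    # e.g. one EPG slot
    price: float

@dataclass
class BrokerageSession:
    bsla: BrokerageSLA
    islas: List[ItemSLA] = field(default_factory=list)

    def register_item_sla(self, isla: ItemSLA) -> float:
        """Attach an iSLA negotiated under this bSLA and return the brokerage fee due."""
        if len(self.islas) >= self.bsla.max_items_per_day:
            raise ValueError("bSLA condition violated: daily item limit reached")
        self.islas.append(isla)
        return self.bsla.fee_per_negotiation

session = BrokerageSession(BrokerageSLA("distributor-A", fee_per_negotiation=0.5, max_items_per_day=100))
fee = session.register_item_sla(ItemSLA("producer-B", "distributor-A", "epg-slot-1", price=12.0))
print("brokerage fee charged:", fee)
```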

Relevance:

30.00%

Publisher:

Abstract:

Performance appraisal plays an increasingly important role in any organizational environment. In the trucking industry, drivers are the company's image, and for this reason it is important to develop and improve their performance and commitment to the company's goals. This paper aims to create a performance appraisal model for truck drivers, based on a multi-criteria decision aid methodology. The PROMETHEE and MMASSI methodologies were adapted, using the criteria employed for performance appraisal by the trucking company studied. The appraisal involved all the truck drivers, their supervisors and the company's Managing Director. The final output is a ranking of the drivers, based on their performance, for each of the scenarios considered. The results are to be used as a decision-making tool to allocate drivers to the domestic haul service.
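
For readers unfamiliar with the outranking step, the sketch below shows a generic PROMETHEE II ranking: pairwise preference degrees per criterion are aggregated with weights into positive and negative flows, and drivers are ranked by net flow. It is not the adapted PROMETHEE/MMASSI model of the paper; the criteria, weights, scores and the linear preference function are invented for illustration.

```python
# Generic PROMETHEE II sketch for ranking drivers on several appraisal criteria.
import numpy as np

def promethee_ii(scores, weights, p=1.0):
    """scores: (n_alternatives, n_criteria), higher is better; weights sum to 1."""
    n, _ = scores.shape
    pi = np.zeros((n, n))                       # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            pref = np.clip(d / p, 0.0, 1.0)     # linear preference function, threshold p
            pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)         # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # negative (entering) flow
    return phi_plus - phi_minus                 # net flow: rank from highest to lowest

drivers = ["D1", "D2", "D3", "D4"]
# columns: punctuality, fuel efficiency, safety record (illustrative 0-10 scores)
scores = np.array([[7, 8, 6], [9, 6, 7], [5, 7, 9], [8, 8, 8]], dtype=float)
weights = np.array([0.4, 0.3, 0.3])
net_flow = promethee_ii(scores, weights)
print([drivers[i] for i in np.argsort(-net_flow)])
```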

Relevance:

30.00%

Publisher:

Abstract:

A well-documented case of hepatic fascioliasis (HF), successfully treated with triclabendazole, is reported. The predominant clinical manifestations were fever, marked eosinophilia and abdominal pain. Triclabendazole was given as two single oral doses of 10 mg/kg each. Neither side effects nor clinical or parasitological relapses were seen after three months of follow-up. Based on this experience and a few other similar reports in the literature, triclabendazole might be a valid therapeutic alternative in the treatment of human fascioliasis.

Relevance:

30.00%

Publisher:

Abstract:

The integration of growing amounts of distributed generation in power systems, namely at the distribution network level, has been fostered by energy policies in several countries around the world, including in Europe. This intensive integration of distributed, non-dispatchable generation based on natural sources (including wind power) has caused several changes in the operation and planning of power systems and electricity markets. Sometimes the available non-dispatchable generation is higher than the demand; unless it is stored or used to supply additional demand, it is wasted. New policies and market rules, as well as new players, are needed in order to competitively integrate all the resources. The methodology proposed in this paper aims at maximizing the social welfare in a distribution network operated by a virtual power player that aggregates and manages the available energy resources. When facing a situation of excess non-dispatchable generation, including wind power, real-time pricing is applied in order to induce an increase in consumption so that wind curtailment is minimized. This method is especially useful when the actual resource availability differs significantly from the day-ahead forecast. The distribution network characteristics and concerns are addressed by including the network constraints in the optimization model. The proposed methodology has been implemented in the GAMS optimization tool and its application is illustrated in this paper using a real 937-bus distribution network with 20,310 consumers and 548 distributed generators, some of them non-dispatchable and with must-take contracts. The implemented scenario corresponds to a real day in the Portuguese power system.
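
The trade-off at the heart of this approach can be illustrated with a toy linear program: excess non-dispatchable (wind) generation is either absorbed by extra consumption, induced through real-time pricing, or curtailed at a penalty. The paper's actual model is far richer (full network constraints, many resource types, implemented in GAMS); the figures and variable names below are invented for illustration only.

```python
# Toy welfare/curtailment trade-off, solved with a generic LP solver.
import numpy as np
from scipy.optimize import linprog

excess_wind = 5.0                        # MW of generation above base demand (assumed)
flex = np.array([1.0, 2.0, 3.0])         # max extra consumption per consumer (MW, assumed)
benefit = np.array([30.0, 25.0, 20.0])   # consumer benefit per extra MW (EUR/MWh, assumed)
curtail_penalty = 50.0                   # penalty per curtailed MW (EUR/MWh, assumed)

# Decision variables: x_1..x_3 (extra consumption) and c (curtailed wind).
c_obj = np.concatenate([-benefit, [curtail_penalty]])    # linprog minimizes
A_eq = np.array([[1.0, 1.0, 1.0, 1.0]])                  # extra consumption + curtailment = excess
b_eq = np.array([excess_wind])
bounds = [(0, f) for f in flex] + [(0, None)]

res = linprog(c_obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("extra consumption per consumer:", res.x[:3], "curtailed wind:", res.x[3])
```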

Relevance:

30.00%

Publisher:

Abstract:

Progress in Industrial Ecology, An International Journal, no. 4(5), pp. 363-381

Relevance:

30.00%

Publisher:

Abstract:

Electricity markets are complex environments involving a large number of different entities, each with specific characteristics and objectives, making their decisions and interacting in a dynamic scene. Game theory has been widely used to support decisions in competitive environments; its application in electricity markets can therefore prove to be a high-potential tool. This paper proposes a new scenario analysis algorithm, which includes the application of game theory, to evaluate and preview different scenarios and to provide players with the ability to react strategically, exhibiting the behavior that best fits their objectives. The model includes forecasts of competitor players' actions, used to build models of their behavior and to define the most probable expected scenarios. Once the scenarios are defined, game theory is applied to support the choice of the action to be performed. Our use of game theory is intended to support one specific agent, not to achieve market equilibrium. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multi-agent electricity market simulator that models market players and simulates their operation in the market. The scenario analysis algorithm has been tested within MASCEM, and our experimental findings from a case study based on real data from the Iberian Electricity Market are presented and discussed.
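
The decision step described above, choosing one agent's action given probabilistic scenarios of the competitors' behavior, can be sketched as a simple expected-payoff calculation over a scenario/action payoff table. This is not MASCEM's actual algorithm; the actions, scenario probabilities and payoffs below are invented for illustration.

```python
# Pick the own action with the best expected payoff over forecast competitor scenarios.
import numpy as np

own_actions = ["bid_low", "bid_mid", "bid_high"]
scenario_probs = np.array([0.5, 0.3, 0.2])     # probabilities of the expected scenarios (assumed)

# payoff[i, j] = profit of own action i under competitor scenario j (EUR, assumed)
payoff = np.array([
    [120.0,  80.0,  40.0],   # bid_low
    [150.0,  60.0,  20.0],   # bid_mid
    [180.0,  30.0, -10.0],   # bid_high
])

expected = payoff @ scenario_probs             # expected payoff of each own action
best = own_actions[int(np.argmax(expected))]
print(dict(zip(own_actions, expected)), "->", best)
```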

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an electricity medium voltage (MV) customer characterization framework supported by knowledge discovery in databases (KDD). The main idea is to identify typical load profiles (TLP) of MV consumers and to develop a rule set for the automatic classification of new consumers. To achieve our goal, a methodology is proposed consisting of several steps: data pre-processing; application of several clustering algorithms to segment the daily load profiles; selection of the best partition, corresponding to the best consumers' segmentation, based on the assessment of several clustering validity indices; and finally, the construction of a classification model based on the resulting clusters. To validate the proposed framework, a case study using a real database of MV consumers is performed.
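
The pipeline outlined above can be sketched with common scikit-learn stand-ins: cluster daily load profiles, pick the partition favored by a validity index (silhouette here), then train a rule-based classifier on the resulting labels. The actual framework compares several clustering algorithms and several indices; the synthetic profiles below are placeholders for the real MV consumer data.

```python
# KDD-style sketch: clustering of daily load profiles + classifier for new consumers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
profiles = np.vstack([                                  # 96-point daily load profiles
    rng.normal(loc=level, scale=0.05, size=(100, 96))
    for level in (0.3, 0.6, 0.9)                        # three synthetic consumption levels
])

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 6):                                   # select the best partition
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)
    score = silhouette_score(profiles, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

# Classification model for assigning new consumers to the typical load profiles.
clf = DecisionTreeClassifier(max_depth=3).fit(profiles, best_labels)
print("chosen k:", best_k, "silhouette:", round(best_score, 3))
print("new consumer assigned to TLP:", clf.predict(profiles[:1])[0])
```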

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the first phase of the redevelopment of the Electric Vehicle Scenario Simulator (EVeSSi) tool. A new methodology to generate traffic demand scenarios for the Simulation of Urban MObility (SUMO) tool for urban traffic simulation is described. This methodology is based on a Portugal census database to generate a synthetic population for a given area under study. A realistic case study of a Portuguese city, Vila Real, is assessed. For this area the road network was created along with a synthetic population and public transport. The traffic results were obtained and an electric buses fleet was evaluated assuming that the actual fleet would be replaced in a near future. The energy requirements to charge the electric fleet overnight were estimated in order to evaluate the impacts that it would cause in the local electricity network.
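
The overnight charging estimate mentioned above amounts to a simple back-of-the-envelope calculation of the kind sketched below. All figures (fleet size, route length, consumption, charging window, charger efficiency) are assumptions made for illustration, not the values used in the EVeSSi case study.

```python
# Rough overnight charging estimate for an electric bus fleet (illustrative values).
fleet_size = 20                 # electric buses replacing the current fleet (assumed)
daily_km_per_bus = 180          # km driven per bus per day (assumed)
consumption_kwh_per_km = 1.3    # typical electric bus consumption (assumed)
charging_window_h = 8           # overnight charging period (assumed)
charger_efficiency = 0.90       # grid-to-battery efficiency (assumed)

energy_needed_kwh = fleet_size * daily_km_per_bus * consumption_kwh_per_km / charger_efficiency
average_power_kw = energy_needed_kwh / charging_window_h

print(f"Overnight energy: {energy_needed_kwh:.0f} kWh, average load: {average_power_kw:.0f} kW")
```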

Relevance:

30.00%

Publisher:

Abstract:

Demand response programs and models have been developed and implemented to improve the performance of electricity markets, taking full advantage of smart grids. Studying and addressing the consumers' flexibility and network operation scenarios makes it possible to design improved demand response models and programs. The methodology proposed in the present paper addresses the definition of demand response programs that consider demand shifting between periods in the presence of multi-period demand response events. The optimization model focuses on minimizing the network and resource operation costs for a Virtual Power Player. Quantum Particle Swarm Optimization is used to obtain solutions for the optimization model, which is applied to a large set of operation scenarios. The implemented case study illustrates the use of the proposed methodology to support the Virtual Power Player's decisions concerning the duration of each demand response event.
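
For readers unfamiliar with the metaheuristic named above, the following is a minimal quantum-behaved PSO (QPSO) loop applied to a placeholder cost function. The paper's model minimizes network and resource operation costs under constraints; here a simple sphere function stands in for that cost, and the parameter values are illustrative only.

```python
# Minimal QPSO sketch (Sun et al.-style update), with a placeholder cost function.
import numpy as np

def cost(x):                      # stand-in for the operation-cost model
    return float(np.sum(x ** 2))

def qpso(dim=5, particles=20, iters=200, beta=0.75, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-10, 10, size=(particles, dim))      # particle positions
    pbest = X.copy()                                      # personal best positions
    pbest_val = np.array([cost(x) for x in X])
    g = pbest[np.argmin(pbest_val)].copy()                # global best position
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                        # mean of personal bests
        for i in range(particles):
            phi = rng.random(dim)
            p = phi * pbest[i] + (1 - phi) * g            # local attractor
            u = np.maximum(rng.random(dim), 1e-12)
            sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)
            X[i] = p + sign * beta * np.abs(mbest - X[i]) * np.log(1.0 / u)
            v = cost(X[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i].copy(), v
        g = pbest[np.argmin(pbest_val)].copy()
    return g, cost(g)

best_x, best_cost = qpso()
print("best cost found:", round(best_cost, 6))
```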

Relevance:

30.00%

Publisher:

Abstract:

A population-based case-control design was used to investigate the association between migration, urbanisation and schistosomiasis in the Metropolitan Region of Recife, Northeast Brazil. A total of 1022 cases and 994 controls, aged 10 to 25, were selected. Natives and migrants coming from endemic areas have a similar risk of infection. On the other hand, the risk of infection of migrants from nonendemic areas seems to be related to the time elapsed since their arrival in São Lourenço da Mata; those who have been living in that urban area for 5 or more years have a risk of infection similar to that of the natives. Those arriving in the metropolitan region of Recife mostly migrate from the "zona da mata" and "zona do agreste" in the state of Pernambuco. Due to the changes in the sugar agro-industry and to the increase in the area used for cattle grazing, these workers were driven to villages and cities. The pattern of urbanisation created the conditions for the establishment of foci of transmission in São Lourenço da Mata.

Relevance:

30.00%

Publisher:

Abstract:

With the emergence of a global division of labour, the internationalisation of markets and cultures, the growing power of supranational organisations and the spread of new information technologies to every field of life, a different kind of society starts to appear, different from the industrial society and called by many 'the knowledge-based economy', emphasizing the importance of information and knowledge in many areas of work and in the organisation of societies. Despite the common trends of evolution, these transformations do not necessarily produce a convergence of national and regional social and economic structures, but rather a diversity of realities emerging from the relations between the economic and political context on one hand and the companies and their strategies on the other. In this sense, which future can we expect for the knowledge economy? How can we measure it, and why is it important? This paper will present some results from the European project WORKS – Work organisation and restructuring in the knowledge society (6th Framework Programme), focusing on the future visions and possible future trends in different countries, sectors and industries, given the empirical evidence of the case studies carried out in several European countries, and underlining the importance of foresight exercises to design policies, prevent uncontrolled risks and anticipate alternatives, leading to different 'knowledge economies' and not to the 'knowledge economy'.

Relevance:

30.00%

Publisher:

Abstract:

Based on a poster submitted to CONCORD 2011 - Conference on Corporate R&D: The dynamics of Europe's industrial structure and the growth of innovative firms, IPTS, Seville, 6 Oct. 2011, http://www.eventisimo.com/concord2011/recibido.html