914 results for case-based reasoning (CBR)
Abstract:
This paper appears in the International Journal of Information and Communication Technology Education, edited by Lawrence A. Tomei. Copyright 2007, IGI Global, www.igi-global.com. Posted by permission of the publisher. URL: http://www.idea-group.com/journals/details.asp?id=4287.
Abstract:
Master's Thesis - Academic Year 2007/2008 - European Master's Degree in Human Rights and Democratization (E.MA) - European Inter-university Centre for Human Rights and Democratization (EIUC) - Faculdade de Direito, Universidade Nova de Lisboa (UNL)
Abstract:
Dynamic and distributed environments are hard to model since they suffer from unexpected changes, incomplete knowledge, and conflicting perspectives and, thus, call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs, be sensitive to communicated and perceived changes in the environment and, consequently, may have to drop current beliefs in the face of new findings or disregard any new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, but they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate each belief (core or derived) with its set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, its current supporting justifications. Two major approaches to reason maintenance are used: single- and multiple-context reasoning systems. In single-context systems, each belief is associated with the beliefs that directly generated it, as in the justification-based TMS (JTMS) and the logic-based TMS (LTMS); in their multiple-context counterparts, each belief is associated with the minimal set of assumptions from which it can be inferred, as in the assumption-based TMS (ATMS) and the multiple belief reasoner (MBR).
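As a minimal sketch of the single-context bookkeeping described above, the fragment below (Python; all class and method names are illustrative, not from the paper) stores beliefs with their justifications and relabels derived beliefs when an assumption is dropped:

```python
# Illustrative JTMS-style node store: beliefs carry the justifications that
# support them, and derived beliefs are relabelled when assumptions change.
# Names are illustrative; this is not the paper's implementation.

class Node:
    def __init__(self, name, is_assumption=False):
        self.name = name
        self.is_assumption = is_assumption   # core belief fed by the problem solver
        self.justifications = []             # each justification is a list of antecedent nodes
        self.believed = is_assumption        # current label: IN (True) or OUT (False)

class JTMS:
    def __init__(self):
        self.nodes = {}

    def node(self, name, is_assumption=False):
        return self.nodes.setdefault(name, Node(name, is_assumption))

    def justify(self, consequent, antecedents):
        """Problem solver reports: the antecedents jointly support the consequent."""
        c = self.node(consequent)
        c.justifications.append([self.node(a) for a in antecedents])
        self.propagate()

    def retract(self, name):
        """Drop an assumption and relabel all derived beliefs."""
        n = self.node(name)
        n.is_assumption = False
        n.believed = False
        self.propagate()

    def propagate(self):
        # Relabel from scratch so that only well-founded support counts.
        for n in self.nodes.values():
            if not n.is_assumption:
                n.believed = False
        changed = True
        while changed:
            changed = False
            for n in self.nodes.values():
                if n.believed or n.is_assumption:
                    continue
                if any(all(a.believed for a in just) for just in n.justifications):
                    n.believed = True
                    changed = True

tms = JTMS()
tms.node("bird", is_assumption=True)
tms.justify("flies", ["bird"])
tms.retract("bird")                  # "flies" loses its support and becomes OUT
print(tms.nodes["flies"].believed)   # False
```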
Abstract:
Belief revision is a critical issue in real-world DAI applications. A multi-agent system not only has to cope with the intrinsic incompleteness and the constant change of the available knowledge (as in the case of its stand-alone counterparts), but also has to deal with possible conflicts between the agents' perspectives. Each semi-autonomous agent, designed as a combination of a problem solver and an assumption-based truth maintenance system (ATMS), was enriched with improved capabilities: a distributed context management facility allowing the user to dynamically focus on the more pertinent contexts, and a distributed belief revision algorithm with two levels of consistency. The contributions of this work include: (i) a concise representation of the shared external facts; (ii) a simple and innovative methodology to achieve distributed context management; and (iii) a reduced inter-agent data exchange format. The different levels of consistency adopted were based on the relevance of the data under consideration: higher-relevance data (detected inconsistencies) was granted global consistency, while less relevant data (system facts) was assigned local consistency. These abilities are fully supported by the standard ATMS functionalities.
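For contrast with the single-context case, the following hedged sketch illustrates the multiple-context idea underlying the ATMS used above: each derived belief carries the minimal sets of assumptions (environments) under which it holds. It is a generic illustration, not the authors' distributed algorithm:

```python
# Generic ATMS-style label computation: each belief is associated with the
# minimal assumption sets (environments) from which it can be derived.
# This sketches the general ATMS mechanism only, not the paper's distributed
# context management or two-level consistency scheme.
from itertools import product

def minimize(envs):
    """Keep only minimal environments (discard supersets of other environments)."""
    return {e for e in envs if not any(other < e for other in envs)}

class ATMS:
    def __init__(self):
        self.labels = {}                         # belief -> set of frozensets of assumptions

    def add_assumption(self, name):
        self.labels[name] = {frozenset([name])}

    def justify(self, consequent, antecedents):
        """The consequent follows from the conjunction of the antecedents."""
        combos = product(*(self.labels[a] for a in antecedents))
        new_envs = {frozenset().union(*combo) for combo in combos}
        old = self.labels.get(consequent, set())
        self.labels[consequent] = minimize(old | new_envs)

atms = ATMS()
for a in ("A1", "A2", "A3"):
    atms.add_assumption(a)
atms.justify("f", ["A1", "A2"])
atms.justify("f", ["A1", "A2", "A3"])
print(atms.labels["f"])   # {frozenset({'A1', 'A2'})}; the non-minimal environment is dropped
```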
Abstract:
The Tagus estuary is bordered by the largest metropolitan area in Portugal, that of the capital city of Lisbon. It has suffered the impact of several major tsunamis in the past, as shown by a recent revision of the catalogue of tsunamis that struck the Portuguese coast over the past two millennia. Hence, the exposure of populations and infrastructure established along the riverfront is a critical concern for the civil protection services. The main objectives of this work are to determine critical inundation areas in Lisbon and to quantify the associated severity through a simple index derived from the local maximum of momentum flux per unit mass and width. The methodology is based on the mathematical modelling of a tsunami propagating along the estuary, resembling the one that occurred on 1 November 1755 following the 8.5 Mw Great Lisbon Earthquake. The simulation tool employed was STAV-2D, a shallow-flow solver coupled with conservation equations for fine solid phases, now featuring the novelty of discrete Lagrangian tracking of large debris. Different sets of initial conditions were studied, combining distinct tidal, atmospheric and fluvial scenarios, so that the civil protection services were provided with comprehensive information to devise public warning and alert systems and post-event mitigation interventions. For the most severe scenario, the results show a maximum inundation extent of 1.29 km at the Alcântara valley and water depths reaching nearly 10 m across Lisbon's riverfront.
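Assuming the usual shallow-flow form of the momentum flux per unit mass and width, h·v² (depth times squared velocity), a severity classification of the kind described above could be sketched as follows; the thresholds are purely illustrative and the paper's actual index classes are not reproduced:

```python
# Hedged sketch: local maximum of momentum flux per unit mass and width for a
# shallow flow, taken here as h * v**2. Threshold values are illustrative only.

def momentum_flux_per_unit_mass_and_width(depth_m, velocity_ms):
    return depth_m * velocity_ms ** 2            # units: m^3 / s^2

def severity_class(max_flux, thresholds=(1.0, 10.0, 100.0)):
    """Map the local maximum of h*v^2 onto an ordinal severity class."""
    for level, limit in enumerate(thresholds):
        if max_flux < limit:
            return level
    return len(thresholds)

# Example: a 2 m deep flow moving at 4 m/s gives h*v^2 = 32 m^3/s^2 -> class 2
print(severity_class(momentum_flux_per_unit_mass_and_width(2.0, 4.0)))
```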
Abstract:
Given the significant impact that cultural events may have on local communities and their inherent organizational complexity, it is important to understand their specificities. Most of the time, cultural events disregard marketing, and marketing is often distant from art. Thus, an analysis from an inside perspective might bring significant returns to the organization of such an event. This paper considers the three editions (2011, 2012 and 2013) of a cultural event, Noc Noc, organized by a local association in the city of Guimarães, Portugal. Its format is based on analogous events, as Noc Noc intends to convert everyday spaces (homes, commercial outlets and a number of other buildings) into cultural spaces, processed and transformed by artists, hosts and audiences. By interviewing a sample of 20 people who have hosted this cultural event, sometimes doubling as artists, and by experiencing its three editions, this paper illustrates how the internal public understands this particular cultural event, analyzing specifically their motivations, ways of acting and participating, as well as their relationship with the public, with the organization of the event and with art in general. The results support that artists' and hosts' motivations, as well as their views of this particular cultural event, must be identified at a timely and appropriate moment in order to keep them participating, since low-budget cultural events such as this one may have a key role in small-scale cities.
Abstract:
In this paper, a new method for self-localization of mobile robots, based on a PCA positioning sensor designed to operate in unstructured environments, is proposed and experimentally validated. The proposed PCA extension is able to compute the eigenvectors from a set of signals corrupted by missing data. The sensor package considered in this work contains a 2D depth sensor pointed upwards at the ceiling, providing depth images with missing data. The resulting positioning sensor is then integrated into a Linear Parameter Varying mobile robot model to obtain a self-localization system, based on linear Kalman filters, with globally stable position error estimates. A study consisting of adding synthetic random corrupted data to the captured depth images revealed that this extended PCA technique is able to reconstruct the signals with improved accuracy. The self-localization system obtained is assessed in unstructured environments and the methodologies are validated even under varying illumination conditions.
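One common way to obtain eigenvectors from observations with missing entries is to build a pairwise-complete covariance estimate and decompose it; the sketch below illustrates that generic approach (it is not the specific PCA extension proposed in the paper, and all names are illustrative):

```python
# Illustrative PCA from data with missing entries (NaNs), using a
# pairwise-complete covariance estimate of the depth-image vectors.
import numpy as np

def pca_with_missing(X, n_components):
    """X: (n_samples, n_features) array with NaN marking missing depth pixels."""
    mean = np.nanmean(X, axis=0)
    Xc = X - mean                                        # centre, ignoring missing values
    mask = ~np.isnan(Xc)
    Xz = np.where(mask, Xc, 0.0)                         # zero-fill missing entries
    counts = mask.T.astype(float) @ mask.astype(float)   # pairwise-complete sample counts
    cov = (Xz.T @ Xz) / np.maximum(counts - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)               # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order]

# Example with synthetic corrupted data (10% of entries missing)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
X[rng.random(X.shape) < 0.1] = np.nan
mean, components = pca_with_missing(X, n_components=3)
print(components.shape)                                  # (16, 3)
```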
Abstract:
Glass fibre-reinforced plastics (GFRP) have been considered inherently difficult to recycle due to both the cross-linked nature of thermoset resins, which cannot be remoulded, and the complex composition of the composite itself. Presently, most GFRP waste is landfilled, leading to negative environmental impacts and additional costs. With an increasing awareness of environmental matters and the subsequent desire to save resources, recycling would convert an expensive waste disposal problem into a profitable reusable material. In this study, efforts were made to recycle ground GFRP waste, proceeding from pultrusion production scrap, into new and sustainable composite materials. For this purpose, GFRP waste recyclates were incorporated into polyester-based mortars as fine aggregate and filler replacements at different load contents and particle size distributions. The potential recycling solution was assessed through the mechanical behaviour of the resulting GFRP waste modified polymer mortars. Results revealed that GFRP waste filled polymer mortars present improved flexural and compressive behaviour over unmodified polyester-based mortars, thus indicating the feasibility of reusing GFRP industrial waste in concrete-polymer composite materials.
Abstract:
Paper developed for the unit “Innovation Economics and Management” of the PhD programme in Technology Assessment at the Universidade Nova de Lisboa in 2009-10 under the supervision of Prof. Maria Luísa Ferreira
Abstract:
Dissertation submitted in fulfilment of the requirements for the degree of Doctor in Electrical Engineering, speciality of Digital Systems, at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
This paper presents the results of an exploratory study on knowledge management in Portuguese organizations. The study was based on a survey sent to one hundred of the main Portuguese organizations, in order to learn about their current practices regarding knowledge management systems (KMS) usage and intellectual capital (IC) measurement. With this study, we attempted to understand which tools are mainly used to support KM processes and activities in these organizations, and which metrics organizations use to measure their knowledge assets.
Abstract:
Performance evaluation increasingly assumes a more important role in any organizational environment. In the transport sector, drivers are the company's image, and for this reason it is important to develop and improve their performance and their commitment to the company's goals. This evaluation can be used to motivate drivers to improve their performance and to identify training needs. This work aims to create a driver performance appraisal model based on a multi-criteria decision aid methodology. The MMASSI (Multicriteria Methodology to Support Selection of Information Systems) methodology was adapted by using a template that supports the evaluation according to the freight transportation company under study. The evaluation process involved all drivers (the collaborators being evaluated), their supervisors and the company management. The final output is a ranking of the drivers, based on their performance, for each of the scenarios used.
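The core of a multi-criteria appraisal that yields a per-scenario ranking can be sketched as a weighted aggregation of criterion scores; the criteria, weights and scores below are hypothetical, and the actual MMASSI template and aggregation rules are not reproduced here:

```python
# Hedged sketch of a multi-criteria driver appraisal producing a ranking for one
# scenario. Criteria, weights and scores are hypothetical placeholders.

def rank_drivers(scores, weights):
    """scores: {driver: {criterion: value}}; weights: {criterion: weight} for one scenario."""
    totals = {
        driver: sum(weights[criterion] * value for criterion, value in per_criterion.items())
        for driver, per_criterion in scores.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

scenario_weights = {"punctuality": 0.4, "fuel_efficiency": 0.3, "safety": 0.3}
driver_scores = {
    "driver_a": {"punctuality": 4, "fuel_efficiency": 3, "safety": 5},
    "driver_b": {"punctuality": 5, "fuel_efficiency": 4, "safety": 3},
}
print(rank_drivers(driver_scores, scenario_weights))   # highest weighted score first
```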
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known and, then, hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-square sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
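The endmember-extraction loop described above (project the data onto a direction orthogonal to the subspace spanned by the endmembers already found, and keep the pixel at the extreme of the projection) can be sketched as follows; this simplified version omits the dimensionality reduction and SNR-dependent projections of the full VCA algorithm, and all names are illustrative:

```python
# Simplified sketch of a VCA-style pure-pixel extraction loop: at each step,
# project every spectral vector onto a direction orthogonal to the subspace
# spanned by the endmembers already found, and take the extreme of the
# projection as the next endmember.
import numpy as np

def vca_like_extraction(R, p, seed=0):
    """R: (bands, pixels) matrix of spectral vectors; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    bands, _ = R.shape
    E = np.zeros((bands, p))                   # endmember signatures found so far
    indices = []
    for i in range(p):
        w = rng.normal(size=bands)             # random direction
        if i > 0:
            # Remove the component of w lying in span(E[:, :i]) via least squares
            A = E[:, :i]
            w = w - A @ np.linalg.lstsq(A, w, rcond=None)[0]
        v = w / np.linalg.norm(w)
        proj = v @ R                           # projection of every pixel onto v
        idx = int(np.argmax(np.abs(proj)))     # extreme of the projection
        E[:, i] = R[:, idx]
        indices.append(idx)
    return E, indices
```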
Abstract:
In the present study we report the results of an analysis, based on ribotyping, of Corynebacterium diphtheriae intermedius strains isolated from a 9-year-old child with clinical diphtheria and his 5 contacts. Quantitative analysis of RFLPs of rRNA was used to determine the relatedness of these 7 C. diphtheriae strains, providing supporting data for diphtheria epidemiology. We have also tested those strains for toxigenicity, in vitro by using Elek's gel diffusion method and in vivo by using the cell culture method on cultured monkey kidney cells (VERO cells). The hybridization results revealed that the 5 C. diphtheriae strains isolated from contacts and the one isolated from the clinical case (nose case strain) had identical RFLP patterns with all 4 restriction endonucleases used, ribotype B. The genetic distance between this ribotype and ribotype A (throat case strain), which we initially assumed to be responsible for the illness of the patient, was 0.450, showing poor genetic correlation between these two ribotypes. We found no significant differences concerning toxin production using the cell culture method. In conclusion, the use of RFLPs of the rRNA gene was successful in detecting minor differences in closely related toxigenic C. diphtheriae intermedius strains and in providing information about the genetic relationships among them.
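Band-sharing coefficients such as the Dice coefficient are a standard way to quantify RFLP pattern similarity, with distance commonly taken as one minus the coefficient; the sketch below uses that convention with hypothetical band lists (the coefficient actually used by the authors is not stated here):

```python
# Hedged sketch: Dice band-sharing coefficient between two RFLP banding patterns,
# with distance = 1 - similarity. The fragment sizes below are hypothetical.

def dice_similarity(bands_a, bands_b):
    shared = len(set(bands_a) & set(bands_b))
    return 2 * shared / (len(set(bands_a)) + len(set(bands_b)))

ribotype_a = {2.1, 3.4, 5.0, 7.2, 9.8}          # hypothetical fragment sizes (kb)
ribotype_b = {2.1, 3.4, 6.1, 8.5, 9.8, 11.0}
similarity = dice_similarity(ribotype_a, ribotype_b)
print(round(1 - similarity, 3))                 # genetic distance for these example patterns
```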
Abstract:
Media content personalisation is a major challenge involving viewers as well as media content producer and distributor businesses. The goal is to provide viewers with media items aligned with their interests. Producers and distributors engage in item negotiations to establish the corresponding service level agreements (SLA). In order to address automated partner lookup and item SLA negotiation, this paper proposes the MultiMedia Brokerage (MMB) platform, a multiagent system that negotiates SLA regarding media items on behalf of media content producer and distributor businesses. The MMB platform is structured in four service layers: interface, agreement management, business modelling and market. In this context, there are: (i) brokerage SLA (bSLA), which are established between individual businesses and the platform regarding the provision of brokerage services; and (ii) item SLA (iSLA), which are established between producer and distributor businesses regarding the provision of media items. In particular, this paper describes the negotiation, establishment and enforcement of bSLA and iSLA, which occur at the agreement and negotiation layers, respectively. The platform adopts a pay-per-use business model where the bSLA define the general conditions that apply to the related iSLA. To illustrate this process, we present a case study describing the negotiation of a bSLA instance and several related iSLA instances. The latter correspond to the negotiation of the Electronic Program Guide (EPG) for a specific end viewer.
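A minimal data-structure sketch of the two agreement types described above (bSLA between a business and the platform, iSLA between producer and distributor about a media item) could look like this; the field names are illustrative and the MMB platform's actual schema is not reproduced:

```python
# Illustrative data structures for the two agreement types: brokerage SLA (bSLA)
# and item SLA (iSLA). Field names are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BrokerageSLA:                      # bSLA: business <-> platform
    business_id: str
    fee_per_use: float                   # pay-per-use business model
    general_terms: dict = field(default_factory=dict)   # conditions inherited by related iSLA

@dataclass
class ItemSLA:                           # iSLA: producer <-> distributor
    producer_id: str
    distributor_id: str
    item_id: str                         # e.g. an EPG entry negotiated for an end viewer
    price: float
    governed_by: Optional[BrokerageSLA] = None   # bSLA whose general conditions apply

bsla = BrokerageSLA("producer_42", fee_per_use=0.02, general_terms={"currency": "EUR"})
isla = ItemSLA("producer_42", "distributor_7", "epg_item_001", price=1.5, governed_by=bsla)
```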