830 results for Gradient-based approaches


Relevance:

30.00%

Publisher:

Abstract:

Dynamic and distributed environments are hard to model since they suffer from unexpected changes, incomplete knowledge, and conflicting perspectives and, thus, call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs, be sensitive to communicated and perceived changes in the environment and, consequently, may have to drop current beliefs in the face of new findings or disregard any new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, but they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), which are accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate each belief (core or derived) with its set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, the current supporting justifications. Two major approaches to reason maintenance are used: single- and multiple-context reasoning systems. In single-context systems, each belief is associated with the beliefs that directly generated it, as in the justification-based TMS (JTMS) or the logic-based TMS (LTMS); in the multiple-context counterparts, each belief is associated with the minimal set of assumptions from which it can be inferred, as in the assumption-based TMS (ATMS) or the multiple belief reasoner (MBR).
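As a rough illustration of the single-context bookkeeping described above (not code from the paper), the sketch below shows how a JTMS-style module might record justifications and relabel beliefs when an assumption is retracted; the class and belief names are invented for illustration.

```python
# Minimal, illustrative JTMS-style bookkeeping (not the RMS surveyed in the paper).
# A belief is IN if it is an enabled assumption or has at least one justification
# whose antecedents are all IN; retracting an assumption relabels its dependents.

class JTMS:
    def __init__(self):
        self.assumptions = set()          # enabled core beliefs
        self.justifications = {}          # belief -> list of antecedent sets

    def add_assumption(self, belief):
        self.assumptions.add(belief)

    def retract_assumption(self, belief):
        self.assumptions.discard(belief)

    def justify(self, belief, antecedents):
        self.justifications.setdefault(belief, []).append(set(antecedents))

    def labels(self):
        """Compute the IN set by propagating support from the assumptions."""
        in_set = set(self.assumptions)
        changed = True
        while changed:
            changed = False
            for belief, justs in self.justifications.items():
                if belief not in in_set and any(ants <= in_set for ants in justs):
                    in_set.add(belief)
                    changed = True
        return in_set

tms = JTMS()
tms.add_assumption("bird(tweety)")
tms.justify("flies(tweety)", ["bird(tweety)"])
print("flies(tweety)" in tms.labels())   # True: supported by the assumption
tms.retract_assumption("bird(tweety)")
print("flies(tweety)" in tms.labels())   # False: support lost, belief goes OUT
```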

Relevance:

30.00%

Publisher:

Abstract:

Science of the Total Environment 405 (2008) 278–285

Relevance:

30.00%

Publisher:

Abstract:

Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering either direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous-silicon-based optical sensors and a couple of switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, thus resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge in a sequential way and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and their dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented. (C) 2014 Elsevier B.V. All rights reserved.
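The sequential optical readout described above can be pictured with a toy numerical sketch; the array size, conversion gain and reset behaviour below are illustrative assumptions, not device parameters from the paper.

```python
import numpy as np

# Toy model of the optically scanned readout: during X-ray exposure each sensing
# photodiode stores charge in proportion to the local scintillator light; a laser
# spot then addresses the switching diodes one position at a time, and the charge
# released at each step reconstructs the image line. All values are illustrative.

rng = np.random.default_rng(0)
true_line = rng.uniform(0.2, 1.0, size=64)        # incident X-ray intensity profile
stored_charge = true_line * 0.9                   # charge held on the internal capacitance

readout = np.zeros_like(stored_charge)
for i in range(stored_charge.size):               # laser scans positions sequentially
    readout[i] = stored_charge[i]                 # released charge routed to the amplifier
    stored_charge[i] = 0.0                        # pixel is reset after being read

print(np.allclose(readout, true_line * 0.9))      # True: the line profile is recovered
```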

Relevance:

30.00%

Publisher:

Abstract:

Sandwich structures with soft cores are widely used in applications where a high bending stiffness is required without compromising the overall weight of the structure, as well as in situations where good thermal and damping properties are important parameters to observe. As equivalent single-layer approaches are not the most adequate to realistically describe the kinematics, the stress distributions and the dynamic behaviour of this type of sandwich, where shear deformations and the extensibility of the core can be very significant, layerwise models may provide better solutions. Additionally, and in connection with this multilayer approach, selecting different shear deformation theories according to the nature of the material that constitutes the core and the outer skins can predict the sandwich behaviour more accurately. In the present work the authors consider the use of different shear deformation theories to formulate different layerwise models, implemented through kriging-based finite elements. The viscoelastic material behaviour associated with the sandwich core is modelled using the complex-modulus approach, and the dynamic problem is solved in the frequency domain. The outer elastic layers considered in this work may also be made from different nanocomposites. The performance of the models developed is illustrated through a set of test cases. (C) 2015 Elsevier Ltd. All rights reserved.
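As a rough illustration of the complex-modulus, frequency-domain treatment of the viscoelastic core mentioned above, the sketch below solves a two-degree-of-freedom toy system with a complex core stiffness; the matrices, loss factor and frequency range are assumed values, not the paper's layerwise sandwich model.

```python
import numpy as np

# Complex-modulus (hysteretic damping) sketch: the core stiffness is made complex,
# K_core* = K_core * (1 + i*eta), and the steady-state response is solved directly
# in the frequency domain, (K* - w^2 M) X = F. All numbers are illustrative only.

eta = 0.3                                                # assumed core loss factor
K_skins = np.array([[2.0, -1.0], [-1.0, 2.0]]) * 1e6     # elastic (skin) stiffness, N/m
K_core = np.array([[1.0, -1.0], [-1.0, 1.0]]) * 1e5      # viscoelastic core stiffness, N/m
M = np.diag([1.0, 1.0])                                  # mass matrix, kg
F = np.array([1.0, 0.0])                                 # unit force on DOF 1, N

freqs = np.linspace(1.0, 400.0, 800)                     # excitation frequencies, Hz
frf = []
for f in freqs:
    w = 2.0 * np.pi * f
    K_star = K_skins + (1.0 + 1j * eta) * K_core         # complex dynamic stiffness
    X = np.linalg.solve(K_star - (w ** 2) * M, F)        # steady-state displacement
    frf.append(abs(X[0]))

print(f"peak response {max(frf):.3e} m near {freqs[int(np.argmax(frf))]:.1f} Hz")
```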

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the highest integer lower than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
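As a rough illustration of the projection idea behind this extraction step (a simplified sketch, not the full VCA algorithm, which also includes signal-subspace identification and SNR-dependent projections), the code below iteratively projects synthetic linear mixtures onto directions orthogonal to the endmembers already found and takes the extreme of each projection; all data are synthetic.

```python
import numpy as np

# Simplified pure-pixel extraction in the spirit of the method described above:
# at each step, project all pixels onto a direction orthogonal to the span of the
# endmembers found so far and take the extreme pixel as the next endmember.

def extract_endmembers(R, p, seed=0):
    """R: (bands, pixels) data matrix; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    bands, _ = R.shape
    E = np.zeros((bands, 0))                      # endmembers found so far
    indices = []
    for _ in range(p):
        w = rng.standard_normal(bands)            # random direction
        if E.shape[1] > 0:                        # remove components in span(E)
            Q, _ = np.linalg.qr(E)
            w = w - Q @ (Q.T @ w)
        proj = w @ R                              # project every pixel onto w
        idx = int(np.argmax(np.abs(proj)))        # extreme of the projection
        indices.append(idx)
        E = np.column_stack([E, R[:, idx]])
    return E, indices

# Synthetic linear mixtures: 3 endmembers, abundances on the simplex, pure pixels present.
rng = np.random.default_rng(1)
M = rng.uniform(0.1, 1.0, size=(50, 3))           # endmember signatures (50 bands)
A = rng.dirichlet(np.ones(3), size=500).T         # abundance fractions, summing to 1
A[:, :3] = np.eye(3)                              # force one pure pixel per endmember
R = M @ A
E, idx = extract_endmembers(R, 3)
print("extracted pixel indices:", idx)            # should recover the pure pixels 0, 1, 2
```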

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Biochemistry from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. This dissertation was prepared under the bilateral agreement between the Universidade Nova de Lisboa and the Universidade de Vigo.

Relevance:

30.00%

Publisher:

Abstract:

Load forecasting has gradually become a major field of research in the electricity industry. It is extremely important for the electric sector in a deregulated environment, as it provides useful support to power system management. Accurate power load forecasting models are required for the operation and planning of a utility company, and they have received increasing attention from researchers in this field. Many mathematical methods have been developed for load forecasting. This work aims to develop and implement a method for short-term load forecasting (STLF) based on Holt-Winters exponential smoothing and an artificial neural network (ANN). One of the main contributions of this paper is the application of the Holt-Winters exponential smoothing approach to the forecasting problem; in addition, as an evaluation of past forecasting work, data mining techniques are also applied to short-term load forecasting. Both the ANN and the Holt-Winters exponential smoothing approaches are compared and evaluated.
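As an illustration of the Holt-Winters component, the following is a minimal additive (triple) exponential smoothing sketch for hourly load; the smoothing constants, season length and synthetic series are assumptions, not the paper's model or data.

```python
import math

# Minimal additive Holt-Winters (triple exponential smoothing) sketch for
# short-term load forecasting. Smoothing constants and the synthetic hourly
# load series are illustrative only.

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=24):
    """y: observed load; m: season length (e.g., 24 hours); returns point forecasts."""
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]

    for t in range(len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]

    return [level + (h + 1) * trend + season[(len(y) + h) % m] for h in range(horizon)]

# Synthetic hourly load with a daily cycle and a slight upward trend, two weeks of history.
history = [500 + 80 * math.sin(2 * math.pi * t / 24) + 0.1 * t for t in range(24 * 14)]
forecast = holt_winters_additive(history, m=24)
print([round(v, 1) for v in forecast[:6]])   # next six hourly load forecasts
```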

Relevance:

30.00%

Publisher:

Abstract:

Text based on the paper presented at the conference "Autonomous systems: inter-relations of technical and societal issues", held at Monte de Caparica (Portugal), Universidade Nova de Lisboa, on November 5th and 6th, 2009, and organized by IET-Research Centre on Enterprise and Work Innovation.

Relevance:

30.00%

Publisher:

Abstract:

Herpetic infections are common complications in AIDS patients. Their clinical features can be atypical, and antiviral chemotherapy is imperative. A rapid diagnosis can prevent incorrect approaches and treatment. The polymerase chain reaction is a rapid, specific and sensitive method for DNA amplification and for the diagnosis of infectious diseases, especially viral diseases. This approach has some advantages compared with conventional diagnostic procedures. Recently we reported a new PCR protocol for the rapid diagnosis of herpetic infections that suppresses the DNA extraction step. In this paper we present a case of herpetic whitlow rapidly diagnosed by HSV-1-specific polymerase chain reaction using the referred protocol.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering

Relevance:

30.00%

Publisher:

Abstract:

The current economic crisis has further intensified economists' concerns to identify new directions for the sustainable development of society. In this context, human capital has crystallised as the key variable of the creative economy and of the knowledge-based society. We have therefore directed the research underlying this paper towards identifying the most eloquent indicators of human capital for meeting the demands of the knowledge-based society and sustainable development, and towards a comprehensive analysis of human capital in the EU countries, namely a comparative analysis of Romania and Portugal. The methodology used in this paper is based on interdisciplinary triangulation, involving approaches from the perspectives of human resource management, economics and economic statistics. The research techniques used consist of content analysis and the investigation of secondary data from international organisations accredited in the field of this research, such as the United Nations Development Programme's Human Development Reports, the World Bank's World Development Reports, the International Labour Organisation, Eurostat, and the European Commission's Eurobarometer surveys and reports on human capital. The research results emphasise both similarities and differences between the two countries under comparative analysis and the main directions in which one has to invest for the development of human capital.

Relevance:

30.00%

Publisher:

Abstract:

Thesis submitted to the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia, for the degree of Doctor of Philosophy in Biochemistry

Relevance:

30.00%

Publisher:

Abstract:

Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in partial fulfillment of the requirements for the degree of Master of Science in Biotechnology

Relevance:

30.00%

Publisher:

Abstract:

Over the past decades several approaches to schedulability analysis have been proposed for both uni-processor and multi-processor real-time systems. Although different techniques are employed, very little has been put forward on the use of formal specifications, with the consequent possibility of misinterpretations or ambiguities in the problem statement. Using a logic-based approach to schedulability analysis in the design of hard real-time systems eases the synthesis of correct-by-construction procedures for both static and dynamic verification processes. In this paper we propose a novel approach to schedulability analysis based on a timed temporal logic with time durations. Our approach subsumes classical methods for uni-processor scheduling analysis over compositional resource models by providing the developer with counter-examples and by ruling out schedules that cause unsafe violations in the system. We also provide an example showing the effectiveness of our proposal.
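For reference, the sketch below implements the classical fixed-priority response-time analysis for uni-processor systems, one of the conventional schedulability tests that the logic-based approach above is positioned against; it is not the paper's temporal-logic formulation, and the task set is illustrative.

```python
import math

# Classical uni-processor response-time analysis for fixed-priority, preemptive tasks.
# Each task is (C, T): worst-case execution time and period, deadline = period,
# listed in priority order (highest first). All values are illustrative.

def response_time_analysis(tasks):
    """Return per-task worst-case response times, or None if some task misses its deadline."""
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # interference from all higher-priority tasks released during [0, r)
            interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next > t_i:          # deadline (== period) missed
                return None
            if r_next == r:           # fixed point reached
                results.append(r)
                break
            r = r_next
    return results

tasks = [(1, 4), (2, 6), (3, 12)]     # (C, T) in priority order
print(response_time_analysis(tasks))  # [1, 3, 10] -> all deadlines met
```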

Relevance:

30.00%

Publisher:

Abstract:

Astringency is an organoleptic property of beverages and food products resulting mainly from the interaction of salivary proteins with dietary polyphenols. It is of great importance to consumers, but the only effective way of measuring it involves trained sensory panellists, providing subjective and expensive responses. Concurrent chemical evaluations try to screen food astringency by means of polyphenol and protein precipitation procedures, but these are far from the real human astringency sensation, in which not all polyphenol–protein interactions lead to precipitation. Here, a novel chemical approach that tries to mimic protein–polyphenol interactions in the mouth is presented to evaluate astringency. A protein, acting as a salivary protein, is attached to a solid support to which the polyphenol binds (just as happens when drinking wine), with a subsequent colour alteration that is fully independent of the occurrence of precipitate. Employing this simple concept, Bovine Serum Albumin (BSA) was selected as the model salivary protein and used to cover the surface of silica beads. Tannic Acid (TA), employed as the model polyphenol, was allowed to interact with the BSA on the silica support, and its adsorption to the protein was detected by reaction with Fe(III) and subsequent colour development. Quantitative data on TA in the samples were extracted by colorimetric or reflectance studies over the solid materials. The colorimetric analysis was done by taking a regular picture with a digital camera, opening the image file in common software and extracting the colour coordinates from the HSL (Hue, Saturation, Lightness) and RGB (Red, Green, Blue) colour model systems; linear ranges were observed from 10.6 to 106.0 μmol L−1. The reflectance approach was based on the Kubelka–Munk response, showing a linear gain with concentrations from 0.3 to 10.5 μmol L−1. In both approaches, semi-quantitative estimation of TA was enabled by direct eye comparison. The correlation between the levels of adsorbed TA and the astringency of beverages was tested by using the assay to check the astringency of wines and comparing the results with the responses of sensory panellists. The results of the two methods correlated well. The proposed sensor has significant potential as a robust tool for the quantitative/semi-quantitative evaluation of astringency in wine.
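As an illustration of the colour-readout step described above (photographing the beads and extracting colour coordinates for a linear calibration), the sketch below averages the RGB values over a region of interest and applies a hypothetical hue-based calibration; the file name, region and calibration coefficients are placeholders, not values from the paper.

```python
import colorsys

import numpy as np
from PIL import Image

# Illustrative colour readout: average R, G, B over the region of interest in a
# photograph of the sensing beads, convert to hue/lightness/saturation, and read a
# tannic acid concentration from a hypothetical linear calibration against hue.

def mean_colour(path, box):
    """Mean R, G, B (0-1) over a rectangular region of interest (left, top, right, bottom)."""
    img = Image.open(path).convert("RGB")
    region = np.asarray(img.crop(box), dtype=float) / 255.0
    return region.reshape(-1, 3).mean(axis=0)

def tannic_acid_umol_per_l(rgb, slope, intercept):
    """Hypothetical linear calibration of concentration against the hue coordinate."""
    hue, _lightness, _saturation = colorsys.rgb_to_hls(*rgb)
    return slope * hue + intercept

# Placeholder file name, region of interest and calibration coefficients.
r, g, b = mean_colour("beads_photo.jpg", box=(100, 100, 200, 200))
print(tannic_acid_umol_per_l((r, g, b), slope=250.0, intercept=-5.0))
```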