995 results for Strongly Extreme Point


Relevance:

20.00%

Publisher:

Abstract:

Proceedings of the 1st R.C.A.N.S. Congress, Lisboa, October 1992

Relevance:

20.00%

Publisher:

Abstract:

This study performs a sustainability evaluation of biodiesel from the microalga Chlamydomonas sp. grown in 20% (v/v) brewery wastewater blended with pentose sugars (xylose, arabinose or ribose) resulting from the hydrolysis of brewer’s spent grains (BSG). The life cycle steps considered in the study are: microalgae cultivation, biomass processing and lipid extraction at the brewery site, and conversion to biodiesel at a dedicated external biofuel plant. Three sustainability indicators (LCEE, FER and GW) were considered and calculated using experimental data. Literature data were used, whenever necessary, to complement the life cycle data, thus allowing a more accurate sustainability evaluation. A comparative analysis of the biodiesel life cycle steps was also conducted, with the main goal of identifying which steps need to be improved. Results show that biomass processing, especially cell harvesting, microalgae cultivation, and lipid extraction are the main process bottlenecks. The influence of adding each pentose sugar to the cultivation media on the sustainability of the microalgae biodiesel is also analysed; the addition strongly influences biomass and lipid productivity. In particular, the addition of xylose is preferable in terms of lipid productivity, but from a sustainability point of view ribose is the best, though the difference from xylose is not significant. Nevertheless, the culture without pentose addition presents the best sustainability results.
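For concreteness, the sketch below shows how three energy/emissions indicators of this kind are typically computed from a life cycle inventory. The definitions are assumptions on our part (LCEE as energy output over total life-cycle energy input, FER as energy output over fossil energy input, GW as greenhouse-gas emissions per MJ of biodiesel), and every number is a placeholder, not the study's experimental data.

```python
# Minimal sketch of three common life-cycle sustainability indicators.
# Definitions are assumed, not taken from the study; all inputs are
# placeholders for illustration only.

def sustainability_indicators(e_biodiesel_mj, e_total_input_mj,
                              e_fossil_input_mj, ghg_g_co2eq):
    lcee = e_biodiesel_mj / e_total_input_mj   # life cycle energy efficiency
    fer = e_biodiesel_mj / e_fossil_input_mj   # fossil energy ratio
    gw = ghg_g_co2eq / e_biodiesel_mj          # g CO2-eq per MJ of fuel
    return lcee, fer, gw

# Hypothetical inventory for one functional unit (1 MJ of biodiesel):
print(sustainability_indicators(1.0, 1.8, 1.4, 55.0))
```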

Relevance:

20.00%

Publisher:

Abstract:

Presentation given at the LivingAll European Conference, Valencia, Spain, 15-16 January 2009

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
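To make the linear mixing model concrete, the following sketch simulates a mixed pixel as a convex combination of endmember signatures and recovers the abundances by constrained least squares. This is an illustrative toy, not the chapter's algorithm: the sum-to-one constraint is enforced with the common augmented-row heuristic on top of SciPy's non-negative least squares, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy linear mixing model: pixel y = M @ a + noise, with abundances
# a >= 0 and sum(a) = 1 (the simplex constraint discussed above).
bands, p = 50, 3
M = rng.random((bands, p))              # endmember signatures (columns)
a_true = np.array([0.6, 0.3, 0.1])      # abundance fractions of one pixel
y = M @ a_true + 1e-3 * rng.standard_normal(bands)

# Fully constrained estimate via the augmented-row heuristic: append a
# weighted row of ones so nnls also drives sum(a) toward 1.
delta = 1e3
M_aug = np.vstack([M, delta * np.ones(p)])
y_aug = np.append(y, delta)
a_hat, _ = nnls(M_aug, y_aug)
print(a_hat)  # close to a_true
```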
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
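As a bare-bones illustration of the skewer-counting idea behind PPI described above, the sketch below projects the data onto random directions and counts how often each pixel is an extreme; the pixels with the highest counts are taken as the purest. It omits the MNF preprocessing step, and the function name and parameters are illustrative, not a reference implementation.

```python
import numpy as np

def ppi_scores(X, n_skewers=1000, seed=0):
    """Count how often each pixel is an extreme of a random projection.

    X: (n_pixels, n_bands) spectral vectors.
    Returns an integer score per pixel; high scores suggest pure pixels.
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X), dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(X.shape[1])   # random direction
        proj = X @ skewer
        scores[np.argmax(proj)] += 1               # extreme on one side
        scores[np.argmin(proj)] += 1               # and on the other
    return scores
```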
In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
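The iterative orthogonal-projection step at the heart of the extraction stage can be sketched as below. This is a deliberately simplified toy that captures only the projection idea described above (random projection directions, no SNR-dependent preprocessing or affine transformation), not the published VCA implementation; the function name is ours.

```python
import numpy as np

def iterative_projection_extraction(X, p, seed=0):
    """Toy endmember extraction by iterative orthogonal projections.

    X: (n_pixels, n_bands) data; p: number of endmembers.
    At each step, project the data onto a direction orthogonal to the
    subspace spanned by the endmembers found so far and keep the pixel
    at the extreme of that projection.
    """
    rng = np.random.default_rng(seed)
    E = np.zeros((p, X.shape[1]))            # extracted endmember signatures
    for i in range(p):
        f = rng.standard_normal(X.shape[1])  # candidate direction
        if i > 0:                            # make f orthogonal to span(E)
            Q, _ = np.linalg.qr(E[:i].T)     # orthonormal basis of found set
            f -= Q @ (Q.T @ f)
        idx = np.argmax(np.abs(X @ f))       # extreme of the projection
        E[i] = X[idx]
    return E
```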

Relevance:

20.00%

Publisher:

Abstract:

The aim is to examine the temporal trends of hip fracture incidence in Portugal by sex and age group, and to explore the relation with anti-osteoporotic medication. From the National Hospital Discharge Database, we selected, from 1st January 2000 to 31st December 2008, 77,083 hospital admissions (77.4% women) caused by osteoporotic hip fractures (low energy; patients over 49 years of age), with diagnosis codes 820.x of the ICD-9-CM. The 2001 Portuguese population was used as the standard to calculate direct age-standardized incidence rates (ASIR) (per 100,000 inhabitants). Generalized additive and linear models were used to evaluate and quantify temporal trends of age-specific rates (AR), by sex. We identified 2003 as a turning point in the trend of the ASIR of hip fractures in women. After 2003, the ASIR in women decreased on average by 10.3 cases/100,000 inhabitants, 95% CI (−15.7 to −4.8), per 100,000 anti-osteoporotic medication packages sold. For women aged 65–69 and 75–79 we identified the same turning point. However, for women aged over 80, the year 2004 marked a change in the trend, from an increase to a decrease. Among the population aged 70–74 a linear decrease of the incidence rate (95% CI) was observed in both sexes, higher for women: −28.0% (−36.2 to −19.5) change vs −18.8% (−32.6 to −2.3). The abrupt turning point in the trend of the ASIR of hip fractures in women is compatible with an intervention, such as a medication. The trends differed according to sex and age group, but are compatible with the pattern of bisphosphonate sales.
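Direct age standardization, as used for the ASIR above, weights each age-specific rate by the share of that age group in a fixed standard population (here, the 2001 Portuguese population plays that role). A minimal sketch, with made-up numbers rather than the study's data, follows.

```python
import numpy as np

# Direct age standardization: weight each age-specific incidence rate by
# the share of that age group in a fixed standard population.
# All numbers below are placeholders for illustration.

cases = np.array([120, 340, 910])            # events per age group
person_years = np.array([5e5, 3e5, 1e5])     # population at risk
std_pop = np.array([0.55, 0.30, 0.15])       # standard population weights

age_specific_rates = cases / person_years * 1e5   # per 100,000
asir = np.sum(age_specific_rates * std_pop)       # age-standardized rate
print(round(asir, 1))
```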

Relevance:

20.00%

Publisher:

Abstract:

Since the middle of the first decade of this century, several authors have announced the dawn of a new Age, following the Information/Knowledge Age (1970-2005?). We are certainly living in a Shift Age (Houle, 2007), but no standard designation has been broadly adopted so far, and others, such as the Conceptual Age (Pink, 2005) or the Social Age (Azua, 2009), are only some of the proposals to name current times. Given the amount of information available nowadays, meaning making and understanding seem to be common features of this new age of change; change related to (i) how individuals and organizations engage with each other, (ii) the way we deal with technology, and (iii) how we engage and communicate within communities to create meaning, i.e., also social networking-driven changes. The Web 2.0 and the social networks have strongly altered the way we learn, live, work and, of course, communicate. Of all the possible dimensions of this change, we chose to focus on language – a taken-for-granted communication tool, used, translated and recreated in personal and geographical variants by the many users and authors of social networks and other online communities and platforms. In this paper, we discuss how the Web 2.0, and specifically social networks, have contributed to changes in the communication process and, in bi- or multilingual environments, to the evolution and free use of the so-called “international language”: English. Next, we discuss some of the impacts and challenges of this language diversity for international communication in the shift age of understanding and social networking, focusing on specialized networks. Then we point out some skills and strategies to avoid babelization and to build meaningful and effective content in mono- or multilingual networks, through the use of common and shared concepts and designations in social network environments. For this purpose, we propose a social and collaborative approach to terminology management, as a shared, strategic and sense-making tool for specialized communication in Web 2.0 environments.

Relevance:

20.00%

Publisher:

Abstract:

For uniformly asymptotically affine (uaa) Markov maps on train tracks, we prove the following type of rigidity result: if a topological conjugacy between them is (uaa) at a point in the train track, then the conjugacy is (uaa) everywhere. In particular, our methods apply to the case in which the domains of the Markov maps are Cantor sets. We also present similar statements for (uaa) and C^r Markov families. These results generalize similar ones of Sullivan and de Faria for C^r expanding circle maps with r > 1, and have useful applications to hyperbolic dynamics on surfaces and laminations.

Relevance:

20.00%

Publisher:

Abstract:

For diffeomorphisms on surfaces with basic sets, we show the following type of rigidity result: if a topological conjugacy between them is differentiable at a point in the basic set, then the conjugacy has a smooth extension to the surface. These results generalize similar ones of D. Sullivan, E. de Faria, and our own for one-dimensional expanding dynamics.

Relevance:

20.00%

Publisher:

Abstract:

Structural robustness is an emergent concept related to the structural response to damage. At present, robustness is not well defined and much controversy still surrounds the subject. Even though robustness has seen growing interest as a consequence of the catastrophic outcomes of extreme events, the concept can also be very useful when applied to more probable exposure scenarios, such as deterioration, among others. This paper intends to be a contribution to the definition of structural robustness, especially in the analysis of reinforced concrete structures subjected to corrosion. To this end, first of all, several proposed robustness definitions and indicators, as well as commonly misunderstood concepts, are analyzed and compared. On this basis, and aiming at a concept that can be applied to most types of structures and damage scenarios, a robustness definition is proposed. To illustrate the proposed concept, an example of corroded reinforced concrete structures is analyzed using nonlinear numerical methods based on a continuum strong discontinuities approach and isotropic damage models for concrete. Finally, the robustness of the presented example is assessed.

Relevance:

20.00%

Publisher:

Abstract:

1st ASPIC International Congress

Relevance:

20.00%

Publisher:

Abstract:

This work describes a novel use for the polymeric film poly(o-aminophenol) (PAP), which was made responsive to a specific protein. This was achieved through templated electropolymerization of aminophenol (AP) in the presence of the protein. The procedure involved adsorbing the protein on the electrode surface and thereafter electropolymerizing the aminophenol. Proteins embedded at the outer surface of the polymeric film were digested by proteinase K and then washed away, thereby creating vacant sites. The capacity of the templated film to specifically rebind protein was tested with myoglobin (Myo), a cardiac biomarker for ischemia. The films acted as biomimetic artificial antibodies and were produced on a gold (Au) screen printed electrode (SPE), as a step towards disposable sensors for point-of-care applications. Raman spectroscopy was used to follow the surface modification of the Au-SPE. The ability of the material to rebind Myo was measured by electrochemical techniques, namely electrochemical impedance spectroscopy (EIS) and square wave voltammetry (SWV). The devices displayed linear responses to Myo in EIS and SWV assays down to 4.0 and 3.5 μg/mL, respectively, with detection limits of 1.5 and 0.8 μg/mL. Good selectivity was observed in the presence of troponin T (TnT) and creatine kinase (CKMB) in SWV assays, and accurate results were obtained in applications to spiked serum. The sensor described in this work is a potential tool for screening Myo at the point of care due to its simplicity of fabrication, disposability, short response time, low cost, and good sensitivity and selectivity.
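As a side note on how detection limits of this kind are often derived, a common convention fits the linear range of the calibration curve and takes LOD = 3·σ(blank)/slope. The abstract does not state which method was used, so the sketch below is only an illustration with made-up numbers, not a reconstruction of the reported figures.

```python
import numpy as np

# Hypothetical calibration of an electrochemical Myo assay. The
# 3*sigma/slope rule is a common convention for the detection limit;
# all values below are placeholders for illustration.

conc = np.array([4.0, 8.0, 16.0, 32.0])      # standards, ug/mL (made up)
signal = np.array([1.9, 3.8, 7.7, 15.2])     # instrument response (made up)

slope, intercept = np.polyfit(conc, signal, 1)
sigma_blank = 0.05                           # st. dev. of blank replicates
lod = 3 * sigma_blank / slope                # detection limit, ug/mL
print(round(lod, 2))
```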

Relevance:

20.00%

Publisher:

Abstract:

A gold screen printed electrode (Au-SPE) was modified by merging molecular imprinting and self-assembled monolayer techniques for fast screening of cardiac biomarkers at the point of care (POC). For this purpose, myoglobin (Myo) was selected as the target analyte and its plastic antibody was imprinted over a glutaraldehyde (Glu)/cysteamine (Cys) layer on the gold surface. The imprinting effect was produced by growing a reticulated polymer of acrylamide (AAM) and N,N′-methylenebisacrylamide (NNMBA) around the Myo template, covalently attached to the biosensing surface. Electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV) studies were carried out at all chemical modification steps to confirm the surface changes on the Au-SPE. The analytical features of the resulting biosensor were studied by different electrochemical techniques, including EIS, square wave voltammetry (SWV) and potentiometry. The limits of detection ranged from 0.13 to 8 μg/mL. Only the potentiometric assays showed limits of detection covering the cut-off Myo levels. Quantitative information was also produced for Myo concentrations ≥0.2 μg/mL. The linear response of the biosensing device showed an anionic slope of ~70 mV per decade of molar concentration up to 0.3 μg/mL. The interference of coexisting species was tested and good selectivity was observed. The biosensor was successfully applied to biological fluids.

Relevance:

20.00%

Publisher:

Abstract:

Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, where hostile conditions such as the absence of Global Positioning System coverage, extreme lighting variations, and geometrically smooth surfaces may be expected. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve that, we combine state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data we were able to overcome the major problem of MonoSLAM implementations, known as scale factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted through the SIFT algorithm and inserted directly into the EKF mechanism according to the Inverse Depth Parametrization. Through 1-Point RANSAC (Random Sample Consensus), wrong frame-to-frame feature matches were rejected. The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
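To illustrate the role of the inertial measurements in the EKF prediction step, the sketch below runs a one-axis predict/update cycle in which accelerometer data drives the prediction and a position measurement (standing in for a visual observation) corrects it. This is a schematic reading of the description above with invented noise values, not the authors' MonoSLAM implementation.

```python
import numpy as np

# Schematic EKF predict/update for one axis: inertial (accelerometer)
# data enters the prediction, a position measurement corrects it.
# All matrices and noise levels are illustrative placeholders.

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition (pos, vel)
B = np.array([0.5 * dt**2, dt])              # control input from accel
H = np.array([[1.0, 0.0]])                   # we observe position only
Q = 1e-4 * np.eye(2)                         # process noise
R = np.array([[1e-2]])                       # measurement noise

def predict(x, P, accel):
    x = F @ x + B * accel                    # inertial data enters here
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = predict(x, P, accel=0.2)
x, P = update(x, P, z=np.array([0.001]))
print(x)
```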

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.