967 results for structure based alignments


Relevance: 30.00%

Publisher:

Abstract:

IEEE Electron Device Letters, vol. 29, no. 9

Relevance: 30.00%

Publisher:

Abstract:

15th International Conference on Mixed Design of Integrated Circuits and Systems, pp. 177–180, Poznan, Poland

Relevance: 30.00%

Publisher:

Abstract:

The structure and nature of the crust underlying the Santos Basin-São Paulo Plateau System (SSPS), on the SE Brazilian margin, are discussed based on five wide-angle seismic profiles acquired during the Santos Basin (SanBa) experiment in 2011. Velocity models allow us to precisely divide the SSPS into six domains, from unthinned continental crust (Domain CC) to normal oceanic crust (Domain OC). A seventh domain (Domain D), a triangular-shaped region in the SE of the SSPS, is discussed by Klingelhoefer et al. (2014). Beneath the continental shelf, an approximately 100 km wide necking zone (Domain N) is imaged, where the continental crust thins abruptly from about 40 km to less than 15 km. Toward the ocean, most of the SSPS (Domains A and C) shows velocity ranges, velocity gradients, and a Moho interface characteristic of thinned continental crust. The central domain (Domain B) has, however, a very heterogeneous structure. While its southwestern part still exhibits extremely thinned (7 km) continental crust, its northeastern part depicts a 2–4 km thick upper layer (6.0–6.5 km/s) overlying an anomalous velocity layer (7.0–7.8 km/s) with no evidence of a Moho interface. This structure is interpreted as atypical oceanic crust, exhumed lower crust, or upper continental crust intruded by mafic material, overlying either altered mantle in the first two cases or intruded lower continental crust in the last case. The deep structure and V-shaped segmentation of the SSPS confirm that an initial episode of rifting occurred there obliquely to the general opening direction of the South Atlantic Central Segment.

Relevance: 30.00%

Publisher:

Abstract:

This paper deals with a hierarchical structure composed of an event-based supervisor at a higher level and two distinct proportional integral (PI) controllers at a lower level. The controllers, namely a fuzzy PI controller and a fractional-order PI controller, are applied to a variable-speed wind energy conversion system with a doubly fed induction generator. The event-based supervisor determines the operational state of the wind energy conversion system among four possible states: park, start-up, generating, or brake, and sends the state to the controllers at the lower level. In the start-up state, the controllers act only on the electric torque while the pitch angle is held at zero. In the generating state, the controllers must act on the pitch angle of the blades in order to keep the electric power around its nominal value, thus ensuring that the safety conditions required for integration in the electric grid are met. Comparisons between fuzzy PI and fractional-order PI pitch controllers applied to a wind turbine benchmark model are given, and simulation results obtained with Matlab/Simulink are shown. From the closed-loop point of view, the fuzzy PI controller allows a smoother response at the expense of a larger number of pitch-angle variations, implying frequent switches between operational states. On the other hand, the fractional-order PI controller allows an oscillatory response with less control effort, reducing switches between operational states.
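
The supervisor described above lends itself to a small state machine paired with a discrete PI loop. The Python sketch below illustrates that structure only; the thresholds, gains, and rated power are hypothetical placeholders, and the paper's fuzzy and fractional-order PI variants are not reproduced.

```python
from enum import Enum, auto

class State(Enum):
    PARK = auto()
    START_UP = auto()
    GENERATING = auto()
    BRAKE = auto()

def supervise(wind_speed, power, cut_in=3.0, cut_out=25.0, rated=2.0e6):
    """Event-based supervisor: choose the operational state from
    simple threshold rules (thresholds are assumed, not the paper's)."""
    if wind_speed < cut_in:
        return State.PARK
    if wind_speed > cut_out:
        return State.BRAKE
    return State.GENERATING if power >= 0.95 * rated else State.START_UP

class PI:
    """Plain discrete PI controller; the paper's fuzzy and
    fractional-order variants replace or reshape these two terms."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self._integral = 0.0

    def step(self, error):
        self._integral += error * self.dt
        return self.kp * error + self.ki * self._integral

# In the GENERATING state, the PI output would drive the blade pitch
# angle so as to keep the electric power around its nominal value.
pitch_pi = PI(kp=0.5, ki=0.1, dt=0.01)
```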

Relevance: 30.00%

Publisher:

Abstract:

In this brief, a read-only-memory-less structure for binary-to-residue number system (RNS) conversion modulo {2^n ± k} is proposed. This structure is based only on adders and constant multipliers. The brief is motivated by the existing {2^n ± k} binary-to-RNS converters, which are particularly inefficient for larger values of n. The experimental results obtained for 4n and 8n bits of dynamic range suggest that the proposed conversion structures are able to significantly improve the forward conversion efficiency, with an AT-metric improvement above 100% with respect to the related state of the art. Delay improvements of 2.17 times with only a 5% area increase can be achieved if a proper selection of the {2^n ± k} moduli is performed.
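
The hardware details are in the brief itself; as a rough software illustration of the identity such adder-based converters exploit (2^n ≡ ±k modulo 2^n ∓ k), the sketch below folds a wide binary word into a residue using only shifts, additions, and one small constant multiplication per step.

```python
def mod_pow2_minus_k(x, n, k):
    """Residue of x modulo m = 2**n - k, folding with 2**n ≡ k (mod m).
    Each pass uses a shift, a constant multiplication, and an addition,
    mirroring the adder/constant-multiplier structure of the converter."""
    m = (1 << n) - k
    mask = (1 << n) - 1
    while x >= (1 << n):
        x = (x >> n) * k + (x & mask)
    while x >= m:          # final conditional subtraction
        x -= m
    return x

def mod_pow2_plus_k(x, n, k):
    """Residue of x modulo m = 2**n + k, folding with 2**n ≡ -k (mod m)."""
    m = (1 << n) + k
    mask = (1 << n) - 1
    while x >= m:
        x = (x & mask) - (x >> n) * k
        while x < 0:       # a conditional add of m in hardware
            x += m
    return x

# Quick check against Python's built-in reduction:
assert mod_pow2_minus_k(100, 4, 3) == 100 % 13   # m = 2**4 - 3
assert mod_pow2_plus_k(100, 4, 3) == 100 % 19    # m = 2**4 + 3
```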

Relevance: 30.00%

Publisher:

Abstract:

Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering either direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous-silicon-based optical sensors and switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, thus resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge in a sequential way and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and its dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented.

Relevance: 30.00%

Publisher:

Abstract:

Background: Cardiovascular diseases and other non-communicable diseases are major causes of morbidity and mortality, responsible for 38 million deaths in 2012, 75% of which occurred in low- and middle-income countries. Most of these countries are facing a period of epidemiological transition, being confronted with an increased burden of non-communicable diseases, which challenge health systems mainly designed to deal with infectious diseases. With the adoption of the World Health Organization "Global Action Plan for the Prevention and Control of Non-communicable Diseases, 2013–2020", the national dimension of risk factors for non-communicable diseases must be reported on a regular basis. Angola has no national surveillance system for non-communicable diseases, and periodic population-based studies can help to overcome this lack of information. CardioBengo will collect information on risk factors, awareness rates, and prevalence of symptoms relevant to cardiovascular diseases, to assist decision makers in the implementation of prevention and treatment policies and programs. Methods: CardioBengo is designed as a research structure that comprises a cross-sectional component, providing baseline information, and the assembly of a cohort to follow up the dynamics of cardiovascular disease risk factors in the catchment area of the Dande Health and Demographic Surveillance System of the Health Research Centre of Angola, in Bengo Province, Angola. The World Health Organization STEPwise approach to surveillance questionnaires and procedures will be used to collect information on a representative sex- and age-stratified sample, aged between 15 and 64 years old. Discussion: CardioBengo will recruit the first population cohort in Angola designed to evaluate cardiovascular disease risk factors. Using the structures in place of the Dande Health and Demographic Surveillance System and a reliable methodology that generates results comparable with other regions and countries, this study will constitute a useful tool for the surveillance of cardiovascular diseases. As in all longitudinal studies, dropouts are a strong concern, but strategies like regular visits to selected participants and strong community involvement are in place to minimize these occurrences.

Relevance: 30.00%

Publisher:

Abstract:

Sandwich structures with soft cores are widely used in applications where a high bending stiffness is required without compromising the global weight of the structure, as well as in situations where good thermal and damping properties are important parameters to observe. As equivalent single-layer approaches are not the most adequate to realistically describe the kinematics, the stress distributions, and the dynamic behaviour of this type of sandwich, where shear deformations and the extensibility of the core can be very significant, layerwise models may provide better solutions. Additionally, and in connection with this multilayer approach, the selection of different shear deformation theories according to the nature of the material that constitutes the core and the outer skins can predict the sandwich behaviour more accurately. In the present work the authors consider the use of different shear deformation theories to formulate different layerwise models, implemented through kriging-based finite elements. The viscoelastic material behaviour associated with the sandwich core is modelled using the complex approach, and the dynamic problem is solved in the frequency domain. The outer elastic layers considered in this work may also be made from different nanocomposites. The performance of the models developed is illustrated through a set of test cases.
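
In miniature, the frequency-domain treatment of a complex-modulus core amounts to solving (K − ω²M)u = f with a complex stiffness matrix at each frequency. The sketch below shows only that step on a toy two-degree-of-freedom model; the loss factor, stiffnesses, and masses are made-up placeholders, and the kriging-based layerwise discretization itself is not reproduced.

```python
import numpy as np

# Toy 2-DOF stand-in for a sandwich section: an elastic "skin" spring
# chained to a viscoelastic "core" spring with complex stiffness
# k_core*(1 + i*eta) (hysteretic damping). All values are placeholders.
eta = 0.3                              # assumed core loss factor
k_skin, k_core = 1.0e6, 2.0e5          # N/m, illustrative
m1, m2 = 1.0, 0.5                      # kg, illustrative

M = np.diag([m1, m2])

def K():
    kc = k_core * (1 + 1j * eta)       # complex core stiffness
    return np.array([[k_skin + kc, -kc],
                     [-kc,          kc]])

f = np.array([1.0, 0.0])               # unit harmonic force on DOF 1
freqs_hz = np.linspace(1.0, 500.0, 1000)
# Frequency-domain solve: (K - w^2 M) u = f at each frequency.
response = [np.linalg.solve(K() - (2 * np.pi * fr) ** 2 * M, f)[1]
            for fr in freqs_hz]        # complex receptance at DOF 2
```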

Relevance: 30.00%

Publisher:

Abstract:

Solution enthalpies of 18-crown-6 have been obtained for a set of 14 protic and aprotic solvents at 298.15 K. The complementary use of Solomonov's methodology and a QSPR-based approach allowed the identification of the most significant solvent descriptors that model the interaction enthalpy contribution of the solution process, ΔH_int(A/S). Results were compared with data previously obtained for 1,4-dioxane. Although the interaction enthalpies of 18-crown-6 correlate well with those of 1,4-dioxane, the magnitude of the most relevant parameters, π* and β, is almost three times higher for 18-crown-6. This is rationalized in terms of the impact of the solute's volume on the solution processes of both compounds.
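
A QSPR fit of this kind is, at its core, a multiple linear regression of the interaction enthalpy on solvent descriptors. The sketch below shows that step with the Kamlet-Taft descriptors π* and β named in the abstract; the descriptor and enthalpy values are synthetic placeholders, not the paper's 14-solvent data set.

```python
import numpy as np

# Hypothetical descriptors for five solvents and made-up interaction
# enthalpies (kJ/mol); NOT the paper's data, for illustration only.
pi_star = np.array([0.54, 0.60, 0.75, 0.88, 1.00])  # dipolarity/polarizability
beta    = np.array([0.10, 0.40, 0.31, 0.76, 0.37])  # H-bond acceptor basicity
dH_int  = np.array([-20.0, -28.0, -31.0, -46.0, -38.0])

# Least-squares fit: dH_int ≈ c0 + c_pi * pi_star + c_beta * beta
X = np.column_stack([np.ones_like(pi_star), pi_star, beta])
(c0, c_pi, c_beta), *_ = np.linalg.lstsq(X, dH_int, rcond=None)
```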

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Mechanical Engineering, specialization in Design and Production

Relevance: 30.00%

Publisher:

Abstract:

Coupling five rigid or flexible bis(pyrazolato)-based tectons with late transition metal ions allowed us to isolate 18 coordination polymers (CPs). As assessed by thermal analysis, all of them possess remarkable thermal stability, with decomposition temperatures lying in the range of 340–500 °C. As demonstrated by N2 adsorption measurements at 77 K, their Langmuir specific surface areas span the rather vast range of 135–1758 m²/g, in agreement with the porous or dense polymeric architectures retrieved by powder X-ray diffraction structure solution methods. Two representative families of CPs, built up with either rigid or flexible spacers, were tested as catalysts in (i) the microwave-assisted solvent-free peroxidative oxidation of alcohols by t-BuOOH, and (ii) the peroxidative oxidation of cyclohexane to cyclohexanol and cyclohexanone by H2O2 in acetonitrile. Those CPs bearing the rigid spacer, concurrently possessing higher specific surface areas, are more active than the corresponding ones with the flexible spacer. Moreover, the two copper(I)-containing CPs investigated exhibit the highest efficiency in both reactions, leading selectively to a maximum product yield of 92% (and a TON of up to 1.5 × 10³) in the oxidation of 1-phenylethanol and of 11% in the oxidation of cyclohexane, the latter value being higher than that granted by the current industrial process.

Relevance: 30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
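
The iterative projection step described above can be sketched in a few lines. The code below is a simplified illustration in the spirit of VCA, not the authors' reference implementation: it assumes pure pixels exist and omits VCA's SNR-dependent projective/affine preprocessing and the signal-subspace estimation step.

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Simplified endmember extraction in the spirit of VCA.
    Y: (bands, pixels) matrix of spectral vectors; p: number of endmembers.
    Under the linear mixing model each column of Y is approximately
    E @ a + noise, with abundances a >= 0 summing to one."""
    rng = np.random.default_rng(seed)
    bands = Y.shape[0]
    E = np.zeros((bands, p))
    indices = []
    for i in range(p):
        if i == 0:
            Q = np.zeros((bands, 0))
        else:
            Q, _ = np.linalg.qr(E[:, :i])    # basis of found endmembers
        w = rng.standard_normal(bands)
        w -= Q @ (Q.T @ w)                   # direction orthogonal to them
        proj = w @ Y                         # project every pixel onto w
        j = int(np.argmax(np.abs(proj)))     # extreme of the projection
        E[:, i] = Y[:, j]                    # its spectrum is a new endmember
        indices.append(j)
    return E, indices
```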

Relevance: 30.00%

Publisher:

Abstract:

In the present study we report the results of an analysis, based on serotyping, multilocus enzyme electrophoresis (MEE), and ribotyping, of N. meningitidis serogroup C strains isolated from patients with meningococcal disease (MD) in Rio Grande do Sul (RS) and Santa Catarina (SC) States, Brazil, after the Center of Epidemiology Control of the Ministry of Health detected an increase in MD cases due to this serogroup in the last two years (1992-1993). We have demonstrated that the MD cases due to N. meningitidis serogroup C strains in RS and SC States occurring in the last 4 years were caused mainly by one clone of strains (ET 40), with isolates indistinguishable by serogroup, serotype, subtype, and even by ribotyping. A small number of cases that were not due to ET 40 strains represent closely related clones that are probably new lineages generated from the ET 40 clone, referred to as the ET 11A complex. We have also analyzed N. meningitidis serogroup C strains isolated in greater São Paulo in 1976 as representative of the first post-epidemic year in that region. The ribotyping method, as well as MEE, could provide useful information about the clonal characteristics of those isolates and also of the strains isolated in south Brazil. The strains from 1976 show more similarity to the current endemic strains than to the epidemic strains, based on ribotyping, sulfonamide sensitivity, and MEE results. In conclusion, serotyping with monoclonal antibodies (C:2b:P1.3), MEE (ET 11 and ET 11A complex), and ribotyping using the ClaI restriction enzyme (Rb2) were useful to characterize these epidemic strains of N. meningitidis related to the increased incidence of MD in different States of south Brazil. It is most probable that these N. meningitidis serogroup C strains have little or no genetic correlation with the 1971-1975 epidemic serogroup C strains. The genetic similarity of members of the ET 11 and ET 11A complex was confirmed by the ribotyping method using three restriction endonucleases.

Relevance: 30.00%

Publisher:

Abstract:

The positioning of consumers in power system operation has changed in recent years, namely due to the implementation of competitive electricity markets. Demand response is an opportunity for consumer participation in electricity markets. Smart grids can provide important support for the integration of demand response. The methodology proposed in the present paper aims to create an improved demand response program definition and remuneration scheme for aggregated resources. The consumers are aggregated into a certain number of clusters, each one corresponding to a distinct demand response program, according to the economic impact of the resulting remuneration tariff. The knowledge about the consumers is obtained from their demand price elasticity values. The illustrative case study included in the paper is based on a scenario with 218 consumers.
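
The aggregation step amounts to clustering consumers on their price elasticity and assigning one demand response program per cluster. The sketch below uses k-means as a plausible stand-in for that step; the paper does not specify the clustering algorithm, and the elasticity values and number of programs are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic price-elasticity values for 218 consumers (the case-study
# size); NOT the paper's data set.
rng = np.random.default_rng(42)
elasticity = rng.uniform(-1.2, -0.1, size=(218, 1))

n_programs = 4   # assumed number of demand response programs
labels = KMeans(n_clusters=n_programs, n_init=10,
                random_state=0).fit_predict(elasticity)
# Consumers in each cluster would share one DR program and its
# remuneration tariff.
```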

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Electrical and Computer Engineering