987 results for spectral region selection
Abstract:
Materials selection is a matter of great importance in engineering design, and software tools are valuable to inform decisions in the early stages of product development. However, when a set of alternative materials is available for the different parts a product is made of, the question of which optimal material mix to choose for a group of parts is not trivial. The engineer/designer therefore goes about this in a part-by-part procedure. Optimizing each part per se can lead to a globally sub-optimal solution from the product point of view. An optimization procedure that deals with products with multiple parts, each with discrete design variables, and that is able to determine the optimal solution under different objectives is therefore needed. To solve this multiobjective optimization problem, a new routine based on the Direct MultiSearch (DMS) algorithm is created. Results from the Pareto front can help the designer align his/her selection of a complete set of materials with the product attribute objectives, depending on the relative importance of each objective.
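As a hedged illustration of the kind of discrete multiobjective problem described (not the authors' DMS routine), the sketch below enumerates material-per-part assignments and extracts the Pareto front for two objectives, mass and cost; the materials, factors and part volumes are invented placeholders.

```python
from itertools import product

# Hypothetical candidate materials: (name, density factor, cost factor).
materials = [("steel", 3.0, 1.0), ("aluminium", 1.5, 2.0), ("cfrp", 0.8, 6.0)]
part_volumes = [2.0, 1.0, 0.5]  # three parts, arbitrary units

def objectives(mix):
    """Total mass and total cost of one material-per-part assignment."""
    mass = sum(v * m[1] for v, m in zip(part_volumes, mix))
    cost = sum(v * m[2] for v, m in zip(part_volumes, mix))
    return mass, cost

def pareto_front(points):
    """Keep the points not dominated by any other point (minimization)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

all_mixes = list(product(materials, repeat=len(part_volumes)))
front = pareto_front([objectives(mix) for mix in all_mixes])
print(sorted(front))  # mass/cost trade-off curve over the 27 discrete designs
```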
Abstract:
Let $F$ be a field with at least four elements. In this paper, we identify all the pairs $(A, B)$ of $n \times n$ nonsingular matrices over $F$ satisfying the following property: for every monic polynomial $f(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$ over $F$, with a root in $F$ and $a_0 = (-1)^n \det(AB)$, there are nonsingular matrices $X, Y \in F^{n \times n}$ such that $XAX^{-1}YBY^{-1}$ has characteristic polynomial $f(x)$.
Abstract:
For an interval map, the poles of the Artin-Mazur zeta function provide topological invariants which are closely connected to topological entropy. It is known that for a time-periodic nonautonomous dynamical system $F$ with period $p$, the $p$-th power $[\zeta_F(z)]^p$ of its zeta function is meromorphic in the unit disk. Unlike in the autonomous case, where the zeta function $\zeta_f(z)$ only has poles in the unit disk, in the $p$-periodic nonautonomous case $[\zeta_F(z)]^p$ may have zeros. In this paper we introduce the concept of spectral invariants of $p$-periodic nonautonomous discrete dynamical systems and study the role played by the zeros of $[\zeta_F(z)]^p$ in this context. As we will see, these zeros play an important role in the spectral classification of these systems.
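For reference, the Artin-Mazur zeta function of a single map $f$ is the standard generating function of periodic-point counts shown below; roughly speaking, the abstract's $[\zeta_F(z)]^p$ plays the analogous role for the period-$p$ composition of the system's maps.

```latex
\zeta_f(z) = \exp\!\left( \sum_{n=1}^{\infty} \frac{N_n}{n}\, z^n \right),
\qquad N_n = \#\operatorname{Fix}(f^n)
```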
Abstract:
Toxoplasmosis is a highly prevalent zoonotic human infection caused by the Apicomplexa protozoon Toxoplasma gondii. The acute disease is usually mild or asymptomatic, except for foetal infection transmitted by acutely infected pregnant women, which takes the course of a devastating disease. In order to determine possible regional variations in risk factors, we studied the frequency of seronegativity in areas of the São Paulo Metropolitan Region, comparing titers and age groups. The prevalence of seronegativity was determined retrospectively in 1286 pregnant women receiving prenatal care at public health services in four selected areas of the São Paulo Metropolitan Region with similar socioeconomic background. The São Paulo City area had the highest frequency of seronegativity (41.1%), followed by the Northwest (31.5%) and Southwest (29.9%) areas, with similar intermediate levels, and by the Northeast (22.5%) area with the lowest frequency (p<0.001). A rough estimate disclosed about 280 infected infants/year in the São Paulo Metropolitan Region. Serological titers analyzed by age group suggested a decline in antibody levels with age, as shown by a lower frequency of high titers in older groups. Our study emphasizes the importance of determining the regional prevalence of toxoplasmosis for proper planning of public health prenatal care.
Abstract:
Infrared spectroscopy, in either the near- or mid-infrared (NIR/MIR) region of the spectrum, has gained great acceptance in industry for bioprocess monitoring according to Process Analytical Technology, due to its rapid, economic, highly sensitive and versatile mode of application. Given the relevance of cyprosin (mostly for the dairy industry), and since NIR and MIR spectroscopy present specific characteristics that may ultimately complement each other, in the present work these techniques were compared for monitoring and characterizing recombinant cyprosin production by Saccharomyces cerevisiae, by in situ and by at-line high-throughput analysis, respectively. Partial least-squares regression models relating NIR and MIR spectral features with biomass, cyprosin activity, specific activity, glucose, galactose, ethanol and acetate concentration were developed, all presenting, in general, high regression coefficients and low prediction errors. For biomass and glucose, slightly better models were achieved by in situ NIR spectroscopic analysis, while for cyprosin activity and specific activity, slightly better models were achieved by at-line MIR spectroscopic analysis. Therefore, both techniques enabled monitoring of the highly dynamic cyprosin production bioprocess, thereby providing more efficient platforms for bioprocess optimization and control.
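A minimal sketch of this kind of partial least-squares calibration, assuming a preprocessed spectral matrix X (samples x spectral variables) and a vector y of off-line reference values; the data, component count and fold count below are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder data: substitute real NIR/MIR absorbance spectra and off-line
# reference values (biomass, glucose, cyprosin activity, ...) here.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))   # 60 samples x 700 spectral variables
y = rng.normal(size=60)          # reference analyte concentrations

pls = PLSRegression(n_components=8)                 # latent variables, tuned by CV
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()  # 10-fold cross-validated predictions
print("R2 (CV) =", r2_score(y, y_cv))
print("RMSECV  =", mean_squared_error(y, y_cv) ** 0.5)
```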
Abstract:
In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
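As a point of reference, the sketch below implements the baseline mutual-information relevance criterion mentioned above (not the paper's proposed criteria), computed directly from the bin-class contingency counts of an already-discretized feature.

```python
import numpy as np

def mutual_information(feature, labels):
    """MI (in bits) between a discrete feature and class labels,
    estimated from the bin-class contingency table."""
    f_vals, f_idx = np.unique(feature, return_inverse=True)
    c_vals, c_idx = np.unique(labels, return_inverse=True)
    counts = np.zeros((len(f_vals), len(c_vals)))
    np.add.at(counts, (f_idx, c_idx), 1)          # bin-class histogram
    p_xy = counts / counts.sum()                  # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)         # feature marginal
    p_y = p_xy.sum(axis=0, keepdims=True)         # class marginal
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def select_top_k(X_discrete, y, k):
    """Rank discretized features by MI relevance and keep the top k."""
    scores = [mutual_information(X_discrete[:, j], y)
              for j in range(X_discrete.shape[1])]
    return np.argsort(scores)[::-1][:k]
```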
Abstract:
Mainland Portugal, on the southwestern edge of the European continent, is located directly north of the boundary between the Eurasian and Nubian plates. It lies in a region of slow lithospheric deformation (< 5 mm yr$^{-1}$), which has generated some of the largest earthquakes in Europe, both intraplate (mainland) and interplate (offshore). Some offshore earthquakes nucleate in old and cold lithospheric mantle, at depths down to 60 km. The seismicity of mainland Portugal and its adjacent offshore has been repeatedly classified as diffuse. In this paper, we analyse the instrumental earthquake catalogue for western Iberia, which covers the period between 1961 and 2013. Between 2010 and 2012, the catalogue was enriched with data from dense broad-band deployments. We show that although the plate boundary south of Portugal is diffuse, in that deformation is accommodated along several distributed faults rather than along one long linear plate boundary, the seismicity itself is not diffuse. Rather, when located using high-quality data, earthquakes collapse into well-defined clusters and lineations. We identify and characterize the most outstanding clusters and lineations of epicentres and correlate them with geophysical and tectonic features (historical seismicity, topography, geologically mapped faults, Moho depth, free-air gravity, magnetic anomalies and geotectonic units). Both onshore and offshore, clusters and lineations of earthquakes are aligned preferentially NNE-SSW and WNW-ESE. Cumulative seismic moment and epicentre density decrease from south to north, with increasing distance from the plate boundary. Only a few earthquake lineations coincide with geologically mapped faults. Clusters and lineations that do not match geologically mapped faults may correspond to previously unmapped faults (e.g. blind faults), rheological boundaries or distributed fracturing inside blocks that are more brittle and therefore break more easily than neighbouring blocks. The seismicity map of western Iberia presented in this article raises important questions concerning the regional seismotectonics. This work shows that the study of low-magnitude earthquakes using dense seismic deployments is a powerful tool to study lithospheric deformation in slowly deforming regions, such as western Iberia, where high-magnitude earthquakes occur with long recurrence intervals.
Abstract:
Paper presented at the IIAS-IISA Congress within the scope of the IX Study Group: Public service and politics, held in Ifrane, Morocco, 13-17 June 2014.
Abstract:
The choice of an information system is a critical success factor in an organization's performance; since it involves multiple decision-makers with often conflicting objectives, and several aggressively marketed alternatives, reaching a consensus is particularly complex. The main objective of this work is the analysis and selection of an information system to support school management, in its pedagogical and administrative components, using a multicriteria decision aid system, MMASSITI (Multicriteria Methodology to Support the Selection of Information Systems/Information Technologies), which integrates a multicriteria model that seeks to provide a systematic approach to the process of choosing information systems, able to produce sustained recommendations concerning the decision scope. Its application to a case study identified the relevant factors in the selection process of a school educational and management information system and yielded a solution that allows the decision maker to compare the quality of the various alternatives.
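A hedged sketch of the generic additive aggregation behind this kind of multicriteria model (the actual MMASSITI criteria, scales and weights are not reproduced here); scores are assumed to lie on a continuous scale anchored at two reference levels, e.g., 0 for "neutral" and 100 for "good".

```python
# Illustrative criteria weights and alternative scores -- invented values,
# NOT the actual MMASSITI criteria set.
weights = {"functionality": 0.35, "cost": 0.25, "support": 0.20, "usability": 0.20}

alternatives = {
    "system_A": {"functionality": 120, "cost": 40, "support": 80, "usability": 60},
    "system_B": {"functionality": 90, "cost": 100, "support": 50, "usability": 70},
}

def overall_value(scores, weights):
    """Weighted additive value; 0 = neutral reference, 100 = good reference."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(alternatives,
                 key=lambda a: overall_value(alternatives[a], weights),
                 reverse=True)
for name in ranking:
    print(name, round(overall_value(alternatives[name], weights), 1))
```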
Abstract:
We report data related to arbovirus antibodies detected in wild birds periodically captured from January 1978 to December 1990 in the counties of Salesópolis (Casa Grande Station), Itapetininga and the Ribeira Valley, considering the different capture environments. Plasmas were examined using hemagglutination-inhibition (HI) tests. Only monotypic reactions were considered, except for two heterotypic reactions in which a significant difference in titer was observed for a given virus of the same antigenic group. Among a total of 39,911 birds, 269 (0.7%), belonging to 66 species and 22 families, were found to have a monotypic reaction for the Eastern equine encephalitis (EEE), Venezuelan equine encephalitis (VEE), Western equine encephalitis (WEE), Ilheus (ILH), Rocio (ROC), St. Louis encephalitis (SLE), SP An 71686, or Caraparu (CAR) viruses. Analysis of the data provided information of epidemiologic interest with respect to these agents. Birds with positive serology were distributed among different habitats, with a predominance of unforested habitats. The greatest diversity of positive reactions was observed among species which concentrate in cultivated fields.
Abstract:
Over the past decade, scientists have been called to participate more actively in public education and outreach (E&O). This is particularly true in fields of significant societal impact, such as earthquake science. Local earthquake risk culture plays a role in the way that the public engages in educational efforts. In this article, we describe an adapted E&O program for earthquake science and risk. The program is tailored for a region of slow tectonic deformation, where large earthquakes are extreme events that occur with long return periods. The adapted program has two main goals: (1) to increase the awareness and preparedness of the population to earthquake and related risks (tsunami, liquefaction, fires, etc.), and (2) to increase the quality of earthquake science education, so as to attract talented students to geosciences. Our integrated program relies on activities tuned for different population groups who have different interests and abilities, namely young children, teenagers, young adults, and professionals.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, under given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
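As an illustration of the linear model and the constrained least-squares formulation above, the sketch below estimates abundances for one pixel under $y = Ma + n$, with non-negativity enforced by NNLS and the sum-to-one constraint imposed softly via the usual row-augmentation trick; the endmember matrix M is assumed known, and all sizes and data are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(y, M, delta=1e3):
    """Fully constrained least-squares abundances for one pixel.

    y : (L,) observed spectrum;  M : (L, p) known endmember signatures.
    Non-negativity comes from NNLS; sum-to-one is enforced (softly)
    by appending a heavily weighted row of ones to the system.
    """
    L, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

# Toy example: 3 endmembers, 50 bands, known 40/40/20 mixture plus noise.
rng = np.random.default_rng(1)
M = rng.uniform(0, 1, size=(50, 3))
a_true = np.array([0.4, 0.4, 0.2])
y = M @ a_true + 0.01 * rng.normal(size=50)
print(fcls(y, M))  # close to a_true
```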
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of the purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
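A minimal sketch of the dimensionality-reduction step discussed above, using PCA from scikit-learn on a flattened hyperspectral cube; the cube contents, shape and component count are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder cube: rows x cols x bands; substitute a real scene here.
cube = np.random.default_rng(2).normal(size=(100, 100, 224))
X = cube.reshape(-1, cube.shape[-1])        # one spectrum per row

pca = PCA(n_components=10)                  # keep the dominant signal subspace
X_reduced = pca.fit_transform(X)            # (pixels, 10) projected scores
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```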
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
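To make the Dirichlet abundance model above concrete, the following sketch (all sizes and parameters invented) simulates noisy linear mixtures with Dirichlet-distributed abundances; the abundances are positive and sum to one, which is exactly the source dependence that compromises ICA and IFA.

```python
import numpy as np

rng = np.random.default_rng(3)
L, p, n_pix = 50, 3, 1000                 # bands, endmembers, pixels

M = rng.uniform(0, 1, size=(L, p))        # endmember signatures (columns)
A = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=n_pix)  # abundances: >= 0, rows sum to 1
Y = A @ M.T + 0.005 * rng.normal(size=(n_pix, L))     # noisy linear mixtures

# The sum-to-one constraint makes the abundance sources mutually dependent:
print(np.allclose(A.sum(axis=1), 1.0))    # True: full additivity
print(np.corrcoef(A.T))                   # off-diagonal correlations are negative
```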
Abstract:
B. tenagophila snails from Ouro Branco, MG, showed positivity for S. mansoni, with infection rates of 5% and 10% (SJ strain) and 1% (LE strain) using a pool of miracidia. The mollusks were found to be susceptible from the 3rd laboratory-reared generation onwards. B. tenagophila (OB, MG), when individually exposed to 10 miracidia, showed an infection rate of 2% for the LE strain. B. glabrata snails from Gagé, MG, showed a positivity rate of 58% for S. mansoni (LE strain) under experimental conditions. The B. tenagophila from Cabo Frio, RJ, and B. glabrata from Belo Horizonte, MG, used as controls for the SJ strain, showed infection rates of 47%-85% and 36%, respectively. For the LE strain, B. glabrata (BH, MG) used as control showed infection rates of 40%-75%.
Abstract:
The main result of this work is a new criterion for the formation of good clusters in a graph. This criterion uses a new dynamical invariant, the performance of a clustering, which characterizes the quality of the formation of clusters. We prove that the growth of another dynamical invariant, the network topological entropy, has the effect of worsening the quality of a clustering in a process of cluster formation by successive removal of edges. Several examples of clustering on the same network are presented to compare the behavior of other parameters, such as network topological entropy, conductance, clustering coefficient and performance of a clustering, with the number of edges in a process of clustering by successive removal.
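As a hedged pointer to one of the compared parameters (the paper's "performance of a clustering" invariant is not a library function), the conductance of a candidate cluster, and its behavior under edge removal, can be computed with networkx:

```python
import networkx as nx
from networkx.algorithms.cuts import conductance

# Toy graph: two dense blocks (K5) joined by a single bridging edge.
G = nx.barbell_graph(5, 0)
cluster = set(range(5))          # one of the two blocks

# conductance = cut size / min(volume(S), volume(complement of S))
print(conductance(G, cluster))   # small value -> well-separated cluster

# Cluster formation by successive edge removal: deleting the bridging
# edge disconnects the blocks and drives the conductance to 0.
G.remove_edge(4, 5)
print(conductance(G, cluster))
```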
Abstract:
The main objective of this work is to report on the development of a multicriteria methodology to support the assessment and selection of an Information System (IS) framework in a business context. The objective is to select a technological partner that provides the engine to serve as the basis for the development of a customized application for shrinkage reduction in supply chain management. Furthermore, the proposed methodology differs from most of those previously proposed in the sense that 1) it provides the decision makers with a set of pre-defined criteria, along with their description and suggestions on how to measure them, and 2) it uses a continuous scale with two reference levels, so that no normalization of the valuations is required. The methodology proposed here has been designed to be easy to understand and use, without the specific support of a decision-making analyst.