956 results for Self-organizing feature map
Abstract:
This paper addresses the self-scheduling problem for a thermal power producer taking part in a pool-based electricity market as a price-taker, holding bilateral contracts and subject to emission constraints. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding the electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp-up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and the proficiency of the proposed approach in supporting bidding strategies. (C) 2014 Elsevier Ltd. All rights reserved.
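The stochastic self-scheduling idea can be sketched in miniature. All numbers below (prices, costs, scenario probabilities) are hypothetical, and exhaustive search over on/off schedules stands in for the stochastic MILP solver on this tiny single-unit instance:

```python
from itertools import product

# Toy illustration (not the paper's model): a single thermal unit, hourly
# on/off decisions, expected profit over price scenarios. Data are hypothetical.
scenarios = [([30.0, 55.0, 70.0, 40.0], 0.5),   # (hourly prices EUR/MWh, probability)
             ([25.0, 45.0, 60.0, 35.0], 0.5)]
p_out = 100.0        # output when committed (MWh)
var_cost = 42.0      # variable cost (EUR/MWh)
start_cost = 500.0   # start-up cost (EUR)

def expected_profit(u):
    """Expected profit of an on/off schedule u over all price scenarios."""
    profit = 0.0
    for prices, prob in scenarios:
        rev = sum(u[t] * p_out * (prices[t] - var_cost) for t in range(len(u)))
        starts = sum(1 for t in range(len(u)) if u[t] and (t == 0 or not u[t - 1]))
        profit += prob * (rev - start_cost * starts)
    return profit

# Exhaustive enumeration replaces the MILP solver on this 4-hour instance.
best = max(product([0, 1], repeat=4), key=expected_profit)
print(best, expected_profit(best))
```

Here the unit commits only in the two hours whose expected price exceeds its variable cost by enough to cover the start-up cost.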
Abstract:
Continuous process-improvement programmes are increasingly the choice of companies facing today's market. By implementing these programmes it is possible to bring simplicity and standardization to processes and consequently to reduce the costs of internal quality-related waste. Quality-improvement tools and the tools associated with Lean Thinking are an important pillar in the success of any continuous process-improvement programme. These tools are useful means for analysing, controlling and organizing the data needed for correct decision-making in organizations. The main objective of this project is the design and implementation of a quality-improvement programme at Eurico Ferreira, S.A., based on the assessment of customer satisfaction and the application of 5S. In this context, the work was theoretically grounded in Quality Management, Lean Thinking and several tools from both fields. The company's business area to be addressed was then selected. After the selection, an initial diagnosis of the process was carried out, identifying the various points for improvement, where some Lean Thinking tools were applied, namely Value Stream Mapping and the 5S methodology. With the former it was possible to build a map of the current state of the process, representing all the participants as well as the flow of materials and information throughout the process. The 5S methodology made it possible to act on waste, identifying and implementing several improvements in the process. It was concluded that the implementation of these tools contributed effectively to the continuous improvement of process quality, and the coordination decided to extend the scope of the project to the remaining warehouses of the company's logistics centre.
Based on customer satisfaction, expressed in the favourable evolution of the service-level agreement, it can be stated that the implemented tools have produced very positive results in the short term.
Abstract:
The challenges that current manufacturing systems face, due to the international economic crisis, market globalization and e-business trends, incite the development of intelligent systems to support decision making, allowing managers to concentrate on high-level management tasks while improving decision response and effectiveness towards manufacturing agility. This paper presents a novel negotiation mechanism for dynamic scheduling based on social and collective intelligence. Under the proposed negotiation mechanism, agents must interact and collaborate in order to improve the global schedule. Swarm Intelligence (SI) is considered a general aggregation term for several computational techniques that use ideas and inspiration from the social behaviors of insects and other biological systems. This work is primarily concerned with negotiation, where multiple self-interested agents can reach agreement over the exchange of operations on competitive resources. Experimental analysis was performed in order to validate the influence of the negotiation mechanism on system performance and the SI technique. Empirical results and statistical evidence illustrate that the negotiation mechanism significantly influences the overall system performance and the effectiveness of the Artificial Bee Colony algorithm for makespan minimization and machine occupation maximization.
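As a rough illustration of the SI side, here is a simplified Artificial Bee Colony style search; the onlooker phase is omitted, the scheduling problem is reduced to assigning jobs to identical machines, and all data are hypothetical. This is a sketch of the metaheuristic family, not the paper's negotiation mechanism:

```python
import random

# ABC-style sketch: assign jobs to identical machines so that the makespan
# (the maximum machine load) is minimized. Problem data are hypothetical.
random.seed(0)
jobs = [7, 3, 5, 4, 6, 2, 8, 5]   # processing times
n_machines = 3

def makespan(assign):
    loads = [0] * n_machines
    for t, m in zip(jobs, assign):
        loads[m] += t
    return max(loads)

def neighbour(assign):
    """Employed-bee move: reassign one randomly chosen job to another machine."""
    a = assign[:]
    j = random.randrange(len(jobs))
    a[j] = random.choice([m for m in range(n_machines) if m != a[j]])
    return a

def random_source():
    return [random.randrange(n_machines) for _ in jobs]

# Food sources are candidate assignments; greedy selection keeps improving moves,
# and a source that stagnates for `limit` trials is abandoned to a scout (restart).
colony = [random_source() for _ in range(10)]
trials = [0] * len(colony)
limit = 5
best = min(colony, key=makespan)
for _ in range(200):
    for i in range(len(colony)):
        cand = neighbour(colony[i])
        if makespan(cand) < makespan(colony[i]):
            colony[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if makespan(colony[i]) < makespan(best):
            best = colony[i]
        if trials[i] > limit:
            colony[i], trials[i] = random_source(), 0

print(makespan(best))
```

The total processing time is 40, so no assignment to three machines can beat a makespan of 14; the scout restarts let the search escape local optima such as two equally loaded machines.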
Abstract:
New homoditopic bis-calix[4]arene-carbazole conjugates, armed with hydrophilic carboxylic acid functions at their lower rims, are disclosed. Evidence for their self-association in solution was gathered from solvatochromic and thermochromic studies, as well as from gel-permeation chromatography analysis. Their ability to function as highly sensitive sensors toward polar electron-deficient aromatic compounds is demonstrated.
Abstract:
Power law (PL) distributions have been widely reported in the modeling of distinct real phenomena and have been associated with fractal structures and self-similar systems. In this paper, we analyze real data that follow a PL and a double-PL behavior and verify the relation between the PL coefficient and the capacity dimension of known fractals. A method is proposed that translates the PL coefficients of any real data into the capacity dimension of the corresponding fractal.
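A minimal sketch of the first step, estimating a PL coefficient from data. The maximum-likelihood estimator below and the synthetic Pareto sample are standard textbook material, not the paper's method; for self-similar data the fitted exponent is what would then be related to a capacity dimension:

```python
import math
import random

# For a power-law density p(x) ~ x^-(alpha+1), x >= xmin, the maximum-likelihood
# estimate is alpha_hat = n / sum(ln(x_i / xmin)). Sample data are synthetic.
random.seed(42)
alpha_true, xmin, n = 1.5, 1.0, 50000

# Inverse-CDF sampling of a Pareto(alpha, xmin) variable.
data = [xmin * random.random() ** (-1.0 / alpha_true) for _ in range(n)]

alpha_hat = n / sum(math.log(x / xmin) for x in data)
print(alpha_hat)
```

With 50,000 samples the estimate falls within a small fraction of a percent of the true exponent.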
Abstract:
This essay offers a reflection on the concepts of identity and personal narrative, a line of argument that is closely interlaced with a subject's capacity for self-representation. As self-representation is necessarily composed upon remembrance processes, the question of memory as an element that directly influences the formation of an individual's identity becomes an emergent topic. Bearing this objective in mind, I shall highlight the notion of biographic continuity, the ability to elaborate a personal narrative, as an essential prerogative to attain a sense of identitary cohesion and coherence. On the other hand, I will argue that not only experienced memories play a key role in this process; intermediated, received narratives from the past, memories transmitted either symbolically or by elder members of the group or, what has meanwhile been termed "postmemory", also influence the development of an individual's identitary map. This theoretical framework will be illustrated with the novel Paul Schatz im Uhrenkasten, written by German post-Holocaust author Jan Koneffke.
Abstract:
With the help of a unique combination of density functional theory and computer simulations, we discover two possible scenarios, depending on concentration, for the hierarchical self-assembly of magnetic nanoparticles on cooling. We show that typically considered low temperature clusters, i.e. defect-free chains and rings, merge into more complex branched structures through only three types of defects: four-way X junctions, three-way Y junctions and two-way Z junctions. Our accurate calculations reveal the predominance of weakly magnetically responsive rings cross-linked by X defects at the lowest temperatures. We thus provide a strategy to fine-tune magnetic and thermodynamic responses of magnetic nanocolloids to be used in medical and microfluidics applications.
Abstract:
The basic objective of this work is to evaluate the durability of self-compacting concrete (SCC) produced in binary and ternary mixes using fly ash (FA) and limestone filler (LF) as partial replacement of cement. The main characteristics that set SCC apart from conventional concrete (fundamentally its fresh state behaviour) essentially depend on the greater or lesser content of various constituents, namely: greater mortar volume (more ultrafine material in the form of cement and mineral additions); proper control of the maximum size of the coarse aggregate; use of admixtures such as superplasticizers. Significant amounts of mineral additions are thus incorporated to partially replace cement, in order to improve the workability of the concrete. These mineral additions necessarily affect the concrete’s microstructure and its durability. Therefore, notwithstanding the many well-documented and acknowledged advantages of SCC, a better understanding of its behaviour is still required, in particular when its composition includes significant amounts of mineral additions. An ambitious working plan was devised: first, the SCC’s microstructure was studied and characterized, and afterwards the main transport and degradation mechanisms of the SCC produced were studied and characterized by means of SEM image analysis, chloride migration, electrical resistivity, and carbonation tests. It was then possible to draw conclusions about the SCC’s durability. The properties studied are strongly affected by the type and content of the additions. Also, the use of ternary mixes proved to be extremely favourable, confirming the expected beneficial effect of the synergy between LF and FA. © 2015 RILEM.
Abstract:
In the last decade, local image features have been widely used in robot visual localization. In order to assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image with those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, in this paper we compare several candidate combiners with respect to their performance in the visual localization task. For this evaluation, we selected the most popular methods in the class of non-trained combiners, namely the sum rule and product rule. A deeper insight into the potential of these combiners is provided through a discriminativity analysis involving the algebraic rules and two extensions of these methods: the threshold, as well as the weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. Furthermore, we address the process of constructing a model of the environment by describing how the model granularity impacts upon performance. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance, confirming the general agreement on the robustness of this rule in other classification problems. The voting method, whilst competitive with the product rule in its standard form, is shown to be outperformed by its modified versions.
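The two algebraic rules under comparison are easy to state in code. The scores below are hypothetical; the example shows the product rule's veto effect, where a single low score eliminates a candidate place that the sum rule still ranks first:

```python
# Non-trained combiners for visual place recognition: each row holds the
# per-place scores from one matcher. Scores are hypothetical.
scores = [
    [0.9, 0.4],    # matcher 1: scores for place 0 and place 1
    [0.05, 0.4],   # matcher 2: scores for place 0 and place 1
]

def sum_rule(rows):
    """Aggregate by summing each place's scores across matchers."""
    return [sum(col) for col in zip(*rows)]

def product_rule(rows):
    """Aggregate by multiplying each place's scores across matchers."""
    out = []
    for col in zip(*rows):
        p = 1.0
        for s in col:
            p *= s
        out.append(p)
    return out

def argmax(xs):
    return max(range(len(xs)), key=xs.__getitem__)

print(argmax(sum_rule(scores)))      # place 0: highest total score
print(argmax(product_rule(scores)))  # place 1: no matcher scores it low
```

The threshold and weighted modifications studied in the paper would alter these rules by clipping or reweighting each matcher's contribution before aggregation.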
Abstract:
Feature discretization (FD) techniques often yield adequate and compact representations of the data, suitable for machine learning and pattern recognition problems. These representations usually decrease the training time, yielding higher classification accuracy while allowing humans to better understand and visualize the data, as compared to the use of the original features. This paper proposes two new FD techniques. The first one is based on the well-known Linde-Buzo-Gray quantization algorithm, coupled with a relevance criterion, and is able to perform unsupervised, supervised, or semi-supervised discretization. The second technique works in supervised mode, being based on the maximization of the mutual information between each discrete feature and the class label. Our experimental results on standard benchmark datasets show that these techniques scale up to high-dimensional data, attaining in many cases better accuracy than existing unsupervised and supervised FD approaches, while using fewer discretization intervals.
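A toy sketch of the quantization core of the first technique, reduced to plain 1-D Lloyd iterations; the relevance criterion and the supervision modes are omitted, and the data are hypothetical:

```python
# Simplified Linde-Buzo-Gray-style scalar quantization for unsupervised
# feature discretization (1-D Lloyd iterations; data are hypothetical).
def lbg_discretize(values, n_bins, iters=20):
    """Return bin centroids and the discrete bin index of each value."""
    lo, hi = min(values), max(values)
    # Initialise the codebook uniformly over the feature's range.
    centroids = [lo + (hi - lo) * (i + 0.5) / n_bins for i in range(n_bins)]
    for _ in range(iters):
        # Assignment step: each value goes to its nearest centroid.
        buckets = [[] for _ in range(n_bins)]
        for v in values:
            k = min(range(n_bins), key=lambda i: abs(v - centroids[i]))
            buckets[k].append(v)
        # Update step: each centroid becomes the mean of its assigned values.
        centroids = [sum(b) / len(b) if b else c
                     for b, c in zip(buckets, centroids)]
    codes = [min(range(n_bins), key=lambda i: abs(v - centroids[i]))
             for v in values]
    return centroids, codes

values = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9, 9.8, 10.1]
centroids, codes = lbg_discretize(values, 3)
print(centroids, codes)
```

On this well-separated sample the three codebook entries settle on the three natural clusters after a single update.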
Abstract:
In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
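As a small illustration of a relevance criterion computed on bin-class histograms, here is the mutual information between a discrete feature and the class label; the histograms are hypothetical, and this mirrors the baseline criterion mentioned above rather than the proposed ones:

```python
import math

# Mutual information I(F; C) from a bin-class histogram, where
# counts[b][c] = number of samples falling in bin b with class c.
def mutual_information(counts):
    n = sum(sum(row) for row in counts)
    p_bin = [sum(row) / n for row in counts]            # P(F = b)
    p_cls = [sum(col) / n for col in zip(*counts)]      # P(C = c)
    mi = 0.0
    for b, row in enumerate(counts):
        for c, cnt in enumerate(row):
            if cnt:
                p = cnt / n
                mi += p * math.log2(p / (p_bin[b] * p_cls[c]))
    return mi

# Perfectly informative feature: each bin contains a single class (1 bit).
print(mutual_information([[10, 0], [0, 10]]))
# Irrelevant feature: identical class distribution in every bin (0 bits).
print(mutual_information([[5, 5], [5, 5]]))
```

A redundancy criterion would apply the same computation between pairs of discrete features instead of feature and label.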
Abstract:
The Evidence Accumulation Clustering (EAC) paradigm is a clustering ensemble method which derives a consensus partition from a collection of base clusterings obtained using different algorithms. It collects from the partitions in the ensemble a set of pairwise observations about the co-occurrence of objects in the same cluster, and it uses these co-occurrence statistics to derive a similarity matrix, referred to as the co-association matrix. The Probabilistic Evidence Accumulation for Clustering Ensembles (PEACE) algorithm is a principled approach for the extraction of a consensus clustering from the observations encoded in the co-association matrix, based on a probabilistic model for the co-association matrix parameterized by the unknown assignments of objects to clusters. In this paper we extend the PEACE algorithm by deriving a consensus solution according to a MAP approach with Dirichlet priors defined for the unknown probabilistic cluster assignments. In particular, we study the positive regularization effect of Dirichlet priors on the final consensus solution with both synthetic and real benchmark data.
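The evidence-accumulation step itself is compact. A minimal sketch with a hypothetical three-partition ensemble:

```python
# Build the co-association matrix from an ensemble of base partitions.
# Each partition maps object index -> cluster label; the ensemble is hypothetical.
def co_association(partitions, n_objects):
    """C[i][j] = fraction of partitions placing objects i and j in the same cluster."""
    m = len(partitions)
    C = [[0.0] * n_objects for _ in range(n_objects)]
    for labels in partitions:
        for i in range(n_objects):
            for j in range(n_objects):
                if labels[i] == labels[j]:
                    C[i][j] += 1.0 / m
    return C

ensemble = [
    [0, 0, 1, 1],   # base clustering 1
    [0, 0, 0, 1],   # base clustering 2
    [1, 1, 0, 0],   # base clustering 3
]
C = co_association(ensemble, 4)
print(C[0][1], C[0][2], C[2][3])
```

PEACE and its MAP extension then treat this matrix as noisy observations of the unknown consensus assignments, rather than simply thresholding it.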
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. 
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief resume of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
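The linear mixing model with the abundance constraints discussed above (nonnegativity and full additivity) can be illustrated in the two-endmember case, where constrained least squares has a closed form. The four-band signatures below are hypothetical:

```python
# Two-endmember linear mixing: y = a*m1 + (1 - a)*m2 + noise, with the
# abundance a constrained to [0, 1]. Spectral signatures are hypothetical.
m1 = [0.10, 0.40, 0.70, 0.60]   # endmember 1 signature (4 bands)
m2 = [0.50, 0.30, 0.20, 0.10]   # endmember 2 signature (4 bands)

def unmix2(y):
    """Least-squares abundance of m1 under a + (1 - a) = 1, clipped to [0, 1]."""
    d = [a - b for a, b in zip(m1, m2)]
    num = sum((yi - bi) * di for yi, bi, di in zip(y, m2, d))
    den = sum(di * di for di in d)
    return min(1.0, max(0.0, num / den))

# Synthesise a noise-free pixel that is 30 % endmember 1, 70 % endmember 2.
a_true = 0.3
pixel = [a_true * s1 + (1 - a_true) * s2 for s1, s2 in zip(m1, m2)]
print(unmix2(pixel))
```

With more endmembers the same least-squares problem is solved under the full simplex constraint, which is where the geometric and statistical methods surveyed in the chapter come in.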
Abstract:
A correlation and predictive scheme for the viscosity and self-diffusivity of liquid dialkyl adipates is presented. The scheme is based on the kinetic theory for dense hard-sphere fluids, applied to the van der Waals model of a liquid to predict the transport properties. A "universal" curve for a dimensionless viscosity of dialkyl adipates was obtained using recently published experimental viscosity and density data of compressed liquid dimethyl (DMA), dipropyl (DPA), and dibutyl (DBA) adipates. The experimental data are described by the correlation scheme with a root-mean-square deviation of +/- 0.34 %. The parameters describing the temperature dependence of the characteristic volume, V-0, and the roughness parameter, R-eta, for each adipate are well correlated with one single molecular parameter. Recently published experimental self-diffusion coefficients of the same set of liquid dialkyl adipates at atmospheric pressure were correlated using the characteristic volumes obtained from the viscosity data. The roughness factors, R-D, are well correlated with the same single molecular parameter found for viscosity. The root-mean-square deviation of the data from the correlation is less than 1.07 %. Tests are presented in order to assess the capability of the correlation scheme to estimate the viscosity of compressed liquid diethyl adipate (DEA) in a range of temperatures and pressures by comparison with literature data and of its self-diffusivity at atmospheric pressure in a range of temperatures. It is noteworthy that no data for DEA were used to build the correlation scheme. The deviations encountered between predicted and experimental data for the viscosity and self-diffusivity do not exceed 2.0 % and 2.2 %, respectively, which are commensurate with the estimated experimental measurement uncertainty, in both cases.
Abstract:
The S100 proteins are 10-12 kDa EF-hand proteins that act as central regulators in a multitude of cellular processes including cell survival, proliferation, differentiation and motility. Consequently, many S100 proteins are implicated and display marked changes in their expression levels in many types of cancer, neurodegenerative disorders, inflammatory and autoimmune diseases. The structure and function of S100 proteins are modulated by metal ions via Ca2+ binding through EF-hand motifs and binding of Zn2+ and Cu2+ at additional sites, usually at the homodimer interfaces. Ca2+ binding modulates S100 conformational opening and thus promotes and affects the interaction with p53, the receptor for advanced glycation endproducts and Toll-like receptor 4, among many others. Structural plasticity also occurs at the quaternary level, where several S100 proteins self-assemble into multiple oligomeric states, many being functionally relevant. Recently, we have found that the S100A8/A9 proteins are involved in amyloidogenic processes in corpora amylacea of prostate cancer patients, and undergo metal-mediated amyloid oligomerization and fibrillation in vitro. Here we review the unique chemical and structural properties of S100 proteins that underlie the conformational changes resulting in their oligomerization upon metal ion binding and ultimately in functional control. The possibility that S100 proteins have intrinsic amyloid-forming capacity is also addressed, as well as the hypothesis that amyloid self-assemblies may, under particular physiological conditions, affect the S100 functions within the cellular milieu.