984 results for Convergence model


Relevance:

20.00%

Publisher:

Abstract:

In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted to the data by the maximum likelihood methodology, excluding the external variables, which are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion.
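
The abstract describes the selection step only in prose; the following is a minimal sketch of the standard model-based selection loop it builds on (mixtures fitted by maximum likelihood, candidates scored by a criterion). The BIC used here is a stand-in; the paper's integrated joint likelihood criterion, which brings in the external partitions, is not reproduced.

```python
# Minimal sketch of model-based clustering with criterion-based selection.
# BIC is used as a stand-in criterion; the paper's integrated joint
# likelihood with external partitions is NOT implemented here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated Gaussian clusters (illustrative only).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# Each candidate mixture is fitted by maximum likelihood (EM), as in the
# abstract; the external variables would enter only at the selection step.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))
print("selected number of clusters:", best_k)
```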

Relevance:

20.00%

Publisher:

Abstract:

In this article we analytically solve the Hindmarsh-Rose model (Proc R Soc Lond B 221:87-102, 1984) by means of a technique developed for strongly nonlinear problems, the step homotopy analysis method. This analytical algorithm, based on a modification of the standard homotopy analysis method, allows us to obtain a one-parameter family of explicit series solutions for the studied neuronal model. The Hindmarsh-Rose system is a paradigmatic example of the models developed to qualitatively reproduce the electrical activity of cell membranes. Using the homotopy solutions, we investigate the dynamical effect of two biologically meaningful bifurcation parameters: the injected current I and the parameter r, representing the ratio of time scales between spiking (fast dynamics) and resting (slow dynamics). The auxiliary parameter involved in the analytical method provides an elegant way to ensure convergent series solutions of the neuronal model. Our analytical results are in excellent agreement with the numerical simulations.
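
The series solutions themselves are beyond a short excerpt, but the underlying system is compact. Below is a minimal numerical sketch of the Hindmarsh-Rose equations with the classic parameter values (a = 1, b = 3, c = 1, d = 5, s = 4, x_R = -8/5), useful for exploring the two bifurcation parameters I and r discussed in the abstract; the specific values chosen for I and r are assumptions for illustration.

```python
# Minimal numerical sketch of the Hindmarsh-Rose neuron model.
# Classic parameter values assumed; I (injected current) and r (ratio of
# fast to slow time scales) are the bifurcation parameters of interest.
from scipy.integrate import solve_ivp

a, b, c, d, s, x_R = 1.0, 3.0, 1.0, 5.0, 4.0, -8.0 / 5.0
I, r = 3.0, 0.005  # illustrative values; vary to explore the dynamics

def hindmarsh_rose(t, u):
    x, y, z = u  # membrane potential, fast recovery, slow adaptation
    return [y - a * x**3 + b * x**2 - z + I,
            c - d * x**2 - y,
            r * (s * (x - x_R) - z)]

sol = solve_ivp(hindmarsh_rose, (0.0, 1000.0), [-1.6, -10.0, 2.0],
                max_step=0.1)
print("final state:", sol.y[:, -1])
```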

Relevance:

20.00%

Publisher:

Abstract:

33rd IAHR Congress: Water Engineering for a Sustainable Environment

Relevance:

20.00%

Publisher:

Abstract:

Motivated by the dark matter and baryon asymmetry problems, we analyze a complex singlet extension of the Standard Model with a Z_2 symmetry (which provides a dark matter candidate). After a detailed two-loop calculation of the renormalization group equations for the new scalar sector, we study the radiative stability of the model up to a high energy scale (with the constraint that the 126 GeV Higgs boson found at the LHC is in the spectrum) and find that it requires the existence of a new scalar state mixing with the Higgs, with a mass larger than 140 GeV. This bound is not very sensitive to the cutoff scale as long as the latter is larger than 10^10 GeV. We then include all experimental and observational constraints/measurements from collider data, from dark matter direct detection experiments, and from the Planck satellite, and in addition require stability at least up to the grand unified theory scale, to find that the lower bound is raised to about 170 GeV, while the dark matter particle must be heavier than about 50 GeV.
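
The paper's two-loop running of the enlarged scalar sector cannot be condensed here, but the mechanism behind the stability bound can be illustrated: couplings are run up in scale and the quartic must stay positive. The sketch below integrates only the dominant one-loop, Standard-Model-like terms for the Higgs quartic, with the top Yukawa frozen and the singlet contributions omitted; every simplification is an assumption, so the numbers are qualitative at best.

```python
# Crude one-loop sketch of radiative stability: run the Higgs quartic
# coupling upward in scale and check whether it stays positive.
# Assumptions (NOT the paper's two-loop singlet system): gauge terms
# dropped, top Yukawa y_t frozen at its weak-scale value, no singlet.
import numpy as np
from scipy.integrate import solve_ivp

y_t = 0.94              # approximate top Yukawa at the weak scale
lam0 = 0.129            # quartic for m_h ~ 125 GeV (m_h^2 = 2*lam*v^2)

def beta_lambda(t, lam):  # t = ln(mu / m_Z)
    return (24 * lam**2 + 12 * lam * y_t**2 - 6 * y_t**4) / (16 * np.pi**2)

t_max = np.log(1e10 / 91.19)  # run from m_Z up to 10^10 GeV
sol = solve_ivp(beta_lambda, (0.0, t_max), [lam0], max_step=0.1)
mu = 91.19 * np.exp(sol.t)
unstable = mu[sol.y[0] < 0]
if unstable.size:
    print("lambda turns negative near mu ~ %.1e GeV" % unstable[0])
else:
    print("lambda stays positive up to 1e10 GeV")
```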

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Electrical Engineering, speciality of Digital Systems, from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

20.00%

Publisher:

Abstract:

Paper presented at the 8º Congresso Nacional de Administração Pública - Desafios e Soluções (8th National Congress of Public Administration: Challenges and Solutions), held in Carcavelos on 21-22 November 2011.

Relevance:

20.00%

Publisher:

Abstract:

Critical Issues in Environmental Taxation: International and Comparative Perspectives: Volume VI, 699-715

Relevance:

20.00%

Publisher:

Abstract:

We examine the constraints on the two-Higgs-doublet model (2HDM) due to the stability of the scalar potential and the absence of Landau poles at energy scales below the Planck scale. We employ the most general 2HDM that incorporates an approximately Standard Model (SM) Higgs boson, with a flavor-aligned Yukawa sector to eliminate potential tree-level Higgs-mediated flavor-changing neutral currents. Using basis-independent techniques, we exhibit robust regimes of the 2HDM parameter space with a 125 GeV SM-like Higgs boson that is stable and perturbative up to the Planck scale. Implications for the heavy scalar spectrum are exhibited.
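
For orientation, the tree-level bounded-from-below conditions of the 2HDM scalar potential take a compact, standard form (quoted here for real λ5). The sketch below checks them for a single parameter point; the paper's analysis additionally runs the couplings to the Planck scale with basis-independent methods, which this snippet does not attempt, and the example values are invented.

```python
# Tree-level bounded-from-below (stability) check for the 2HDM potential,
# in the standard form valid for real lambda_5. Running to the Planck
# scale and the basis-independent treatment of the paper are NOT included.
import math

def bounded_from_below(l1, l2, l3, l4, l5):
    root = math.sqrt(l1 * l2)
    return (l1 > 0 and l2 > 0
            and l3 > -root
            and l3 + l4 - abs(l5) > -root)

# Illustrative parameter point (invented values):
print(bounded_from_below(0.26, 0.26, 0.5, -0.3, 0.1))  # True
```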

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7].

Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27].

Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix which minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to ensure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than that of N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
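
The projection step that drives the proposed algorithm is simple enough to sketch. Assuming pure pixels are present (as stated above), each iteration projects the data onto a direction orthogonal to the span of the endmembers already found and takes the extreme of the projection as the next endmember. The snippet below is a bare-bones illustration of that idea on synthetic data; the chapter's subspace identification and noise estimation steps are omitted, and all names and dimensions are invented.

```python
# Bare-bones sketch of the iterative endmember extraction described above:
# project onto a direction orthogonal to the endmembers found so far and
# keep the extreme of the projection (a pure pixel, by assumption).
import numpy as np

def extract_endmembers(X, p, seed=0):
    """X: (bands, pixels) spectral vectors; p: number of endmembers."""
    rng = np.random.default_rng(seed)
    E = np.zeros((X.shape[0], p))
    for i in range(p):
        d = rng.standard_normal(X.shape[0])
        if i > 0:
            Q, _ = np.linalg.qr(E[:, :i])  # orthonormal basis of span(E)
            d -= Q @ (Q.T @ d)             # project out the found endmembers
        d /= np.linalg.norm(d)
        E[:, i] = X[:, np.argmax(np.abs(d @ X))]  # extreme of the projection
    return E

# Synthetic test: 3 endmembers, abundances on the simplex, pure pixels kept.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                                  # true signatures
A = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), 997).T])
E = extract_endmembers(M @ A, p=3)
print(E.shape)  # (50, 3): one extracted signature per column
```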

Relevance:

20.00%

Publisher:

Abstract:

The efficacy of the flucytosine (5-FC) and fluconazole (FLU) combination in the treatment of a murine experimental model of cryptococcosis was evaluated. Seven groups of 10 BALB/c mice each were intraperitoneally inoculated with 10^7 cells of Cryptococcus neoformans. Six groups were allocated to receive 5-FC (300 mg/kg) and FLU (16 mg/kg), either combined or individually, by daily gavage beginning 5 days after the infection, for 2 and 4 weeks. One group received distilled water and was used as control. The evaluation of the treatments was based on: survival time; macroscopic examination of brain, lungs, liver and spleen at autopsy; presence of encapsulated yeasts in microscopic examination of wet preparations of these organs; and cultures of brain homogenate. 5-FC and FLU, individually or combined, significantly prolonged the survival time of the treated animals with respect to the control group (p<0.01). Animals treated for 4 weeks survived significantly longer than those treated for 2 weeks (p<0.01). No significant differences between the animals treated with 5-FC and FLU combined or separately were observed in the survival time or morphological parameters. The combination of 5-FC and FLU does not seem to be more effective than 5-FC or FLU alone in the treatment of this experimental model of cryptococcosis.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Environmental Engineering from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática).

Relevance:

20.00%

Publisher:

Abstract:

The minimum interval graph completion problem consists of, given a graph G = (V, E), finding a supergraph H = (V, E ∪ F) that is an interval graph while adding the least number of edges |F|. We present an integer programming formulation for solving the minimum interval graph completion problem, resorting to a characterization of interval graphs that produces a linear ordering of the maximal cliques of the solution graph.
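
The characterization invoked here is the classical one: a graph is an interval graph exactly when its maximal cliques admit a linear order in which the cliques containing any given vertex appear consecutively. Below is a brute-force sketch of that test (only viable for tiny graphs, and no substitute for the integer program); the example graphs are invented.

```python
# Brute-force test of the clique-ordering characterization of interval
# graphs: the maximal cliques must admit a linear order in which, for each
# vertex, the cliques containing it are consecutive. Tiny graphs only;
# the paper's integer program searches such orderings implicitly.
from itertools import permutations
import networkx as nx

def consecutive(indices):
    return indices[-1] - indices[0] + 1 == len(indices)

def is_interval(G):
    cliques = list(nx.find_cliques(G))  # maximal cliques of G
    for order in permutations(cliques):
        if all(consecutive([i for i, c in enumerate(order) if v in c])
               for v in G):
            return True
    return False

# A chordless 4-cycle is not an interval graph; adding the chord (0, 2)
# turns it into two triangles sharing an edge, which is interval.
G = nx.cycle_graph(4)
print(is_interval(G))   # False
G.add_edge(0, 2)
print(is_interval(G))   # True
```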

Relevance:

20.00%

Publisher:

Abstract:

Order picking consists in retrieving products from storage locations to satisfy independent orders from multiple customers. It is generally recognized as one of the most significant activities in a warehouse (Koster et al., 2007). In fact, order picking accounts for up to 50% (Frazelle, 2001) or even 80% (Van den Berg, 1999) of the total warehouse operating costs. The critical issue in today's business environment is to simultaneously reduce the cost and increase the speed of order picking. In this paper, we address the order picking process in one of the largest Portuguese companies in the grocery business. This problem was proposed at the 92nd European Study Group with Industry (ESGI92). In this setting, each operator steers a trolley on the shop floor in order to select items for multiple customers. The objective is to improve the company's grocery e-commerce operation and bring it up to the level of the best international practices. In particular, the company wants to improve the routing tasks in order to decrease travelled distances. For this purpose, a mathematical model for faster open-shop picking was developed. In this paper, we describe the problem and our proposed solution, as well as some preliminary results and conclusions.
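
The abstract does not detail the model, so the following is only a rough illustration of the routing task it targets: sequencing pick locations to shorten the trolley's travel. A nearest-neighbour pass under a rectilinear (aisle-like) metric; the depot, coordinates and metric are all invented for the example, and the paper's open-shop model is not reproduced.

```python
# Illustrative nearest-neighbour routing for a picking trolley: visit the
# pick locations in a greedy order to shorten travel. Depot, coordinates
# and the Manhattan (rectilinear aisle) metric are invented assumptions.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nearest_neighbour_route(depot, picks):
    route, pos, remaining = [depot], depot, list(picks)
    while remaining:
        nxt = min(remaining, key=lambda p: manhattan(pos, p))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    route.append(depot)  # return to the depot
    length = sum(manhattan(u, v) for u, v in zip(route, route[1:]))
    return route, length

picks = [(2, 5), (7, 1), (4, 8), (1, 2), (6, 6)]  # hypothetical locations
route, dist = nearest_neighbour_route((0, 0), picks)
print(route, dist)
```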

Relevance:

20.00%

Publisher:

Abstract:

This work describes the analysis of an all-terrain vehicle. The kartcross/buggy under study is used in Baja-type events, which are long races over winding courses. The vehicle, already built, was tested with software at the structural and handling levels, the aim being to carry out reverse engineering on it. In normal use the kartcross/buggy is subjected to several types of loading, such as acceleration, braking and centripetal force when cornering. The vehicle must therefore be able to withstand these loads while handling well. In addition to the in-service load cases, the torsional stiffness of the vehicle frame and of the complete vehicle was analysed, so that these values could be improved. Regarding handling, suspension parameters such as camber, toe-in/toe-out and caster, among others, were analysed; from these parameters it is possible to make improvements so that the vehicle performs better. To validate the computational tests, the torsional stiffness test was reproduced experimentally. Finally, the numerical values were compared with the experimental ones to assess whether the model is well represented.
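
The torsional-stiffness test mentioned above is commonly reduced to a single relation; the form below is the generic one, with every symbol an assumption rather than taken from the thesis: a force F applied at a lever arm l produces the torque, and the twist angle is recovered from the vertical deflections measured at the hubs.

```latex
% Generic torsional stiffness relation (all symbols assumed, not the
% thesis's): applied torque over the resulting twist angle.
% F: applied force, l: lever arm, \Delta z_1, \Delta z_2: vertical
% deflections at the two hubs, t: track width.
K_t = \frac{T}{\theta}
    = \frac{F\,l}{\arctan\left(\frac{\Delta z_1 + \Delta z_2}{t}\right)}
```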