26 results for Linear patterns


Relevance: 20.00%

Abstract:

Chapter in book proceedings with peer review: Second Iberian Conference, IbPRIA 2005, Estoril, Portugal, June 7-9, 2005, Proceedings, Part II

Relevance: 20.00%

Abstract:

Philosophical Magazine Letters, Volume 88, Issue 9-10, 2008. Special Issue: Solid and Liquid Foams, in commemoration of Manuel Amaral Fortes

Relevance: 20.00%

Abstract:

We use a simple model of associating fluids which consists of spherical particles having a hard-core repulsion, complemented by three short-ranged attractive sites on the surface (sticky spots). Two of the spots are of type A and one is of type B; the bonding interactions between each pair of spots have strengths ε_AA, ε_BB, and ε_AB. The theory is applied over the whole range of bonding strengths and the results are interpreted in terms of the equilibrium cluster structures of the phases. In addition to our numerical results, we derive asymptotic expansions for the free energy in the limits for which there is no liquid-vapor critical point: linear chains (ε_AA ≠ 0, ε_AB = ε_BB = 0), hyperbranched polymers (ε_AB ≠ 0, ε_AA = ε_BB = 0), and dimers (ε_BB ≠ 0, ε_AA = ε_AB = 0). These expansions also allow us to calculate the structure of the critical fluid by perturbing around the above limits, yielding three different types of condensation: of linear chains (AA clusters connected by a few AB or BB bonds); of hyperbranched polymers (AB clusters connected by AA bonds); or of dimers (BB clusters connected by AA bonds). Interestingly, there is no critical point when ε_AA vanishes, despite the fact that AA bonds alone cannot drive condensation.
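As a generic illustration of the linear-chain limit mentioned above, Flory-type bond statistics give the chain-length distribution when each AA bond forms independently with probability p. This is a textbook sketch, not the paper's actual Wertheim-theory expressions:

```python
def chain_length_distribution(p, n_max=5):
    """Fraction of chains of length n for bond probability p (Flory statistics).

    A chain of length n requires n - 1 formed bonds and one unformed end,
    giving the geometric distribution (1 - p) * p**(n - 1).
    """
    return {n: (1 - p) * p ** (n - 1) for n in range(1, n_max + 1)}

def number_average_length(p):
    """Number-average chain length <n> = 1 / (1 - p)."""
    return 1.0 / (1.0 - p)
```

As the bond probability approaches 1 the average chain length diverges, which is the regime where the chain-condensation behavior discussed in the abstract becomes relevant.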

Relevance: 20.00%

Abstract:

Dissertation presented to the Escola Superior de Educação de Lisboa to obtain the master's degree in Mathematics Education in Preschool Education and in the 1st and 2nd Cycles of Basic Education

Relevance: 20.00%

Abstract:

This paper presents part of a study that aimed to understand how algebraic thinking emerges in a group of four-year-old children, as well as its relationship to the exploration of children's literature. To deepen and guide this study, the following research questions were formulated: (1) How can children's literature help preschoolers identify patterns?; (2) What strategies and thinking processes do children use to create, analyze and generalize repeating and growing patterns?; (3) What strategies do children use to identify the unit of repeat of a pattern?; and (4) What factors influence the identification of patterns? The paper focuses only on the strategies and thinking processes that children use to create, analyze and generalize repeating patterns. The study was developed with a group of 14 preschoolers in a private school in Lisbon and was carried out with all the children. A qualitative research methodology under the interpretive paradigm was chosen, emphasizing meanings and processes. The researcher took the dual role of teacher-researcher, conducting the study with her own group and in its natural environment. Participant observation and document analysis (audio and video recordings, photos and children's productions) were used as data collection methods. Data collection took place from October 2013 to April 2014. The results of the study indicate that the children master the concept of repeating patterns: they are able to identify the unit of repeat and to create and analyze various repeating patterns, evolving from simpler to more complex forms.
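The unit-of-repeat task in research question (3) can be made concrete with a short sketch: given a repeating pattern, find the smallest block whose repetition reproduces it. The helper below is a hypothetical illustration, not part of the study:

```python
def unit_of_repeat(pattern):
    """Return the smallest unit whose repetition reproduces the pattern.

    Tries candidate unit lengths k from shortest to longest; the first k
    that divides the pattern length and tiles it exactly is the unit.
    """
    n = len(pattern)
    for k in range(1, n + 1):
        if n % k == 0 and pattern == pattern[:k] * (n // k):
            return pattern[:k]
    return pattern
```

For example, the repeating pattern "ABABAB" has unit of repeat "AB", while "ABCD" is its own unit (no shorter block repeats).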

Relevance: 20.00%

Abstract:

Final Master's Project to obtain the degree of Master in Civil Engineering

Relevance: 20.00%

Abstract:

This paper presents the design of linear, decoupled and direct power controllers (DPC) for three-phase matrix converters operating as unified power flow controllers (UPFC), and compares their performance. A simplified steady-state model of the matrix converter-based UPFC fitted with a modified Venturini high-frequency pulse width modulator is first used to design the linear controllers for the transmission line active (P) and reactive (Q) powers. In order to minimize the resulting cross coupling between P and Q power controllers, decoupled linear controllers (DLC) are synthesized using inverse dynamics linearization. DPC are then developed using sliding-mode control techniques, in order to guarantee both robustness and decoupled control. The designed P and Q power controllers are compared using simulations and experimental results. Linear controllers show acceptable steady-state behaviour but still exhibit coupling between P and Q powers in transient operation. DLC are free from cross coupling but are parameter sensitive. Results obtained by DPC show decoupled power control with zero-error tracking and faster responses with no overshoot and no steady-state error. All the designed controllers were implemented using the same digital signal processing hardware.
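For context, the transmission-line powers that these controllers regulate obey the standard lossless-line relations P = V1*V2*sin(delta)/X and Q = (V1^2 - V1*V2*cos(delta))/X. The sketch below evaluates these textbook formulas only to illustrate the inherent P/Q coupling; it is not the paper's converter model:

```python
import math

def line_power_flow(v1, v2, delta, x):
    """Sending-end active and reactive power of a lossless transmission line.

    v1, v2: bus voltage magnitudes (p.u.), delta: angle difference (rad),
    x: line reactance (p.u.). Both P and Q depend on delta and the voltage
    magnitudes, which is the cross coupling the decoupled controllers target.
    """
    p = v1 * v2 * math.sin(delta) / x
    q = (v1 ** 2 - v1 * v2 * math.cos(delta)) / x
    return p, q
```

Changing delta to steer P inevitably perturbs Q (and vice versa), which is why the linear controllers exhibit transient coupling and why decoupling is designed in explicitly.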

Relevance: 20.00%

Abstract:

The activity of growing living bacteria was investigated using real-time and in situ rheology, in stationary and oscillatory shear. Two different strains of the human pathogen Staphylococcus aureus were considered in this work: strain COL and its isogenic cell wall autolysis mutant, RUSAL9. For low bacterial density, strain COL forms small clusters, while the mutant, presenting deficient cell separation, forms irregular larger aggregates. In the early stages of growth, when subjected to stationary shear, the viscosity of the cultures of both strains increases with the population of cells. As the bacteria reach the exponential phase of growth, the viscosity of the cultures of the two strains follows different and rich behaviors, with no counterpart in the optical density or colony-forming unit measurements of the population. While the viscosity of the strain COL culture keeps increasing during the exponential phase and returns close to its initial value in the late phase of growth, where the population stabilizes, the viscosity of the mutant strain culture decreases steeply, still in the exponential phase, remains constant for some time, and increases again, reaching a constant plateau at a maximum value in the late phase of growth. These complex viscoelastic behaviors, which were observed to be shear-stress-dependent, are a consequence of two coupled effects: the continuous increase in cell density and the cells' changing interaction properties. The viscous and elastic moduli of the strain COL culture, obtained with oscillatory shear, exhibit power-law behaviors whose exponents depend on the bacterial growth stage. The viscous and elastic moduli of the mutant culture have complex behaviors, emerging from the different relaxation times associated with the large molecules of the medium and the self-organized structures of the bacteria. Nevertheless, these behaviors reflect the bacterial growth stage.
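The power-law behavior of the moduli can be extracted from oscillatory-shear data by a least-squares fit in log-log space, since G = a*omega**n becomes linear after taking logarithms. The sketch below is a generic illustration with synthetic data; the function name and values are assumptions, not the paper's analysis:

```python
import math

def fit_power_law(omega, g):
    """Fit G(omega) ~ a * omega**n by ordinary least squares in log-log space.

    Returns the prefactor a and the exponent n (the slope of the
    log G vs. log omega line).
    """
    xs = [math.log(w) for w in omega]
    ys = [math.log(v) for v in g]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - n * mx)
    return a, n
```

Fitting the exponent at successive growth stages is one simple way to quantify how the viscoelastic response evolves with the bacterial population.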

Relevance: 20.00%

Abstract:

We consider the two-Higgs-doublet model as a framework in which to evaluate the viability of scenarios in which the sign of the coupling of the observed Higgs boson to down-type fermions (in particular, b-quark pairs) is opposite to that of the Standard Model (SM), while at the same time all other tree-level couplings are close to the SM values. We show that, whereas such a scenario is consistent with current LHC observations, both future running at the LHC and a future e+e− linear collider could determine the sign of the Higgs coupling to b-quark pairs. Discrimination is possible for two reasons. First, the interference between the b-quark and the t-quark loop contributions to the ggh coupling changes sign. Second, the charged-Higgs loop contribution to the γγh coupling is large and fairly constant up to the largest charged-Higgs mass allowed by tree-level unitarity bounds when the b-quark Yukawa coupling has the opposite sign from that of the SM (the change in sign of the interference terms between the b-quark loop and the W and t loops having negligible impact).
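The first discrimination mechanism can be sketched numerically: with the rate proportional to the squared sum of the top- and bottom-loop amplitudes, flipping the sign of the b-quark coupling turns destructive interference into constructive interference. The amplitude values below are placeholders chosen only to show the effect, not the actual loop functions:

```python
def ggh_rate(kappa_b, a_top=1.0, a_bottom=-0.06):
    """Toy gg -> h rate, proportional to |a_top + kappa_b * a_bottom|**2.

    a_top and a_bottom are hypothetical effective loop amplitudes; the
    relative minus sign encodes the destructive top-bottom interference
    of the SM-like case (kappa_b = +1).
    """
    amp = a_top + kappa_b * a_bottom
    return amp * amp

# Ratio of the wrong-sign (kappa_b = -1) rate to the SM-like rate.
ratio = ggh_rate(-1.0) / ggh_rate(+1.0)
```

Even a few-percent bottom-loop amplitude produces an O(10%) shift in the rate when its sign flips, which is what makes the ggh channel sensitive to the sign of the b-quark Yukawa coupling.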

Relevance: 20.00%

Abstract:

Dissertation to obtain the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch

Relevance: 20.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
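The linear mixing model just described, x = M a + n with nonnegative abundances summing to one, can be sketched in a few lines. Names and structure below are illustrative assumptions:

```python
import random

def mix_pixel(endmembers, abundances, noise_sigma=0.0):
    """Linear mixing model: x = M a + n.

    endmembers: list of p endmember spectra (each a list of band values),
    abundances: list of p fractions, nonnegative and summing to one,
    noise_sigma: standard deviation of additive Gaussian sensor noise.
    """
    assert abs(sum(abundances) - 1.0) < 1e-9 and min(abundances) >= 0.0
    bands = len(endmembers[0])
    pixel = []
    for b in range(bands):
        value = sum(a * m[b] for a, m in zip(abundances, endmembers))
        pixel.append(value + random.gauss(0.0, noise_sigma))
    return pixel
```

Unmixing is the inverse problem: given observed pixels (and possibly the endmember matrix M), recover the abundance fractions and, in the blind case, the endmember signatures themselves.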
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
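Of the matched-signature methods listed above, the spectral angle mapper is simple enough to sketch: each pixel is assigned to the library signature making the smallest angle with it. This is a generic illustration, not the exact implementation of Ref. [23]:

```python
import math

def spectral_angle(x, y):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

def classify(pixel, library):
    """Assign the pixel to the library signature with the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))
```

Because the angle ignores the vector magnitude, SAM is insensitive to overall illumination scaling, which is one reason it is popular as a simple matching criterion.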
MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
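The skewer-projection idea behind PPI can be sketched as follows; this is a simplified illustration without the MNF preprocessing step:

```python
import random

def ppi_scores(spectra, n_skewers=500, seed=0):
    """Pixel purity index sketch.

    For each random direction (skewer), project every spectrum onto it and
    increment the score of the pixels attaining the maximum and minimum
    projection. Pure pixels (simplex vertices) accumulate high scores;
    interior mixtures are never extremes.
    """
    rng = random.Random(seed)
    bands = len(spectra[0])
    scores = [0] * len(spectra)
    for _ in range(n_skewers):
        skewer = [rng.gauss(0.0, 1.0) for _ in range(bands)]
        proj = [sum(s * v for s, v in zip(skewer, spec)) for spec in spectra]
        scores[proj.index(max(proj))] += 1
        scores[proj.index(min(proj))] += 1
    return scores
```

With two pure pixels and one 50/50 mixture, the mixture's projection always lies between the pure ones, so its score stays at zero while the pure pixels accumulate all the hits.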
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers have been extracted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
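The iterative orthogonal-projection extraction can be sketched as follows. This is a simplified illustration of the idea, not the exact VCA algorithm: it uses the absolute projection extreme and a plain Gram-Schmidt basis, and it assumes the number of endmembers p is given:

```python
import random

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _orthogonalize(v, basis):
    """Remove from v its components along an orthonormal basis."""
    w = list(v)
    for u in basis:
        c = _dot(w, u)
        w = [wi - c * ui for wi, ui in zip(w, u)]
    return w

def extract_endmembers(spectra, p, seed=0):
    """Repeatedly project the data onto a direction orthogonal to the
    endmembers found so far and keep the pixel at the projection extreme."""
    rng = random.Random(seed)
    bands = len(spectra[0])
    basis, endmembers = [], []
    for _ in range(p):
        # Random direction, made orthogonal to the span of found endmembers.
        d = _orthogonalize([rng.gauss(0.0, 1.0) for _ in range(bands)], basis)
        proj = [abs(_dot(s, d)) for s in spectra]
        e = spectra[proj.index(max(proj))]
        endmembers.append(e)
        # Extend the orthonormal basis with the new endmember's direction.
        u = _orthogonalize(e, basis)
        norm = _dot(u, u) ** 0.5
        if norm > 1e-12:
            basis.append([x / norm for x in u])
    return endmembers
```

Because interior mixtures never attain the projection extreme, each iteration picks a new simplex vertex, which is why the pure-pixel assumption is essential for this family of methods.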