31 results for Generalized Lévy Process
Abstract:
The aim of this paper is to evaluate the influence of the crushing process used to obtain recycled concrete aggregates on the performance of concrete made with those aggregates. Two crushing methods were considered: primary crushing, using a jaw crusher, and primary plus secondary crushing (PSC), using a jaw crusher followed by a hammer mill. Besides natural aggregates (NA), these two processes were also used to crush three types of concrete made in the laboratory (L20, L45 and L65) and three others from the precast industry (P20, P45 and P65). The coarse natural aggregates were totally replaced by coarse recycled concrete aggregates. The recycled aggregate concrete mixes were compared with reference concrete mixes made using only NA, and the following properties related to mechanical and durability performance were tested: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; water absorption by capillarity; water absorption by immersion; and shrinkage. The results show that the PSC process leads to better performance, especially in the durability properties. © 2014 RILEM
Abstract:
This paper introduces a new unsupervised hyperspectral unmixing method conceived for linear but highly mixed hyperspectral data sets, in which the simplex of minimum volume, usually estimated by purely geometrically based algorithms, is far away from the true simplex associated with the endmembers. The proposed method, an extension of our previous studies, resorts to a statistical framework. The abundance fraction prior is a mixture of Dirichlet densities, thus automatically enforcing the constraints on the abundance fractions imposed by the acquisition process, namely, nonnegativity and sum-to-one. A cyclic minimization algorithm is developed in which: 1) the number of Dirichlet modes is inferred based on the minimum description length principle; 2) a generalized expectation maximization algorithm is derived to infer the model parameters; and 3) a sequence of augmented Lagrangian-based optimizations is used to compute the signatures of the endmembers. Experiments on simulated and real data are presented to show the effectiveness of the proposed algorithm in unmixing problems beyond the reach of the geometrically based state-of-the-art competitors.
Abstract:
An improved class of Boussinesq systems of arbitrary order using a wave surface elevation and velocity potential formulation is derived. Dissipative effects and wave generation due to a time-dependent varying seabed are included. Thus, high-order source functions are considered. For the reduction of the system order and maintenance of some dispersive characteristics of the higher-order models, an extra O(μ^(2n+2)) term (n ∈ ℕ) is included in the velocity potential expansion. We introduce a nonlocal continuous/discontinuous Galerkin FEM with inner penalty terms to calculate the numerical solutions of the improved fourth-order models. The discretization of the spatial variables is made using continuous P2 Lagrange elements. A predictor-corrector scheme with an initialization given by an explicit Runge-Kutta method is also used for the time-variable integration. Moreover, a CFL-type condition is deduced for the linear problem with a constant bathymetry. To demonstrate the applicability of the model, we considered several test cases. Improved stability is achieved.
Abstract:
Wythoff Queens is a classical combinatorial game related to very interesting mathematical results. A remarkable one is the fact that the P-positions are given by (⌊φn⌋, ⌊φ²n⌋) and (⌊φ²n⌋, ⌊φn⌋), where φ = (1+√5)/2. In this paper, we analyze a different version where one player (Left) plays with a chess bishop and the other (Right) plays with a chess knight. The new game (call it Chessfights) lacks a Beatty sequence structure in the P-positions, as in Wythoff Queens. However, it is possible to formulate and prove some general results about a recursive law which is a particular case of a Partizan Subtraction game.
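The closed-form P-positions quoted above are easy to check numerically; a minimal sketch (not from the paper) that also verifies the identity b_n = a_n + n satisfied by the two Beatty sequences:

```python
import math

# Golden ratio; Wythoff P-positions are (floor(phi*n), floor(phi^2*n)).
PHI = (1 + math.sqrt(5)) / 2

def wythoff_p_position(n):
    """Return the n-th P-position (a_n, b_n) of Wythoff Queens."""
    return (math.floor(PHI * n), math.floor(PHI * PHI * n))

# The two Beatty sequences satisfy b_n = a_n + n for every n >= 1,
# and together they partition the positive integers.
for n in range(1, 20):
    a, b = wythoff_p_position(n)
    assert b == a + n
```

For example, the first few P-positions come out as (1, 2), (3, 5), (4, 7), (6, 10).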
Abstract:
A dynamical approach to study the behaviour of generalized population growth models built from Beta(p, 2) densities, with strong Allee effect, is presented. The dynamical analysis of the respective unimodal maps is performed using symbolic dynamics techniques. The complexity of the corresponding discrete dynamical systems is measured in terms of topological entropy. Different population dynamics regimes are obtained when the intrinsic growth rates are modified: extinction, bistability, chaotic semistability and essential extinction.
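A minimal sketch of the kind of unimodal map involved, assuming (this parameterisation is an illustration, not the paper's exact model) a growth map built from the Beta(p, 2) density kernel x^(p-1)(1-x), i.e. f(x) = r·x^(p-1)·(1-x). For p > 2 the map exhibits a strong Allee effect: orbits starting below a threshold go extinct even at growth rates that sustain larger populations.

```python
# Illustrative sketch: unimodal map from the Beta(p, 2) kernel,
# f(x) = r * x**(p-1) * (1 - x).  The parameterisation is an assumption.

def beta_map(x, r, p):
    return r * x ** (p - 1) * (1 - x)

def orbit(x0, r, p, n):
    """Iterate the map n times from initial population fraction x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(beta_map(xs[-1], r, p))
    return xs

# Low intrinsic growth rate: extinction regardless of initial size.
low = orbit(0.5, r=1.0, p=3, n=50)

# Higher growth rate (r = 5, p = 3): a population starting above the
# Allee threshold persists near a positive fixed point...
high = orbit(0.5, r=5.0, p=3, n=200)

# ...while one starting below the threshold still collapses to zero.
allee = orbit(0.2, r=5.0, p=3, n=100)
```

With p = 3 and r = 5 the interior fixed points solve 5x(1-x) = 1, giving an unstable Allee threshold near 0.276 and a stable equilibrium near 0.724, which is the bistability regime mentioned above.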
Abstract:
Our aim was to analyse the impact of the characteristics of the words used in spelling programmes and of the nature of the instructional guidelines on the evolution from grapho-perceptive writing to phonetic writing in preschool children. The participants were 50 five-year-old children, divided into five groups equivalent in intelligence, phonological skills and spelling. All the children knew the vowels and the consonants B, D, P, R, T, V, F, M and C, but did not use them in their spelling. Their spelling was evaluated in a pre- and post-test with 36 words beginning with the known consonants. In between, they underwent a writing programme designed to lead them to use the letters P and T to represent the initial phonemes of words. The groups differed in the kind of words used in training (words whose initial syllable matches the name of the initial letter, Exp. G1 and Exp. G2, versus words whose initial syllable is similar to the sound of the initial letter, Exp. G3 and Exp. G4). They also differed in the instruction used to lead them to think about the relations between the initial phoneme of words and the initial consonant (instructions designed to make the children think about letter names, Exp. G1 and Exp. G3, versus instructions designed to make the children think about letter sounds, Exp. G2 and Exp. G4). The fifth group was a control group. All the children evolved to syllabic phonetisation spellings. There were no differences between groups in the total number of phonetisations, but we found some differences between groups in the quality of the phonetisations.
Abstract:
This paper addresses the on-site assessment, by semi-destructive testing (SDT) methods, of the consolidation efficiency of a conservation process developed by Henriques (2011) for structural and non-structural pine wood elements in service. The study was applied to Scots pine wood (Pinus sylvestris L.) degraded by fungi, after treatment with a biocidal product followed by consolidation with a polymeric product. This solution avoids the substitution of wood moderately degraded by fungi, improving its physical and mechanical characteristics. The consolidation efficiency was assessed on site by drill resistance and penetration resistance methods. The SDT methods used showed good sensitivity to the conservation process and could evaluate its effectiveness. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
This paper discusses the results of applied research in the eco-driving domain, based on a huge data set produced by a fleet of Lisbon's public transportation buses over a three-year period. The data set is based on events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to deal with the high volume of this data set, to determine the major factors that influence average fuel consumption, and then to classify the drivers involved according to their driving efficiency. Consequently, we identify the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal clutch use, engine rotation, and engine idling, can reduce fuel consumption on average by 3 to 5 L/100 km, meaning a saving of 30 L per bus in one day. These findings have been strongly considered in the drivers' training sessions.
Abstract:
Solution enthalpies of 18-crown-6 have been obtained for a set of 14 protic and aprotic solvents at 298.15 K. The complementary use of Solomonov's methodology and a QSPR-based approach allowed the identification of the most significant solvent descriptors that model the interaction enthalpy contribution of the solution process (ΔH_int(A/S)). Results were compared with data previously obtained for 1,4-dioxane. Although the interaction enthalpies of 18-crown-6 correlate well with those of 1,4-dioxane, the magnitude of the most relevant parameters, π* and β, is almost three times higher for 18-crown-6. This is rationalized in terms of the impact of the solute's volume on the solution processes of both compounds. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
An abstract theory on generalized synchronization of a system of several oscillators coupled by a medium is given. By generalized synchronization we mean the existence of an invariant manifold that allows a reduction in dimension. The case of a concrete system modeling the dynamics of a chemical solution in two containers connected to a third container is studied, from the basic setting to arbitrary perturbations. Conditions under which synchronization occurs are given. Our theoretical results are complemented with a numerical study.
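A minimal numerical sketch of medium-coupled synchronization in the spirit of the container system described above. The model and rates here are illustrative assumptions, not the paper's equations: two containers x1, x2 each exchange with a shared medium m, and the difference x1 - x2 obeys d(x1-x2)/dt = -k(x1-x2), so trajectories collapse onto the invariant manifold x1 = x2 (the dimension reduction mentioned in the abstract).

```python
# Illustrative sketch (assumed linear exchange model, not the paper's):
# two containers coupled only through a shared medium m.

def step(x1, x2, m, k=0.5, c=0.3, dt=0.01):
    """One explicit Euler step of the three-compartment exchange."""
    dx1 = -k * (x1 - m)                 # container 1 relaxes to medium
    dx2 = -k * (x2 - m)                 # container 2 relaxes to medium
    dm = c * ((x1 + x2) / 2.0 - m)      # medium relaxes to the mean
    return x1 + dt * dx1, x2 + dt * dx2, m + dt * dm

x1, x2, m = 1.0, 0.0, 0.2
for _ in range(5000):                   # integrate up to t = 50
    x1, x2, m = step(x1, x2, m)

# x1 - x2 decays like exp(-k*t), so the two containers synchronize
# and the whole system settles on the manifold x1 = x2 = m.
```

Note that 0.3·(x1 + x2) + m is conserved by these equations (and exactly by the Euler scheme), which pins down the common limit the synchronized state converges to.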
Abstract:
In recent papers, the authors obtained formulas for the directional derivatives, of all orders, of the immanant and of the m-th ξ-symmetric tensor power of an operator and a matrix, when ξ is a character of the full symmetric group. The operator norm of these derivatives was also calculated. In this paper, similar results are established for generalized matrix functions and for every symmetric tensor power.
Abstract:
This work intends to evaluate the mechanical and durability performance of concrete made with coarse recycled concrete aggregates (CRCA) obtained using two crushing processes: primary crushing (PC) and primary plus secondary crushing (PSC). This analysis intends to select the most efficient production process for recycled aggregates (RA). The RA used here resulted from precast products (P), with strength classes of 20 MPa, 45 MPa and 65 MPa, and from laboratory-made concrete (L) with the same compressive strengths. The evaluation of the concrete was made with the following tests: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; capillary water absorption; and water absorption by immersion. These findings contribute to a solid and innovative basis that allows the precasting industry to use the waste it generates without restrictions. © (2015) Trans Tech Publications, Switzerland.
Abstract:
This paper introduces a new hyperspectral unmixing method called Dependent Component Analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA performance is illustrated using simulated and real data.
Abstract:
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed Dependent Component Analysis (DECA). This method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
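A minimal sketch of the linear mixture model that DECA assumes, using only the standard library. The endmember signatures below are made-up illustrative numbers; the Dirichlet sampling (via normalised Gamma variates) shows how the prior automatically yields non-negative abundances that sum to one.

```python
import random

def sample_dirichlet(alpha, rng=random):
    """Dirichlet sample via normalised Gamma variates (standard trick)."""
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def mix_pixel(signatures, abundances):
    """Linear mixture model: weighted sum of endmember signatures."""
    bands = len(signatures[0])
    return [sum(a * sig[b] for a, sig in zip(abundances, signatures))
            for b in range(bands)]

# Three hypothetical endmembers observed in four spectral bands.
M = [[0.9, 0.1, 0.2, 0.4],
     [0.2, 0.8, 0.3, 0.1],
     [0.1, 0.2, 0.9, 0.6]]

rng = random.Random(0)
a = sample_dirichlet([1.0, 1.0, 1.0], rng)   # abundance fractions
pixel = mix_pixel(M, a)                      # simulated observed pixel

# The constraints imposed by the acquisition process hold by construction:
assert min(a) >= 0.0 and abs(sum(a) - 1.0) < 1e-12
```

Since each pixel is a convex combination of the endmember signatures, every band value lies between the minimum and maximum of the corresponding endmember values, which is exactly the simplex geometry that unmixing algorithms exploit.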