16 results for Continuum mixture theory
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
We have generalized earlier work on anchoring of nematic liquid crystals by Sullivan, and Sluckin and Poniewierski, in order to study transitions which may occur in binary mixtures of nematic liquid crystals as a function of composition. Microscopic expressions have been obtained for the anchoring energy of (i) a liquid crystal in contact with a solid aligning surface; (ii) a liquid crystal in contact with an immiscible isotropic medium; (iii) a liquid crystal mixture in contact with a solid aligning surface. For (iii), possible phase diagrams of anchoring angle versus dopant concentration have been calculated using a simple liquid crystal model. These exhibit some interesting features including re-entrant conical anchoring, for what are believed to be realistic values of the molecular parameters. A way of relaxing the most drastic approximation implicit in the above approach is also briefly discussed.
Abstract:
An experimental and theoretical study of the electro-rheological effects observed in the nematic phase of 4-n-heptyl-4'-cyanobiphenyl has been conducted. This liquid crystal appears to be a model system, in which the observed rheological behaviour can be interpreted by the Leslie-Ericksen continuum theory for low molecular weight liquid crystals. Flow curves are illustrated at different temperatures and under the influence of an external electric field ranging from 0 to 3 kV mm⁻¹, applied perpendicular to the direction of flow. Also presented is the apparent viscosity as a function of temperature, over similar values of electric field, obtained at different shear rates. A master flow curve has been constructed for each temperature by dividing the shear rate by the square of the electric field and multiplying by the square of a reference value of electric field. In a log-log plot, two Newtonian plateaux are found to appear at low and high shear rates, connected by a shear-thinning region. We have applied the Leslie-Ericksen continuum theory, in which the director alignment angle is a function of the electric field and the flow field boundary conditions are neglected, to determine viscoelastic parameters and the dielectric anisotropy.
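A compact statement of the master-curve rescaling described above (E_ref, the chosen reference value of the electric field, is our notation, not the paper's):

    γ̇_scaled = γ̇ · (E_ref / E)²

so that, for each temperature, flow curves measured at different fields E collapse onto a single curve when the apparent viscosity is plotted against γ̇_scaled.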
Abstract:
This paper presents the results from an experimental study of the technical viability of two mixture designs for self-consolidating concrete (SCC) proposed by two Portuguese researchers in a previous work. The objective was to find the best method to provide the required characteristics of SCC in fresh and hardened states without having to experiment with a large number of mixtures. Five SCC mixtures, each with a volume of 25 L (6.61 gal.), were prepared using a forced mixer with a vertical axis for each of three compressive strength targets: 40, 55, and 70 MPa (5.80, 7.98, and 10.15 ksi). The mixtures' fresh state properties of fluidity, segregation resistance ability, and bleeding and blockage tendency, and their hardened state property of compressive strength were compared. For this study, the following tests were performed: slump-flow, V-funnel, L-box, box, and compressive strength. The results of this study made it possible to identify the most influential factors in the design of the SCC mixtures.
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. But sometimes these fluctuations can become determinant, especially when unforeseen large drops in asset prices are observed that could result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. Thus it is crucial to properly model the distribution tails so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of three financial index returns, DJI, FTSE 100 and NIKKEI 225, representing three important financial areas in the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normal or Student's t-distributed innovations when doing out-of-sample estimation (for in-sample estimation, this holds for the right tail of the distribution of returns).
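As a rough sketch of the McNeil and Frey (2000) two-step procedure referred to above, and under our own assumptions (Python with the arch and scipy packages; the return series is synthetic placeholder data, not the indices used in the paper; the AR(1)-GARCH(1,1) specification and the 90% tail threshold are illustrative choices only):

```python
# Sketch of conditional EVT: AR-GARCH filter + generalized Pareto (GPD) tail fit.
import numpy as np
import pandas as pd
from arch import arch_model
from scipy import stats

# Placeholder return series (synthetic, for illustration only).
rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=4, size=3000))

# Step 1: AR(1)-GARCH(1,1) filter; standardized residuals should be roughly i.i.d.
am = arch_model(returns, mean='AR', lags=1, vol='GARCH', p=1, q=1, dist='normal')
res = am.fit(disp='off')
z = (res.resid / res.conditional_volatility).dropna()
losses = -z                      # work with losses so the left tail of returns becomes the right tail

# Step 2: fit a GPD to the exceedances over a high threshold.
u = np.quantile(losses, 0.90)
excess = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(excess, floc=0.0)

# EVT tail quantile of the standardized losses at level p (here p = 0.99).
p, n, n_u = 0.99, losses.size, excess.size
z_q = u + (beta / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1.0)

# One-step-ahead conditional 99% VaR, expressed as a loss: -mu_{t+1} + sigma_{t+1} * z_q.
f = res.forecast(horizon=1)
mu1 = f.mean.values[-1, 0]
sigma1 = np.sqrt(f.variance.values[-1, 0])
VaR_99 = -mu1 + sigma1 * z_q
```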
Abstract:
The development of children's school achievements in mathematics is one of the most important aims of education in Poland. The results of research monitoring school achievements in maths are not optimistic. We observe low levels of children's understanding of the merits of maths, of self-developed strategies for solving problems, and of the practical use of maths skills. This article frames the discussion of this problem in its psychological and didactic context and analyses the causes as they relate to school practice in teaching maths.
Abstract:
Proceedings of International Conference - SPIE 7477, Image and Signal Processing for Remote Sensing XV - 28 September 2009
Abstract:
We present a study of the effects of nanoconfinement on a system of hard Gaussian overlap particles interacting with planar substrates through the hard-needle-wall potential, extending earlier work by two of us [D. J. Cleaver and P. I. C. Teixeira, Chem. Phys. Lett. 338, 1 (2001)]. Here, we consider the case of hybrid films, where one of the substrates induces strongly homeotropic anchoring, while the other favors either weakly homeotropic or planar anchoring. These systems are investigated using both Monte Carlo simulation and density-functional theory, the latter implemented at the level of Onsager's second-virial approximation with Parsons-Lee rescaling. The orientational structure is found to change either continuously or discontinuously depending on substrate separation, in agreement with earlier predictions by others. The theory is seen to perform well in spite of its simplicity, predicting the positional and orientational structure seen in simulations even for small particle elongations.
Abstract:
We show that a self-generated set of combinatorial games, S, may not be hereditarily closed, but strong self-generation and hereditary closure are equivalent in the universe of short games. In [13], the question "Is there a set which will give a non-distributive but modular lattice?" appears. A useful necessary condition for the existence of a finite non-distributive modular L(S) is proved. We show the existence of S such that L(S) is modular and not distributive, exhibiting the first known example. Moreover, we prove a Representation Theorem with Games that allows the generation of all finite lattices in a game context. Finally, a computational tool for drawing lattices of games is presented. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
Refractive indices, n_D, and densities, ρ, at 298.15 K were measured for the ternary mixture methanol (MeOH)/propan-1-ol (1-PrOH)/acetonitrile (MeCN) for a total of 22 mole fractions, along with 18 mole fractions of each of the corresponding binary mixtures, methanol/propan-1-ol, propan-1-ol/acetonitrile and methanol/acetonitrile. The variation of excess refractive indices and excess molar volumes with composition was modeled by the Redlich-Kister polynomial function in the case of binary mixtures and by the Cibulka equation for the ternary mixture. A thermodynamic approach to excess refractive indices, recently proposed by other authors, was applied for the first time to ternary liquid mixtures. Structural effects were identified and interpreted both in the binary and ternary systems. A complex relationship between excess refractive indices and excess molar volumes was identified, revealing all four possible sign combinations between these two properties. Structuring of the mixtures was also discussed on the basis of partial molar volumes of the binary and ternary mixtures.
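As an illustration of the kind of Redlich-Kister fit referred to above for the binary systems (a minimal sketch in our own notation: `x1` and `YE` stand for measured mole fractions and the corresponding excess property, and the expansion order is an arbitrary choice, not one taken from the paper):

```python
# Least-squares fit of the Redlich-Kister expansion
#   Y^E = x1 * x2 * sum_k A_k * (x1 - x2)^k,  with x2 = 1 - x1,
# to a measured binary excess property (e.g. excess molar volume or excess refractive index).
import numpy as np

def fit_redlich_kister(x1, YE, order=3):
    """Return the coefficients A_0..A_order fitted to (x1, YE) data."""
    x1, YE = np.asarray(x1, float), np.asarray(YE, float)
    # Design matrix: column k is x1 * x2 * (x1 - x2)^k, with (x1 - x2) = (2*x1 - 1).
    M = np.column_stack([x1 * (1 - x1) * (2 * x1 - 1) ** k for k in range(order + 1)])
    A, *_ = np.linalg.lstsq(M, YE, rcond=None)
    return A

def redlich_kister(x1, A):
    """Evaluate the fitted expansion at mole fraction(s) x1."""
    x1 = np.asarray(x1, float)
    return x1 * (1 - x1) * sum(a * (2 * x1 - 1) ** k for k, a in enumerate(A))
```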
Abstract:
In this work we study the electro-rheological behaviour of a series of four liquid crystal (LC) cyanobiphenyls with the number of carbon atoms in the alkyl group ranging from five to eight (5CB–8CB). We present the flow curves for different temperatures and under the influence of an external electric field, ranging from 0 to 3 kV/mm, and the viscosity as a function of the temperature, for the same values of electric field, obtained for different shear rates. Theoretical interpretation of the observed behaviours is proposed in the framework of the continuum theory of Leslie–Ericksen for low molecular weight nematic LCs. In our analysis, the director alignment angle is only a function of the ratio between the shear rate and the square of the electric field, and boundary conditions are neglected. By fitting the theoretical model to the experimental data, we are able to determine some viscosity coefficients and the dielectric anisotropy as a function of temperature. To interpret the behaviour of the flow curves near the nematic–isotropic transitions, we apply the continuum theory of Olmsted–Goldbart, which extends the theory of Leslie–Ericksen to the case where the degree of alignment of the LC molecules can also vary.
Abstract:
We investigate the behavior of a patchy particle model close to a hard wall via Monte Carlo simulation and density functional theory (DFT). Two DFT approaches, based on the homogeneous and inhomogeneous versions of Wertheim's first-order perturbation theory for the association free energy, are used. We evaluate, by simulation and theory, the equilibrium bulk phase diagram of the fluid and analyze the surface properties for two isochores, one of which is close to the liquid side of the gas-liquid coexistence curve. We find that the density profile near the wall crosses over from a typical high-temperature adsorption profile to a low-temperature desorption one, for the isochore close to coexistence. We relate this behavior to the properties of the bulk network liquid and find that the theoretical descriptions are reasonably accurate in this regime. At very low temperatures, however, an almost fully bonded network is formed, and the simulations reveal a second adsorption regime which is not captured by DFT. We trace this failure to the neglect of orientational correlations of the particles, which are found to exhibit surface-induced orientational order in this regime.
Abstract:
We discuss theoretical and phenomenological aspects of two-Higgs-doublet extensions of the Standard Model. In general, these extensions have scalar mediated flavour changing neutral currents which are strongly constrained by experiment. Various strategies are discussed to control these flavour changing scalar currents, and their phenomenological consequences are analysed. In particular, scenarios with natural flavour conservation are investigated, including the so-called type I and type II models as well as lepton-specific and inert models. Type III models are then discussed, where scalar flavour changing neutral currents are present at tree level, but are suppressed by either a specific ansatz for the Yukawa couplings or by the introduction of family symmetries leading to a natural suppression mechanism. We also consider the phenomenology of charged scalars in these models. Next we turn to the role of symmetries in the scalar sector. We discuss the six symmetry-constrained scalar potentials and their extension into the fermion sector. The vacuum structure of the scalar potential is analysed, including a study of the vacuum stability conditions on the potential; the renormalization-group improvement of these conditions is also presented. The stability of the tree level minimum of the scalar potential in connection with electric charge conservation and its behaviour under CP is analysed. The question of CP violation is addressed in detail, including the cases of explicit CP violation and spontaneous CP violation. We present a detailed study of weak basis invariants which are odd under CP. These invariants allow for the possibility of studying the CP properties of any two-Higgs-doublet model in an arbitrary Higgs basis. A careful study of spontaneous CP violation is presented, including an analysis of the conditions which have to be satisfied in order for a vacuum to violate CP. We present minimal models of CP violation where the vacuum phase is sufficient to generate a complex CKM matrix, which is at present a requirement for any realistic model of spontaneous CP violation.
Abstract:
We generalize Wertheim's first-order perturbation theory to account for the effect on the thermodynamics of the self-assembly of rings characterized by two energy scales. The theory is applied to a lattice model of patchy particles and tested against Monte Carlo simulations on an fcc lattice. These particles have 2 patches of type A and 10 patches of type B, which may form AA or AB bonds that decrease the energy by ε_AA and by ε_AB = r ε_AA, respectively. The angle θ between the 2 A-patches on each particle is fixed at 60°, 90° or 120°. For values of r below 1/2 and above a threshold r_th(θ), the models exhibit a phase diagram with two critical points. Both theory and simulation predict that r_th increases when θ decreases. We show that the mechanism that prevents phase separation for models with decreasing values of θ is related to the formation of loops containing AB bonds. Moreover, we show that by including the free energy of B-rings (loops containing one AB bond), the theory describes the trends observed in the simulation results, but that for the lowest values of θ the theoretical description deteriorates due to the increasing number of loops containing more than one AB bond.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
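As a minimal illustration of the constrained least-squares unmixing mentioned above (a sketch under our own assumptions, not code from the chapter: `M` is a bands-by-endmembers matrix of known signatures, `y` a single pixel spectrum, and the weighted sum-to-one row is one common way to impose full additivity approximately):

```python
# Sketch: linear unmixing of one pixel with known endmember signatures.
# Non-negativity is enforced exactly; the sum-to-one constraint is imposed
# approximately by appending a heavily weighted row of ones.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(M, y, delta=1e3):
    """M: (bands, p) endmember signatures; y: (bands,) pixel spectrum.
    Returns the estimated abundance vector of length p."""
    bands, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])   # append sum-to-one row
    y_aug = np.append(y, delta)                        # target sum = 1
    a, _ = nnls(M_aug, y_aug)                          # non-negative least squares
    return a

# Toy usage with random placeholders (not data from the chapter):
rng = np.random.default_rng(0)
M = rng.random((50, 3))                    # 50 bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.01 * rng.standard_normal(50)
print(unmix_pixel(M, y))                   # approximately [0.6, 0.3, 0.1]
```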
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
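To make concrete why full additivity induces dependence among the abundance "sources" (the issue raised above for ICA and IFA) and how Dirichlet-distributed abundances capture it, here is a toy generator for the linear observation model; all sizes and names are placeholders of ours, not the chapter's:

```python
# Toy generator for the linear mixing model Y = A M^T + noise with Dirichlet abundances.
# Because the components of a Dirichlet vector sum to one, the abundance "sources"
# are mutually dependent, which is what limits plain ICA/IFA on such data.
import numpy as np

rng = np.random.default_rng(1)
bands, p, n_pixels = 50, 3, 1000
M = rng.random((bands, p))                          # endmember signatures (placeholder)
A = rng.dirichlet(alpha=np.ones(p), size=n_pixels)  # abundances: non-negative, rows sum to 1
noise = 0.01 * rng.standard_normal((n_pixels, bands))
Y = A @ M.T + noise                                 # observed spectra, one pixel per row

# Empirical check of the dependence induced by full additivity:
print(np.allclose(A.sum(axis=1), 1.0))              # True
print(np.corrcoef(A, rowvar=False))                 # off-diagonal correlations are negative
```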