887 results for Nonrandom two-liquid model
Abstract:
A volume-averaged two-phase model addressing the main transport phenomena associated with hot tearing in an isotropic mushy zone during solidification of metallic alloys has recently been presented elsewhere, along with a new hot tearing criterion addressing both inadequate melt feeding and excessive deformation at relatively high solid fractions. The viscoplastic deformation in the mushy zone is addressed by a model in which the coherent mush is treated as a porous medium saturated with liquid. The thermal straining of the mush is accounted for by a recently developed model that takes into account that there is no thermal strain in the mushy zone at low solid fractions, because the dendrites are then free to move in the liquid, and that the thermal strain in the mushy zone tends toward that of the fully solidified material as the solid fraction tends toward one. In the present work, the authors determined how variations in the parameters of the constitutive equation for thermal strain influence the hot tearing susceptibility calculated by the criterion. It turns out that varying the parameters in this equation has a significant effect on both the liquid pressure drop and the viscoplastic strain, which are key quantities in the hot tearing criterion. However, changing these parameters produces changes in the viscoplastic strain and the liquid pressure drop that have opposite effects on the hot tearing susceptibility, so the net effect on the hot tearing susceptibility is small.
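The constitutive behaviour described above (no thermal strain at low solid fraction, full thermal strain as the solid fraction tends to one) can be illustrated with a minimal sketch. The ramp function, the coherency fraction g_coh and the linear interpolation below are assumptions chosen for illustration, not the authors' actual constitutive equation.

```python
import numpy as np

def thermal_strain_rate(alpha, dT_dt, g_s, g_coh=0.5):
    """Illustrative thermal strain rate in the mushy zone.

    alpha  : linear thermal expansion coefficient [1/K]
    dT_dt  : cooling rate [K/s]
    g_s    : solid fraction (0..1)
    g_coh  : assumed coherency solid fraction below which dendrites
             float freely and transmit no thermal strain
    """
    # Ramp from 0 at coherency to 1 in the fully solid material
    # (a hypothetical interpolation, not the authors' constitutive law).
    f = np.clip((g_s - g_coh) / (1.0 - g_coh), 0.0, 1.0)
    return f * alpha * dT_dt

# Example: strain rate at three solid fractions, with alpha typical of Al alloys
for gs in (0.3, 0.8, 1.0):
    print(gs, thermal_strain_rate(alpha=2.3e-5, dT_dt=-1.0, g_s=gs))
```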
Abstract:
The most liquid stocks in the IBOVESPA index reflect the behaviour of stocks in general, as well as the relationship between macroeconomic variables and their behaviour, and they are among the most heavily traded in the Brazilian capital market. It can thus be argued that factors affecting the most liquid companies are reflected in the behaviour of macroeconomic variables, and that the converse also holds: fluctuations in macroeconomic factors such as the IPCA, GDP, the SELIC rate and the exchange rate also affect the most liquid stocks. This study analyses the relationship between macroeconomic variables and the behaviour of the most liquid stocks in the IBOVESPA index, corroborating studies that seek to understand the influence of macroeconomic factors on stock prices and contributing empirically to the formation of investment portfolios. The study covers the period from 2008 to 2014. The results indicate that portfolios intended to protect invested capital should contain assets that are negatively correlated with the variables studied, which makes it possible to compose a portfolio with reduced risk.
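The closing recommendation, that portfolios built to protect invested capital should contain assets negatively correlated with the variables studied, follows from the standard two-asset portfolio variance formula. The sketch below, with purely illustrative weights and volatilities, shows how negative correlation reduces portfolio risk.

```python
import math

def portfolio_std(w1, s1, s2, rho):
    """Standard deviation of a two-asset portfolio with weights w1 and 1 - w1."""
    w2 = 1.0 - w1
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# Same volatilities, different correlations: negative correlation cuts risk.
for rho in (0.8, 0.0, -0.8):
    print(rho, round(portfolio_std(0.5, 0.25, 0.25, rho), 4))
```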
Abstract:
How do signals from the two eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by: (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), and the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of the response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision.
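The decision rules described here (discrimination limited by the local gradient of the internal contrast-response function, matching by equating response magnitudes) can be sketched generically for any monotonic response function. Everything below, including the power-law response, the constants p, z and k, and the crude 'half-contrast' stand-in for a monocular response, is a placeholder for illustration and not the paper's fitted two-stage model.

```python
from scipy.optimize import brentq

def response(c, p=2.4, z=5.0):
    """Placeholder accelerating-then-compressive contrast-response function."""
    return c ** p / (z + c ** p)

def discrimination_threshold(c_base, k=0.01):
    """Contrast increment dc such that R(c_base + dc) - R(c_base) = k,
    i.e. performance limited by the local gradient of R."""
    return brentq(lambda dc: response(c_base + dc) - response(c_base) - k, 1e-9, 100.0)

def matched_monocular_contrast(c_binocular_standard):
    """Monocular test contrast whose response magnitude equals that of a
    binocular standard. Crude stand-in: the monocular response is taken as
    the response to half the contrast (illustration only)."""
    target = response(c_binocular_standard)
    return brentq(lambda c: response(0.5 * c) - target, 1e-9, 300.0)

# Dipper-like behaviour of discrimination thresholds, and one monocular match
for c in (0.0, 0.5, 2.0, 8.0):
    print(c, round(discrimination_threshold(c), 3))
print(matched_monocular_contrast(8.0))  # ~16: monocular test needs more contrast
```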
Abstract:
Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (e.g. a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle deg-1 sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984 Vision Research 24 385-394; Meese et al, 2004 Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2). Only the two-stage model predicted the observed pattern of slope variation, so providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]
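A minimal numerical sketch of the two-stage architecture described, with nearly linear monocular transduction and interocular divisive suppression at stage 1, followed by binocular summation and a steeply accelerating second gain-control stage, is given below. The exponents and constants are placeholders chosen for illustration, not the fitted parameter values.

```python
def two_stage_response(c_left, c_right, m=1.3, s=1.0, p=8.0, q=6.5, z=0.01):
    """Illustrative two-stage binocular gain-control model (placeholder constants).

    Stage 1: nearly linear transduction (exponent m ~ 1.3) divided by a
             suppressive pool driven by both eyes.
    Stage 2: binocular summation followed by steeply accelerating
             transduction and divisive gain control.
    """
    stage1_left = c_left ** m / (s + c_left + c_right)
    stage1_right = c_right ** m / (s + c_left + c_right)
    binocular_sum = stage1_left + stage1_right
    return binocular_sum ** p / (z + binocular_sum ** q)

# Equal contrast in both eyes gives a larger response than the same
# contrast shown to one eye alone (binocular summation).
print(two_stage_response(1.0, 1.0), two_stage_response(1.0, 0.0))
```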
Abstract:
Background: To determine the pharmacokinetics (PK) of a new i.v. formulation of paracetamol (Perfalgan) in children ≤15 yr of age. Methods: After obtaining written informed consent, children under 16 yr of age were recruited to this study. Blood samples were obtained at 0, 15, 30 min, 1, 2, 4, 6, and 8 h after administration of a weight-dependent dose of i.v. paracetamol. Paracetamol concentration was measured using a validated high-performance liquid chromatography assay with ultraviolet detection, with a lower limit of quantification (LLOQ) of 900 pg on column and an intra-day coefficient of variation of 14.3% at the LLOQ. Population PK analysis was performed by non-linear mixed-effect modelling using NONMEM. Results: One hundred and fifty-nine blood samples from 33 children aged 1.8–15 yr, weighing 13.7–56 kg, were analysed. Data were best described by a two-compartment model. Only body weight as a covariate significantly improved the goodness of fit of the model. The final population models for paracetamol clearance (CL), V1 (central volume of distribution), Q (inter-compartmental clearance), and V2 (peripheral volume of distribution) were 16.51 × (WT/70)^0.75, 28.4 × (WT/70), 11.32 × (WT/70)^0.75, and 13.26 × (WT/70), respectively (CL and Q in litres per hour, V1 and V2 in litres, WT in kilograms). Conclusions: In children aged 1.8–15 yr, the PK parameters for i.v. paracetamol were not influenced directly by age but by total body weight; with allometric size scaling, weight significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
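The reported population models are allometric (power-of-weight) scalings around a 70 kg reference. A minimal sketch applying them to an individual child's weight is given below; it illustrates the scaling only and is not dosing guidance.

```python
def paracetamol_pk_parameters(weight_kg):
    """Scale the reported population-typical values to a given body weight.

    CL, Q in L/h; V1, V2 in L; exponents as in the abstract
    (0.75 for clearances, 1 for volumes), 70 kg reference weight.
    """
    cl = 16.51 * (weight_kg / 70) ** 0.75   # clearance
    v1 = 28.4 * (weight_kg / 70)            # central volume of distribution
    q = 11.32 * (weight_kg / 70) ** 0.75    # inter-compartmental clearance
    v2 = 13.26 * (weight_kg / 70)           # peripheral volume of distribution
    return cl, v1, q, v2

# Example: a 20 kg child (illustration of the scaling only)
print(paracetamol_pk_parameters(20.0))
```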
Abstract:
Objective: To describe the effect of age and body size on the enantiomer-selective pharmacokinetics (PK) of intravenous ketorolac in children, using a microanalytical assay. Methods: Blood samples were obtained at 0, 15 and 30 min and at 1, 2, 4, 6, 8 and 12 h after a weight-dependent dose of ketorolac. Enantiomer concentration was measured using a liquid chromatography tandem mass spectrometry method. Non-linear mixed-effect modelling was used to assess PK parameters. Key findings: Data from 11 children (1.7–15.6 years, weight 10.7–67.4 kg) were best described by a two-compartment model for R(+), S(−) and racemic ketorolac. Only weight (WT) significantly improved the goodness of fit. The final population models were CL = 1.5 × (WT/46)^0.75, V1 = 8.2 × (WT/46), Q = 3.4 × (WT/46)^0.75 and V2 = 7.9 × (WT/46) for R(+); CL = 2.98 × (WT/46), V1 = 13.2 × (WT/46), Q = 2.8 × (WT/46)^0.75 and V2 = 51.5 × (WT/46) for S(−); and CL = 1.1 × (WT/46)^0.75, V1 = 4.9 × (WT/46), Q = 1.7 × (WT/46)^0.75 and V2 = 6.3 × (WT/46) for racemic ketorolac. Conclusions: Only body weight influenced the PK parameters for R(+) and S(−) ketorolac. Allometric size scaling significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
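As above, the enantiomer-specific parameters scale allometrically, here around a 46 kg reference. The sketch below compares the R(+), S(−) and racemic clearances at a given weight, using the values quoted in the abstract; note that the S(−) clearance is reported with a weight exponent of 1.

```python
def ketorolac_clearances(weight_kg):
    """Population-typical clearances for R(+), S(-) and racemic ketorolac,
    scaled to body weight around the 46 kg reference (values from the abstract;
    units as reported in the original study)."""
    cl_r = 1.5 * (weight_kg / 46) ** 0.75
    cl_s = 2.98 * (weight_kg / 46)          # reported with a weight exponent of 1
    cl_racemic = 1.1 * (weight_kg / 46) ** 0.75
    return cl_r, cl_s, cl_racemic

# Example: a 20 kg child (illustration of the scaling only)
print(ketorolac_clearances(20.0))
```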
Abstract:
We are in an era of unprecedented data volumes generated from observations and model simulations. This is particularly true of satellite Earth Observations (EO) and global-scale oceanographic models, and it presents an opportunity to evaluate large-scale oceanographic model outputs against EO data. Previous work on model skill evaluation has produced a plethora of metrics. This paper defines two new model skill evaluation metrics based on the theory of universal multifractals, whose purpose is to measure the structural similarity between the model predictions and the EO data. The two metrics have two advantages over standard techniques: (a) they are scale-free, and (b) they carry important information about how the model represents different oceanographic drivers. The two metrics are then used to evaluate the performance of the FVCOM model in the shelf seas around the south-west coast of the UK.
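The abstract does not give the metric definitions. For context, universal multifractal analysis characterises a field by the scaling of its statistical moments with the scale ratio λ, governed in the universal form by two parameters, the codimension of the mean C1 and the multifractality index α (standard relation, assumed notation):

```latex
\left\langle \varepsilon_\lambda^{\,q} \right\rangle \;\sim\; \lambda^{K(q)},
\qquad
K(q) \;=\; \frac{C_1}{\alpha - 1}\left(q^{\alpha} - q\right), \qquad \alpha \neq 1,
```

where λ is the scale ratio and ε_λ the field averaged at scale λ. Comparing the moment-scaling behaviour estimated from the model fields with that estimated from the EO imagery yields a scale-free measure of structural similarity, which is presumably the spirit of the two proposed metrics.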
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The understanding and proper evaluation of flow and mixing behaviour at the microscale is therefore a very important issue. In this study, the diffusion behaviour of two reacting solutions of HCl and NaOH was directly observed in a glass/polydimethylsiloxane microfluidic device using adaptive coatings, based on the conductive polymer polyaniline, that are covalently attached to the microchannel walls. The two liquid streams were combined at the junction of a Y-shaped microchannel and allowed to diffuse into each other and react. The results showed excellent correlation between optical observation of the diffusion process and the numerical results. A numerical model based on finite volume method (FVM) discretisation of the steady Navier-Stokes (fluid flow) equations and mass transport equations without reactions was used to calculate the flow variables at discrete points of the finite volume mesh. The high correlation between theory and experimental data indicates the potential of such coatings for monitoring diffusion processes and mixing behaviour inside microfluidic channels in a dye-free environment.
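The observed behaviour in the Y-channel is governed by transverse diffusion between the two co-flowing streams. A back-of-the-envelope estimate of the interdiffusion width as a function of downstream distance is sketched below; it uses the standard scaling with illustrative numbers and is not the paper's FVM model.

```python
import math

def interdiffusion_width(D, x, u):
    """Approximate width (m) of the interdiffusion zone a distance x (m)
    downstream of the Y-junction, for mean flow speed u (m/s) and diffusion
    coefficient D (m^2/s): residence time t = x/u, width ~ sqrt(2*D*t)."""
    return math.sqrt(2.0 * D * x / u)

# Illustrative numbers: small-ion diffusivity ~2e-9 m^2/s, 1 mm downstream, 1 mm/s flow
print(interdiffusion_width(D=2e-9, x=1e-3, u=1e-3))  # ~6e-5 m, i.e. tens of microns
```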
Abstract:
In this work, the liquid-liquid and solid-liquid phase behaviour of ten aqueous pseudo-binary and three binary systems containing polyethylene glycol (PEG) 2050, polyethylene glycol 35000, aniline, N,N-dimethylaniline and water was studied in the temperature range 298.15-350.15 K at an ambient pressure of 0.1 MPa. The resulting temperature-composition phase diagrams showed that PEG 2050 was the only functional co-solvent for aniline in water, while PEG 35000 even showed a clear anti-solvent effect in the aqueous N,N-dimethylaniline system. The experimental solid-liquid equilibria (SLE) data were correlated with the non-random two-liquid (NRTL) model, and the correlation results are in accordance with the experimental results.
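For reference, the NRTL model used for the correlation expresses the activity coefficients of a binary mixture through two interaction parameters τ12 and τ21 and a non-randomness factor α. A minimal implementation with placeholder parameter values (not the fitted values from this work) is sketched below.

```python
import math

def nrtl_binary_gamma(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) of a binary mixture from the
    non-random two-liquid (NRTL) model. tau_ij are dimensionless interaction
    parameters; alpha is the non-randomness factor (commonly ~0.2-0.47)."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Placeholder parameters for illustration only (not the fitted PEG/aniline/water values)
print(nrtl_binary_gamma(x1=0.3, tau12=1.2, tau21=0.8))
```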
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use user profiles effectively is one of the most challenging tasks in developing an IF system. With document selection criteria better defined on the basis of the users' needs, filtering large streams of information can be more efficient and effective. To learn user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to information overload. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches also have to deal with low-frequency patterns. The measures used by data mining techniques (for example, "support" and "confidence") to learn the profile have turned out to be unsuitable for filtering and can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimise information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch and is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents are assigned higher scores. Because relatively few documents are left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms other IF systems, such as the traditional Rocchio IF model, state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
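The two-stage architecture described can be sketched as a simple pipeline: a topic-filtering stage that discards documents scoring below a threshold learned from the user profile, followed by pattern-based re-ranking of the survivors. The scoring functions below are generic placeholders, not the thesis's rough-set or pattern-taxonomy formulations.

```python
def two_stage_filter(documents, topic_score, pattern_score, threshold):
    """Generic two-stage information filtering pipeline (placeholder scoring).

    Stage 1 (recall-oriented): drop documents whose topic score falls below
    the threshold learned from the user profile.
    Stage 2 (precision-oriented): rank the remaining documents by a
    pattern-based relevance score.
    """
    survivors = [d for d in documents if topic_score(d) >= threshold]
    return sorted(survivors, key=pattern_score, reverse=True)

# Toy usage with word-overlap stand-ins for the two scoring functions
profile_terms = {"filtering", "profile", "pattern"}
docs = ["pattern mining for user profile filtering",
        "football results last night",
        "rough set model for information filtering"]
topic = lambda d: len(profile_terms & set(d.split())) / len(profile_terms)
pattern = lambda d: d.count("pattern")
print(two_stage_filter(docs, topic, pattern, threshold=1 / 3))
```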
Abstract:
People have adopted various media formats, such as graphics, photos and text (nicknames), to represent themselves when communicating with others online. An avatar is a visual form representing the user and the identity they wish to present. It can vary from a two-dimensional to a three-dimensional model and can be visualised in various visual forms and styles. In general, two-dimensional images, including animated images, are used in online forum communities and live chat software, while three-dimensional models are often used in computer games. Avatar design is often regarded as a graphic designer's visual image creation or a user's output based on personal preference, which often results in avatar designs that give no consideration to practical visual design or to users' interactive communication experience. This paper reviews various types and styles of avatar and discusses avatar design from visual design and online user experience perspectives. It aims to raise a design discourse on avatar design and to build up a well-articulated set of design principles for effective avatar design.
Abstract:
The melting of spherical nanoparticles is considered from the perspective of heat flow in a pure material and as a moving boundary (Stefan) problem. The dependence of the melting temperature on both the size of the particle and the interfacial tension is described by the Gibbs-Thomson effect, and the resulting two-phase model is solved numerically using a front-fixing method. Results show that interfacial tension increases the speed of the melting process, and furthermore, the temperature distribution within the solid core of the particle exhibits behaviour that is qualitatively different to that predicted by the classical models without interfacial tension.
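For reference, the size dependence enters through the Gibbs-Thomson condition at the solid-liquid interface, coupled to the Stefan energy balance at the moving boundary r = R(t). The forms below are the standard ones in assumed notation (up to sign conventions); the paper's non-dimensionalisation may differ:

```latex
T\big|_{r=R(t)} \;=\; T_m\!\left(1 - \frac{2\,\sigma_{sl}}{\rho_s\, L_f\, R}\right),
\qquad
\rho_s\, L_f\, \frac{\mathrm{d}R}{\mathrm{d}t}
  \;=\; k_s \left.\frac{\partial T_s}{\partial r}\right|_{r=R^-}
  \;-\; k_l \left.\frac{\partial T_l}{\partial r}\right|_{r=R^+},
```

where σ_sl is the solid-liquid interfacial tension, L_f the latent heat of fusion, ρ_s the solid density, and k_s, k_l the thermal conductivities of the solid core and the surrounding melt.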
Abstract:
Lamb wave propagation in composite materials has been studied extensively since it was first observed in 1982. In this paper, we show a procedure for simulating the propagation of Lamb waves in composite laminates using a two-dimensional model in ANSYS. This is done by simulating the Lamb waves propagating along the plane of the structure in the form of a time-dependent force excitation. An 8-layered carbon fibre reinforced plastic (CFRP) laminate is modelled as a transversely isotropic and dissipative medium, and the effect of flaws is analysed with respect to defects induced between various layers of the composite laminate. This effort is the basis for the future development of a 3D model for similar applications.
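The time-dependent force excitation used in such simulations is typically a Hanning-windowed tone burst. A short sketch generating one is given below; the centre frequency, cycle count and sampling rate are illustrative and not taken from the paper.

```python
import numpy as np

def hanning_tone_burst(f_c=100e3, n_cycles=5, fs=10e6):
    """Hanning-windowed tone burst commonly used as the excitation signal
    in Lamb-wave FE simulations (parameters here are illustrative)."""
    duration = n_cycles / f_c
    t = np.arange(0.0, duration, 1.0 / fs)
    window = 0.5 * (1.0 - np.cos(2.0 * np.pi * f_c * t / n_cycles))
    return t, window * np.sin(2.0 * np.pi * f_c * t)

t, force = hanning_tone_burst()
# The sampled (t, force) pairs can be supplied to the FE solver as a
# transient nodal force table at the excitation location.
print(len(t), force.max())
```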