470 results for Bilinear coupling
Abstract:
The present thesis focuses on the development of a thorough mathematical modelling and computational solution framework for the numerical simulation of journal and sliding bearing systems operating under a wide range of lubrication regimes (mixed, elastohydrodynamic and full-film lubrication) and working conditions (static, quasi-static and transient). The fluid flow effects have been considered in terms of the isothermal Generalized Equation of the Mechanics of Viscous Thin Films (Reynolds equation), along with the mass-conserving p-θ Elrod-Adams cavitation model, which enforces the so-called JFO (Jakobsson-Floberg-Olsson) complementarity boundary conditions for fluid film rupture. The variation of the lubricant rheological properties due to the viscosity-pressure (Barus and Roelands equations), shear-thinning (Eyring and Carreau-Yasuda equations) and density-pressure (Dowson-Higginson equation) relationships has also been taken into account in the overall modelling. Generic models have been derived for the aforementioned bearing components in order to enable their application in general multibody dynamic systems (MDS), including the effects of angular misalignments, superficial geometric defects (form/waviness deviations, EHL deformations, etc.) and axial motion. The bearing flexibility (conformal EHL) has been incorporated by means of FEM model reduction (condensation) techniques. The macroscopic influence of mixed-lubrication phenomena has been included in the modelling through the stochastic Patir and Cheng average flow model and the Greenwood-Williamson/Greenwood-Tripp formulations for rough contacts. Furthermore, a deterministic mixed-lubrication model with inter-asperity cavitation has also been proposed for full-scale simulations at the microscopic (roughness) level. On this extensive modelling foundation, three significant contributions have been accomplished. Firstly, a general numerical solution for the Reynolds lubrication equation with the mass-conserving p-θ cavitation model has been developed, based on the hybrid-type Element-Based Finite Volume Method (EbFVM). This new solution scheme allows lubrication problems with complex geometries to be discretized by unstructured grids. The numerical method was validated against several example cases from the literature, and further used in numerical experiments to explore its flexibility in coping with irregular meshes so as to reduce the number of nodes required in the solution of textured sliding bearings. Secondly, novel robust partitioned techniques commonly adopted for solving fluid-structure interaction problems, namely the Fixed Point Gauss-Seidel Method (PGMF), the Point Gauss-Seidel Method with Aitken Acceleration (PGMA) and the Interface Quasi-Newton Method with Inverse Jacobian from Least-Squares approximation (IQN-ILS), have been introduced in the context of tribological simulations, particularly for the coupled calculation of dynamic conformal EHL contacts. The performance of these partitioned methods was evaluated in simulations of dynamically loaded connecting-rod big-end bearings of both heavy-duty and high-speed engines. Finally, the proposed deterministic mixed-lubrication model was applied to investigate the influence of cylinder liner wear after a 100 h dynamometer engine test on the hydrodynamic pressure generation and friction of Twin-Land Oil Control Rings.
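The partitioned schemes named above exchange interface data between separate fluid and structure solvers until the interface state converges. As a minimal sketch of one of them, the code below implements a fixed-point Gauss-Seidel iteration with Aitken dynamic under-relaxation (the flavour of scheme referred to as PGMA); the fluid_solve/structure_solve callables and the interface vector x0 are hypothetical stand-ins, not the thesis's solvers.

```python
import numpy as np

def aitken_coupled_solve(fluid_solve, structure_solve, x0,
                         omega0=0.5, tol=1e-8, max_iter=100):
    """Partitioned fixed-point (Gauss-Seidel) coupling with Aitken
    dynamic under-relaxation. fluid_solve(x) returns interface loads
    for a deformation x; structure_solve(f) returns the deformation
    for loads f. Both are assumed, illustrative callables."""
    x = np.asarray(x0, dtype=float)
    omega = omega0
    r_prev = None
    for k in range(max_iter):
        x_tilde = structure_solve(fluid_solve(x))  # one Gauss-Seidel sweep
        r = x_tilde - x                            # interface residual
        if np.linalg.norm(r) < tol:
            return x_tilde, k
        if r_prev is not None:                     # Aitken update of omega
            dr = r - r_prev
            omega = -omega * np.dot(r_prev, dr) / np.dot(dr, dr)
        x = x + omega * r                          # relaxed update
        r_prev = r
    raise RuntimeError("coupling iteration did not converge")
```

The Aitken update rescales the relaxation factor from the last two residuals, which typically reduces the number of coupling sweeps substantially compared with a fixed factor.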
Abstract:
Since the early days of 3D computer vision, it has been necessary to use techniques that reduce the data to make it tractable while preserving the important aspects of the scene. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this has become even more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data, in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of different downsampling techniques which are based on different principles. Concretely, five different downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, whereas the color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, and to outperform the results obtained when only a homogeneous sampling is used.
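To make the idea of feature-weighted sampling concrete, here is a minimal sketch of a normal-based downsampling heuristic in the spirit described above; the scoring rule, the random-neighbour shortcut and all parameter values are illustrative assumptions, not the paper's exact kernels.

```python
import numpy as np

def normal_based_downsample(points, normals, keep_ratio=0.1, rng=None):
    """Keep points whose normals differ most from their neighbours, so
    regions with strong geometric features stay denser (an assumed
    heuristic). points: (N, 3) coordinates; normals: (N, 3) unit
    normals, assumed precomputed."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(points)
    # Crude feature score: 1 - |cosine| between each normal and the mean
    # normal of a random neighbour subset (stand-in for a k-NN query).
    idx = rng.integers(0, n, size=(n, 8))
    neigh_mean = normals[idx].mean(axis=1)
    neigh_mean /= np.linalg.norm(neigh_mean, axis=1, keepdims=True) + 1e-12
    score = 1.0 - np.abs(np.sum(normals * neigh_mean, axis=1))
    # Sample without replacement, weighting by the feature score.
    prob = (score + 1e-6) / (score + 1e-6).sum()
    keep = rng.choice(n, size=int(keep_ratio * n), replace=False, p=prob)
    return points[keep], normals[keep]
```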
Abstract:
Doctoral thesis, Chemistry (Physical Chemistry), Universidade de Lisboa, Faculdade de Ciências, 2016
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is often difficult. In this thesis, methods for solving some of these queueing problems by use of digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
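As an illustration of the DSP flavour of such inversions, the sketch below recovers probability-mass coefficients from a probability generating function by sampling the transform on a circle and applying an inverse FFT; the function name, its parameters and the M/M/1 example are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def invert_pgf(Q, n_terms, radius=1.0, n_samples=1024):
    """Numerically invert a z-transform (PGF) Q(z) = sum_n p_n z^n by
    sampling on a circle of radius |z| = radius and applying the
    inverse FFT. A radius < 1 damps the aliasing error by radius**n_samples."""
    k = np.arange(n_samples)
    z = radius * np.exp(2j * np.pi * k / n_samples)  # circle samples
    samples = np.array([Q(zk) for zk in z])
    coeffs = np.fft.ifft(samples).real               # approx. p_n * radius**n
    return coeffs[:n_terms] / radius ** np.arange(n_terms)

# Example: M/M/1 queue-length PGF, Q(z) = (1 - rho) / (1 - rho * z),
# whose coefficients are the geometric probabilities (1 - rho) * rho**n.
rho = 0.7
p = invert_pgf(lambda z: (1 - rho) / (1 - rho * z), n_terms=5)
```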
Abstract:
Over the full visual field, contrast sensitivity is fairly well described by a linear decline in log sensitivity as a function of eccentricity (expressed in grating cycles). However, many psychophysical studies of spatial visual function concentrate on the central ±4.5 deg (or so) of the visual field. As the details of the variation in sensitivity have not been well documented in this region, we did so for small patches of target contrast at several spatial frequencies (0.7–4 c/deg), meridians (horizontal, vertical, and oblique), orientations (horizontal, vertical, and oblique), and eccentricities (0–18 cycles). To reduce the potential effects of stimulus uncertainty, circular markers surrounded the targets. Our analysis shows that the decline in binocular log sensitivity within the central visual field is bilinear: The initial decline is steep, whereas the later decline is shallow and much closer to the classical results. The bilinear decline was approximately symmetrical in the horizontal meridian and declined most steeply in the superior visual field. Further analyses showed our results to be scale-invariant and that this property could not be predicted from cone densities. We used the results from the cardinal meridians to radially interpolate an attenuation surface with the shape of a witch's hat that provided good predictions for the results from the oblique meridians. The witch's hat provides a convenient starting point from which to build models of contrast sensitivity, including those designed to investigate signal summation and neuronal convergence of the image contrast signal. Finally, we provide Matlab code for constructing the witch's hat.
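For readers without the paper's Matlab code to hand, a minimal sketch of a bilinear attenuation function of the kind described follows; the knee position and the two slopes are placeholder assumptions, not the fitted values.

```python
import numpy as np

def witchs_hat_attenuation(ecc_cycles, knee=8.0, steep=0.10, shallow=0.025):
    """Bilinear ("witch's hat") log-sensitivity decline with
    eccentricity expressed in grating cycles: a steep initial fall-off
    up to the knee, then a shallow tail. Returns attenuation in log10
    units relative to fixation. All parameter values are illustrative."""
    ecc = np.asarray(ecc_cycles, dtype=float)
    inner = steep * np.minimum(ecc, knee)           # steep initial fall-off
    outer = shallow * np.maximum(ecc - knee, 0.0)   # shallow tail
    return inner + outer

# Sensitivity relative to the fovea, e.g. at 4 and 16 cycles eccentricity:
rel_sens = 10.0 ** -witchs_hat_attenuation(np.array([4.0, 16.0]))
```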
Abstract:
The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
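To spell out the step from the fourth-root rule to the area slope (standard reasoning, not quoted from the thesis): with Minkowski summation of exponent 4 over $n$ equally sensitive locations,

$$ S_{\text{pooled}} \;=\; \Big(\sum_{i=1}^{n} s_i^{4}\Big)^{1/4} \;=\; n^{1/4}\, s, \qquad\text{so}\qquad c_{\text{thresh}} \;\propto\; n^{-1/4}, $$

i.e. a straight line of slope -1/4 on double-log axes.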
Abstract:
We deal with a class of elliptic eigenvalue problems (EVPs) on a rectangle Ω ⊂ R^2, with periodic or semi-periodic boundary conditions (BCs) on ∂Ω. First, for both types of EVPs, we pass to a proper variational formulation which is shown to fit into the general framework of abstract EVPs for symmetric, bounded, strongly coercive bilinear forms in Hilbert spaces, see, e.g., [13, §6.2]. Next, we consider finite element methods (FEMs) without and with numerical quadrature. The aim of the paper is to show that well-known error estimates, established for the finite element approximation of elliptic EVPs with classical BCs, hold for the present types of EVPs too. Some attention is also paid to the computational aspects of the resulting algebraic EVP. Finally, the analysis is illustrated by two non-trivial numerical examples, the exact eigenpairs of which can be determined.
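As a sketch of the abstract framework invoked here (standard notation assumed, not quoted from the paper): with $V$ the Hilbert space encoding the (semi-)periodic BCs on $\Omega$ and $a(\cdot,\cdot)$ the symmetric, bounded, strongly coercive bilinear form, the variational EVP reads

$$ \text{find } (\lambda, u) \in \mathbb{R} \times V,\; u \neq 0: \qquad a(u, v) = \lambda\,(u, v)_{L^2(\Omega)} \quad \forall v \in V, $$

and a conforming FEM with trial space $V_h \subset V$ leads to the algebraic generalized EVP

$$ A\,\mathbf{u}_h = \lambda_h\, M\,\mathbf{u}_h, $$

with stiffness matrix $A$ and mass matrix $M$ (the latter replaced by its quadrature approximation when numerical integration is used).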
Abstract:
* Work is partially supported by the Lithuanian State Science and Studies Foundation.
Abstract:
AMS subject classification: 90C31, 90A09, 49K15, 49L20.
Abstract:
Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al., 2012) and found it to be well-described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target and then sums over signal area. Modelling shows our results to be consistent with a wide-field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
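A minimal sketch of the noisy energy observer described here, assuming square-law transduction, independent additive noise at each location, and summation over the target area (the noise level is a placeholder):

```python
import numpy as np

def noisy_energy_response(contrasts, noise_sd=1.0, rng=None):
    """Local square-law transduction of the contrast at each target
    location, independent Gaussian noise added per location, then
    summation over the stimulus area. noise_sd is an illustrative
    assumption."""
    rng = np.random.default_rng() if rng is None else rng
    c = np.asarray(contrasts, dtype=float)
    local = c ** 2 + rng.normal(0.0, noise_sd, size=c.shape)
    return local.sum()
```

The -1/4 slope follows from this construction: for n stimulated locations at contrast c, the summed signal grows as n·c² while the noise standard deviation grows as √n, so d′ ∝ √n·c² and threshold contrast falls as n^(-1/4), matching the fourth-root summation reported above.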
Abstract:
2000 Mathematics Subject Classification: 15A69, 15A78.
Abstract:
Purpose: To evaluate the benefit of bilinear and linear fitting for characterizing the retinal vessel dilation response to flicker light stimulation, for the purpose of risk stratification in cardiovascular disease. Methods: Forty-five patients (15 with coronary artery disease (CAD), 15 with diabetes mellitus (DM) and 15 with CAD and DM) underwent contact tonometry, digital blood pressure measurement, fundus photography, retinal vessel oximetry, static retinal vessel analysis and continuous retinal diameter assessment using the retinal vessel analyser (with flicker light provocation). In addition, we measured blood glucose (HbA1c) and creatinine levels in DM patients. Results: With increased severity of cardiovascular disease, a more linear reaction profile of the retinal arteriolar diameter response to flicker light provocation is observed. Conclusion: Absolute values of vessel dilation provide only limited information on the state of the retinal arteriolar dilatory response to flicker light. The bilinear fitting approach takes into account the immediate response to flicker light provocation as well as the maintained dilatory capacity during prolonged stimulation. Individuals with cardiovascular disease, however, show a largely linear reaction profile, indicating an impairment of the initial rapid dilatory response usually observed in healthy individuals.
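A minimal sketch of such a bilinear (single-breakpoint piecewise-linear) fit, using a generic least-squares routine; the time course, parameter values and noise below are synthetic placeholders rather than retinal vessel analyser data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bilinear(t, t_break, d0, slope1, slope2):
    """Continuous piecewise-linear (bilinear) profile with a single
    breakpoint at t_break: the first segment captures the rapid initial
    dilation, the second the maintained response."""
    return np.where(
        t <= t_break,
        d0 + slope1 * t,
        d0 + slope1 * t_break + slope2 * (t - t_break),
    )

# t: time during flicker (s); d: vessel diameter change (%) -- both
# synthetic stand-ins for a measured dilation trace.
t = np.linspace(0.0, 20.0, 200)
d = bilinear(t, 5.0, 0.0, 0.6, 0.05) + np.random.normal(0.0, 0.1, t.size)
params, _ = curve_fit(bilinear, t, d, p0=[5.0, 0.0, 0.5, 0.1])
```

Comparing the residuals of this fit with those of a plain linear fit gives the reaction-profile classification described in the abstract.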
Abstract:
Research on the consumer behavior of the Hispanic population has recently attracted the attention of marketing practitioners as well as researchers. This study's purpose was to develop a model and scales to examine the acculturation process of Hispanic consumers with income levels of $35,000 and above, and its effects on their consumer behavior. The proposed model defined acculturation as a bilinear and multidimensional change process, measuring consumers' selective change process in four dimensions: language preference, Hispanic identification, American identification, and familism. A national sample of 653 consumers was analyzed. The scales developed for testing the model showed good to high internal consistency and adequate concurrent validity. According to the results, consumers' contact with Hispanic and Anglo acculturation agents generates change or reinforces consumers' language preferences. Language preference fully mediates the effects of the agents on consumers' American identification and familism; however, the effects of the acculturation agents on Hispanic identification are only partially mediated by individuals' language preference change. It was proposed that the acculturation process would have an effect on consumers' brand loyalty, attitudes towards high quality and prestigious brands, purchase frequency, and savings allocation for their children. Given the lack of significant differences between Hispanic and Anglo consumers and among Hispanic generations, only savings allocation for children's future was studied intensively. According to these results, Hispanic consumers' savings for their children is affected by consumers' language preference through their ethnic identification and familism. No moderating effects were found for consumers' gender, age, and country of origin, suggesting that individual differences do not affect consumers' acculturation process. Additionally, the effects of familism were tested among ethnic groups. The results suggest not only that familism discriminates among Hispanic and Anglo consumers, but also is a significant predictor of consumers' brand loyalty, brand quality attitudes, and savings allocation. Three acculturation segments were obtained through cluster analysis: bicultural, high acculturation, and low acculturation groups, supporting the biculturalism proposition.
Abstract:
Humanity has reached a time of unprecedented technological development. Science has achieved, and continues to achieve, technologies that allow us to understand the universe and the laws which govern it ever more deeply, and to try to coexist without destroying the planet we live on. One of the main challenges of the twenty-first century is to seek out and expand new sources of clean, renewable energy able to sustain our growth and lifestyle. It is the duty of every researcher to engage and contribute in this energy race. In this context, wind power presents itself as one of the great promises for the future of electricity generation. Despite being older than other renewable energy sources, wind power still presents a wide field for improvement. The development of new control techniques for the generator, along with the development of research laboratories specializing in wind generation, is one of the key points for improving the performance, efficiency and reliability of the system. Appropriate control of the back-to-back converter scheme allows wind turbines based on the doubly-fed induction generator (DFIG) to operate in variable-speed mode, whose benefits include maximum power extraction, reactive power injection and mechanical stress reduction. The generator-side converter provides control of the active and reactive power injected into the grid, whereas the grid-side converter provides control of the DC-link voltage and bi-directional power flow. The conventional control structure uses PI controllers with feed-forward compensation of the cross-coupling dq terms. This control technique is sensitive to model uncertainties, and the compensation of the dynamic dq terms results in a competing control strategy. Therefore, to overcome these problems, this thesis proposes a robust internal-model-based state-feedback control structure that eliminates the cross-coupling terms and thereby improves the generator drive as well as its dynamic behavior during sudden changes in wind speed. The conventional control approach is compared with the proposed technique for DFIG wind turbine control under both steady and gusty wind conditions. Moreover, this thesis also proposes a wind turbine emulator, developed to recreate realistic conditions in the laboratory and to subject the generator to several wind speed conditions.
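As a reference point for the conventional scheme that the thesis improves upon, here is a minimal sketch of dq-axis PI current control with feed-forward cancellation of the cross-coupling terms; the gains, inductance, sample time and sign conventions are illustrative assumptions, not values from the thesis.

```python
class PIDecoupled:
    """Conventional dq current control: one PI regulator per axis plus
    feed-forward compensation of the omega * L cross-coupling terms.
    All numeric parameters are illustrative assumptions."""

    def __init__(self, kp=1.0, ki=50.0, L=0.003, dt=1e-4):
        self.kp, self.ki, self.L, self.dt = kp, ki, L, dt
        self.int_d = 0.0  # integrator states
        self.int_q = 0.0

    def step(self, id_ref, iq_ref, id_meas, iq_meas, omega):
        """One control step: current references and measurements in the
        dq frame, omega the electrical angular speed; returns the dq
        voltage commands."""
        ed, eq = id_ref - id_meas, iq_ref - iq_meas
        self.int_d += self.ki * ed * self.dt
        self.int_q += self.ki * eq * self.dt
        # PI action plus feed-forward cancellation of the coupling terms
        vd = self.kp * ed + self.int_d - omega * self.L * iq_meas
        vq = self.kp * eq + self.int_q + omega * self.L * id_meas
        return vd, vq
```

Because the cancellation relies on the modelled inductance L, any mismatch leaves residual coupling, which is precisely the sensitivity to model uncertainty that motivates the robust state-feedback structure proposed in the thesis.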