929 results for Function Model
Abstract:
Provoked vestibulodynia (PVD) is a prevalent women’s sexual pain disorder, which is associated with sexual function difficulties. Attachment theory has been used to understand adult sexual outcomes, providing a useful framework for examining sexual adaptation in couples confronted with PVD. Research to date indicates that anxious and avoidant attachment dimensions correlate with worse sexual outcomes in community and clinical samples. The present study examined the association between attachment, pain, sexual function and sexual satisfaction in a sample of 101 couples in which the women presented with PVD. The Actor-Partner Interdependence Model was used in order to investigate both actor and partner effects. This study also examined the role of sexual assertiveness as a mediator of these associations via structural equation modeling. Women completed measures of pain intensity and both members of the couple completed measures of romantic attachment, sexual assertiveness, sexual function and satisfaction. Results indicated that attachment dimensions did not predict pain intensity. Both anxious and avoidant attachment were associated with lower sexual satisfaction. Only attachment avoidance predicted lower sexual function in women. Partner effects indicated that higher sexual assertiveness in men predicted better sexual function in women, while higher sexual assertiveness in women predicted higher sexual satisfaction in men. Finally, women’s sexual assertiveness was found to be a significant mediator of the relationship between their attachment dimensions, sexual function and satisfaction. Findings highlight the importance of examining how anxious and avoidant attachment may lead to difficulties in sexual assertiveness and to less satisfying sexual interactions in couples where women suffer from PVD.
Abstract:
Editor's note: This article may not exactly replicate the final version published in the APA journal. It is not the copy of record.
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of cholesterol in the arterial wall and associated with an abnormal immune response in which macrophages play an important role. Recently, lymphatic vessels were shown to play a key role in reverse cholesterol transport (Martel et al. JCI 2013). The overall objective of my master's project was to better characterize the lymphatic dysfunction associated with atherosclerosis by examining more closely the physiological and temporal origin of this malfunction. Our approach was to study, from the initiation of atherosclerosis through the progression of an advanced atherosclerotic lesion, the physiology of the two main constituents of lymphatic vessels: the lymphatic capillaries and the lymphatic collectors. Using Ldlr-/-; hApoB100+/+ mice as our principal model, we were able to demonstrate that lymphatic dysfunction is present even before the onset of atherosclerosis, and that this dysfunction is primarily associated with a defect in the collecting vessels, thereby limiting the transport of lymph from peripheral tissues to the blood. Moreover, we demonstrated for the first time the expression of the LDL receptor by lymphatic endothelial cells. Our subsequent work shows that this defect in lymph propulsion could be attributable to the absence of the LDL receptor, and that the lymphatic dysfunction observed early in atherosclerosis can be limited by systemic injections of VEGF (vascular endothelial growth factor)-C. These results suggest that functional characterization of the pumping capacity of the collecting vessels is a prerequisite for understanding the interplay between lymphatic function and the progression of atherosclerosis.
Ultimately, our work led us to consider new potential therapeutic targets for the prevention and treatment of atherosclerosis.
Abstract:
Dopamine D2 receptors are involved in ethanol self-administration behavior and are also suggested to mediate the onset and offset of ethanol drinking. In the present study, we investigated dopamine (DA) content and dopamine D2 (DA D2) receptors in the hypothalamus and corpus striatum of ethanol-treated rats, as well as aldehyde dehydrogenase (ALDH) activity in the liver and plasma of ethanol-treated rats and in in vitro hepatocyte cultures. Hypothalamic and corpus striatal DA content decreased significantly (P<0.05 and P<0.001, respectively) and the homovanillic acid/dopamine (HVA/DA) ratio increased significantly (P<0.001) in ethanol-treated rats when compared to control. Scatchard analysis of [3H]YM-09151-2 binding to DA D2 receptors in the hypothalamus showed a significant increase (P<0.001) in Bmax without any change in Kd in ethanol-treated rats compared to control. The Kd of DA D2 receptors significantly decreased (P<0.05) in the corpus striatum of ethanol-treated rats when compared to control. DA D2 receptor affinity in the hypothalamus and corpus striatum of control and ethanol-treated rats fitted a single-site model with unity as the Hill slope value. The in vitro studies on hepatocyte cultures showed that 10⁻⁵ M and 10⁻⁷ M DA can reverse the increased ALDH activity in 10% ethanol-treated cells to near control level. Sulpiride, a DA D2 antagonist, reversed the effect of dopamine on 10% ethanol-induced ALDH activity in hepatocytes. Our results showed a decreased dopamine concentration with enhanced DA D2 receptors in the hypothalamus and corpus striatum of ethanol-treated rats. Increased ALDH was also observed in the plasma and liver of ethanol-treated rats and in in vitro hepatocyte cultures with 10% ethanol, as a compensatory mechanism for the increased aldehyde production caused by increased dopamine metabolism. A decrease in dopamine concentration in major brain regions, coupled with an increase in ALDH activity in liver and plasma, contributes to the tendency toward alcoholism.
Since the administration of 10⁻⁵ M and 10⁻⁷ M DA can reverse the increased ALDH activity in ethanol-treated cells to near control level, this may have therapeutic application in treating ethanol addiction by countering the adverse reactions associated with aldehyde accumulation.
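As background for the receptor-binding result above, Scatchard analysis linearizes saturation binding data: since B/F = (Bmax - B)/Kd, plotting bound/free (B/F) against bound (B) gives a line with slope -1/Kd and x-intercept Bmax. A minimal sketch with synthetic, single-site binding data (the numbers are invented for illustration, not taken from the study):

```python
# Scatchard analysis sketch: recover Bmax and Kd from saturation binding data.
# Relation: B/F = (Bmax - B)/Kd, so the (B, B/F) plot is a line with
# slope -1/Kd and x-intercept Bmax.

def scatchard_fit(bound, free):
    """Least-squares line through (B, B/F) points; returns (Bmax, Kd)."""
    xs = bound
    ys = [b / f for b, f in zip(bound, free)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    kd = -1.0 / slope
    bmax = -intercept / slope  # x-intercept of the Scatchard line
    return bmax, kd

# Synthetic single-site data: B = Bmax * F / (Kd + F) with Bmax=250, Kd=2.0
free = [0.5, 1.0, 2.0, 4.0, 8.0]
bound = [250 * f / (2.0 + f) for f in free]

bmax, kd = scatchard_fit(bound, free)
print(round(bmax, 1), round(kd, 2))  # noise-free data recovers Bmax and Kd exactly
```

With real (noisy) binding data the same transform and fit apply; only the recovered values scatter around the true parameters.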
Abstract:
Electron-phonon interaction is considered within the framework of the fluctuating valence of Cu atoms. Anderson's lattice Hamiltonian is suitably modified to take this into account. Using the Green's function technique, the possible quasiparticle excitations are determined. The quantity 2Δk(0)/kBTc is calculated for Tc = 40 K. The calculated values are in good agreement with the experimental results.
Abstract:
This thesis presents a methodology for linking Total Productive Maintenance (TPM) and Quality Function Deployment (QFD). The synergic power of TPM and QFD led to the formation of a new maintenance model named Maintenance Quality Function Deployment (MQFD). This model proved powerful enough to overcome the drawbacks of TPM by taking care of customer voices. Those voices of customers are used to develop the house of quality. The outputs of the house of quality, which are in the form of technical languages, are submitted to the top management for making strategic decisions. The technical languages, which are concerned with enhancing maintenance quality, are strategically directed by the top management towards the adoption of the eight TPM pillars. The TPM characteristics developed through the development of the eight pillars are fed into the production system, where their implementation is focused on increasing the values of the maintenance quality parameters, namely overall equipment efficiency (OEE), mean time between failures (MTBF), mean time to repair (MTTR), performance quality, availability and mean down time (MDT). The outputs from the production system are required to be reflected in the form of business values, namely improved maintenance quality, increased profit, upgraded core competence, and enhanced goodwill. A unique feature of the MQFD model is that it is not necessary to change or dismantle the existing process of developing the house of quality and TPM projects, which may already be in practice in the company concerned. Thus, the MQFD model enables the tactical marriage between QFD and TPM. First, the literature was reviewed. The results of this review indicated that no activities had so far been reported on integrating QFD in TPM and vice versa. During the second phase, a survey was conducted in six companies in which TPM had been implemented.
The objective of this survey was to locate any traces of QFD implementation in the TPM programmes being implemented in these companies. The survey results indicated that no effort to integrate QFD in TPM had been made in these companies. After completing these two phases of activities, the MQFD model was designed. The details of this work are presented in this research work. Following this, explorative studies on implementing the MQFD model in real-time environments were conducted. In addition, an empirical study was carried out to examine the receptivity of the MQFD model among practitioners and across multifarious organizational cultures. Finally, a sensitivity analysis was conducted to find the hierarchy of the various factors influencing MQFD in a company. Throughout the research work, the theory and practice of MQFD were juxtaposed by presenting and publishing papers among scholarly communities and conducting case studies in real-time scenarios.
Abstract:
This study is concerned with Autoregressive Moving Average (ARMA) models of time series. ARMA models form a subclass of the class of general linear models which represent stationary time series, a phenomenon encountered most often in practice by engineers, scientists and economists. It is always desirable to employ models which use parameters parsimoniously. Parsimony is achieved by ARMA models because they have only a finite number of parameters. Even though the discussion is primarily concerned with stationary time series, we later take up the case of homogeneous non-stationary time series, which can be transformed to stationary time series. Time series models, obtained with the help of present and past data, are used for forecasting future values. The physical sciences as well as the social sciences benefit from forecasting models. The role of forecasting cuts across all fields of management (finance, marketing, production, business economics) as well as signal processing, communication engineering, chemical processes, electronics, etc. This wide applicability of time series is the motivation for this study.
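To make the parsimony point concrete, the simplest ARMA special case is an AR(1) model, y[t] = φ·y[t-1] + e[t], which describes the whole series with a single parameter; φ can be estimated by regressing each observation on its predecessor and then used to forecast the next value. A minimal sketch on synthetic data, using a hand-rolled least-squares fit rather than any particular forecasting library:

```python
import random

# Fit an AR(1) model y[t] = phi * y[t-1] + e[t] by least squares,
# then forecast one step ahead. Synthetic data with true phi = 0.7.
random.seed(42)

phi_true = 0.7
y = [0.0]
for _ in range(2000):
    y.append(phi_true * y[-1] + random.gauss(0.0, 1.0))

# Least-squares estimate: phi_hat = sum(y[t-1]*y[t]) / sum(y[t-1]^2)
num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi_hat = num / den

forecast = phi_hat * y[-1]  # one-step-ahead forecast of the next value
print(round(phi_hat, 2))    # close to the true value 0.7
```

Higher-order ARMA(p, q) models add moving-average terms and are usually fitted by maximum likelihood, but the estimate-then-forecast workflow is the same.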
Abstract:
We study the analytical solution of the Monte Carlo dynamics in the spherical Sherrington-Kirkpatrick model using the technique of the generating function. Explicit solutions for one-time observables (like the energy) and two-time observables (like the correlation and response function) are obtained. We show that the crucial quantity which governs the dynamics is the acceptance rate. At zero temperature, an adiabatic approximation reveals that the relaxational behavior of the model corresponds to that of a single harmonic oscillator with an effective renormalized mass.
Abstract:
The magnetic coupling constant of selected cuprate superconductor parent compounds has been determined by means of embedded cluster model and periodic calculations carried out at the same level of theory. The agreement between both approaches validates the cluster model. This model is subsequently employed in state-of-the-art configuration interaction calculations aimed at obtaining accurate values of the magnetic coupling constant and hopping integral for a series of superconducting cuprates. Likewise, a systematic study of the performance of different ab initio explicitly correlated wave function methods and of several density functional approaches is presented. The accurate determination of the parameters of the t-J Hamiltonian has several consequences. First, it suggests that the appearance of high-Tc superconductivity in existing monolayered cuprates occurs with J/t in the 0.20 to 0.35 regime. Second, J/t = 0.20 is predicted to be the threshold for the existence of superconductivity and, third, a simple and accurate relationship between the critical temperatures at optimum doping and these parameters is found. However, this quantitative electronic structure versus Tc relationship is only found when both J and t are obtained at the most accurate level of theory.
Abstract:
The present thesis deals with theoretical investigations on the effect of anisotropy on various properties of magnetically doped superconductors described by the Shiba-Rusinov (SR) model. Chapter 1 is introductory. It contains a brief account of the current status of the theory of superconductivity. In chapter 2 we give the formulation of the problem. Chapter 2.1 gives the BCS theory. The effect of magnetic impurities in superconductors as described by AG theory is given in chapter 2.2A, and that described by the SR model is discussed in chapter 2.2B. Chapter 2.2C deals with the Kondo effect. In chapter 2.3 the anisotropy problem is reviewed. Our calculations, results and discussions are given in chapter 3. Chapter 3.1 deals with the Josephson tunnel effect. In chapter 3.2 the thermodynamic critical field Hc is described. Chapter 3.3 deals with the density of states. The ultrasonic attenuation coefficient and nuclear spin relaxation are given in chapters 3.4 and 3.5 respectively. In chapter 3.6 we give the upper critical field calculations, and chapter 3.7 deals with the response function. The Kondo effect is given in chapter 3.8. In chapter 4 we give the summary of our results.
Abstract:
The adult mammalian liver is predominantly in a quiescent state with respect to cell division. This quiescent state changes dramatically, however, if the liver is injured by toxic, infectious or mechanical agents (Ponder, 1996). Partial hepatectomy (PH), which consists of surgical removal of two-thirds of the liver, has been used to stimulate hepatocyte proliferation (Higgins & Anderson 1931). This experimental model of liver regeneration has been the target of many studies to probe the mechanisms responsible for liver cell growth control (Michalopoulos, 1990; Taub, 1996). After PH, most of the remaining cells in the remnant liver respond with co-ordinated waves of DNA synthesis and divide in a process called compensatory hyperplasia. Hence, liver regeneration is a model of relatively synchronous cell cycle progression in vivo. In contrast to hepatomas, cell division is terminated under some intrinsic control when the original cellular mass has been regained. This has made liver regeneration a useful model to dissect the biochemical and molecular mechanisms of cell division regulation. The liver is thus one of the few adult organs that demonstrates a physiological growth response (Fausto & Mead, 1989; Fausto & Webber, 1994). The regulation of liver cell proliferation involves circulating or intrahepatic factors that are involved in either the priming of hepatocytes to enter the cell cycle (G0 to G1) or progression through the cell cycle. In order to understand the basis of liver regeneration it is mandatory to define the mechanisms which (a) trigger division, (b) allow the liver to concurrently grow and maintain differentiated function and (c) terminate cell proliferation once the liver has reached the appropriate mass. Studies on these aspects of liver regeneration will provide basic insight into cell growth and differentiation, and into liver diseases such as viral hepatitis, toxic damage and liver transplantation, where regeneration of the liver is essential.
In the present study, the G0/G1/S transition of hepatocytes re-entering the cell cycle after PH was studied, with special emphasis on the involvement of neurotransmitters, their receptors and second messenger function in the control of cell division during liver regeneration.
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis, both (X, Y) are observable and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant, but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model, one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is an observable d-dimensional random vector.
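For intuition about the one-dimensional Berkson setup, consider a linear µ(x) = a + bx. Since X = Z + η with η independent of Z, we have Y = a + bZ + (bη + ɛ), so the composite error is still independent of Z, and naive least squares of Y on the observed Z remains consistent for the slope b (in contrast to the classical error model, where regressing on W attenuates the slope). A quick simulation sketch with invented parameter values:

```python
import random

# Berkson model: observe Z, true covariate X = Z + eta (eta independent of Z),
# response Y = a + b*X + eps with a = 1.0, b = 2.0. Regressing Y on Z is still
# consistent for b, because Y = a + b*Z + (b*eta + eps) with the error
# independent of Z.
random.seed(0)

a, b, n = 1.0, 2.0, 5000
z = [random.uniform(0, 10) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]          # true, unobserved covariate
y = [a + b * xi + random.gauss(0, 1) for xi in x]  # observed response

# Ordinary least squares of Y on the observed Z
mz = sum(z) / n
my = sum(y) / n
b_hat = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
         / sum((zi - mz) ** 2 for zi in z))
print(round(b_hat, 1))  # close to the true slope b = 2.0
```

For a nonlinear parametric µ this convenient cancellation no longer holds in general, which is what makes the fitting problem described in the talk nontrivial.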
Abstract:
This research quantitatively evaluates the water retention capacity and flood control function of forest catchments by using hydrological data from large flood events which happened after serious droughts. The study sites are the Oodo Dam and the Sameura Dam catchments in Japan. A kinematic wave model, which considers saturated and unsaturated sub-surface soil zones, is used for the rainfall-runoff analysis. The results show that the possible storage volume of the Oodo Dam catchment was 162.26 MCM in 2005, while that of Sameura was 102.83 MCM in 2005 and 102.64 MCM in 2007. The flood control function of the Oodo Dam catchment was 173 mm in water depth in 2005, while that of the Sameura Dam catchment was 114 mm in 2005 and 126 mm in 2007. This indicates that the flood control function of the Oodo Dam catchment is more than twice the dam's storage capacity (78.4 mm), while that of the Sameura Dam catchment is about one-fifth of the dam's storage capacity (693 mm).
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
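The flavor of a Gaussian-kernel machine can be conveyed in a few lines. The sketch below is not the SV algorithm from the study; it is a much simpler kernel perceptron with an RBF kernel, trained on a toy XOR problem that no linear classifier can solve (the data and the kernel width are invented for illustration):

```python
import math

# Kernel perceptron with a Gaussian (RBF) kernel on the XOR problem.
# Decision function: f(x) = sum_i alpha_i * y_i * K(x_i, x).

def rbf(u, v, gamma=1.0):
    """Gaussian kernel K(u, v) = exp(-gamma * ||u - v||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, +1, +1, -1]  # XOR labels: not linearly separable in input space

alpha = [0] * len(X)
for _ in range(100):                      # perceptron epochs
    for i, (xi, yi) in enumerate(zip(X, Y)):
        f = sum(a * y * rbf(xj, xi) for a, y, xj in zip(alpha, Y, X))
        if yi * f <= 0:                   # misclassified: strengthen this point
            alpha[i] += 1

preds = [1 if sum(a * y * rbf(xj, xi) for a, y, xj in zip(alpha, Y, X)) > 0
         else -1 for xi in X]
print(preds == Y)  # True: the RBF feature space makes XOR separable
```

An SV machine uses the same kind of kernel expansion but chooses the coefficients by solving a margin-maximization problem, which is what yields the automatically determined centers (support vectors) mentioned above.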
Abstract:
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes consisting of the sets of points best approximated by each model, and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for the determination of the optimal number of components.
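The segmentation/fitting decomposition can be illustrated with a toy alternating scheme (in the spirit of, but not identical to, the procedure in the paper, whose details are not reproduced here): each point is assigned to whichever of two candidate line models predicts it best, each line is then refit to its class, and the two steps repeat until stable. All data and the initial split below are invented for illustration:

```python
# Toy piecewise-linear approximation by alternating segmentation and fitting:
# assign each point to the locally best line, then refit each line to its class.

def fit_line(pts):
    """Least-squares line y = m*x + c through pts (needs >= 2 distinct x)."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    m = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    return m, my - m * mx

# Data from two exact linear pieces: y = 2x on [0,4] and y = 20 - 2x on [6,10]
pts = [(x, 2 * x) for x in range(5)] + [(x, 20 - 2 * x) for x in range(6, 11)]

# Initialize with a deliberately wrong split of the data
models = [fit_line(pts[:3]), fit_line(pts[3:])]
for _ in range(10):
    classes = [[], []]
    for x, y in pts:  # segmentation step: each point joins its best model
        errs = [abs(y - (m * x + c)) for m, c in models]
        classes[errs.index(min(errs))].append((x, y))
    # fitting step: refit each model to the points it currently explains
    models = [fit_line(cl) if len(cl) > 1 else mdl
              for cl, mdl in zip(classes, models)]

slopes = sorted(round(m, 1) for m, _ in models)
print(slopes)  # [-2.0, 2.0]: the two underlying slopes are recovered
```

The paper's procedure additionally computes normalized discriminant functions for the induced classes and selects the number of components; the sketch fixes both for brevity.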