966 results for Pull-In Parameters


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the influence of the siesta on ambulatory blood pressure (BP) monitoring and on cardiac structural parameters. METHODS: 1940 ambulatory blood pressure monitoring recordings were analyzed (Spacelabs 90207; readings every 15 minutes from 7:00 to 22:00 hours and every 20 minutes from 22:01 to 6:59 hours), and 21% of the records indicated that the person had taken a siesta (263 women, 52±14 years). The average duration of the siesta was 118±58 minutes. RESULTS (mean ± standard deviation): 1) the average systolic/diastolic pressures during wakefulness including the napping period were lower than the averages for the wakeful period excluding the siesta (138±16/85±11 vs 139±16/86±11 mmHg, p<0.05); 2) pressure loads during wakefulness including the siesta were lower than those observed without the siesta; 3) the average blood pressures during nocturnal sleep were similar to those during the siesta; 4) the nocturnal sleep pressure falls were similar to those of the siesta, whether computed relative to wakefulness with or without the siesta; 5) the average BP in men was higher (p<0.05) than in women during wakefulness with and without the siesta, during the siesta and during nocturnal sleep; 6) patients with a BP reduction of 0-5% during the siesta had a thicker interventricular septum and posterior wall than those with a reduction >5% during the siesta. CONCLUSION: The siesta influenced the cardiac structural parameters and, statistically, the average systolic and diastolic pressures and the respective pressure loads of the wakeful period.
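
The comparison above boils down to averaging the readings of the wakeful period with and without the siesta window. The sketch below illustrates that computation for a single recording; it is not the study's analysis code, and the reading schedule, siesta window and synthetic values are assumptions made for illustration.

```python
# Hypothetical sketch (not the study's code): compute a patient's daytime
# systolic/diastolic averages with and without the siesta readings.
import numpy as np

def wakeful_means(times_h, sbp, dbp, siesta=(13.0, 15.0), night=(22.0, 7.0)):
    """Return (mean_sbp, mean_dbp) for wakefulness including and excluding
    the siesta interval. times_h: decimal hours of each reading."""
    t = np.asarray(times_h, float)
    sbp = np.asarray(sbp, float)
    dbp = np.asarray(dbp, float)

    # Daytime (wakeful) readings: outside the nocturnal sleep window.
    awake = (t >= night[1]) & (t < night[0])
    in_siesta = awake & (t >= siesta[0]) & (t < siesta[1])

    with_siesta = (sbp[awake].mean(), dbp[awake].mean())
    without_siesta = (sbp[awake & ~in_siesta].mean(), dbp[awake & ~in_siesta].mean())
    return with_siesta, without_siesta

# Example with synthetic readings every 15 minutes from 07:00 to 22:00.
times = np.arange(7.0, 22.0, 0.25)
rng = np.random.default_rng(0)
sbp = 138 + 8 * rng.standard_normal(times.size) - 10 * ((times >= 13) & (times < 15))
dbp = 85 + 6 * rng.standard_normal(times.size) - 6 * ((times >= 13) & (times < 15))
print(wakeful_means(times, sbp, dbp))
```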

Relevance:

100.00%

Publisher:

Abstract:

Myocardial perfusion gated single-photon emission computed tomography (gated-SPECT) imaging is used for the combined evaluation of myocardial perfusion and left ventricular (LV) function. The aim of this study was to analyze the influence of counts/pixel, and concomitantly of the total counts in the myocardium, on the calculation of myocardial functional parameters. Material and methods: Gated-SPECT studies were simulated using the Monte Carlo GATE package and the NCAT phantom. The simulations used 99mTc-labeled tracers with administered activities of 250, 350, 450 and 680 MBq for standard patient types, corresponding to myocardial activities of 3, 4.2, 5.4 and 8.2 MBq, respectively. All studies were simulated using 15 and 30 s/projection. The simulated data were reconstructed and processed with quantitative gated-SPECT software, and the functional parameters derived from the gated-SPECT images were analyzed using the Bland-Altman and Mann-Whitney-Wilcoxon tests. Results: Comparing the acquisition times (15 vs 30 s/projection), the whole-body activities of 250 and 350 MBq showed statistically significant differences in the motility and thickness parameters. For left ventricular ejection fraction (LVEF) and end-systolic volume (ESV), differences appeared only at 250 MBq, and for end-diastolic volume (EDV) only at 350 MBq, whereas the studies simulated with 450 and 680 MBq showed no statistically significant differences in the global functional parameters LVEF, EDV and ESV. Conclusion: The number of counts/pixel and, concomitantly, the total counts per simulation do not significantly affect the determination of gated-SPECT functional parameters when an average administered activity of 450 MBq, corresponding to 5.4 MBq in the myocardium, is used for standard patient types.
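
As an illustration of the statistical comparison described (Bland-Altman agreement plus a nonparametric test), here is a minimal sketch on synthetic LVEF values; it is not the study's processing pipeline, and the numbers are invented.

```python
# Illustrative sketch: Bland-Altman comparison of a functional parameter (LVEF)
# estimated from two simulated acquisitions (15 s/projection vs 30 s/projection).
import numpy as np
from scipy import stats

def bland_altman(a, b):
    """Return mean bias and 95% limits of agreement between paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

rng = np.random.default_rng(1)
lvef_15s = 60 + 5 * rng.standard_normal(20)        # LVEF (%) at 15 s/projection
lvef_30s = lvef_15s + rng.normal(0.3, 1.0, 20)     # same phantoms at 30 s/projection

bias, limits = bland_altman(lvef_15s, lvef_30s)
u, p = stats.mannwhitneyu(lvef_15s, lvef_30s)      # nonparametric group comparison
print(f"bias={bias:.2f}%, limits of agreement={limits}, Mann-Whitney p={p:.3f}")
```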

Relevance:

90.00%

Publisher:

Abstract:

Design: Fifty out of 336 postmenopausal patients with chronic hepatitis C virus infection were selected. The non-inclusion criteria were other chronic or systemic liver diseases, severe vascular diseases, autoimmune diseases or malignant tumors. The patients were randomized into two groups: the HT group, with 25 patients given transdermal hormone therapy (50 μg estradiol plus 170 μg norethisterone/day), and the control group, with the other 25 patients (no medication). Hepatic tests (alanine aminotransferase, aspartate aminotransferase, gamma-glutamyltransferase, total alkaline phosphatase, albumin, serum bilirubin) and hemostatic parameters (prothrombin time, factor V, fibrinogen) were evaluated at baseline and at 1, 4, 7 and 9 months of treatment. Results: No significant changes in these parameters were found in the comparison between the treated group and the controls, except for a decrease in total alkaline phosphatase (p = 0.002), presumably due to changes in bone remodelling. Conclusions: There were no changes in liver function after a 9-month treatment with transdermal estradiol plus norethisterone in symptomatic postmenopausal patients with hepatitis C.

Relevance:

90.00%

Publisher:

Abstract:

The speed of fault isolation is crucial for the design and reconfiguration of fault-tolerant control (FTC). In this paper, the fault isolation problem is stated as a constraint satisfaction problem (CSP) and solved using constraint propagation techniques. The proposed method is based on constraint satisfaction techniques and on refining the uncertainty space of the interval parameters. In comparison with other approaches based on adaptive observers, the major advantage of the presented method is that fault isolation is fast even when uncertainty in parameters, measurements and model errors is taken into account, and no monotonicity assumption is required. To illustrate the proposed approach, a case study of a nonlinear dynamic system is presented.
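
A toy sketch of the general idea, not the paper's algorithm: each fault hypothesis constrains a model parameter to an interval, measurements contract the feasible interval by constraint propagation, and a hypothesis is isolated as soon as its interval becomes empty. The model, intervals and error bound below are illustrative.

```python
# Interval constraint propagation for a toy static model y = theta * u with
# bounded measurement error. Each sample contracts the feasible interval of theta;
# a fault hypothesis is discarded when its interval becomes inconsistent.

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None  # None = empty interval

def contract_theta(theta, samples, err=0.05):
    """Refine the interval for theta using y = theta*u with |noise| <= err."""
    for u, y in samples:
        if u == 0:
            continue
        cand = sorted(((y - err) / u, (y + err) / u))
        theta = intersect(theta, tuple(cand))
        if theta is None:
            break
    return theta

# Hypotheses: nominal behaviour vs. a gain fault, each as a prior interval on theta.
hypotheses = {"nominal": (0.9, 1.1), "gain fault": (0.4, 0.6)}
samples = [(1.0, 1.02), (2.0, 1.98), (0.5, 0.51)]   # (input, measured output)

for name, prior in hypotheses.items():
    refined = contract_theta(prior, samples)
    print(name, "consistent" if refined else "isolated (inconsistent)", refined)
```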

Relevance:

90.00%

Publisher:

Abstract:

A new ambulatory method for monitoring physical activities in Parkinson's disease (PD) patients is proposed, based on a portable data-logger with three body-fixed inertial sensors. A group of ten PD patients treated with subthalamic nucleus deep brain stimulation (STN-DBS) and ten normal control subjects followed a protocol of typical daily activities, and the whole measurement period was recorded on video. Walking periods were recognized using two sensors on the shanks, and lying periods were detected using a sensor on the trunk. By computing kinematic features of the trunk movements during transitions between sitting and standing postures and using a statistical classifier, sit-to-stand (SiSt) and stand-to-sit (StSi) transitions were detected and separated from other body movements. Finally, a fuzzy classifier used this information to detect periods of sitting and standing. The proposed method showed a high sensitivity and specificity for the detection of the basic body postures and activities (sitting, standing, lying, and walking) in both PD patients and healthy subjects. We found significant differences in parameters related to SiSt and StSi transitions between PD patients and controls, and also between PD patients with and without STN-DBS turned on. We conclude that our method provides a simple, accurate, and effective means to objectively quantify physical activities in both normal subjects and PD patients, and may prove useful for assessing the level of motor function in the latter.
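
The sketch below gives a deliberately simplified, rule-based version of the classification pipeline described above (it is not the authors' statistical or fuzzy classifier); the thresholds and feature names are invented for illustration.

```python
# Hypothetical sketch: walking is flagged from shank angular-rate energy, lying
# from trunk inclination, and sitting vs standing is decided from the last
# detected sit-to-stand / stand-to-sit transition. Thresholds are invented.
import numpy as np

def classify_epoch(trunk_incl_deg, shank_gyro_rms, last_transition):
    if shank_gyro_rms > 50.0:          # deg/s, hypothetical walking threshold
        return "walking"
    if trunk_incl_deg > 60.0:          # trunk close to horizontal
        return "lying"
    return "standing" if last_transition == "SiSt" else "sitting"

# Toy stream of 1-s epochs: (trunk inclination, shank gyro RMS, last transition)
epochs = [(5, 80, "SiSt"), (10, 10, "SiSt"), (15, 5, "StSi"), (75, 3, "StSi")]
print([classify_epoch(*e) for e in epochs])   # ['walking', 'standing', 'sitting', 'lying']
```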

Relevance:

90.00%

Publisher:

Abstract:

The objective of this study was to evaluate the efficacy of chronic amygdala-hippocampal deep brain stimulation (AH-DBS) in mesial temporal lobe epilepsy (TLE) and the effects of changes in its parameters. Eight pharmacoresistant patients who were not candidates for ablative surgery received chronic AH-DBS (130 Hz, follow-up 12-24 months): two patients with hippocampal sclerosis (HS) and six patients with non-lesional mesial TLE (NLES). The effects of stepwise increases in intensity (from 0/off to 2 V) and of the stimulation configuration (quadripolar and bipolar) on seizure frequency and neuropsychological performance were studied. The two HS patients obtained a significant decrease (65-75%) in seizure frequency with high-voltage bipolar DBS (≥1 V) or with quadripolar stimulation. Two out of six NLES patients became seizure-free, one of them without stimulation, suggesting a microlesional effect. Two NLES patients experienced reductions in seizure frequency (65-70%), whereas the remaining two showed no significant seizure reduction. Neuropsychological evaluations showed reversible memory impairments in two patients, under strong stimulation only. AH-DBS showed long-term efficacy in most of the TLE patients. It is a valuable treatment option for patients who suffer from drug-resistant epilepsy and who are not candidates for resective surgery. The effects of the changes in the stimulation parameters suggest that a large zone of stimulation would be required in HS patients, whereas a limited zone of stimulation, or even a microlesional effect, could be sufficient in NLES patients, for whom the importance of the proximity of the electrode to the epileptogenic zone remains to be studied. Further studies are required to ascertain these latter observations.

Relevance:

90.00%

Publisher:

Abstract:

Analysis of variance is commonly used in morphometry to ascertain differences in parameters between several populations. Failure to detect significant differences between populations (type II error) may be due to suboptimal sampling and may lead to erroneous conclusions; the concept of statistical power allows one to avoid such failures by means of adequate sampling. Several examples from the morphometry of the nervous system illustrate the use of the power of a hierarchical analysis of variance test for choosing appropriate sample and subsample sizes. In the first case, neuronal densities in the human visual cortex, we find the number of observations to have little effect. For dendritic spine densities in the visual cortex of mice and humans, the effect is somewhat larger. A substantial effect is shown in our last example, dendritic segment lengths in the monkey lateral geniculate nucleus. It is in the nature of the hierarchical model that sample size is always more important than subsample size. The relative weight to be attributed to subsample size thus depends on the magnitude of the between-observations variance relative to the between-individuals variance.
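
The point about sample versus subsample size can be illustrated with a small power simulation for a nested design; this is not the article's analysis, and the effect size and variance components are arbitrary.

```python
# Illustrative power simulation: in a nested design the test for a population
# difference effectively compares individual means, so the number of individuals
# (sample size) matters more than the number of measurements per individual
# (subsample size).
import numpy as np
from scipy import stats

def power(delta, n_ind, n_obs, sd_between=1.0, sd_within=1.0, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        # Individual means for the two populations, then n_obs readings each.
        mu_a = rng.normal(0.0, sd_between, n_ind)[:, None]
        mu_b = rng.normal(delta, sd_between, n_ind)[:, None]
        a = (mu_a + rng.normal(0, sd_within, (n_ind, n_obs))).mean(axis=1)
        b = (mu_b + rng.normal(0, sd_within, (n_ind, n_obs))).mean(axis=1)
        hits += stats.ttest_ind(a, b).pvalue < 0.05
    return hits / reps

print(power(1.0, n_ind=8,  n_obs=2))   # few individuals, few readings
print(power(1.0, n_ind=8,  n_obs=20))  # more readings per individual helps a little
print(power(1.0, n_ind=20, n_obs=2))   # more individuals helps much more
```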

Relevance:

90.00%

Publisher:

Abstract:

In this paper we present a model to explain the process of winegrowing specialization reached by the municipalities of the province of Barcelona by the mid-nineteenth century, which seeks to understand how a comparative advantage arose historically as the outcome of a process that would become one of the starting points of industrialization in Catalonia. The results confirm the roles played by a "Boserupian" population push in a context of intensification of land use, and by a "Smithian" market pull in a context of expanding demand from the Atlantic economies. They also highlight the importance of agro-ecological endowments and of the socio-institutional conditions related to income inequality. The spread of the vine resulted in rural communities that were less unequal up to 1820, although inequality increased again thereafter.

Relevance:

90.00%

Publisher:

Abstract:

With the advances in information technology, economic and financial time series have become increasingly available. However, if the standard techniques of time-series analysis are used, this large amount of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis. This technique has become increasingly popular in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, with respect to the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the main macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the temporal instability of the coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions through five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA models. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability.

The first article analyzes the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Earlier studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for correctly identifying the transmission of monetary policy and that it helps to correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, thereby producing the most comprehensive analysis to date of the effects of monetary policy in Canada.

Motivated by the last economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and the propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately increases credit spreads, lowers the value of Treasury bonds and causes a recession. These shocks have a sizeable effect on measures of real activity, price indices, leading indicators and financial variables. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between the financial and macroeconomic factors. Moreover, it provides an interpretation of the factors without restricting their estimation.

In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot both follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA part helps to forecast the main macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model used in that earlier study, in which 510 VAR coefficients had to be estimated, we obtain similar results with only 84 parameters for the dynamic process of the factors.

The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model. Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component seems to capture the important dimensions of the cyclical fluctuations of the Canadian economy. The variance-decomposition analysis reveals that this credit shock has a sizeable effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors.

The behaviour of economic agents and of the economic environment may vary over time (e.g., changes in monetary policy strategy, volatility of shocks), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all the TVPs. In this article we show that the number of sources of temporal variability of the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the temporal instability in almost 700 coefficients.
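
As a rough illustration of the FAVAR idea discussed in the first article (not the thesis's FAVARMA code), the sketch below extracts principal-component factors from a synthetic panel, fits a VAR(1) on the factors plus an observed policy rate, and traces impulse responses to a shock in the policy equation; the data, dimensions and identification are purely illustrative.

```python
# Minimal FAVAR-style sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 200, 50, 3                      # periods, indicators, factors

# Synthetic panel driven by K latent factors plus noise.
true_f = rng.standard_normal((T, K)).cumsum(axis=0) * 0.1
panel = true_f @ rng.standard_normal((K, N)) + rng.standard_normal((T, N))
rate = 0.8 * np.roll(true_f[:, 0], 1) + 0.2 * rng.standard_normal(T)

# 1) Factors = first K principal components of the standardized panel.
X = (panel - panel.mean(0)) / panel.std(0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
factors = X @ Vt[:K].T

# 2) VAR(1) on z_t = [factors_t, rate_t], estimated by OLS.
Z = np.column_stack([factors, rate])
Y, Xlag = Z[1:], Z[:-1]
A = np.linalg.lstsq(Xlag, Y, rcond=None)[0].T          # z_t ~ A z_{t-1} + e_t

# 3) Impulse responses to a unit shock in the policy-rate equation.
shock = np.zeros(K + 1); shock[-1] = 1.0
irf = [shock]
for _ in range(12):
    irf.append(A @ irf[-1])
print(np.round(np.array(irf)[:, -1], 3))               # response of the rate itself
```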

Relevance:

90.00%

Publisher:

Abstract:

Diuron is a ureic herbicide considered to have very low toxicity. The present study evaluated several aspects of the reproductive toxicity of diuron in adult male rats. Diuron was diluted in corn oil and administered by oral gavage to groups of 18-20 rats at doses of 0, 125 or 250 mg/kg per day for 30 days; the control group received only the corn oil vehicle. At the end of the treatment period, approximately half the animals from each group were assigned to one of two terminal assessment lines: (1) reproductive organ, liver and kidney weights; measurement of diuron concentrations in liver and kidney; plasma testosterone determinations; evaluation of daily sperm production per testis; sperm number and sperm transit time in the epididymis; or (2) sexual behavior assessment during cohabitation with a receptive female; fertility and pregnancy outcome after natural mating; testicular, epididymal, kidney and liver histopathology; sperm morphology. After 30 days of oral diuron treatment, there were no treatment-related changes in body weights, but dose-related diuron residues were detected in the liver of all treated rats, and absolute and relative liver weights were increased in both treated groups. There were no statistically significant differences between the treated and control groups in plasma testosterone concentrations, or in daily sperm production, sperm reserves in the epididymis, sperm morphology or the measured components of male sexual behavior. On the other hand, the number of fetuses in the litters from diuron-treated rats was slightly smaller than in litters from control rats. Therefore, although the results did not indicate that diuron exposure resulted in direct male reproductive toxicity in the rat, they suggest that additional studies should be undertaken to investigate the possible effects on fertility and reproductive performance. (c) 2006 Elsevier B.V. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

We study the existence of homoclinic solutions for reversible Hamiltonian systems, taking the family of differential equations u'''' + au'' - u + f(u, b) = 0 as a model, where f is an analytic function and a, b are real parameters. These equations are important in several physical situations, such as solitons and the existence of finite-energy stationary states of partial differential equations, but no assumption of any kind of discrete symmetry is made, and the analysis developed here can be extended to other Hamiltonian systems and successfully employed in situations where standard methods fail. We reduce the problem of computing these orbits to that of finding the intersection of the unstable manifold with a suitable set, and then apply it to concrete situations. We also plot the configuration of homoclinic values in parameter space, giving a picture of the structural distribution and a geometrical view of homoclinic bifurcations. (c) 2005 Published by Elsevier B.V.
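
A rough numerical sketch of the shooting strategy described above (looking for where the unstable manifold meets the fixed-point set of the reversibility) is given below; it is not the authors' method as implemented in the paper, and the choice f(u, b) = b*u**2, the parameter range and the tolerances are assumptions made for illustration.

```python
# Shoot along the unstable manifold of the origin for u'''' + a u'' - u + b u^2 = 0
# and monitor u''' at the first crossing of u' = 0; a zero there signals a
# symmetric homoclinic orbit of the reversible system.
import numpy as np
from scipy.integrate import solve_ivp

def shoot(a, b=1.0, eps=1e-5, t_max=60.0):
    """Integrate from the local unstable direction; return u''' at the first
    u' = 0 crossing (a sign change across parameter values brackets a homoclinic)."""
    # Linearization at 0: lambda^4 + a*lambda^2 - 1 = 0 -> one real unstable root.
    lam = np.sqrt((-a + np.sqrt(a * a + 4.0)) / 2.0)
    y0 = eps * np.array([1.0, lam, lam**2, lam**3])

    def rhs(t, y):
        u, du, ddu, dddu = y
        return [du, ddu, dddu, -a * ddu + u - b * u**2]

    def cross(t, y):            # event: u' decreasing through zero
        return y[1]
    cross.terminal, cross.direction = True, -1

    sol = solve_ivp(rhs, (0.0, t_max), y0, events=cross, rtol=1e-9, atol=1e-12)
    return sol.y_events[0][0][3] if sol.t_events[0].size else np.nan

# Scan the parameter a; a sign change of the returned value brackets a homoclinic.
for a in np.linspace(-3.0, 0.0, 7):
    print(f"a = {a:5.2f}   u'''(t*) = {shoot(a):+.4f}")
```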

Relevance:

90.00%

Publisher:

Abstract:

Based on our studies of the stability of model peptide-resin linkages in acid media, we previously proposed a rule for resin selection and a final cleavage protocol applicable to the N-alpha-tert-butyloxycarbonyl (Boc) peptide synthesis strategy. We found that incorrect choices resulted in decreases of as much as 30% in the final synthesis yield, which is highly dependent on the peptide sequence. The present paper continues this line of research but examines the N-alpha-9-fluorenylmethyloxycarbonyl (Fmoc) synthesis strategy. The vasoactive peptide angiotensin II (AII, DRVYIHPF) and its [Gly(8)]-AII analogue were selected as model peptide-resins. Variations in parameters such as the type of spacer group (linker) between the peptide backbone and the resin, as well as the final acid cleavage protocol, were evaluated. The same methodology employed for the Boc strategy was used in order to establish rules for selecting the most appropriate linker-resin conjugate or peptide cleavage method, depending on the sequence to be assembled. The results obtained after treatment with four cleavage solutions and with four types of linker groups indicate that, irrespective of the conditions, it is not possible to achieve complete removal of the peptide chains from the resin. Moreover, peptides attached through Phe at the C-terminus yielded far less cleavage (50-60%) than those bearing Gly at the same position (70-90%). Lastly, the fastest cleavage occurred with the reagent K acid treatment and when the peptide was attached to the Wang resin.

Relevance:

90.00%

Publisher:

Abstract:

Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
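
For concreteness, the usual three-stage decomposition behind such hierarchical models can be written as follows (standard bracket notation for distributions; the generic split into data, process and parameter models, not a model fitted in the article):

```latex
% Y = data, Z = latent ecological process, \theta_D, \theta_P = parameters.
\begin{align*}
  \text{data model:}      &\quad [\,Y \mid Z, \theta_{D}\,] \\
  \text{process model:}   &\quad [\,Z \mid \theta_{P}\,] \\
  \text{parameter model:} &\quad [\,\theta_{D}, \theta_{P}\,] \\[4pt]
  \text{posterior:}       &\quad [\,Z, \theta_{D}, \theta_{P} \mid Y\,]
    \;\propto\; [\,Y \mid Z, \theta_{D}\,]\,[\,Z \mid \theta_{P}\,]\,[\,\theta_{D}, \theta_{P}\,]
\end{align*}
```

In a Bayesian analysis the posterior on the right is explored directly (for example by MCMC); in a non-Bayesian treatment the same layered structure underlies mixed and random-effects models.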