936 results for Mean Transit Time
Abstract:
Sildenafil slows down the gastric emptying of a liquid test meal in awake rats and inhibits the contractility of intestinal tissue strips. We studied the acute effects of sildenafil on in vivo intestinal transit in rats. Fasted, male albino rats (180-220 g, N = 44) were treated (0.2 mL, iv) with sildenafil (4 mg/kg) or vehicle (0.01 N HCl). Ten minutes later they were fed a liquid test meal (99m technetium-labeled saline) injected directly into the duodenum. Twenty, 30 or 40 min after feeding, the rats were killed and transit throughout the gastrointestinal tract was evaluated by progression of the radiotracer using the geometric center method. The effect of sildenafil on mean arterial pressure (MAP) was monitored in a separate group of rats (N = 14). Data (medians within interquartile ranges) were compared by the Mann-Whitney U-test. The location of the geometric center was significantly more distal in vehicle-treated than in sildenafil-treated rats at 20, 30, and 40 min after test meal instillation (3.3 (3.0-3.6) vs 2.9 (2.7-3.1); 3.8 (3.4-4.0) vs 2.9 (2.5-3.1), and 4.3 (3.9-4.5) vs 3.4 (3.2-3.7), respectively; P < 0.05). MAP was unchanged in vehicle-treated rats but decreased by 25% (P < 0.05) within 10 min after sildenafil injection. In conclusion, besides transiently decreasing MAP, sildenafil delays the intestinal transit of a liquid test meal in awake rats.
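The geometric center method mentioned above reduces the radiotracer distribution to a single weighted-mean position along the gut. A minimal sketch, with hypothetical segment counts (the study's own segmentation scheme is not given here):

```python
# Geometric center of a radiotracer distribution: the mean segment position
# weighted by the counts in each segment.
# The segment counts below are hypothetical, for illustration only.

def geometric_center(counts):
    """counts: radioactivity per gut segment, proximal (1) to distal (k).

    Returns a value in [1, k]; larger values indicate more distal transit.
    """
    total = sum(counts)
    return sum((i + 1) * c for i, c in enumerate(counts)) / total

# Tracer concentrated in segments 2-3 gives a center between them
print(geometric_center([10, 30, 40, 15, 5]))  # -> 2.75
```

Values such as 3.8 vs 2.9 in the abstract compare exactly this statistic between treatment groups at a given time point.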
Abstract:
The objective of the present study was to determine the oral motor capacity and feeding performance of preterm newborn infants when they were permitted to start oral feeding. This was an observational, prospective study of 43 preterm newborns admitted to the Neonatal Intensive Care Unit of UFSM, RS, Brazil. Exclusion criteria were the presence of head and neck malformations, genetic disease, neonatal asphyxia, intracranial hemorrhage, and kernicterus. When the infants were permitted to start oral feeding, non-nutritive sucking was evaluated by a speech therapist with regard to force (strong vs weak), rhythm (rapid vs slow), presence of adaptive oral reflexes (searching, sucking and swallowing), and coordination between sucking, swallowing and respiration. Feeding performance was evaluated on the basis of competence (defined by the rate of milk intake, mL/min) and overall transfer (percent of ingested volume/total volume ordered). The speech therapist's evaluation showed that 33% of the newborns presented weak sucking, 23% slow rhythm, 30% absence of at least one adaptive oral reflex, and 14% no coordination between sucking, swallowing and respiration. Mean feeding competence was greater in infants with strong sucking and fast rhythm. The presence of sucking-swallowing-respiration coordination reduced the number of days needed to reach an overall transfer of 100%. Evaluation by a speech therapist proved to be a useful tool for safely indicating the start of oral feeding in premature infants.
Abstract:
Exercise training (Ex) has been recommended for its beneficial effects in hypertensive states. The present study evaluated the time-course effects of Ex without workload on mean arterial pressure (MAP), reflex bradycardia, cardiac and renal histology, and oxidative stress in two-kidney, one-clip (2K1C) hypertensive rats. Male Fischer rats (10 weeks old; 150–180 g) underwent surgery (2K1C or SHAM) and were subsequently divided into a sedentary (SED) group and Ex group (swimming 1 h/day, 5 days/week for 2, 4, 6, 8, or 10 weeks). Until week 4, Ex decreased MAP, increased reflex bradycardia, prevented concentric hypertrophy, reduced collagen deposition in the myocardium and kidneys, decreased the level of thiobarbituric acid-reactive substances (TBARS) in the left ventricle, and increased the catalase (CAT) activity in the left ventricle and both kidneys. From week 6 to week 10, however, MAP and reflex bradycardia in 2K1C Ex rats became similar to those in 2K1C SED rats. Ex effectively reduced heart rate and prevented collagen deposition in the heart and both kidneys up to week 10, and restored the level of TBARS in the left ventricle and clipped kidney and the CAT activity in both kidneys until week 8. Ex without workload for 10 weeks in 2K1C rats provided distinct beneficial effects. The early effects of Ex on cardiovascular function included reversing MAP and reflex bradycardia. The later effects of Ex included preventing structural alterations in the heart and kidney by decreasing oxidative stress and reducing injuries in these organs during hypertension.
Stochastic particle models: mean reversion and Burgers dynamics. An application to commodity markets
Abstract:
The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets, in which particles represent market participants. A discontinuity is included in the model through an interacting kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modeling some commodity markets, and this is confirmed by the ability of the model to reproduce price spikes when their effects persist over a sufficiently long period of time.
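An interacting particle system with a Heaviside kernel of the kind described above can be sketched with a simple Euler-Maruyama scheme. All parameters below (number of particles, volatility, step size, horizon) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Euler-Maruyama sketch of N particles ("traders") whose drift is the
# average of the Heaviside function H(x_i - x_j) over all other particles,
# i.e. the fraction of particles currently below particle i.
# Parameters are illustrative, not taken from the paper.

def heaviside_particles(n=200, sigma=0.3, dt=0.01, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)            # initial "prices" of n traders
    for _ in range(steps):
        # drift of particle i: mean of H(x_i - x_j) over j (strict >, so H(0)=0)
        drift = (x[:, None] > x[None, :]).mean(axis=1)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
    return x

prices = heaviside_particles()
print(round(prices.mean(), 2))
```

As N grows, the empirical distribution of such a system approaches the solution of a deterministic Burgers-type PDE, which is the convergence result the abstract refers to.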
Abstract:
One way of exploring the power of sound in the experience and constitution of space is through the phenomenon of personal listening devices (PLDs) in public environments. In this thesis, I draw from in-depth interviews with eleven Brock University students in St. Catharines, Ontario, to show how PLDs (such as MP3 players like the iPod) are used to create personalized soundscapes and mediate their public transit journeys. I discuss how my interview participants experience the space-time of public transit, and show how PLDs are used to mediate these experiences in acoustic and non-acoustic ways. PLD use demonstrates that acoustic and environmental experiences are co-constitutive, which highlights a kinaesthetic quality of the transit-space. My empirical findings show that PLDs transform space, particularly by overlapping public and private appropriations of the bus. I use these empirical findings to discuss the PLD phenomenon in the theoretical context of spatiality, and more specifically, acoustic space. I develop the ontological notion of acoustic space, stating that space shares many of the properties of sound, and argue that sound is a rich epistemological tool for understanding and explaining our everyday experiences.
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one's situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as well as an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Abstract:
This master's thesis examines four plays by the French playwright Bernard-Marie Koltès. Focusing squarely on the written text, it first analyzes a set of dislocations and a culture of ambiguity in the treatment of place, time, and identity. The primary aim of these analyses is to deepen certain avenues already explored by critics (the marginality of places and characters, the constant presence of violence and death, etc.), in search of an overall meaning for a set of textual practices tied to the notion of limit or boundary. Shifting perspective, the second part of the study turns to violence, and to the modalities and implications of its inscription in the text, through a combined study of the creative act and the practice of reading. Drawing on the theory of sacrifice developed by René Girard in La violence et le sacré, and bringing into dialogue important texts by various authors (Aristotle, Nietzsche, Artaud, Foucault, Derrida, and Adorno), the analysis seeks to situate the literarity of Koltès's text within a broader enterprise concerned with violence, its regulation, and its diffusion. At the crossroads of theatre studies, literature, and philosophy, this reflection seeks to conceive of Koltès's theatre (particularly Roberto Zucco) as a ritual sacrifice, in order to better understand its relation to the real and some of its particularities from the standpoint of the cathartic mechanism.
Inference for nonparametric high-frequency estimators with an application to time variation in betas
Abstract:
We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two Scale estimator even when its parameters are chosen to minimize the finite sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data capture more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
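The core subsampling idea, recomputing the statistic on many short blocks and rescaling its dispersion, can be illustrated in a simplified i.i.d. setting with the sample mean. The paper's high-frequency estimators require multivariate blocks and different convergence rates, so this is only a toy analogue under a root-n rate assumption:

```python
import numpy as np

# Toy subsampling variance estimator: recompute the statistic on all
# overlapping blocks of length b, then rescale the block-level variance
# by b to recover the full-sample sampling variance (root-n rate assumed).

def subsample_variance(x, stat, b):
    n = len(x)
    blocks = [stat(x[i:i + b]) for i in range(n - b + 1)]
    # variance of the block statistics, scaled back to full-sample units
    return b * np.var(blocks, ddof=1)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 5000)
# For the sample mean, b * Var(block mean) should be close to Var(x) = 4
print(subsample_variance(x, np.mean, b=200))
```

No analytic variance formula for `stat` is needed, which is exactly the appeal of subsampling when asymptotic variances are unknown or unwieldy.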
Abstract:
The thesis covers various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach parallel to the classical setup. In the present thesis we mainly study the estimation and prediction of a signal-plus-noise model, assuming that the signal and noise follow models with symmetric stable innovations. We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theories based on finite-variance models are extensively discussed in the second chapter, where we also survey the existing theories and methods for infinite-variance models. In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations for estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite-variance innovations. We derive semi-infinite, double-infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance. Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we use higher-order Yule-Walker-type estimation based on the auto-covariation function and illustrate the methods by simulation and by application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition. In the fifth chapter of the thesis we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models. In chapter six we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. Here we introduce a parametric spectrum analysis and a frequency estimate using the power transfer function, an estimate of which is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
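The idea of stacking more Yule-Walker-type equations than parameters and solving by least squares (here via the SVD-based `np.linalg.lstsq`) can be sketched as follows. Ordinary autocovariances of a Gaussian AR(2) stand in for the auto-covariation function used with stable innovations, and the coefficients 0.5 and -0.3 are illustrative:

```python
import numpy as np

# Overdetermined Yule-Walker sketch: stack m > p moment equations
# r[k] = phi_1 r[k-1] + ... + phi_p r[k-p], k = 1..m, and solve by
# least squares. Autocovariances of a Gaussian AR(2) are used here in
# place of the auto-covariation function; coefficients are illustrative.

def acov(x, lag):
    x = x - x.mean()
    return np.dot(x[:len(x) - lag], x[lag:]) / len(x)

rng = np.random.default_rng(2)
n, phi = 20000, np.array([0.5, -0.3])
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + e[t]

p, m = 2, 8                      # AR order, number of stacked equations
r = np.array([acov(x, k) for k in range(m + p)])
A = np.array([[r[abs(k - j)] for j in range(1, p + 1)] for k in range(1, m + 1)])
b = r[1:m + 1]
phi_hat, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based solve
print(phi_hat)                    # close to [0.5, -0.3] for large n
```

When the stacked moment matrix is (near-)singular, the SVD underlying `lstsq` discards negligible singular values, which is the same remedy the thesis applies to the auto-covariation matrix.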
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and entries are real values. Through the analysis of gene expression data it is possible to determine the behavioral patterns of genes, such as the similarity of their behavior, the nature of their interactions, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data; to overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix.
Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the original matrix, and biclusters are not disjoint. Computation of biclusters is costly because one has to consider all combinations of rows and columns in order to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively; usually m+n is more than 3000, and the biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the biclustering problem. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All of these algorithms make use of a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with mean squared residue lower than a given threshold.
All of these algorithms begin the search from tightly co-regulated submatrices called seeds, which are generated by the K-means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. The constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms were implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters were identified by all of these algorithms and validated against the Gene Ontology database. All of these algorithms were compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing ones. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than that of any other algorithm using the mean squared residue, were identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
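The mean squared residue these algorithms search with measures how far a submatrix deviates from a perfectly additive pattern (each entry explained by a row effect, a column effect, and a background level). A minimal sketch with a toy matrix:

```python
import numpy as np

# Mean squared residue (MSR) of a bicluster: the mean squared deviation of
# each entry from its row mean + column mean - overall mean. An MSR of 0
# means the submatrix is perfectly additive (coherent).

def mean_squared_residue(sub):
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    residue = sub - row_means - col_means + sub.mean()
    return float((residue ** 2).mean())

# A perfectly additive bicluster (entry = row effect + column effect)
perfect = np.array([[0.0, 2.0, 4.0],
                    [2.0, 4.0, 6.0],
                    [4.0, 6.0, 8.0]])
print(mean_squared_residue(perfect))  # -> 0.0
```

The algorithms in the thesis grow a seed submatrix while keeping this score below a threshold, trading off bicluster size against coherence.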
Abstract:
This thesis, entitled "Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data", consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities of the component distributions, are investigated, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models to single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are equally efficient and convenient compared with many conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation and could be the subject of future work in this area.
Abstract:
In this paper the effectiveness of a novel method of computer-assisted pedicle screw insertion was studied using a hypothesis-testing procedure with a sample size of 48. Pattern recognition based on geometric features of markers on the drill was performed on real-time optical video obtained from orthogonally placed CCD cameras. The study reveals the exactness of the calculated position of the drill using navigation based on a CT image of the vertebra and real-time optical video of the drill. The significance value is 0.424 at the 95% confidence level, which indicates good precision with a standard mean error of only 0.00724. The virtual vision method is less hazardous to both the patient and the surgeon.
Abstract:
Introduction: Comprehensive undergraduate education in the clinical sciences is grounded in activities developed during clerkships. To implement the credit system we must know how these experiences take place. Objectives: to describe how students spend their time in clerkships, and how they assess the educational value and enjoyment of the activities. Method: We distributed a form to a random clustered sample of 100 students taking clinical sciences, designed to record the time spent and to assess the educational value and degree of enjoyment of clerkship activities during one week. Data were recorded and analyzed in Excel® 98 and SPSS. Results: the mean time spent by students in clerkship activities per day was 10.8 hours. Of those, 7.3 hours (69%) were spent in formal educational activities. Patient care activities with teachers occupied the largest proportion of time (15.4%). Of the teaching and learning activities in a week, 28 hours (56%) were spent in patient care activities and 22.4 hours (44.5%) in independent academic work. The time spent in teaching and learning activities corresponds to 19 credits in a semester of 18 weeks. The activities assessed as having the greatest educational value were homework activities (4.6) and formal educational activities (4.5). Those graded as most enjoyable were extracurricular activities, formal educational activities and independent academic work. Conclusion: our students spend more time in activities with patients than reported in the literature. The attendance workload of our students is greater than that reported in similar studies.
Abstract:
In principle the global mean geostrophic surface circulation of the ocean can be diagnosed by subtracting a geoid from a mean sea surface (MSS). However, because the resulting mean dynamic topography (MDT) is approximately two orders of magnitude smaller than either of the constituent surfaces, and because the geoid is most naturally expressed as a spectral model while the MSS is a gridded product, in practice complications arise. Two algorithms for combining MSS and satellite-derived geoid data to determine the ocean's MDT are considered in this paper: a pointwise approach, whereby the gridded geoid height field is subtracted from the gridded MSS; and a spectral approach, whereby the spherical harmonic coefficients of the geoid are subtracted from an equivalent set of coefficients representing the MSS, from which the gridded MDT is then obtained. The essential difference is that with the latter approach the MSS is truncated, a form of filtering, just as with the geoid. This ensures that errors of omission resulting from the truncation of the geoid, which are small in comparison to the geoid but large in comparison to the MDT, are matched, and therefore negated, by similar errors of omission in the MSS. The MDTs produced by both methods require additional filtering. However, the spectral MDT requires less filtering to remove noise, and therefore it retains more oceanographic information than its pointwise equivalent. The spectral method also results in a more realistic MDT at coastlines.

1. Introduction An important challenge in oceanography is the accurate determination of the ocean's time-mean dynamic topography (MDT). If this can be achieved with sufficient accuracy for combination with the time-dependent component of the dynamic topography, obtainable from altimetric data, then the resulting sum (i.e., the absolute dynamic topography) will give an accurate picture of surface geostrophic currents and ocean transports.
Abstract:
Previous assessments of the impacts of climate change on heat-related mortality use the "delta method" to create temperature projection time series that are applied to temperature-mortality models to estimate future mortality impacts. The delta method ensures that climate model bias in the modelled present does not influence the temperature projection time series and impacts. However, the delta method assumes that climate change will result only in a change in the mean temperature, whereas there is evidence that the variability of temperature will also change. The aim of this paper is to demonstrate the importance of considering changes in temperature variability with climate change in impact assessments of future heat-related mortality. We investigate future heat-related mortality impacts in six cities (Boston, Budapest, Dallas, Lisbon, London and Sydney) by applying temperature projections from the UK Meteorological Office HadCM3 climate model to the temperature-mortality models constructed and validated in Part 1. We investigate the impacts for four cases based on various combinations of mean and variability changes in temperature with climate change. The results demonstrate that higher mortality is attributed to increases in both the mean and variability of temperature with climate change than to the change in mean temperature alone. This has implications for interpreting existing impact estimates that have used the delta method. We present a novel method for creating temperature projection time series that includes changes in both the mean and variability of temperature with climate change and is not influenced by climate model bias in the modelled present; the method should be useful for future impact assessments. Few studies consider the implications that the limitations of the climate model may have for heat-related mortality impacts.
Here, we demonstrate the importance of doing so by conducting an evaluation of the daily and extreme temperatures from HadCM3, which shows that estimates of future heat-related mortality for Dallas and Lisbon may be overestimated due to positive climate model bias, while estimates for Boston and London may be underestimated due to negative climate model bias. Finally, we briefly consider uncertainties in the impacts associated with greenhouse gas emissions and acclimatisation. The uncertainties in the mortality impacts due to different future greenhouse gas emissions scenarios varied considerably by location. Allowing for acclimatisation to an extra 2°C in mean temperatures reduced future heat-related mortality by approximately half relative to no acclimatisation in each city.
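The contrast the paper draws can be sketched by comparing a pure mean shift with a projection that also rescales variability about the mean. The +3°C shift and 1.2 variability inflation below are illustrative assumptions, not HadCM3 values:

```python
import numpy as np

# Sketch contrasting the classic delta method (mean shift only) with a
# projection that also scales variability about the mean. The shift and
# scale factor are illustrative, not taken from any climate model.

def delta_method(obs, mean_change):
    # classic delta method: shift every observed temperature by the same amount
    return obs + mean_change

def mean_and_variability(obs, mean_change, sd_ratio):
    # shift the mean AND inflate deviations about it by sd_ratio
    mu = obs.mean()
    return mu + sd_ratio * (obs - mu) + mean_change

rng = np.random.default_rng(3)
obs = rng.normal(20.0, 5.0, 10000)          # synthetic daily temperatures
a = delta_method(obs, 3.0)
b = mean_and_variability(obs, 3.0, sd_ratio=1.2)
# Same mean shift, but b has a wider spread -> more extreme-heat days
print(a.mean() - obs.mean(), b.std() / obs.std())
```

Because heat-mortality relationships are strongly nonlinear in the upper tail, the wider spread of the second series yields more days above mortality thresholds even though both series have the same mean change, which is the paper's central point.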