951 results for Space-time analysis
Abstract:
This note considers continuous-time Markov chains whose state space consists of an irreducible class, C, and an absorbing state which is accessible from C. The purpose is to provide results on mu-invariant and mu-subinvariant measures where absorption occurs with probability less than one. In particular, the well-known premise that the mu-invariant measure, m, for the transition rates be finite is replaced by the more natural premise that m be finite with respect to the absorption probabilities. The relationship between mu-invariant measures and quasi-stationary distributions is discussed. (C) 2000 Elsevier Science Ltd. All rights reserved.
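For readers outside this literature, the central objects can be stated briefly (these are the standard definitions from the quasi-stationary-distribution literature, not reproduced from the note itself). For the rate matrix Q = (q_ij) restricted to C:

```latex
% mu-subinvariant measure m = (m_j, j in C) for Q:
\sum_{i \in C} m_i\, q_{ij} \;\le\; -\mu\, m_j, \qquad j \in C,
% mu-invariant measure: equality holds for every j in C:
\sum_{i \in C} m_i\, q_{ij} \;=\; -\mu\, m_j, \qquad j \in C.
```

Loosely, a quasi-stationary distribution is a suitably normalised mu-invariant probability measure; the precise relationship, when absorption occurs with probability less than one, is what the note examines.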
Abstract:
Testing ecological models for management is an increasingly important part of the maturation of ecology as an applied science. Consequently, we need to work at applying fair tests of models with adequate data. We demonstrate that a recent test of a discrete-time, stochastic model was biased towards falsifying the predictions. If the model were a perfect description of reality, the test falsified the predictions 84% of the time. We introduce an alternative testing procedure for stochastic models, and show that it falsifies the predictions only 5% of the time when the model is a perfect description of reality. The example is used as a point of departure to discuss some of the philosophical aspects of model testing.
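The calibration idea behind such a test can be sketched with a small Monte Carlo experiment. Everything below is hypothetical (a toy log-linear stochastic growth model with made-up parameters, not the model from the paper): the point is only that a test built from the model's own null distribution rejects a correct model near the nominal 5% rate, not 84%.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_population(n0, r, sigma, steps, rng):
    # hypothetical discrete-time stochastic growth model:
    # log N_{t+1} = log N_t + r + sigma * eps_t
    increments = r + sigma * rng.normal(size=steps)
    return n0 * np.exp(np.concatenate(([0.0], np.cumsum(increments))))

def monte_carlo_test(observed, n0, r, sigma, steps, rng,
                     n_sims=499, alpha=0.05):
    # build the null distribution of the test statistic (final size)
    # by simulating the model itself, then reject only when the
    # observed statistic falls in the alpha tails
    finals = np.array([simulate_population(n0, r, sigma, steps, rng)[-1]
                       for _ in range(n_sims)])
    lo, hi = np.quantile(finals, [alpha / 2, 1 - alpha / 2])
    return not (lo <= observed[-1] <= hi)   # True = prediction "falsified"

# when the model is a perfect description of the data-generating
# process, the rejection rate should sit near the nominal 5%
rejections = sum(
    monte_carlo_test(simulate_population(100.0, 0.05, 0.2, 20, rng),
                     100.0, 0.05, 0.2, 20, rng)
    for _ in range(200)
)
rate = rejections / 200
print(rate)
```

With a correct model, `rate` lands close to 0.05; a testing procedure that ignores the model's own stochastic spread is what produces the 84% false-falsification rate the abstract criticises.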
Abstract:
The convection-dispersion model and its extended form have been used to describe solute disposition in organs and to predict hepatic availabilities. A range of empirical transit-time density functions has also been used for a similar purpose. The use of the dispersion model with mixed boundary conditions and transit-time density functions has been queried recently by Hisaka and Sugiyama in this journal. We suggest that, consistent with the soil science and chemical engineering literature, the mixed boundary conditions are appropriate provided concentrations are defined in terms of flux to ensure continuity at the boundaries and mass balance. It is suggested that the use of the inverse Gaussian or other functions as empirical transit-time densities is independent of any boundary condition consideration. The mixed boundary condition solutions of the convection-dispersion model are the easiest to use when linear kinetics applies. In contrast, the closed conditions are easier to apply in a numerical analysis of nonlinear disposition of solutes in organs. We therefore argue that the use of hepatic elimination models should be based on pragmatic considerations, giving emphasis to using the simplest or easiest solution that will give a sufficiently accurate prediction of hepatic pharmacokinetics for a particular application. (C) 2000 Wiley-Liss Inc. and the American Pharmaceutical Association. J Pharm Sci 89:1579-1586, 2000.
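The inverse Gaussian mentioned here, in its usual transit-time parameterisation (mean transit time and relative dispersion CV^2), is easy to sanity-check numerically. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def inverse_gaussian_density(t, mtt, cv2):
    # inverse Gaussian transit-time density parameterised by the mean
    # transit time (mtt) and the relative dispersion CV^2 (cv2)
    return np.sqrt(mtt / (2 * np.pi * cv2 * t ** 3)) * \
           np.exp(-(t - mtt) ** 2 / (2 * cv2 * mtt * t))

# sanity checks on a hypothetical hepatic transit-time density:
# it should integrate to 1 and have mean equal to mtt
t = np.linspace(1e-4, 200.0, 400000)
dt = t[1] - t[0]
f = inverse_gaussian_density(t, mtt=20.0, cv2=0.5)
area = f.sum() * dt
mean = (t * f).sum() * dt
print(area, mean)  # ≈ 1.0 and ≈ 20.0
```

Because the density is used purely as an empirical weighting of transit times, nothing in this check depends on a boundary-condition choice, which is the abstract's point.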
Abstract:
We perform a quantum-mechanical analysis of the pendular cavity, using the positive-P representation, showing that the quantum state of the moving mirror, a macroscopic object, has noticeable effects on the dynamics. This system has previously been proposed as a candidate for the quantum-limited measurement of small displacements of the mirror due to radiation pressure, for the production of states with entanglement between the mirror and the field, and even for superposition states of the mirror. However, when we treat the oscillating mirror quantum mechanically, we find that it always oscillates, has no stationary steady state, and exhibits uncertainties in position and momentum which are typically larger than the mean values. This means that previous linearized fluctuation analyses which have been used to predict these highly quantum states are of limited use. We find that the achievable accuracy in measurement is far worse than the standard quantum limit due to thermal noise, which, for typical experimental parameters, is overwhelming even at 2 mK.
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
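The augmentation trick can be sketched in a few lines. The system below is hypothetical (a two-coefficient "demand" model with the made-up constraint b1 + b2 = c_t and invented noise levels, not the paper's example): the time-varying constraint is appended to the observation equation as a pseudo-observation with (near-)zero measurement variance, and the ordinary Kalman update then enforces it.

```python
import numpy as np

def kalman_update(beta, P, H, y, R):
    # standard measurement update for y = H beta + e, Var(e) = R
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return beta + K @ (y - H @ beta), P - K @ H @ P

# hypothetical 2-coefficient model with the time-varying constraint
# b1 + b2 = c_t, imposed via an augmented observation equation
rng = np.random.default_rng(1)
beta, P = np.zeros(2), np.eye(2) * 10.0
for t in range(50):
    c_t = 1.0 + 0.01 * t                  # time-varying constraint value
    b1 = 0.3 + 0.005 * t
    true_beta = np.array([b1, c_t - b1])  # satisfies the constraint
    x_t = rng.normal(size=2)
    y_t = x_t @ true_beta + 0.1 * rng.normal()
    H = np.vstack([x_t, [1.0, 1.0]])      # augmented observation matrix
    y = np.array([y_t, c_t])              # augmented observation vector
    R = np.diag([0.01, 1e-12])            # constraint variance ~ 0
    P = P + np.eye(2) * 0.001             # random-walk state evolution
    beta, P = kalman_update(beta, P, H, y, R)

violation = abs(beta.sum() - c_t)
print(beta, violation)  # filtered estimate obeys b1 + b2 = c_t
```

In the paper the likelihood built from the filter is then maximised by numerical optimisation; the sketch above only shows the filtering side of the method.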
Wavelet correlation between subjects: A time-scale data driven analysis for brain mapping using fMRI
Abstract:
Functional magnetic resonance imaging (fMRI) based on the BOLD signal has been used to indirectly measure the local neural activity induced by cognitive tasks or stimulation. Most fMRI data analysis is carried out using the general linear model (GLM), a statistical approach which predicts the changes in the observed BOLD response based on an expected hemodynamic response function (HRF). When the task is cognitively complex, or in cases of disease, variations in shape and/or delay may reduce the reliability of results. A novel exploratory method using fMRI data, which attempts to discriminate neurophysiological signals induced by the stimulation protocol from artifacts or other confounding factors, is introduced in this paper. This new method is based on the fusion of correlation analysis and the discrete wavelet transform, to identify similarities in the time course of the BOLD signal in a group of volunteers. We illustrate the usefulness of this approach by analyzing fMRI data from normal subjects presented with standardized human face pictures expressing different degrees of sadness. The results show that the proposed wavelet correlation analysis has greater statistical power than conventional GLM or time domain intersubject correlation analysis. (C) 2010 Elsevier B.V. All rights reserved.
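The core idea, correlating two subjects' wavelet coefficients scale by scale, can be sketched with a hand-rolled Haar transform. The "subjects" below are synthetic toy signals (a shared slow oscillation plus independent noise), not fMRI data, and the Haar wavelet stands in for whatever wavelet family the paper actually uses:

```python
import numpy as np

def haar_step(x):
    # one level of the Haar DWT: approximation and detail coefficients
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def intersubject_wavelet_correlation(a, b, levels):
    # correlate the two subjects' detail coefficients scale by scale,
    # plus the final coarse approximation
    detail_corrs = []
    for _ in range(levels):
        a, da = haar_step(a)
        b, db = haar_step(b)
        detail_corrs.append(np.corrcoef(da, db)[0, 1])
    return detail_corrs, np.corrcoef(a, b)[0, 1]

# two synthetic "subjects": a shared slow stimulus response plus
# independent fast noise; the shared signal surfaces only at coarse
# scales, which is the effect the method exploits
rng = np.random.default_rng(2)
n = np.arange(256)
stimulus = np.sin(2 * np.pi * n / 64)
subj1 = stimulus + 0.8 * rng.normal(size=256)
subj2 = stimulus + 0.8 * rng.normal(size=256)
detail_corrs, approx_corr = intersubject_wavelet_correlation(subj1, subj2, 4)
print(detail_corrs, approx_corr)
```

Fine-scale detail coefficients are dominated by the independent noise and correlate near zero, while the coarse approximation, which carries the shared stimulus response, correlates strongly; restricting the intersubject correlation to the informative scales is what gives the time-scale approach its power over a plain time-domain correlation.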
Abstract:
Objective: Micro RNA (miRNA) is a class of small noncoding RNA that plays a major role in the regulation of gene expression, which has been related to cancer behavior. The possibility of analyzing miRNA from the archives of pathology laboratories is exciting, as it allows for large retrospective studies. Formalin is the most common fixative used in the surgical pathology routine, and its promotion of nucleic acid degradation is well known. Our aim is to compare miRNA profiles from formalin-fixed paraffin embedded (FFPE) tissues with fresh-frozen prostate cancer tissues. Methods: The expression of 14 miRNAs was determined by quantitative real time polymerase chain reaction (qRT-PCR) in 5 paired fresh-frozen and FFPE tissues, which were representative of prostate carcinoma. Results: There was a very good correlation of the miRNA expression of miR-let7c and miR-32 between the fresh-frozen and FFPE tissues, with Pearson's correlation coefficients of 0.927 (P = 0.023) and 0.960 (P = 0.010), respectively. For the remaining miRNAs, the correlation was good, with a Spearman correlation coefficient of 0.638 (P < 0.001). Conclusion: Analysis of miRNAs from routinely processed and stored FFPE prostate tissue is feasible for some miRNAs using qRT-PCR. Further studies should be conducted to confirm the reliability of using stock tissues for miRNA expression determination. (C) 2011 Elsevier Inc. All rights reserved.
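The two correlation measures the study reports differ in a simple way: Spearman's coefficient is just the Pearson correlation computed on ranks, making it robust to monotone distortions such as fixation-related degradation. A minimal sketch (the paired values below are invented, not the study's data; the rank helper assumes no ties):

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation via the off-diagonal of the correlation matrix
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

def spearman(x, y):
    # Spearman's coefficient is the Pearson correlation of the ranks
    # (simple version: assumes no tied values)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

# hypothetical paired expression values (fresh-frozen vs FFPE) for one
# miRNA across 5 patients; the study's actual data are not reproduced
fresh = [2.1, 3.4, 1.2, 5.6, 4.0]
ffpe = [2.0, 3.1, 1.5, 5.2, 3.8]
r_p = pearson(fresh, ffpe)
r_s = spearman(fresh, ffpe)
print(round(r_p, 3), round(r_s, 3))
```

Here the two tissue series preserve the same ordering across patients, so the Spearman coefficient is exactly 1 even though the raw values differ slightly.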
Abstract:
The prognosis of glioblastomas is still extremely poor and the discovery of novel molecular therapeutic targets can be important to optimize treatment strategies. Gene expression analyses comparing normal and neoplastic tissues have been used to identify genes associated with tumorigenesis and potential therapeutic targets. We have used this approach to identify differentially expressed genes between primary glioblastomas and non-neoplastic brain tissues. We selected 20 overexpressed genes related to cell cycle, cellular movement and growth, proliferation and cell-to-cell signaling and analyzed their expression levels by real time quantitative PCR in cDNA obtained from microdissected fresh tumor tissue from 20 patients with primary glioblastomas and from 10 samples of non-neoplastic white matter tissue. The gene expression levels were significantly higher in glioblastomas than in non-neoplastic white matter in 18 out of 20 genes analyzed: P < 0.00001 for CDKN2C, CKS2, EEF1A1, EMP3, PDPN, BNIP2, CA12, CD34, CDC42EP4, PPIE, SNAI2, GDF15 and MMP23b; and NFIA (P = 0.0001), GPS1 (P = 0.0003), LAMA1 (P = 0.002), STIM1 (P = 0.006), and TASP1 (P = 0.01). Five of these genes are located in contiguous loci at 1p31-36 and 2 at 17q24-25, and 8 of them encode surface membrane proteins. PDPN and CD34 protein expression were evaluated by immunohistochemistry and showed concordance with the PCR results. The present results indicate the presence of 18 overexpressed genes in human primary glioblastomas that may play a significant role in the pathogenesis of these tumors and that deserve further functional investigation as attractive candidates for new therapeutic targets.
Abstract:
The critically endangered black-faced lion tamarin, Leontopithecus caissara, has a restricted geographical distribution consisting of small mainland and island populations, each with distinct habitats in coastal southeastern Brazil. Necessary conservation management actions require an assessment of whether differences in habitats are reflected in use of space by the species. We studied two tamarin groups on the mainland in Sao Paulo state between August 2005 and March 2007, and compared the results with data from Superagui Island. Three home range estimators were used: minimum convex polygon (MCP), kernel, and a new technique presented here, dissolved monthly polygons (DMP). These resulted, respectively, in home ranges of 345, 297, and 282 ha for the 12-month duration of the study. Spatial overlap of mainland groups was extensive, whereas temporal overlap was not, a pattern that indicates resource partitioning is an important strategy to avoid intraspecific competition. The large home ranges of L. caissara seem to be dynamic, with constant incorporation of new areas and abandonment of others through time. The main difference between mainland and island groups is the amount and variety of sleeping sites. A better understanding of the home range sizes, day range lengths, and territorial behavior of this species will aid in developing better management strategies for its protection. Additionally, the presented DMP protocol is a useful improvement over the MCP method as it results in more realistic home range sizes for wildlife species. Am. J. Primatol. 73: 1114-1126, 2011. (C) 2011 Wiley Periodicals, Inc.
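Of the three estimators, the MCP is the simplest to state: the home range is the area of the convex hull of all location fixes. A minimal sketch (the coordinates are invented, not the study's telemetry data):

```python
import numpy as np

def mcp_area(points):
    # 100% minimum convex polygon: Andrew's monotone-chain convex hull,
    # then the shoelace formula for the enclosed area
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(pts):
        h = []
        for p in pts:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    pts = sorted(set(map(tuple, points)))
    hull = chain(pts) + chain(pts[::-1])
    x = np.array([p[0] for p in hull])
    y = np.array([p[1] for p in hull])
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# hypothetical location fixes in metres; interior fixes do not change the MCP
fixes = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 50), (20, 80)]
area_ha = mcp_area(fixes) / 10000.0  # m^2 -> hectares
print(area_ha)  # 1.0
```

The example also shows the MCP's known weakness, which motivates alternatives like DMP: interior fixes contribute nothing, so a few peripheral excursions inflate the estimate over areas the animals never actually use.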
Abstract:
Introduction: This ex vivo study evaluated the heat release, time required, and cleaning efficacy of MTwo (VDW, Munich, Germany) and ProTaper Universal Retreatment systems (Dentsply/Maillefer, Ballaigues, Switzerland) and hand instrumentation in the removal of filling material. Methods: Sixty single-rooted human teeth with a single straight canal were obturated with gutta-percha and zinc oxide and eugenol-based cement and randomly allocated to 3 groups (n = 20). After 30-day storage at 37 degrees C and 100% humidity, the root fillings were removed using ProTaper UR, MTwo R, or hand files. Heat release, time required, and cleaning efficacy data were analyzed statistically (analysis of variance and the Tukey test, alpha = 0.05). Results: None of the techniques removed the root fillings completely. Filling material removal with ProTaper UR was faster but caused more heat release. MTwo R produced less heat release than the other techniques but was the least efficient in removing gutta-percha/sealer. Conclusions: ProTaper UR and MTwo R caused the greatest and lowest temperature increase on the root surface, respectively; regardless of the type of instrument, more heat was released in the cervical third. ProTaper UR needed less time to remove fillings than MTwo R. All techniques left filling debris in the root canals. (J Endod 2010;36:1870-1873)
Abstract:
This study compared an ultrasonic chemical vapor deposition (CVD)-coated tip (CVDentus #8.1117-1; Clorovale Diamantes Ind. e Com. Ltda Epp, Sao Jose dos Campos, SP, Brazil) versus high-speed (#FG700L) and low-speed (#699) carbide burs for apicoectomy, evaluating the time required for resection and analyzing the root-end surfaces by scanning electron microscopy. Thirty extracted human premolars had the canals instrumented and obturated and were randomly assigned to 3 groups (n = 10), according to the instrument used for root-end resection. The time required for resection of the apical 2 mm of each root was recorded. The resected apical segments were dried, sputter coated with gold, and examined with a scanning electron microscope at ×350 magnification. A four-point (0-3) scoring system was used to evaluate the apical surface smoothness. The results were analyzed statistically by the Kruskal-Wallis test, and two-by-two comparisons were performed using the Miller test. The significance level was set at 5%. Root-end resection with the high-speed bur was significantly faster (p < 0.05) compared with the low-speed bur and CVD tip. The carbide burs produced significantly smoother root-end surfaces than the CVD tip (p < 0.05). The low-speed bur produced the smoothest root-end surfaces, whereas the roughest and most irregular root ends (p < 0.05) were obtained with the CVD tip. However, no statistically significant difference (p > 0.05) was found between the high- and low-speed burs regarding the surface roughness of the resected root ends. In conclusion, under the tested conditions, ultrasonic root-end resection took a longer time and resulted in rougher surfaces compared with the use of carbide burs at both high and low speed. (J Endod 2009;35:265-268)
Abstract:
The anisotropic norm of a linear discrete-time-invariant system measures system output sensitivity to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H-2 and H-infinity norms and accommodates their loss of performance when the probability structure of input disturbances is not exactly known. This paper develops a method for numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
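The full anisotropic-norm algorithm couples Riccati and Lyapunov equations, which is beyond a short sketch; but the H-2 end of the scale it interpolates already shows the Lyapunov building block. Below is a minimal H-2 norm computation for a toy discrete system (the example system is invented), solving the Lyapunov equation by vectorisation rather than a dedicated solver:

```python
import numpy as np

def discrete_lyapunov(A, Q):
    # solve P - A P A^T = Q via the vectorisation identity
    # vec(A P A^T) = (A kron A) vec(P)
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A, A)
    return np.linalg.solve(M, Q.reshape(-1)).reshape(n, n)

def h2_norm(A, B, C, D):
    # controllability Gramian P solves P = A P A^T + B B^T;
    # the squared H-2 norm is trace(C P C^T + D D^T)
    P = discrete_lyapunov(A, B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T + D @ D.T)))

# toy first-order system x_{t+1} = 0.5 x_t + w_t, y_t = x_t:
# the Gramian is P = 1 / (1 - 0.25) = 4/3, so the norm is sqrt(4/3)
A = np.array([[0.5]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
norm = h2_norm(A, B, C, D)
print(norm)  # ≈ 1.1547
```

The H-2 norm assumes white input noise; the anisotropic norm replaces that assumption with a bound on the input's mean anisotropy, which is where the additional linked equations enter.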
Abstract:
Objectives: To evaluate the bonding interface in experimentally weakened roots reinforced with adhesive restorative materials and quartz fibre posts, varying the light-exposure time of the composite resin used for root reinforcement. Methods: Twelve extracted human maxillary incisor teeth were used. The crowns were removed and the roots were endodontically treated. After post space preparation, the roots were assigned to four groups. The thickness of the root dentine was reduced and adhesively restored with composite resin light-activated through a translucent fibre post for either 40 s (group 1), 80 s (group 2) or 120 s (group 3). In the case of control (group 4), the roots were not weakened. One day after post cementation, the specimens were sectioned transversally in three slices and processed for scanning electron microscopic analysis to observe bonding interface formation, quality of the hybrid layer and density of resin tags using a four-step scale method. Results: Formation of a hybrid layer and resin tags were evident in all groups. There was no statistically significant (p > 0.05) difference between the regions analysed in each group (Friedman test) or between groups at each section depth (Kruskal-Wallis test). Furthermore, comparison of the flared/reinforced groups showed that the different times used for composite resin curing did not affect the results significantly (Kruskal-Wallis test, p = 0.2139). Conclusions: Different light-exposure times used for composite resin polymerisation during root canal reinforcement did not significantly affect the formation and quality of the dentine/adhesive/composite resin bonding interface. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Objectives: The purpose of this in vitro study was to evaluate the Vickers hardness (VHN) of a Light Core (Bisco) composite resin after root reinforcement, according to the light exposure time, region of intracanal reinforcement and lateral distance from the light-transmitting fibre post. Methods: Forty-five 17-mm long roots were used. Twenty-four hours after obturation, the root canals were emptied to a depth of 12 mm and the root dentine was artificially flared to produce a 1 mm space between the fibre post and the canal walls. The roots were bulk restored with the composite resin, which was photoactivated through the post for 40 s (G1, control), 80 s (G2) or 120 s (G3). Twenty-four hours after post cementation, the specimens were sectioned transversely into three slices at depths of 2, 6 and 10 mm, corresponding to the coronal, middle and apical regions of the reinforced root. Composite VHN was measured as the average of three indentations (100 g/15 s) in each region at lateral distances of 50, 200 and 350 µm from the cement/post interface. Results: Three-way analysis of variance (alpha = 0.05) indicated that the factors time, region and distance influenced the hardness and that the interaction time × region was statistically significant (p = 0.0193). Tukey's test showed that the mean VHN values for G1 (76.37 +/- 8.58) and G2 (74.89 +/- 6.28) differed significantly from that for G3 (79.5 +/- 5.18). Conclusions: Composite resin hardness was significantly lower in deeper regions of root reinforcement and in lateral areas distant from the post. Overall, a light exposure time of 120 s provided higher composite hardness than the shorter times (40 and 80 s). (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
The catalytic properties of enzymes are usually evaluated by measuring and analyzing reaction rates. However, analyzing the complete time course can be advantageous because it contains additional information about the properties of the enzyme. Moreover, for systems that are not at steady state, the analysis of time courses is the preferred method. One of the major barriers to the wide application of time courses is that it may be computationally more difficult to extract information from these experiments. Here the basic approach to analyzing time courses is described, together with some examples of the essential computer code to implement these analyses. A general method that can be applied to both steady state and non-steady-state systems is recommended. (C) 2001 Academic Press.
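The general approach, simulate the rate equation numerically and fit its parameters against the whole progress curve, can be sketched as follows. Everything here is a toy: Michaelis-Menten kinetics with invented parameters, a fixed-step Euler integrator, and a coarse grid search standing in for the proper nonlinear least-squares optimiser that a real analysis would use.

```python
import numpy as np

def progress_curve(S0, vmax, km, times, dt=0.01):
    # integrate the rate equation dS/dt = -vmax * S / (km + S)
    # with fixed Euler steps, recording S at the requested times
    S, t, out = float(S0), 0.0, []
    for target in times:
        while t < target - 1e-12:
            S += -vmax * S / (km + S) * dt
            t += dt
        out.append(S)
    return np.array(out)

# synthetic "observed" progress curve from known parameters
times = np.linspace(0.5, 10.0, 20)
obs = progress_curve(10.0, vmax=2.0, km=1.5, times=times)

# fit BOTH parameters from the complete time course by least squares;
# a single progress curve constrains vmax and km simultaneously
grid = [(v, k) for v in np.linspace(1.0, 3.0, 21)
               for k in np.linspace(0.5, 2.5, 21)]
sse = [np.sum((progress_curve(10.0, v, k, times) - obs) ** 2)
       for v, k in grid]
best_v, best_k = grid[int(np.argmin(sse))]
print(best_v, best_k)  # recovers vmax = 2.0, km = 1.5
```

This is the extra information the abstract refers to: a single progress curve spans the whole range of substrate concentrations, so both parameters can be extracted from one experiment, whereas initial-rate analysis needs a separate run per concentration.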