Abstract:
The aim of this study was to determine the minimum conditions of wetness duration and mean temperature required for Fusarium head blight (FHB) infection in wheat. The weather model developed by Zoldan (2008) was tested in field experiments with two wheat cultivars grown in 2005 (five sowing dates) and 2006 (six sowing dates) in 10 m² plots with three replicates. The disease was assessed by head incidence (HI) and spikelet incidence (SI), and the combination of these two measures was called head blight severity (HBS). Starting at the beginning of anthesis, air temperature and head wetness duration were recorded daily with an automatic weather station. Combining these two factors, a weather-favorability table for disease occurrence was built. Starting on the day flowering began (1-5% of anthers fully exserted), the sum of daily values of infection favorability (SDVIF) was calculated by a computer program according to the Zoldan (2008) table. The initial symptoms of the disease were observed at 3.7% spikelet incidence, corresponding to an SDVIF of 2.6. Infection occurs in wheat following rainfall that results in spike wetting longer than 61.4 h. Forecasts of rainfall events can therefore help time fungicide applications to control FHB. The proposed name for this alert system is UPF-scab alert.
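The running-sum logic of such an alert can be sketched in a few lines. The 2.6 alert threshold comes from the abstract, while the daily favorability values in the example are purely illustrative and not taken from the Zoldan (2008) table:

```python
def sdvif_alert_day(daily_favorability, threshold=2.6):
    """Accumulate daily values of infection favorability (SDVIF),
    starting at the beginning of anthesis, and return the first day
    (1-based) on which the running sum reaches the alert threshold,
    or None if the threshold is never reached."""
    total = 0.0
    for day, value in enumerate(daily_favorability, start=1):
        total += value
        if total >= threshold:
            return day
    return None

# Illustrative daily favorability values (not from the Zoldan table):
# running sums are 0.5, 1.5, 2.3, 2.9, so the alert fires on day 4.
print(sdvif_alert_day([0.5, 1.0, 0.8, 0.6]))
```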
Abstract:
When modeling machines in their natural working environment, collisions become a very important feature in terms of simulation accuracy. By expanding the simulation to include the operating environment, the need for a general collision model able to handle a wide variety of cases has become central in the development of simulation environments. With the addition of the operating environment, the challenges for the collision modeling method also change: more simultaneous contacts with more objects occur in more complicated situations, which makes the real-time requirement more difficult to meet. Common problems in current collision modeling methods include, for example, dependency on geometry shape or mesh density, computational cost that grows exponentially with the number of contacts, the lack of a proper friction model, and failures in certain configurations such as closed kinematic loops. All these problems mean that current modeling methods will fail in certain situations. A method that would never fail in any situation is not very realistic, but improvements can be made over current methods.
Abstract:
Autonomic neuropathy is a frequent complication of diabetes associated with higher morbidity and mortality in symptomatic patients, possibly because it affects autonomic regulation of the sinus node, reducing heart rate (HR) variability, which predisposes to fatal arrhythmias. We evaluated the time course of arterial pressure and HR and, indirectly, of autonomic function (through mean arterial pressure (MAP) variability) in rats (164.5 ± 1.7 g) 7, 14, 30 and 120 days after streptozotocin (STZ) injection, treated with insulin, using measurements of arterial pressure, HR and MAP variability. HR variability was evaluated by the standard deviation of RR intervals (SDNN) and the root mean square of successive differences of RR intervals (RMSSD). MAP variability was evaluated by the standard deviation of the mean of MAP and by 4 indices (P1, P2, P3 and MN) derived from the three-dimensional return map constructed by plotting MAPn × (MAPn+1 − MAPn) × density. The indices represent the maximum concentration of points (P1), the longitudinal axis (P2) and the transversal axis (P3), and MN represents P1 × P2 × P3 × 10^-3. STZ increased urinary glucose in diabetic (D) rats compared to controls (C). Seven days after STZ, diabetes reduced resting HR from 380.6 ± 12.9 to 319.2 ± 19.8 bpm, increased HR variability, as demonstrated by increased SDNN from 11.77 ± 1.67 to 19.87 ± 2.60 ms, did not change MAP, and reduced P1 from 61.0 ± 5.3 to 51.5 ± 1.8 arbitrary units (AU), P2 from 41.3 ± 0.3 to 29.0 ± 1.8 AU, and MN from 171.1 ± 30.2 to 77.2 ± 9.6 AU of MAP. These indices, as well as HR and MAP, were similar for D and C animals 14, 30 and 120 days after STZ. Seven-day rats showed a negative correlation of urinary glucose with resting HR (r = -0.76, P = 0.03) as well as with the MN index (r = -0.83, P = 0.01). We conclude that rats with short-term diabetes mellitus induced by STZ presented modified autonomic control of HR and MAP which was reversible. Metabolic control may influence these results, suggesting that insulin treatment and better metabolic control in this model may modify arterial pressure, HR and MAP variability.
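The two HR-variability indices used above have standard definitions; a minimal sketch, where the population denominator n is an assumption since the abstract does not specify n versus n-1:

```python
import math

def sdnn(rr_ms):
    """SDNN: standard deviation of the RR intervals (ms).
    Population denominator (n) assumed."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```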
Abstract:
In order to determine the effect of ursodeoxycholic acid on nonalcoholic fatty liver disease, 30 patients with body mass indices higher than 25, serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST) or gamma-glutamyltransferase (gamma-GT) at least more than 1.5 times the upper limit of normality, and hepatic steatosis demonstrated by ultrasonography were randomized into two groups of 15 patients to receive placebo or 10 mg kg^-1 day^-1 ursodeoxycholic acid for three months. Abdominal computed tomography was performed to quantify hepatic fat content, which was significantly correlated with histological grading of steatosis (r_s = -0.83, P < 0.01). Patient body mass index remained stable for both groups throughout the study, but a significant reduction in mean (± SEM) serum levels of ALT, AST and gamma-GT was observed only in the treated group (ALT = 81.2 ± 9.7, 44.8 ± 7.7, 48.1 ± 7.7 and 52.2 ± 6.3 IU/l at the beginning and after the first, second and third months, respectively, N = 14, P < 0.05). For the placebo group ALT values were 66.4 ± 9.8, 54.5 ± 7, 60 ± 7.6 and 43.7 ± 5 IU/l, respectively. No alterations in hepatic lipid content were observed in these patients by computed tomography examination (50.2 ± 4.2 Hounsfield units (HU) at the beginning versus 51.1 ± 4.1 HU at the third month). These results show that ursodeoxycholic acid is able to reduce serum levels of hepatic enzymes in patients with nonalcoholic fatty liver disease, but this effect is not related to modifications in liver fat content.
Abstract:
In contrast to most developed countries, most patients with primary hyperparathyroidism in Brazil are still symptomatic at diagnosis. However, we have been observing a change in this pattern, especially in the last few years. We evaluated 104 patients, 77 females and 27 males aged 11-79 years (mean: 54.4 years), diagnosed between 1985 and 2002 at a University Hospital. Diagnosis was made on the basis of clinical findings, of high total and/or ionized calcium levels, of high or inappropriate levels of intact parathyroid hormone, and of surgical findings in 80 patients. Patients were divided into three groups, i.e., patients diagnosed from 1985 to 1989, patients diagnosed from 1990 to 1994, and patients diagnosed from 1995 to 2002. The number of new cases diagnosed per year increased from 1.8/year in the first group to 6.0/year in the second group and 8.1/year in the third group. The first group comprised 9 patients (mean serum calcium ± SD, 13.6 ± 1.6 mg/dl), 8 of them (88.8%) defined as symptomatic. The second group comprised 30 patients (mean calcium ± SD, 12.2 ± 1.63 mg/dl), 22 of them defined as symptomatic (73.3%). The third group contained 65 patients (mean calcium 11.7 ± 1.1 mg/dl), 34 of them symptomatic (52.3%). Patients from the first group tended to be younger (mean ± SD, 43.0 ± 15 vs 55.1 ± 14.4 and 55.7 ± 17.3 years, respectively) and their mean serum calcium was significantly higher (P < 0.05). Symptomatic patients, regardless of group, had higher serum calcium levels (12.4 ± 1.53 mg/dl, N = 64) than asymptomatic patients (11.4 ± 1.0 mg/dl, N = 40). Our data showed an increase over the years in the percentage of asymptomatic patients among diagnosed cases of primary hyperparathyroidism. This finding may be due to an increased availability of diagnostic methods and/or to an increased awareness about the disease.
Abstract:
The objective of the present study was to determine the oral motor capacity and the feeding performance of preterm newborn infants when they were permitted to start oral feeding. This was an observational and prospective study conducted on 43 preterm newborns admitted to the Neonatal Intensive Care Unit of UFSM, RS, Brazil. Exclusion criteria were the presence of head and neck malformations, genetic disease, neonatal asphyxia, intracranial hemorrhage, and kernicterus. When the infants were permitted to start oral feeding, non-nutritive sucking was evaluated by a speech therapist regarding force (strong vs weak), rhythm (rapid vs slow), presence of adaptive oral reflexes (searching, sucking and swallowing), and coordination between sucking, swallowing and respiration. Feeding performance was evaluated on the basis of competence (defined by rate of milk intake, mL/min) and overall transfer (percent ingested volume/total volume ordered). The speech therapist's evaluation showed that 33% of the newborns presented weak sucking, 23% slow rhythm, 30% absence of at least one adaptive oral reflex, and 14% no coordination between sucking, swallowing and respiration. Mean feeding competence was greater in infants with strong sucking and fast rhythm. The presence of sucking-swallowing-respiration coordination decreased the number of days needed to reach an overall transfer of 100%. Evaluation by a speech therapist proved to be a useful tool for the safe indication of the beginning of oral feeding for premature infants.
Abstract:
Exercise training (Ex) has been recommended for its beneficial effects in hypertensive states. The present study evaluated the time-course effects of Ex without workload on mean arterial pressure (MAP), reflex bradycardia, cardiac and renal histology, and oxidative stress in two-kidney, one-clip (2K1C) hypertensive rats. Male Fischer rats (10 weeks old; 150–180 g) underwent surgery (2K1C or SHAM) and were subsequently divided into a sedentary (SED) group and an Ex group (swimming 1 h/day, 5 days/week for 2, 4, 6, 8, or 10 weeks). Until week 4, Ex decreased MAP, increased reflex bradycardia, prevented concentric hypertrophy, reduced collagen deposition in the myocardium and kidneys, decreased the level of thiobarbituric acid-reactive substances (TBARS) in the left ventricle, and increased catalase (CAT) activity in the left ventricle and both kidneys. From week 6 to week 10, however, MAP and reflex bradycardia in 2K1C Ex rats became similar to those in 2K1C SED rats. Ex effectively reduced heart rate and prevented collagen deposition in the heart and both kidneys up to week 10, and restored the level of TBARS in the left ventricle and clipped kidney and the CAT activity in both kidneys until week 8. Ex without workload for 10 weeks in 2K1C rats provided distinct beneficial effects. The early effects of Ex on cardiovascular function included reversing the alterations in MAP and reflex bradycardia. The later effects of Ex included preventing structural alterations in the heart and kidney by decreasing oxidative stress and reducing injuries in these organs during hypertension.
Stochastic particle models: mean reversion and Burgers dynamics. An application to commodity markets
Abstract:
The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets in which particles represent market participants. A discontinuity is included in the model through an interaction kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modeling some commodity markets, confirmed by the ability of the model to reproduce price spikes when their effects occur over a sufficiently long period of time.
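A minimal sketch of one common particle approximation of Burgers-type dynamics: each particle's drift averages the Heaviside kernel over the other particles, plus Brownian noise. The drift form, parameter values, and Euler-Maruyama discretization are illustrative assumptions, not the paper's exact specification:

```python
import random

def heaviside(x):
    """Interaction kernel H(x) = 1 if x >= 0 else 0."""
    return 1.0 if x >= 0 else 0.0

def simulate_particles(n=100, steps=100, dt=0.01, sigma=0.5, seed=2):
    """Euler-Maruyama discretization of the interacting system
    dX_i = (1/n) * sum_j H(X_i - X_j) dt + sigma dW_i.
    The drift of particle i is the empirical CDF evaluated at X_i,
    the mechanism behind convergence to a Burgers-type PDE as n grows."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        drift = [sum(heaviside(xi - xj) for xj in x) / n for xi in x]
        x = [xi + b * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
             for xi, b in zip(x, drift)]
    return x
```

Since every particle's drift is nonnegative, the cloud of particles drifts rightward on average, which a quick simulation confirms.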
Abstract:
By reporting his satisfaction with his job or any other experience, an individual does not communicate the number of utils that he feels. Instead, he expresses his posterior preference over available alternatives conditional on acquired knowledge of the past. This new interpretation of reported job satisfaction restores the power of microeconomic theory without denying the essential role of discrepancies between one's situation and available opportunities. Posterior human wealth discrepancies are found to be the best predictor of reported job satisfaction. Static models of relative utility and other subjective well-being assumptions are all unambiguously rejected by the data, as is an "economic" model in which job satisfaction is a measure of posterior human wealth. The "posterior choice" model readily explains why so many people usually report themselves as happy or satisfied, why both younger and older age groups are insensitive to current earning discrepancies, and why the past weighs more heavily than the present and the future.
Inference for nonparametric high-frequency estimators with an application to time variation in betas
Abstract:
We consider the problem of conducting inference on nonparametric high-frequency estimators without knowing their asymptotic variances. We prove that a multivariate subsampling method achieves this goal under general conditions that were not previously available in the literature. We suggest a procedure for a data-driven choice of the bandwidth parameters. Our simulation study indicates that the subsampling method is much more robust than the plug-in method based on the asymptotic expression for the variance. Importantly, the subsampling method reliably estimates the variability of the Two-Scale estimator even when its parameters are chosen to minimize the finite-sample Mean Squared Error; in contrast, the plug-in estimator substantially underestimates the sampling uncertainty. By construction, the subsampling method delivers estimates of the variance-covariance matrices that are always positive semi-definite. We use the subsampling method to study the dynamics of financial betas of six stocks on the NYSE. We document significant variation in betas within the year 2006, and find that tick data captures more variation in betas than data sampled at moderate frequencies such as every five or twenty minutes. To capture this variation we estimate a simple dynamic model for betas. The variance estimation is also important for the correction of the errors-in-variables bias in such models. We find that the bias corrections are substantial, and that betas are more persistent than the naive estimators would lead one to believe.
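The general subsampling recipe (compute the statistic on overlapping blocks, then rescale the spread of the block estimates) can be sketched generically. This is a textbook-style illustration, not the authors' multivariate high-frequency procedure; the statistic and block length below are assumptions:

```python
def subsampling_variance(data, statistic, block_len):
    """Estimate Var(statistic(data)) by evaluating the statistic on all
    overlapping blocks of length block_len, taking the variance of the
    block estimates, and rescaling by block_len / n (valid under the
    usual sqrt(n) convergence-rate assumption)."""
    n = len(data)
    b = block_len
    stats = [statistic(data[i:i + b]) for i in range(n - b + 1)]
    m = sum(stats) / len(stats)
    spread = sum((s - m) ** 2 for s in stats) / len(stats)
    return (b / n) * spread

def mean(xs):
    """Example statistic: the sample mean."""
    return sum(xs) / len(xs)
```

A constant series yields an estimated variance of exactly zero, while any series whose block estimates vary yields a positive estimate.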
Abstract:
This thesis covers various aspects of modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, in which the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel to the classical setup. The main focus of the thesis is the estimation and prediction of a signal-plus-noise model, in which both the signal and the noise are assumed to follow models with symmetric stable innovations.

The thesis begins with motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite-variance models are discussed extensively in the second chapter, together with a survey of existing theories and methods for infinite-variance models. The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment; here both the signal and the noise are considered stationary processes with infinite-variance innovations. Semi-infinite, doubly infinite and asymmetric signal-extraction filters are derived based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation for autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter, using higher-order Yule-Walker-type estimation based on the auto-covariation function; the methods are exemplified by simulation and by an application to sea-surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method using singular value decomposition is derived.

The fifth chapter introduces the partial auto-covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification for stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models.

Chapter six explores the application of these techniques in signal processing, in particular frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment. A parametric spectrum analysis and a frequency estimate based on the power transfer function are introduced; the power transfer function is estimated using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
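For orientation, the classical finite-variance Yule-Walker estimator that the thesis generalizes can be sketched as follows; the stable-innovations version replaces the sample autocovariance with the auto-covariation function, which is not reproduced here, and the AR(1) data generator is purely illustrative:

```python
import random

def ar1_sample(phi, n, seed=0):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + e_t with
    standard Gaussian innovations (illustrative data only)."""
    rng = random.Random(seed)
    x, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    return x

def yule_walker(x, p):
    """Classical Yule-Walker AR(p) estimator: solve R a = r, where R is
    the p x p Toeplitz matrix of sample autocovariances and r holds
    (acov(1), ..., acov(p))."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]

    def acov(k):
        return sum(xc[i] * xc[i + k] for i in range(n - k)) / n

    R = [[acov(abs(i - j)) for j in range(p)] for i in range(p)]
    r = [acov(k + 1) for k in range(p)]
    # Gaussian elimination (no pivoting; adequate for this illustration).
    for i in range(p):
        for j in range(i + 1, p):
            f = R[j][i] / R[i][i]
            for k in range(i, p):
                R[j][k] -= f * R[i][k]
            r[j] -= f * r[i]
    a = [0.0] * p
    for i in range(p - 1, -1, -1):
        a[i] = (r[i] - sum(R[i][k] * a[k] for k in range(i + 1, p))) / R[i][i]
    return a
```

On a long AR(1) sample with phi = 0.6, the estimate lands close to the true coefficient.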
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms addressing significant research problems. Molecular biology data include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioral patterns of genes, such as the similarity of their behavior, the nature of their interactions, their respective contributions to the same pathways, and so on. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. Data mining techniques are essential for identifying such patterns from gene expression data, and clustering is an important data mining technique for this analysis. To overcome the problems associated with clustering, biclustering is introduced: the simultaneous clustering of both rows and columns of a data matrix.

Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise; it is therefore necessary to move beyond the clustering paradigm toward approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix whose rows and columns need not be contiguous, and biclusters are not disjoint. Computing biclusters is costly, because all combinations of rows and columns must be considered to find them all: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions, respectively, and m + n is usually more than 3000. The biclustering problem is NP-hard. Biclustering is nevertheless a powerful analytical tool for the biologist. The research reported in this thesis addresses the biclustering problem: ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms use a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.

All the algorithms begin the search from tightly coregulated submatrices called seeds, generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets, and the biologically relevant and statistically significant biclusters they identify are validated against the Gene Ontology database. All the algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing ones. With some of the algorithms developed here, biclusters with very high row variance, higher than that of any other algorithm using mean squared residue, are identified in both the Yeast and Lymphoma datasets; such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
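The mean squared residue score that these algorithms optimize is commonly attributed to Cheng and Church: H(I, J) is the mean over (i, j) of (a_ij - a_iJ - a_Ij + a_IJ)^2, where a_iJ, a_Ij and a_IJ are row, column and bicluster means. A minimal sketch:

```python
def mean_squared_residue(matrix, rows, cols):
    """Mean squared residue H(I, J) of the bicluster (rows, cols).
    A score of 0 means a perfectly coherent (additive) bicluster."""
    row_means = {i: sum(matrix[i][j] for j in cols) / len(cols) for i in rows}
    col_means = {j: sum(matrix[i][j] for i in rows) / len(rows) for j in cols}
    all_mean = sum(row_means[i] for i in rows) / len(rows)
    return sum(
        (matrix[i][j] - row_means[i] - col_means[j] + all_mean) ** 2
        for i in rows for j in cols
    ) / (len(rows) * len(cols))
```

An additive submatrix (each entry a row effect plus a column effect) scores zero, while an incoherent checkerboard scores strictly above zero.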
Abstract:
This thesis, entitled "Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data", consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions in terms of the corresponding quantities in the component distributions are investigated, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed; while describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments.
Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well known families. It is further demonstrated that partial moments are equally efficient and convenient compared to many of the conventional tools to resolve practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation which could be the subject for a future work in this area.
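The two-component mixture identity behind the Chapter II results can be stated as follows. This is the standard discrete-time form, written under the convention S(x) = P(X ≥ x); it is an illustration consistent with the chapter's description, not a quotation of its exact expressions:

```latex
% Two-component mixture with mixing weight p:
%   f(x) = p\, f_1(x) + (1-p)\, f_2(x), \quad
%   S(x) = p\, S_1(x) + (1-p)\, S_2(x).
% The discrete failure rate h(x) = f(x)/S(x) is then a pointwise
% weighted average of the component failure rates:
h(x) = w(x)\, h_1(x) + \bigl(1 - w(x)\bigr)\, h_2(x),
\qquad
w(x) = \frac{p\, S_1(x)}{p\, S_1(x) + (1-p)\, S_2(x)}.
```

Dividing f(x) by S(x) and splitting each component term as p S_1(x) h_1(x) and (1-p) S_2(x) h_2(x) gives the weighted-average form directly.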
Abstract:
In this paper the effectiveness of a novel method of computer-assisted pedicle screw insertion was studied using a hypothesis-testing procedure with a sample size of 48. Pattern recognition based on geometric features of markers on the drill was performed on real-time optical video obtained from orthogonally placed CCD cameras. The study reveals the exactness of the calculated position of the drill using navigation based on a CT image of the vertebra and real-time optical video of the drill. The significance value is 0.424 at the 95% confidence level, which indicates good precision, with a standard error of the mean of only 0.00724. The virtual vision method is less hazardous to both the patient and the surgeon.
Abstract:
Path planning and control strategies applied to autonomous mobile robots should fulfil safety rules as well as achieve final goals. Trajectory planning applications should be fast and flexible to allow real-time implementations as well as environment interactions. The methodology presented uses on-robot information as the meaningful data necessary to plan a narrow passage, using a corridor based on attraction potential fields that draws the mobile robot toward the final desired configuration. It employs local and dense occupancy-grid perception to avoid collisions. The key goals of this research project are computational simplicity as well as the possibility of integrating this method with other methods reported by the research community. Another important aspect of this work consists of testing the proposed method on a mobile robot with a perception system composed of a monocular camera and odometers placed on the two wheels of the differential-drive motion system. Hence, visual data are used as a local horizon of perception in which collision-free trajectories are computed by satisfying final-goal approach and safety criteria.
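The attraction-potential idea can be sketched in a few lines of gradient descent. The gain and step parameters are assumed for illustration, and the corridor construction and occupancy-grid collision checks described above are omitted:

```python
def attract_step(pos, goal, k=0.5, step=0.1):
    """One gradient-descent step on the attractive potential
    U(q) = 0.5 * k * ||q - goal||^2; the force is -k * (q - goal)."""
    fx = -k * (pos[0] - goal[0])
    fy = -k * (pos[1] - goal[1])
    return (pos[0] + step * fx, pos[1] + step * fy)

def plan(start, goal, tol=0.05, max_iters=1000):
    """Iterate attraction steps until the robot is within tol of the goal."""
    pos = start
    for _ in range(max_iters):
        if ((pos[0] - goal[0]) ** 2 + (pos[1] - goal[1]) ** 2) ** 0.5 < tol:
            break
        pos = attract_step(pos, goal)
    return pos
```

Each step shrinks the distance to the goal by a constant factor (here 0.95), so the pure attraction field converges geometrically; a full planner would add repulsive terms from the perceived obstacles.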