983 results for Mean-Reverting Process
Abstract:
We obtain the exact analytical expression, up to a quadrature, for the mean exit time, T(x,v), of a free inertial process driven by Gaussian white noise from a region (0,L) in space. We obtain a completely explicit expression for T(x,0) and discuss the dependence of T(x,v) on the size L of the region. We also develop a new method that may be used to solve other exit-time problems.
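As a rough numerical companion to the abstract above (not the authors' analytical method), the following Monte Carlo sketch estimates the mean exit time of a free inertial particle, dx = v dt, dv = sqrt(2D) dW, from (0, L); the noise strength D, time step, and path count are illustrative assumptions.

```python
import numpy as np

def mean_exit_time(x0, v0, L, D=1.0, dt=1e-3, n_paths=1000, rng=None):
    """Monte Carlo estimate of T(x0, v0): mean time for the random-acceleration
    process dx = v dt, dv = sqrt(2*D) dW to leave the interval (0, L).
    D, dt and n_paths are illustrative choices, not taken from the paper."""
    rng = np.random.default_rng(rng)
    times = np.empty(n_paths)
    for i in range(n_paths):
        x, v, t = float(x0), float(v0), 0.0
        while 0.0 < x < L:
            v += np.sqrt(2.0 * D * dt) * rng.standard_normal()
            x += v * dt
            t += dt
        times[i] = t
    return times.mean()

print(mean_exit_time(0.5, 0.0, 1.0))  # start at rest in the middle of (0, 1)
```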
Abstract:
Extreme-time techniques, generally applied to nonequilibrium statistical mechanical processes, are also useful for a better understanding of financial markets. We present a detailed study of the mean first-passage time for the volatility of return time series. The empirical results extracted from daily data of major indices seem to follow the same law regardless of the kind of index, thus suggesting a universal pattern. The empirical mean first-passage time to a certain level L is quite different from that of the Wiener process, showing dissimilar behavior depending on whether L is higher or lower than the average volatility. All of this indicates a more complex dynamics in which a reverting force drives volatility toward its mean value. We therefore present the mean first-passage time expressions of the most common stochastic volatility models, whose approach is comparable to the random diffusion description. We discuss asymptotic approximations of these models and confront them with empirical results, finding good agreement with the exponential Ornstein-Uhlenbeck model.
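To make the effect of mean reversion on first-passage times concrete, here is a minimal Monte Carlo sketch for a plain Ornstein-Uhlenbeck process (a simplification of the exponential OU volatility model discussed above; all parameter values are assumptions for illustration):

```python
import numpy as np

def ou_mfpt(level, y0=0.0, alpha=1.0, k=1.0, dt=1e-3, n_paths=500, rng=None):
    """Mean first-passage time to `level` of dy = -alpha*y dt + k dW, from y0."""
    rng = np.random.default_rng(rng)
    up = level > y0
    times = np.empty(n_paths)
    for i in range(n_paths):
        y, t = float(y0), 0.0
        while (y < level) if up else (y > level):
            y += -alpha * y * dt + k * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times.mean()

# The reverting drift makes passage to levels far from the mean (0 here)
# increasingly slow, unlike the Wiener case:
print(ou_mfpt(0.5), ou_mfpt(1.0), ou_mfpt(1.5))
```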
Abstract:
BACKGROUND: We assessed the impact of a multicomponent worksite health promotion program for reducing cardiovascular risk factors (CVRF) with a short intervention, adjusting for regression towards the mean (RTM), which affects such nonexperimental studies without a control group. METHODS: A cohort of 4,198 workers (aged 42 +/- 10 years, range 16-76 years, 27% women) was analyzed at a 3.7-year interval and stratified by each CVRF risk category (low/medium/high blood pressure [BP], total cholesterol [TC], body mass index [BMI], and smoking), with RTM and secular trend adjustments. The intervention consisted of a 15-min CVRF screening and individualized counseling by health professionals for medium- and high-risk individuals, with eventual physician referral. RESULTS: Participants in the high-risk groups improved diastolic BP (-3.4 mm Hg [95% CI: -5.1, -1.7]) in 190 hypertensive patients, TC (-0.58 mmol/l [-0.71, -0.44]) in 693 hypercholesterolemic patients, and smoking (-3.1 cig/day [-3.9, -2.3]) in 808 smokers, while systolic BP changes reflected RTM. Low-risk individuals without counseling deteriorated in TC and BMI. Body weight increased uniformly in all risk groups (+0.35 kg/year). CONCLUSIONS: Under real-world conditions, participants in the short intervention program in high-risk groups for diastolic BP, TC, and smoking improved their CVRF, whereas the low-risk TC and BMI groups deteriorated. Future programs may include specific advice to low-risk groups to maintain a favorable CVRF profile.
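For readers unfamiliar with the RTM adjustment, the sketch below computes the textbook expected regression-to-the-mean effect for a group selected above a baseline cutoff, assuming bivariate-normal repeat measurements; this is a generic formula, not necessarily the study's exact adjustment, and the example numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def rtm_effect(mu, sigma, rho, cutoff):
    """Expected fall at follow-up purely from regression towards the mean,
    for subjects selected with baseline above `cutoff`, under a bivariate
    normal model with test-retest correlation rho (generic textbook formula)."""
    z = (cutoff - mu) / sigma
    mean_excess = sigma * norm.pdf(z) / norm.sf(z)  # E[x1 - mu | x1 > cutoff]
    return (1.0 - rho) * mean_excess                # expected x1 - x2

# Hypothetical systolic BP figures: mu=130, sigma=15, rho=0.7, cutoff 160 mm Hg
print(rtm_effect(130.0, 15.0, 0.7, 160.0))
```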
Abstract:
The study focuses on international diversification from the perspective of a Finnish investor. A second objective is to determine whether new covariance matrix estimators make the optimization of the minimum-variance portfolio more effective. In addition to the ordinary sample covariance matrix, two shrinkage estimators and a flexible multivariate GARCH(1,1) model are used in the optimization. The data consist of Dow Jones industry indices and the OMX Helsinki portfolio index. The international diversification strategy is implemented using an industry approach, and the portfolio is optimized using twelve components. The data cover the years 1996-2005, i.e. 120 monthly observations. The performance of the constructed portfolios is measured with the Sharpe ratio. According to the results, there is no statistically significant difference between the risk-adjusted returns of the internationally diversified investments and the domestic portfolio. Nor does the use of the new covariance matrix estimators yield statistically significant added value compared with portfolio optimization based on the sample covariance matrix.
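A minimal sketch of the kind of optimization compared in the thesis: global minimum-variance weights w = S⁻¹1 / (1ᵀS⁻¹1), here with scikit-learn's Ledoit-Wolf estimator standing in for the shrinkage covariance estimators; the random returns are placeholders for the 120 monthly observations on 12 components.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(returns):
    """Global minimum-variance portfolio: w = S^-1 1 / (1' S^-1 1),
    with S estimated by Ledoit-Wolf shrinkage rather than the raw
    sample covariance matrix."""
    S = LedoitWolf().fit(returns).covariance_
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

# Placeholder data shaped like the study: 120 monthly returns, 12 components.
rng = np.random.default_rng(0)
print(min_variance_weights(rng.normal(0.005, 0.05, size=(120, 12))))
```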
Abstract:
The field of requirements management is highly complex. Its terminology is diverse, and the same terms can mean different things to different people. The purpose of this thesis is to clarify the field of requirements management. It answers questions such as what requirements management is and how it can be carried out. The thesis focuses on requirements analysis and validation, so in this respect it also answers more detailed questions, such as how the traceability, documentation, analysis, and validation of collected requirements can be performed. Through this work, requirements management can be introduced to a company so that its different units can share a common understanding of it. The study presents requirements management as a process comprising requirements traceability, requirements documentation, requirements change management, and requirements definition. Requirements definition can be further divided into requirements elicitation, analysis and negotiation, and validation. The thesis presents a generic requirements management process model. The model shows that requirements management is a continuous process in which all activities are interconnected. These activities are performed more or less concurrently. The model is presented in a generic form so that it can be used in systems and product development projects as well as in internal development projects. It indicates that requirements should be refined as early as possible in order to minimize the number of changes in the later phases of development. Some changes are unavoidable, so a traceability manual and traceability practices should be developed to support change management. Requirements management is also examined in an ongoing development project, investigating which requirements management practices and which analysis and validation methods are used, and what could be done to improve requirements management in the project.
Abstract:
In the last two centuries, papers have been published that include measurements of the germination process. The high diversity of mathematical expressions has made comparisons between papers, and sometimes the interpretation of results, difficult. Thus, this paper includes a review of measurements of the germination process, with an analysis of the several mathematical expressions found in the specific literature, recovering the history, meaning, and limitations of some germination measurements. Among the measurements covered are germinability; germination time; the coefficient of uniformity of germination (CUG); the coefficient of variation of the germination time (CVt); the germination rate (mean rate, weighted mean rate, coefficient of velocity, germination rate of George, Timson's index, GV or Czabator's index, Throneberry and Smith's method and its adaptations, including Maguire's rate, ERI or emergence rate index, and the germination index and its modifications); the uncertainty associated with the distribution of the relative frequency of germination (U); and the synchronization index (Z). The limits of the germination measurements are included to make interpretation and decisions during comparisons easier. Time, rate, homogeneity, and synchrony are aspects that can be measured, informing the dynamics of the germination process. These characteristics are important not only for physiologists and seed technologists, but also for ecologists, because it is possible to predict the degree of success of a species based on the capacity of its harvested seeds to spread germination through time, permitting the recruitment in the environment of some part of the seedlings formed.
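Several of the measurements listed above reduce to simple weighted statistics over the counts of seeds germinating at each observation time. The sketch below implements a few of the classical definitions (weighted mean germination time, CVt, mean rate, the uncertainty index U, and the synchrony index Z) as commonly given in this literature; treat it as an illustrative reading of those formulas, not as this paper's code.

```python
import numpy as np
from math import comb

def germination_measures(counts, times):
    """counts[i]: seeds germinating at observation time times[i].
    Returns mean germination time, CVt (%), mean rate,
    uncertainty U (bits) and synchrony Z, per the classical definitions."""
    n = np.asarray(counts, dtype=float)
    t = np.asarray(times, dtype=float)
    N = n.sum()
    t_mean = (n * t).sum() / N                        # weighted mean time
    s_t = np.sqrt((n * (t - t_mean) ** 2).sum() / (N - 1))
    cv_t = 100.0 * s_t / t_mean                       # coefficient of variation
    rate = 1.0 / t_mean                               # mean germination rate
    f = n[n > 0] / N                                  # relative frequencies
    U = -(f * np.log2(f)).sum()                       # uncertainty index
    Z = sum(comb(int(k), 2) for k in counts) / comb(int(N), 2)  # synchrony
    return t_mean, cv_t, rate, U, Z

print(germination_measures([5, 20, 10, 5], [2, 4, 6, 8]))
```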
Abstract:
Microparticles obtained by complex coacervation were crosslinked with glutaraldehyde or with transglutaminase and dried using freeze drying or spray drying. Moist samples presented an encapsulation efficiency (%EE) higher than 96%. The mean diameters ranged from 43.7 ± 3.4 to 96.4 ± 10.3 µm for moist samples, from 38.1 ± 5.36 to 65.2 ± 16.1 µm for dried samples, and from 62.5 ± 7.5 to 106.9 ± 26.1 µm for rehydrated microparticles. The integrity of the particles without crosslinking was maintained when freeze drying was used. After spray drying, only crosslinked samples were able to maintain wall integrity. Microparticles had a round shape and, in the case of dried samples, rugged walls apparently without cracks were observed. Core distribution inside the particles was multinuclear and homogeneous, and core release was evaluated using anhydrous ethanol. Moist particles crosslinked with glutaraldehyde at a concentration of 1.0 mM.g-1 protein (ptn) were more efficient with respect to core retention than those crosslinked at 0.1 mM.g-1 ptn or with transglutaminase (10 U.g-1 ptn). The drying processes had a strong influence on the core release profile, reducing the amount released for all dried samples.
Abstract:
Computational Biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein, and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix. Rows in the matrix represent genes and columns represent experimental conditions. Experimental conditions can be different tissue types or time points. Entries in the gene expression matrix are real values. Through the analysis of gene expression data it is possible to determine the behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interactions, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research. They are used in the medical domain to aid more accurate diagnosis, prognosis, treatment planning, drug discovery, and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering is introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix. The rows and columns in the submatrix need not be contiguous as in the gene expression data matrix. Biclusters are not disjoint. Computation of biclusters is costly because one has to consider all combinations of columns and rows in order to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively; usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters. The objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly coregulated submatrices called seeds. These seeds are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy, and metaheuristic. Constraint-based algorithms use one or more of various constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are applied to the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all these algorithms and validated against the Gene Ontology database. All these algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing ones. With the help of some of the algorithms developed in this work, biclusters with very high row variance (higher than the row variance of any other algorithm using mean squared residue) are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
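The mean squared residue referred to above is Cheng and Church's coherence measure: the average of (x_ij − row mean − column mean + bicluster mean)² over the submatrix. A minimal implementation:

```python
import numpy as np

def mean_squared_residue(X, rows, cols):
    """Mean squared residue of the bicluster X[rows][:, cols]:
    mean of (x_ij - rowmean_i - colmean_j + overall_mean)^2."""
    B = np.asarray(X, dtype=float)[np.ix_(rows, cols)]
    residue = (B - B.mean(axis=1, keepdims=True)
                 - B.mean(axis=0, keepdims=True) + B.mean())
    return float((residue ** 2).mean())

# A perfectly additive (coherent) bicluster has residue 0:
X = np.add.outer([0.0, 1.0, 3.0], [2.0, 5.0, 7.0, 8.0])
print(mean_squared_residue(X, [0, 1, 2], [0, 1, 2, 3]))  # -> 0.0
```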
Abstract:
The frequency of persistent atmospheric blocking events in the 40-yr ECMWF Re-Analysis (ERA-40) is compared with the blocking frequency produced by a simple first-order Markov model designed to predict the time evolution of a blocking index [defined by the meridional contrast of potential temperature on the 2-PVU surface (1 PVU ≡ 1 × 10⁻⁶ K m² kg⁻¹ s⁻¹)]. With the observed spatial coherence built into the model, it is able to reproduce the main regions of blocking occurrence and the frequencies of sector blocking very well. This underlines the importance of the climatological background flow in determining the locations of high blocking occurrence as being the regions where the mean midlatitude meridional potential vorticity (PV) gradient is weak. However, when only persistent blocking episodes are considered, the model is unable to simulate the observed frequencies. It is proposed that this persistence beyond that given by a red noise model is due to the self-sustaining nature of the blocking phenomenon.
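The null model in question is red noise, i.e. a first-order Markov (AR(1)) process. The sketch below simulates such an index and tallies runs of consecutive "blocked" days, illustrating why long persistent episodes are rare under red noise alone; the lag-1 coefficient, threshold, and run-length cutoff are illustrative assumptions, not the paper's values.

```python
import numpy as np

def blocked_run_lengths(phi=0.9, threshold=-1.0, n=200_000, rng=None):
    """Simulate a unit-variance AR(1) index x_t = phi*x_{t-1} + sqrt(1-phi^2)*eps_t
    and return lengths of runs with x below `threshold` (a weak meridional
    gradient standing in for 'blocked')."""
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal(n) * np.sqrt(1.0 - phi ** 2)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    runs, length = [], 0
    for b in x < threshold:
        if b:
            length += 1
        elif length:
            runs.append(length)
            length = 0
    if length:
        runs.append(length)
    return np.array(runs)

runs = blocked_run_lengths()
print(runs.mean(), (runs >= 10).mean())  # long, persistent runs are rare
```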
Abstract:
The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters: the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° × 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42, and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can be easily extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas.
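The two skill scores quoted above are straightforward to reproduce; a small generic sketch (not GLAM code):

```python
import numpy as np

def yield_skill(observed, simulated):
    """Pearson correlation and RMSE as a percentage of the mean observed
    yield, the two statistics reported for the all-India aggregation."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]
    rmse_pct = 100.0 * np.sqrt(np.mean((sim - obs) ** 2)) / obs.mean()
    return r, rmse_pct
```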
Abstract:
The commercial process in construction projects is an expensive and highly variable overhead. Collaborative working practices carry many benefits, which are widely disseminated, but little information is available about their costs. Transaction Cost Economics is a theoretical framework that seeks explanations for why there are firms and how the boundaries of firms are defined through the “make-or-buy” decision. However, it is not a framework that offers explanations for the relative costs of procuring construction projects in different ways. The idea that different methods of procurement will have characteristically different costs is tested by way of a survey. The relevance of transaction cost economics to the study of commercial costs in procurement is doubtful. The survey shows that collaborative working methods cost neither more nor less than traditional methods. But the benefits of collaboration mean that there is a great deal of enthusiasm for collaboration rather than competition.
OFDM joint data detection and phase noise cancellation based on minimum mean square prediction error
Abstract:
This paper proposes a new iterative algorithm for orthogonal frequency division multiplexing (OFDM) joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the relatively less studied problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Specifically, we apply a hard-decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the PHN, and a more robust and compact fast process based on Givens rotations is proposed to reduce the complexity to a practical level. Numerical simulations are given to verify the proposed algorithm.
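For reference, a Givens rotation of the kind underlying the low-complexity step mentioned above zeroes one component of a 2-vector; this generic sketch is not the paper's exact recursion.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0], r = hypot(a, b)."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

c, s = givens(3.0, 4.0)
G = np.array([[c, s], [-s, c]])
print(G @ np.array([3.0, 4.0]))  # -> [5. 0.]
```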
Abstract:
Background: The aim of this study was to evaluate stimulant medication response following a single dose of methylphenidate (MPH) in children and young people with hyperkinetic disorder, using infrared motion analysis combined with a continuous performance task (QbTest system) as objective measures. The hypothesis was that a moderate test dose of stimulant medication could distinguish a robust treatment response, partial response, and non-response in relation to activity, attention, and impulse control measures. Methods: The study included 44 children and young people aged 7-18 years with a diagnosis of hyperkinetic disorder (F90 & F90.1). A single-dose protocol incorporated the time-course effects of both immediate-release MPH and extended-release MPH (Concerta XL, Equasym XL) to determine comparable peak efficacy periods after intake. Results: A robust treatment response, with objective measures reverting to the population mean, was found in 37 participants (84%). Three participants (7%) demonstrated a partial response to MPH, and four participants (9%) were classified as non-responders owing to deteriorating activity measures together with no improvement in attention and impulse control measures. Conclusion: Objective measures provide, early in prescribing, the opportunity to measure treatment response and monitor adverse reactions to stimulant medication. Most treatment responders demonstrated an effective response to MPH on a moderate test dose, facilitating a swift and more optimal titration process.
Abstract:
This paper draws from a wider research programme in the UK, undertaken for the Investment Property Forum, examining liquidity in commercial property. One aspect of liquidity is the process by which transactions occur, including both how properties are selected for sale and the time taken to transact. The paper analyses data from three organisations: a property company, a major financial institution, and an asset management company, formerly a major public sector pension fund. The data cover three market states and include sales completed in 1995, 2000, and 2002 in the UK. The research interviewed key individuals within the three organisations to identify any common patterns of activity within the sale process, and also identified the timing of 187 actual transactions from inception of the sale to completion. The research developed a taxonomy of the transaction process. Interviews with vendors indicated that decisions to sell were a product of a combination of portfolio, specific property, and market-based issues. Properties were generally not kept in a state of "readiness for sale". The time from the first decision to sell the actual property to completion had a mean of 298 days and a median of 190 days. It is concluded that this study may underestimate the true length of the time to transact, for two reasons. Firstly, the pre-marketing period is rarely recorded in transaction files. Secondly, and more fundamentally, studies of sold properties may contain selection bias. The research indicated that vendors tended to sell properties which, it was perceived, could be sold at a 'fair' price in a reasonable period of time.
Abstract:
This paper describes a novel use of cluster analysis in the field of industrial process control. The severe multivariable process problems encountered in manufacturing have often led to machine shutdowns, where the need for corrective actions arises in order to resume operation. Production faults caused by processes running in less efficient regions may be prevented or diagnosed using reasoning based on cluster analysis. Indeed, the internal complexity of production machinery may be depicted in clusters of multidimensional data points which characterise the manufacturing process. The application of a Mean-Tracking cluster algorithm (developed in Reading) to field data acquired from high-speed machinery is discussed. The objective of this application is to illustrate how machine behaviour can be studied; in particular, how regions of erroneous and stable running behaviour can be identified.
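Details of the Reading Mean-Tracking algorithm are not given in the abstract; as a generic stand-in, the sketch below tracks a cluster centre by repeatedly moving it to the mean of nearby points (mean-shift style), which conveys the flavour of depicting machine states as clusters of multidimensional data points.

```python
import numpy as np

def track_mode(X, x0, radius, max_iter=100, tol=1e-6):
    """Move a window centre to the mean of the points within `radius` until
    it settles on a dense region (a cluster of similar machine states).
    A generic mean-shift-style sketch, not the Reading algorithm itself."""
    centre = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        near = X[np.linalg.norm(X - centre, axis=1) < radius]
        if len(near) == 0:
            break
        new_centre = near.mean(axis=0)
        if np.linalg.norm(new_centre - centre) < tol:
            break
        centre = new_centre
    return centre

# Two operating regimes in a 2-D sensor space; start near the first:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(4, 0.3, (200, 2))])
print(track_mode(X, [1.0, 1.0], radius=1.5))  # converges near [0, 0]
```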