886 results for Multiple state models


Relevance:

30.00%

Publisher:

Abstract:

The mortality rate of older patients with intertrochanteric fractures has been increasing with the aging of populations in China. The purpose of this study was: 1) to develop an artificial neural network (ANN) using clinical information to predict the 1-year mortality of elderly patients with intertrochanteric fractures, and 2) to compare the ANN's predictive ability with that of logistic regression models. The ANN model was tested against actual outcomes of an intertrochanteric femoral fracture database in China. The ANN model was generated with eight clinical inputs and a single output. The ANN's performance was compared with that of a logistic regression model created with the same inputs in terms of accuracy, sensitivity, specificity, and discriminability. The study population was composed of 2150 patients (679 males and 1471 females): 1432 in the training group and 718 new patients in the testing group. The ANN model with eight neurons in the hidden layer had the highest accuracies among the four ANN models: 92.46% and 85.79% in the training and testing datasets, respectively. The areas under the receiver operating characteristic curves of the automatically selected ANN model for the two datasets were 0.901 (95% CI = 0.814-0.988) and 0.869 (95% CI = 0.748-0.990), higher than the 0.745 (95% CI = 0.612-0.879) and 0.728 (95% CI = 0.595-0.862) of the logistic regression model. The ANN model can be used for predicting 1-year mortality in elderly patients with intertrochanteric fractures. It outperformed logistic regression on multiple performance measures when given the same variables.
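
The abstract gives the architecture (eight clinical inputs, one hidden layer of eight neurons, a single output) but no code. The following is a minimal sketch of that ANN-versus-logistic-regression comparison using scikit-learn on synthetic data; the features, hyperparameters, and split are assumptions, not the study's variables.

```python
# Sketch: compare an 8-hidden-neuron ANN with logistic regression for a
# binary mortality outcome, mirroring the study's design on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Placeholder data: 2150 patients, 8 generic clinical inputs.
X, y = make_classification(n_samples=2150, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=718 / 2150, random_state=0)  # 1432 train / 718 test

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
logit = LogisticRegression().fit(X_train, y_train)

for name, model in [("ANN", ann), ("logistic", logit)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```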

Relevance:

30.00%

Publisher:

Abstract:

Ontology matching is an important task when data from multiple data sources are integrated. Problems of ontology matching have been studied widely in the research literature, and many different solutions and approaches have been proposed, including in commercial software tools. In this survey, well-known approaches to ontology matching, and to its subtype schema matching, are reviewed and compared. The aim of this report is to summarize the knowledge about state-of-the-art solutions from the research literature, discuss how the methods work on different application domains, and analyze the pros and cons of different open source and academic tools in the commercial world.
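
As a concrete instance of the element-level techniques such surveys cover, here is a minimal name-similarity matcher; the schemas and the 0.7 acceptance threshold are illustrative assumptions, not taken from the report.

```python
# Sketch: element-level schema matching by name similarity, one of the basic
# techniques that ontology-matching systems combine with structural methods.
from difflib import SequenceMatcher

source = ["customer_name", "birth_date", "postal_code"]
target = ["CustomerName", "DateOfBirth", "ZipCode", "Phone"]

def normalize(name: str) -> str:
    # Lower-case and strip separators so naming conventions align.
    return name.replace("_", "").lower()

for s in source:
    # Pick the target element with the highest string similarity.
    best = max(target, key=lambda t: SequenceMatcher(
        None, normalize(s), normalize(t)).ratio())
    score = SequenceMatcher(None, normalize(s), normalize(best)).ratio()
    if score >= 0.7:
        print(f"{s} -> {best} ({score:.2f})")
```

Note that reordered tokens (birth_date vs. DateOfBirth) defeat naive string similarity, which is one reason practical matchers combine several techniques.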

Relevance:

30.00%

Publisher:

Abstract:

The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modeling framework with its rationale built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathway recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) are another nature-inspired modeling framework studied in this thesis. The reaction systems rationale is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems are a complementary modeling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, periodicity, etc., in order to do model checking of models based on reaction systems. We prove that the complexity of the decision problems related to these properties ranges from P through NP- and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS and introduce the conservation dependency graph to capture the relation between the species, and we also propose an algorithm to list the conserved sets of a given reaction system.
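
The reaction-systems formalism the thesis builds on has a very small core, which a short sketch can make concrete: a reaction a = (R, I, P) is enabled by a state T when R is included in T and I is disjoint from T, and the successor state is the union of the products of all enabled reactions. The example reactions below are illustrative, not the thesis's heat shock response model.

```python
# Sketch of a reaction-systems step function (Ehrenfeucht-Rozenberg style).
from typing import FrozenSet, List, Tuple

Reaction = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]  # (R, I, P)

def step(reactions: List[Reaction], state: FrozenSet[str]) -> FrozenSet[str]:
    result: set = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and inhibitors.isdisjoint(state):
            result |= products  # enabled: contribute products
    return frozenset(result)   # no permanence: unused entities vanish

rs = [
    (frozenset({"a"}), frozenset({"c"}), frozenset({"b"})),
    (frozenset({"b"}), frozenset(), frozenset({"a", "c"})),
]
state = frozenset({"a"})
for _ in range(4):
    print(sorted(state))     # {a} -> {b} -> {a, c} -> {} (inhibition by c)
    state = step(rs, state)
```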

Relevance:

30.00%

Publisher:

Abstract:

Our surrounding landscape is in a constantly dynamic state, but recently the rate of changes and their effects on the environment have considerably increased. In terms of the impact on nature, this development has not been entirely positive, but has rather caused a decline in valuable species, habitats, and general biodiversity. Despite recognition of the problem and its high importance, plans and actions for stopping the detrimental development are largely lacking. This partly originates from a lack of genuine will, but is also due to difficulties in detecting many valuable landscape components, and to their consequent neglect. To support knowledge extraction, various digital environmental data sources may be of substantial help, but only if all the relevant background factors are known and the data is processed in a suitable way. This dissertation concentrates on detecting ecologically valuable landscape components by using geospatial data sources, and applies this knowledge to support spatial planning and management activities. In other words, the focus is on observing regionally valuable species, habitats, and biotopes with GIS and remote sensing data, using suitable methods for their analysis. Primary emphasis is given to the hemiboreal vegetation zone and the drastic decline in its semi-natural grasslands, which were created by a long trajectory of traditional grazing and management activities. However, the applied perspective is largely methodological, and allows for the application of the obtained results in various contexts. Models based on statistical dependencies and correlations of multiple variables, which are able to extract desired properties from a large mass of initial data, are emphasized in the dissertation. In addition, the papers included combine several data sets from different sources and dates, with the aim of detecting a wider range of environmental characteristics as well as pointing out their temporal dynamics. The results of the dissertation emphasise the multidimensionality and dynamics of landscapes, which need to be understood in order to recognise their ecologically valuable components. This requires not only knowledge about the emergence of these components and an understanding of the data used, but also focusing the observations on the minute details that are able to indicate the existence of fragmented and partly overlapping landscape targets. In addition, this pinpoints the fact that most of the existing classifications are too generalised as such to provide all the required details, but they can be utilized at various steps along a longer processing chain. The dissertation also emphasises the importance of landscape history as a factor which both creates and preserves ecological values, and which sets an essential standpoint for understanding the present landscape characteristics. The obtained results are significant both for preserving semi-natural grasslands and for general methodological development, supporting a science-based framework for evaluating ecological values and guiding spatial planning.
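
The models described are statistical, combining multiple geospatial variables to flag valuable landscape components. As a loose illustration of that idea only (entirely synthetic data and hypothetical variables, not the dissertation's actual method):

```python
# Sketch: flagging likely semi-natural grassland pixels from multi-source,
# multi-date raster variables with a correlation-based statistical model.
# Real inputs would be co-registered satellite bands, indices and map layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pixels, n_vars = 5000, 6           # e.g. two dates x three spectral indices
X = rng.normal(size=(n_pixels, n_vars))
# Hypothetical ground truth: grasslands respond to a combination of variables.
y = (X @ rng.normal(size=n_vars) + rng.normal(scale=0.5, size=n_pixels)) > 1

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]  # per-pixel suitability score
print("pixels flagged:", int((prob > 0.9).sum()))
```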

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to examine various policy implementation models, and to determine what use they are to a government. In order to ensure that governmental proposals are created and exercised in an effective manner, there must be some guidelines in place which will assist in resolving difficult situations. All governments face the challenge of responding to public demand by delivering the type of policy responses that will attempt to answer those demands. The problem for those people in positions of policy-making responsibility is to balance the competitive forces that would influence policy. This thesis examines provincial government policy in two unique cases. The first is the revolutionary recommendations brought forth in the Hall-Dennis Report. The second is the question of extending full funding to the end of high school in the separate school system. These two cases illustrate how divergent and problematic the policy-making duties of any government may be. In order to respond to these political challenges, decision-makers must have a clear understanding of what they are attempting to do. They must also have an assortment of policy-making models that will ensure a policy response effectively deals with the issue under examination. A government must make every effort to ensure that all policy-making methods are considered, and that the data gathered is inserted into the most appropriate model. Currently, there is considerable debate over the benefits of the progressive individualistic education approach as proposed by the Hall-Dennis Committee. This debate is usually intensified during periods of economic uncertainty. Periodically, the province will also experience brief yet equally intense debate on the question of separate school funding. At one level, this debate centres on the efficiency of maintaining two parallel education systems, but the debate frequently has undertones of the religious animosity common in Ontario's history. As a result of the two policy cases under study we may ask ourselves these questions: a) did the policies in question improve the general quality of life in the province? and b) did the policies unite the province? In the cases of educational instruction and finance the debate is ongoing and unsettled. Currently, there is a widespread belief that provincial students at the elementary and secondary levels of education are not being educated adequately to meet the challenges of the twenty-first century. The perceived culprit is individual education, which sees students progressing through the system at their own pace and not meeting adequate education standards. The question of the finance of Catholic education occasionally rears its head in a painful fashion within the province. Some public school supporters tend to take extension as a personal religious defeat, rather than an opportunity to demonstrate that educational diversity can be accommodated within Canada's most populated province. This thesis is an attempt to analyze how successful provincial policy-implementation models were in answering public demand. A majority of the public did not demand additional separate school funding, yet it was put into place. The same majority did insist on an examination of educational methods, and the government did put changes in place. The thesis will also demonstrate how policy, if wisely created, may spread additional benefits to the public at large.
Catholic students currently enjoy a much improved financial contribution from the province, yet these additional funds were taken from somewhere. The public system had its funds reduced with what would appear to be minimal impact. This indicates that government policy is still sensitive to the strongly held convictions of those people in opposition to a given policy.

Relevance:

30.00%

Publisher:

Abstract:

The distribution of excitation energy between the two photosystems (PSII and PSI) of photosynthesis is regulated by the light state transition. Three models have been proposed for the mechanism of the state transition in phycobilisome (PBS) containing organisms, two involving protein phosphorylation. A procedure for the rapid isolation of thylakoid membrane and PBS fractions from the cyanobacterium Synechococcus sp. PCC 6301 in light state 1 and light state 2 was developed. The phosphorylation of thylakoid and soluble proteins rapidly isolated from intact cells in state 1 and state 2 was investigated. 77 K fluorescence emission spectra revealed that rapidly isolated thylakoid membranes retained the excitation energy distribution characteristic of intact cells in state 1 and state 2. Phosphoproteins were identified by gel electrophoresis of both thylakoid membrane and phycobilisome fractions isolated from cells labelled with 32P-orthophosphate. The results showed very similar phosphoprotein patterns for both thylakoid membrane and PBS fractions in state 1 and state 2. These results do not support proposed models for the state transition which require phosphorylation of PBS or thylakoid membrane proteins.

Relevance:

30.00%

Publisher:

Abstract:

We have presented a Green's function method for the calculation of the atomic mean square displacement (MSD) for an anharmonic Hamiltonian. This method effectively sums a whole class of anharmonic contributions to the MSD in the perturbation expansion in the high temperature limit. Using this formalism we have calculated the MSD for a nearest-neighbour fcc Lennard-Jones solid. The results show an improvement over the lowest order perturbation theory results; the difference with Monte Carlo calculations at temperatures close to melting is reduced from 11% to 3%. We also calculated the MSD for the alkali metals Na, K, Cs, where a sixth-neighbour interaction potential derived from pseudopotential theory was employed in the calculations. The MSD by this method increases by 2.5% to 3.5% over the respective perturbation theory results. The MSD was calculated for aluminum, where different pseudopotential functions and a phenomenological Morse potential were used. The results show that the pseudopotentials provide better agreement with experimental data than the Morse potential. An excellent agreement with experiment over the whole temperature range is achieved with the Harrison modified point-ion pseudopotential with the Hubbard-Sham screening function. We have calculated the thermodynamic properties of solid Kr by minimizing the total energy consisting of static and vibrational components, employing different schemes: the quasiharmonic theory (QH), λ² and λ⁴ perturbation theory, all terms up to O(λ⁴) of the improved self-consistent phonon theory (ISC), the ring diagrams up to O(λ⁴) (RING), the iteration scheme (ITER) derived from the Green's function method, and a scheme consisting of ITER plus the remaining contributions of O(λ⁴) not included in ITER, which we call E(FULL). We have calculated the lattice constant, the volume expansion, the isothermal and adiabatic bulk modulus, the specific heat at constant volume and at constant pressure, and the Grüneisen parameter from two different potential functions: Lennard-Jones and Aziz. The Aziz potential generally gives better agreement with experimental data than the LJ potential for the QH, λ², λ⁴ and E(FULL) schemes. When only a partial sum of the λ⁴ diagrams is used in the calculations (e.g. RING and ISC), the LJ results are in better agreement with experiment. The iteration scheme brings a definite improvement over the λ² PT for both potentials.
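
For orientation, the baseline that these anharmonic corrections improve upon is the classical high-temperature harmonic MSD; the standard textbook form is given below (the notation is assumed here, not quoted from the thesis).

```latex
% Classical high-temperature harmonic MSD per atom (mass M, N wavevectors q,
% phonon branches s); the Green's function method sums anharmonic corrections
% beyond this lowest-order term.
\langle u^2 \rangle_{\mathrm{HA}}
  = \frac{k_B T}{N M} \sum_{\mathbf{q},s} \frac{1}{\omega^2_{\mathbf{q}s}}
```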

Relevance:

30.00%

Publisher:

Abstract:

Photosynthetic state transitions were investigated in the cyanobacterium Synechococcus sp. PCC 7002 in both wild-type cells and mutant cells lacking phycobilisomes. Preillumination in the presence of DCMU (3-(3,4-dichlorophenyl)-1,1-dimethylurea) induced state 1, and dark adaptation induced state 2, in both wild-type and mutant cells as determined by 77 K fluorescence emission spectroscopy. Light-induced transitions were observed in the wild-type after preferential excitation of phycocyanin (state 2) or preferential excitation of chlorophyll a (state 1). The state 1 and 2 transitions in the wild-type had half-times of approximately 10 seconds. Cytochrome f and P700 oxidation kinetics could not be correlated with any current state transition model, as cells in state 1 showed faster oxidation kinetics regardless of excitation wavelength. Light-induced transitions were also observed in the phycobilisome-less mutant after preferential excitation of short wavelength chlorophyll a (state 2) or carotenoids and long wavelength chlorophyll a (state 1). One-dimensional electrophoresis revealed no significant differences in phosphorylation patterns of resolved proteins between wild-type cells in state 1 and state 2. It is concluded that the mechanism of the light state transition in cyanobacteria does not require the presence of the phycobilisome. The results contradict proposed models for the state transition which require an active role for the phycobilisome.

Relevance:

30.00%

Publisher:

Abstract:

Relation algebra is one of the state-of-the-art tools used by mathematicians and computer scientists for solving very complex problems. As a result, a computer algebra system for relation algebras called RelView has been developed at Kiel University. RelView works within the standard model of relation algebras. On the other hand, relation algebras do have other models which may have different properties. For example, in the standard model we always have L;L=L (the composition of two (heterogeneous) universal relations yields a universal relation). This is not true in some non-standard models, so any example in RelView will always satisfy this property even though it does not hold in general. On the other hand, it has been shown that every relation algebra with relational sums and subobjects can be seen as a matrix algebra, similar to the correspondence between binary relations over sets and Boolean matrices. The aim of my research is to develop a new system that works with both standard and non-standard models for arbitrary relations using multiple-valued decision diagrams (MDDs). This system will implement relations as matrix algebras. The proposed structure is a library written in C which can be imported by other languages such as Java or Haskell.
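
To make the standard-model reading concrete, here is a minimal sketch of heterogeneous relations as Boolean matrices with relational composition; this is the classical correspondence the abstract cites, not the proposed MDD-based library.

```python
# Sketch: heterogeneous relations as Boolean matrices; composition is the
# Boolean matrix product. In this standard model L;L = L holds whenever the
# intermediate set is non-empty (it can fail in non-standard models).
from typing import List

Rel = List[List[bool]]

def compose(r: Rel, s: Rel) -> Rel:
    # (r;s)[i][k] = there exists j with r[i][j] and s[j][k]
    return [[any(r_ij and s[j][k] for j, r_ij in enumerate(row))
             for k in range(len(s[0]))]
            for row in r]

def universal(rows: int, cols: int) -> Rel:
    return [[True] * cols for _ in range(rows)]

L1, L2 = universal(2, 3), universal(3, 2)
print(compose(L1, L2))  # universal again: [[True, True], [True, True]]
```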

Relevance:

30.00%

Publisher:

Abstract:

This thesis examines the performance of Canadian fixed-income mutual funds in the context of an unobservable market factor that affects mutual fund returns. We use various selection and timing models augmented with univariate and multivariate regime-switching structures. These models assume a joint distribution of an unobservable latent variable and fund returns. The fund sample comprises six Canadian value-weighted portfolios with different investing objectives from 1980 to 2011: the Canadian fixed-income funds, the Canadian inflation-protected fixed-income funds, the Canadian long-term fixed-income funds, the Canadian money market funds, the Canadian short-term fixed-income funds and the high-yield fixed-income funds. We find strong evidence that more than one state variable is necessary to explain the dynamics of the returns on Canadian fixed-income funds. For instance, Canadian fixed-income funds clearly show that there are two regimes that can be identified, with a turning point during the mid-1980s. This structural break corresponds to an increase in the Canadian bond index from its low values in the early 1980s to its current high values. Results for the other fixed-income funds show latent state variables that mimic the behaviour of general economic activity. Generally, we report that Canadian bond fund alphas are negative; in other words, fund managers do not add value through their selection abilities. We find evidence that Canadian fixed-income fund portfolio managers are successful market timers who shift portfolio weights between risky and riskless financial assets according to expected market conditions. Conversely, Canadian inflation-protected funds, Canadian long-term fixed-income funds and Canadian money market funds show no market timing ability. We conclude that these managers generally do not achieve positive performance by actively managing their portfolios. We also report that the Canadian fixed-income fund portfolios perform asymmetrically under different economic regimes; in particular, these portfolio managers demonstrate poorer selection skills during recessions. Finally, we demonstrate that the multivariate regime-switching model is superior to univariate models given the dynamic market conditions and the correlation between fund portfolios.
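
As a rough illustration of the univariate regime-switching structure described here, the sketch below fits a two-regime switching model to simulated returns with statsmodels; the data, regime parameters and the crude persistence trick are assumptions, not the thesis's specification (which is multivariate and includes selection/timing terms).

```python
# Sketch: two-regime Markov switching model for (simulated) fund returns.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
# Latent state path that flips with 5% probability each period.
states = (rng.random(n) < 0.05).cumsum() % 2
r = np.where(states == 0,
             rng.normal(0.002, 0.01, n),    # calm regime
             rng.normal(-0.004, 0.03, n))   # turbulent regime

mod = sm.tsa.MarkovRegression(r, k_regimes=2, switching_variance=True)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:5])  # P(regime | data), first obs
```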

Relevance:

30.00%

Publisher:

Abstract:

Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss the optimal designs under three optimality criteria: D-, A- and Q-optimality. We note that the classical designs are optimal only if the assumed model is correct. Because predictions from ALT experimental data, obtained under stress levels higher than the normal condition, involve extrapolation, the assumed model cannot be tested. Therefore, to guard against possible imprecision in the assumed PH model, the construction of robust designs is also explored.
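
A minimal sketch of the Cox PH reliability model on synthetic censored ALT data, using the lifelines library; the stress levels, censoring time and effect size are illustrative assumptions, and the thesis's actual focus (optimal and robust step-stress designs) is not implemented here.

```python
# Sketch: fit a Cox PH model to Type-I censored ALT data with stress as a
# covariate; the hazard scales as exp(beta * stress), with beta = 1.2 here.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
stress = rng.choice([1.0, 1.5, 2.0], size=n)   # accelerated stress levels
t = rng.exponential(scale=1.0 / np.exp(1.2 * stress))
censor = 0.5                                   # Type-I censoring time
df = pd.DataFrame({
    "duration": np.minimum(t, censor),
    "event": (t <= censor).astype(int),        # 1 = failure observed
    "stress": stress,
})

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # estimated stress coefficient should be near 1.2
```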

Relevance:

30.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
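
In symbols, the unifying principle can be stated roughly as follows (the notation is assumed here, not quoted from the paper): the SDF prices all assets, is spanned by the factors with state-dependent coefficients, and yields conditional beta pricing relations.

```latex
% Assumed notation: m_{t+1} the SDF, F_{t+1} the factors, Y_t the state
% variables, r_{f,t} the risk-free rate, beta_i loadings, lambda risk premia.
E\left[ m_{t+1} R_{i,t+1} \mid Y_t \right] = 1, \qquad
m_{t+1} = a(Y_t) + b(Y_t)' F_{t+1}, \qquad
E\left[ R_{i,t+1} \mid Y_t \right] - r_{f,t} = \beta_i(Y_t)' \lambda(Y_t)
```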

Relevance:

30.00%

Publisher:

Abstract:

We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
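
For reference, the Anderson-Rubin statistic that the instrument substitution method generalizes has, in the simplest case (no included exogenous regressors, Gaussian errors), the exact distribution below; the notation is standard rather than quoted from the paper.

```latex
% Test H_0: beta = beta_0 in y = X beta + u, with instrument matrix Z
% (k columns, n observations), P_Z the projection onto span(Z), M_Z = I - P_Z.
AR(\beta_0) =
  \frac{(y - X\beta_0)'\, P_Z\, (y - X\beta_0) / k}
       {(y - X\beta_0)'\, M_Z\, (y - X\beta_0) / (n - k)}
  \;\sim\; F(k,\, n - k) \quad \text{under } H_0
```

The statistic depends on the instruments directly and not on the first-stage fit, which is why its distribution is unaffected by weak instruments.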

Relevance:

30.00%

Publisher:

Abstract:

The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as the recently suggested weak GARCH model does when it obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations. Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher order dynamics, leverage effect and in-mean effect), usual GARCH models, and continuous time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous time GARCH modeling can be recovered in our framework.
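
A minimal statement of the SR-SARV(1) structure as described in the abstract, with assumed notation: squared innovations keep ARMA-type dynamics through a latent autoregressive variance factor, while the two restrictive GARCH identities are dropped.

```latex
% Assumed notation: J_t the enlarged information set, f_t the latent
% conditional variance factor (J_t-measurable), u and v innovations.
\varepsilon_{t+1} = \sqrt{f_t}\, u_{t+1}, \qquad
E[u_{t+1} \mid J_t] = 0, \quad E[u_{t+1}^2 \mid J_t] = 1,
\qquad
f_{t+1} = \omega + \gamma f_t + v_{t+1}, \quad
E[v_{t+1} \mid J_t] = 0, \quad 0 \le \gamma < 1
```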

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
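
The MC test machinery the paper combines across equations rests on a simple finite-sample principle: with N statistics simulated under the null, the rank-based p-value is exact when the statistic is pivotal. A minimal sketch, with an illustrative stand-in statistic rather than the paper's tests:

```python
# Sketch of the Monte Carlo (MC) test principle: compare the observed
# statistic with N replications simulated under the null hypothesis.
import numpy as np

rng = np.random.default_rng(3)

def stat(e: np.ndarray) -> float:
    z = (e - e.mean()) / e.std()
    return abs((z ** 3).mean())        # stand-in: asymmetry of residuals

n, N = 60, 99                          # sample size, MC replications
e_obs = rng.standard_t(df=8, size=n)   # "observed" residuals
s0 = stat(e_obs)
sims = np.array([stat(rng.standard_normal(n)) for _ in range(N)])

# MC p-value: rank of the observed statistic among the simulated ones.
p_mc = (1 + (sims >= s0).sum()) / (N + 1)
print(f"MC p-value = {p_mc:.3f}")
```

When the statistic's null distribution depends on nuisance parameters (the non-pivotal case), the MMC method maximizes this p-value over those parameters to keep the level controlled.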