906 results for Timed and Probabilistic Automata


Relevance:

30.00%

Publisher:

Abstract:

Background: The post-genomic era has brought new challenges regarding the understanding of the organization and function of the human genome. Many of these challenges center on the meaning of differential gene regulation under distinct biological conditions and can be addressed by analyzing the Multiple Differential Expression (MDE) of genes associated with normal and abnormal biological processes. Currently, MDE analyses are limited to the usual methods of differential expression, which were initially designed for paired analysis. Results: We propose a web platform named ProbFAST for MDE analysis, which uses Bayesian inference to identify key genes that are intuitively prioritized by means of probabilities. A simulation study revealed that our method performs better than other approaches, and its application to public expression data demonstrated its flexibility in obtaining relevant genes biologically associated with normal and abnormal biological processes. Conclusions: ProbFAST is a freely accessible web-based application that enables MDE analysis on a global scale. It offers an efficient methodological approach for MDE analysis of a set of genes that are turned on and off in relation to functional information during the evolution of a tumor or tissue differentiation. The ProbFAST server can be accessed at http://gdm.fmrp.usp.br/probfast.
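
As a minimal illustration of the kind of Bayesian prioritization involved (a deliberately simplified sketch, not ProbFAST's actual model: it assumes a conjugate normal likelihood with known noise sigma and flat priors on the means, and all gene names and numbers below are made up), the posterior probability that a gene's mean expression differs between two conditions by more than a threshold delta has a closed form and can be used to rank genes:

```python
import math, random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_diff_expressed(x1, x2, sigma=1.0, delta=0.5):
    """Posterior probability that |mu1 - mu2| > delta, under a conjugate
    normal model with known noise sigma and flat priors on the means."""
    m = sum(x1) / len(x1) - sum(x2) / len(x2)         # posterior mean of the difference
    s = sigma * math.sqrt(1 / len(x1) + 1 / len(x2))  # posterior std of the difference
    return norm_cdf((-delta - m) / s) + 1.0 - norm_cdf((delta - m) / s)

random.seed(1)
# Simulated expression: gene A shifts between conditions, gene B does not.
gene_a_cond1 = [random.gauss(2.0, 1.0) for _ in range(20)]
gene_a_cond2 = [random.gauss(0.0, 1.0) for _ in range(20)]
gene_b_cond1 = [random.gauss(1.0, 1.0) for _ in range(20)]
gene_b_cond2 = [random.gauss(1.0, 1.0) for _ in range(20)]

p_a = prob_diff_expressed(gene_a_cond1, gene_a_cond2)
p_b = prob_diff_expressed(gene_b_cond1, gene_b_cond2)
print(f"P(gene A differentially expressed) = {p_a:.3f}")
print(f"P(gene B differentially expressed) = {p_b:.3f}")
```

Genes are then ranked by these probabilities; the multiple-condition (MDE) setting the platform addresses generalizes this pairwise picture.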


Stavskaya's model is a one-dimensional probabilistic cellular automaton (PCA) introduced at the end of the 1960s as an example of a model displaying a nonequilibrium phase transition. Although its absorbing state phase transition is well understood nowadays, the model has never received a full numerical treatment to investigate its critical behavior. In this Brief Report we characterize the critical behavior of Stavskaya's PCA by means of Monte Carlo simulations and finite-size scaling analysis. The critical exponents of the model are calculated and indicate that its phase transition belongs to the directed percolation universality class of critical behavior, as would be expected on the basis of the directed percolation conjecture. We also explicitly establish the relationship of the model with the Domany-Kinzel PCA on its directed site percolation line, a connection that seems to have gone unnoticed in the literature so far.
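
A minimal Monte Carlo sketch of the model (assuming one common convention for the update rule: each site copies the minimum of itself and its right neighbour, and is then spontaneously set to 1 with noise probability alpha, so that the all-ones configuration is absorbing; under this convention the rule sits on the Domany-Kinzel line p1 = p2 = 1 - alpha mentioned above):

```python
import numpy as np

def stavskaya_density(alpha, size=2000, steps=2000, seed=0):
    """Simulate Stavskaya's PCA and return the final density of
    active (0) sites, the order parameter of the transition."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size)                       # random initial configuration
    for _ in range(steps):
        x = np.minimum(x, np.roll(x, -1))              # deterministic erosion step
        x = np.where(rng.random(size) < alpha, 1, x)   # noise toward the absorbing state
    return 1.0 - x.mean()

rho_active = stavskaya_density(alpha=0.05)    # deep in the active phase
rho_absorbed = stavskaya_density(alpha=0.50)  # well beyond the critical noise
print(rho_active, rho_absorbed)
```

Under this convention the critical noise maps to the directed site percolation threshold through p = 1 - alpha, giving alpha_c roughly 0.295; sweeping alpha through that value and applying finite-size scaling is how the critical exponents quoted above would be extracted.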


Atomic clouds prepared in "timed Dicke" states, i.e., states where the phase of the oscillating atomic dipole moments varies linearly along one direction of space, are efficient sources of superradiant light emission [Scully et al., Phys. Rev. Lett. 96, 010501 (2006)]. Here, we show that, in contrast to previous assertions, timed Dicke states are not the states automatically generated by incident laser light. In reality, the atoms act back on the driving field because of the finite refraction of the cloud. This leads to nonuniform phase shifts, which, at higher optical densities, dramatically alter the cooperative scattering properties, as we show by explicit calculation of macroscopic observables such as the radiation pressure force.


Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Networks (AGNs) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, which is founded on a feature selection approach where a target gene is fixed and the expression profile is observed for all other genes in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used in order to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly-random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG).
The experimental results indicate that the inference method was sensitive to variations in the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the accuracy of network identification, with very good results obtained even from small expression profiles. However, the adopted inference method was not sensitive enough to distinguish different structures of interaction among genes, behaving similarly when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
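
The three-step pipeline can be sketched end to end for the ER case (a toy version under stated assumptions: sigmoidal linear dynamics for the simulated expression, plain lag-one correlation as the feature-selection criterion, and a known in-degree per target; the actual framework uses richer criterion functions and network models):

```python
import numpy as np

rng = np.random.default_rng(42)

# --- (1) Artificial Gene Network: Erdos-Renyi directed graph over 50 genes ---
n_genes, p_edge = 50, 0.04
adj = rng.random((n_genes, n_genes)) < p_edge       # adj[i, j]: gene i regulates gene j
np.fill_diagonal(adj, False)
weights = np.where(adj, rng.choice([-1.0, 1.0], adj.shape), 0.0)

# Simulate temporal expression: thresholded linear dynamics plus noise.
steps = 200
expr = np.zeros((steps, n_genes))
expr[0] = rng.random(n_genes)
for t in range(1, steps):
    drive = expr[t - 1] @ weights + 0.1 * rng.standard_normal(n_genes)
    expr[t] = 1.0 / (1.0 + np.exp(-4.0 * drive))    # sigmoidal response

# --- (2) Identification: fix each target, rank all other genes as predictors ---
recovered = np.zeros_like(adj)
for target in range(n_genes):
    scores = np.abs([np.corrcoef(expr[:-1, g], expr[1:, target])[0, 1]
                     for g in range(n_genes)])
    scores[target] = 0.0
    k = adj[:, target].sum()                        # simplification: in-degree known
    if k:
        recovered[np.argsort(scores)[-k:], target] = True

# --- (3) Validation: compare the identified network with the original one ---
tp = np.logical_and(recovered, adj).sum()
precision = tp / max(recovered.sum(), 1)
print(f"edge recovery precision: {precision:.2f}")
```

Swapping the ER generator for WS, BA, or geographical graphs and the correlation score for another criterion reuses the same validation loop, which is the point of the framework.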


Pinto, ALS, Oliveira, NC, Gualano, B, Christmann, RB, Painelli, VS, Artioli, GG, Prado, DML, and Lima, FR. Efficacy and safety of concurrent training in systemic sclerosis. J Strength Cond Res 25(5): 1423-1428, 2011-The optimal training model for patients with systemic sclerosis (SSc) is unknown. In this study, we aimed to investigate the effects of a 12-week combined resistance and aerobic training program (concurrent training) in SSc patients. Eleven patients with no evidence of pulmonary involvement were recruited for the exercise program. Lower and upper limb dynamic strengths (assessed by 1 repetition maximum [1RM] of a leg press and bench press, respectively), isometric strength (assessed by back pull and handgrip tests), balance and mobility (assessed by the timed up-and-go test), muscle function (assessed by the timed-stands test), Rodnan score, digital ulcers, Raynaud's phenomenon, and blood markers of muscle inflammation (creatine kinase and aldolase) were assessed at baseline and after the 12-week program. Exercise training significantly enhanced the 1RM leg press (41%) and 1RM bench press (13%) values and back pull (24%) and handgrip strength (11%). Muscle function was also improved (15%), but balance and mobility were not significantly changed. The time-to-exhaustion was increased (46.5%, p = 0.0004), the heart rate at rest was significantly reduced, and the workload and time of exercise at the ventilatory thresholds and at the peak of exercise were increased. However, maximal and submaximal V̇O2 were unaltered (p > 0.05). The Rodnan score was unchanged, and muscle enzymes remained within normal levels. No change was observed in digital ulcers and Raynaud's phenomenon. This is the first study to demonstrate that a 12-week concurrent training program is safe and substantially improves muscle strength, function, and aerobic capacity in SSc patients.


In this paper, the method of Galerkin and the Askey-Wiener scheme are used to obtain approximate solutions to the stochastic displacement response of Kirchhoff plates with uncertain parameters. Theoretical and numerical results are presented. The Lax-Milgram lemma is used to express the conditions for existence and uniqueness of the solution. Uncertainties in plate and foundation stiffness are modeled by respecting these conditions, hence using Legendre polynomials indexed in uniform random variables. The space of approximate solutions is built using results of density between the space of continuous functions and Sobolev spaces. Approximate Galerkin solutions are compared with results of Monte Carlo simulation, in terms of first and second order moments and in terms of histograms of the displacement response. Numerical results for two example problems show very fast convergence to the exact solution, with excellent accuracy. The Askey-Wiener Galerkin scheme developed herein is able to reproduce the histogram of the displacement response. The scheme is shown to be a theoretically sound and efficient method for the solution of stochastic problems in engineering. (C) 2009 Elsevier Ltd. All rights reserved.
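
The essence of the Askey-Wiener (polynomial chaos) Galerkin scheme can be shown on a single degree of freedom rather than a plate (a hypothetical spring of uncertain stiffness k(xi) = k0(1 + eps*xi) with xi uniform on [-1, 1], so Legendre polynomials are the matching basis and the exact mean response is available for comparison; all numbers are assumptions):

```python
import numpy as np
from numpy.polynomial import legendre

# Random stiffness k(xi) = k0*(1 + eps*xi), xi ~ Uniform(-1, 1); response u = F/k.
k0, eps, F = 1000.0, 0.4, 10.0
order = 8                                   # chaos order: Legendre polynomials P_0..P_8

# Gauss-Legendre quadrature; the density of xi is 1/2 on [-1, 1].
nodes, w = legendre.leggauss(32)
w = w / 2.0

P = np.array([legendre.legval(nodes, np.eye(order + 1)[i]) for i in range(order + 1)])
kxi = k0 * (1.0 + eps * nodes)

# Galerkin system: sum_j u_j <k P_i P_j> = F <P_i> = F * delta_i0.
A = np.einsum('q,iq,jq->ij', w * kxi, P, P)
b = np.zeros(order + 1)
b[0] = F
u = np.linalg.solve(A, b)

mean_pc = u[0]                              # since E[P_0] = 1 and E[P_i] = 0 for i > 0
mean_exact = F / (k0 * 2 * eps) * np.log((1 + eps) / (1 - eps))
print(mean_pc, mean_exact)
```

Because the response is a smooth function of xi, the chaos coefficients decay exponentially, which is the mechanism behind the very fast convergence reported above; the plate problem replaces the scalar solve with a finite element system per chaos mode.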


This paper investigates probabilistic logics endowed with independence relations. We review propositional probabilistic languages without and with independence. We then consider graph-theoretic representations for propositional probabilistic logic with independence; complexity is analyzed, algorithms are derived, and examples are discussed. Finally, we examine a restricted first-order probabilistic logic that generalizes relational Bayesian networks. (c) 2007 Elsevier Inc. All rights reserved.
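
A tiny worked example of the difference an independence assertion makes (assumed numbers: P(A) = 0.7 and P(B) = 0.6): marginal constraints alone only bound P(A and B), here found by brute-force enumeration of the feasible probability distributions over the four truth assignments, whereas adding independence pins the value down exactly:

```python
# Probability mass over the four truth assignments of (A, B): p11, p10, p01, p00.
pA, pB = 0.7, 0.6

feasible = []
for t in [i / 10000 for i in range(10001)]:        # candidate values for P(A & B)
    masses = (t, pA - t, pB - t, 1 - pA - pB + t)  # forced by the two marginals
    if all(m >= -1e-12 for m in masses):           # a valid distribution?
        feasible.append(t)

lo, hi = min(feasible), max(feasible)  # tight bounds under the marginals alone
point = pA * pB                        # single value entailed by independence
print(f"{lo:.2f} <= P(A & B) <= {hi:.2f}; with independence: {point:.2f}")
```

The enumeration is a stand-in for the linear-programming view of probabilistic satisfiability; graph-theoretic representations such as those discussed in the paper exploit independence to avoid reasoning over exponentially many truth assignments.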


The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting a single item (the mth) after every m produced items and deciding, at each inspection, whether the fraction of conforming items has been reduced or not. If the inspected item is nonconforming, production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, we develop a probabilistic model that classifies the examined item repeatedly until a conforming or b non-conforming classifications are observed. The first event that occurs (a conforming classifications or b non-conforming classifications) determines the final classification of the examined item. Properties of an ergodic Markov chain were used to obtain an expression for the average cost of the control system, which can be optimized by three parameters: the sampling interval of the inspections (m); the number of repeated conforming classifications (a); and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches: the first consists of a simple preventive policy, in which the production system is adjusted after every n produced items (no inspection is performed); the second classifies the examined item repeatedly r (fixed) times and considers it conforming if most classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and classifies the examined item as conforming if most classifications were conforming. On the other hand, depending on the magnitude of the errors and costs, the preventive policy can be, on average, more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B. V. All rights reserved.
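
The final-classification rule can be sketched in isolation (this computes only the probability that a "conforming" readings arrive before b "non-conforming" ones, for an assumed per-reading conforming probability q; the paper's full model embeds this in an ergodic Markov chain to optimize the average cost over m, a, and b):

```python
from math import comb

def p_final_conforming(a, b, q):
    """Probability that a 'conforming' readings are observed before
    b 'non-conforming' readings, when each repeated classification
    independently reads 'conforming' with probability q."""
    # Sum over j = number of non-conforming readings seen before the a-th conforming one.
    return sum(comb(a - 1 + j, j) * q**a * (1 - q)**j for j in range(b))

# A conforming item read correctly with probability 0.95 (5% diagnosis error, assumed):
q = 0.95
for a, b in [(1, 1), (2, 3), (3, 2)]:
    print(a, b, round(p_final_conforming(a, b, q), 4))
```

With a = b = 1 the rule reduces to a single classification, and raising a relative to b trades a lower false-acceptance rate against more repeated inspections, which is exactly the trade-off the average-cost optimization resolves.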


The aim of this study was to examine the reliability and validity of field tests for assessing physical function in mid-aged and young-old people (55–70 y). Tests were selected that required minimal space and equipment and could be implemented in multiple field settings such as a general practitioner's office. Nineteen participants completed 2 field and 1 laboratory testing sessions. Intra-class correlations showed good reliability for the tests of upper body strength (lift and reach, R= .66), lower body strength (sit to stand, R= .80) and functional capacity (Canadian Step Test, R= .92), but not for leg power (single timed chair rise, R= .28). There was also good reliability for the balance test during 3 stances: parallel (94.7% agreement), semi-tandem (73.7%), and tandem (52.6%). Comparison of field test results with objective laboratory measures found good validity for the sit to stand (cf 1RM leg press, Pearson r= .68, p< .05), and for the step test (cf PWC140, r= −.60, p< .001), but not for the lift and reach (cf 1RM bench press, r= .43, p> .05), balance (r= −.13, −.18, .23) and rate of force development tests (r= −.28). It was concluded that the lower body strength and cardiovascular function tests were appropriate for use in field settings with mid-aged and young-old adults.
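
The reliability analysis above rests on intraclass correlations; a minimal sketch of the one-way, single-measure form ICC(1,1) on made-up test-retest scores (the study's exact ICC variant is not stated, so this is one standard choice):

```python
def icc_oneway(data):
    """ICC(1,1): one-way random-effects, single-measure intraclass
    correlation for an n-subjects x k-trials table."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    # Between-subjects and within-subjects mean squares from one-way ANOVA:
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(data, subj_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical scores for 5 subjects over two sessions:
consistent = [[10, 10], [14, 14], [9, 9], [17, 17], [12, 12]]
noisy      = [[10, 14], [14, 9], [9, 17], [17, 12], [12, 10]]
print(icc_oneway(consistent), icc_oneway(noisy))
```

Perfectly repeatable scores give an ICC of 1, while scores dominated by within-subject variation drive it toward (or below) zero, which is why R = .28 for the single timed chair rise was read as poor reliability.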


Market-based transmission expansion planning gives investors information on the most cost-efficient places to invest and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are the system planners' concern. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate a system's overall expected economical losses during operation in a competitive market. It takes both the investors' and the planner's points of view into account and further improves on the traditional reliability cost. By applying the EEL, system planners can obtain a clear idea of the transmission network's bottleneck and the amount of losses arising from this weak point. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors planning their investments. This index can truly reflect the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), which shows how the EEL can predict the current system bottleneck under future operational conditions and how to use the EEL as one of the planning objectives to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
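
A toy Monte Carlo estimate of an expected economical loss (illustrative only: a hypothetical two-line corridor with independent forced outages, and loss valued as unserved load times an assumed value of lost load; the paper's EEL with its NCP enhancement and market modeling is far richer):

```python
import random

random.seed(7)

# Hypothetical corridor: two parallel lines feed a 150 MW load.
capacities = [100.0, 100.0]    # MW
unavailability = [0.02, 0.05]  # forced-outage rates (assumed)
load = 150.0                   # MW
voll = 4000.0                  # value of lost load, $/MWh (assumed)

def sample_loss():
    """Draw one system state and return its economical loss in $ per hour."""
    avail = sum(c for c, u in zip(capacities, unavailability) if random.random() > u)
    return max(load - avail, 0.0) * voll

n = 200_000
eel_mc = sum(sample_loss() for _ in range(n)) / n

# Analytic check by enumerating the four line states:
p_up = [1 - u for u in unavailability]
eel_exact = (p_up[0] * (1 - p_up[1]) + (1 - p_up[0]) * p_up[1]) * 50.0 * voll \
          + (1 - p_up[0]) * (1 - p_up[1]) * 150.0 * voll
print(eel_mc, eel_exact)
```

In a real study the per-state loss would come from an optimal power flow with market prices, and the EEL of each candidate expansion plan would feed the GA as one objective.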


Action systems are a construct for reasoning about concurrent, reactive systems, in which concurrent behaviour is described by interleaving atomic actions. Sere and Troubitsyna have proposed an extension to action systems in which actions may be expressed and composed using discrete probabilistic choice as well as demonic nondeterministic choice. In this paper we develop a trace-based semantics for probabilistic action systems. This semantics provides a simple theoretical base on which practical refinement rules for probabilistic action systems may be justified.


Standard tools for the analysis of economic problems involving uncertainty, including risk premiums, certainty equivalents and the notions of absolute and relative risk aversion, are developed without making specific assumptions on functional form beyond the basic requirements of monotonicity, transitivity, continuity, and the presumption that individuals prefer certainty to risk. Individuals are not required to display probabilistic sophistication. The approach relies on the distance and benefit functions to characterize preferences relative to a given state-contingent vector of outcomes. The distance and benefit functions are used to derive absolute and relative risk premiums and to characterize preferences exhibiting constant absolute risk aversion (CARA) and constant relative risk aversion (CRRA). A generalization of the notion of Schur-concavity is presented. If preferences are generalized Schur concave, the absolute and relative risk premiums are generalized Schur convex, and the certainty equivalents are generalized Schur concave.
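
A small numerical sketch of these notions for the constant-absolute-risk-aversion case (assuming the standard exponential utility u(x) = -exp(-lambda*x) and a normal gamble, for which the certainty equivalent has the closed form mu - lambda*sigma^2/2; the paper itself deliberately avoids such functional-form assumptions):

```python
import math, random

random.seed(3)

lam = 2.0             # absolute risk aversion coefficient (assumed)
mu, sigma = 1.0, 0.5  # gamble X ~ Normal(mu, sigma^2) (assumed)

# The certainty equivalent solves u(CE) = E[u(X)] with u(x) = -exp(-lam*x):
draws = [random.gauss(mu, sigma) for _ in range(200_000)]
eu = sum(-math.exp(-lam * x) for x in draws) / len(draws)
ce_mc = -math.log(-eu) / lam
risk_premium = mu - ce_mc          # amount a CARA agent would pay to shed the risk

ce_exact = mu - lam * sigma**2 / 2 # closed form for CARA utility and normal risk
print(ce_mc, ce_exact, risk_premium)
```

The risk premium is positive whenever the agent prefers certainty to risk, and under CARA it is independent of initial wealth, which is the property the distance- and benefit-function approach generalizes to state-contingent settings.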


Hypokinetic movement can be greatly improved in Parkinson's disease patients by the provision of external cues to guide movement. It has recently been reported, however, that movement performance in parkinsonian patients can be similarly improved in the absence of external cues by using attentional strategies, whereby patients are instructed to consciously attend to particular aspects of the movement which would normally be controlled automatically. To study the neurophysiological basis of such improvements in performance associated with the use of attentional strategies, movement-related cortical potentials were examined in Parkinson's disease and control subjects using a reaction time paradigm. One group of subjects were explicitly instructed to concentrate on internally timed responses to anticipate the presentation of a predictably timed go signal. Other subjects were given no such instruction regarding attentional strategies. Early-stage premovement activity of movement-related potentials was significantly increased in amplitude and reaction times were significantly faster for Parkinson's disease subjects when instructed to direct their attention toward internally generating responses rather than relying on external cues. It is therefore suggested that the use of attentional strategies may allow movement to be mediated by less automatic and more conscious attentional motor control processes which may be less impaired by basal ganglia dysfunction, and thereby improve movement performance in Parkinson's disease.


We investigate the internal dynamics of two cellular automaton models with heterogeneous strength fields and differing nearest neighbour laws. One model is a crack-like automaton, transferring all stress from a rupture zone to the surroundings. The other automaton is a partial stress drop automaton, transferring only a fraction of the stress within a rupture zone to the surroundings. To study the evolution of stress, the mean spectral density f(k_r) of a stress deficit field is examined prior to, and immediately following, ruptures in both models. Both models display a power-law relationship between f(k_r) and the spatial wavenumber k_r, of the form f(k_r) ~ k_r^(-beta). In the crack model, the evolution of stress deficit is consistent with cyclic approach to, and retreat from, a critical state in which large events occur. The approach to criticality is driven by tectonic loading. Short-range stress transfer in the model does not affect the approach to criticality of broad regions in the model. The evolution of stress deficit in the partial stress drop model is consistent with small fluctuations about a mean state of high stress, behaviour indicative of a self-organised critical system. Despite displaying statistics similar to those of natural earthquakes, these simplified models lack a physical basis. Physically motivated models of earthquakes also display dynamical complexity similar to that of a critical point system. Studies of dynamical complexity in physical models of earthquakes may lead to advancement towards a physical theory for earthquakes.
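
The spectral analysis step can be sketched as a synthetic check (not the paper's automata: generate a 1-D field with a known power-law spectrum f(k_r) ~ k_r^(-beta) and random phases, then recover beta from a log-log fit of the estimated spectral density):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize a 1-D "stress deficit" field with known spectrum f(k) ~ k^(-beta):
n, beta_true = 4096, 1.5
k = np.fft.rfftfreq(n)[1:]                  # positive wavenumbers
amp = k ** (-beta_true / 2.0)               # |F(k)| ~ k^(-beta/2), so PSD ~ k^(-beta)
noise = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
spectrum = np.concatenate(([0.0], amp * noise))
field = np.fft.irfft(spectrum, n)

# Estimate the spectral density and fit the power-law exponent on log-log axes:
psd = np.abs(np.fft.rfft(field))[1:] ** 2
slope, _ = np.polyfit(np.log(k), np.log(psd), 1)
beta_hat = -slope
print(beta_hat)
```

Tracking how the fitted beta of the stress deficit field changes before and after large ruptures is what distinguishes the cyclic critical-state behaviour of the crack model from the stationary fluctuations of the partial stress drop model.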


The evolution of event time and size statistics in two heterogeneous cellular automaton models of earthquake behavior is studied and compared to the evolution of these quantities during observed periods of accelerating seismic energy release prior to large earthquakes. The two automata have different nearest neighbor laws, one of which produces self-organized critical (SOC) behavior (PSD model) and the other of which produces quasi-periodic large events (crack model). In the PSD model, periods of accelerating energy release before large events are rare. In the crack model, many large events are preceded by periods of accelerating energy release. When compared to randomized event catalogs, accelerating energy release before large events occurs more often than random in the crack model but less often than random in the PSD model; it is easier to tell the crack and PSD model results apart from each other than to tell either model apart from a random catalog. The evolution of event sizes during the accelerating energy release sequences in all models is compared to that of observed sequences. The accelerating energy release sequences in the crack model consist of an increase in the rate of events of all sizes, consistent with observations from a small number of natural cases but inconsistent with a larger number of cases in which there is an increase in the rate of only moderate-sized events. On average, no increase in the rate of events of any size is seen before large events in the PSD model.