963 results for continuous-time models
                                
Abstract:
Developmental constraints have been postulated to limit the space of feasible phenotypes and thus shape animal evolution. These constraints have been suggested to be strongest during either early or mid-embryogenesis, corresponding to the early conservation model or the hourglass model, respectively. Conflicting results have been reported, but recent studies of animal transcriptomes have favored the hourglass model. Studies usually report descriptive statistics calculated for all genes over all developmental time points. This introduces dependencies between the sets of compared genes and may lead to biased results. Here we overcome this problem using an alternative modular analysis. We used the Iterative Signature Algorithm to identify distinct modules of genes co-expressed specifically in consecutive stages of zebrafish development. We then performed a detailed comparison of several gene properties between modules, allowing for a less biased and more powerful analysis. Notably, our analysis corroborated the hourglass pattern at the regulatory level, with the sequences of regulatory regions being most conserved for genes expressed in mid-development, but not at the level of gene sequence, age, or expression, in contrast to some previous studies. The early conservation model was supported by gene duplication and birth, which were rarest for genes expressed in early development. Finally, for all gene properties, we observed the least conservation for genes expressed in late development or in adults, consistent with both models. Overall, with the modular approach, we showed that different levels of molecular evolution follow different patterns of developmental constraints. Thus both models are valid, but with respect to different genomic features.
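The Iterative Signature Algorithm mentioned above alternates between scoring conditions over a current gene set and scoring genes over the selected conditions, thresholding after each step until a fixed point (a module) is reached. A minimal sketch on synthetic data (the thresholds, normalization details, and names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def isa_module(E, genes0, t_g=1.5, t_c=1.5, max_iter=100):
    """Find one co-expression module in expression matrix E (genes x conditions)
    by alternating gene/condition scoring with relative z-score thresholds.
    Illustrative sketch of the ISA idea, not the published implementation."""
    # z-score each gene across conditions, and each condition across genes
    Ez_g = (E - E.mean(axis=1, keepdims=True)) / E.std(axis=1, keepdims=True)
    Ez_c = (E - E.mean(axis=0, keepdims=True)) / E.std(axis=0, keepdims=True)
    genes = np.array(genes0, dtype=bool)
    conds = np.zeros(E.shape[1], dtype=bool)
    for _ in range(max_iter):
        # score each condition by the mean z-expression of the current gene set
        c_score = Ez_c[genes].mean(axis=0)
        conds = c_score > t_c * c_score.std()
        if not conds.any():
            break
        # score each gene over the selected conditions and re-threshold
        g_score = Ez_g[:, conds].mean(axis=1)
        new_genes = g_score > t_g * g_score.std()
        if np.array_equal(new_genes, genes):  # fixed point: module found
            break
        genes = new_genes
    return genes, conds
```

Seeding with different random gene sets and collecting the distinct fixed points yields the module decomposition used for the stage-specific comparisons.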
                                
Abstract:
Transient high-grade bacteremia following invasive procedures carries a risk of infective endocarditis (IE). This is supported by experimental endocarditis. On the other hand, case-control studies have shown that IE can be caused by cumulative exposure to low-grade bacteremia occurring during daily activities. However, no experimental demonstration of this latter possibility exists. This study investigated the infectivity in animals of continuous low-grade bacteremia compared to that of brief high-grade bacteremia. Rats with aortic vegetations were inoculated with Streptococcus intermedius, Streptococcus gordonii or Staphylococcus aureus (strains Newman and P8). Animals were challenged with 10^3 to 10^6 CFU. Identical bacterial numbers were given by bolus (1 ml in 1 min) or continuous infusion (0.0017 ml/min over 10 h). Bacteremia was 50 to 1,000 times greater after bolus than during continuous inoculation. Streptococcal bolus inoculation of 10^5 CFU infected 63 to 100% of vegetations, compared to 30 to 71% infection after continuous infusion (P > 0.05). When the inoculum was increased to 10^6 CFU, bolus inoculation infected 100% of vegetations and continuous infusion 70 to 100% (P > 0.05). S. aureus bolus injection of 10^3 CFU infected 46 to 57% of valves. This was similar to the 53 to 57% infection rates produced by continuous infusion (P > 0.05). Inoculation of 10^4 CFU of S. aureus infected 80 to 100% of vegetations after bolus and 60 to 75% after continuous infusion (P > 0.05). These results show that high-level bacteremia is not required to induce experimental endocarditis and support the hypothesis that cumulative exposure to low-grade bacteremia represents a genuine risk of IE in humans.
                                
Abstract:
BACKGROUND: Guidelines for the management of anaemia in patients with chronic kidney disease (CKD) recommend a minimal haemoglobin (Hb) target of 11 g/dL. Recent surveys indicate that this requirement is not met in many patients in Europe. In most studies, Hb is only assessed over a short-term period. The aim of this study was to examine the control of anaemia over a continuous long-term period in Switzerland. METHODS: A prospective multi-centre observational study was conducted in dialysed patients treated with recombinant human epoetin (EPO) beta, over a one-year follow-up period, with monthly assessments of anaemia parameters. RESULTS: Three hundred and fifty patients from 27 centres, representing 14% of the dialysis population in Switzerland, were included. Mean Hb was 11.9 ± 1.0 g/dL, and remained stable over time. Eighty-five percent of the patients achieved mean Hb ≥ 11 g/dL. The mean EPO dose was 155 ± 118 IU/kg/week, delivered mostly by the subcutaneous route (64-71%). Mean serum ferritin and transferrin saturation were 435 ± 253 µg/L and 30 ± 11%, respectively. At month 12, adequate iron stores were found in 72.5% of patients, whereas absolute and functional iron deficiencies were observed in only 5.1% and 17.8%, respectively. Multivariate analysis showed that diabetes unexpectedly influenced Hb towards higher levels (12.1 ± 0.9 g/dL; p = 0.02). One-year survival was significantly higher in patients with Hb ≥ 11 g/dL than in those with Hb < 11 g/dL (19.7% vs 7.3%, p = 0.006). CONCLUSION: In comparison to European reference studies, this survey shows remarkable and continuous control of anaemia in Swiss dialysis centres. These results were achieved through moderately high EPO doses, mostly given subcutaneously, and careful management of iron therapy.
                                
                                
Abstract:
The aim of this paper is to analyse how learning assessment, particularly the Continuous Assessment (CA) system, has been defined in the Public Administration and Management Diploma Course of the University of Barcelona (Spain). This course was a pioneering experiment at this university in implementing the guidelines of the European Higher Education Area (EHEA), and thus represents a good case study for verifying whether one of the cornerstones of the EHEA has been accomplished successfully. Using data obtained from the Teaching Plans elaborated by the lecturers of each subject, we are able to establish that the CA system has been progressively accepted, to the extent that it is now the assessment formula used by practically all of the lecturers, conforming in this way to the protocols laid down by the Faculty of Law in which this diploma course is taught. Nevertheless, we find high dispersion in how Continuous Assessment is actually defined. Indeed, there seems to be no unified view of how Continuous Assessment should be performed. This dispersion, however, seems to diminish over time, and it raises questions about the advisability of agreeing on common criteria, considering the potential that CA has as a pedagogical tool. Moreover, we find that the Unique Assessment system, which students may also apply for, is an option chosen only by a minority, with lecturers usually defining it as merely a theoretical and/or practical test, offering little innovation relative to traditional tests.
                                
Abstract:
A recent study of a pair of sympatric species of cichlids in Lake Apoyo in Nicaragua is viewed as providing probably one of the most convincing examples of sympatric speciation to date. Here, we describe and study a stochastic, individual-based, explicit genetic model tailored for this cichlid system. Our results show that relatively rapid (<20,000 generations) colonization of a new ecological niche and (sympatric or parapatric) speciation via local adaptation and divergence in habitat and mating preferences are theoretically plausible if: (i) the number of loci underlying the traits controlling local adaptation, and habitat and mating preferences is small; (ii) the strength of selection for local adaptation is intermediate; (iii) the carrying capacity of the population is intermediate; and (iv) the effects of the loci influencing nonrandom mating are strong. We discuss patterns and timescales of ecological speciation identified by our model, and we highlight important parameters and features that need to be studied empirically to provide information that can be used to improve the biological realism and power of mathematical models of ecological speciation.
                                
Abstract:
Background and objective: Oral anti-cancer treatments have expanded rapidly over recent years. While taking oral tablets at home ensures a better quality of life, it also exposes patients to the risk of sub-optimal adherence. The objective of this study is to assess how well ambulatory cancer patients execute their prescribed dosing regimen while engaged in continuous anti-cancer treatment. Design: This is an on-going longitudinal study. Consecutive patients starting an oral treatment are invited by the oncologist to enter the study. They are then referred to the pharmacy, where their oral anticancer treatment is dispensed in a Medication Event Monitoring System (MEMS™), which records the date and time of each opening of the drug container. Electronically compiled dosing history data from the MEMS are summarized and used as feedback during semi-structured interviews with the pharmacist, which are dedicated to the prevention and management of side effects. Interviews are scheduled before each medical visit. A report of the interview is available to the oncologist via an on-line secured portal. Setting: Seamless care approach between a Multidisciplinary Oncology Center and the Pharmacy of an Ambulatory Care and Community Medicine Department. Main outcome measures: For each patient, the comparison between the electronically compiled dosing history and the prescribed regimen was summarized using a daily binary indicator of whether or not the patient took the medication as prescribed. Results: The study started in March 2008. Among 22 eligible patients, 19 were included (11 men, median age 63 years) and 3 (14%) refused to participate. Fifteen patients were prescribed a QD regimen, 3 patients a BID regimen, and 1 patient switched from QD to BID during follow-up. Median follow-up was 182 days (IQR 72-252). Early discontinuation occurred in four patients: side effects (n = 1), psychiatric reasons (n = 1), cancer progression (n = 1) and death (n = 1). On average, the prescribed daily number of doses was taken in 99% of follow-up days. Conclusions: Execution of the prescribed dosing regimens was almost perfect during the first 6 months. Maintaining this high degree of regimen execution and persistence over time might, however, be challenging in this population, and therefore needs to be confirmed in larger cohort studies with longer follow-up.
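The daily binary indicator described above can be derived by counting MEMS container openings per calendar day and comparing each count with the prescribed number of daily doses. A minimal sketch (function and variable names are illustrative assumptions, not the study's actual software):

```python
from collections import Counter
from datetime import timedelta

def daily_adherence(opening_times, start, end, doses_per_day):
    """Map each follow-up day (date objects, start to end inclusive) to a
    binary indicator: True if the container was opened at least the
    prescribed number of times on that day."""
    opens = Counter(t.date() for t in opening_times)  # openings per calendar day
    indicator = {}
    day = start
    while day <= end:
        indicator[day] = opens[day] >= doses_per_day
        day += timedelta(days=1)
    return indicator

def execution_rate(indicator):
    """Fraction of follow-up days on which dosing matched the prescription."""
    return sum(indicator.values()) / len(indicator)
```

For a BID regimen `doses_per_day` would be 2; summarizing the per-patient rates over all included patients gives the 99%-of-days figure reported above.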
                                
Abstract:
Cardiovascular risk assessment might be improved with the addition of emerging new tests derived from atherosclerosis imaging, laboratory tests or functional tests. This article reviews relative risk, odds ratios, receiver operating characteristic (ROC) curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement and the integrated discrimination improvement. This serves to determine whether a new test has added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples serve to illustrate the novel approaches. This work serves as a review and as groundwork for the development of new guidelines on cardiovascular risk prediction, taking emerging tests into account, to be proposed in the future by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology.
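The posttest risk calculation based on likelihood ratios works through the odds form of Bayes' theorem: posttest odds = pretest odds × likelihood ratio. A minimal sketch with illustrative numbers (not taken from the article):

```python
def posttest_risk(pretest_prob, likelihood_ratio):
    """Convert a pretest probability to a posttest probability via the
    odds form of Bayes' theorem: post_odds = pre_odds * LR."""
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. a 10% pretest cardiovascular risk combined with a positive test
# whose positive likelihood ratio is 3 yields a posttest risk of 0.25
```

A likelihood ratio of 1 leaves the risk unchanged, which is why a test adds clinical value only when its likelihood ratios deviate appreciably from 1.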
                                
Abstract:
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
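The Gillespie algorithm referred to here samples exact trajectories of the underlying Markov jump process, and the steady-state distribution can be estimated by time-averaging a long run. A minimal sketch for a negatively self-regulated gene (the rate constants and the Hill-type repression term are illustrative assumptions, not the network studied in the paper):

```python
import random

def gillespie_self_regulated(k_max=20.0, K=10.0, gamma=1.0, t_end=500.0, seed=0):
    """Gillespie SSA for a self-repressing gene: production fires at rate
    k_max / (1 + n/K) (negative feedback), degradation at rate gamma * n.
    Returns the time-weighted mean copy number as a steady-state estimate."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    acc, t_prev = 0.0, 0.0
    while t < t_end:
        a_prod = k_max / (1.0 + n / K)   # propensity of production
        a_deg = gamma * n                # propensity of degradation
        a0 = a_prod + a_deg
        t += rng.expovariate(a0)         # exponential waiting time to next event
        acc += n * (min(t, t_end) - t_prev)  # time-weighted occupancy of state n
        t_prev = min(t, t_end)
        if t >= t_end:
            break
        if rng.random() * a0 < a_prod:   # pick which reaction fires
            n += 1
        else:
            n -= 1
    return acc / t_end
```

With these rates the deterministic balance k_max/(1 + n/K) = gamma*n gives n = 10, so a long run should time-average close to 10 copies.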
                                
Abstract:
Work-related flow is defined as a sudden and enjoyable merging of action and awareness that represents a peak experience in the daily lives of workers. Employees' perceptions of challenge and skill and their subjective experiences in terms of enjoyment, interest and absorption were measured using the experience sampling method, yielding a total of 6981 observations from a sample of 60 employees. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes. According to the R², AICc and BIC indices, the nonlinear dynamical systems model (i.e. the cusp catastrophe model) fit the data better than the linear and logistic regression models. Likewise, the cusp catastrophe model appears to be especially powerful for modelling cases with high levels of flow. Overall, flow represents a nonequilibrium condition that combines continuous and abrupt changes across time. Research and intervention efforts concerned with this process should focus on the variable of challenge, which, according to our study, appears to play a key role in the abrupt changes observed in work-related flow.
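Given each fitted model's log-likelihood, number of parameters k, and sample size n, the AICc and BIC used in the comparison above are short formulas; lower values indicate a better expected trade-off between fit and complexity. A generic sketch (not the authors' fitting code):

```python
import math

def aicc(log_lik, k, n):
    """Corrected Akaike information criterion: AIC plus a small-sample
    penalty term that grows as k approaches n."""
    aic = 2 * k - 2 * log_lik
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def bic(log_lik, k, n):
    """Bayesian information criterion; penalizes extra parameters more
    heavily than AIC when n is large."""
    return k * math.log(n) - 2 * log_lik
```

Comparing the cusp catastrophe model with the linear and logistic regressions then amounts to computing these criteria for each fitted model and preferring the smallest.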
                                
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, which remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. © 2012 Elsevier B.V. All rights reserved.
                                
Abstract:
Flood effectiveness observations imply that two families of processes describe the formation of debris flow volume. One is related to the rainfall-erosion relationship and can be seen as a gradual process; the other is related to additional geological/geotechnical events, hereafter named extraordinary events. In order to test the hypothesis that these two modes of volume formation coexist, several methodologies are applied. Firstly, classical approaches consisting of relating volume to catchment characteristics are considered. These approaches raise questions about the quality of the data rather than providing answers concerning the controlling processes. Secondly, we consider statistical approaches (the distribution of the cumulative number of events and cluster analysis), and these suggest the possibility of two distinct families of processes. However, the quantitative evaluation of the threshold differs from the one that could be obtained from the first approach, although all approaches agree on the coexistence of two families of events. Thirdly, a conceptual model is built to explore how and why debris flow volume in alpine catchments changes with time. Depending on the initial condition (sediment production), the model shows that large debris flows (i.e. with important volumes) are observed in the initial period, before a steady state is reached. During this second period, debris flow volumes such as those observed in the initial period are not observed again. Integrating the results of the three approaches, two case studies are presented showing: (1) the possibility of observing in a catchment large volumes that will never occur again due to a drastic decrease in sediment availability, supporting the difference from gradual erosion processes; and (2) that following a rejuvenation of the sediment storage (by a rock avalanche), the magnitude-frequency relationship of a torrent can be differentiated into two phases, an initial one with large and frequent debris flows and a later one with less intense and less frequent debris flows, supporting the results of the conceptual model. Although the results obtained cannot identify a clear threshold between the two families of processes, they show that some debris flows can be seen as pulses of sediment differing from those expected from gradual erosion.
                                
Abstract:
A water-reducing and retarding type admixture in concrete is commonly used on continuous bridge deck pours in Iowa. The concrete placed in the negative moment areas must remain plastic until all the dead load deflection due to the new deck's weight occurs. If the concrete does not remain plastic until the total deflection has occurred, structural cracks will develop in these areas. Retarding type admixtures will delay the setting time of concrete and prevent structural cracks if added in the proper amounts. Section 2412.02 of the Standard Specifications, 1972, Iowa State Highway Commission, states: "The admixture shall be used in amounts recommended by the manufacturer for conditions which prevail on the project and as approved by the engineer." The conditions which prevail on the project depend on temperature, humidity, wind conditions, etc. Each of these factors will affect the setting rate of the plastic concrete. The purpose of this project is to provide data that will be useful to field personnel concerning the retardation of concrete setting times, and how the offsets will vary with different addition rates and curing temperatures, holding all other atmospheric variables constant.
                                
Abstract:
There is a wide range of evidence to suggest that permeability can be constrained through induced polarization measurements. For clean sands and sandstones, current mechanistic models of induced polarization predict a relationship between the low-frequency time constant inferred from induced polarization measurements and the grain diameter. A number of observations do, however, disagree with this and indicate that the observed relaxation behavior is instead governed by the so-called dynamic pore radius L. To test this hypothesis, we have developed a set of new scaling relationships, which allow the relaxation time to be computed from the pore size, and the permeability to be computed from both the Cole-Cole time constant and the formation factor. Moreover, these new scaling relationships can also be used to predict the dependence of the Cole-Cole time constant on water saturation under unsaturated conditions. Comparative tests of the proposed new relationships against various published experimental results for saturated clean sands and sandstones, as well as for partially saturated clean sandstones, do indeed confirm that the dynamic pore radius L is a much more reliable indicator of the observed relaxation behavior than grain-size-based models.
 
                    