963 results for continuous-time models
Abstract:
We present a new method for lysis of single cells in continuous flow, in which cells are sequentially trapped, lysed, and released in an automatic process. Using optimized frequencies, dielectrophoretic trapping allows cells to be exposed reproducibly to high electric fields for long durations, giving good control over the lysis parameters. Cytosol extraction was evaluated in situ on single Chinese hamster ovary (CHO) cells through out-diffusion of fluorescent molecules at different voltage amplitudes. A diffusion model is proposed that correlates this out-diffusion to the total area of the created pores, which depends on the potential drop across the cell membrane, and thereby enables evaluation of the total pore area in the membrane. After lysis, dielectrophoretic trapping is no longer effective because of the reduced conductivity inside the cells, leading to cell release. The trapping time is linked to the time required for cytosol extraction and can thus provide additional validation of effective cytosol extraction for non-fluorescent cells. Furthermore, using a single voltage for both trapping and lysis yields a fully automatic process of cell trapping, lysis, and release, allowing the device to be operated in continuous flow without human intervention.
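The diffusion model described in this abstract can be sketched minimally: assuming quasi-steady flux through the created pores, the cytosolic fluorophore concentration decays exponentially with rate k = D*A_pore/(V*delta), so fitting the fluorescence decay recovers the total pore area. All numerical values (diffusivity, cell volume, membrane thickness) below are illustrative, not from the paper.

```python
import numpy as np

def intensity(t, A_pore, D=3e-10, V=1.0e-15, delta=5e-9):
    """Relative fluorescence during out-diffusion through membrane pores.

    Assumes quasi-steady flux through pores of total area A_pore (m^2)
    across a membrane of thickness delta (m), giving exponential decay
    of the cytosolic concentration: C(t) = C0 * exp(-k*t) with
    k = D * A_pore / (V * delta).  Parameter values are illustrative.
    """
    k = D * A_pore / (V * delta)
    return np.exp(-k * t)

def pore_area_from_decay(t, I, D=3e-10, V=1.0e-15, delta=5e-9):
    """Invert the model: estimate total pore area from a measured decay."""
    k = -np.polyfit(t, np.log(I), 1)[0]   # decay rate = slope of log-intensity
    return k * V * delta / D

t = np.linspace(0.0, 10.0, 50)            # seconds
I = intensity(t, A_pore=1e-14)            # synthetic, noise-free trace
A_hat = pore_area_from_decay(t, I)        # recovers the pore area used above
```

In practice the measured trace would be noisy, and the fit would be restricted to the post-lysis decay window.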
Abstract:
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
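The state space representation mentioned in this abstract can be illustrated with a toy simulation: coefficients follow a random walk (the state equation) while observations load on them (the measurement equation). This is the standard TVP setup the paper extends; dimensions, initial values, and noise scales here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 200, 2

# State equation: beta_t = beta_{t-1} + eta_t  (random-walk coefficients)
increments = rng.normal(0.0, 0.02, size=(T, k))
beta = np.array([0.5, -0.3]) + np.cumsum(increments, axis=0)

# Measurement equation: y_t = x_t' * beta_t + eps_t
x = rng.normal(size=(T, k))
y = np.sum(x * beta, axis=1) + rng.normal(0.0, 0.1, size=T)
```

The paper's point is that applying this random-walk evolution naively to cointegrating vectors is problematic; its specification instead lets the cointegrating space evolve over time.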
Abstract:
Block factor methods offer an attractive approach to forecasting with many predictors. These methods extract the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which has forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
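The weight-updating mechanism behind dynamic model averaging can be sketched as a forgetting-factor recursion: each model's weight is flattened toward uniform at rate alpha (forgetting), then updated by that model's one-step predictive likelihood. Dynamic model selection simply picks the highest-weight model at each date. The forgetting factor and the toy likelihoods below are illustrative.

```python
import numpy as np

def dma_weights(log_liks, alpha=0.99):
    """Dynamic-model-averaging weights via a forgetting factor.

    log_liks: (T, K) one-step predictive log-likelihoods for K models.
    At each t, prior weights are last period's weights raised to the
    power alpha (forgetting), then updated by the predictive likelihood.
    Values of alpha near 1 imply slow model switching.
    """
    T, K = log_liks.shape
    w = np.full(K, 1.0 / K)
    out = np.empty((T, K))
    for t in range(T):
        prior = w ** alpha
        prior /= prior.sum()
        post = prior * np.exp(log_liks[t] - log_liks[t].max())
        w = post / post.sum()
        out[t] = w
    return out

# Model 1 forecasts better in the second half: weight shifts toward it.
ll = np.zeros((100, 2))
ll[50:, 1] = 0.5
w = dma_weights(ll)
```

With equal likelihoods the weights stay at 1/K; once one model forecasts better, its weight climbs toward 1 at a speed governed by alpha.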
Abstract:
1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance compared with models with pseudo-absence data simulated totally at random (strategy 1). 5. 
Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
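The pseudo-absence pools can be sketched with boolean site masks. Strategy numbering follows the abstract (strategy 1: any site without a target-species record; strategy 3: additionally restricted to habitat similar to the target's needs); the site count and prevalence rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 1000
presence = rng.random(n_sites) < 0.05     # sites with target-species records
suitable = rng.random(n_sites) < 0.30     # habitat similar to target's needs

candidates1 = ~presence                   # strategy 1: random background
candidates3 = ~presence & suitable        # strategy 3: habitat-matched pool

def draw_pseudo_absences(mask, n, rng):
    """Sample n pseudo-absence sites without replacement from a pool."""
    idx = np.flatnonzero(mask)
    return rng.choice(idx, size=n, replace=False)

pa1 = draw_pseudo_absences(candidates1, 50, rng)
pa3 = draw_pseudo_absences(candidates3, 50, rng)
```

A GLM would then be fitted to presences against each pseudo-absence set; the study's result is that the habitat-matched pool (strategy 3) yields the best-performing models.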
Abstract:
Research into the biomechanical manifestation of fatigue during exhaustive runs is increasingly popular, but a better understanding of how spring-mass behaviour adapts over the course of strenuous, self-paced exercise is still needed to develop optimized training and injury-prevention programs. This study investigated continuous changes in running mechanics and spring-mass behaviour during a 5-km run. 12 competitive triathletes performed a 5-km running time trial (mean performance: 17 min 30 s) on a 200 m indoor track. Vertical and anterior-posterior ground reaction forces were measured every 200 m by a 5-m long force platform system, and used to determine spring-mass model characteristics. After a fast start, running velocity progressively decreased (-11.6%; P<0.001) in the middle part of the race before an end spurt in the final 400-600 m. Stride length (-7.4%; P<0.001) and frequency (-4.1%; P=0.001) decreased over the 25 laps, while contact time (+8.9%; P<0.001) and total stride duration (+4.1%; P<0.001) progressively lengthened. Peak vertical forces (-2.0%; P<0.01) and leg compression (-4.3%; P<0.05) decreased with time, but centre of mass vertical displacement did not (+3.2%; P>0.05). As a result, vertical stiffness decreased (-6.0%; P<0.001) during the run, whereas leg stiffness changes were not significant (+1.3%; P>0.05). Spring-mass behaviour thus progressively changes during a 5-km time trial towards a deteriorated vertical stiffness, which alters impact and force production characteristics.
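The spring-mass quantities above reduce to two ratios: vertical stiffness is peak vertical force over centre-of-mass vertical displacement, and leg stiffness is peak force over leg compression. A sketch with invented numbers (not the study's data) reproduces the reported pattern: force and leg compression fall together while displacement rises, so vertical stiffness drops but leg stiffness barely moves.

```python
def vertical_stiffness(f_max, dy):
    """k_vert = peak vertical force / centre-of-mass vertical displacement."""
    return f_max / dy

def leg_stiffness(f_max, dl):
    """k_leg = peak vertical force / leg compression."""
    return f_max / dl

# Illustrative early-run values: peak force 1800 N, displacement 6 cm,
# leg compression 17 cm.  Late-run values apply the abstract's relative
# changes: force -2%, displacement +3.2%, leg compression -4.3%.
k_vert_start = vertical_stiffness(1800.0, 0.060)
k_vert_end   = vertical_stiffness(1800.0 * 0.98, 0.060 * 1.032)
k_leg_start  = leg_stiffness(1800.0, 0.170)
k_leg_end    = leg_stiffness(1800.0 * 0.98, 0.170 * 0.957)
```

The vertical stiffness ratio comes out near 0.95 (a clear decline) while the leg stiffness ratio stays within a few percent of 1, matching the reported significance pattern.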
Abstract:
This paper compares the forecasting performance of different models which have been proposed for forecasting in the presence of structural breaks. These models differ in their treatment of the break process, the parameters defining the model which applies in each regime and the out-of-sample probability of a break occurring. In an extensive empirical evaluation involving many important macroeconomic time series, we demonstrate the presence of structural breaks and their importance for forecasting in the vast majority of cases. However, we find no single forecasting model consistently works best in the presence of structural breaks. In many cases, the formal modeling of the break process is important in achieving good forecast performance. However, there are also many cases where simple, rolling OLS forecasts perform well.
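The rolling OLS benchmark mentioned in this abstract can be sketched directly: re-estimate by least squares on a moving window, so pre-break observations eventually drop out and the fit adapts to the current regime. The window length and the synthetic break below are illustrative.

```python
import numpy as np

def rolling_ols_forecasts(y, x, window=40):
    """One-step-ahead forecasts from OLS re-estimated on a rolling window.

    A simple benchmark for series with structural breaks: only the most
    recent `window` observations inform each forecast.
    """
    preds = []
    for t in range(window, len(y)):
        X = np.column_stack([np.ones(window), x[t - window:t]])
        beta, *_ = np.linalg.lstsq(X, y[t - window:t], rcond=None)
        preds.append(beta[0] + beta[1] * x[t])
    return np.array(preds)

# Synthetic regression with a slope break at t = 100.
rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = np.where(np.arange(200) < 100, 1.0 * x, -1.0 * x) + 0.1 * rng.normal(size=200)
preds = rolling_ols_forecasts(y, x)
```

Once the window has refilled with post-break data, the forecasts track the new regime; formal break models can do better around the break itself, which is the trade-off the paper documents.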
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
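Kalman filter estimation with a forgetting factor replaces an explicit state-noise covariance with an inflation of the prediction covariance, which is what keeps it computationally simple. A minimal single-equation sketch (the paper works with large VAR systems; the forgetting factor and noise variance here are illustrative):

```python
import numpy as np

def kalman_forgetting(y, X, lam=0.95, v=0.1):
    """Time-varying coefficients via a Kalman filter with forgetting.

    Instead of specifying a state-noise covariance, the prediction step
    inflates the covariance by 1/lam, discounting old data geometrically.
    lam and the measurement variance v are illustrative choices.
    """
    T, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k)
    betas = np.empty((T, k))
    for t in range(T):
        P = P / lam                      # forgetting: R_t = P_{t-1} / lam
        x = X[t]
        S = x @ P @ x + v                # one-step predictive variance
        K = P @ x / S                    # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x) @ P
        betas[t] = beta
    return betas

# Synthetic regression whose slope drifts from 0 to 2 over the sample.
rng = np.random.default_rng(3)
T = 300
X = np.column_stack([np.ones(T), rng.normal(size=T)])
true_slope = np.linspace(0.0, 2.0, T)
y = X[:, 0] * 1.0 + X[:, 1] * true_slope + 0.1 * rng.normal(size=T)
betas = kalman_forgetting(y, X)
```

The filtered coefficients lag the drifting truth by roughly lam/(1-lam) observations, which is the usual bias-variance trade-off in choosing the forgetting factor.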
Abstract:
This paper considers the lag structures of dynamic models in economics, arguing that the standard approach is too simple to capture the complexity of actual lag structures arising, for example, from production and investment decisions. It is argued that recent (1990s) developments in the theory of functional differential equations provide a means to analyse models with generalised lag structures. The stability and asymptotic stability of two growth models with generalised lag structures are analysed. The paper concludes with some speculative discussion of time-varying parameters.
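A fixed-delay differential equation is the simplest member of the generalised lag structures discussed here. A sketch via Euler time-stepping with a constant history function, with coefficients chosen only to exhibit one stable and one unstable case (not the paper's growth models):

```python
import numpy as np

def simulate_dde(a, b, tau, x0=1.0, T=50.0, dt=0.01):
    """Euler simulation of the linear delay equation
        x'(t) = a*x(t) + b*x(t - tau),
    with constant history x(t) = x0 for t <= 0.
    A toy stand-in for models with generalised (distributed) lags.
    """
    n = int(T / dt)
    d = int(tau / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        lagged = x[i - d] if i >= d else x0   # history before t = 0
        x[i + 1] = x[i] + dt * (a * x[i] + b * lagged)
    return x

stable = simulate_dde(a=-1.0, b=0.5, tau=1.0)    # a + b < 0: decays
unstable = simulate_dde(a=0.2, b=0.5, tau=1.0)   # a + b > 0: grows
```

Stability is governed by the roots of the characteristic equation lambda = a + b*exp(-lambda*tau); the delay can destabilise systems that look stable when the lag is ignored, which is the paper's motivating point.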
Abstract:
Time-inconsistency is an essential feature of many policy problems (Kydland and Prescott, 1977). This paper presents and compares three methods for computing Markov-perfect optimal policies in stochastic nonlinear business cycle models. The methods considered include value function iteration, generalized Euler equations, and parameterized shadow prices. In the context of a business cycle model in which a fiscal authority chooses government spending and income taxation optimally, while lacking the ability to commit, we show that the solutions obtained using value function iteration and generalized Euler equations are somewhat more accurate than that obtained using parameterized shadow prices. Among these three methods, we show that value function iteration can be applied easily, even to environments that include a risk-sensitive fiscal authority and/or inequality constraints on government spending. We show that the risk-sensitive fiscal authority lowers government spending and income taxation, reducing the disincentive households face to accumulate wealth.
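Value function iteration, the first of the three methods compared, can be illustrated on a much simpler problem than the paper's fiscal-policy model: the textbook growth model with log utility and full depreciation, which has the known closed-form policy k' = alpha*beta*k^alpha. Grid size and parameters are arbitrary.

```python
import numpy as np

# Value function iteration for a toy growth model:
#   V(k) = max_{k'} log(k^alpha - k') + beta * V(k')
# (log utility, full depreciation).  This illustrates the iteration
# itself, not the paper's Markov-perfect fiscal-policy model.
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 200)
V = np.zeros(200)
for _ in range(500):
    # c[i, j]: consumption in state k_i when choosing next capital k_j
    c = k_grid[:, None] ** alpha - k_grid[None, :]
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -1e10)
    V_new = np.max(u + beta * V[None, :], axis=1)
    delta = np.max(np.abs(V_new - V))
    V = V_new
    if delta < 1e-8:          # sup-norm convergence of the Bellman operator
        break
policy = k_grid[np.argmax(u + beta * V[None, :], axis=1)]
```

The iteration converges by the contraction mapping property, and the computed policy matches the closed form up to grid resolution.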
Abstract:
BACKGROUND AND AIMS: In critically ill patients, fractional hepatic de novo lipogenesis increases in proportion to carbohydrate administration during isoenergetic nutrition. In this study, we sought to determine whether this increase may be the consequence of continuous enteral nutrition and bed rest. We therefore measured fractional hepatic de novo lipogenesis in a group of 12 healthy subjects during near-continuous oral feeding (hourly isoenergetic meals with a liquid formula containing 55% carbohydrate). In eight subjects, near-continuous enteral nutrition and bed rest were applied over a 10 h period; in the other four subjects, this period was extended to 34 h. Fractional hepatic de novo lipogenesis was measured by infusing (13)C-labeled acetate and monitoring VLDL (13)C-palmitate enrichment with mass isotopomer distribution analysis. Fractional hepatic de novo lipogenesis was 3.2% (range 1.5-7.5%) in the eight subjects after 10 h of near-continuous nutrition and 1.6% (range 1.3-2.0%) in the four subjects after 34 h of near-continuous nutrition and bed rest. This indicates that continuous nutrition and physical inactivity do not increase hepatic de novo lipogenesis. Fractional hepatic de novo lipogenesis previously reported in critically ill patients under similar nutritional conditions (9.3%; range 5.3-15.8%) was markedly higher than in healthy subjects (P<0.001). These data from healthy subjects indicate that the increased fractional hepatic de novo lipogenesis observed in critically ill patients cannot be attributed to continuous nutrition and bed rest alone.
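The core of mass isotopomer distribution analysis (MIDA) is that palmitate synthesised from n = 8 acetyl units with precursor enrichment p has a binomial isotopomer distribution, so the M+2/M+1 ratio identifies p, and the measured M+1 excess then gives fractional synthesis. A bare-bones sketch; the natural-abundance corrections real MIDA requires are omitted.

```python
from math import comb

def isotopomer_dist(p, n=8):
    """Binomial mass-isotopomer distribution of newly synthesised palmitate
    built from n acetyl units with precursor enrichment p."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def precursor_enrichment(m2_over_m1, n=8):
    """Recover p from the measured M+2/M+1 ratio of the labelled population.
    For a binomial distribution, M2/M1 = ((n - 1) / 2) * p / (1 - p)."""
    r = 2.0 * m2_over_m1 / (n - 1)
    return r / (1.0 + r)

def fractional_synthesis(m1_excess, p, n=8):
    """Fractional de novo lipogenesis: measured M+1 excess divided by the
    M+1 abundance expected if ALL palmitate were newly made at enrichment p."""
    return m1_excess / isotopomer_dist(p, n)[1]
```

Round-tripping a known precursor enrichment and synthesis fraction through these formulas recovers both exactly, which is the internal-consistency property MIDA relies on.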
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use became wider and addressed different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, the prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future.
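Age prediction from a time-temperature path can be caricatured with a constant closure temperature: the predicted age is the time since the particle last cooled through it. PECUBE's actual age-prediction models solve diffusion problems over the full thermal history; the constant-Tc rule and the cooling history below are only illustrative.

```python
import numpy as np

def predict_age(times_ma, temps_c, tc=110.0):
    """Toy thermochronometric age: time since the particle last cooled
    below a nominal closure temperature tc (deg C).  times_ma runs from
    the past down to 0 (present day), matching temps_c point for point.
    """
    below = temps_c < tc
    last_cross = times_ma[0]          # default: below tc for the whole path
    for i in range(1, len(temps_c)):
        if below[i] and not below[i - 1]:
            # linear interpolation of the cooling-through-tc time
            frac = (tc - temps_c[i - 1]) / (temps_c[i] - temps_c[i - 1])
            last_cross = times_ma[i - 1] + frac * (times_ma[i] - times_ma[i - 1])
    return last_cross

t = np.linspace(20.0, 0.0, 201)       # 20 Ma ago -> present
T = 10.0 + 10.0 * t                   # steady cooling at 10 C/Myr from 210 C

age = predict_age(t, T)               # crosses 110 C at 10 Ma
```

A particle that cooled steadily through 110 C at 10 Ma gets a predicted age of 10 Ma; reheating episodes would reset the age, which the loop handles by keeping the last crossing.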
Abstract:
We survey the main theoretical aspects of models for Mobile Ad Hoc Networks (MANETs). We present theoretical characterizations of mobile network structural properties, different dynamic graph models of MANETs, and finally we give detailed summaries of a few selected articles. In particular, we focus on articles dealing with connectivity of mobile networks, and on articles which show that mobility can be used to propagate information between nodes of the network while at the same time maintaining small transmission distances, and thus saving energy.
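A minimal dynamic graph model of a MANET: node positions evolve by a small random walk, and each snapshot is a geometric graph with edges between nodes within transmission radius r. Connectivity of each snapshot is checked by graph search. All parameters (node count, step size, radius) are arbitrary.

```python
import numpy as np

def connected(pos, r):
    """Is the geometric graph on points `pos` with radius `r` connected?
    Builds the adjacency matrix from pairwise distances and runs a BFS/DFS
    from node 0."""
    n = len(pos)
    d2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
    adj = d2 <= r * r
    seen = np.zeros(n, dtype=bool)
    stack = [0]
    seen[0] = True
    while stack:
        v = stack.pop()
        for u in np.flatnonzero(adj[v] & ~seen):
            seen[u] = True
            stack.append(u)
    return seen.all()

# Random-walk mobility in the unit square: each step, every node moves a
# small random amount; count how often the snapshot graph is connected.
rng = np.random.default_rng(4)
pos = rng.random((30, 2))
connected_steps = 0
for _ in range(100):
    pos = np.clip(pos + rng.normal(0.0, 0.02, pos.shape), 0.0, 1.0)
    connected_steps += connected(pos, r=0.35)
```

The surveyed mobility results concern exactly this kind of question: how connectivity of the snapshot (or time-aggregated) graph behaves as radius, density, and mobility vary, and how mobility itself can carry information with short hops.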
Abstract:
BACKGROUND: Sunitinib (SU) is a multitargeted tyrosine kinase inhibitor with antitumor and antiangiogenic activity. The objective of this trial was to demonstrate antitumor activity of continuous SU treatment in patients with hepatocellular carcinoma (HCC). PATIENTS AND METHODS: Key eligibility criteria included unresectable or metastatic HCC, no prior systemic anticancer treatment, measurable disease, and Child-Pugh class A or mild Child-Pugh class B liver dysfunction. Patients received 37.5 mg SU daily until progression or unacceptable toxicity. The primary endpoint was progression-free survival at 12 weeks (PFS12). RESULTS: Forty-five patients were enrolled. The median age was 63 years; 89% had Child-Pugh class A disease and 47% had distant metastases. PFS12 was rated successful in 15 patients (33%; 95% confidence interval, 20%-47%). Over the whole trial period, one complete response and a 40% rate of stable disease as the best response were achieved. The median PFS duration, disease stabilization duration, time to progression, and overall survival time were 1.5, 2.9, 1.5, and 9.3 months, respectively. Grade 3 and 4 adverse events were infrequent. None of the 33 deaths were considered drug related. CONCLUSION: Continuous SU treatment with 37.5 mg daily is feasible and has moderate activity in patients with advanced HCC and mild to moderate liver dysfunction. Under this trial design (>13 PFS12 successes), the therapy is considered promising. This is the first trial describing the clinical effects of continuous dosing of SU in HCC patients on a schedule that is used in an ongoing, randomized, phase III trial in comparison with the current treatment standard, sorafenib (ClinicalTrials.gov identifier, NCT00699374).
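As a consistency check on the reported numbers, a normal-approximation (Wald) interval for 15 PFS12 successes among 45 patients reproduces the quoted 33% (95% CI, 20%-47%) after rounding; the trial itself may have used a different interval method.

```python
from math import sqrt

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.
    Used here only to check the trial's reported PFS12 rate and CI."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p, p - half, p + half

p, lo, hi = wald_ci(15, 45)
# p ~ 0.333, lo ~ 0.196, hi ~ 0.471 -> 33% (20%-47%) after rounding
```

With small samples an exact (Clopper-Pearson) or Wilson interval would normally be preferred; here the simple approximation already matches the published figures.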