134 results for "linear model of coregionalization"
Abstract:
A global framework for linear stability analyses of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach can analyse all aspects of the dynamics: long-wave and short-wave behaviour, phase velocities, and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send information to and receive information from surrounding vehicles. We focus on the design of the cooperation coefficients, which weight the information from downstream vehicles. The tuning of these coefficients is performed, and different ways of implementing an efficient cooperative strategy are discussed. Hence, this paper provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
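As a rough illustration of the dispersion relation root locus idea, the sketch below linearises a generic CF model dv_n/dt = f(s_n, Δv_n, v_n) around uniform flow and scans the roots of the resulting dispersion relation over wavenumber. The partial derivatives (f_s, f_dv, f_v) are illustrative placeholders with the usual signs, not values from the paper.

```python
# Minimal sketch: root locus of the dispersion relation for a linearised
# car-following model  dv_n/dt = f(s_n, dv_n, v_n).  Perturbations
# y_n(t) = exp(l*t - i*theta*n) give the quadratic dispersion relation
#   l^2 - [f_v + f_dv*(e^{i*theta} - 1)]*l - f_s*(e^{i*theta} - 1) = 0.
# The flow is linearly stable if Re(l) <= 0 for every wavenumber theta.
import numpy as np

f_s, f_dv, f_v = 0.3, 0.5, -1.0  # illustrative partials: f_s > 0, f_dv > 0, f_v < 0

max_growth = -np.inf
for theta in np.linspace(1e-3, np.pi, 500):
    g = np.exp(1j * theta) - 1.0
    roots = np.roots([1.0, -(f_v + f_dv * g), -f_s * g])
    max_growth = max(max_growth, roots.real.max())

print(f"max Re(lambda) over all wavenumbers: {max_growth:.4f}")
print("linearly stable" if max_growth <= 0 else "unstable (jam formation expected)")
```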
Abstract:
Objectives: To investigate whether a sudden temperature change between neighboring days has a significant impact on mortality. Methods: A Poisson generalized linear regression model combined with a distributed lag non-linear model was used to estimate the association of temperature change between neighboring days with mortality in a subtropical Chinese city during 2008–2012. Temperature change was calculated as the current day's temperature minus the previous day's temperature. Results: A significant effect of temperature change between neighboring days on mortality was observed. A temperature increase was significantly associated with elevated mortality from non-accidental and cardiovascular diseases, while a temperature decrease had a protective effect on non-accidental and cardiovascular mortality. Males and people aged 65 years or older appeared to be more vulnerable to the impact of temperature change. Conclusions: A temperature increase between neighboring days has a significant adverse impact on mortality. Further health mitigation strategies in response to climate change should take into account temperature variation between neighboring days.
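A minimal sketch of the modelling idea using statsmodels: a Poisson GLM for daily death counts with the day-to-day temperature change entered at several lags. The synthetic data and the plain lag terms are stand-ins; a full distributed lag non-linear model (DLNM) would use a spline cross-basis in both the exposure and lag dimensions.

```python
# Sketch: Poisson GLM for daily mortality with lagged temperature change.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1500                                   # ~4 years of synthetic daily data
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
df = pd.DataFrame({"temp": temp})
df["dtemp"] = df["temp"].diff()            # current day minus previous day
df["deaths"] = rng.poisson(30, n)          # synthetic daily death counts

for lag in range(4):                       # temperature change at lags 0-3
    df[f"dtemp_lag{lag}"] = df["dtemp"].shift(lag)
df = df.dropna()

X = sm.add_constant(df[[f"dtemp_lag{lag}" for lag in range(4)]])
fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()
print(fit.summary())
```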
Abstract:
Change point estimation is recognized as an essential tool of root cause analysis within quality control programs, as it enables clinical experts to search for potential causes of a change in hospital outcomes more effectively. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since the monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to surgery using a Weibull accelerated failure time regression model. We use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and the slope of the trend, together with the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator yields more accurate and precise estimates over linear trends. These advantages are enhanced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
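One plausible way to write down the kind of model described, shown purely as an illustrative sketch (the notation and parameterisation are assumptions, not taken from the paper): a Weibull accelerated failure time regression for risk adjustment, with a hinge term that switches on a linear trend in log survival time at the unknown change point τ.

```latex
% Illustrative Weibull AFT change point formulation (assumed notation)
\log T_i = \beta_0 + \mathbf{x}_i^{\top}\boldsymbol{\beta}
         + \gamma\,(t_i - \tau)_{+} + \sigma\,\epsilon_i,
\qquad \epsilon_i \sim \mathrm{Gumbel}(0,1),
\qquad (u)_{+} = \max(u, 0)
```

Here x_i collects the pre-surgery risk factors, t_i is the time of surgery, τ the change point location and γ the slope of the trend; MCMC then targets the joint posterior of (τ, γ, β, σ) under right censoring.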
Abstract:
We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large-scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds showing that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
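A toy illustration of the dual LP formulation the abstract refers to: minimising expected average cost over stationary state-action distributions μ(s,a), subject to stationarity and normalisation. The two-state, two-action MDP below is invented for the example; the paper's algorithm replaces this exact LP with stochastic convex optimisation over a feature-constrained neighbourhood.

```python
# Dual LP for the average-cost MDP: minimise sum_{s,a} mu(s,a) c(s,a)
# s.t.  sum_a mu(s',a) = sum_{s,a} P(s'|s,a) mu(s,a)  for all s'  (stationarity)
#       sum_{s,a} mu(s,a) = 1,  mu >= 0.
import numpy as np
from scipy.optimize import linprog

nS, nA = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a, s'] (toy numbers)
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 2.0],                    # c[s, a]
              [4.0, 0.5]])

# Variables: mu flattened as mu[s*nA + a].
A_eq = np.zeros((nS + 1, nS * nA))
for s2 in range(nS):                         # stationarity, one row per s'
    for s in range(nS):
        for a in range(nA):
            A_eq[s2, s * nA + a] -= P[s, a, s2]
            if s == s2:
                A_eq[s2, s * nA + a] += 1.0
A_eq[nS, :] = 1.0                            # normalisation row
b_eq = np.append(np.zeros(nS), 1.0)

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
mu = res.x.reshape(nS, nA)
print("stationary state-action distribution:\n", mu)
print("optimal average cost:", res.fun)
```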
Abstract:
Stability analyses have been widely used to better understand the mechanism of traffic jam formation. In this paper, we consider the impact of cooperative systems (a.k.a. connected vehicles) on traffic dynamics and, more precisely, on flow stability. Cooperative systems are emerging technologies enabling communication between vehicles and/or with the infrastructure. In a distributed communication framework, equipped vehicles are able to send information to and receive information from other equipped vehicles. Here, the effects of cooperative traffic are modeled through a general bilateral multianticipative car-following law that improves cooperative drivers' perception of their surrounding traffic conditions within a given communication range. Linear stability analyses are performed for a broad class of car-following models. They point out different stability conditions in both multianticipative and nonmultianticipative situations. To better understand what happens in unstable conditions, the shock wave structure is studied in the weakly nonlinear regime by means of the reductive perturbation method. The shock wave equation is obtained for generic car-following models by deriving the Korteweg–de Vries (KdV) equation. We then derive traffic-state-dependent conditions for the sign of the solitary wave (soliton) amplitude. This analytical result is verified through simulations, which confirm the validity of the derived estimates. The variation of the soliton amplitude as a function of the communication range is provided. The linear and weakly nonlinear analyses help justify the potential benefits of vehicle-integrated communication systems and provide new insights supporting the future implementation of cooperative systems.
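For readers unfamiliar with the reductive perturbation approach mentioned above, the generic structure is sketched here (the scalings and coefficients are schematic assumptions, not the paper's actual derivation): near the linear stability threshold, slow variables ξ = ε(n + ct), τ = ε³t reduce a car-following model to a KdV equation for the headway perturbation, whose solitary wave solution is known in closed form.

```latex
% Schematic KdV reduction near the stability threshold (assumed coefficients)
\partial_{\tau} u + \alpha\, u\, \partial_{\xi} u + \beta\, \partial_{\xi}^{3} u = 0,
\qquad
u(\xi,\tau) = A\,\operatorname{sech}^{2}\!\Big(\sqrt{\tfrac{\alpha A}{12\beta}}\,\big(\xi - \tfrac{\alpha A}{3}\,\tau\big)\Big)
```

The sign of the amplitude A depends on the model's partial derivatives evaluated at the critical traffic state, which is what makes a traffic-state-dependent sign condition possible.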
Abstract:
In this paper we analyse two variants of the SIMON family of lightweight block ciphers against variants of linear cryptanalysis and present the best linear cryptanalytic results on these reduced-round SIMON variants to date. We propose a time-memory trade-off method that finds differential/linear trails for any permutation admitting low-Hamming-weight differential/linear trails. Our method combines low-Hamming-weight trails found via the correlation matrix representing the target permutation with heavy-Hamming-weight trails found using a mixed integer programming model representing the target differential/linear trail. Our method enables us to find a 17-round linear approximation for SIMON-48, the best currently known. Using only the correlation matrix method, we are able to find a 14-round linear approximation for SIMON-32, likewise the current best. The presented linear approximations allow us to mount a 23-round key recovery attack on SIMON-32 and a 24-round key recovery attack on SIMON-48/96, which are the current best results on SIMON-32 and SIMON-48. In addition, we present an attack on 24 rounds of SIMON-32 with marginal complexity.
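For context, a small sketch of the SIMON round function that these trails are traced through (this follows the public SIMON specification, not code from the paper; the round keys below are arbitrary placeholders):

```python
# SIMON-32/64 round function: 16-bit words, Feistel structure
#   (x, y) -> (y XOR f(x) XOR k, x),  f(x) = (x<<<1 AND x<<<8) XOR (x<<<2)
MASK = 0xFFFF  # 16-bit words for SIMON-32

def rotl(v, r, n=16):
    """Left-rotate an n-bit word by r."""
    return ((v << r) | (v >> (n - r))) & MASK

def simon_round(x, y, k):
    f = (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)
    return (y ^ f ^ k) & MASK, x

# Toy demo with placeholder round keys (a real attack targets 23-24 rounds)
x, y = 0x6565, 0x6877
for k in [0x0100, 0x0908, 0x1110]:
    x, y = simon_round(x, y, k)
print(f"state after 3 rounds: {x:04x} {y:04x}")
```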
Abstract:
Two beetle-type scanning tunneling microscopes are described. Both designs have the thermal stability of the Besocke beetle and the simplicity of the Wilms beetle. Moreover, sample holders were designed that allow both semiconductor wafers and metal single crystals to be studied. The coarse approach is a linear motion of the beetle towards the sample using inertial slip–stick motion. Ten wires are required to control the position of the beetle and scanner and to measure the tunneling current. The two beetles were built with different sized piezolegs, and the vibrational properties of both beetles were studied in detail. It was found, in agreement with previous work, that the beetle bending mode is the lowest principal eigenmode. However, in contrast to previous vibrational studies of beetle-type scanning tunneling microscopes, we found that the beetles did not have the "rattling" modes that are thought to arise from the beetle sliding or rocking between surface asperities on the raceway. The mass of our beetles is 3–4 times larger than the mass of beetles where rattling modes have been observed. We conjecture that the mass of our beetles is above a "critical beetle mass." This is defined to be the beetle mass that attenuates the rattling modes by elastically deforming the contact region to the extent that the rattling modes cannot be identified as distinct modes in cross-coupling measurements.
Abstract:
We propose an iterative estimating equations procedure for the analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iteratively reweighted least squares. The finite sample performance of the procedure is studied by simulations and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
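A minimal numerical sketch of the iterative idea in the linear special case (invented data; the pooled residual covariance update shown is one simple choice, not necessarily the estimator analysed in the paper): alternate between generalised least squares for the regression coefficients and re-estimating the within-subject covariance from residuals.

```python
# Iterative (re)weighted least squares for longitudinal data with an
# unspecified within-subject covariance, estimated from residuals.
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_obs, p = 200, 4, 2
X = rng.normal(size=(n_subj, n_obs, p))
V_true = 0.5 * np.eye(n_obs) + 0.5           # exchangeable covariance
beta_true = np.array([1.0, -2.0])
Y = X @ beta_true + rng.multivariate_normal(np.zeros(n_obs), V_true, size=n_subj)

beta = np.zeros(p)
for it in range(20):
    R = Y - X @ beta                          # residuals, shape (n_subj, n_obs)
    V = R.T @ R / n_subj                      # pooled working covariance
    Vinv = np.linalg.inv(V)
    A = np.einsum('sip,ij,sjq->pq', X, Vinv, X)
    b = np.einsum('sip,ij,sj->p', X, Vinv, Y)
    beta_new = np.linalg.solve(A, b)
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        break
    beta = beta_new

print("estimated beta:", beta_new, "(true:", beta_true, ")")
```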
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors such as the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions drawn from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and to gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at values assumed from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
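A bare-bones sketch of catch-effort standardisation with a GLM, using invented column names (year, month, vessel, effort) and synthetic data; the paper compares this kind of model against linear models, mixed models and GEEs on the NPF data.

```python
# Sketch: standardising catch rates with a Poisson GLM and a log-effort
# offset; exponentiated year coefficients give a relative abundance index.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "year": rng.integers(1990, 2000, n).astype(str),
    "month": rng.integers(1, 13, n).astype(str),
    "vessel": rng.integers(0, 30, n).astype(str),
    "effort": rng.uniform(1, 10, n),
})
df["catch"] = rng.poisson(3 * df["effort"])

fit = smf.glm("catch ~ C(year) + C(month) + C(vessel)",
              data=df, family=sm.families.Poisson(),
              offset=np.log(df["effort"])).fit()
year_index = np.exp(fit.params.filter(like="C(year)"))  # relative index
print(year_index)
```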
Abstract:
This paper presents an approach, based on the Lean production philosophy, for rationalising the processes involved in producing specification documents for construction projects. Current construction literature erroneously depicts the creation of construction specifications as a linear process. This traditional understanding of the specification process often culminates in process waste. In fact, the evidence suggests that, although commonly generalised as linear, the activities involved in producing specification documents are nonlinear. Drawing on the outcome of participant observation, this paper presents an optimised approach for representing construction specifications. The actors typically involved in producing specification documents are identified, the processes suitable for automation are highlighted, and the central role of tacit knowledge is integrated into a conceptual template of construction specifications. By applying the transformation, flow, value (TFV) theory of Lean production, the paper argues that value creation can be realised by eliminating the wastes associated with the traditional preparation of specification documents, with a view to integrating specifications into digital models such as Building Information Models (BIM). The paper therefore presents an approach for rationalising the TFV theory as a method for optimising current approaches to generating construction specifications, based on a revised specification-writing model.
Abstract:
Embryonic development involves diffusion and proliferation of cells, as well as diffusion and reaction of molecules, within growing tissues. Mathematical models of these processes often involve reaction–diffusion equations on growing domains that have been primarily studied using approximate numerical solutions. Recently, we have shown how to obtain an exact solution to a single, uncoupled, linear reaction–diffusion equation on a growing domain, 0 < x < L(t), where L(t) is the domain length. The present work is an extension of our previous study, and we illustrate how to solve a system of coupled reaction–diffusion equations on a growing domain. This system of equations can be used to study the spatial and temporal distributions of different generations of cells within a population that diffuses and proliferates within a growing tissue. The exact solution is obtained by applying an uncoupling transformation, and the uncoupled equations are solved separately before applying the inverse uncoupling transformation to give the coupled solution. We present several example calculations to illustrate different types of behaviour. The first example calculation corresponds to a situation where the initially confined population diffuses sufficiently slowly that it is unable to reach the moving boundary at x = L(t). In contrast, the second example calculation corresponds to a situation where the initially confined population is able to overcome the domain growth and reach the moving boundary at x = L(t). In its basic format, the uncoupling transformation at first appears to be restricted to deal only with the case where each generation of cells has a distinct proliferation rate. However, we also demonstrate how the uncoupling transformation can be used when each generation has the same proliferation rate by evaluating the exact solutions as an appropriate limit.
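One plausible way to write the coupled system described (the notation, uniform-growth assumption and proliferation structure here are illustrative assumptions, not taken from the paper): cells of generation i diffuse, are advected and diluted by the growth-induced velocity, and divide out of generation i-1 into generation i.

```latex
% Sketch: coupled linear reaction-diffusion system on a growing domain
\frac{\partial c_i}{\partial t}
  + \frac{\partial}{\partial x}\!\big(v(x,t)\,c_i\big)
  = D\,\frac{\partial^{2} c_i}{\partial x^{2}}
  + 2\lambda_{i-1}\,c_{i-1} - \lambda_i\,c_i,
\qquad 0 < x < L(t), \quad v(x,t) = \frac{x\,\dot{L}(t)}{L(t)}
```

Because the coupling matrix is lower triangular with diagonal entries -λ_i, it is diagonalisable whenever the λ_i are distinct, which is why the uncoupling transformation at first appears restricted to distinct proliferation rates; the equal-rate case is then recovered as a limit.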
Abstract:
Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion processes on 0 < x < L(t), where L(t) is the domain length.
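As a sketch of why exact solutions are attainable (assuming uniform domain growth; this is a standard change of variables, not necessarily the exact route taken in the paper): rescaling space by the domain length maps the growing-domain PDE to a fixed domain.

```latex
% Change of variables xi = x/L(t), C(xi, t) = c(x, t), for uniform growth
\frac{\partial c}{\partial t}
  + \frac{\partial}{\partial x}\!\Big(\frac{x\dot{L}}{L}\,c\Big)
  = D\,\frac{\partial^{2} c}{\partial x^{2}} + k\,c
\quad\Longrightarrow\quad
\frac{\partial C}{\partial t}
  = \frac{D}{L(t)^{2}}\,\frac{\partial^{2} C}{\partial \xi^{2}}
  + \Big(k - \frac{\dot{L}}{L}\Big)\,C,
\qquad 0 < \xi < 1
```

On the fixed interval, separation of variables applies, with the time-dependent diffusivity absorbed by rescaling time as T = ∫₀ᵗ D/L(s)² ds.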
Abstract:
Using polynomial regression and response surface analysis to examine non-linear relationships between variables, this study demonstrates that more nuanced analytical approaches are required to investigate the relationships between constructs when the underlying theories suggest non-linearity. Drawing on the Theory of Planned Behaviour (TPB) and Ettlie's adoption stages, and employing data gathered from 162 owners of Small and Medium-sized Enterprises (SMEs), our findings reveal that subjective norms and attitude have differing influences upon behavioural intention in both the evaluation and trial stages of the adoption.
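A compact sketch of the analytical technique named above, with invented variable names (sn for subjective norms, att for attitude, bi for behavioural intention): fit the full second-order polynomial used in response surface analysis and inspect the quadratic and interaction terms for non-linearity.

```python
# Sketch: second-order polynomial (response surface) regression
#   bi ~ b0 + b1*sn + b2*att + b3*sn^2 + b4*sn*att + b5*att^2
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 162                                    # same n as the SME sample
df = pd.DataFrame({"sn": rng.normal(size=n), "att": rng.normal(size=n)})
df["bi"] = (0.4 * df["sn"] + 0.6 * df["att"]
            - 0.3 * df["sn"] ** 2 + rng.normal(scale=0.5, size=n))

fit = smf.ols("bi ~ sn + att + I(sn**2) + sn:att + I(att**2)", data=df).fit()
print(fit.params)          # significant quadratic terms signal non-linearity
```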
Abstract:
Nitrogen fertiliser is a major source of atmospheric N₂O, and in recent years there has been growing evidence for a non-linear, exponential relationship between N fertiliser application rate and N₂O emissions. However, there is still high uncertainty around the relationship between N fertiliser rate and N₂O emissions for many cropping systems. We conducted year-round measurements of N₂O emission and lint yield in four N rate treatments (0, 90, 180 and 270 kg N ha⁻¹) in a cotton-fallow rotation on a black vertosol in Australia. We observed a non-linear exponential response of N₂O emissions to increasing N fertiliser rates, with cumulative annual N₂O emissions of 0.55, 0.67, 1.07 and 1.89 kg N ha⁻¹ for the four respective N fertiliser rates, while no yield response to N occurred above 180 kg N ha⁻¹. The fertiliser-induced annual N₂O emission factors increased from 0.13% to 0.29% and 0.50% for the 90, 180 and 270 kg N ha⁻¹ treatments respectively, significantly lower than the IPCC Tier 1 default value (1.0%). This non-linear response suggests that an exponential N₂O emissions model may be more appropriate for estimating emissions of N₂O from soils cultivated with cotton in Australia. It also demonstrates that improved agricultural N management practices can be adopted in cotton to substantially reduce N₂O emissions without affecting yield potential.
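A small sketch of fitting an exponential emissions model to the four cumulative annual values reported in the abstract (the functional form E(N) = a·exp(b·N) is an assumption consistent with the description, not the paper's fitted model):

```python
# Fit an exponential N2O emissions model E(N) = a * exp(b * N) to the
# four (N rate, cumulative annual emission) pairs quoted in the abstract.
import numpy as np
from scipy.optimize import curve_fit

N = np.array([0.0, 90.0, 180.0, 270.0])     # kg N / ha
E = np.array([0.55, 0.67, 1.07, 1.89])      # kg N2O-N / ha / yr

def model(n, a, b):
    return a * np.exp(b * n)

(a, b), _ = curve_fit(model, N, E, p0=(0.5, 0.005))
print(f"E(N) = {a:.3f} * exp({b:.5f} * N)")
print("fitted vs observed:", np.round(model(N, a, b), 2), E)
```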