836 results for model-based security management
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was originally developed in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding-horizon optimal control computation in nonlinear model predictive control.
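To make the iterate-on-modified-model idea concrete, here is a toy Python sketch. The scalar dynamics, the one-step cost and the single additive modifier `alpha` are all hypothetical illustrations, not the paper's algorithm: a simplified model-based problem is solved repeatedly, and the modifier absorbs the measured model-reality mismatch until the iterations settle.

```python
import numpy as np

def reality(x, u):
    # Hypothetical "real" scalar dynamics, unknown to the optimiser.
    return 0.8 * x + u + 0.05 * x ** 2

def model(x, u, alpha):
    # Simplified model: the nonlinearity is replaced by a modifier alpha.
    return 0.8 * x + u + alpha

def solve_model_problem(x0, alpha, horizon=10):
    # Toy stand-in for the model-based optimal control problem:
    # at each step pick u minimising x_next^2 + u^2 under the model.
    xs, us = [x0], []
    for _ in range(horizon):
        u = -(0.8 * xs[-1] + alpha) / 2.0
        us.append(u)
        xs.append(model(xs[-1], u, alpha))
    return np.array(xs), np.array(us)

x0, alpha = 1.0, 0.0
for k in range(15):
    xs, us = solve_model_problem(x0, alpha)
    # Modifier update: absorb the average model-reality mismatch
    # along the predicted trajectory (relaxed to aid convergence).
    gap = np.mean([reality(x, u) - model(x, u, alpha)
                   for x, u in zip(xs[:-1], us)])
    alpha += 0.5 * gap
print(f"converged modifier: {alpha:.4f}")
```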
Abstract:
View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual “homing” experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.
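As an illustration of what a minimal view-based model can look like (hypothetical, not the authors' model), the Python sketch below stores the bearings to a set of landmarks as a "snapshot" at the goal and then greedily moves in whichever direction most reduces the mismatch between the current view and that snapshot.

```python
import numpy as np

def view(pos, landmarks):
    # A simple visual cue: the bearing to each landmark from `pos`.
    d = landmarks - pos
    return np.arctan2(d[:, 1], d[:, 0])

def homing_step(pos, snapshot, landmarks, step=0.1):
    # Try candidate headings and keep the one that most reduces the
    # mismatch between the current view and the goal snapshot.
    best, best_err = pos, np.inf
    for theta in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False):
        cand = pos + step * np.array([np.cos(theta), np.sin(theta)])
        # Wrap angular differences into (-pi, pi] before comparing.
        diff = np.angle(np.exp(1j * (view(cand, landmarks) - snapshot)))
        err = np.sum(diff ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best

landmarks = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, -1.0]])
goal = np.array([0.0, 0.0])
snapshot = view(goal, landmarks)   # stored view at the goal
pos = np.array([1.0, 1.0])         # release point
for _ in range(50):
    pos = homing_step(pos, snapshot, landmarks)
print("end point:", pos)           # should land near the goal
```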
Abstract:
The structure of turbulence in the ocean surface layer is investigated using a simplified semi-analytical model based on rapid-distortion theory. In this model, which is linear with respect to the turbulence, the flow comprises a mean Eulerian shear current, the Stokes drift of an irrotational surface wave, which accounts for the irreversible effect of the waves on the turbulence, and the turbulence itself, whose time evolution is calculated. By analysing the equations of motion used in the model, which are linearised versions of the Craik–Leibovich equations containing a ‘vortex force’, it is found that a flow including mean shear and a Stokes drift is formally equivalent to a flow including mean shear and rotation. In particular, Craik and Leibovich’s condition for the linear instability of the first kind of flow is equivalent to Bradshaw’s condition for the linear instability of the second. However, the present study goes beyond linear stability analyses by considering flow disturbances of finite amplitude, which allows turbulence statistics to be calculated and cases of neutral linear stability to be addressed. Results from the model show that the turbulence displays a structure with a continuous variation of the anisotropy and elongation, ranging from streaky structures, for distortion by shear only, to streamwise vortices resembling Langmuir circulations, for distortion by Stokes drift only. The turbulent kinetic energy (TKE) grows faster for distortion by a shear and a Stokes drift gradient with the same sign (a situation relevant to wind waves), but the turbulence is more isotropic in that case (which is linearly unstable to Langmuir circulations).
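For reference, a standard form of the Craik–Leibovich momentum balance with the 'vortex force' reads (notation assumed here, not copied from the paper):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\nabla \pi + \mathbf{u}_s \times \boldsymbol{\omega} + \nu\,\nabla^{2}\mathbf{u},
\qquad \boldsymbol{\omega} = \nabla\times\mathbf{u},
```

where u_s is the Stokes drift of the surface wave and π is a modified pressure. The vortex force u_s × ω is the term whose linearisation makes a sheared flow with Stokes drift formally analogous to a sheared rotating flow; consistent with the abstract, instability to Langmuir circulations is expected when the Eulerian shear and the Stokes drift gradient have the same sign.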
Abstract:
Acrylamide is formed from reducing sugars and asparagine during the preparation of French fries. The commercial preparation of French fries is a multistage process involving the preparation of frozen, par-fried potato strips for distribution to catering outlets, where they are finish-fried. The initial blanching, treatment in glucose solution, and par-frying steps are crucial because they determine the levels of precursors present at the beginning of the finish-frying process. To minimize the quantities of acrylamide in cooked fries, it is important to understand the impact of each stage on the formation of acrylamide. Acrylamide, amino acids, sugars, moisture, fat, and color were monitored at time intervals during the frying of potato strips that had been dipped in various concentrations of glucose and fructose during a typical pretreatment. A mathematical model based on the fundamental chemical reaction pathways of the finish-frying was developed, incorporating moisture and temperature gradients in the fries. This showed the contribution of both glucose and fructose to the generation of acrylamide and accurately predicted the acrylamide content of the final fries.
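A drastically simplified kinetic sketch of this kind of model is given below, in Python, treating frying as a spatially uniform process with hypothetical rate constants and initial concentrations (the actual model in the paper also resolves moisture and temperature gradients within the fries).

```python
# Hypothetical reaction scheme (toy rate constants, toy units):
#   sugar + asparagine -> intermediate -> acrylamide -> degradation
k_form, k_acr, k_deg = 0.002, 0.05, 0.01   # per-second rate constants (assumed)
sugar, asn = 10.0, 5.0                     # mmol/kg (assumed initial levels)
inter, acr = 0.0, 0.0
dt, t_end = 0.1, 180.0                     # 3 min finish-fry, Euler time steps

t = 0.0
while t < t_end:
    r1 = k_form * sugar * asn              # Maillard-type condensation
    r2 = k_acr * inter                     # intermediate -> acrylamide
    r3 = k_deg * acr                       # acrylamide loss at frying heat
    sugar -= r1 * dt
    asn   -= r1 * dt
    inter += (r1 - r2) * dt
    acr   += (r2 - r3) * dt
    t += dt

print(f"acrylamide after {t_end:.0f} s: {acr:.3f} (toy units)")
```

The competition between formation (r2) and degradation (r3) is what makes the acrylamide content peak and then decline with frying time, which is why the stage-by-stage precursor levels matter.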
Abstract:
Aerosols affect the Earth's energy budget directly by scattering and absorbing radiation and indirectly by acting as cloud condensation nuclei and, thereby, affecting cloud properties. However, large uncertainties exist in current estimates of aerosol forcing because of incomplete knowledge concerning the distribution and the physical and chemical properties of aerosols as well as aerosol-cloud interactions. In recent years, a great deal of effort has gone into improving measurements and datasets. It is thus feasible to shift the estimates of aerosol forcing from largely model-based to increasingly measurement-based. Our goal is to assess current observational capabilities and identify uncertainties in the aerosol direct forcing through comparisons of different methods with independent sources of uncertainty. Here we assess the aerosol optical depth (τ), direct radiative effect (DRE) by natural and anthropogenic aerosols, and direct climate forcing (DCF) by anthropogenic aerosols, focusing on satellite and ground-based measurements supplemented by global chemical transport model (CTM) simulations. The multi-spectral MODIS measures global distributions of τ on a daily scale, with a high accuracy of ±0.03 ± 0.05τ over the ocean. The annual average τ is about 0.14 over the global ocean, of which about 21%±7% is contributed by human activities, as estimated by MODIS fine-mode fraction. The multi-angle MISR derives an annual average τ of 0.23 over global land with an uncertainty of ~20% or ±0.05. These high-accuracy aerosol products and broadband flux measurements from CERES make it feasible to obtain observational constraints for the aerosol direct effect, especially over the global ocean. A number of measurement-based approaches estimate the clear-sky DRE (on solar radiation) at the top-of-atmosphere (TOA) to be about -5.5±0.2 W m-2 (median ± standard error from various methods) over the global ocean. Accounting for thin cirrus contamination of the satellite-derived aerosol field will reduce the TOA DRE to -5.0 W m-2. Because of a lack of measurements of aerosol absorption and difficulty in characterizing land surface reflection, estimates of DRE over land and at the ocean surface are currently realized through a combination of satellite retrievals, surface measurements, and model simulations, and are less constrained. Over the oceans, the surface DRE is estimated to be -8.8±0.7 W m-2. Over land, an integration of satellite retrievals and model simulations derives a DRE of -4.9±0.7 W m-2 and -11.8±1.9 W m-2 at the TOA and surface, respectively. CTM simulations derive a wide range of DRE estimates that on average are smaller than the measurement-based DRE by about 30-40%, even after accounting for thin cirrus and cloud contamination. A number of issues remain. Current estimates of the aerosol direct effect over land are poorly constrained. Uncertainties of DRE estimates are also larger on regional scales than on a global scale, and large discrepancies exist between different approaches. The characterization of aerosol absorption and vertical distribution remains challenging. The aerosol direct effect in the thermal infrared range and in cloudy conditions remains relatively unexplored and quite uncertain, because of a lack of global systematic aerosol vertical profile measurements. A coordinated research strategy needs to be developed for integration and assimilation of satellite measurements into models to constrain model simulations.
Enhanced measurement capabilities in the next few years and high-level scientific cooperation will further advance our knowledge.
Abstract:
Purpose – Characteristics of leaders whose behaviour is visceral include taking action based on instinct rather than intellect and exhibiting coarse, base and often negative emotions. Despite the challenge of precisely defining the nature of visceral behaviour, the purpose of this paper is to provide insight into this less attractive side of boardroom life. Design/methodology/approach – Following a literature review of the research into the negative behaviour leaders exhibit, the paper highlights four forms of visceral behaviour based on focused and intimate qualitative case studies involving the experiences of those on the receiving end of that behaviour within a boardroom context. Findings – Based on interviews with an international sample of five chief executive officers (CEOs), plus three subordinates with substantial profit and loss responsibility, the study reveals a distinctly human experience from which no one is exempt. The idiosyncratic nature of visceral behaviour meant that each study participant's experience was unique. The authors conclude that leaders need to adopt specific measures in order to control and reduce these darker human tendencies. Research limitations/implications – The experiences of study participants are presented in four case studies, providing insight into their experiences whilst also protecting their identity. The study participants were drawn from a sample of companies operating globally within a single sector of the manufacturing industry. The concepts the authors present require validating in other organisations with different demographic profiles. Originality/value – The paper presents a model based on two dimensions – choice and level of mastery – that provides the reader with insight into the forms of visceral behaviour to which leaders succumb. This insight enables the authors to offer managers strategic suggestions to guard against visceral behaviour and to assist them in mitigating its worst aspects, both in those with whom they work and in themselves.
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems with covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, relying on a tensor decomposition that allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are often collected under the same or very similar conditions, so the data share some common latent components while each regression task retains its own independent parameters. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies the components of the parameters common to all the regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
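A minimal numerical sketch of the core idea, a 2-mode Tucker-structured regression coefficient y = ⟨X, W⟩ with W = U1 G U2ᵀ fitted by alternating least squares, is given below in Python. It is illustrative only: the dimensions and data are synthetic, and the paper's full model further links shared and task-specific components across multiple regression tasks and adds a sparsity-preserving regulariser.

```python
import numpy as np

rng = np.random.default_rng(0)
P, Q, R1, R2, N = 8, 6, 2, 2, 300          # tensor dims, Tucker ranks, samples

# Synthetic ground truth with Tucker structure: W = U1 G U2^T.
W_true = rng.normal(size=(P, R1)) @ rng.normal(size=(R1, R2)) \
         @ rng.normal(size=(Q, R2)).T
X = rng.normal(size=(N, P, Q))
y = np.einsum('npq,pq->n', X, W_true) + 0.01 * rng.normal(size=N)

U1, U2 = rng.normal(size=(P, R1)), rng.normal(size=(Q, R2))
G = rng.normal(size=(R1, R2))
for _ in range(30):
    # Solve for U1 with G, U2 fixed: <X_n, U1 (G U2^T)> is linear in U1.
    M = G @ U2.T
    A = np.einsum('npq,rq->npr', X, M).reshape(N, -1)
    U1 = np.linalg.lstsq(A, y, rcond=None)[0].reshape(P, R1)
    # Solve for U2 with U1, G fixed.
    M = U1 @ G
    A = np.einsum('npq,pr->nqr', X, M).reshape(N, -1)
    U2 = np.linalg.lstsq(A, y, rcond=None)[0].reshape(Q, R2)
    # Solve for the core G with U1, U2 fixed.
    A = np.einsum('pr,npq,qs->nrs', U1, X, U2).reshape(N, -1)
    G = np.linalg.lstsq(A, y, rcond=None)[0].reshape(R1, R2)

W = U1 @ G @ U2.T
print("relative error:",
      np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```

The Tucker structure shrinks the parameter count from P·Q entries of W to P·R1 + Q·R2 + R1·R2, which is the dimensionality reduction the abstract refers to.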
Abstract:
Empirical evidence regarding accrual-based earnings management around mergers and acquisitions has been setting-specific as far as target firms are concerned. This may be because target firms cannot always anticipate an acquisition proposal, and thus lack the motive and the time necessary to manage their earnings in order to facilitate or impede the deal. In this paper, we provide clear evidence of downward earnings management by a sample of target firms that have both the time and the motive to engage in such actions: firms that publicly announce their intention to be acquired. Publicly ‘seeking a buyer’ is a rather unusual corporate event, and we find that these firms engage in downward earnings management in the years surrounding the ‘announcement year’. To some extent, this result is explained by the overrepresentation of low performance and low growth among these firms, and it is open to alternative explanations. Furthermore, we show that such downward earnings management reduces the probability that a ‘seeking buyer’ firm secures an acquisition within a reasonable amount of time, a possible indication of efficient due diligence by prospective buyers, who prefer ‘seeking buyer’ firms whose earnings are not informationally obscure.
Abstract:
In this paper we deal with a Bayesian analysis for right-censored survival data suitable for populations with a cure rate. We consider a cure rate model based on the negative binomial distribution, encompassing as a special case the promotion time cure model. Bayesian analysis is based on Markov chain Monte Carlo (MCMC) methods. We also present some discussion on model selection and an illustration with a real dataset.
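For reference, one common parameterisation of the negative binomial cure rate model (notation assumed here, not copied from the paper) writes the population survival function in terms of the cdf F(t) of the latent cause-specific failure time:

```latex
S_{\mathrm{pop}}(t) = \bigl[\,1 + \eta\,\theta\,F(t)\,\bigr]^{-1/\eta},
\qquad
p_{0} = \lim_{t\to\infty} S_{\mathrm{pop}}(t) = (1 + \eta\,\theta)^{-1/\eta},
```

so the cure fraction p_0 is strictly positive, and letting η → 0 recovers S_pop(t) = exp{−θF(t)}, the promotion time cure model mentioned in the abstract.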
Abstract:
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data arise very frequently in practice. In this paper, we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family that includes other regression models broadly used in lifetime data analysis. For the interval-censored setting, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis for the parameters of the proposed model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Furthermore, various simulations are performed for different parameter settings, sample sizes and censoring percentages; in addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data.
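As a sketch of the two main ingredients (with notation assumed, not copied from the paper): the exponentiated Weibull cdf, and the likelihood contribution of an observation known only to fall in the interval (L_i, R_i],

```latex
F(t) = \Bigl[\,1 - \exp\!\bigl\{-(t/\sigma)^{\alpha}\bigr\}\Bigr]^{\theta}, \quad t > 0,
\qquad
L(\boldsymbol{\vartheta}) \propto \prod_{i=1}^{n}\bigl[F(R_i \mid \mathbf{x}_i) - F(L_i \mid \mathbf{x}_i)\bigr],
```

with right-censored observations entering as R_i = ∞. In the usual location-scale form, the covariates act through the location of Y = log T, i.e. Y = xᵀβ + σZ, with Z the standardised log-exponentiated Weibull error.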