421 results for initialisation flaws
Abstract:
Password Authentication Protocol (PAP) is widely used in the Wireless Fidelity Point-to-Point Protocol to authenticate an identity and password for a peer. This paper uses a new knowledge-based framework to verify the PAP protocol and a fixed version. Flaws are found in both the original and the fixed versions. A new enhanced protocol is provided and its security is proved. The whole process is implemented in a mechanical reasoning platform, Isabelle. It takes only a few seconds to find flaws in the original and fixed protocols and to verify that the enhanced version of the PAP protocol is secure.
Abstract:
Hidden Markov Models (HMMs) have been successfully applied to modelling and classification problems from many different areas in recent years. An important step in using HMMs is the initialisation of the model's parameters, as the subsequent learning of the HMM's parameters depends on these values. This initialisation should take into account knowledge about the problem being addressed, and also optimisation techniques to estimate the best initial parameters given a cost function and, consequently, the best log-likelihood. This paper proposes initialising Hidden Markov Model parameters using the optimisation algorithm Differential Evolution, with the aim of obtaining the best log-likelihood.
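The abstract gives no implementation details; the following is a minimal, hypothetical sketch of the idea — Differential Evolution (DE/rand/1/bin) searching for initial parameters of a small HMM by maximising the scaled-forward log-likelihood. The two-state binary-emission model, the softmax parameterisation, and all DE settings below are our own illustrative choices, not the paper's.

```python
import numpy as np

def softmax(x):
    """Map unconstrained reals to a probability vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def unpack(theta):
    """theta (10,) -> initial dist pi (2,), transitions A (2x2), emissions B (2x2)."""
    pi = softmax(theta[0:2])
    A = np.vstack([softmax(theta[2:4]), softmax(theta[4:6])])
    B = np.vstack([softmax(theta[6:8]), softmax(theta[8:10])])
    return pi, A, B

def log_likelihood(theta, obs):
    """Scaled forward algorithm for a two-state, binary-emission HMM."""
    pi, A, B = unpack(theta)
    alpha = pi * B[:, obs[0]]
    ll = 0.0
    for o in obs[1:]:
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[:, o]
    return ll + np.log(alpha.sum())

def de_initialise(obs, pop_size=15, gens=30, F=0.7, CR=0.9, seed=2):
    """DE/rand/1/bin search for HMM parameters maximising the log-likelihood."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, (pop_size, 10))
    fit = np.array([log_likelihood(p, obs) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = rng.random(10) < CR
            cross[rng.integers(10)] = True       # force at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f = log_likelihood(trial, obs)
            if f >= fit[i]:                      # greedy selection
                pop[i], fit[i] = trial, f
    best = int(np.argmax(fit))
    return pop[best], fit[best]

rng = np.random.default_rng(1)
obs = (rng.random(120) < 0.3).astype(int)        # synthetic binary sequence
best_theta, best_ll = de_initialise(obs)
```

The resulting `best_theta` would then serve as the starting point for a conventional Baum-Welch refinement.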
Abstract:
This paper examines aspects of the case against global oil peaking, and in particular sets out to answer the viewpoint that the world can have abundant supplies of oil "for years to come". Arguments supporting the latter view include: past forecasts of oil shortage have proved incorrect, so current predictions should also be discounted; many modellers depend on Hubbert's analysis, but this contained fundamental flaws; new oil supply will result from reserves growth and from the wider deployment of advanced extraction technology; and the world contains large resources of unconventional oil that can come on-stream if the production of conventional oil declines. These arguments are examined in turn and shown to be incorrect, or to need setting in a broader context. The paper therefore concludes that such arguments cannot be used to rule out calculations that the resource-limited peak in the world's production of conventional oil will occur in the near term. Moreover, the peaking of conventional oil is likely to impact the world's total availability of oil, where the latter includes non-conventional oil and oil substitutes. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
In this article, a simple and effective algorithm is introduced for the identification of a Wiener system from observational input/output data. The nonlinear static function in the Wiener system is modelled using a B-spline neural network. The Gauss-Newton algorithm is combined with the De Boor algorithm (for both the curve and its first-order derivatives) for parameter estimation of the Wiener model, together with a parameter initialisation scheme. Numerical examples demonstrate the efficacy of the proposed approach.
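No code accompanies the abstract; below is a simplified, hypothetical sketch of Wiener-system identification in the same spirit, but with a cubic polynomial standing in for the B-spline neural network and without the De Boor recursions. The two-tap linear block with its first coefficient fixed at one (to remove the gain ambiguity), the least-squares initialisation of the polynomial, and the joint Gauss-Newton refinement with step-halving are all our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Wiener system: a linear FIR stage followed by a static nonlinearity.
u = rng.uniform(-1.0, 1.0, 400)
v_true = u + 0.5 * np.concatenate(([0.0], u[:-1]))   # v_t = u_t + 0.5 u_{t-1}
y = 0.1 + v_true + 0.3 * v_true**3 + 0.01 * rng.normal(size=400)

def predict(theta, u):
    """theta = [b1, c0, c1, c2, c3]; b0 is fixed at 1 to fix the overall gain."""
    b1, c = theta[0], theta[1:]
    u_lag = np.concatenate(([0.0], u[:-1]))
    v = u + b1 * u_lag
    V = np.vander(v, 4, increasing=True)             # columns: 1, v, v^2, v^3
    return V @ c, v, u_lag

# Initialisation: assume b1 = 0 and fit the polynomial by linear least squares.
c0 = np.linalg.lstsq(np.vander(u, 4, increasing=True), y, rcond=None)[0]
theta = np.concatenate(([0.0], c0))

# Gauss-Newton refinement of all parameters jointly, with step-halving.
for _ in range(30):
    yhat, v, u_lag = predict(theta, u)
    r = y - yhat
    c = theta[1:]
    dyd_b1 = (c[1] + 2 * c[2] * v + 3 * c[3] * v**2) * u_lag  # chain rule
    J = np.column_stack([dyd_b1, np.vander(v, 4, increasing=True)])
    step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(5), J.T @ r)
    sse, lam = r @ r, 1.0
    while lam > 1e-4:                                # halve until no worse
        cand = theta + lam * step
        r_new = y - predict(cand, u)[0]
        if r_new @ r_new <= sse:
            theta = cand
            break
        lam *= 0.5

rmse = np.sqrt(np.mean((y - predict(theta, u)[0])**2))
```

Because the data here are generated from the same model class, the refinement should recover the lag coefficient (0.5) and drive the RMSE down to roughly the noise level.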
Abstract:
We develop a complex-valued (CV) B-spline neural network approach for efficient identification and inversion of CV Wiener systems. The CV nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks. With the aid of a least squares parameter initialisation, the Gauss-Newton algorithm effectively estimates the model parameters, which include the CV linear dynamic model coefficients and the B-spline neural network weights. The identification algorithm naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. An accurate inverse of the CV Wiener system is then obtained, in which the inverse of the CV nonlinear static function of the Wiener system is calculated efficiently using the Gauss-Newton algorithm based on the estimated B-spline neural network model, with the aid of the De Boor recursions. The effectiveness of our approach for identification and inversion of CV Wiener systems is demonstrated using the application of digital predistorter design for high power amplifiers with memory.
Abstract:
What impact do international state-building missions have on the domestic politics of the states they seek to build, and how can we measure this impact with confidence? This article seeks to address these questions and to challenge some existing approaches that often appear to assume that state-builders leave lasting legacies rather than demonstrating such influence with carefully chosen empirical evidence. Too often, domestic conditions that follow in the wake of international state-building are assumed to follow as a result of international intervention, usually due to insufficient attention to the causal processes that link international actions to domestic outcomes. The article calls for greater appreciation of the methodological challenges of establishing causal inferences regarding the legacies of state-building and identifies three qualitative methodological strategies—process tracing, counterfactual analysis, and the use of control cases—that can be used to improve confidence in causal claims about state-building legacies. The article concludes with a case study of international state-building in East Timor, highlighting several flaws of existing evaluations of the United Nations' role in East Timor and identifying the critical role that domestic actors play even in the context of authoritative international intervention.
Abstract:
This research explores views on inclusive design policy implementation and the learning strategies used in practice by Local Authorities' planning, building control and policy departments in England, and reports emerging research findings. The research aim was developed from an extensive literature review and informed by a pilot study with relevant Local Authority departments. The pilot study highlighted gaps in the process of policy implementation, a lack of awareness of the process, and flaws in the design guidance policy. This helped inform the development of a robust research design using both a survey and semi-structured interviews. The questionnaire targeted key employees within Local Authorities and was designed to establish how employees learn about inclusive design policy and to determine their views on the current approaches to inclusive design policy implementation adopted by their Local Authorities. The questionnaire produced 117 responses. Interestingly, approximately 9 of the 129 Local Authorities approached stated that they were unable to participate, either because an inclusive design policy had not been adopted or because they faced a high workload. An emerging finding is a lack of understanding of inclusive design problems, which may lead to problems with inclusive design policy implementation and thus adversely affect how the built environment is experienced. Survey respondents strongly indicated that they are most likely to learn about inclusive design from policy guides produced by their Local Authorities and from their colleagues.
Abstract:
Sea ice contains flaws including frictional contacts. We aim to describe quantitatively the mechanics of those contacts, providing local physics for geophysical models. With a focus on the internal friction of ice, we review standard micro-mechanical models of friction. The solid's deformation under normal load may be ductile or elastic. The shear failure of the contact may be by ductile flow, brittle fracture, or melting and hydrodynamic lubrication. Combinations of these give a total of six rheological models. When the material under study is ice, several of the rheological parameters in the standard models are not constant, but depend on the temperature of the bulk, on the normal stress under which samples are pressed together, or on the sliding velocity and acceleration. This has the effect of making the shear stress required for sliding dependent on sliding velocity, acceleration, and temperature. In some cases, it also perturbs the exponent in the normal-stress dependence of that shear stress away from the value that applies to most materials. We unify the models by a principle of maximum displacement for normal deformation, and of minimum stress for shear failure, reducing the controversy over the mechanism of internal friction in ice to the choice of values of four parameters in a single model. The four parameters represent, for a typical asperity contact, the sliding distance required to expel melt-water, the sliding distance required to break contact, the normal strain in the asperity, and the thickness of any ductile shear zone.
Abstract:
We present a methodology that allows a sea ice rheology, suitable for use in a General Circulation Model (GCM), to be determined from laboratory and tank experiments on sea ice when combined with a kinematic model of deformation. The laboratory experiments determine a material rheology for sea ice, and would investigate a nonlinear friction law of the form τ ∝ σ_n^(2/3), instead of the more familiar Amontons' law, τ = μσ_n (where τ is the shear stress, μ is the coefficient of friction and σ_n is the normal stress). The modelling approach considers a representative region R containing ice floes (or floe aggregates), separated by flaws. The deformation of R is imposed and the motion of the floes determined using a kinematic model, which will be motivated from SAR observations. Deformation of the flaws is inferred from the floe motion, and stress is determined from the material rheology. The stress over R is then determined from the area-weighted contributions from flaws and floes.
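As a concrete numerical illustration of the two friction laws and the area-weighting idea in the abstract above (the proportionality constant k in the nonlinear law, and all numbers used, are placeholders of our own, not measured values):

```python
def amontons(mu, sigma_n):
    """Amontons' law: shear stress proportional to normal stress, tau = mu * sigma_n."""
    return mu * sigma_n

def nonlinear_friction(k, sigma_n):
    """Nonlinear law tau = k * sigma_n^(2/3) quoted in the abstract; k is assumed."""
    return k * sigma_n ** (2.0 / 3.0)

def region_stress(flaw_stress, flaw_area, floe_stress, floe_area):
    """Area-weighted stress over a region R containing flaws and floes."""
    total = flaw_area + floe_area
    return (flaw_stress * flaw_area + floe_stress * floe_area) / total

tau_lin = amontons(0.5, 8.0)             # 0.5 * 8 = 4.0
tau_nl = nonlinear_friction(1.0, 8.0)    # 8^(2/3) = 4.0 (the laws cross here)
```

Note the qualitative difference: doubling σ_n doubles τ under Amontons' law, but increases it only by a factor of 2^(2/3) ≈ 1.59 under the nonlinear law.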
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
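A stripped-down sketch of the analysis update described above may help fix ideas: Newtonian relaxation nudges the concentration toward the observation, and the mean-thickness increment is then taken proportional to the concentration increment. The relaxation coefficient K and the proportionality constant alpha below are placeholder values of our own, not those tuned in the paper.

```python
import numpy as np

def analysis_update(c_b, H_b, c_obs, K=0.5, alpha=2.0):
    """One assimilation step for sea-ice concentration c and mean thickness H.

    c_b, H_b : background concentration (0..1) and mean thickness (m)
    c_obs    : observed concentration
    K        : Newtonian relaxation coefficient (assumed value)
    alpha    : proportionality constant, metres of mean thickness per unit
               concentration increment (assumed value)
    """
    dc = K * (c_obs - c_b)                  # concentration analysis increment
    c_a = float(np.clip(c_b + dc, 0.0, 1.0))
    H_a = max(H_b + alpha * dc, 0.0)        # proportional thickness increment
    return c_a, H_a

# Background underestimates ice cover; the update thickens the ice as it
# increases concentration, rather than conserving mean or actual thickness.
c_a, H_a = analysis_update(0.8, 1.6, 1.0)
```

The contrast with the two rejected schemes is that conserving mean thickness would leave H unchanged (dH = 0), while conserving actual thickness would scale H with c; the proportional update sits between these and, per the abstract, best matches the background error covariance structure.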
Abstract:
Urban land surface models (LSMs) are commonly evaluated for short periods (a few weeks to months) because of limited observational data. This makes it difficult to distinguish the impact of initial conditions on model performance or to consider the response of a model to a range of possible atmospheric conditions. Drawing on results from the first urban LSM comparison, these two issues are considered. Assessment shows that the initial soil moisture has a substantial impact on performance. Models initialised with soils that are too dry are not able to adjust their surface sensible and latent heat fluxes to realistic values until there is sufficient rainfall. Models initialised with soils that are too wet are not able to restrict their evaporation appropriately for periods in excess of a year. This has implications for short-term evaluation studies and implies the need for soil moisture measurements to improve data assimilation and model initialisation. In contrast, initial conditions influencing the thermal storage have a much shorter adjustment timescale than soil moisture. Most models partition too much of the radiative energy at the surface into the sensible heat flux, at the probable expense of the net storage heat flux.
Abstract:
In the 1960s North Atlantic sea surface temperatures (SST) cooled rapidly. The magnitude of the cooling was largest in the North Atlantic subpolar gyre (SPG), and was coincident with a rapid freshening of the SPG. Here we analyze hindcasts of the 1960s North Atlantic cooling made with the UK Met Office's decadal prediction system (DePreSys), which is initialised using observations. It is shown that DePreSys captures—with a lead time of several years—the observed cooling and freshening of the North Atlantic SPG. DePreSys also captures changes in SST over the wider North Atlantic and surface climate impacts over the wider region, such as changes in atmospheric circulation in winter and sea ice extent. We show that initialisation of an anomalously weak Atlantic Meridional Overturning Circulation (AMOC), and hence weak northward heat transport, is crucial for DePreSys to predict the magnitude of the observed cooling. Such an anomalously weak AMOC is not captured when ocean observations are not assimilated (i.e. it is not a forced response in this model). The freshening of the SPG is also dominated by ocean salt transport changes in DePreSys; in particular, the simulation of advective freshwater anomalies analogous to the Great Salinity Anomaly was key. Therefore, DePreSys suggests that ocean dynamics played an important role in the cooling of the North Atlantic in the 1960s, and that this event was predictable.
Abstract:
Purpose – This paper aims to highlight differences in women's experiences of advancement to partnership in accountancy firms in Germany and the UK and to consider the ways in which such differences may be constituted by the institutional context in which they occurred. Design/methodology/approach – This research is based on 60 semi-structured interviews with women partners in Germany and the UK. Techniques adopted from grounded theory were applied. Research limitations/implications – This qualitative research is context-specific and, given its cross-national, interdisciplinary nature, is limited to the extent that findings cannot be generalised beyond the studied scope. Practical implications – The study points to cross-national differences in women's career advancement in accountancy firms. The findings support extant research suggesting that structured performance evaluation and hiring systems – while not without flaws – are likely more gender-neutral. In addition, the study highlights the potential of headhunters and recruitment agents as an important tool for women to navigate their way out of career culs-de-sac. Originality/value – This research provides unique insights into women partners' experiences of career advancement and, through its interdisciplinary nature, demonstrates the usefulness of employing institutional frameworks in qualitative in-depth studies of this kind.
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example, precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long timeseries to investigate trends in precipitation, or to look at the small-scale dynamical processes for specific case studies. This study investigates some of the problems associated with dynamical downscaling and looks at the optimum configuration to obtain the distribution and intensity of a precipitation field that matches observations. This study uses the Met Office Unified Model run in limited area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initialised at varying times before the peak precipitation is observed, to test the importance of the initialisation and boundary conditions and how long the simulation can be run. The results are compared with rain-gauge data for verification and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with several empirical models. Both the ENSEMBLES models and a “dynamic climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models becomes a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model–model intercomparisons.
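The abstract does not define its empirical benchmarks precisely; a toy version of the contrast between a static and a "dynamic" climatology forecast might look as follows, where the window length and the synthetic record are arbitrary choices of ours:

```python
def static_climatology(history):
    """Forecast next year's value as the mean of the full record."""
    return sum(history) / len(history)

def dynamic_climatology(history, window=10):
    """Forecast as the mean of only the most recent `window` years."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Synthetic warming record: under a steady trend, the dynamic climatology
# tracks the recent state and beats the static climatology as a benchmark.
record = [0.1 * year for year in range(30)]   # temperature anomaly, degrees
truth_next = 0.1 * 30
err_static = abs(truth_next - static_climatology(record))
err_dynamic = abs(truth_next - dynamic_climatology(record))
```

The point made in the abstract is that a simulation model only demonstrably adds value where its forecast error beats such cheap empirical baselines, which is why their routine inclusion in forecast evaluations is proposed.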