52 results for Sub-registry. Empirical Bayesian estimator. General equation. Balancing adjustment factor
Abstract:
Previous research on corporate social responsibility mainly focuses on its nature and impact on business performance. This paper reports on a study that contributes to our understanding of the determinants of corporate social responsibility by focusing specifically on the role played by three strategically important variables, namely government regulation, ownership structure and market orientation. Results of a survey of 586 general managers of hotels in China suggest that market orientation is the most significant predictor of corporate social responsibility, followed by government regulation. In contrast, ownership structure is found to have little effect. The implications of the findings for managers in China are discussed.
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (three-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as observation density or the length of the time window increases is also provided.
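As a small, concrete illustration of the simplest benchmark mentioned above, the sketch below simulates an Ornstein-Uhlenbeck process and recovers its drift parameter from the analytically available transition density; it is not the variational algorithm of the thesis, and the parameter names (theta, sigma, dt) and values are arbitrary.

```python
import numpy as np

# Exact log-likelihood of an OU process dX = -theta*X dt + sigma dW observed on a
# regular grid: X_{t+dt} | X_t ~ N(X_t * exp(-theta*dt), sigma^2/(2*theta) * (1 - exp(-2*theta*dt))).
def ou_exact_loglik(x, theta, sigma, dt):
    mean = x[:-1] * np.exp(-theta * dt)
    var = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Simulate a path with Euler-Maruyama, then recover theta by a grid search.
rng = np.random.default_rng(0)
theta_true, sigma, dt, n = 2.0, 0.5, 0.01, 20000
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i-1] - theta_true * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

thetas = np.linspace(0.5, 4.0, 71)
theta_hat = thetas[np.argmax([ou_exact_loglik(x, th, sigma, dt) for th in thetas])]
print(f"true theta = {theta_true}, maximum-likelihood estimate ~ {theta_hat:.2f}")
```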
Abstract:
OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and results of four different methods of calculation of risk were compared with each other and with a reference standard that had been calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation to detect patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33-0.65 for practice nurses and 0.33-0.65 for general practitioners, depending on calculation tool), showing a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47-0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%), and they also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data about risk factors need to be collected systematically, to allow the use of the most appropriate calculation tools.
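For readers unfamiliar with the agreement statistic quoted above, the following sketch computes Cohen's kappa for two raters' risk categories; the labels and ratings are invented for illustration and are unrelated to the study data.

```python
import numpy as np

# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
def cohens_kappa(a, b, categories):
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                                              # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical risk categories assigned to the same eight patients by a GP and a nurse.
gp    = ["high", "low", "high", "moderate", "low", "high", "low", "moderate"]
nurse = ["high", "low", "moderate", "moderate", "low", "high", "high", "moderate"]
print(round(cohens_kappa(gp, nurse, ["low", "moderate", "high"]), 2))
```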
Abstract:
Regression problems are concerned with predicting the values of one or more continuous quantities, given the values of a number of input variables. For virtually every application of regression, however, it is also important to have an indication of the uncertainty in the predictions. Such uncertainties are expressed in terms of error bars, which specify the standard deviation of the distribution of predictions about the mean. Accurate estimation of error bars is of practical importance, especially when safety and reliability are at issue. The Bayesian view of regression leads naturally to two contributions to the error bars. The first arises from the intrinsic noise on the target data, while the second comes from the uncertainty in the values of the model parameters, which manifests itself in the finite width of the posterior distribution over the space of these parameters. The Hessian matrix, which involves the second derivatives of the error function with respect to the weights, is needed for implementing the Bayesian formalism in general and estimating the error bars in particular. A study of different methods for evaluating this matrix is given, with special emphasis on the outer product approximation method. The contribution of the uncertainty in model parameters to the error bars is a finite data size effect, which becomes negligible as the number of data points in the training set increases. A study of this contribution is given in relation to the distribution of data in input space. It is shown that the addition of data points to the training set can only reduce the local magnitude of the error bars or leave it unchanged. Using the asymptotic limit of an infinite data set, it is shown that the error bars have an approximate relation to the density of data in input space.
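A minimal sketch of the two error-bar contributions described above, using a linear-in-parameters model for which the Hessian of the regularised error takes its outer product form exactly; the basis functions, hyperparameters alpha and beta, and data below are illustrative choices, not those of the thesis.

```python
import numpy as np

# Simple polynomial basis phi(x) = [1, x, x^2] for a linear-in-parameters model y = w^T phi(x).
def basis(x):
    return np.vstack([np.ones_like(x), x, x**2]).T

rng = np.random.default_rng(1)
x_train = rng.uniform(-1, 1, 30)
t_train = np.sin(3 * x_train) + 0.1 * rng.standard_normal(30)

alpha, beta = 1e-2, 1.0 / 0.1**2                              # prior and noise precision (assumed)
Phi = basis(x_train)
A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi          # Hessian of the regularised error
w = beta * np.linalg.solve(A, Phi.T @ t_train)                 # posterior mean weights

x_new = np.array([0.0, 0.9, 2.0])
g = basis(x_new)
y_pred = g @ w
# Error bar^2 = intrinsic noise (1/beta) + parameter uncertainty (g^T A^{-1} g).
var = 1.0 / beta + np.einsum("ij,jk,ik->i", g, np.linalg.inv(A), g)
print(np.c_[y_pred, np.sqrt(var)])   # bars grow away from the training data
```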
Abstract:
The ERS-1 Satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscatter microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snap-shots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish if the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method to model multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes the maximum a posteriori (MAP) wind field retrieved from the posterior distribution as the prediction. For the third technique, Markov Chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution, and make predictions based on the mode with the greatest mass associated with it. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem. It was shown that the general methods were unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73 wind fields.
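The sketch below shows the standard mixture density network parameterisation of a conditional density (softmax mixing coefficients, Gaussian means, exponential widths), evaluated for a bi-modal case reminiscent of the wind-direction ambiguity; the raw output values are made up and this is not the hybrid model developed in the thesis.

```python
import numpy as np

# Evaluate p(t | x) for an MDN whose raw network outputs z have been split into
# mixing-coefficient logits, component means, and log standard deviations.
def mdn_density(z_pi, z_mu, z_sigma, t):
    pi = np.exp(z_pi - z_pi.max()); pi /= pi.sum()            # softmax mixing coefficients
    sigma = np.exp(z_sigma)                                    # positive component widths
    comp = np.exp(-0.5 * ((t - z_mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return float(pi @ comp)

# Two roughly equally weighted components ~180 degrees apart, mimicking the
# toward/away ambiguity in wind direction (illustrative numbers only).
print(mdn_density(np.array([0.0, 0.1]),
                  np.array([30.0, 210.0]),
                  np.array([np.log(10.0), np.log(10.0)]),
                  t=35.0))
```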
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms is developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results suggest that simple dynamic policies scale well but lack the load stability of the more complex global average algorithms.
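To make the idea of a dynamic policy concrete, here is a toy sketch of a single load-balancing decision of the kind such simulations exercise: a node migrates one job to its least-loaded regional neighbour when the imbalance exceeds a threshold. The topology, threshold and workloads are invented, and network delay is ignored.

```python
import random

# One dynamic load-balancing decision: compare this node's queue length with its
# regional neighbours and migrate a single job if the imbalance exceeds a threshold.
def balance_step(loads, region, node, threshold=2):
    neighbour = min(region, key=lambda n: loads[n])        # least-loaded peer in the region
    if loads[node] - loads[neighbour] > threshold:
        loads[node] -= 1
        loads[neighbour] += 1                              # migrate one job
        return neighbour
    return None

random.seed(0)
loads = {n: random.randint(0, 10) for n in range(8)}       # queue length per node (invented)
print("before:", loads)
print("migrated to:", balance_step(loads, region=[1, 3, 4], node=2))
print("after: ", loads)
```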
Abstract:
On the basis of a review of the substantive quality and service marketing literature, current knowledge regarding service quality expectations was found to be either absent or deficient. The phenomenon is of increasing importance to both marketing researchers and management and was therefore judged worthy of scholarly consideration. Because the service quality literature was insufficiently rich when embarking on the thesis, three basic research issues were considered, namely the nature, determinants, and dynamics of service quality expectations. These issues were first conceptually and then qualitatively explored. This process generated research hypotheses, mainly relating to a model, which were subsequently tested through a series of empirical investigations using questionnaire data from field studies in a single context. The results were internally consistent and strongly supported the main research hypotheses. It was found that service quality expectations can be meaningfully described in terms of generic/service-specific, intangible/tangible, and process/outcome categories. Service-specific quality expectations were also shown to be determined by generic service quality expectations, demographic variables, personal values, psychological needs, general service sophistication, service-specific sophistication, purchase motives, and service-specific information when treating service class involvement as an exogenous variable. Subjects who had previously not directly experienced a particular service were additionally found to revise their expectations of quality when exposed to the service, with the change being driven by a sub-set of the identified determinants.
Abstract:
This thesis is concerned with exact solutions of Einstein's field equations of general relativity, in particular, when the source of the gravitational field is a perfect fluid with a purely electric Weyl tensor. General relativity, cosmology and computer algebra are discussed briefly. A mathematical introduction to Riemannian geometry and the tetrad formalism is then given. This is followed by a review of some previous results and known solutions concerning purely electric perfect fluids. In addition, some orthonormal and null tetrad equations of the Ricci and Bianchi identities are displayed in a form suitable for investigating these space-times. Conformally flat perfect fluids are characterised by the vanishing of the Weyl tensor and form a sub-class of the purely electric fields in which all solutions are known (Stephani 1967). The number of Killing vectors in these space-times is investigated and results presented for the non-expanding space-times. The existence of stationary fields that may also admit 0, 1 or 3 spacelike Killing vectors is demonstrated. Shear-free fluids in the class under consideration are shown to be either non-expanding or irrotational (Collins 1984) using both orthonormal and null tetrads. A discrepancy between Collins (1984) and Wolf (1986) is resolved by explicitly solving the field equations to prove that the only purely electric, shear-free, geodesic but rotating perfect fluid is the Gödel (1949) solution. The irrotational fluids with shear are then studied and solutions due to Szafron (1977) and Allnutt (1982) are characterised. The metric is simplified in several cases where new solutions may be found. The geodesic space-times in this class and all Bianchi type 1 perfect fluid metrics are shown to have a metric expressible in a diagonal form. The position of spherically symmetric and Bianchi type 1 space-times in relation to the general case is also illustrated.
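For context, the "purely electric" condition refers to the standard splitting of the Weyl tensor relative to the fluid four-velocity u^a; in one common convention (a textbook definition, not a result of this thesis) it reads:

```latex
E_{ab} = C_{acbd}\,u^{c}u^{d}, \qquad
H_{ab} = \tfrac{1}{2}\,\varepsilon_{ac}{}^{ef}\,C_{efbd}\,u^{c}u^{d},
```

with a purely electric field meaning H_{ab} = 0 everywhere, and conformal flatness (the Stephani 1967 sub-class mentioned above) meaning that E_{ab} and H_{ab}, and hence the whole Weyl tensor, vanish.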
The compressive creep and load relaxation properties of a series of high aluminium zinc-based alloys
Abstract:
A new family of commercial zinc alloys designated as ZA8, ZA12, and ZA27, high damping capacity alloys including Cosmal and Supercosmal, and aluminium alloy LM25 were investigated for compressive creep and load relaxation behaviour under a series of temperatures and stresses. A compressive creep machine was designed to test the sand cast hollow cylindrical test specimens of these alloys. For each compressive creep experiment the variation of creep strain was presented in the form of graphs plotted as percentage of creep strain (%) versus time in seconds (s). In all cases, the curves showed the same general form of the creep curve, i.e. a primary creep stage, followed by a linear steady-state region (secondary creep). In general, it was observed that alloy ZA8 had the least primary creep among the commercial zinc-based alloys and ZA27 the greatest. The extent of primary creep increased with aluminium content up to that of ZA27 and then declined towards Supercosmal. The overall creep strength of ZA27 was generally less than that of ZA8 and ZA12, but it showed better creep strength than ZA8 and ZA12 at high temperature and high stress. Among the high damping capacity alloys, Supercosmal had less primary creep and longer secondary creep regions and also had the lowest minimum creep rate among all the tested alloys. LM25 exhibited almost no creep at the maximum temperature and stress used in this work. Total creep elongation was shown to be well correlated using an empirical equation. Stress exponents and activation energies were calculated and found to be consistent with the creep mechanism of dislocation climb. The primary α and β phases in the as-cast structures decomposed to lamellar phases on cooling, with some particulates at dendrite edges and grain boundaries. Further breakdown into particulate bodies occurred during creep testing, and zinc bands developed at the highest test temperature of 160°C. The results of load relaxation testing showed that initially load loss proceeded rapidly and then diminished gradually with time. Load loss increased with temperature and almost all the curves approximated to a logarithmic decay of preload with time. ZA alloys exhibited almost the same load loss at lower temperatures, but at 120°C ZA27 improved its relative performance with the passage of time. High damping capacity alloys and LM25 had much better resistance to load loss than the ZA alloys, with LM25 performing best of all against load loss. A preliminary equation was derived to correlate the retained load with time and temperature.
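The stress exponent and activation energy mentioned above are conventionally extracted from minimum creep rates via the power-law relation rate = A·σⁿ·exp(−Q/RT); the sketch below fits both by linear least squares. The data points are invented for illustration and do not reproduce the thesis measurements.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
sigma = np.array([20, 30, 40, 20, 30, 40], dtype=float)        # applied stress, MPa (invented)
T     = np.array([353, 353, 353, 393, 393, 393], dtype=float)  # temperature, K (invented)
rate  = np.array([2e-9, 1.5e-8, 6e-8, 4e-8, 3e-7, 1.2e-6])     # minimum creep rate, 1/s (invented)

# log(rate) = log(A) + n*log(sigma) - Q/(R*T): a linear model in [1, log(sigma), -1/(R*T)].
X = np.column_stack([np.ones_like(sigma), np.log(sigma), -1.0 / (R * T)])
coef, *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)
logA, n, Q = coef
print(f"stress exponent n ~ {n:.1f}, activation energy Q ~ {Q/1000:.0f} kJ/mol")
```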
Abstract:
Reported in this thesis are test results of 37 eccentrically prestressed beams with stirrups. Single-variable parameters were investigated, including the prestressing force, the prestressing steel area, the concrete strength, the aspect ratio h/b, and the stirrup size and spacing. The interaction of bending, torsion and shear was also investigated by testing a series of beams subjected to varying bending/torsional moment ratios. For the torsional strength an empirical expression of linear form is proposed, which can be rearranged in a non-dimensional interaction form: T/To + V/Vo + M/Mo + Ps/Po + Fs/Fo = Pc2/Fsp. This formula, which is based on an average experimental steel stress lower than the yield point, is compared with 243 prestressed beams containing stirrups, including the author's test beams, and good agreement is obtained. For the theoretical analysis of the problem of torsion combined with bending and shear in concrete beams with stirrups, the method of torque-friction is proposed and developed using an average steel stress. A general linear interaction equation for combined torsion with bending and/or shear is proposed in the following form: φ·T/Tu = 1, where φ is a combined loading factor that modifies the pure ultimate strength for differing cases of torsion with bending and/or shear. From the analysis of 282 reinforced and prestressed concrete beams containing stirrups, including the present investigation, good agreement is obtained between the method and the test results. It is concluded that the proposed method provides a rational and simple basis for predicting the ultimate torsional strength and may also be developed for design purposes.
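Read literally, the general interaction equation above gives the combined-loading torsional capacity directly once the combined loading factor is known; the tiny sketch below simply evaluates that rearrangement with invented numbers, since the detailed form of the factor for each bending/shear case is not reproduced here.

```python
# From phi * T / Tu = 1, the predicted torsional capacity under combined loading is T = Tu / phi.
# Both the pure ultimate torsional strength and the combined loading factor are invented values.
def torsional_capacity(T_u_pure, phi):
    return T_u_pure / phi

print(torsional_capacity(T_u_pure=25.0, phi=1.4))   # kN*m, illustrative only
```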
Abstract:
This research investigates the contribution that Geographic Information Systems (GIS) can make to the land suitability process used to determine the effects of a climate change scenario. The research is intended to redress the severe under-representation of developing countries within the literature examining the impacts of climatic change upon crop productivity. The methodology adopts some of the Intergovernmental Panel on Climate Change (IPCC) estimates for regional climate variations, based upon General Circulation Model (GCM) predictions, and applies them to a baseline climate for Bangladesh. Utilising the United Nations Food & Agricultural Organisation's Agro-ecological Zones land suitability methodology and crop yield model, the effects of the scenario upon the agricultural productivity of 14 crops are determined. A Geographic Information System (IDRISI) is adopted in order to facilitate the methodology, in conjunction with a specially designed spreadsheet used to determine the yield and suitability rating for each crop. A simple optimisation routine using the GIS is incorporated to provide an indication of the 'maximum theoretical' yield available to the country, should the most calorifically significant crops be cultivated on each land unit, both before and after the climate change scenario. This routine provides an estimate of the theoretical population supporting capacity of the country, both now and in the future, to assist with planning strategies and research. The research evaluates the utility of this alternative GIS-based methodology for the land evaluation process and determines the relative changes in crop yields that may result from changes in temperature, photosynthesis and flooding hazard frequency. In summary, the combination of a GIS and a spreadsheet was successful; the yield prediction model indicates that the application of the climate change scenario will have a deleterious effect upon the yields of the study crops. Any yield reductions will have severe implications for agricultural practices. The optimisation routine suggests that the 'theoretical maximum' population supporting capacity is well in excess of current and future population figures. If this agricultural potential could be realised, however, it might provide some amelioration of the effects of climate change.
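The 'maximum theoretical' yield routine described above can be illustrated with a toy calculation: for each land unit pick the crop that maximises calories per hectare, sum over units, and convert to a population supporting capacity. The crop names, yields, calorie factors, areas and the 2100 kcal/day requirement below are invented placeholders, not the AEZ values used in the research.

```python
# Toy version of the optimisation routine: choose the most calorific crop per land unit.
yields_t_per_ha = {                      # land unit -> crop -> yield (t/ha), invented
    "unit_1": {"rice": 3.2, "wheat": 2.1, "potato": 9.0},
    "unit_2": {"rice": 2.4, "wheat": 2.8, "potato": 7.5},
}
kcal_per_tonne = {"rice": 3.6e6, "wheat": 3.3e6, "potato": 0.8e6}   # invented calorie factors
area_ha = {"unit_1": 120_000, "unit_2": 80_000}                     # invented unit areas

total_kcal = 0.0
for unit, crops in yields_t_per_ha.items():
    best = max(crops, key=lambda c: crops[c] * kcal_per_tonne[c])   # most calorific crop
    total_kcal += crops[best] * kcal_per_tonne[best] * area_ha[unit]
    print(unit, "->", best)

persons = total_kcal / (2100 * 365)      # assumed 2100 kcal/person/day requirement
print(f"theoretical population supported: {persons:,.0f}")
```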
Abstract:
The thesis raises the question of whether or not, in an age of internationalisation and globalisation, the cultural differences which exist between Germany and Ireland are still relevant to German-Irish corporate relationships, or whether internationally accepted best practices have removed culture from the equation. The first three chapters establish the theoretical framework of the thesis by outlining the broadly culturalist/institutionalist approach to be pursued, based on the work of Hofstede and Maurice et al., profiling the business cultures of both countries by analysing the components of their respective national institutional frameworks, and examining existing approaches to the study of mother company-foreign subsidiary relationships. Chapters four to seven constitute the empirical section of the thesis. Using the interviews carried out with two sample groups (Sample Group A: 15 German mother companies and 14 of their Irish operations; Sample Group B: 7 Irish mother companies and 9 of their German operations), the mother companies in both groups are examined to see whether or not they demonstrate characteristics which are in keeping with their national business cultures. Their foreign operations are then analysed, as is the mother company-foreign subsidiary relationship, to determine whether or not any mother company influences are visible. The general approaches adopted by the two groups of mother companies to their foreign operations are compared and contrasted. Finally, differences in national attitudes and values are identified and their impact assessed. The analysis reveals that despite existing pressures towards convergence, the cultural differences between both countries are still relevant to the relationship, particularly at the level of attitudes and values, and although similarities in the mother company approaches to their subsidiaries are present, national specificities may nevertheless be detected.
Abstract:
This thesis addresses data assimilation, which typically refers to the estimation of the state of a physical system given a model and observations, and its application to short-term precipitation forecasting. A general introduction to data assimilation is given, from both a deterministic and a stochastic point of view. Data assimilation algorithms are reviewed, first in the static case (when no dynamics are involved), then in the dynamic case. A double experiment on two non-linear models, the Lorenz 63 and the Lorenz 96 models, is run and the comparative performance of the methods is discussed in terms of quality of the assimilation, robustness in the non-linear regime and computational time. Following the general review and analysis, data assimilation is discussed in the particular context of very short-term rainfall forecasting (nowcasting) using radar images. An extended Bayesian precipitation nowcasting model is introduced. The model is stochastic in nature and relies on the spatial decomposition of the rainfall field into rain "cells". Radar observations are assimilated using a variational Bayesian method in which the true posterior distribution of the parameters is approximated by a more tractable distribution. The motion of the cells is captured by a 2D Gaussian process. The model is tested on two precipitation events, the first dominated by convective showers, the second by precipitation fronts. Several deterministic and probabilistic validation methods are applied and the model is shown to retain reasonable prediction skill at up to 3 hours lead time. Extensions to the model are discussed.
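As a pocket-sized illustration of one of the reviewed methods, the sketch below runs a single stochastic ensemble Kalman filter analysis step on the Lorenz 63 model (one of the two test systems above). The integrator, ensemble size, observation operator and error levels are arbitrary choices, and this is not the thesis's nowcasting scheme.

```python
import numpy as np

# Forward-Euler step of the Lorenz 63 system with its standard parameters.
def lorenz63_step(x, dt=0.01, s=10.0, r=28.0, b=8.0/3.0):
    dx = np.array([s*(x[1]-x[0]), x[0]*(r-x[2])-x[1], x[0]*x[1]-b*x[2]])
    return x + dt * dx

rng = np.random.default_rng(2)
ens = rng.normal(0, 1, size=(50, 3)) + np.array([1.0, 1.0, 20.0])   # 50-member ensemble
for _ in range(200):                                                # forecast phase
    ens = np.array([lorenz63_step(m) for m in ens])

H = np.array([[1.0, 0.0, 0.0]])                                     # observe the x-component only
R = np.array([[1.0]])                                               # observation error variance
y = np.array([8.0])                                                 # synthetic observation

X = ens - ens.mean(axis=0)                                          # ensemble anomalies
P = X.T @ X / (len(ens) - 1)                                        # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)                        # Kalman gain
perturbed = y + rng.normal(0, np.sqrt(R[0, 0]), size=(len(ens), 1)) # perturbed observations
ens = ens + (perturbed - ens @ H.T) @ K.T                           # analysis update
print("analysis mean:", ens.mean(axis=0))
```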
Abstract:
We consider the random input problem for a nonlinear system modeled by the integrable one-dimensional self-focusing nonlinear Schrödinger equation (NLSE). We concentrate on the properties obtained from the direct scattering problem associated with the NLSE. We discuss some general issues regarding soliton creation from random input. We also study the averaged spectral density of random quasilinear waves generated in the NLSE channel for two models of the disordered input field profile. The first model is symmetric complex Gaussian white noise and the second one is a real dichotomous (telegraph) process. For the former model, a closed-form expression for the averaged spectral density is obtained, while for the dichotomous real input we present a small-noise perturbative expansion for the same quantity. In the case of the dichotomous input, we also obtain the distribution of the minimal pulse width required for soliton generation. The obtained results can be applied to a multitude of problems including random nonlinear Fraunhofer diffraction, transmission properties of randomly apodized long-period fiber Bragg gratings, and the propagation of incoherent pulses in optical fibers.
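As a hedged illustration of the soliton-creation threshold discussed above: for a real, single-signed input the Zakharov-Shabat scattering problem is commonly stated to support a discrete eigenvalue (a soliton) once the pulse area ∫|q| dt exceeds π/2, which for a rectangular pulse of amplitude A gives a minimal width of π/(2A). The snippet below simply evaluates that rule of thumb; it is not the distribution derived in the paper.

```python
import numpy as np

# Minimal rectangular-pulse width for soliton generation under the area criterion
# A * L > pi/2 (a commonly quoted threshold, assumed here for illustration).
def minimal_soliton_width(amplitude):
    return np.pi / (2.0 * amplitude)

for A in (0.5, 1.0, 2.0):
    print(f"amplitude A = {A}: minimal width L ~ {minimal_soliton_width(A):.3f}")
```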
Abstract:
Control design for stochastic uncertain nonlinear systems is traditionally based on minimizing the expected value of a suitably chosen loss function. Moreover, most control methods usually assume the certainty equivalence principle to simplify the problem and make it computationally tractable. We offer an improved probabilistic framework which is not constrained by these previous assumptions and which provides a more natural setting for incorporating and dealing with uncertainty. The focus of this paper is on developing this framework to obtain an optimal control law strategy using a fully probabilistic approach for information extraction from process data, which does not require detailed knowledge of the system dynamics. Moreover, the proposed framework allows the problem of input-dependent noise to be handled. A basic paradigm is proposed and the resulting algorithm is discussed. The proposed probabilistic control method applies to the general class of nonlinear discrete-time systems and is demonstrated theoretically on the affine class. A nonlinear simulation example is also provided to validate the theoretical development.
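A toy sketch of why input-dependent noise matters for the certainty equivalence assumption criticised above: for a one-step scalar system x' = a·x + b·u + (c + d·u)·w with w ~ N(0, 1), the expected quadratic loss contains a noise term that depends on u, so the minimiser differs from the certainty-equivalent choice. All numbers are invented, and this is not the paper's fully probabilistic algorithm.

```python
import numpy as np

# One-step scalar example with input-dependent noise standard deviation (c + d*u).
a, b, c, d, r, x = 0.9, 1.0, 0.2, 0.5, 0.1, 2.0   # invented system and cost parameters

def expected_loss(u):
    mean_sq = (a * x + b * u) ** 2      # contribution of the deterministic part
    noise_var = (c + d * u) ** 2        # input-dependent noise contribution E[(noise)^2]
    return mean_sq + noise_var + r * u ** 2

u_grid = np.linspace(-5, 5, 2001)
u_opt = u_grid[np.argmin(expected_loss(u_grid))]                          # true minimiser
u_ce  = u_grid[np.argmin((a * x + b * u_grid) ** 2 + r * u_grid ** 2)]    # ignores the noise term
print(f"optimal u = {u_opt:.2f}, certainty-equivalent u = {u_ce:.2f}")
```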