888 results for Two-stage stochastic model
Abstract:
This work introduces a new variational Bayes data assimilation method for the stochastic estimation of precipitation dynamics using radar observations for short-term probabilistic forecasting (nowcasting). A previously developed spatial rainfall model based on the decomposition of the observed precipitation field using a basis function expansion captures the precipitation intensity from radar images as a set of ‘rain cells’. The prior distributions for the basis function parameters are carefully chosen to have a conjugate structure for the precipitation field model to allow a novel variational Bayes method to be applied to estimate the posterior distributions in closed form, based on solving an optimisation problem, in a spirit similar to 3D VAR analysis, but seeking approximations to the posterior distribution rather than simply the most probable state. A hierarchical Kalman filter is used to estimate the advection field based on the assimilated precipitation fields at two times. The model is applied to tracking precipitation dynamics in a realistic setting, using UK Met Office radar data from both a summer convective event and a winter frontal event. The performance of the model is assessed both traditionally and using probabilistic measures of fit based on ROC curves. The model is shown to provide very good assimilation characteristics, and promising forecast skill. Improvements to the forecasting scheme are discussed.
Abstract:
Studies of the determinants and effects of innovation commonly make an assumption about the way in which firms make the decision to innovate, but rarely test this assumption. Using a panel of Irish manufacturing firms we test the performance of two alternative models of the innovation decision, and find that a two-stage model (the firm decides whether to innovate, then whether to perform product only, process only or both) outperforms a one-stage, simultaneous model. We also find that external knowledge sourcing affects the innovation decision and the type of innovation undertaken in a way not previously recognised in the literature. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
This exploratory paper, developing a conceptual model of owner-manager characteristics and access to finance, aims to investigate whether the concept of strategic groups plays a role in the process of small and medium-sized enterprises (SMEs) accessing finance. Strategic groups are groups of firms making similar patterns of investments in order to achieve their goals. This paper explores how strategic groups, which represent a classification of SMEs based upon their realised strategies, helps to provide an understanding of the success of SMEs in raising finance. The data, from a representative survey of 400 SMEs conducted by the Barclays Bank Telephone Research Unit, were subject to two-stage cluster analysis, thus codified into strategic groups using the natural rhythm of the data, rather than any subjective and value-laden categories being imposed by the authors. The findings show clear differentiation between strategic groups of SMEs, the characteristics of their owner-managers, and the financing strategies adopted. As such, the paper develops a novel typology of strategic groups of SMEs which, therefore, informs their financing strategies, as well as advising other stakeholders.
Abstract:
This thesis presents a theoretical investigation on applications of Raman effect in optical fibre communication as well as the design and optimisation of various Raman based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on Phosphosilicate core fibre are analysed and optimised for efficiency by using a discrete power balance model. The designs include a two-stage RFL based on Phosphosilicate core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple output wavelength RFL aimed to be used as a compact pump source for flat gain Raman amplifiers. The use of Phosphosilicate core fibre is proven to effectively reduce the design complexity and hence leads to better efficiency, stability and potentially lower cost. Second, the generalised Raman amplified gain model approach based on the power balance analysis and direct numerical simulation is developed. The approach can be used to effectively simulate optical transmission systems with distributed Raman amplification. Last, the potential employment of a hybrid amplification scheme, which is a combination between a distributed Raman amplifier and an Erbium-doped amplifier, is investigated by using the generalised Raman amplified gain model. The analysis focuses on the use of the scheme to upgrade a standard fibre network to a 40 Gb/s system.
Abstract:
An investigation has been undertaken into the effects of various radiations on commercially made Al-SiO2-Si capacitors (MOSCs). Detailed studies of the electrical and physical nature of such devices have been used to characterise both virgin and irradiated devices. In particular, an investigation of the nature and causes of dielectric breakdown in MOSCs has revealed that intrinsic breakdown is a two-stage process dominated by charge injection in a pre-breakdown stage; this is associated with localised high-field injection of carriers from the semiconductor substrate to interfacial and bulk charge traps which, it is proposed, leads to the formation of conducting channels through the dielectric with breakdown occurring as a result of the dissipation of the conduction band energy. A study of radiation-induced dielectric breakdown has revealed the possibility of anomalous hot-electron injection to an excess of bulk oxide traps in the ionization channel produced by very heavily ionizing radiation, which leads to intrinsic breakdown in high-field stressed devices. These findings are interpreted in terms of a modification to the model for radiation-induced dielectric breakdown based upon the primary dependence of breakdown on charge injection rather than high-field mechanisms. The results of a detailed investigation of charge trapping and interface state generation in such MOSCs due to various radiations have revealed evidence of neutron-induced interface states, and of the generation of positive oxide charge in devices due to all of the radiations tested. In particular, the greater the linear energy transfer of the radiation, the greater the magnitude of charge trapped in the oxide and the greater the number of interface states generated.
These findings are interpreted in terms of Si-H and Si-OH bond-breaking at the Si-SiO2 interface which is enhanced by charge carrier transfer to the interface and by anomalous charge injection to compensate for the excess of charge carriers created by the radiation.
Abstract:
The fluid–particle interaction and the impact of different heat transfer conditions on pyrolysis of biomass inside a 150 g/h fluidised bed reactor are modelled. Two different size biomass particles (350 µm and 550 µm in diameter) are injected into the fluidised bed. The different biomass particle sizes result in different heat transfer conditions. This is due to the fact that the 350 µm diameter particle is smaller than the sand particles of the reactor (440 µm), while the 550 µm one is larger. The bed-to-particle heat transfer for both cases is calculated according to the literature. Conductive heat transfer is assumed for the larger biomass particle (550 µm) inside the bed, while biomass–sand contacts for the smaller biomass particle (350 µm) were considered unimportant. The Eulerian approach is used to model the bubbling behaviour of the sand, which is treated as a continuum. Biomass reaction kinetics is modelled according to the literature using a two-stage, semi-global model which takes into account secondary reactions. The particle motion inside the reactor is computed using drag laws, dependent on the local volume fraction of each phase. FLUENT 6.2 has been used as the modelling framework of the simulations with the whole pyrolysis model incorporated in the form of User Defined Function (UDF).
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
A very good agreement of the two profiles was achieved within a percent relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables; raffinate concentration and extract concentration as controlled variables; and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) system as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction.
Multivariable MPC has shown more effective performance compared to other conventional techniques since it accounts for loops interaction, time delays, and input-output variables constraints.
Abstract:
The proliferation of visual display terminals (VDTs) in offices is an international phenomenon. Numerous studies have investigated the health implications, which can be categorised into visual problems, symptoms of musculo-skeletal discomfort, or psychosocial effects. The psychosocial effects are broader and the evidence in this area is mixed. The inconsistent results from the studies of VDT work so far undertaken may reflect several methodological shortcomings. In an attempt to overcome these deficiencies and to broaden the model of inter-relationships, a model was developed to investigate their interactions and the outputs of job satisfaction, stress and ill health. The study was a two-stage, long-term investigation with measures taken before the VDTs were introduced and the same measures taken 12 months after the 'go-live' date. The research was conducted in four offices of the Department of Social Security. The data were analysed for each individual site and, in addition, the total data were used in a path analysis model. Significant positive relationships were found at the pre-implementation stage between musculo-skeletal discomfort, psychosomatic ailments, visual complaints and stress. Job satisfaction was negatively related to visual complaints and musculo-skeletal discomfort. Direct paths were found for age and job level with variety found in the job, and for age with job satisfaction, together with a negative relationship with the office environment. The only job characteristic which had a direct path to stress was 'dealing with others'. Similar inter-relationships were found in the post-implementation data. In addition, however, attributes of the computer system, such as screen brightness and glare, were related positively with stress and negatively with job satisfaction.
The comparison of the data at the two stages found that there had been no significant changes in the users' perceptions of their job characteristics and job satisfaction but there was a small and significant reduction in the stress measure.
Abstract:
The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. An extension is made in this thesis to the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised models have been developed to cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution: two of them deterministic, called sensitivity analysis and deterministic appraisal, and the third one stochastic, called risk simulation. The models are designed to be flexible and can be adjusted according to the available computer capacity, expected accuracy and presentation of the output. The stochastic model is based on the assumption of statistical independence between individual variables and the existence of normality in their probability distributions. The component variables have been forecast using polynomials of degree four. This model was tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required.
The results of applying these measurements and planning models to the British motor vehicle manufacturing companies are presented and discussed.
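The stochastic risk-simulation model described above rests on two stated assumptions: statistical independence between the component variables and normality of their distributions. A minimal Monte Carlo sketch of that idea follows; the variable names, means and standard deviations are hypothetical illustrations, not figures from the thesis.

```python
import random

def simulate_productivity(components, n_trials=10_000, seed=42):
    """Monte Carlo risk simulation of an added-value productivity ratio.

    `components` maps each input variable to a (mean, std) pair; each
    variable is drawn independently from a normal distribution, mirroring
    the independence and normality assumptions of the stochastic model.
    Returns the sampled distribution of added value per unit labour cost.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_trials):
        added_value = rng.gauss(*components["added_value"])
        labour_cost = rng.gauss(*components["labour_cost"])
        if labour_cost > 0:  # guard against a (vanishingly rare) non-physical draw
            samples.append(added_value / labour_cost)
    return samples

# Hypothetical parameters, for illustration only.
dist = simulate_productivity({
    "added_value": (120.0, 15.0),   # e.g. added value, in currency units
    "labour_cost": (60.0, 5.0),     # e.g. employment cost, in currency units
})
mean_ratio = sum(dist) / len(dist)
```

Sampling the whole distribution, rather than a single deterministic value, is what lets the risk-simulation approach report a class interval of productivity outcomes instead of a point estimate.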
Abstract:
The thesis examines and explains the development of occupational exposure limits (OELs) as a means of preventing work-related disease and ill health. The research focuses on the USA and UK and sets the work within a certain historical and social context. A subsidiary aim of the thesis is to identify any shortcomings in OELs and the methods by which they are set, and to suggest alternatives. The research framework uses Thomas Kuhn's idea of science progressing by means of paradigms, which he describes at one point as '... universally recognised scientific achievements that for a time provide model problems and solutions to a community of practitioners' (Kuhn, 1970). Once learned, individuals in the community 'are committed to the same rules and standards for scientific practice' (ibid.). Kuhn's ideas are adapted by combining them with a view of industrial hygiene as an applied science-based profession having many of the qualities of non-scientific professions. The great advantage of this approach to OELs is that it keeps the analysis grounded in the behaviour and priorities of the groups which have forged, propounded, used, benefited from, and defended them. The development and use of OELs on a larger scale is shown to be connected to the growth of a new profession in the USA, industrial hygiene, with the assistance of another new profession, industrial toxicology. The origins of these professions, particularly industrial hygiene, are traced. By examining the growth of the professions and the writings of key individuals it is possible to show how technical, economic and social factors became embedded in the OEL paradigm which industrial hygienists and toxicologists forged. The origin, mission and needs of these professions and their clients made such influences almost inevitable.
The use of the OEL paradigm in practice is examined by an analysis of the process of the American Conference of Governmental Industrial Hygienists' Threshold Limit Value (ACGIH TLV) Committee via its Minutes from 1962-1984. A similar approach is taken with the development of OELs in the UK. Although the form and definition of TLVs has encouraged the belief that they are health-based OELs, the conclusion is that they, and most other OELs, are, and always have been, reasonably practicable limits: the degree of risk posed by a substance is weighed against the feasibility and cost of controlling exposure to that substance. The confusion over the status of TLVs and other OELs is seen to be a confusion at the heart of the OEL paradigm, and the historical perspective explains why this should be. The paradigm has prevented the creation of truly health-based and, conversely, truly reasonably practicable OELs. In the final part of the thesis the analysis of the development of OELs is set in a contemporary context and a proposal for a two-stage, two-committee procedure for producing sets of OELs is put forward. This approach is set within an alternative OEL paradigm. The advantages, benefits and likely obstacles to these proposals are discussed.
Abstract:
The research is concerned with the application of the computer simulation technique to study the performance of reinforced concrete columns in a fire environment. The effect of three different concrete constitutive models incorporated in the computer simulation on the structural response of reinforced concrete columns exposed to fire is investigated. The material models differed mainly in respect to the formulation of the mechanical properties of concrete. The results from the simulation have clearly illustrated that a more realistic response of a reinforced concrete column exposed to fire is given by a constitutive model with a transient creep or appropriate strain effect. The assessment of the relative effect of the three concrete material models is considered from the analysis by adopting the approach of a parametric study, carried out using the results from a series of analyses on columns heated on three sides, which produce substantial thermal gradients. Three different loading conditions were used on the column: axial loading, and eccentric loading to induce moments in both the same sense and the opposite sense to those induced by the thermal gradient. An axially loaded column heated on four sides was also considered. The computer modelling technique adopted separated the thermal and structural responses into two distinct computer programs. A finite element heat transfer analysis was used to determine the thermal response of the reinforced concrete columns when exposed to the ISO 834 furnace environment. The temperature distribution histories obtained were then used in conjunction with a structural response program. The effect of the occurrence of spalling on the structural behaviour of reinforced concrete columns is also investigated. There is general recognition of the potential problems of spalling but no real investigation into what effect spalling has on the fire resistance of reinforced concrete members.
In an attempt to address the situation, a method has been developed to model concrete columns exposed to fire which incorporates the effect of spalling. A total of 224 computer simulations were undertaken by varying the amounts of concrete lost during a specified period of exposure to fire. An array of six percentages of spalling was chosen for one range of simulations, while a two-stage progressive spalling regime was used for a second range. The quantification of the reduction in fire resistance of the columns against the amount of spalling, the heating and loading patterns, and the time at which the concrete spalls appears to indicate that it is the amount of spalling which is the most significant variable in the reduction of fire resistance.
Cross-orientation masking is speed invariant between ocular pathways but speed dependent within them
Abstract:
In human (D. H. Baker, T. S. Meese, & R. J. Summers, 2007b) and in cat (B. Li, M. R. Peterson, J. K. Thompson, T. Duong, & R. D. Freeman, 2005; F. Sengpiel & V. Vorobyov, 2005) there are at least two routes to cross-orientation suppression (XOS): a broadband, non-adaptable, monocular (within-eye) pathway and a more narrowband, adaptable interocular (between the eyes) pathway. We further characterized these two routes psychophysically by measuring the weight of suppression across spatio-temporal frequency for cross-oriented pairs of superimposed flickering Gabor patches. Masking functions were normalized to unmasked detection thresholds and fitted by a two-stage model of contrast gain control (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) that was developed to accommodate XOS. The weight of monocular suppression was a power function of the scalar quantity ‘speed’ (temporal-frequency/spatial-frequency). This weight can be expressed as the ratio of non-oriented magno- and parvo-like mechanisms, permitting a fast-acting, early locus, as befits the urgency for action associated with high retinal speeds. In contrast, dichoptic-masking functions superimposed. Overall, this (i) provides further evidence for dissociation between the two forms of XOS in humans, and (ii) indicates that the monocular and interocular varieties of XOS are space/time scale-dependent and scale-invariant, respectively. This suggests an image-processing role for interocular XOS that is tailored to natural image statistics—very different from that of the scale-dependent (speed-dependent) monocular variety.
Abstract:
The processing conducted by the visual system requires the combination of signals that are detected at different locations in the visual field. The processes by which these signals are combined are explored here using psychophysical experiments and computer modelling. Most of the work presented in this thesis is concerned with the summation of contrast over space at detection threshold. Previous investigations of this sort have been confounded by the inhomogeneity in contrast sensitivity across the visual field. Experiments performed in this thesis find that the decline in log contrast sensitivity with eccentricity is bilinear, with an initial steep fall-off followed by a shallower decline. This decline is scale-invariant for spatial frequencies of 0.7 to 4 c/deg. A detailed map of the inhomogeneity is developed, and applied to area summation experiments both by incorporating it into models of the visual system and by using it to compensate stimuli in order to factor out the effects of the inhomogeneity. The results of these area summation experiments show that the summation of contrast over area is spatially extensive (occurring over 33 stimulus carrier cycles), and that summation behaviour is the same in the fovea, parafovea, and periphery. Summation occurs according to a fourth-root summation rule, consistent with a “noisy energy” model. This work is extended to investigate the visual deficit in amblyopia, finding that area summation is normal in amblyopic observers. Finally, the methods used to study the summation of threshold contrast over area are adapted to investigate the integration of coherent orientation signals in a texture. The results of this study are described by a two-stage model, with a mandatory local combination stage followed by flexible global pooling of these local outputs. In each study, the results suggest a more extensive combination of signals in vision than has been previously understood.
Abstract:
This study investigates the use of reported loan loss provisions (LLP) by investors in their valuations of banks within the Middle East and North Africa region between the years 2006 and 2011. We decompose LLP into discretionary and non-discretionary components to test for differential valuations in the two banking sectors. We use alternative criteria to define the components of LLP in banks: loan quality/size and earnings management/manipulation incentives. We employ a price-level valuation model estimated using two-stage analyses. We find that LLP has positive value relevance to investors in both banking sectors. Investors in Islamic banks price the discretionary component relatively lower than their conventional counterparts. We attribute this result to differences in product and governance structures as well as to the religious perception of Islamic banking. In both banking sectors, investors construe an increase in the non-discretionary component as irrelevant valuation information. Our results are relevant to bank regulators in showing the signalling effect of LLP to bank value and stability.