14 results for "Matching-to-sample arbitrário" in the Aston University Research Archive
Abstract:
The generation of very short range forecasts of precipitation in the 0-6 h time window is traditionally referred to as nowcasting. Most existing nowcasting systems essentially extrapolate radar observations in some manner; however, very few systems account for the uncertainties involved. Thus deterministic forecasts are produced, which are of limited use when decisions must be made, since they carry no measure of confidence or spread. This paper develops a Bayesian state space modelling framework for quantitative precipitation nowcasting which is probabilistic from conception. The model treats the observations (radar) as noisy realisations of the underlying true precipitation process, recognising that this process can never be completely known and thus must be represented probabilistically. In the model presented here the dynamics of the precipitation are dominated by advection, so this is a probabilistic extrapolation forecast. The model is designed in such a way as to minimise the computational burden while maintaining a full, joint representation of the probability density function of the precipitation process. The update and evolution equations avoid the need to sample, so only one model needs to be run, as opposed to the more traditional ensemble route. It is shown that the model works well on both simulated and real data, but that further work is required before the model can be used operationally.
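As a rough illustration of the sampling-free update described in this abstract, here is a minimal sketch (hypothetical, not the paper's model): a linear-Gaussian state space model in which the state transition is pure advection and a Kalman filter propagates the mean and covariance of the precipitation field analytically, so a single model run carries the full forecast density.

```python
import numpy as np

n = 64                                   # grid cells along one transect
F = np.roll(np.eye(n), 1, axis=1)        # state transition: advect one cell/step
Q = 0.05 * np.eye(n)                     # process (model-error) covariance
R = 0.20 * np.eye(n)                     # radar observation-noise covariance
I = np.eye(n)

def kalman_step(m, P, y):
    """One predict/update cycle; returns posterior mean and covariance."""
    m_pred = F @ m                       # advect the mean
    P_pred = F @ P @ F.T + Q             # advect and inflate the covariance
    K = P_pred @ np.linalg.inv(P_pred + R)   # Kalman gain (observation H = I)
    return m_pred + K @ (y - m_pred), (I - K) @ P_pred

rng = np.random.default_rng(0)
truth = np.exp(-0.5 * ((np.arange(n) - 20) / 4.0) ** 2)  # a rain "blob"
m, P = np.zeros(n), np.eye(n)
for t in range(10):
    truth = np.roll(truth, -1)                            # true advection
    y = truth + rng.normal(0, np.sqrt(0.2), n)            # noisy radar scan
    m, P = kalman_step(m, P, y)
# m is the nowcast mean; np.sqrt(np.diag(P)) gives the per-cell spread
```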
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, as well as knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R² is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
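The two rules of thumb mentioned above are easy to make concrete. The sketch below (illustrative only; the data and names are invented) computes the 5-10 subjects-per-variable sample size and the R² of an ordinary least-squares fit.

```python
import numpy as np

def required_n(n_predictors, ratio=10):
    """Rule-of-thumb sample size: 5-10 subjects per predictor."""
    return ratio * n_predictors

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                   # 200 subjects, 4 predictors
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=200)
print(required_n(X.shape[1]))                   # 40 subjects at the 10x ratio
print(r_squared(X, y))                          # suspect the model if < 0.5
```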
Abstract:
The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
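A toy analogue of the approach described above, assuming none of the authors' actual models: a Gaussian process prior over a latent field, a non-linear (sign-ambiguous) observation model that makes the posterior multimodal, and a plain random-walk Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
t = np.linspace(0, 1, n)

# Squared-exponential GP prior covariance (jitter added for stability).
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.2) ** 2) + 1e-8 * np.eye(n)
K_inv = np.linalg.inv(K)

x_true = np.linalg.cholesky(K) @ rng.normal(size=n)   # a draw from the prior
y = x_true**2 + rng.normal(0, 0.05, n)                # sign-ambiguous "forward model"

def log_post(x):
    """Unnormalised log posterior: GP prior + Gaussian likelihood."""
    log_prior = -0.5 * x @ K_inv @ x
    log_lik = -0.5 * np.sum((y - x**2) ** 2) / 0.05**2
    return log_prior + log_lik

# Random-walk Metropolis over the latent field; the +/- sign ambiguity
# gives the posterior multiple modes, as with wind-direction aliases.
x, lp, samples = np.zeros(n), log_post(np.zeros(n)), []
for it in range(20000):
    prop = x + 0.05 * rng.normal(size=n)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x.copy())
```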
Abstract:
Recent results on direct femtosecond inscription of straight low-loss waveguides in borosilicate glass are presented. We also demonstrate the lowest losses yet reported in curvilinear waveguides, which we use as the main building blocks for integrated photonic circuits. Low-loss waveguides are of great importance to a variety of applications of integrated optics. We report recent results on direct femtosecond fabrication of smooth low-loss waveguides in standard optical glass by means of a femtosecond chirped-pulse oscillator alone (Scientific XL, Femtolasers), operating at a repetition rate of 11 MHz, at a wavelength of 800 nm, with an FWHM pulse duration of about 50 fs and a spectral width of 30 nm. The pulse energy on target was up to 70 nJ. In the transverse inscription geometry, we inscribed waveguides at depths from 10 to 300 micrometers beneath the surface in samples of 50 x 50 x 1 mm made of pure BK7 borosilicate glass. The translation of the samples was accomplished by a 2D air-bearing stage (Aerotech) with sub-micrometer precision at speeds of up to 100 mm per second (the hardware limit). A third direction of translation (Z, along the inscribing beam, i.e. perpendicular to the sample plane) allows truly 3D structures to be fabricated. The waveguides were characterized in terms of induced refractive index contrast, their dimensions and cross-sections, mode-field profiles, and total insertion losses at both 633 nm and 1550 nm. There was almost no dependence of the inscription on laser polarization. The experimental conditions (depth, laser polarization, pulse energy, translation speed, and others) were optimized for minimum insertion losses when coupled to a standard optical fibre (SMF-28). We found that our optimal inscription conditions coincide with those recently published by other groups [1, 3], despite significant differences in practically all experimental parameters. Using the optimum regime for straight waveguide fabrication, we inscribed a set of curvilinear tracks arranged to ensure the same propagation length (and thus losses) and coupling conditions, while the radii of curvature varied from 3 to 10 mm. This allowed us to measure bend losses, which are at or below about 1 dB/cm at a radius of curvature of R = 10 mm. We also demonstrate the possibility of fabricating periodic perturbations of the refractive index in such waveguides with the same set-up; periods of about 520 nm were demonstrated, which allowed us to fabricate wavelength-selective devices. This versatility, together with the very short inscription time (the optimum translation speed was found to be 40 mm/s), makes our approach attractive for industrial applications, for example in next-generation high-speed telecom networks.
Abstract:
A paradox of memory research is that repeated checking results in a decrease in memory certainty, memory vividness and confidence [van den Hout, M. A., & Kindt, M. (2003a). Phenomenological validity of an OCD-memory model and the remember/know distinction. Behaviour Research and Therapy, 41, 369–378; van den Hout, M. A., & Kindt, M. (2003b). Repeated checking causes memory distrust. Behaviour Research and Therapy, 41, 301–316]. Although these findings have mainly been attributed to changes in episodic long-term memory, it has been suggested [Shimamura, A. P. (2000). Toward a cognitive neuroscience of metacognition. Consciousness and Cognition, 9, 313–323] that representations in working memory could already suffer from detrimental checking. In two experiments we set out to test this hypothesis by employing a delayed-match-to-sample working memory task. Letters had to be remembered in their correct locations, a task designed to engage the episodic short-term buffer of working memory [Baddeley, A. D. (2000). The episodic buffer: a new component in working memory? Trends in Cognitive Sciences, 4, 417–423]. Crucially, we introduced an intermediate distractor question that was prone to induce frustrating and unnecessary checking on trials where no correct answer was possible. Reaction times and confidence ratings on the actual memory test of these trials confirmed the success of this manipulation. Most importantly, high checkers [cf. VOCI; Thordarson, D. S., Radomsky, A. S., Rachman, S., Shafran, R., Sawchuk, C. N., & Hakstian, A. R. (2004). The Vancouver obsessional compulsive inventory (VOCI). Behaviour Research and Therapy, 42(11), 1289–1314] were less accurate than low checkers when frustrating checking was induced, especially if the experimental context actually emphasized the irrelevance of the misleading question. The clinical relevance of this result was substantiated by means of an extreme-groups comparison across the two studies. The findings are discussed in the context of detrimental checking and lack of distractor inhibition as a way of weakening fragile bindings within the episodic short-term buffer of Baddeley's (2000) model. Clinical implications, limitations and future research are considered.
Abstract:
To reveal the moisture migration mechanism of unsaturated red clays, which are sensitive to changes in water content and widely distributed in South China, and thereby use them rationally as a filling material for highway embankments, a method to measure the water content of red clay cylinders using X-ray computed tomography (CT) was proposed and verified. Studies of moisture migration in the red clays under rainfall and groundwater level conditions were then performed at different degrees of compaction. The results show that the relationship between dry density, water content, and CT value determined from X-ray CT tests can be used to nondestructively measure the water content of red clay cylinders at different migration times, which avoids the error introduced by sample-to-sample variation. Rainfall, groundwater level, and degree of compaction are factors that can significantly affect the moisture migration distance and migration rate. Techniques such as lowering the groundwater table and increasing the degree of compaction of the red clays can be used to prevent or delay moisture migration in highway embankments filled with red clays.
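A hedged sketch of the kind of calibration the abstract describes; the linear form and all numbers below are invented for illustration. CT value is regressed on dry density and water content, then the fit is inverted to estimate water content nondestructively.

```python
import numpy as np

# Invented calibration data: (dry density g/cm^3, water content %, mean CT value)
rho_d = np.array([1.50, 1.50, 1.60, 1.60, 1.70, 1.70])
w     = np.array([18.0, 24.0, 18.0, 24.0, 18.0, 24.0])
ct    = np.array([820., 905., 870., 955., 925., 1010.])

# Fit CT = b0 + b1*rho_d + b2*w by least squares.
A = np.column_stack([np.ones_like(ct), rho_d, w])
(b0, b1, b2), *_ = np.linalg.lstsq(A, ct, rcond=None)

def water_content(ct_value, dry_density):
    """Invert the calibration: dry density known, CT value measured."""
    return (ct_value - b0 - b1 * dry_density) / b2

print(water_content(940.0, 1.60))   # estimated water content in %
```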
Abstract:
Multiple transformative forces target marketing, many of which derive from new technologies that allow us to sample thinking in real time (i.e., brain imaging) or to look at large aggregations of decisions (i.e., big data). There has been an inclination to refer to the intersection of these technologies with the general topic of marketing as "neuromarketing". There has not, however, been a serious effort to frame neuromarketing, which is the goal of this paper. Neuromarketing can be compared to neuroeconomics: neuroeconomics is generally focused on how individuals make "choices" and on how distributions of choices are represented. Neuromarketing, in contrast, focuses on how a distribution of choices can be shifted or "influenced", which can occur at multiple "scales" of behavior (e.g., individual, group, or market/society). Given that influence can affect choice through many cognitive modalities, and not just the valuation of choice options, a science of influence also implies a need to develop a model of cognitive function integrating attention, memory, and reward/aversion function. The paper concludes with a brief description of three domains of neuromarketing application for studying influence, and their caveats.
Abstract:
Motivated by the historically poor productivity performance of Northern Ireland firms and the longstanding productivity gap with the UK, the aim of this thesis is to examine, through the use of firm-level data, how exporting, innovation, and public financial assistance impact on firm productivity growth. These particular activities are investigated due to the continued policy focus on their link to productivity growth and the theoretical claims of a direct positive relationship. To undertake these analyses, a newly constructed dataset is used which links together cross-sectional and longitudinal data over the 1998-2008 period from the Annual Business Survey, the Manufacturing Sales and Export Survey, the Community Innovation Survey, and Invest NI Selective Financial Assistance (SFA) payment data. Econometric methodologies are employed to estimate each of the relationships with regard to productivity growth, making use in particular of Heckman selection techniques and propensity score matching to take account of the critical issues of endogeneity and selection bias. The results show that more productive firms self-select into exporting but there is no resulting productivity effect from starting to export, contesting the argument for learning-by-exporting. Product innovation is also found to have no impact on productivity growth over a four-year period, but there is evidence of a negative process innovation impact, likely to reflect temporary learning effects. Finally, SFA assistance, including the amount of the payment, is found to have no short-term impact on productivity growth, suggesting substantial deadweight effects and/or targeting of inefficient firms. The results provide partial evidence as to why Northern Ireland has failed to narrow the productivity gap with the rest of the UK. The analyses further highlight the need for access to comprehensive firm-level data for research purposes, not least to underpin robust evidence-based policymaking.
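As an illustration of one of the econometric tools named above, here is a toy propensity-score matching sketch on synthetic data (not the thesis dataset): the propensity to "start exporting" is estimated by logistic regression, and each treated firm is matched to its nearest control on that score.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
size = rng.normal(size=n)                          # firm covariate, e.g. log size
p_true = 1 / (1 + np.exp(-(0.8 * size - 0.2)))     # selection into exporting
treated = rng.uniform(size=n) < p_true
d = treated.astype(float)
growth = 0.5 * size + rng.normal(size=n)           # outcome: no true effect here

# Logistic regression by Newton's method for the propensity score.
X = np.column_stack([np.ones(n), size])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (d - p))
pscore = 1 / (1 + np.exp(-X @ beta))

# One-to-one nearest-neighbour matching on the score (with replacement).
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(pscore[c_idx][None, :] - pscore[t_idx][:, None]).argmin(axis=1)]
att = (growth[t_idx] - growth[matches]).mean()
print(f"ATT estimate: {att:.3f}")                  # should be near zero here
```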
Abstract:
The supply chain can be a source of competitive advantage for the firm. Simulation is an effective tool for investigating supply chain problems. The three main simulation approaches in the supply chain context are System Dynamics (SD), Discrete Event Simulation (DES), and Agent Based Modelling (ABM). A sample from the literature suggests that whilst SD and ABM have been used to address strategic and planning problems, DES has mainly been used on planning and operational problems. A review of received wisdom suggests that historically, driven by custom and practice, certain simulation techniques have been focused on certain problem types. A theoretical review of the techniques, however, suggests that the scope of their application should be much wider and that supply chain practitioners could benefit from applying them in this broader way.
Abstract:
Traditional approaches to calculating total factor productivity (TFP) change through Malmquist indexes rely on distance functions. In this paper we show that the use of distance functions as a means to calculate TFP change may introduce some bias into the analysis, and we therefore propose a procedure that calculates TFP change through observed values only. Our total TFP change is then decomposed into efficiency change, technological change, and a residual effect. This decomposition makes use of a non-oriented measure in order to avoid problems associated with the traditional use of radially oriented measures, especially when variable-returns-to-scale technologies are to be compared. The proposed approach is applied to a sample of Portuguese bank branches.
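For orientation, a generic observed-values TFP ratio of the kind the abstract alludes to (a sketch under simplifying assumptions, not the authors' exact procedure or decomposition): output and input quantity indexes are built from observed data and aggregation weights, and their ratio gives TFP change.

```python
import numpy as np

def tfp_change(x0, y0, x1, y1, wx, wy):
    """x: inputs, y: outputs, wx/wy: aggregation weights (e.g. prices);
    returns aggregate output growth divided by aggregate input growth."""
    out_index = (y1 @ wy) / (y0 @ wy)
    in_index = (x1 @ wx) / (x0 @ wx)
    return out_index / in_index

# One bank branch: 2 inputs (staff, floor space), 2 outputs (accounts, loans);
# all figures invented for illustration.
x0, y0 = np.array([10.0, 200.0]), np.array([5000.0, 120.0])
x1, y1 = np.array([9.0, 190.0]), np.array([5300.0, 130.0])
wx, wy = np.array([30.0, 1.0]), np.array([1.0, 50.0])
print(tfp_change(x0, y0, x1, y1, wx, wy))   # > 1 means TFP improved
```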
Abstract:
Fifteen Miscanthus genotypes grown in five locations across Europe were analysed to investigate the influence of genetic and environmental factors on cell wall composition. Chemometric techniques combining near infrared reflectance spectroscopy (NIRS) and conventional chemical analyses were used to construct calibration models for determination of acid detergent lignin (ADL), acid detergent fibre (ADF), and neutral detergent fibre (NDF) from sample spectra. The results generated were subsequently converted to lignin, cellulose, and hemicellulose content and used to assess the genetic and environmental variation in cell wall composition of Miscanthus and to identify genotypes which display quality traits suitable for exploitation in a range of energy conversion systems. The NIRS calibration models developed were found to predict concentrations with a good degree of accuracy based on the coefficient of determination (R²), standard error of calibration (SEC), and standard error of cross-validation (SECV) values. Across all sites, mean lignin, cellulose, and hemicellulose values in the winter harvest ranged from 76–115 g kg⁻¹, 412–529 g kg⁻¹, and 235–338 g kg⁻¹ respectively. Overall, of the 15 genotypes, Miscanthus × giganteus and Miscanthus sacchariflorus contained the higher lignin and cellulose concentrations in the winter harvest. The degree of observed genotypic variation in cell wall composition indicates good potential for plant breeding and for matching feedstocks to different energy conversion processes.
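The calibration statistics named above (R², SEC, SECV) can be made concrete with a toy model; the sketch below uses synthetic "spectra" and plain least squares, whereas a real NIRS calibration would typically use PLS regression.

```python
import numpy as np

rng = np.random.default_rng(4)
n_samp, n_wav = 40, 10
spectra = rng.normal(size=(n_samp, n_wav))                 # synthetic NIR spectra
lignin = spectra @ rng.normal(size=n_wav) + rng.normal(0, 0.3, n_samp)

def fit_predict(X_tr, y_tr, X_te):
    """Least-squares calibration fit on (X_tr, y_tr), prediction for X_te."""
    b, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ b

pred = fit_predict(spectra, lignin, spectra)
sec = np.sqrt(np.mean((lignin - pred) ** 2))               # calibration error
r2 = 1 - np.sum((lignin - pred) ** 2) / np.sum((lignin - lignin.mean()) ** 2)

# Leave-one-out cross-validation gives the SECV.
loo = np.array([
    fit_predict(np.delete(spectra, i, 0), np.delete(lignin, i), spectra[i])
    for i in range(n_samp)
])
secv = np.sqrt(np.mean((lignin - loo) ** 2))
print(f"R2={r2:.3f}  SEC={sec:.3f}  SECV={secv:.3f}")
```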
Abstract:
Frequency, time, and place of charging and discharging have a critical impact on the Quality of Experience (QoE) of using Electric Vehicles (EVs). EV charging and discharging scheduling schemes should consider both the QoE of using an EV and the load capacity of the power grid. In this paper, we design a traveling plan-aware scheduling scheme for EV charging in the driving pattern and a cooperative EV charging and discharging scheme in the parking pattern to improve the QoE of using EVs and enhance the reliability of the power grid. For traveling plan-aware scheduling, the assignment of EVs to Charging Stations (CSs) is modeled as a many-to-one matching game and the Stable Matching Algorithm (SMA) is proposed. For cooperative EV charging and discharging in the parking pattern, the electricity exchange between charging EVs and discharging EVs in the same parking lot is formulated as a many-to-many matching model with ties, and we develop the Pareto Optimal Matching Algorithm (POMA). Simulation results indicate that the SMA can significantly improve the average system utility for EV charging in the driving pattern, and that the POMA can increase the amount of electricity offloaded from the grid, helping to enhance the reliability of the power grid.
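For intuition about the many-to-one matching game, here is a generic deferred-acceptance stable matching sketch (not necessarily the paper's SMA): EVs propose to charging stations in preference order, and capacity-limited stations keep only their most-preferred proposers.

```python
def stable_match(ev_prefs, cs_prefs, capacity):
    """ev_prefs: {ev: [cs, ...]}, cs_prefs: {cs: [ev, ...]}, capacity: {cs: int}.
    Assumes complete preference lists. Returns {cs: [assigned EVs]}."""
    rank = {cs: {ev: i for i, ev in enumerate(lst)} for cs, lst in cs_prefs.items()}
    assigned = {cs: [] for cs in cs_prefs}
    free = list(ev_prefs)
    next_choice = {ev: 0 for ev in ev_prefs}
    while free:
        ev = free.pop()
        cs = ev_prefs[ev][next_choice[ev]]    # propose to next-best station
        next_choice[ev] += 1
        assigned[cs].append(ev)
        assigned[cs].sort(key=lambda e: rank[cs][e])
        if len(assigned[cs]) > capacity[cs]:
            free.append(assigned[cs].pop())   # reject the least-preferred EV
    return assigned

# Hypothetical toy instance: three EVs, two stations, station cs1 has one slot.
ev_prefs = {"ev1": ["cs1", "cs2"], "ev2": ["cs1", "cs2"], "ev3": ["cs1", "cs2"]}
cs_prefs = {"cs1": ["ev2", "ev1", "ev3"], "cs2": ["ev1", "ev3", "ev2"]}
print(stable_match(ev_prefs, cs_prefs, {"cs1": 1, "cs2": 2}))
# -> cs1 keeps ev2; ev1 and ev3 end up at cs2
```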