891 results for "choice of partner"
Abstract:
Transient episodes of synchronisation of neuronal activity in particular frequency ranges are thought to underlie cognition. Empirical mode decomposition phase locking (EMDPL) analysis is a method for determining the frequency and timing of phase synchrony that is adaptive to intrinsic oscillations within data, alleviating the need for arbitrary bandpass filter cut-off selection. It is extended here to address the choice of reference electrode and removal of spurious synchrony resulting from volume conduction. Spline Laplacian transformation and independent component analysis (ICA) are performed as pre-processing steps, and preservation of phase synchrony between synthetic signals, combined using a simple forward model, is demonstrated. The method is contrasted with the use of bandpass filtering following the same pre-processing steps, and filter cut-offs are shown to influence synchrony detection markedly. Furthermore, an approach to the assessment of multiple EEG trials using the method is introduced, and the assessment of statistical significance of phase locking episodes is extended to render it adaptive to local phase synchrony levels. EMDPL is validated in the analysis of real EEG data recorded during finger tapping. The time course of event-related (de)synchronisation (ERD/ERS) is shown to differ from that of longer range phase locking episodes, implying different roles for these different types of synchronisation. It is suggested that the increase in phase locking which occurs just prior to movement, coinciding with a reduction in power (or ERD), may result from selection of the neural assembly relevant to the particular movement.
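The core EMDPL computation, stripped of the pre-processing and significance-testing machinery described above, can be sketched as follows. This is a minimal illustration assuming the PyEMD and SciPy packages, with a naive variance-based IMF matching rule; it is not the authors' implementation.

```python
# A minimal sketch of EMD-based phase-locking estimation, assuming the
# PyEMD and SciPy packages; it illustrates the general idea only, not the
# authors' full EMDPL pipeline (no Laplacian/ICA pre-processing or
# statistical significance testing is included).
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

def dominant_imf(sig):
    """Return the IMF carrying the most variance (a naive matching rule)."""
    imfs = EMD()(sig)
    return imfs[np.argmax(np.var(imfs, axis=1))]

def phase_locking_value(x, y):
    """|mean(exp(i*(phi_x - phi_y)))|: 1 = perfect locking, ~0 = none."""
    phase_x = np.angle(hilbert(dominant_imf(x)))   # instantaneous phase of x's mode
    phase_y = np.angle(hilbert(dominant_imf(y)))   # instantaneous phase of y's mode
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Example: two noisy copies of a common 10 Hz rhythm should lock strongly.
fs = 256
t = np.arange(0, 4, 1 / fs)
common = np.sin(2 * np.pi * 10 * t)
x = common + 0.5 * np.random.randn(t.size)
y = common + 0.5 * np.random.randn(t.size)
print(phase_locking_value(x, y))
```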
Abstract:
A new Bayesian algorithm for retrieving surface rain rate from Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes’s theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of the theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the “true” conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism, but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of the estimators and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
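The core of such a retrieval can be sketched on a discretised rain-rate grid. In the toy example below, the prior, the forward model and the observation error are illustrative placeholders, not the TMI algorithm's actual components; the saturation built into the forward model reproduces the weak-constraint behaviour at high rain rates noted above.

```python
# A minimal sketch of Bayesian rain-rate retrieval on a discretised grid.
# The prior, forward model and observation error are illustrative
# placeholders, not the actual TMI algorithm components.
import numpy as np

rain = np.linspace(0.0, 50.0, 501)          # candidate surface rain rates (mm/h)
prior = np.exp(-rain / 5.0)                 # assumed, roughly exponential prior
prior /= prior.sum()

def forward_tb(r):
    """Toy forward model: brightness temperature saturating at high rain rates."""
    return 180.0 + 100.0 * (1.0 - np.exp(-r / 10.0))

tb_obs, sigma = 255.0, 3.0                  # observed Tb (K) and its error std
# Gaussian conditional likelihood p(obs | rain rate).
like = np.exp(-0.5 * ((tb_obs - forward_tb(rain)) / sigma) ** 2)

posterior = prior * like                    # Bayes' theorem, up to normalisation
posterior /= posterior.sum()

post_mean = np.sum(rain * posterior)        # one possible estimator
post_mode = rain[np.argmax(posterior)]      # another; the choice matters
print(post_mean, post_mode)
```

Because the toy forward model saturates, the likelihood flattens at high rain rates and the posterior is pulled toward the prior, which is exactly the mechanism behind the high-rain-rate bias discussed in the abstract.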
Abstract:
This paper describes the design and manufacture of the filters and antireflection coatings used in the HIRDLS instrument. The multilayer design of the filters and coatings, the choice of layer materials, and the deposition techniques adopted to ensure adequate layer thickness control are discussed. The spectral assessment of the filters and coatings is carried out using an FTIR spectrometer; some measurement results are presented, together with a discussion of measurement accuracy and the identification and avoidance of measurement artifacts. The post-deposition processing of the filters by sawing to size, the writing of an identification code onto the coatings, and the environmental testing of the finished filters are also described.
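Multilayer designs of this kind are conventionally evaluated with the characteristic (transfer) matrix method. The sketch below is that generic textbook calculation at normal incidence; the layer indices and thicknesses are made up for illustration and do not represent the actual HIRDLS designs.

```python
# A generic characteristic-matrix calculation of multilayer reflectance at
# normal incidence; a standard textbook method, sketched with illustrative
# (not HIRDLS) layer materials and thicknesses.
import numpy as np

def stack_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.5):
    """Reflectance of a lossless thin-film stack at normal incidence."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength   # phase thickness of the layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, n_sub])              # stack + substrate admittance
    r = (n_in * B - C) / (n_in * B + C)
    return np.abs(r) ** 2

# Quarter-wave high/low-index pairs centred at 10 um (illustrative values,
# e.g. Ge / ZnSe-like indices for an infrared coating).
wl0 = 10.0                               # design wavelength (um)
n = [4.0, 2.2] * 5                       # alternating high/low refractive indices
d = [wl0 / (4 * ni) for ni in n]         # quarter-wave physical thicknesses
print(stack_reflectance(n, d, wl0))
```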
Abstract:
Although the use of climate scenarios for impact assessment has grown steadily since the 1990s, uptake of such information for adaptation is lagging by nearly a decade in terms of scientific output. Nonetheless, integration of climate risk information in development planning is now a priority for donor agencies because of the need to prepare for climate change impacts across different sectors and countries. This urgency stems from concerns that progress made against Millennium Development Goals (MDGs) could be threatened by anthropogenic climate change beyond 2015. Up to this time the human signal, though detectable and growing, will be a relatively small component of climate variability and change. This implies the need for a twin-track approach: on the one hand, vulnerability assessments of social and economic strategies for coping with present climate extremes and variability, and, on the other hand, development of climate forecast tools and scenarios to evaluate sector-specific, incremental changes in risk over the next few decades. This review starts by describing the climate outlook for the next couple of decades and the implications for adaptation assessments. We then review ways in which climate risk information is already being used in adaptation assessments and evaluate the strengths and weaknesses of three groups of techniques. Next we identify knowledge gaps and opportunities for improving the production and uptake of climate risk information for the 2020s. We assert that climate change scenarios can meet some, but not all, of the needs of adaptation planning. Even then, the choice of scenario technique must be matched to the intended application, taking into account local constraints of time, resources, human capacity and supporting infrastructure. We also show that much greater attention should be given to improving and critiquing models used for climate impact assessment, as standard practice. Finally, we highlight the over-arching need for the scientific community to provide more information and guidance on adapting to the risks of climate variability and change over nearer time horizons (i.e. the 2020s). Although the focus of the review is on information provision and uptake in developing regions, it is clear that many developed countries are facing the same challenges.
Abstract:
The performance of various statistical models and commonly used financial indicators for forecasting securitised real estate returns is examined for five European countries: the UK, Belgium, the Netherlands, France and Italy. Within a VAR framework, it is demonstrated that the gilt-equity yield ratio is in most cases a better predictor of securitised returns than the term structure or the dividend yield. In particular, investors should consider in their real estate return models the predictability of the gilt-equity yield ratio in Belgium, the Netherlands and France, and the term structure of interest rates in France. Predictions obtained from the VAR and univariate time-series models are compared with the predictions of an artificial neural network model. It is found that, whilst no single model is universally superior across all series, accuracy measures and horizons considered, the neural network model is generally able to offer the most accurate predictions for 1-month horizons. For quarterly and half-yearly forecasts, the random walk with a drift is the most successful for the UK, Belgian and Dutch returns and the neural network for French and Italian returns. Although this study underscores market context and forecast horizon as parameters relevant to the choice of the forecast model, it strongly indicates that analysts should exploit the potential of neural networks and assess more fully their forecast performance against more traditional models.
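A comparison of this kind can be sketched with standard tooling. The example below fits a VAR to a synthetic return series and gilt-equity yield ratio using statsmodels, and benchmarks a one-step forecast against a random walk with drift; the data, variable names and lag settings are illustrative, not the study's.

```python
# A minimal sketch of a VAR forecasting comparison; synthetic data stand in
# for the securitised return series and the gilt-equity yield ratio.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 240                                               # 20 years of monthly data
gilt_equity = np.cumsum(rng.normal(0, 0.1, n))        # gilt-equity yield ratio
returns = 0.3 * np.roll(gilt_equity, 1) + rng.normal(0, 1, n)
df = pd.DataFrame({"returns": returns[1:], "gey_ratio": gilt_equity[1:]})

model = VAR(df)
res = model.fit(maxlags=6, ic="aic")                  # lag order chosen by AIC
fcast = res.forecast(df.values[-res.k_ar:], steps=1)  # 1-month-ahead forecast

# Benchmark: random walk with drift for the return series.
drift = df["returns"].diff().mean()
rw_fcast = df["returns"].iloc[-1] + drift
print(fcast[0, 0], rw_fcast)
```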
Abstract:
A good portfolio structure enables an investor to diversify more effectively and understand systematic influences on their performance. However, in the property market, the choice of structure is affected by data constraints and convenience. Using individual return data, this study tests the hypothesis that some common structures in the UK do not explain a significant proportion of the variation in property returns. It is found that, in the periods studied, not all the structures were effective and, for the annual returns, no structures were significant in all periods. The results suggest that the drivers represented by the structures take some time to be reflected in individual property returns. They also confirm the results of other studies in finding property type a much stronger factor in explaining returns than region.
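Tests of this sort typically amount to dummy-variable regressions of individual returns on group membership. The sketch below, with wholly synthetic data and hypothetical groupings, shows the form such a test takes using statsmodels.

```python
# A minimal sketch of testing whether a portfolio structure (e.g. region or
# property type) explains individual property returns, via dummy-variable
# regression and an F-test; data and groupings are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "ret": rng.normal(8, 4, n),                               # annual returns (%)
    "ptype": rng.choice(["retail", "office", "industrial"], n),
    "region": rng.choice(["london", "south_east", "rest_uk"], n),
})
df.loc[df.ptype == "industrial", "ret"] += 3   # make property type matter

for factor in ["ptype", "region"]:
    fit = smf.ols(f"ret ~ C({factor})", data=df).fit()
    # The regression F-test asks whether the structure explains returns at all.
    print(factor, round(fit.fvalue, 2), round(fit.f_pvalue, 4), round(fit.rsquared, 3))
```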
Abstract:
This paper reviews the evidence relating to the question: does the risk of fungicide resistance increase or decrease with dose? The development of fungicide resistance progresses through three key phases. During the ‘emergence phase’ the resistant strain has to arise through mutation and invasion. During the subsequent ‘selection phase’, the resistant strain is present in the pathogen population and the fraction of the pathogen population carrying the resistance increases due to the selection pressure caused by the fungicide. During the final phase of ‘adjustment’, the dose or choice of fungicide may need to be changed to maintain effective control over a pathogen population where resistance has developed to intermediate levels. Emergence phase: there are no experimental publications and only one modelling study reporting on the emergence phase, and we conclude that work in this area is needed. Selection phase: all the published experimental work, and virtually all model studies, relate to the selection phase. Seven peer-reviewed and four non-peer-reviewed publications report experimental evidence. All show increased selection for fungicide resistance with increased fungicide dose, except for one peer-reviewed publication that does not detect any selection irrespective of dose and one conference proceedings publication which claims evidence for increased selection at a lower dose. In the mathematical models published, no evidence has been found that a lower dose could lead to a higher risk of fungicide resistance selection. We discuss areas of the dose-rate debate that need further study. These include further work on pathogen-fungicide combinations where the pathogen develops partial resistance to the fungicide and work on the emergence phase.
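The selection-phase logic can be made concrete with a toy haploid selection model, sketched below. The dose-response and fitness functions are illustrative assumptions, not those of the reviewed models; the point is only that a survival advantage that grows with dose drives the resistant fraction up faster at higher doses.

```python
# A toy selection-phase model: the resistant fraction of a pathogen
# population under a dose-dependent fungicide. The survival functions are
# illustrative assumptions, not those of the reviewed models.
import numpy as np

def resistant_fraction(dose, generations=20, r0=1e-4):
    """Discrete-generation selection with survival depending on dose."""
    surv_sens = np.exp(-2.0 * dose)   # sensitive strain: killed more at high dose
    surv_res = 0.9                    # resistant strain: largely unaffected
    r = r0
    for _ in range(generations):
        num = r * surv_res
        r = num / (num + (1 - r) * surv_sens)   # post-selection frequency
    return r

for dose in [0.0, 0.25, 0.5, 1.0]:   # fraction of the full label dose
    print(dose, resistant_fraction(dose))
```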
Abstract:
The Along-Track Scanning Radiometers (ATSRs) provide a long time-series of measurements suitable for the retrieval of cloud properties. This work evaluates the freely-available Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) dataset (version 3) created from the ATSR-2 (1995–2003) and Advanced ATSR (AATSR; 2002 onwards) records. Users are recommended to consider only retrievals flagged as high-quality, where there is a good consistency between the measurements and the retrieved state (corresponding to about 60% of converged retrievals over sea, and more than 80% over land). Cloud properties are found to be generally free of any significant spurious trends relating to satellite zenith angle. Estimates of the random error on retrieved cloud properties are suggested to be generally appropriate for optically-thick clouds, and up to a factor of two too small for optically-thin cases. The correspondence between ATSR-2 and AATSR cloud properties is high, but a relative calibration difference between the sensors of order 5–10% at 660 nm and 870 nm limits the potential of the current version of the dataset for trend analysis. As ATSR-2 is thought to have the better absolute calibration, the discussion focusses on this portion of the record. Cloud-top heights from GRAPE compare well to ground-based data at four sites, particularly for shallow clouds. Clouds forming in boundary-layer inversions are typically around 1 km too high in GRAPE due to poorly-resolved inversions in the modelled temperature profiles used. Global cloud fields are compared to satellite products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) measurements, and a climatology of liquid water content derived from satellite microwave radiometers. In all cases the main reasons for differences are linked to differing sensitivity to, and treatment of, multi-layer cloud systems. The correlation coefficient between GRAPE and the two MODIS products considered is generally high (greater than 0.7 for most cloud properties), except for liquid and ice cloud effective radius, which also show biases between the datasets. For liquid clouds, part of the difference is linked to choice of wavelengths used in the retrieval. Total cloud cover is slightly lower in GRAPE (0.64) than the CALIOP dataset (0.66). GRAPE underestimates liquid cloud water path relative to microwave radiometers by up to 100 g m⁻² near the Equator and overestimates by around 50 g m⁻² in the storm tracks. Finally, potential future improvements to the algorithm are outlined.
Abstract:
Consumer studies of meat have tended to use quantitative methodologies providing a wealth of statistically malleable information, but little in-depth insight into consumer perceptions of meat. The aim of the present study was, therefore, to understand the factors perceived as important in the selection of chicken meat, using qualitative methodology. Focus group discussions were tape-recorded, transcribed verbatim and content-analysed for major themes. Themes arising implied that “appearance” and “convenience” were the most important determinants of choice of chicken meat, and these factors appeared to be associated with perceptions of freshness, healthiness, product versatility and concepts of value. A descriptive model has been developed to illustrate the interrelationship between factors affecting chicken meat choice. This study indicates that those involved in the production and retailing of chicken products should concentrate upon product appearance and convenience as market drivers for their products.
Abstract:
Purpose – The purpose of this paper is to investigate the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Design/methodology/approach – Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. Findings – It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models. Evidence is found of equifinality in the outputs of a simple, aggregated model of development viability relative to more complex, disaggregated models. Originality/value – Development viability appraisal has become increasingly important in the planning system. Consequently, the theory, application and outputs from development appraisal are under intense scrutiny from a wide range of users. However, there has been very little published evaluation of viability models. This paper contributes to the limited literature in this area.
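The Monte Carlo approach described can be sketched with a simple residual viability model: sample the uncertain inputs, compute the residual, and examine the spread of outputs. All figures and distributions below are hypothetical, chosen only to illustrate the procedure.

```python
# A minimal sketch of Monte Carlo development viability appraisal: a simple
# residual model with stochastic inputs. All input values and distributions
# are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Stochastic inputs (illustrative means and spreads).
gdv = rng.normal(10_000_000, 750_000, n)        # gross development value (GBP)
build_cost = rng.normal(5_500_000, 400_000, n)  # construction cost
fees = 0.10 * build_cost                        # professional fees
finance = 0.05 * (build_cost + fees)            # finance costs (simplified)
profit = 0.20 * gdv                             # developer's profit target

# Residual land value: what remains for land and planning obligations.
residual = gdv - build_cost - fees - finance - profit

print(residual.mean(), residual.std())
print(np.percentile(residual, [5, 50, 95]))     # spread of model outputs
```

Comparing the output variance of this aggregated form against a disaggregated variant (e.g. costs and values built up line by line) is the kind of exercise the paper reports.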
Abstract:
We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that, by using a general functional decomposition for space-time dependent forcings, we can define elementary susceptibilities that allow one to construct the response of the system to general perturbations. Starting from the definition of the SRB measure, we then study the consequences of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows one to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results in terms of strategies for analysing the outputs of numerical experiments by providing a critical review of a formula proposed by Reick.
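For orientation, the elementary objects involved are the standard linear-response relations: the response of an observable to a weak forcing via a causal Green's function, and the susceptibility as its Fourier transform. The notation below is generic, not necessarily the paper's.

```latex
% Generic linear-response relations (standard notation): response of an
% observable A to a weak forcing f via a causal Green's function G_A, and
% the associated susceptibility chi_A.
\begin{align}
  \delta\langle A\rangle(t) &= \int_{0}^{\infty} G_{A}(\sigma)\, f(t-\sigma)\,\mathrm{d}\sigma, \\
  \chi_{A}(\omega) &= \int_{0}^{\infty} G_{A}(\sigma)\, e^{\mathrm{i}\omega\sigma}\,\mathrm{d}\sigma .
\end{align}
```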
Abstract:
The paper presents an overview of dynamic systems with inherent delays in both feedforward and feedback paths, and of how the performance of such systems can be affected by such delays. The authors concentrate on visually guided systems, where the behaviour of the system is largely dependent on the results of the vision sensors, with particular reference to active robot heads (real-time gaze control). We show how the performance of such systems can deteriorate substantially in the presence of unknown and/or variable delays. A considered choice of system architecture, however, allows the performance of active vision systems to be optimised with respect to the delays present in the system.
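The basic effect is easy to demonstrate: the sketch below runs a discrete proportional tracking loop whose measurement arrives d samples late, a crude stand-in for vision-processing latency. The plant, gain and delays are all illustrative, not a model of any particular robot head.

```python
# A minimal sketch of how feedback delay degrades a visually guided control
# loop: a discrete proportional controller tracking a target, with the
# measurement delayed by d samples. All values are illustrative.
import numpy as np

def track_error(delay, gain=0.5, steps=200):
    """RMS tracking error of x[k+1] = x[k] + gain*(target - x[k-delay])."""
    x = np.zeros(steps)
    target = 1.0
    for k in range(steps - 1):
        measured = x[max(k - delay, 0)]      # stale measurement from the camera
        x[k + 1] = x[k] + gain * (target - measured)
    return np.sqrt(np.mean((x - target) ** 2))

for d in [0, 2, 5, 10]:                      # delay in sample periods
    print(d, round(track_error(d), 3))
```

With zero delay the loop settles cleanly; as the delay grows the same gain produces oscillation and eventually instability, illustrating why the architecture must be designed around the delays present.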
Abstract:
We study the regularization problem for linear, constant-coefficient descriptor systems Ex' = Ax + Bu, y1 = Cx, y2 = Γx' by proportional and derivative mixed output feedback. Necessary and sufficient conditions are given which guarantee that there exist output feedbacks such that the closed-loop system is regular, has index at most one, and E + BGΓ has a desired rank, i.e., there is a desired number of differential and algebraic equations. To resolve the freedom in the choice of the feedback matrices, we then discuss how to obtain the desired regularizing feedback of minimum norm, and show that this approach leads to useful results in the sense of robustness only if the rank of E is decreased. Numerical procedures are derived to construct the desired feedback gains. These numerical procedures are based on orthogonal matrix transformations which can be implemented in a numerically stable way.
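Concretely, with mixed output feedback of the standard form u = Fy1 − Gy2 + v (a form consistent with the E + BGΓ term above, though the paper's exact sign convention may differ), substitution into Ex' = Ax + Bu gives the closed-loop descriptor system:

```latex
% Closed-loop system under u = F y_1 - G y_2 + v; regularity, index and
% the rank of E + B G \Gamma are then shaped by the choice of F and G.
\begin{equation}
  (E + BG\Gamma)\,x' = (A + BFC)\,x + Bv .
\end{equation}
```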
Abstract:
Producing projections of future crop yields requires careful thought about the appropriate use of atmosphere-ocean global climate model (AOGCM) simulations. Here we describe and demonstrate multiple methods for ‘calibrating’ climate projections using an ensemble of AOGCM simulations in a ‘perfect sibling’ framework. Crucially, this type of analysis assesses the ability of each calibration methodology to produce reliable estimates of future climate, which is not possible using historical observations alone. This type of approach could be more widely adopted for assessing calibration methodologies for crop modelling. The calibration methods assessed include the commonly used ‘delta’ (change factor) and ‘nudging’ (bias correction) approaches. We focus on daily maximum temperature in summer over Europe for this idealised case study, but the methods can be generalised to other variables and other regions. The calibration methods, which are relatively easy to implement given appropriate observations, produce more robust projections of future daily maximum temperatures and heat stress than using raw model output. The choice of calibration method will likely depend on the situation, but change factor approaches tend to perform best in our examples. Finally, we demonstrate that the uncertainty due to the choice of calibration methodology is a significant contributor to the total uncertainty in future climate projections for impact studies. We conclude that utilising a variety of calibration methods on output from a wide range of AOGCMs is essential to produce climate data that will ensure robust and reliable crop yield projections.
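The two named approaches differ in a simple but consequential way, sketched below with placeholder arrays standing in for observed and AOGCM-simulated daily maximum temperature: the delta (change factor) method perturbs the observations by the model's projected change, preserving observed variability, while nudging (bias correction) shifts the model's future output by its historical bias, preserving model variability.

```python
# A minimal sketch of the 'delta' (change factor) and 'nudging' (bias
# correction) calibration approaches; the arrays are placeholders for
# observed and AOGCM-simulated daily maximum temperature.
import numpy as np

rng = np.random.default_rng(7)
obs_hist = 20 + 3 * rng.standard_normal(1000)     # observed Tmax, historical
mod_hist = 22 + 4 * rng.standard_normal(1000)     # model Tmax, historical (biased)
mod_fut = 25 + 4 * rng.standard_normal(1000)      # model Tmax, future scenario

# Delta / change factor: add the model's projected change to observations.
delta = mod_fut.mean() - mod_hist.mean()
fut_delta = obs_hist + delta

# Nudging / bias correction: remove the model's historical mean bias
# from its future output.
bias = mod_hist.mean() - obs_hist.mean()
fut_bc = mod_fut - bias

print(fut_delta.mean(), fut_bc.mean())            # similar central estimates...
print(fut_delta.std(), fut_bc.std())              # ...but different variability
```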