997 results for microcosmic optical parameter
Abstract:
Data assimilation is predominantly used for state estimation: combining observational data with model predictions to produce an updated model state that most accurately approximates the true system state, whilst keeping the model parameters fixed. This updated model state is then used to initiate the next model forecast. Even with perfect initial data, inaccurate model parameters will lead to the growth of prediction errors. To generate reliable forecasts we need good estimates of both the current system state and the model parameters. This paper presents research into data assimilation methods for morphodynamic model state and parameter estimation. First, we focus on state estimation and describe the implementation of a three-dimensional variational (3D-Var) data assimilation scheme in a simple 2D morphodynamic model of Morecambe Bay, UK. The assimilation of observations of bathymetry derived from SAR satellite imagery and a ship-borne survey is shown to significantly improve the predictive capability of the model over a 2-year run. Here, the model parameters are set by manual calibration; this is laborious and is found to produce different parameter values depending on the type and coverage of the validation dataset. The second part of this paper considers the problem of model parameter estimation in more detail. We explain how, by employing the technique of state augmentation, it is possible to use data assimilation to estimate uncertain model parameters concurrently with the model state. This approach removes inefficiencies associated with manual calibration and enables more effective use of observational data. We outline the development of a novel hybrid sequential 3D-Var data assimilation algorithm for joint state-parameter estimation and demonstrate its efficacy using an idealised 1D sediment transport model. The results of this study are extremely positive and suggest that there is great potential for the use of data assimilation-based state-parameter estimation in coastal morphodynamic modelling.
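To make the state-augmentation idea concrete, here is a minimal, hypothetical sketch (not taken from the paper): the uncertain parameter is appended to the state vector and a single 3D-Var-style analysis updates state and parameter together through the background-error cross-covariances. All dimensions, covariances and observations below are invented for illustration.

```python
import numpy as np

# Minimal sketch of 3D-Var with state augmentation (illustrative values only).
# The model state x (e.g. bed elevation at n grid points) is augmented with one
# uncertain parameter; a single analysis step updates both together.
n = 5                                       # number of state variables (hypothetical)
x_b = np.zeros(n + 1)                       # background: state plus one parameter
x_b[-1] = 0.5                               # first guess of the parameter

B = np.eye(n + 1)                           # augmented background-error covariance
B[-1, -1] = 0.1                             # parameter background variance
B[:n, -1] = B[-1, :n] = 0.05                # state-parameter cross-covariances: these
                                            # carry observation information from the
                                            # observed state into the parameter
H = np.zeros((n, n + 1))                    # observation operator: state observed only
H[:, :n] = np.eye(n)
R = 0.2 * np.eye(n)                         # observation-error covariance
y = np.random.default_rng(0).normal(1.0, 0.1, n)   # synthetic observations

# Analysis: x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
print("analysed state:    ", np.round(x_a[:n], 3))
print("analysed parameter:", round(float(x_a[-1]), 3))
```

In a sequential scheme of this kind, the analysed parameter would then be carried forward into the next forecast cycle.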
Abstract:
Infections involving Salmonella enterica subsp. enterica serovars have serious animal and human health implications, causing gastroenteritis in humans and clinical symptoms, such as diarrhoea and abortion, in livestock. In this study, an optical genetic mapping technique was used to screen 20 field-isolate strains from four serovars implicated in disease outbreaks. The technique was able to distinguish between the serovars and the available sequenced strains, and to group them in agreement with similar data from microarrays and PFGE. The optical maps revealed variation in genome maps associated with antimicrobial resistance and prophage content in S. Typhimurium, and separated the S. Newport strains into two clear geographical lineages defined by the presence of prophage sequences. The technique was also able to detect novel insertions that may have had effects on the central metabolism of some strains. Overall, optical mapping allowed a greater level of differentiation of genomic content and spatial information than more traditional typing methods.
Abstract:
High-resolution ensemble simulations (Δx = 1 km) are performed with the Met Office Unified Model for the Boscastle (Cornwall, UK) flash-flooding event of 16 August 2004. Forecast uncertainties arising from imperfections in the forecast model are analysed by comparing the simulation results produced by two types of perturbation strategy. Motivated by the meteorology of the event, one type of perturbation alters relevant physics choices or parameter settings in the model's parametrization schemes. The other type of perturbation is designed to account for representativity error in the boundary-layer parametrization. It makes direct changes to the model state and provides a lower bound against which to judge the spread produced by other uncertainties. The Boscastle simulations have genuine skill at scales of approximately 60 km and an ensemble spread which can be estimated to within ∼10% with only eight members. Differences between the model-state perturbation and physics-modification strategies are discussed, the former being more important for triggering and the latter for subsequent cell development, including the average internal structure of convective cells. Despite such differences, the spread in rainfall evaluated at skilful scales is shown to be only weakly sensitive to the perturbation strategy. This suggests that relatively simple strategies for treating model uncertainty may be sufficient for practical, convective-scale ensemble forecasting.
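For illustration only (none of this is from the paper), the quoted spread estimate can be pictured as the member-to-member standard deviation of rainfall aggregated to a skilful scale; the rainfall fields below are synthetic, and the 60 km box size is taken from the abstract.

```python
import numpy as np

# Illustrative estimate of ensemble spread from a small (8-member) ensemble.
# Each "member" is a synthetic rainfall field on a hypothetical 1 km grid;
# spread is the standard deviation across members of rainfall averaged over
# a coarse box of roughly the skilful scale.
rng = np.random.default_rng(1)
n_members, nx, ny = 8, 120, 120
rain = rng.gamma(shape=2.0, scale=1.5, size=(n_members, nx, ny))   # mm, synthetic

box = 60                                          # ~60 km skilful scale (abstract)
coarse = rain[:, :box, :box].mean(axis=(1, 2))    # area-mean rain per member
spread = coarse.std(ddof=1)                       # ensemble spread at that scale
print("area-mean rainfall per member:", np.round(coarse, 2))
print(f"ensemble spread (std over 8 members): {spread:.3f} mm")
```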
Abstract:
Vegetation distribution and state have been measured since 1981 by the AVHRR (Advanced Very High Resolution Radiometer) instrument through satellite remote sensing. In this study, a correction method is applied to the Pathfinder NDVI (Normalized Difference Vegetation Index) data to create a continuous European vegetation phenology dataset with a 10-day temporal and 0.1° spatial resolution; additionally, land surface parameters for use in biosphere–atmosphere modelling are derived. The analysis of time-series from this dataset reveals, for the years 1982–2001, strong seasonal and interannual variability in European land surface vegetation state. Phenological metrics indicate a late and short growing season for the years 1985–1987, in addition to early and prolonged activity in the years 1989, 1990, 1994 and 1995. These variations are in close agreement with findings from phenological measurements at the surface; spring phenology is also shown to correlate particularly well with anomalies in winter temperature and in the winter North Atlantic Oscillation (NAO) index. Nevertheless, phenological metrics, which display considerable regional differences, could only be determined for vegetation with a seasonal behaviour. Trends in the phenological phases reveal a general shift to earlier (−0.54 days year⁻¹) and prolonged (0.96 days year⁻¹) growing periods which are statistically significant, especially for central Europe.
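As a hypothetical illustration of how a phenological metric can be extracted from 10-day composites (this is not the paper's correction or retrieval method), the sketch below estimates a start-of-season date as the first composite at which NDVI exceeds half of its seasonal amplitude.

```python
import numpy as np

# Illustrative phenological metric: start of season from 10-day NDVI composites,
# taken as the first date on which NDVI crosses half its seasonal amplitude.
days = np.arange(0, 360, 10)                                 # 10-day compositing periods
ndvi = 0.25 + 0.45 * np.exp(-((days - 190) / 70.0) ** 2)     # synthetic seasonal curve

threshold = ndvi.min() + 0.5 * (ndvi.max() - ndvi.min())     # half-amplitude threshold
start_idx = np.argmax(ndvi >= threshold)                     # first composite above it
print(f"estimated start of season: day {days[start_idx]}")

# A multi-year trend in such a metric (days per year) could then be obtained
# with an ordinary least-squares fit, e.g. np.polyfit(years, onset_days, 1).
```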
Abstract:
Platinum is one of the most common coatings used to optimize mirror reflectivity in soft X-ray beamlines. Normal operation results in optics contamination by carbon-based molecules present in the residual vacuum of the beamlines. The reflectivity reduction induced by a carbon layer at the mirror surface is a major problem in synchrotron radiation sources. A time-dependent photoelectron spectroscopy study of the chemical reactions which take place at the Pt(111) surface under operating conditions is presented. It is shown that the growth of the carbon contamination layer can be stopped and reversed by low partial pressures of oxygen for optics operated in intense photon beams at liquid-nitrogen temperature. For mirrors operated at room temperature, the carbon contamination observed for equivalent partial pressures of CO is reduced, and the effects of oxygen are observed on a long time scale.
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior due to the use of MCMC to simulate from the latent graphical model, since doing this exactly is not possible in general. The supplementary appendix also describes the nature of the resulting approximation.
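A compact sketch of the exchange-algorithm idea for an Ising coupling parameter is given below. It is deliberately simplified relative to the paper: the data are treated as noise-free, a flat prior and a symmetric random-walk proposal are assumed, and the exact auxiliary draw is replaced by Gibbs sweeps, which is precisely the source of approximation the abstract mentions. Grid sizes, sweep counts and all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def suff_stat(x):
    """Ising sufficient statistic: sum of spin products over neighbour pairs."""
    return np.sum(x[:, :-1] * x[:, 1:]) + np.sum(x[:-1, :] * x[1:, :])

def gibbs_ising(theta, shape=(12, 12), sweeps=100):
    """Approximate draw from p(x) proportional to exp(theta * suff_stat(x)),
    via single-site Gibbs sweeps (stands in for the exact auxiliary draw)."""
    x = rng.choice([-1, 1], size=shape)
    for _ in range(sweeps):
        for i in range(shape[0]):
            for j in range(shape[1]):
                nb = 0
                if i > 0:            nb += x[i - 1, j]
                if i < shape[0] - 1: nb += x[i + 1, j]
                if j > 0:            nb += x[i, j - 1]
                if j < shape[1] - 1: nb += x[i, j + 1]
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * nb))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x

def exchange_mcmc(y, n_iter=100, theta0=0.2, step=0.05, prior=(0.0, 1.0)):
    """Exchange algorithm for theta: the intractable normalising constants
    cancel in the acceptance ratio thanks to the auxiliary draw at theta'."""
    theta, s_y, samples = theta0, suff_stat(y), []
    for _ in range(n_iter):
        theta_prop = theta + step * rng.normal()
        if prior[0] < theta_prop < prior[1]:
            w = gibbs_ising(theta_prop, shape=y.shape)       # auxiliary field
            log_alpha = (theta_prop - theta) * (s_y - suff_stat(w))
            if np.log(rng.random()) < log_alpha:
                theta = theta_prop
        samples.append(theta)
    return np.array(samples)

y_obs = gibbs_ising(0.35)                 # synthetic "observed" field
samples = exchange_mcmc(y_obs)
print("posterior mean of theta:", round(samples.mean(), 3))
```

Handling noisy or partial observations, as in the paper, would add a further latent layer between the Ising field and the data, which is where the particle MCMC machinery comes in.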
Abstract:
A mechanism for amplification of mountain waves, and their associated drag, by parametric resonance is investigated using linear theory and numerical simulations. This mechanism, which is active when the Scorer parameter oscillates with height, was recently classified by previous authors as intrinsically nonlinear. Here it is shown that, if friction is included in the simplest possible form as a Rayleigh damping, and the solution to the Taylor-Goldstein equation is expanded in a power series of the amplitude of the Scorer parameter oscillation, linear theory can replicate the resonant amplification produced by numerical simulations with some accuracy. The drag is significantly altered by resonance in the vicinity of n/l_0 = 2, where l_0 is the unperturbed value of the Scorer parameter and n is the wave number of its oscillation. Depending on the phase of this oscillation, the drag may be substantially amplified or attenuated relative to its non-resonant value, displaying either single maxima or minima, or double extrema, near n/l_0 = 2. Both non-hydrostatic effects and friction tend to reduce the magnitude of the drag extrema. However, in exactly inviscid conditions, the single drag maximum and minimum are suppressed. Since friction in the atmosphere is often small but non-zero outside the boundary layer, modelling of the drag amplification mechanism addressed here should be quite sensitive to the type of turbulence closure employed in numerical models, or to computational dissipation in nominally inviscid simulations.
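For orientation (the precise formulation in the paper may differ), the linear problem can be written as the Taylor-Goldstein equation for the Fourier-transformed vertical velocity with a Scorer parameter oscillating about its unperturbed value; the oscillation amplitude a is the small quantity in which the power-series expansion is made.

```latex
% Background sketch only; notation and the exact damping/oscillation forms
% used in the paper may differ.
\[
  \frac{d^{2}\hat{w}}{dz^{2}} + \left[\, l^{2}(z) - k^{2} \,\right]\hat{w} = 0 ,
  \qquad
  l^{2}(z) = \frac{N^{2}}{U^{2}} - \frac{1}{U}\frac{d^{2}U}{dz^{2}} ,
\]
\[
  l^{2}(z) \approx l_{0}^{2}\left[\, 1 + a\cos\!\left( n z + \varphi \right) \right] ,
\]
% where \hat{w}(k,z) is the vertical-velocity transform, l_0 the unperturbed
% Scorer parameter, n the vertical wave number of its oscillation, a the small
% oscillation amplitude, and \varphi the phase that controls whether the drag
% near n/l_0 = 2 is amplified or attenuated. Rayleigh damping enters as a
% friction term, proportional to a constant damping coefficient, added to the
% linearised equations of motion before this equation is derived.
```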
Abstract:
Ground-based aerosol optical depth (AOD) climatologies at three high-altitude sites in Switzerland (Jungfraujoch and Davos) and Southern Germany (Hohenpeissenberg) are updated and re-calibrated for the period 1995–2010. In addition, the AOD time-series are augmented with previously unreported data and are homogenized for the first time. Trend analysis revealed weak AOD trends (λ = 500 nm) at Jungfraujoch (JFJ; +0.007 decade⁻¹), Davos (DAV; +0.002 decade⁻¹) and Hohenpeissenberg (HPB; −0.011 decade⁻¹), where the JFJ and HPB trends were statistically significant at the 95% and 90% confidence levels. However, a linear trend for the JFJ 1995–2005 period was found to be more appropriate than for 1995–2010, due to the influence of stratospheric AOD, and gave a trend of −0.003 decade⁻¹ (significant at the 95% level). When correcting for a recently available stratospheric AOD time-series, accounting for Pinatubo (1991) and more recent volcanic eruptions, the 1995–2010 AOD trends decreased slightly at DAV and HPB but remained weak at +0.000 decade⁻¹ and −0.013 decade⁻¹ (significant at the 95% level). The JFJ 1995–2005 AOD time-series similarly decreased to −0.003 decade⁻¹ (significant at the 95% level). We conclude that, despite a more detailed re-analysis of these three time-series, which have been extended by five years to the end of 2010, a significant decrease in AOD at these three high-altitude sites has still not been observed.
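As a purely illustrative sketch of the kind of post-processing described (not the paper's method or data), the snippet below subtracts an assumed stratospheric AOD contribution from a synthetic total-AOD series and expresses the fitted linear trend per decade; significance testing is omitted.

```python
import numpy as np

# Illustrative only: remove a stratospheric AOD contribution from a total-AOD
# time series, then fit a linear trend expressed per decade, as in the quoted
# "decade^-1" values. All numbers are synthetic.
rng = np.random.default_rng(3)
years = np.arange(1995, 2011) + 0.5
total_aod = 0.06 + 0.0007 * (years - 1995) + rng.normal(0, 0.005, years.size)
strat_aod = 0.01 * np.exp(-(years - 1995) / 6.0)      # decaying volcanic background

tropo_aod = total_aod - strat_aod                     # stratosphere-corrected series
slope, intercept = np.polyfit(years, tropo_aod, 1)    # AOD change per year
print(f"trend: {10 * slope:+.4f} per decade")
```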
Abstract:
This paper evaluates the relationship between the cloud modification factor (CMF) in the ultraviolet erythemal range and the cloud optical depth (COD) retrieved from the Aerosol Robotic Network (AERONET) "cloud mode" algorithm under overcast cloudy conditions (confirmed with sky images) at Granada, Spain, mainly for non-precipitating, overcast and relatively homogeneous water clouds. The empirical CMF showed a clear exponential dependence on the experimental COD values, decreasing approximately from 0.7 for COD = 10 to 0.25 for COD = 50. In addition, these COD measurements were used as input to the libRadtran radiative transfer code, allowing the simulation of CMF values for the selected overcast cases. The modeled CMF exhibited a dependence on COD similar to that of the empirical CMF, but the modeled values present a strong underestimation with respect to the empirical factors (mean bias of 22%). To explain this high bias, an exhaustive comparison between modeled and experimental UV erythemal irradiance (UVER) data was performed. The comparison revealed that the radiative transfer simulations were 8% higher than the observations for clear-sky conditions. The rest of the bias (~14%) may be attributed to the substantial underestimation of modeled UVER with respect to experimental UVER under overcast conditions, although the correlation between both datasets was high (R² ≈ 0.93). A sensitivity test showed that the main factor responsible for this underestimation is the experimental AERONET COD used as input in the simulations, which has been retrieved from zenith radiances in the visible range. In this sense, effective COD values in the erythemal interval were derived from an iteration procedure based on searching for the best match between modeled and experimental UVER values for each selected overcast case. These effective COD values were smaller than the AERONET COD data in about 80% of the overcast cases, with a mean relative difference of 22%.
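To illustrate the reported exponential dependence (the functional form and the data points below are assumptions made for this sketch, not the paper's fit), one could fit CMF = a·exp(−b·COD) to paired CMF/COD values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of an exponential CMF(COD) relationship, consistent with the
# quoted decrease from ~0.7 at COD=10 to ~0.25 at COD=50. Data are synthetic.
def cmf_model(cod, a, b):
    return a * np.exp(-b * cod)

cod = np.array([10, 20, 30, 40, 50], dtype=float)      # synthetic COD values
cmf = np.array([0.70, 0.53, 0.42, 0.32, 0.25])         # synthetic CMF values

params, _ = curve_fit(cmf_model, cod, cmf, p0=(1.0, 0.02))
a, b = params
print(f"fitted CMF(COD) = {a:.2f} * exp(-{b:.3f} * COD)")
print("predicted CMF at COD=30:", round(cmf_model(30.0, a, b), 2))
```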
Abstract:
We describe a one-port de-embedding technique suitable for the quasi-optical characterization of terahertz integrated components at frequencies beyond the operational range of most vector network analyzers. This technique is also suitable when the manufacturing of precision terminations to sufficiently fine tolerances for the application of a TRL de-embedding technique is not possible. The technique is based on vector reflection measurements of a series of easily realizable test pieces. A theoretical analysis is presented for the precision of the technique when implemented using a quasi-optical null-balanced bridge reflectometer. The analysis takes into account quantization effects in the linear and angular encoders associated with the balancing procedure, as well as source power and detector noise-equivalent power. The precision in measuring waveguide characteristic impedance and attenuation using this de-embedding technique is further analyzed after taking into account changes in the coupled power due to axial, rotational, and lateral alignment errors between the device under test and the instrument's test port. The analysis is based on the propagation of errors after assuming imperfect coupling of two fundamental Gaussian beams. The required precision in repositioning the samples at the instrument's test port is discussed. Quasi-optical measurements using the de-embedding process for a WR-8 adjustable precision short at 125 GHz are presented. The de-embedding methodology may be extended to allow the determination of S-parameters of arbitrary two-port junctions. The proposed measurement technique should prove most useful above 325 GHz, where there is a lack of measurement standards.
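For context, the sensitivity to repositioning errors can be pictured with standard quasi-optical coupling approximations for two identical fundamental Gaussian beams; these generic formulas and the waist and offset values below are assumptions made for illustration, not the paper's error-propagation analysis.

```python
import numpy as np

# Generic Gaussian-beam coupling approximations (standard quasi-optics results,
# assumed here; not the paper's analysis): fractional power coupled when a
# device is re-positioned with small lateral and tilt misalignments.
wavelength = 2.4e-3          # ~2.4 mm at 125 GHz
w0 = 10e-3                   # beam waist radius at the test port (assumed), m

def lateral_coupling(dx, w=w0):
    """Power coupling of two identical Gaussian beams offset laterally by dx (m)."""
    return np.exp(-(dx / w) ** 2)

def tilt_coupling(theta, w=w0, lam=wavelength):
    """Power coupling of two identical Gaussian beams tilted by theta (rad)."""
    return np.exp(-(np.pi * w * theta / lam) ** 2)

for dx in (0.1e-3, 0.5e-3, 1.0e-3):
    print(f"lateral offset {dx * 1e3:.1f} mm -> coupling {lateral_coupling(dx):.4f}")
for theta in (1e-3, 5e-3):
    print(f"tilt {theta * 1e3:.1f} mrad -> coupling {tilt_coupling(theta):.4f}")
```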
Abstract:
We present a detailed case study of the characteristics of auroral forms that constitute the first ionospheric signatures of substorm expansion phase onset. Analysis of the optical frequency and along-arc (azimuthal) wave number spectra provides the strongest constraint to date on the potential mechanisms and instabilities in the near-Earth magnetosphere that accompany auroral onset and which precede poleward arc expansion and auroral breakup. We evaluate the frequency and growth rates of the auroral forms as a function of azimuthal wave number to determine whether these wave characteristics are consistent with current models of the substorm onset mechanism. We find that the frequency, spatial scales, and growth rates of the auroral forms are most consistent with the cross-field current instability or a ballooning instability, most likely triggered close to the inner edge of the ion plasma sheet. This result is supportive of a near-Earth plasma sheet initiation of the substorm expansion phase. We also present evidence that the frequency and phase characteristics of the auroral undulations may be generated via resonant processes operating along the geomagnetic field. Our observations provide the most powerful constraint to date on the ionospheric manifestation of the physical processes operating during the first few minutes around auroral substorm onset.
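As a hypothetical illustration of how an along-arc (azimuthal) wave-number spectrum can be formed (this is not the paper's analysis chain), the sketch below Fourier-transforms a synthetic intensity profile sampled along an arc and reports the dominant azimuthal wavelength.

```python
import numpy as np

# Illustrative along-arc (azimuthal) wave-number spectrum from a 1-D intensity
# profile sampled along an auroral arc. All data below are synthetic.
rng = np.random.default_rng(4)
n_points = 256
arc_length_km = 512.0                                # hypothetical arc extent
x = np.linspace(0.0, arc_length_km, n_points, endpoint=False)
intensity = (1.0
             + 0.3 * np.sin(2 * np.pi * x / 64.0)    # ~64 km azimuthal structure
             + 0.05 * rng.normal(size=n_points))     # noise

spectrum = np.abs(np.fft.rfft(intensity - intensity.mean())) ** 2
wavenumber = np.fft.rfftfreq(n_points, d=arc_length_km / n_points)   # cycles per km
peak = wavenumber[np.argmax(spectrum[1:]) + 1]       # skip the zero-frequency bin
print(f"dominant along-arc wavelength: {1.0 / peak:.1f} km")
```

Repeating such a transform frame by frame would, in principle, give the frequency and growth rate of each azimuthal wave number, which is the kind of quantity the abstract compares against onset-instability models.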
Abstract:
It has been argued that extended exposure to naturalistic input provides L2 learners with more of an opportunity than classroom-only environments to converge on target morphosyntactic competence, given that the former provide more positive evidence of less salient linguistic properties than the latter (e.g., Isabelli 2004). Implicitly, the claim is that such exposure is needed to fully reset parameters. However, such a position conflicts with the notion of parameterization (cf. Rothman and Iverson 2007). In light of two types of competing generative theories of adult L2 acquisition – the No Impairment Hypothesis (e.g., Duffield and White 1999) and so-called Failed Features approaches (e.g., Beck 1998; Franceschina 2001; Hawkins and Chan 1997) – we investigate the verifiability of such a claim. Thirty intermediate L2 Spanish learners were tested with regard to properties of the Null-Subject Parameter before and after study abroad. The data suggest that (i) parameter resetting is possible and (ii) exposure to naturalistic input is not privileged.
Abstract:
Traditionally, functional magnetic resonance imaging (fMRI) has been used to map activity in the human brain by measuring increases in the Blood Oxygenation Level Dependent (BOLD) signal. Positive BOLD fMRI signal changes are often accompanied by sustained negative signal changes. Previous studies investigating the neurovascular coupling mechanisms of the negative BOLD phenomenon have used concurrent 2D optical imaging spectroscopy (2D-OIS) and electrophysiology (Boorman et al., 2010). These experiments suggested that the negative BOLD signal in response to whisker stimulation was a result of an increase in deoxy-haemoglobin and reduced multi-unit activity in the deep cortical layers. However, Boorman et al. (2010) did not measure the BOLD and haemodynamic responses concurrently and so could not quantitatively compare either the spatial maps or the 2D-OIS and fMRI time series directly. Furthermore, their study utilised a homogeneous tissue model which is predominantly sensitive to haemodynamic changes in the more superficial layers. Here we test whether the 2D-OIS technique is appropriate for studies of negative BOLD. We used concurrent fMRI and 2D-OIS techniques to investigate the haemodynamics underlying the negative BOLD response at 7 Tesla. We investigated whether optical methods could be used to accurately map and measure the negative BOLD phenomenon by using 2D-OIS haemodynamic data to derive predictions from a biophysical model of BOLD signal changes. We showed that, despite the deep cortical origin of the negative BOLD response, if an appropriate heterogeneous tissue model is used in the spectroscopic analysis then 2D-OIS can be used to investigate the negative BOLD phenomenon.
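For context only: one widely used way to turn haemodynamic estimates into a BOLD prediction is a balloon-model-style observation equation in normalised deoxyhaemoglobin q and blood volume v. The form below follows that standard formulation, but the constants are placeholders and are not the values used in the paper or calibrated for 7 T.

```python
import numpy as np

# Illustrative balloon-model-style BOLD observation equation (a standard form;
# the constants here are placeholders, not the paper's values).
# q: deoxyhaemoglobin content and v: blood volume, both normalised to baseline,
# as could be estimated from 2D-OIS haemodynamic time series.
V0 = 0.03                     # resting venous blood volume fraction (assumed)
k1, k2, k3 = 4.3, 0.6, 0.4    # field- and sequence-dependent constants (assumed)

def bold_from_haemodynamics(q, v):
    """Fractional BOLD signal change predicted from normalised q and v."""
    return V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))

# A "negative BOLD" scenario: deoxyhaemoglobin rises while volume changes little.
q = np.array([1.00, 1.02, 1.05, 1.08, 1.05, 1.01])
v = np.array([1.00, 1.00, 1.01, 1.01, 1.00, 1.00])
print(np.round(100 * bold_from_haemodynamics(q, v), 3))   # percent signal change
```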