52 results for "removing"
Abstract:
For a targeted observations case, the dependence of the size of the forecast impact on the targeted dropsonde observation error in the data assimilation is assessed. The targeted observations were made in the lee of Greenland; the dependence of the impact on the proximity of the observations to the Greenland coast is also investigated. Experiments were conducted using the Met Office Unified Model (MetUM), over a limited-area domain at 24-km grid spacing, with a four-dimensional variational data assimilation (4D-Var) scheme. Reducing the operational dropsonde observation errors by one-half increases the maximum forecast improvement from 5% to 7%–10%, measured in terms of total energy. However, the largest impact is seen by replacing two dropsondes on the Greenland coast with two farther from the steep orography; this increases the maximum forecast improvement from 5% to 18% for an 18-h forecast (using operational observation errors). Forecast degradation caused by two dropsonde observations on the Greenland coast is shown to arise from spreading of data by the background errors up the steep slope of Greenland. Removing boundary layer data from these dropsondes reduces the forecast degradation, but it is only a partial solution to this problem. Although only from one case study, these results suggest that observations positioned within a correlation length scale of steep orography may degrade the forecast through the anomalous upslope spreading of analysis increments along terrain-following model levels.
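As context for the percentages quoted: forecast impact in targeting studies is usually measured with a total-energy norm of the forecast difference. A standard dry form (an assumption here, since the abstract does not give the paper's exact definition) is:

```latex
% Dry total-energy norm of the forecast difference fields u', v', T', p_s';
% T_r and p_r are reference temperature and pressure, c_p and R_d the usual
% thermodynamic constants. Vertical weighting conventions vary by study.
E = \frac{1}{2} \int_V \left( u'^2 + v'^2
    + \frac{c_p}{T_r}\, T'^2
    + R_d T_r \left( \frac{p_s'}{p_r} \right)^2 \right) \mathrm{d}V
```

The quoted 5%–18% improvements would then be reductions in a norm of this kind relative to a control forecast.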
Abstract:
Under certain media and fermentation conditions, B. subtilis can produce surfactin, a biosurfactant of the lipopeptide class. Surfactin has exceptional surfactant activity and exhibits some interesting biological characteristics, such as antibacterial activity, antitumoral activity against ascites carcinoma cells, and a hypocholesterolemic activity that inhibits cAMP phosphodiesterase, as well as anti-HIV properties. A cost-effective recovery and purification of surfactin from fermentation broth using a two-step ultrafiltration (UF) process has been developed in order to reduce the cost of surfactin production. In this study, competitive adsorption of surfactin and proteins at the air-water interface was studied using surface pressure measurements. Small volumes of bovine serum albumin (BSA) and β-casein solutions were added to the air-water interface on a Langmuir trough and allowed to stabilise before the addition of surfactin to the subphase. Contrasting interfacial behaviour of the proteins was observed, with β-casein showing faster initial adsorption than BSA. On introduction of surfactin, both proteins were displaced, but a longer time was taken to displace β-casein. Overall, the results showed that surfactin was highly surface-active, forming a β-sheet structure at the air-water interface after reaching its critical micelle concentration (CMC), and was effective in removing both protein films, which can be explained by the orogenic mechanism. The results also showed that the two-step UF process was effective in achieving high-purity, fully functional surfactin.
Abstract:
Many numerical models for weather prediction and climate studies are run at resolutions that are too coarse to resolve convection explicitly, but too fine to justify the local equilibrium assumed by conventional convective parameterizations. The Plant-Craig (PC) stochastic convective parameterization scheme, developed in this paper, solves this problem by removing the assumption that a given grid-scale situation must always produce the same sub-grid-scale convective response. Instead, for each timestep and gridpoint, one of the many possible convective responses consistent with the large-scale situation is randomly selected. The scheme requires as input the large-scale state as opposed to the instantaneous grid-scale state, but must nonetheless be able to account for genuine variations in the large-scale situation. Here we investigate the behaviour of the PC scheme in three-dimensional simulations of radiative-convective equilibrium, demonstrating in particular that the necessary space-time averaging required to produce a good representation of the input large-scale state is not in conflict with the requirement to capture large-scale variations. The resulting equilibrium profiles agree well with those obtained from established deterministic schemes, and with corresponding cloud-resolving model simulations. Unlike the conventional schemes, the statistics for mass flux and rainfall variability from the PC scheme also agree well with relevant theory and vary appropriately with spatial scale. The scheme is further shown to adapt automatically to changes in grid length and in forcing strength.
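As an illustration of the random-selection idea (not the full scheme), the sketch below draws one realization of grid-box convective mass flux from the distributions the Plant-Craig theory assumes: a Poisson-distributed plume count and exponentially distributed per-plume mass fluxes. The numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_convective_mass_flux(mean_total_flux, mean_plume_flux):
    """One stochastic realization of grid-box convective mass flux in the
    spirit of Plant-Craig: the plume count is Poisson-distributed and
    individual plume mass fluxes are exponentially distributed, so that
    repeated draws average to the deterministic closure value."""
    expected_plumes = mean_total_flux / mean_plume_flux
    n_plumes = rng.poisson(expected_plumes)               # random plume count
    plume_fluxes = rng.exponential(mean_plume_flux, n_plumes)
    return plume_fluxes.sum()

# Placeholder values: closure mass flux 0.02 kg m-2 s-1, mean plume mass
# flux 2e-3 kg m-2 s-1, i.e. 10 plumes expected per grid box.
draws = [sample_convective_mass_flux(0.02, 2e-3) for _ in range(10_000)]
print(np.mean(draws))   # ~0.02: the ensemble mean recovers the closure
```

Individual draws differ from timestep to timestep, which is precisely the grid-scale variability the deterministic schemes suppress.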
Abstract:
The performance of flood inundation models is often assessed using satellite-observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed, using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran's I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
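The subsampling idea lends itself to a short sketch. Below, Moran's I is computed for a random subsample of synthetic shoreline water levels using binary distance-band weights; the data, weight scheme and cutoff are illustrative assumptions, and the significance test against a permutation null is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def morans_i(values, coords, cutoff):
    """Moran's I with binary distance-band weights: w_ij = 1 when points
    i and j lie within `cutoff` of each other (i != j), else 0."""
    x = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((d > 0) & (d < cutoff)).astype(float)
    n, w_sum = len(values), w.sum()
    return (n / w_sum) * (x @ w @ x) / (x @ x)

# Hypothetical workflow: draw random subsamples of shoreline water levels,
# keep those whose Moran's I shows no significant spatial dependency,
# calibrate the model against each, and average the results.
levels = rng.normal(10.0, 0.3, size=200)          # synthetic levels (m)
coords = rng.uniform(0, 5000.0, size=(200, 2))    # synthetic positions (m)
idx = rng.choice(200, size=30, replace=False)
print(morans_i(levels[idx], coords[idx], cutoff=500.0))
```

Values near zero indicate no spatial autocorrelation; clustered residuals push Moran's I toward one.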
Abstract:
In terrestrial television transmission, multiple paths of various lengths can occur between the transmitter and the receiver. Such paths occur because of reflections from objects outside the direct transmission path. The multipath signals arriving at the receiver are all detected along with the intended signal, causing time-displaced replicas called 'ghosts' to appear on the television picture. With an increasing number of people living within built-up areas, ghosting is becoming commonplace, and deghosting is therefore becoming increasingly important. This thesis uses a deterministic time-domain approach to deghosting, resulting in a simple solution to the problem of removing ghosts. A new video detector is presented which reduces the synchronous detector local-oscillator phase error, caused by any practical size of ghost, to a lower level than has previously been achieved. With the new detector, dispersion of the video signal is minimised, and a known closed-form time-domain description of the individual ghost components within the detected video is subsequently obtained. Developed from mathematical descriptions of the detected video, a new deghoster filter structure is presented which is capable of removing both the in-phase (I) and the phase-quadrature (Q) ghost signals that arise from VSB operation. The new deghoster filter requires much less hardware than any previous deghoster capable of removing both I and Q ghost components. A new channel identification algorithm, based upon simple correlation techniques, was also developed to find the delay and complex amplitude characteristics of individual ghosts. The result of the channel identification is then passed to the new I and Q deghoster filter for ghost cancellation. Five papers have been published from the research work performed for this thesis.
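The thesis's identification algorithm is not reproduced here, but generic correlation-based channel identification works as in the following sketch: cross-correlate the received signal with a known reference waveform and read ghost delays and complex amplitudes off the correlation peaks.

```python
import numpy as np

rng = np.random.default_rng(2)

def identify_ghosts(received, reference, n_ghosts):
    """Estimate ghost delays and complex amplitudes by cross-correlating
    the received signal with a known reference waveform; each correlation
    peak gives one path (the direct signal plus n_ghosts ghosts)."""
    corr = np.correlate(received, reference, mode="full")
    lags = np.arange(-len(reference) + 1, len(received))
    peaks = np.argsort(np.abs(corr))[::-1][:n_ghosts + 1]
    norm = np.vdot(reference, reference).real
    return [(int(lags[p]), corr[p] / norm) for p in sorted(peaks)]

# Synthetic check: a direct path plus one ghost, delayed 40 samples with
# complex amplitude 0.3j (i.e. a quadrature ghost from VSB detection).
ref = rng.standard_normal(256)
rx = np.concatenate([ref, np.zeros(64)]).astype(complex)
rx[40:40 + 256] += 0.3j * ref
print(identify_ghosts(rx, ref, n_ghosts=1))   # ~[(0, 1+0j), (40, 0.3j)]
```

The estimated (delay, amplitude) pairs are exactly what a deghoster filter needs to synthesise and subtract the I and Q ghost components.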
Abstract:
In the lowland agro-forest of the Sierra Madre Biodiversity Corridor (SMBC), a native rodent species, Rattus everetti, is considered to be competitively dominant over an invasive pest species, Rattus tanezumi. The main aim of this study was to assess the response of R. tanezumi following short-term removal of R. everetti. We tested this experimentally by trapping and removing R. everetti from two treatment sites in agro-forest habitat on three occasions over three consecutive months. This was followed by three months of non-removal trapping. Two non-treatment sites were trapped for comparison. Following R. everetti removal, R. everetti individuals rapidly immigrated into the treatment sites, and a significantly higher proportion of R. tanezumi females were in breeding condition in the treatment sites than in the non-treatment sites. The results from this study provide evidence of competition between native and invasive rodent species in complex agro-ecosystems. We were also able to demonstrate that R. everetti populations can recover rapidly from the non-target effects of short-term lethal control in and around agro-forest habitat.
Abstract:
The plant defence proteins α1- and α2-purothionin (Pth) are type 1 thionins from common wheat (Triticum aestivum). These highly homologous proteins possess characteristics common amongst antimicrobial peptides and proteins, namely cationic charge, amphiphilicity and hydrophobicity. Both α1- and α2-Pth possess the same net charge but differ in relative hydrophobicity, as determined by C18 reversed-phase HPLC. Brewster angle microscopy, X-ray and neutron reflectometry, external-reflection FTIR and associated surface pressure measurements demonstrated that α1- and α2-Pth interact strongly with condensed-phase 1,2-dipalmitoyl-sn-glycero-3-phospho-(1'-rac-glycerol) (DPPG) monolayers at the air/liquid interface. Both thionins disrupted the in-plane structure of the anionic phospholipid monolayer, removing lipid in the process, and both penetrated the lipid monolayer in addition to adsorbing as a single protein layer to the lipid head-group. However, analysis of the interfacial structures revealed that α2-Pth disrupted the lipid film faster and removed more phospholipid (12%) from the interface than α1-Pth. Correlating the protein properties with lipid-binding activity suggests that hydrophobicity plays a key role in the membrane lipid removal activity of thionins.
Abstract:
Advances in hardware and software in the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example when the data stream is infinite, fast-changing, or simply too large to be stored. One of the most important data mining techniques for data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
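As a rough illustration of the add/remove rule cycle described above, here is a minimal sketch of an eRules-style classifier. The induction step, buffer size and accuracy threshold are simplifying assumptions; the published algorithm uses Prism-based rule induction and more careful drift handling.

```python
from collections import Counter, deque

class AdaptiveRuleClassifier:
    """Sketch of an eRules-style classifier: predict with an evolving rule
    set, abstain when no rule fires, drop rules whose accuracy decays, and
    induce new rules from a buffer of unclassified instances."""

    def __init__(self, min_acc=0.7, min_cover=20, buffer_size=200):
        self.rules = []            # each: {feature, value, label, hit, ok}
        self.unclassified = deque(maxlen=buffer_size)
        self.min_acc, self.min_cover = min_acc, min_cover

    def predict(self, x):
        for r in self.rules:
            if x.get(r["feature"]) == r["value"]:
                return r["label"]
        return None                # abstain rather than guess

    def learn(self, x, y):
        fired = False
        for r in self.rules:
            if x.get(r["feature"]) == r["value"]:
                fired = True
                r["hit"] += 1
                r["ok"] += (r["label"] == y)
        # remove rules that have become unreliable under concept drift
        self.rules = [r for r in self.rules if r["hit"] < self.min_cover
                      or r["ok"] / r["hit"] >= self.min_acc]
        if not fired:
            self.unclassified.append((x, y))
            if len(self.unclassified) == self.unclassified.maxlen:
                self._induce_rule()

    def _induce_rule(self):
        # toy induction: majority label among buffered instances sharing
        # the most frequent (feature, value) pair
        pairs = Counter(p for x, _ in self.unclassified for p in x.items())
        (feature, value), _ = pairs.most_common(1)[0]
        labels = Counter(y for x, y in self.unclassified
                         if x.get(feature) == value)
        self.rules.append({"feature": feature, "value": value,
                           "label": labels.most_common(1)[0][0],
                           "hit": 0, "ok": 0})
        self.unclassified.clear()
```

On each stream element one would call predict, then learn once the true label arrives; abstentions feed the buffer from which new rules are induced.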
Abstract:
We perform a multimodel detection and attribution study with climate model simulation output and satellite-based measurements of tropospheric and stratospheric temperature change. We use simulation output from 20 climate models participating in phase 5 of the Coupled Model Intercomparison Project. This multimodel archive provides estimates of the signal pattern in response to combined anthropogenic and natural external forcing (the fingerprint) and the noise of internally generated variability. Using these estimates, we calculate signal-to-noise (S/N) ratios to quantify the strength of the fingerprint in the observations relative to fingerprint strength in natural climate noise. For changes in lower stratospheric temperature between 1979 and 2011, S/N ratios vary from 26 to 36, depending on the choice of observational dataset. In the lower troposphere, the fingerprint strength in observations is smaller, but S/N ratios are still significant at the 1% level or better, and range from three to eight. We find no evidence that these ratios are spuriously inflated by model variability errors. After removing all global mean signals, model fingerprints remain identifiable in 70% of the tests involving tropospheric temperature changes. Despite such agreement in the large-scale features of model and observed geographical patterns of atmospheric temperature change, most models do not replicate the size of the observed changes. On average, the models analyzed underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. Although the precise causes of such differences are unclear, model biases in lower stratospheric temperature trends are likely to be reduced by more realistic treatment of stratospheric ozone depletion and volcanic aerosol forcing.
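The S/N calculation described above has a compact generic form; the sketch below projects annual anomaly maps onto the fingerprint, fits a least-squares trend, and compares it with trends of the same length in the projected control run. EOF truncation, area weighting and the observational datasets are all omitted; the synthetic data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def signal_to_noise(obs_maps, fingerprint, control_maps, years):
    """Project annual anomaly maps onto the fingerprint, fit a linear
    trend to the projection time series, and divide by the standard
    deviation of same-length trends in the projected control run."""
    signal_ts = obs_maps @ fingerprint
    signal_trend = np.polyfit(years, signal_ts, 1)[0]
    noise_ts = control_maps @ fingerprint
    n = len(years)
    noise_trends = [np.polyfit(years, noise_ts[i:i + n], 1)[0]
                    for i in range(0, len(noise_ts) - n + 1, n)]
    return signal_trend / np.std(noise_trends)

# Synthetic demo: 33 years (1979-2011) of maps on 100 grid points.
years = np.arange(1979, 2012)
fp = rng.standard_normal(100)
fp /= np.linalg.norm(fp)
obs = (0.02 * np.outer(years - years.mean(), fp)
       + 0.1 * rng.standard_normal((33, 100)))
ctrl = 0.1 * rng.standard_normal((660, 100))     # 660-yr control run
print(signal_to_noise(obs, fp, ctrl, years))     # large S/N by design
```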
Abstract:
In much of the English-speaking world the lawn is the most common of all garden features. For arguably a millennium it has played a significant role in the landscape and during that period it has been inextricably linked with grasses. Nevertheless other plant species have accompanied the grasses and also been used in creating lawns. From medieval wildflowers to Victorian weeds, the plants that challenge the formal concept of the perfect lawn have journeyed with it but have until recently remained only small players within the dominion of grass. By the beginning of the 21st century, with a new environmental ethos permeating the garden, the long journey of the grassy lawn and its plant companions has led to the grass monoculture being heretically rethought: by removing both the monoculture and the grass.
Abstract:
Three experiments examined the cultural relativity of emotion recognition using the visual search task. Caucasian-English and Japanese participants were required to search for an angry or happy discrepant face target against an array of competing distractor faces. Both cultural groups performed the task with displays that consisted of Caucasian and Japanese faces in order to investigate the effects of racial congruence on emotion detection performance. Under high perceptual load conditions, both cultural groups detected the happy face more efficiently than the angry face. When perceptual load was reduced such that target detection could be achieved by feature-matching, the English group continued to show a happiness advantage in search performance that was more strongly pronounced for other race faces. Japanese participants showed search time equivalence for happy and angry targets. Experiment 3 encouraged participants to adopt a perceptual based strategy for target detection by removing the term 'emotion' from the instructions. Whilst this manipulation did not alter the happiness advantage displayed by our English group, it reinstated it for our Japanese group, who showed a detection advantage for happiness only for other race faces. The results demonstrate cultural and linguistic modifiers on the perceptual saliency of the emotional signal and provide new converging evidence from cognitive psychology for the interactionist perspective on emotional expression recognition.
Abstract:
This paper describes the techniques used to obtain sea surface temperature (SST) retrievals from the Geostationary Operational Environmental Satellite 12 (GOES-12) at the National Oceanic and Atmospheric Administration's Office of Satellite Data Processing and Distribution. Previous SST retrieval techniques relying on channels at 11 and 12 μm are not applicable because GOES-12 lacks the latter channel. Cloud detection is performed using a Bayesian method exploiting fast-forward modeling of prior clear-sky radiances using numerical weather predictions. The basic retrieval algorithm used at nighttime is based on a linear combination of brightness temperatures at 3.9 and 11 μm. In comparison with traditional split-window SSTs (using 11- and 12-μm channels), simulations show that this combination has maximum scatter when observing drier, colder scenes, with a comparable overall performance. For daytime retrieval, the same algorithm is applied after estimating and removing the contribution to brightness temperature in the 3.9-μm channel from solar irradiance. The correction is based on radiative transfer simulations and comprises a parameterization for atmospheric scattering and a calculation of ocean surface reflected radiance. Potential use of the 13-μm channel for SST is shown in a simulation study: in conjunction with the 3.9-μm channel, it can reduce the retrieval error by 30%. Some validation results are shown here, while a companion paper by Maturi et al. presents a detailed analysis of the validation results for the operational algorithms described in the present article.
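The nighttime algorithm described above, a linear combination of 3.9- and 11-μm brightness temperatures, has the following generic shape. The coefficient values below are placeholders, not the operational ones; in practice they come from regression against matchup data or radiative-transfer simulations, and the daytime case additionally requires the modeled solar correction to the 3.9-μm channel.

```python
def sst_nighttime(bt39_k, bt11_k, coeffs=(1.5, 1.02, 0.01)):
    """Minimal sketch of the nighttime retrieval: SST as a linear
    combination of the 3.9- and 11-um brightness temperatures (kelvin).
    The coefficients here are illustrative placeholders only."""
    a0, a1, a2 = coeffs
    return a0 + a1 * bt39_k + a2 * bt11_k

# Placeholder brightness temperatures in kelvin.
print(sst_nighttime(288.0, 287.0))
```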
Abstract:
The work involves investigation of a type of wireless power system in which analysis yields the construction of a prototype modeled as a singular technological artifact. Exploration of this artifact forms the intellectual basis not only for its prototypical forms, but also for variant forms not yet discovered. Through the process, the role of the artifact is greatly clarified, together with its most suitable application given the constraints on the delivery problem and optimization strategies to improve it. In order to improve maturity and contribute to a body of knowledge, this document proposes research utilizing efficient inductive transfer in the mid-field region for the purpose of removing wired connections and electrical contacts. While this description seems enough to state the purpose of the work, it does not convey the compromises of having to redraw the lines of demarcation between near and far field in the traditional method of broadcasting. Two striking scenarios are addressed in this thesis: firstly, the mathematical explanation of wireless power follows from J.C. Maxwell's original equations; secondly, the behavior of wireless power in the circuit follows from Joseph Larmor's fundamental work on the dynamics of the field concept. A model of propagation will be presented which matches observations in experiments. A modified model of the dipole will be presented to address the phenomena observed in the theory and experiments. Two distinct sets of experiments will test the concept of single and two coupled modes. In the more esoteric context of the zero- and first-order magnetic field, the suggestion of a third coupled mode is presented. Through the remaking of wireless power in this context, the author intends to show the reader that ideas lost to history, bound to a path of complete obscurity, are once again innovative and useful.
Abstract:
We present a refined parametric model for forecasting electricity demand which performed particularly well in the recent Global Energy Forecasting Competition (GEFCom 2012). We begin by motivating and presenting a simple parametric model, treating the electricity demand as a function of the temperature and the date. We then set out a series of refinements of the model, explaining the rationale for each and using the competition scores to demonstrate that each successive refinement step increases the accuracy of the model's predictions. These refinements include combining models from multiple weather stations, removing outliers from the historical data, and special treatment of public holidays.
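As a toy illustration of the kind of simple parametric model the paper starts from (before its refinements), one might write demand as a quadratic response to temperature plus a periodic intra-day term and a holiday offset. The structure and parameter values below are assumptions for illustration, not the competition model.

```python
import numpy as np

def demand_forecast(temp_c, hour, is_holiday, params):
    """Toy parametric demand model: quadratic temperature response,
    sinusoidal intra-day term, and a holiday offset. All parameters
    are illustrative placeholders."""
    a, b, c, daily_amp, holiday_off = params
    temperature_term = a + b * temp_c + c * temp_c ** 2   # U-shaped in temp
    daily_term = daily_amp * np.sin(2.0 * np.pi * hour / 24.0)
    return temperature_term + daily_term + (holiday_off if is_holiday else 0.0)

# Placeholder parameters; in practice each would be fitted to history,
# with separate models per weather station combined afterwards.
print(demand_forecast(temp_c=2.0, hour=18, is_holiday=False,
                      params=(50.0, -1.2, 0.05, 8.0, -10.0)))
```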
Abstract:
Episodic explosive volcanic eruptions are a natural part of the climate system but are often omitted from atmosphere-ocean general circulation model (AOGCM) preindustrial spin-up and control experiments. This omission imposes a negative bias on ocean heat uptake in simulations of the historical period. In models of a range of complexity, we find that global-mean sea level rise due to thermal expansion during the last ∼ 150 years is consequently underestimated by 5–30 mm, which is a substantial proportion of the model mean of 50 mm in Coupled Model Intercomparison Project Phase 3 AOGCMs with anthropogenic forcing only, and is therefore important in accounting for 20th century sea level rise. We test and recommend a procedure for removing the bias.