36 results for Offline programming
in CentAUR: Central Archive University of Reading - UK
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided. Copyright © 2010 Royal Meteorological Society and Crown Copyright.
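As a rough illustration of the kind of parameter-response screening described above (not the MOSCEM algorithm itself), the sketch below samples two hypothetical urban parameters, roof and road albedo, runs them through a fictitious flux model, and ranks their influence on the output; all names, ranges and the toy model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Two hypothetical urban parameters sampled over plausible ranges
roof_albedo = rng.uniform(0.05, 0.40, n)
road_albedo = rng.uniform(0.05, 0.40, n)

def toy_sensible_heat(roof_a, road_a, sw_down=600.0):
    # Fictitious surface model in which roofs carry most of the weight,
    # standing in for a single-layer urban canopy scheme
    return sw_down * 0.4 * (0.7 * (1.0 - roof_a) + 0.3 * (1.0 - road_a))

flux = toy_sensible_heat(roof_albedo, road_albedo)

# Rank the parameters by how strongly they move the modelled flux
for name, p in [("roof albedo", roof_albedo), ("road albedo", road_albedo)]:
    print(name, round(float(np.corrcoef(p, flux)[0, 1]), 2))
```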
Abstract:
This paper examines to what extent crops and their environment should be viewed as a coupled system. Crop impact assessments currently use climate model output offline to drive process-based crop models. However, in regions where local climate is sensitive to land surface conditions more consistent assessments may be produced with the crop model embedded within the land surface scheme of the climate model. Using a recently developed coupled crop–climate model, the sensitivity of local climate, in particular climate variability, to climatically forced variations in crop growth throughout the tropics is examined by comparing climates simulated with dynamic and prescribed seasonal growth of croplands. Interannual variations in land surface properties associated with variations in crop growth and development were found to have significant impacts on near-surface fluxes and climate; for example, growing season temperature variability was increased by up to 40% by the inclusion of dynamic crops. The impact was greatest in dry years where the response of crop growth to soil moisture deficits enhanced the associated warming via a reduction in evaporation. Parts of the Sahel, India, Brazil, and southern Africa were identified where local climate variability is sensitive to variations in crop growth, and where crop yield is sensitive to variations in surface temperature. Therefore, offline seasonal forecasting methodologies in these regions may underestimate crop yield variability. The inclusion of dynamic crops also altered the mean climate of the humid tropics, highlighting the importance of including dynamical vegetation within climate models.
Abstract:
The Joint UK Land Environment Simulator (JULES) was run offline to investigate the sensitivity of the land surface to changes in surface type over South Africa. Sensitivity tests were made in idealised experiments in which the actual land surface cover is replaced by a single homogeneous surface type. The vegetation surface types used in some of the experiments are static. Experimental tests were evaluated against the control. The model results show, among other things, that changing the surface cover leads to changes in other variables such as soil moisture, albedo and net radiation. These changes are also visible during spin-up, with different surfaces spinning up over different numbers of cycles. Because JULES is the land surface model of the Unified Model, the results could be more physically meaningful if it were coupled to the Unified Model.
Abstract:
The paper describes the implementation of an offline, low-cost Brain Computer Interface (BCI) alternative to more expensive commercial models. Using inexpensive general-purpose clinical EEG acquisition hardware (Truscan32, Deymed Diagnostic) as the base unit, a synchronisation module was constructed so that the EEG hardware could be operated precisely in time, allowing automatically time-stamped EEG signals to be recorded. The synchronisation module allows the EEG recordings to be aligned in a stimulus-time-locked fashion for further processing by the classifier to establish the class of the stimulus, sample by sample. This allows signals to be acquired from the subject’s brain for a goal-oriented BCI application based on the oddball paradigm. An appropriate graphical user interface (GUI) was constructed and implemented as the method to elicit the required responses (in this case Event-Related Potentials, or ERPs) from the subject.
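The stimulus-time-locked alignment described above amounts to cutting the continuous recording into epochs around each stimulus time stamp and averaging them per stimulus class; a minimal sketch of that step is given below, assuming a single-channel NumPy array and known stimulus sample indices (the sampling rate, window lengths and synthetic data are illustrative, not taken from the system described).

```python
import numpy as np

def epoch_and_average(eeg, stim_samples, fs=256, pre=0.2, post=0.8):
    """Cut stimulus-locked epochs from a continuous EEG channel and average them.

    eeg          : 1-D array of samples for one channel
    stim_samples : sample indices at which stimuli were presented
    fs           : sampling rate in Hz (illustrative value)
    pre, post    : epoch window in seconds before/after each stimulus
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for s in stim_samples:
        if s - n_pre >= 0 and s + n_post <= len(eeg):
            seg = eeg[s - n_pre : s + n_post]
            seg = seg - seg[:n_pre].mean()   # baseline-correct on the pre-stimulus interval
            epochs.append(seg)
    return np.mean(epochs, axis=0)           # averaged ERP for this stimulus class

# Synthetic example: a small evoked deflection added to noise
rng = np.random.default_rng(0)
fs = 256
eeg = rng.normal(0, 5, 60 * fs)
stims = np.arange(2 * fs, 58 * fs, 2 * fs)
for s in stims:
    eeg[s + int(0.3 * fs): s + int(0.4 * fs)] += 3.0   # crude "P300-like" bump
erp = epoch_and_average(eeg, stims, fs=fs)
```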
Abstract:
Recent literature has described a “transition zone” between the average top of deep convection in the Tropics and the stratosphere. Here transport across this zone is investigated using an offline trajectory model. Particles were advected by the resolved winds from the European Centre for Medium-Range Weather Forecasts reanalyses. For each boreal winter clusters of particles were released in the upper troposphere over the four main regions of tropical deep convection (Indonesia, central Pacific, South America, and Africa). Most particles remain in the troposphere, descending on average for every cluster. The horizontal components of 5-day trajectories are strongly influenced by the El Niño–Southern Oscillation (ENSO), but the Lagrangian average descent does not have a clear ENSO signature. Tropopause crossing locations are first identified by recording events when trajectories from the same release regions cross the World Meteorological Organization lapse rate tropopause. Most crossing events occur 5–15 days after release, and 30-day trajectories are sufficiently long to estimate crossing number densities. In a further two experiments slight excursions across the lapse rate tropopause are differentiated from the drift deeper into the stratosphere by defining the “tropopause zone” as a layer bounded by the average potential temperature of the lapse rate tropopause and the profile temperature minimum. Transport upward across this zone is studied using forward trajectories released from the lower bound and back trajectories arriving at the upper bound. Histograms of particle potential temperature (θ) show marked differences between the transition zone, where there is a slow spread in θ values about a peak that shifts slowly upward, and the troposphere below 350 K. There forward trajectories experience slow radiative cooling interspersed with bursts of convective heating resulting in a well-mixed distribution. In contrast θ histograms for back trajectories arriving in the stratosphere have two distinct peaks just above 300 and 350 K, indicating the sharp change from rapid convective heating in the well-mixed troposphere to slow ascent in the transition zone. Although trajectories slowly cross the tropopause zone throughout the Tropics, all three experiments show that most trajectories reaching the stratosphere from the lower troposphere within 30 days do so over the west Pacific warm pool. This preferred location moves about 30°–50° farther east in an El Niño year (1982/83) and about 30° farther west in a La Niña year (1988/89). These results could have important implications for upper-troposphere–lower-stratosphere pollution and chemistry studies.
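A minimal sketch of two ingredients named above, offline advection of particles by archived winds and diagnosis of potential temperature θ, is given below; the midpoint time stepping, the toy wind field and all parameter values are illustrative stand-ins for the reanalysis-driven trajectory model used in the study.

```python
import numpy as np

R_OVER_CP = 287.04 / 1004.0   # dry-air gas constant over specific heat

def theta(temp_k, pressure_hpa, p0=1000.0):
    """Potential temperature, used to diagnose position relative to isentropic levels."""
    return temp_k * (p0 / pressure_hpa) ** R_OVER_CP

def advect(positions, wind_fn, dt=1800.0, n_steps=48):
    """Advance particle positions with archived (offline) winds.

    positions : (n, 2) array of x, y in metres
    wind_fn   : callable returning (u, v) in m/s at given positions and time
    Uses a two-stage (midpoint) scheme; a real trajectory model would also
    interpolate winds in time and handle spherical geometry.
    """
    x = positions.astype(float).copy()
    for k in range(n_steps):
        t = k * dt
        u1, v1 = wind_fn(x, t)
        mid = x + 0.5 * dt * np.stack([u1, v1], axis=1)
        u2, v2 = wind_fn(mid, t + 0.5 * dt)
        x += dt * np.stack([u2, v2], axis=1)
    return x

# Toy wind field (solid-body rotation) standing in for reanalysis winds
def toy_wind(pos, t, omega=1e-5):
    return -omega * pos[:, 1], omega * pos[:, 0]

cluster = np.random.default_rng(1).normal(0, 1e5, size=(100, 2))
final = advect(cluster, toy_wind)
print(theta(200.0, 100.0))   # theta of a 200 K parcel at 100 hPa, roughly 385 K
```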
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project. 1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required. 2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed to identify weed species by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition and in particular Bayesian Networks will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms. 
Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish weed species of interest. 3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally-beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines. Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
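A minimal sketch of the image-capture side of the concept is given below: segmenting green vegetation from the soil background with a simple colour index and attaching a GPS fix to each frame. The excess-green threshold, coordinates and data are illustrative placeholders, not the project's Bayesian-network species classifier.

```python
import numpy as np

def excess_green(rgb):
    """Excess-green index (2G - R - B), a common first step for separating
    green vegetation from soil in RGB field images."""
    r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
    return 2 * g - r - b

def vegetation_mask(rgb, threshold=0.1):
    return excess_green(rgb) > threshold

def georeference(image_records):
    """Attach a GPS fix to the vegetation fraction of each captured frame.

    image_records : iterable of (rgb_array, (lat, lon)) tuples
    Returns (lat, lon, vegetation_fraction) points that could feed a weed map
    once a species classifier has been applied to the segmented regions.
    """
    points = []
    for rgb, (lat, lon) in image_records:
        frac = vegetation_mask(rgb).mean()
        points.append((lat, lon, frac))
    return points

# Synthetic example: one mostly-soil frame, one mostly-vegetation frame
soil = np.full((4, 4, 3), [120, 90, 60], dtype=np.uint8)
plants = np.full((4, 4, 3), [40, 160, 40], dtype=np.uint8)
print(georeference([(soil, (51.44, -0.94)), (plants, (51.44, -0.93))]))
```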
Abstract:
Moist convection is well known to be generally more intense over continental than maritime regions, with larger updraft velocities, graupel, and lightning production. This study explores the transition from maritime to continental convection by comparing the trends in Tropical Rainfall Measuring Mission (TRMM) radar and microwave (37 and 85 GHz) observations over islands of increasing size to those simulated by a cloud-resolving model. The observed storms were essentially maritime over islands of <100 km2 and continental over islands >10 000 km2, with a gradual transition in between. Equivalent radar and microwave quantities were simulated from cloud-resolving runs of the Weather Research and Forecasting model via offline radiation codes. The model configuration was idealized, with islands represented by regions of uniform surface heat flux without orography, using a range of initial sounding conditions without strong horizontal winds or aerosols. Simulated storm strength varied with initial sounding, as expected, but also increased sharply with island size in a manner similar to observations. Stronger simulated storms were associated with higher concentrations of large hydrometeors. Although biases varied with different ice microphysical schemes, the trend was similar for all three schemes tested and was also seen in 2D and 3D model configurations. The successful reproduction of the trend with such idealized forcing supports previous suggestions that mesoscale variation in surface heating—rather than any difference in humidity, aerosol, or other aspects of the atmospheric state—is the main reason that convection is more intense over continents and large islands than over oceans. Some dynamical storm aspects, notably the peak rainfall and minimum surface pressure low, were more sensitive to surface forcing than to the atmospheric sounding or ice scheme. Large hydrometeor concentrations and simulated microwave and radar signatures, however, were at least as sensitive to initial humidity levels as to surface forcing and were more sensitive to the ice scheme. Issues with running the TRMM simulator on 2D simulations are discussed, but they appear to be less serious than sensitivities to model microphysics, which were similar in 2D and 3D. This supports the further use of 2D simulations to economically explore modeling uncertainties.
Abstract:
Purpose - The role of affective states in consumer behaviour is well established. However, no study to date has empirically examined online affective states as a basis for constructing typologies of internet users and for assessing the invariance of clusters across national cultures. Design/methodology/approach - Four focus groups with internet users were carried out to adapt a set of affective states identified from the literature to the online environment. An online survey was then designed to collect data from internet users in four Western and four East Asian countries. Findings - Based on a cluster analysis, six cross-national market segments are identified and labelled "Positive Online Affectivists", "Offline Affectivists", "On/Off-line Negative Affectivists", "Online Affectivists", "Indistinguishable Affectivists", and "Negative Offline Affectivists". The resulting clusters discriminate on the basis of national culture, gender, working status and perceptions towards online brands. Practical implications - Marketers may use this typology to segment internet users in order to predict their perceptions towards online brands. Also, a standardised approach to e-marketing is not recommended on the basis of affective state-based segmentation. Originality/value - This is the first study proposing affective state-based typologies of internet users using comparable samples from four Western and four East Asian countries.
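The segmentation described above rests on a cluster analysis of affective-state ratings; the sketch below shows the general idea with k-means on synthetic survey-style scores (the choice of k-means, the variables and all values are illustrative assumptions, since the study's exact clustering procedure is not detailed here).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Illustrative data: respondents rating online and offline affective states (1-7 scales)
online_affect = rng.integers(1, 8, size=(500, 5))
offline_affect = rng.integers(1, 8, size=(500, 5))
X = np.hstack([online_affect, offline_affect]).astype(float)

# Six clusters, mirroring the six segments reported in the study
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Profile each segment by its mean online vs offline affect to aid labelling
for c in range(6):
    seg = X[labels == c]
    print(c, seg[:, :5].mean().round(2), seg[:, 5:].mean().round(2), len(seg))
```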
Abstract:
This paper approaches the subject of brand equity measurement on and offline. The existing body of research knowledge on brand equity measurement has derived from classical contexts; however, the majority of today's brands prosper simultaneously online and offline. Since branding on the Web needs to address the unique characteristics of computer-mediated environments, it was posited that classical measures of brand equity were inadequate for this category of brands. Aaker's guidelines for building a brand equity measurement system were thus followed and his brand equity ten was employed as a point of departure. The main challenge was complementing traditional measures of brand equity with new measures pertinent to the Web. Following 16 semi-structured interviews with experts, ten additional measures were identified.
Abstract:
Cannabis sativa has been associated with contradictory effects upon seizure states despite its medicinal use by numerous people with epilepsy. We have recently shown that the phytocannabinoid cannabidiol (CBD) reduces seizure severity and lethality in the well-established in vivo model of pentylenetetrazole-induced generalised seizures, suggesting that earlier, small-scale clinical trials examining CBD effects in people with epilepsy warrant renewed attention. Here, we report the effects of pure CBD (1, 10 and 100 mg/kg) in two other established rodent seizure models, the acute pilocarpine model of temporal lobe seizure and the penicillin model of partial seizure. Seizure activity was video recorded and scored offline using model-specific seizure severity scales. In the pilocarpine model CBD (all doses) significantly reduced the percentage of animals experiencing the most severe seizures. In the penicillin model, CBD (≥ 10 mg/kg) significantly decreased the percentage mortality as a result of seizures; CBD (all doses) also decreased the percentage of animals experiencing the most severe tonic–clonic seizures. These results extend the anticonvulsant profile of CBD; when combined with a reported absence of psychoactive effects, this evidence strongly supports CBD as a therapeutic candidate for a diverse range of human epilepsies.
Abstract:
A situation assessment uses reports from sensors to produce hypotheses about a situation at a level of aggregation that is of direct interest to a military commander. A low level of aggregation could mean forming tracks from reports, which is well documented in the tracking literature as track initiation and data association. In this paper there is also discussion on higher level aggregation; assessing the membership of tracks to larger groups. Ideas used in joint tracking and identification are extended, using multi-entity Bayesian networks to model a number of static variables, of which the identity of a target is one. For higher level aggregation a scheme for hypothesis management is required. It is shown how an offline clustering of vehicles can be reduced to an assignment problem.
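Reducing the grouping of tracks to an assignment problem means building a cost matrix between tracks and candidate groups and solving it exactly; a minimal sketch using SciPy's Hungarian-algorithm solver is given below, with a simple distance-based cost standing in for the paper's Bayesian-network scores.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative costs: how poorly each track fits each candidate group
track_positions = np.array([[0.0, 0.0], [10.0, 1.0], [10.5, 0.5], [0.5, 0.2]])
group_centres = np.array([[0.2, 0.1], [10.2, 0.8]])

# Cost of assigning track i to group j; here squared distance as a stand-in
cost = ((track_positions[:, None, :] - group_centres[None, :, :]) ** 2).sum(-1)

# The rectangular assignment gives the minimum-cost one-to-one pairing;
# duplicating group columns (or thresholding costs) allows several tracks per group.
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())
```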
Abstract:
The concentrations of dissolved noble gases in water are widely used as a climate proxy to determine noble gas temperatures (NGTs); i.e., the temperature of the water when gas exchange last occurred. In this paper we make a step forward to apply this principle to fluid inclusions in stalagmites in order to reconstruct the cave temperature prevailing at the time when the inclusion was formed. We present an analytical protocol that allows us accurately to determine noble gas concentrations and isotope ratios in stalagmites, and which includes a precise manometrical determination of the mass of water liberated from fluid inclusions. Most important for NGT determination is to reduce the amount of noble gases liberated from air inclusions, as they mask the temperature-dependent noble gas signal from the water inclusions. We demonstrate that offline pre-crushing in air to subsequently extract noble gases and water from the samples by heating is appropriate to separate gases released from air and water inclusions. Although a large fraction of recent samples analysed by this technique yields NGTs close to present-day cave temperatures, the interpretation of measured noble gas concentrations in terms of NGTs is not yet feasible using the available least squares fitting models. This is because the noble gas concentrations in stalagmites are not only composed of the two components air and air saturated water (ASW), which these models are able to account for. The observed enrichments in heavy noble gases are interpreted as being due to adsorption during sample preparation in air, whereas the excess in He and Ne is interpreted as an additional noble gas component that is bound in voids in the crystallographic structure of the calcite crystals. As a consequence of our study's findings, NGTs will have to be determined in the future using the concentrations of Ar, Kr and Xe only. This needs to be achieved by further optimizing the sample preparation to minimize atmospheric contamination and to further reduce the amount of noble gases released from air inclusions.
Abstract:
In the stratosphere, chemical tracers are drawn systematically from the equator to the pole. This observed Brewer–Dobson circulation is driven by wave drag, which in the stratosphere arises mainly from the breaking and dissipation of planetary-scale Rossby waves. While the overall sense of the circulation follows from fundamental physical principles, a quantitative theoretical understanding of the connection between wave drag and Lagrangian transport is limited to linear, small-amplitude waves. However, planetary waves in the stratosphere generally grow to a large amplitude and break in a strongly nonlinear fashion. This paper addresses the connection between stratospheric wave drag and Lagrangian transport in the presence of strong nonlinearity, using a mechanistic three-dimensional primitive equations model together with offline particle advection. Attention is deliberately focused on a weak forcing regime, such that sudden warmings do not occur and a quasi-steady state is reached, in order to examine this question in the cleanest possible context. Wave drag is directly linked to the transformed Eulerian mean (TEM) circulation, which is often used as a surrogate for mean Lagrangian motion. The results show that the correspondence between the TEM and mean Lagrangian velocities is quantitatively excellent in regions of linear, nonbreaking waves (i.e., outside the surf zone), where streamlines are not closed. Within the surf zone, where streamlines are closed and meridional particle displacements are large, the agreement between the vertical components of the two velocity fields is still remarkably good, especially wherever particle paths are coherent so that diabatic dispersion is minimized. However, in this region the meridional mean Lagrangian velocity bears little relation to the meridional TEM velocity, and reflects more the kinematics of mixing within and across the edges of the surf zone. The results from the mechanistic model are compared with those from the Canadian Middle Atmosphere Model to test the robustness of the conclusions.
Abstract:
Crystallization must occur in honey in order to produce set or creamed honey; however, the process must occur in a controlled manner in order to obtain an acceptable product. As a consequence, reliable methods are needed to measure the crystal content of honey (φ, expressed as kg crystal per kg honey), which can also be implemented with relative ease in industrial production facilities. Unfortunately, suitable methods do not currently exist. This article reports on the development of 2 independent offline methods to measure the crystal content in honey based on differential scanning calorimetry and high-performance liquid chromatography. The 2 methods gave highly consistent results on the basis of a paired t-test involving 143 experimental points (P > 0.05, r² = 0.99). The crystal content also correlated with the relative viscosity, defined as the ratio of the viscosity of crystal-containing honey to that of the same honey when all crystals are dissolved, giving the following correlation: μr = 1 + 1398.8 φ^2.318. This correlation can be used to estimate the crystal content of honey in industrial production facilities. The crystal growth rate at a temperature of 14 °C (the normal crystallization temperature used in practice) was linear, and the growth rate also increased with the total glucose content in the honey.
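The reported correlation can be inverted to estimate crystal content from a viscosity measurement; a short sketch is given below, assuming μr = 1 + 1398.8 φ^2.318 holds over the range of interest (function and variable names are illustrative).

```python
def crystal_content_from_relative_viscosity(mu_r, a=1398.8, n=2.318):
    """Invert mu_r = 1 + a * phi**n to estimate the crystal mass fraction phi
    (kg crystal per kg honey) from the measured relative viscosity."""
    if mu_r <= 1.0:
        return 0.0
    return ((mu_r - 1.0) / a) ** (1.0 / n)

# Example: a honey whose viscosity is 15 times that of the fully dissolved honey
print(crystal_content_from_relative_viscosity(15.0))   # roughly 0.14 kg/kg
```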
Abstract:
The use of Bayesian inference for time-frequency representations has, thus far, been limited to offline analysis of signals, using a smoothing-spline-based model of the time-frequency plane. In this paper we introduce a new framework that allows the routine use of Bayesian inference for online estimation of the time-varying spectral density of a locally stationary Gaussian process. The core of our approach is the use of a likelihood inspired by a local Whittle approximation. This choice, along with the use of a recursive algorithm for non-parametric estimation of the local spectral density, permits the use of a particle filter for estimating the time-varying spectral density online. We provide demonstrations of the algorithm through tracking chirps and the analysis of musical data.
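A minimal sketch of the underlying idea is given below: under a Whittle-type approximation each periodogram ordinate is approximately exponentially distributed with mean equal to the local spectral density, so particles carrying a log-spectral level can be propagated by a random walk and reweighted by that likelihood. This single-frequency bootstrap filter is a simplified stand-in, not the authors' recursive non-parametric estimator.

```python
import numpy as np

def particle_filter_spectrum(periodogram_track, n_particles=500, step=0.1, seed=0):
    """Track a time-varying spectral density at one frequency.

    periodogram_track : sequence of periodogram ordinates I_t at that frequency
    Particles carry log S_t; the transition is a Gaussian random walk, and the
    weight uses the Whittle-style likelihood I_t | S_t ~ Exponential(mean S_t).
    """
    rng = np.random.default_rng(seed)
    log_s = rng.normal(np.log(periodogram_track[0] + 1e-12), 1.0, n_particles)
    estimates = []
    for I_t in periodogram_track:
        log_s = log_s + rng.normal(0.0, step, n_particles)      # propagate
        s = np.exp(log_s)
        logw = -np.log(s) - I_t / s                              # exponential log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * s))                          # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)          # multinomial resampling
        log_s = log_s[idx]
    return np.array(estimates)

# Synthetic example: spectral level ramping up, observed through noisy periodograms
rng = np.random.default_rng(1)
true_s = np.linspace(1.0, 5.0, 200)
I = rng.exponential(true_s)
est = particle_filter_spectrum(I)
```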