37 results for Paper -- Industry and commerce
in CentAUR: Central Archive University of Reading - UK
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
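As a rough illustration of the two parametrizations above, the sketch below encodes the linear latitude dependence of the decorrelation scale (2.9 km at the Equator to 0.4 km at the poles) and a symmetric two-point Tripleclouds split whose spread reproduces a fractional standard deviation of 0.75. The function names and the two-point form of the split are illustrative assumptions, not the scheme's actual implementation.

```python
import numpy as np

def decorrelation_scale_km(lat_deg):
    """Decorrelation scale varying linearly with absolute latitude:
    2.9 km at the Equator down to 0.4 km at the poles."""
    return 2.9 + (0.4 - 2.9) * np.abs(lat_deg) / 90.0

def tripleclouds_split(mean_wc, fsd=0.75):
    """Split a layer's mean water content into two equally weighted
    cloudy regions, w*(1 - fsd) and w*(1 + fsd); this symmetric
    two-point distribution has mean w and fractional standard
    deviation fsd (illustrative, not the published implementation)."""
    return mean_wc * (1.0 - fsd), mean_wc * (1.0 + fsd)

print(decorrelation_scale_km(0.0))    # 2.9 km at the Equator
print(decorrelation_scale_km(90.0))   # 0.4 km at the poles
print(tripleclouds_split(0.2))        # two water contents around the mean
```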
Abstract:
Temporal and spatial patterns of soil water content affect many soil processes including evaporation, infiltration, ground water recharge, erosion and vegetation distribution. This paper describes the analysis of a soil moisture dataset comprising a combination of continuous time series of measurements at a few depths and locations, and occasional roving measurements at a large number of depths and locations. The objectives of the paper are: (i) to develop a technique for combining continuous measurements of soil water contents at a limited number of depths within a soil profile with occasional measurements at a large number of depths, to enable accurate estimation of the soil moisture vertical pattern and the integrated profile water content; and (ii) to estimate time series of soil moisture content at locations where there are just occasional soil water measurements available and some continuous records from nearby locations. The vertical interpolation technique presented here can strongly reduce errors in the estimation of profile soil water and its changes with time. On the other hand, the temporal interpolation technique is tested for different sampling strategies in space and time, and the errors generated in each case are compared.
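One minimal way to picture objective (i) is to rescale an occasional dense profile so that it agrees with the continuous sensors, then integrate over depth. The sketch below (linear interpolation, a ratio correction, and a trapezoidal integral) is an illustrative stand-in for the paper's technique; all depths and water contents are invented.

```python
import numpy as np

# Continuous sensors at a few depths (m) and an occasional dense profile.
sensor_depths = np.array([0.1, 0.3, 0.6])
sensor_theta = np.array([0.24, 0.28, 0.31])    # volumetric water content

profile_depths = np.linspace(0.05, 1.0, 20)
profile_theta = 0.2 + 0.12 * profile_depths    # roving survey values

# Rescale the dense profile to match the continuous sensors on average.
at_sensors = np.interp(sensor_depths, profile_depths, profile_theta)
adjusted = profile_theta * np.mean(sensor_theta / at_sensors)

# Integrated profile water content (m of water) by the trapezoidal rule.
water = np.sum(0.5 * (adjusted[1:] + adjusted[:-1]) * np.diff(profile_depths))
print(f"profile water content = {water:.3f} m")
```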
Abstract:
This paper describes the development and validation of a novel web-based interface for gathering feedback from building occupants about their environmental discomfort, including signs of Sick Building Syndrome (SBS). Gathering such feedback may enable better targeting of environmental discomfort, down to the individual occupant, as well as the early detection, and subsequent resolution by building services, of more complex issues such as SBS. The occupant's discomfort is interpreted and converted to air-conditioning system set points using fuzzy logic. Experimental results from a multi-zone air-conditioning test rig are included in this paper.
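As a toy version of the fuzzy-logic step, the sketch below maps a single occupant comfort vote to a set-point offset using three triangular membership functions and a weighted average of rule consequents. The vote scale, memberships, and offsets are all invented for illustration; the paper's actual rule base is not reproduced here.

```python
def fuzzy_setpoint_offset(vote):
    """Map a comfort vote in [-3, +3] (-3 = much too cold, +3 = much
    too hot) to an air-conditioning set-point offset in degrees C,
    using triangular memberships and a weighted-average defuzzifier.
    Illustrative only; not the paper's rule base."""
    def tri(x, a, b, c):
        # Triangular membership: 0 outside (a, c), peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    cold = tri(vote, -3.5, -3.0, 0.0)
    ok = tri(vote, -2.0, 0.0, 2.0)
    hot = tri(vote, 0.0, 3.0, 3.5)

    # Rule consequents: raise the set point if cold, lower it if hot.
    rules = [(cold, +1.5), (ok, 0.0), (hot, -1.5)]
    total = sum(mu for mu, _ in rules)
    return sum(mu * off for mu, off in rules) / total if total else 0.0

print(fuzzy_setpoint_offset(2.0))   # occupant is warm -> lower set point
```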
Abstract:
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed.
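A skeleton of how the four modules might be chained per frame is sketched below; the class and method names are hypothetical and the bodies are placeholders, since the abstract does not specify the algorithms' internals.

```python
class SurveillancePipeline:
    """Hypothetical skeleton of the four-module chain: motion detection
    -> per-camera tracking -> recognition -> multi-camera fusion."""

    def detect_motion(self, frame):
        """Layered background model: return foreground regions."""
        raise NotImplementedError

    def track(self, regions):
        """Associate foreground regions with tracks by local appearance."""
        raise NotImplementedError

    def recognise(self, tracks):
        """Hierarchical object recognition (e.g. person vs. group)."""
        raise NotImplementedError

    def fuse(self, per_camera_tracks):
        """Fuse tracks across cameras using multiple features and
        geometric (e.g. ground-plane) constraints."""
        raise NotImplementedError

    def process(self, frames_by_camera):
        """One surveillance cycle over synchronized frames."""
        per_camera = [self.recognise(self.track(self.detect_motion(f)))
                      for f in frames_by_camera]
        return self.fuse(per_camera)
```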
Abstract:
There are three trivial misprints in our paper.
Abstract:
Matheron's usual variogram estimator can result in unreliable variograms when data are strongly asymmetric or skewed. Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population that contaminates the primary process. This paper examines the effects of underlying asymmetry on the variogram and on the accuracy of prediction; a companion paper examines the effects arising from outliers. Standard geostatistical texts suggest ways of dealing with underlying asymmetry; however, these are based on informed intuition rather than detailed investigation. To determine whether the methods generally used to deal with underlying asymmetry are appropriate, the effects of different coefficients of skewness on the shape of the experimental variogram and on the model parameters were investigated. Simulated annealing was used to create normally distributed random fields of different sizes from variograms with different nugget:sill ratios. These data were then modified to give different degrees of asymmetry, and the experimental variogram was computed in each case. The effects of standard data transformations on the form of the variogram were also investigated. Cross-validation was used to assess quantitatively the performance of the different variogram models for kriging. The results showed that the shape of the variogram was affected by the degree of asymmetry, and that the effect increased as the size of the data set decreased. Transformations of the data were more effective in reducing the skewness coefficient in the larger data sets. Cross-validation confirmed that variogram models from transformed data were more suitable for kriging than those from the raw asymmetric data. The results of this study have implications for 'standard best practice' in dealing with asymmetry in data for geostatistical analyses.
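Matheron's method-of-moments estimator referred to above is easy to state; a minimal 1-D sketch for regularly spaced data, together with a log transformation of a skewed field, is shown below (the gamma-distributed test data are invented).

```python
import numpy as np

def matheron_variogram(z, max_lag):
    """Matheron's method-of-moments estimator for a regularly spaced
    1-D transect: gamma(h) = sum((z_i - z_{i+h})^2) / (2 N(h))."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
z = rng.gamma(shape=1.0, scale=1.0, size=500)   # strongly skewed data
print(matheron_variogram(z, 5))                  # raw, skew-affected
print(matheron_variogram(np.log(z), 5))          # after a log transform
```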
Abstract:
In this paper we propose that physically based equations should be combined with remote sensing techniques to enable a more theoretically rigorous estimation of area-average soil heat flux, G. A standard physical equation (i.e. the analytical or exact method) for the estimation of G, in combination with a simple, but theoretically derived, equation for soil thermal inertia (Γ), provides the basis for a more transparent and readily interpretable method for the estimation of G, without the requirement for in situ instrumentation. Moreover, such an approach ensures a more universally applicable method than those derived from purely empirical studies (employing vegetation indices and albedo, for example). Hence, a new equation for the estimation of Γ (for homogeneous soils) is discussed in this paper, which requires only knowledge of soil type, readily obtainable from extant soil databases and surveys, in combination with a coarse estimate of moisture status. This approach can be used to obtain area-averaged estimates of Γ (and thus G, as explained in Paper II), which is important for large-scale energy balance studies that employ aircraft or satellite data. Furthermore, this method also relaxes the instrumental demand for studies at the plot and field scale (no requirement for in situ soil temperature sensors, soil heat flux plates and/or thermal conductivity sensors). In addition, this equation can be incorporated in soil-vegetation-atmosphere-transfer models that use the force-restore method to update surface temperatures (such as the well-known ISBA model), to replace the thermal inertia coefficient.
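A hedged sketch of the two ingredients named above: thermal inertia from conductivity and heat capacity, and the analytical (exact) harmonic method for G given a harmonic fit to surface temperature. The soil properties and the single-harmonic temperature wave are invented values for illustration, not the paper's new Γ equation.

```python
import numpy as np

def thermal_inertia(conductivity, heat_capacity):
    """Thermal inertia Gamma = sqrt(lambda * C), in J m^-2 K^-1 s^-0.5,
    from thermal conductivity lambda (W m^-1 K^-1) and volumetric
    heat capacity C (J m^-3 K^-1)."""
    return np.sqrt(conductivity * heat_capacity)

def soil_heat_flux(t, gamma, amplitudes, phases, period=86400.0):
    """Analytical ('exact') harmonic method: if the surface temperature
    is T(t) = T_mean + sum_n A_n sin(n w t + phi_n), then
    G(t) = Gamma * sum_n A_n sqrt(n w) sin(n w t + phi_n + pi/4)."""
    w = 2.0 * np.pi / period
    n = np.arange(1, len(amplitudes) + 1)
    phase = np.outer(t, n * w) + phases + np.pi / 4.0
    return gamma * np.sum(amplitudes * np.sqrt(n * w) * np.sin(phase), axis=1)

gamma = thermal_inertia(1.0, 2.0e6)       # roughly a moist loam (invented)
t = np.linspace(0.0, 86400.0, 97)         # one day at 15-min steps
G = soil_heat_flux(t, gamma, np.array([10.0]), np.array([0.0]))
print(f"peak surface heat flux = {G.max():.0f} W m^-2")
```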
Abstract:
Climate model simulations consistently show that surface temperature over land increases more rapidly than over sea in response to greenhouse gas forcing. The enhanced warming over land is not simply a transient effect caused by the land–sea contrast in heat capacities, since it is also present in equilibrium conditions. This paper elucidates the transient adjustment processes over time scales of days to weeks of the surface and tropospheric climate in response to a doubling of CO2 and to changes in sea surface temperature (SST), imposed separately and together, using ensembles of experiments with an atmospheric general circulation model. These adjustment processes can be grouped into three stages: immediate response of the troposphere and surface processes (day 1), fast adjustment of surface processes (days 2–5), and adjustment of the whole troposphere (days 6–20). Some land surface warming in response to doubled CO2 (with unchanged SSTs) occurs immediately because of increased downward longwave radiation. Increased CO2 also leads to reduced plant stomatal resistance and hence restricted evaporation, which increases land surface warming in the first day. Rapid reductions in cloud amount lead in the next few days to increased downward shortwave radiation and further warming, which spreads upward from the surface, and by day 5 the surface and tropospheric response is statistically consistent with the equilibrium value. Land surface warming in response to imposed SST change (with unchanged CO2) is slower. Tropospheric warming is advected inland from the sea, and over land it occurs at all levels together rather than spreading upward from the surface. The atmospheric response to prescribed SST change in about 20 days is statistically consistent with the equilibrium value, and the warming is largest in the upper troposphere over both land and sea. The land surface warming involves reduction of cloud cover and increased downward shortwave radiation, as in the experiment with CO2 change, but in this case it is due to the restriction of moisture supply to the land (indicated by reduced soil moisture), whereas in the CO2 forcing experiment it is due to restricted evaporation despite increased moisture supply (indicated by increased soil moisture). The warming over land in response to SST change is greater than over the sea and is the dominant contribution to the land–sea warming contrast under enhanced CO2 forcing.
Abstract:
The lowest-wavenumber vibration of HCNO and DCNO, ν5, is known to involve a large-amplitude, low-frequency anharmonic bending of the CH bond against the CNO frame. In this paper the anomalous vibrational dependence of the observed rotational constants B(v5, l5), and of the observed l-doubling interactions, is interpreted according to a simple effective vibration-rotation Hamiltonian in which the appropriate vibrational operators are averaged in an anharmonic potential surface over the normal coordinates (Q5x, Q5y). All of the data on both isotopes are interpreted according to a single potential surface having a minimum energy at a slightly bent configuration of the HCN angle (approximately 170°), with a maximum at the linear configuration about 2 cm−1 higher. The other coefficients in the Hamiltonian are also interpreted in terms of the structure and the harmonic and anharmonic force fields; the substitution structure at the “hypothetical linear configuration” determined in this way gives a CH bond length of 1.060 Å, in contrast to the value 1.027 Å determined from the ground-state rotational constants. We also discuss the difficulties in rationalizing our effective Hamiltonian in terms of more fundamental theory, as well as the successes and limitations of its use in practice.
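A quartic double-minimum ('champagne-bottle') potential in the degenerate bending coordinates is one standard way to picture such a quasilinear surface; the hedged sketch below encodes only the two numbers quoted in the abstract (a barrier of about 2 cm−1 at linearity and a bent minimum near an HCN angle of 170°), not the fitted surface itself.

```latex
% With \rho^2 = Q_{5x}^2 + Q_{5y}^2 the bending displacement from
% linearity, a quartic surface with a hump at \rho = 0 is
V(\rho) = \frac{h}{\rho_e^{4}}\left(\rho^{2} - \rho_e^{2}\right)^{2},
\qquad h \approx 2\ \mathrm{cm}^{-1},
% which has its maximum V = h at the linear configuration and its
% minima V = 0 on the ring \rho = \rho_e (HCN angle near 170 degrees).
```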
Abstract:
This paper reviews Bayesian procedures for phase 1 dose-escalation studies and compares different dose schedules and cohort sizes. The methodology described is motivated by the situation of phase 1 dose-escalation studies in oncology, that is, a single dose administered to each patient, with a single binary response ("toxicity" or "no toxicity") observed. It is likely that a wider range of applications of the methodology is possible. In this paper, results from 10,000-fold simulation runs conducted using the software package Bayesian ADEPT are presented. Four designs were compared under six scenarios. The simulation results indicate that there are slight advantages in having more dose levels and smaller cohort sizes.
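To give a feel for this kind of simulation study, the sketch below runs 10,000 simulations of a simple rule-based escalation (escalate after a toxicity-free cohort, stop at the first toxicity); it is a deliberately crude stand-in, not the Bayesian ADEPT procedure, and the toxicity probabilities are invented.

```python
import numpy as np

def simulate_escalation(n_levels, tox_prob, cohort_size=3, n_sims=10000, seed=0):
    """Toy rule-based escalation: give each cohort the current dose,
    escalate after a toxicity-free cohort, and stop at the first
    cohort with any toxicity (or at the top dose). Returns the
    distribution of the final dose level across simulations."""
    rng = np.random.default_rng(seed)
    final = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        level = 0
        while True:
            if (rng.random(cohort_size) < tox_prob[level]).any():
                break
            if level == n_levels - 1:
                break
            level += 1
        final[s] = level
    return np.bincount(final, minlength=n_levels) / n_sims

tox = np.array([0.05, 0.10, 0.25, 0.50])    # invented toxicity curve
print(simulate_escalation(len(tox), tox))   # stopping-dose distribution
```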
Abstract:
Bayesian decision procedures have already been proposed for and implemented in Phase I dose-escalation studies in healthy volunteers. The procedures have been based on pharmacokinetic responses reflecting the concentration of the drug in blood plasma and are conducted to learn about the dose-response relationship while avoiding excessive concentrations. However, in many dose-escalation studies, pharmacodynamic endpoints such as heart rate or blood pressure are observed, and it is these that should be used to control dose-escalation. These endpoints introduce additional complexity into the modeling of the problem relative to pharmacokinetic responses. Firstly, there are responses available following placebo administrations. Secondly, the pharmacodynamic responses are related directly to measurable plasma concentrations, which in turn are related to dose. Motivated by experience of data from a real study conducted in a conventional manner, this paper presents and evaluates a Bayesian procedure devised for the simultaneous monitoring of pharmacodynamic and pharmacokinetic responses. Account is also taken of the incidence of adverse events. Following logarithmic transformations, a linear model is used to relate dose to the pharmacokinetic endpoint and a quadratic model to relate the latter to the pharmacodynamic endpoint. A logistic model is used to relate the pharmacokinetic endpoint to the risk of an adverse event.
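The three-model chain described above can be written out directly: a linear dose-PK model on the log scale, a quadratic PK-PD model, and a logistic PK-adverse-event model. All coefficients below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

def log_pk(log_dose, a0=-1.0, a1=1.0):
    """Linear model on the log scale relating dose to the
    pharmacokinetic endpoint (e.g. log Cmax)."""
    return a0 + a1 * log_dose

def log_pd(log_pk_value, b0=0.5, b1=0.8, b2=-0.05):
    """Quadratic model relating the log-PK endpoint to the
    log-pharmacodynamic endpoint (e.g. heart rate change)."""
    return b0 + b1 * log_pk_value + b2 * log_pk_value ** 2

def adverse_event_prob(log_pk_value, c0=-4.0, c1=1.2):
    """Logistic model relating the PK endpoint to the risk of an
    adverse event."""
    return 1.0 / (1.0 + np.exp(-(c0 + c1 * log_pk_value)))

lp = log_pk(np.log(50.0))                        # a hypothetical 50 mg dose
print(np.exp(log_pd(lp)), adverse_event_prob(lp))
```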
Abstract:
This paper uses the social model of disability to examine visually impaired children's experiences of their housing and neighbourhoods, and finds that the children did not experience any significant problems with the design of these environments. Rather, the source of their problems lay in conditions within these environments, such as the intensity of movement from flows of traffic. We conclude by discussing the social policy implications of these findings.
Abstract:
The perceived wisdom about thin sheet fracture is that (i) the crack propagates under mixed mode I and III, giving rise to a slant through-thickness fracture profile, and (ii) the fracture toughness remains constant at low thickness and eventually decreases with increasing thickness. In the present study, fracture tests performed on thin DENT plates of various thicknesses made of stainless steel, mild steel, 6082-O and NS4 aluminium alloys, brass, bronze, lead, and zinc systematically exhibit (i) mode I “bath-tub” (i.e. “cup & cup”) fracture profiles with limited shear lips and significant localized necking (more than 50% thickness reduction), and (ii) a fracture toughness that increases linearly with increasing thickness (in the range 0.5–5 mm). The different contributions to the work expended during fracture of these materials are separated on dimensional grounds. The paper emphasises the two parts of the work spent in the fracture process zone: the necking work and the “fracture” work. Experiments show that, as expected, the work of necking per unit area increases linearly with thickness. For a typical thickness of 1 mm, the fracture and necking contributions have the same order of magnitude in most of the metals investigated. A model is developed to evaluate the work of necking independently, and it successfully predicts the experimental values. Furthermore, it enables the fracture energy to be derived from tests performed with only one specimen thickness. In a second modelling step, the work of fracture is computed using an enhanced void growth model valid in the quasi-plane-stress regime. The fracture energy varies linearly with the yield stress and void spacing and is a strong function of the hardening exponent and initial void volume fraction. The coupling of the two models allows the relative contributions of necking versus fracture to be quantified with respect to (i) the two length scales involved in this problem, i.e. the void spacing and the plate thickness, and (ii) the flow properties of the material. Either term can dominate depending on the properties of the material, which explains the different behaviours reported in the literature concerning thin plate fracture toughness and its dependence on thickness.
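Schematically, the dimensional separation described above amounts to splitting the total work per unit crack area into a thickness-independent fracture term and a necking term that grows linearly with sheet thickness; a hedged shorthand (with w_neck an effective energy per unit volume set by the flow properties) is:

```latex
% Total work per unit area of through-crack, for sheet thickness t:
\Gamma_{\mathrm{total}}(t) \;=\; \Gamma_{\mathrm{fracture}} \;+\; w_{\mathrm{neck}}\, t ,
% so a linear fit of toughness against thickness separates the two
% contributions, consistent with the linear trend reported above.
```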
Abstract:
Several studies have highlighted the importance of the cooling period for oil absorption in deep-fat fried products. Specifically, it has been established that the largest proportion of the oil that ends up in the food is sucked into the porous crust region after the fried product is removed from the oil bath, stressing the importance of this time interval. The main objective of this paper was to develop a predictive mechanistic model that can be used to understand the principles behind post-frying cooling oil absorption kinetics, and that can also help identify the key parameters affecting the final oil uptake of the fried product. The model was developed for two different geometries, an infinite slab and an infinite cylinder, and was divided into two main sub-models, one describing the immersion frying period itself and the other describing the post-frying cooling period. The immersion frying period was described by a transient moving-front model that considered the movement of the crust/core interface, whereas post-frying cooling oil absorption was treated as a pressure-driven flow mediated by capillary forces. A key element of the model was the hypothesis that oil suction begins only once a positive pressure driving force has developed. The mechanistic model was based on measurable physical and thermal properties and process parameters, with no need for empirical data fitting, and can be used to study oil absorption in any deep-fat fried product that satisfies the assumptions made.
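The capillarity-mediated uptake during cooling can be pictured with a generic Lucas-Washburn imbibition law; the sketch below ignores the paper's key positive-pressure trigger and uses invented pore and oil properties, so it is only a shorthand for a pressure-driven capillary flow, not the paper's moving-front model.

```python
import numpy as np

def washburn_depth(t, pore_radius, surface_tension, viscosity, contact_angle=0.0):
    """Lucas-Washburn penetration depth of a wetting liquid into a
    cylindrical pore: L(t) = sqrt(r * gamma * cos(theta) * t / (2 * mu)).
    Generic capillary-imbibition sketch for post-frying oil uptake."""
    return np.sqrt(pore_radius * surface_tension * np.cos(contact_angle) * t
                   / (2.0 * viscosity))

# Invented values for hot frying oil entering crust pores:
t = np.linspace(0.0, 60.0, 7)        # seconds of cooling
L = washburn_depth(t, pore_radius=5e-6, surface_tension=0.03, viscosity=0.02)
print(L * 1e3)                        # penetration depth in mm
```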