35 results for Lead based paint
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Terahertz (THz) radiation is being developed as a tool for the analysis of cultural heritage and, thanks to recent advances in technology, is now available commercially in systems that can be deployed for field analysis. The radiation can penetrate up to one centimetre of wall plaster and is delivered in ultrafast pulses which are reflected from layers within this region. The technique is non-contact, non-invasive and non-destructive. While sub-surface radar can penetrate over a metre of wall plaster, producing details of internal structures, infrared and ultraviolet techniques provide information about the surface layers of wall plaster. THz radiation provides information about the intermediate region, up to approximately one centimetre into the wall surface. Data from Chartres Cathedral, France; Riga Dome Cathedral, Latvia; and the Chartreuse du Val de Bénédiction, France are presented, each addressing different research questions. At Chartres Cathedral, the presence of sub-surface paint layers dating to the 13th century was expected from documentary evidence. In contrast, at the Riga Dome Cathedral surface painting had been obscured as recently as 1941, during the Russian occupation of Latvia, using white lead-based paint. In the 13th century, wall paintings at the Chapel of the Frescos, Chartreuse du Val de Bénédiction in Villeneuve les Avignon, were executed using sinopia under-painting on plaster covering uneven stonework. This paper compares and contrasts the ability of THz radiation to provide information about sub-surface features in churches and cathedrals across Europe by analysing depth-based profiles obtained from the reflected signal. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
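For orientation, the depth profiling described here rests on a standard time-of-flight relation (not quoted in the abstract): a layer buried at depth z returns a reflected pulse delayed, relative to the surface reflection, by

    \Delta t = \frac{2 n z}{c}, \qquad \text{so} \qquad z = \frac{c\,\Delta t}{2 n},

where n is the refractive index of the plaster and c the speed of light. Assuming a plaster refractive index of order 2, a delay of roughly 130 ps corresponds to the one-centimetre limit quoted above.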
Abstract:
The ability of narrow bandpass filters to discriminate wavelengths between closely separated gas absorption lines is crucial in many areas of infrared spectroscopy. As improvements to the sensitivity of infrared detectors enable operation in uncontrolled high-temperature environments, demands are imposed on the bandpass design itself to provide temperature-invariant behavior. The unique negative temperature coefficient (dn/dT < 0) of lead-based (Pb) salts, in combination with dielectric materials, enables bandpass filters with exclusive immunity to shifts in wavelength with temperature. This paper presents the results of an investigation into the interdependence between multilayer bandpass design and optical materials, together with a review of invariance at elevated temperatures.
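As background, and not stated explicitly in the abstract, the centre wavelength of an interference bandpass scales with the optical thickness nd of its layers, so its fractional drift with temperature is approximately

    \frac{1}{\lambda_0}\,\frac{d\lambda_0}{dT} \approx \frac{1}{n}\,\frac{dn}{dT} + \alpha,

where \alpha is the linear thermal expansion coefficient. Layers with dn/dT < 0, such as the Pb salts discussed here, can offset the positive contributions of the dielectric layers and of thermal expansion, driving the net wavelength shift towards zero.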
Abstract:
Growing ivy around buildings has benefits. However, ivy can also damage buildings, which limits its use. Options for preventing ivy attachment were investigated to provide alternatives for ivy management. Indoor and outdoor experiments were conducted in which metals (Cu, Zn) and anti-graffiti paints were applied to model wall panels. Metal treatments fully prevented ivy attachment in both the indoor and outdoor experiments. For Hedera helix, silane-based anti-graffiti paint prevented attachment in the laboratory and, in the outdoor experiment, required less than half the peak detachment force needed to detach the control. In conclusion, metals and silane-based paint are possible options for managing ivy attachment around buildings.
Abstract:
The “butterfly effect” is a popularly known paradigm; commonly it is said that when a butterfly flaps its wings in Brazil, it may cause a tornado in Texas. This essentially describes how weather forecasts can be extremely sensitive to small changes in the given atmospheric data, or initial conditions, used in computer model simulations. In 1961 Edward Lorenz found, when running a weather model, that small changes in the initial conditions given to the model can, over time, lead to entirely different forecasts (Lorenz, 1963). This discovery highlights one of the major challenges in modern weather forecasting: to provide the computer model with the most accurately specified initial conditions possible. A process known as data assimilation seeks to minimize the errors in the given initial conditions and was, in 1911, described by Bjerknes as “the ultimate problem in meteorology” (Bjerknes, 1911).
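As a concrete illustration of the sensitivity Lorenz described, the minimal Python sketch below (standard Lorenz-63 parameters; the perturbation size and integration length are illustrative choices) integrates the same system twice from initial conditions differing by one part in 10^8 and reports how far the two runs drift apart:

    import numpy as np

    # Lorenz (1963) system: a minimal illustration of sensitivity to initial conditions.
    SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

    def lorenz_rhs(state):
        x, y, z = state
        return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

    def integrate(state, dt=0.01, steps=4000):
        """Fourth-order Runge-Kutta integration of the Lorenz equations."""
        traj = [state]
        for _ in range(steps):
            k1 = lorenz_rhs(state)
            k2 = lorenz_rhs(state + 0.5 * dt * k1)
            k3 = lorenz_rhs(state + 0.5 * dt * k2)
            k4 = lorenz_rhs(state + dt * k3)
            state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            traj.append(state)
        return np.array(traj)

    # Two runs differing by 1e-8 in x: analogous to a tiny error in the initial conditions.
    run_a = integrate(np.array([1.0, 1.0, 1.0]))
    run_b = integrate(np.array([1.0 + 1e-8, 1.0, 1.0]))
    print("separation after 40 time units:", np.linalg.norm(run_a[-1] - run_b[-1]))

By the end of the integration the two trajectories are no closer than two randomly chosen states on the attractor, which is exactly the behaviour that makes accurate initial conditions, and hence data assimilation, essential.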
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular, we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) fully physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space Weather Modeling (CISM), which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square error and correlation coefficients, indicate that the empirical corona-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
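The point-by-point metrics mentioned here (mean-square error and linear correlation) are straightforward to reproduce in outline; the short sketch below uses invented solar wind speed values rather than CISM output:

    import numpy as np

    def mse(observed, predicted):
        """Mean-square error between observed and modelled solar wind speed series."""
        return np.mean((np.asarray(observed) - np.asarray(predicted)) ** 2)

    def correlation(observed, predicted):
        """Pearson correlation coefficient between the two series."""
        return np.corrcoef(observed, predicted)[0, 1]

    # Hypothetical solar wind speeds (km/s) at L1: observations vs. a model run.
    obs = np.array([420.0, 455.0, 510.0, 600.0, 580.0, 530.0])
    mod = np.array([430.0, 450.0, 490.0, 640.0, 600.0, 520.0])
    print(f"MSE = {mse(obs, mod):.1f} (km/s)^2, r = {correlation(obs, mod):.3f}")

As the abstract notes, such point-by-point scores penalise small timing offsets heavily even when the structure of the solar wind is well captured.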
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observations and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown frequently to be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event-based analysis technique is therefore developed in which high-speed enhancements (HSEs) are systematically selected from the observed and model time series and associated with one another. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed-magnitude errors between the forecast and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
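The event-based bookkeeping of hits, misses and false alarms can be sketched generically; the nearest-neighbour matching rule and the two-day window below are illustrative assumptions, not the paper's exact association criteria:

    import numpy as np

    def match_events(obs_times, model_times, window=2.0):
        """Count hits, misses and false alarms by pairing each observed event with the
        nearest unused model event within +/- window days (a simple illustrative rule)."""
        model_used = set()
        hits = 0
        for t_obs in obs_times:
            candidates = [(abs(t_mod - t_obs), i) for i, t_mod in enumerate(model_times)
                          if i not in model_used and abs(t_mod - t_obs) <= window]
            if candidates:
                model_used.add(min(candidates)[1])
                hits += 1
        misses = len(obs_times) - hits
        false_alarms = len(model_times) - len(model_used)
        return hits, misses, false_alarms

    # Hypothetical HSE arrival times (days) from the observed and model time series.
    print(match_events([3.0, 10.5, 17.2, 24.9], [2.5, 11.8, 18.0, 21.0, 25.5]))

Once events are paired this way, the timing and speed-magnitude errors of the hits can be examined separately from the overall hit/miss/false-alarm counts.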
Abstract:
Genetic association analyses of family-based studies with ordered categorical phenotypes are often conducted using methods either for quantitative or for binary traits, which can lead to suboptimal analyses. Here we present an alternative likelihood-based method of analysis for single nucleotide polymorphism (SNP) genotypes and ordered categorical phenotypes in nuclear families of any size. Our approach, which extends our previous work for binary phenotypes, permits straightforward inclusion of covariate, gene-gene and gene-covariate interaction terms in the likelihood, incorporates a simple model for ascertainment and allows for family-specific effects in the hypothesis test. Additionally, our method produces interpretable parameter estimates and valid confidence intervals. We assess the proposed method using simulated data, and apply it to a polymorphism in the C-reactive protein (CRP) gene typed in families collected to investigate human systemic lupus erythematosus. By including sex interactions in the analysis, we show that the polymorphism is associated with anti-nuclear autoantibody (ANA) production in females, while there appears to be no effect in males.
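For readers unfamiliar with ordered-categorical likelihoods, the generic proportional-odds sketch below shows the basic form of such a likelihood; it ignores the family structure, ascertainment model and interaction terms that the paper's method handles, and the genotype/phenotype data are made up:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    # Cumulative-logit model for an ordered phenotype with three categories (0 < 1 < 2)
    # and a single SNP genotype coded 0/1/2. Purely illustrative data.
    genotype = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 1, 2, 0])
    phenotype = np.array([0, 1, 2, 0, 0, 2, 1, 1, 2, 2, 2, 0])

    def neg_log_lik(params):
        c1, c2, beta = params                 # cutpoints c1 < c2 and genotype effect
        eta = beta * genotype
        p_le_0 = expit(c1 - eta)              # P(Y <= 0)
        p_le_1 = expit(c2 - eta)              # P(Y <= 1)
        probs = np.select([phenotype == 0, phenotype == 1, phenotype == 2],
                          [p_le_0, p_le_1 - p_le_0, 1.0 - p_le_1])
        return -np.sum(np.log(np.clip(probs, 1e-12, None)))

    fit = minimize(neg_log_lik, x0=[-0.5, 0.5, 0.0])
    print("cutpoints and genotype effect:", np.round(fit.x, 3))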
Abstract:
Microsatellites are widely used in genetic analyses, many of which require reliable estimates of microsatellite mutation rates, yet the factors determining mutation rates are uncertain. The most straightforward and conclusive method by which to study mutation is direct observation of allele transmissions in parent-child pairs, and studies of this type suggest a positive, possibly exponential, relationship between mutation rate and allele size, together with a bias toward length increase. Except for microsatellites on the Y chromosome, however, previous analyses have not made full use of available data and may have introduced bias: mutations have been identified only where child genotypes could not be generated by transmission from parents' genotypes, so that the probability that a mutation is detected depends on the distribution of allele lengths and varies with allele length. We introduce a likelihood-based approach that has two key advantages over existing methods. First, we can make formal comparisons between competing models of microsatellite evolution; second, we obtain asymptotically unbiased and efficient parameter estimates. Application to data composed of 118,866 parent-offspring transmissions of AC microsatellites supports the hypothesis that mutation rate increases exponentially with microsatellite length, with a suggestion that contractions become more likely than expansions as length increases. This would lead to a stationary distribution for allele length maintained by mutational balance. There is no evidence that contractions and expansions differ in their step size distributions.
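A stripped-down sketch of the kind of likelihood comparison described here, using invented per-length-bin transmission counts rather than the real data, could be written as:

    import numpy as np
    from scipy.optimize import minimize

    # Compare a constant-rate model with an exponential length-dependence,
    # mu(L) = exp(a + b*L), using a binomial likelihood over per-bin counts.
    lengths = np.array([10.0, 15.0, 20.0, 25.0, 30.0])        # allele length (repeat units)
    n_trans = np.array([30000, 28000, 25000, 20000, 15000])   # transmissions per bin (illustrative)
    n_mut   = np.array([5, 10, 18, 30, 40])                   # observed mutations per bin (illustrative)

    def neg_log_lik(params):
        a, b = params
        z = np.minimum(a + b * lengths, 0.0)                  # keep mu <= 1 for numerical safety
        mu = np.clip(np.exp(z), 1e-12, 1 - 1e-12)
        return -np.sum(n_mut * np.log(mu) + (n_trans - n_mut) * np.log(1 - mu))

    exp_fit = minimize(neg_log_lik, x0=[-9.0, 0.1])                       # exponential model
    flat_fit = minimize(lambda p: neg_log_lik([p[0], 0.0]), x0=[-9.0])    # constant-rate model
    lrt = 2.0 * (flat_fit.fun - exp_fit.fun)                              # likelihood-ratio statistic, 1 d.f.
    print("a, b =", exp_fit.x, "  LRT statistic =", round(lrt, 2))

A formal model comparison of this type, extended to handle the detection issues discussed above, is what allows competing models of microsatellite evolution to be tested against each other.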
Abstract:
Clients and contractors need to be aware of the project’s legal environment because the viability of a procurement strategy can be vitiated by legal rules. This is particularly true regarding Performance-Based Contracting (PBC), whose viability may be threatened by rules of property law: while the PBC concept does not require that the contractor transfers the ownership in the building materials used to the client, the rules of property law often lead to an automatic transfer of ownership. But does the legal environment really render PBC unfeasible? In particular, is PBC unfeasible because contractors lose their materials as assets? These questions need to be answered with respect to the applicable property law. As a case study, English property law has been chosen. Under English law, the rule which governs the automatic transfer of ownership is called quicquid plantatur solo, solo cedit (whatever is fixed to the soil belongs to the soil). An analysis of this rule reveals that not all materials which are affixed to land become part of the land. This fate only occurs in relation to materials which have been affixed with the intention of permanently improving the land. Five fictitious PBC cases have been considered in terms of the legal status of the materials involved, and several subsequent legal questions have been addressed. The results suggest that English law does actually threaten the feasibility of PBC in some cases. However, it is also shown that the law provides means to circumvent the unwanted results which flow from the rules of property law. In particular, contractors who are interested in keeping their materials as assets can insist on agreeing a property right in the client’s land, i.e. a contractor’s lien. Therefore, the outcome is that English property law does not render the implementation of the PBC concept unfeasible. At a broader level, the results contribute to the theoretical framework of PBC as an increasingly used procurement strategy.
Abstract:
Increasing legislation has steadily been introduced throughout the world to restrict the use of heavy metals, particularly cadmium (Cd) and lead (Pb) in high temperature pigments, ceramics, and optoelectronic material applications. Removal of cadmium from thin-film optical and semiconductor device applications has been hampered by the absence of viable alternatives that exhibit similar properties with stability and durability. We describe a range of tin-based compounds that have been deposited and characterized in terms of their optical and mechanical properties and compare them with existing cadmium-based films that currently find widespread use in the optoelectronic and semiconductor industries. (c) 2008 Optical Society of America.
Abstract:
There is growing concern about reducing greenhouse gas emissions all over the world. In the Post Copenhagen Report on Climate Change, the U.K. has recently set targets of a 34% reduction in emissions by 2020 and an 80% reduction by 2050, compared with 1990 levels. In practice, Life Cycle Cost (LCC) and Life Cycle Assessment (LCA) tools have been introduced to the construction industry to help achieve this. However, there is a clear disconnection between costs and environmental impacts over the life cycle of a built asset when using these two tools. In addition, changes in Information and Communication Technologies (ICTs) have led to a change in the way information is represented; in particular, information is fed more easily and distributed more quickly to different stakeholders through tools such as Building Information Modelling (BIM), with little consideration given to incorporating LCC and LCA and maximising their use within the BIM environment. The aim of this paper is to propose the development of a model-based LCC and LCA tool that supports sustainable building design decisions for clients, architects and quantity surveyors, so that an optimal investment decision can be made by studying the trade-off between costs and environmental impacts. Finally, an application framework is proposed as future work, showing how the proposed model can be incorporated into the BIM environment in practice.
Abstract:
One of the most common Demand Side Management programs consists of Time-of-Use (TOU) tariffs, where consumers are charged differently depending on the time of the day when they make use of energy services. This paper assesses the impacts of TOU tariffs on a dataset of residential users from the Province of Trento in Northern Italy in terms of changes in electricity demand, price savings, peak load shifting and peak electricity demand at substation level. Findings highlight that TOU tariffs bring about higher average electricity consumption and lower payments by consumers. A significant level of load shifting takes place for morning peaks. However, issues with evening peaks are not resolved. Finally, TOU tariffs lead to increases in electricity demand for substations at peak time.
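For illustration only (invented prices and load profile, not the Trento tariff or dataset), the pricing and load-shifting mechanism behind these findings can be sketched as:

    # Two-band Time-of-Use pricing of one day of hourly consumption, before and after
    # shifting part of the morning peak into off-peak hours. All numbers are assumptions.
    PEAK_HOURS = set(range(8, 19))                      # assumed peak band 08:00-19:00
    PEAK_PRICE, OFFPEAK_PRICE = 0.26, 0.14              # EUR/kWh (assumed)

    def tou_bill(hourly_kwh):
        return sum(kwh * (PEAK_PRICE if h in PEAK_HOURS else OFFPEAK_PRICE)
                   for h, kwh in enumerate(hourly_kwh))

    baseline = [0.3] * 8 + [0.8] * 11 + [0.5] * 5        # 24 hourly readings (kWh)
    shifted = baseline.copy()
    shifted[8] -= 0.4; shifted[9] -= 0.4                 # move 0.8 kWh out of the morning peak...
    shifted[6] += 0.4; shifted[7] += 0.4                 # ...into the off-peak early morning
    print(f"baseline: {tou_bill(baseline):.2f} EUR, shifted: {tou_bill(shifted):.2f} EUR")

The sketch shows why consumers shift morning load to save money under TOU pricing, and also why an evening peak that cannot be moved is left unresolved.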
Abstract:
Water-table reconstructions from Holocene peatlands are increasingly being used as indicators of terrestrial palaeoclimate in many regions of the world. However, the links between peatland water tables, climate, and long-term peatland development are poorly understood. Here we use a combination of high-resolution proxy climate data and a model of long-term peatland development to examine the relationship between rapid hydrological fluctuations in peatlands and climatic forcing. We show that changes in water-table depth can occur independently of climate forcing. Ecohydrological feedbacks inherent in peatland development can lead to a degree of homeostasis that partially disconnects peatland water-table behaviour from external climatic influences. We conclude by suggesting that further work needs to be done before peat-based climate reconstructions can be used to test climate models.
Abstract:
Ensemble-based data assimilation is rapidly proving itself as a computationally efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy that aims to assimilate only the observational components that matter most and that can be used with both stochastic and deterministic ensemble filters is presented. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias in the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains that may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model using both in situ and remote sounding observations are discussed.
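The subspace limitation mentioned above can be made explicit with a minimal stochastic ensemble Kalman filter update; this is a generic textbook sketch, not the paper's data-selection scheme, and all dimensions and values are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    # Stochastic EnKF update: N_e members of an n-dimensional state, observed through a
    # linear operator H with observation-error covariance R.
    n, n_e, n_obs = 10, 5, 4
    ensemble = rng.normal(size=(n, n_e))                 # forecast ensemble (columns = members)
    H = np.zeros((n_obs, n)); H[np.arange(n_obs), [1, 3, 5, 7]] = 1.0
    R = 0.5 * np.eye(n_obs)
    y = rng.normal(size=n_obs)                           # observations

    X_mean = ensemble.mean(axis=1, keepdims=True)
    Xp = ensemble - X_mean                               # ensemble perturbations
    P = Xp @ Xp.T / (n_e - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain

    # Perturbed-observation update of each member.
    obs_noise = rng.multivariate_normal(np.zeros(n_obs), R, n_e).T
    analysis = ensemble + K @ (y[:, None] + obs_noise - H @ ensemble)

    # The mean analysis increment lies in the subspace spanned by the N_e - 1 perturbations.
    increment = analysis.mean(axis=1) - X_mean.ravel()
    coeffs, *_ = np.linalg.lstsq(Xp, increment, rcond=None)
    print("rank of perturbation subspace:", np.linalg.matrix_rank(Xp))
    print("residual outside ensemble subspace:", float(np.linalg.norm(Xp @ coeffs - increment)))

The residual outside the ensemble subspace is numerically zero regardless of how many observations are assimilated, which is why selecting the most informative observational components, as proposed here, pays off.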
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
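The ARX structure referred to above can be sketched as follows; this toy example uses synthetic data and an ordinary least-squares fit purely to show the model form, whereas the paper's point is that noisy regressors call for the regularized total least-squares estimator instead:

    import numpy as np

    rng = np.random.default_rng(1)

    # First-order ARX model linking changes in CBF (input u) to changes in CBV (output y):
    # y[t] = a*y[t-1] + b*u[t] + e[t]. All data below are synthetic.
    T = 200
    u = rng.normal(size=T)                               # synthetic CBF change series
    a_true, b_true = 0.7, 0.4
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a_true * y[t - 1] + b_true * u[t] + 0.05 * rng.normal()

    Phi = np.column_stack([y[:-1], u[1:]])               # regressor matrix [y[t-1], u[t]]
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print("estimated (a, b):", np.round(theta, 3))

When both y and u carry measurement noise, as with real CBF and CBV data, the regressor matrix itself is noisy, which is the errors-in-variables situation that motivates RTLS in the study.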