912 results for Point-to-point speed enforcement
Abstract:
Khartoum, like many cities in the least developed countries (LDCs), still witnesses a huge influx of people. Accommodating the newcomers leads to encroachment on cultivated land and to the sprawling expansion of Greater Khartoum. The city expanded in area from 16.8 km² in 1955 to 802.5 km² in 1998, and most of this horizontal expansion was residential. In 2008 Khartoum accommodated 29% of the urban population of Sudan, and today it is considered one of 43 major African cities with more than 1 million inhabitants. Most newcomers live on the outskirts of the city, e.g. in the Dar El-Salam and Mayo neighbourhoods, and the majority build their houses, especially the walls, from mud, wood, straw and sacks. The selection of building materials usually depends on price alone, regardless of environmental impact, quality, thermal performance and the lifetime of the material. This often increases overall costs and produces varied environmental impacts over the life of the building. Consideration of the environmental, social and economic impacts is therefore crucial in the selection of any building material, and decreasing such impacts could lead to more sustainable housing. The sustainability of the wall building materials available for low-cost housing in Khartoum is compared through the life cycle assessment (LCA) technique. The purpose of this paper is to compare, from a sustainability point of view, the most widely available local wall building materials for the urban poor of Khartoum, covering the manufacture of the materials, their use and their disposal at the end of their life. Findings reveal that traditional red bricks cannot be considered a sustainable wall building material on which the future of low-cost housing in Greater Khartoum should be built. On the other hand, the results of the comparison draw attention to the wide range of soil-based techniques and to their potential as a promising sustainable wall material for urban low-cost housing in Khartoum.
Abstract:
A simple storm loss model is applied to an ensemble of ECHAM5/MPI-OM1 GCM simulations in order to estimate changes in insured loss potentials over Europe in the 21st century. Losses are computed from the daily maximum wind speed at each grid point. The loss model is calibrated using wind data from the ERA40 reanalysis and German loss data. The annual losses obtained for present climate conditions (20C, three realisations) reproduce the statistical features of the historical insurance loss data for Germany. The climate change experiments correspond to the SRES scenarios A1B and A2, with three realisations considered for each. On average, insured loss potentials increase for all analysed European regions by the end of the 21st century. Changes are largest for Germany and France, and lowest for Portugal/Spain. Additionally, the spread between individual realisations is large, ranging for Germany, for example, from −4% to +43% in terms of mean annual loss. Moreover, almost all simulations show an increasing interannual variability of storm damage. This effect is even more pronounced if no adaptation of building structures to climate change is assumed. The increased loss potentials are linked with enhanced values for the high percentiles of surface wind maxima over Western and Central Europe, which in turn are associated with a greater number and increased intensity of extreme cyclones over the British Isles and the North Sea.
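As a rough illustration of how such a grid-point loss model works, the sketch below uses a cubic excess-over-local-98th-percentile formulation common in the storm-loss literature (Klawa-type indices); the threshold, exponent and calibration shown here are assumptions, not the paper's fitted values.

```python
import numpy as np

def storm_loss_index(v_max, v98, exposure, calibration=1.0):
    """Klawa-type cubic excess-over-threshold loss index (assumed form).

    v_max       daily maximum wind speed per grid point (m/s)
    v98         local 98th percentile of daily maximum wind speed (m/s)
    exposure    per-grid-point weight, e.g. population or insured values
    calibration scaling that would be fitted to historical loss data
    """
    excess = np.maximum(v_max / v98 - 1.0, 0.0)  # damage only above the local threshold
    return calibration * np.sum(exposure * excess ** 3)

# Toy example: three grid points on one storm day
v_max = np.array([28.0, 22.0, 31.0])
v98 = np.array([24.0, 23.0, 25.0])
exposure = np.array([1.0e6, 5.0e5, 2.0e6])
print(storm_loss_index(v_max, v98, exposure))
```

Normalizing wind speed by its local 98th percentile is what lets a single calibration transfer between regions with different wind climates.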
Abstract:
Measurements of atmospheric corona currents have been made for over 100 years to indicate the atmospheric electric field. Corona currents vary substantially, in both polarity and magnitude. The instrument described here uses a sharp-point sensor connected to a temperature-compensated bipolar logarithmic current amplifier. Calibrations over a range of currents from ±10 fA to ±3 μA and across ±20 °C show that it has an excellent logarithmic response over six orders of magnitude, from 1 pA to 1 μA, in both polarities for the range of atmospheric temperatures likely to be encountered in the southern UK. Comparison with atmospheric electric field measurements during disturbed weather confirms that bipolar electric fields induce corona currents of corresponding sign, with magnitudes ∼0.5 μA.
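To illustrate how a reading from such an amplifier might be converted back to a corona current, here is a minimal sketch assuming an idealized logarithmic transfer function with a linear temperature correction; the constants (`V0`, `K`, `I_REF`, `TC`) are hypothetical placeholders, not the instrument's calibrated values.

```python
# Hypothetical transfer-function constants; the real instrument's values
# come from its calibration over +/-10 fA to +/-3 uA and +/-20 degC.
V0 = 0.0       # output voltage at the reference current (V)
K = 0.5        # volts per decade of current
I_REF = 1e-12  # reference current, 1 pA (A)
TC = 2e-3      # assumed temperature coefficient of K (V/decade/degC)

def corona_current(v_out, sign, temp_c, temp_ref=20.0):
    """Invert a bipolar logarithmic amplifier: output voltage -> current (A)."""
    k_t = K + TC * (temp_c - temp_ref)           # temperature-compensated gain
    magnitude = I_REF * 10 ** ((v_out - V0) / k_t)
    return sign * magnitude

print(corona_current(1.5, sign=+1, temp_c=25.0))  # roughly 1 nA-scale positive current
```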
Abstract:
The endocannabinoid system (ECS) was only 'discovered' in the 1990s. Since then, many new ligands have been identified, as well as many new intracellular targets, ranging from the PPARs to mitochondria to lipid rafts. It was thought that blocking the CB-1 receptor might reverse obesity and the metabolic syndrome, based on the idea that the ECS is dysfunctional in these conditions; this has met with limited success. The reason may be that the ECS is a homeostatic system, which integrates energy-seeking and storage behaviour with resistance to oxidative stress. It could be viewed as having thrifty actions. Thriftiness is an innate property of life, which is programmed to a set point by both environment and genetics, resulting in an epigenotype perfectly adapted to its environment. This thrifty set point can be modulated by hormetic stimuli, such as exercise, cold and plant micronutrients. We have proposed that the physiological and protective insulin resistance that underlies thriftiness encapsulates something called 'redox thriftiness', whereby insulin resistance is determined by the ability to resist oxidative stress. Modern man has removed most hormetic stimuli and replaced them with a calorific, sedentary lifestyle, leading to an increased risk of metabolic inflexibility. We suggest that there is a tipping point where lipotoxicity in adipose and hepatic cells induces mild inflammation, which switches thrifty insulin resistance to inflammation-driven insulin resistance. To understand this, we propose that the metabolic syndrome could be seen from the viewpoint of the ECS, the mitochondrion and the FOXO group of transcription factors. FOXO has many thrifty actions, including increasing insulin resistance and appetite, suppressing oxidative stress and shifting the organism towards using fatty acids. In concert with factors such as PGC-1, FOXO factors also modify mitochondrial function and biogenesis. Hence, the ECS and FOXO may interact at many points, one of which may be via intracellular redox signalling. As cannabinoids have been shown to modulate reactive oxygen species production, it is possible that they can upregulate anti-oxidant defences. This suggests they may have an 'endohormetic' signalling function. The tipping point into the metabolic syndrome may be the result of a chronic lack of hormetic stimuli (in particular, physical activity), and thus of stimulus for PGC-1, with a resultant reduction in mitochondrial function and a reduced lipid capacitance. In the context of a positive-calorie environment, this will result in increased visceral adipose tissue volume, abnormal ectopic fat content and systemic inflammation, worsening the inflammation-driven pathological insulin resistance and the inability to deal with lipids. The resultant oxidative stress may therefore drive a compensatory anti-oxidative response epitomised by the ECS and FOXO. Thus, although blocking the ECS (e.g. via rimonabant) may induce temporary weight loss, it may compromise long-term stress resistance. Clues about how to modulate the system more safely are emerging from observations that some polyphenols, such as resveratrol and, possibly, some phytocannabinoids, can modulate mitochondrial function and might improve resistance to a modern lifestyle.
Abstract:
Flood simulation models and hazard maps are only as good as the underlying data against which they are calibrated and tested. However, extreme flood events are by definition rare, so observational data on flood inundation extent are limited in both quality and quantity. The relative importance of these observational uncertainties has increased now that computing power and accurate lidar scans make it possible to run high-resolution 2D models to simulate floods in urban areas. However, the value of these simulations is limited by the uncertainty in the true extent of the flood. This paper addresses that challenge by analyzing a point dataset of maximum water extent from a flood event on the River Eden at Carlisle, United Kingdom, in January 2005. The observation dataset is based on a collection of wrack and water marks from two post-event surveys. A smoothing algorithm for identifying, quantifying, and reducing localized inconsistencies in the dataset is proposed and evaluated, with positive results. The proposed smoothing algorithm can be applied to improve the assessment of flood inundation models and the determination of risk zones on the floodplain.
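The abstract does not spell out the smoother, but a minimal sketch of one plausible neighbourhood-consistency scheme, flagging marks that deviate from the local median water level and relaxing them toward it, might look like this; the search radius, tolerance and relaxation factor are illustrative assumptions.

```python
import numpy as np

def smooth_water_marks(xy, z, radius=100.0, tol=0.5, relax=0.5):
    """One pass of a neighbourhood-consistency smoother for water-mark levels.

    xy     : (n, 2) array of mark coordinates (m)
    z      : (n,) array of surveyed maximum water levels (m)
    radius : search radius defining 'local' neighbours (m)
    tol    : deviation from the local median flagged as inconsistent (m)
    relax  : fraction of the flagged deviation removed per pass
    """
    z_new = z.copy()
    for i in range(len(z)):
        d = np.linalg.norm(xy - xy[i], axis=1)
        neigh = (d < radius) & (d > 0)
        if not neigh.any():
            continue                                  # isolated mark, left as-is
        local = np.median(z[neigh])
        if abs(z[i] - local) > tol:                   # localized inconsistency
            z_new[i] = z[i] - relax * (z[i] - local)  # pull toward neighbours
    return z_new

xy = np.array([[0.0, 0.0], [50.0, 0.0], [80.0, 40.0], [500.0, 500.0]])
z = np.array([10.2, 10.3, 11.4, 9.8])  # third mark inconsistent with its neighbours
print(smooth_water_marks(xy, z))
```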
Abstract:
Using monthly time-series data for 1999–2013, the paper shows that markets for agricultural commodities provide a yardstick for real purchasing power, and thus a reference point for the real value of fiat currencies. The daily need for each adult to consume about 2800 food calories is universal; data from FAO food balance sheets confirm that the world basket of food consumed daily is non-volatile in comparison to the volatility of currency exchange rates, so the replacement cost of food consumed provides a consistent indicator of economic value. Food commodities are storable for short periods but ultimately perishable, which exerts continual pressure for markets to clear in the short term; moreover, food calories can be obtained from a very large range of foodstuffs, so most households are able to use arbitrage to select a near-optimal weighting of quantities purchased. The paper proposes an original method for establishing a standard of value, definable in physical units on the basis of actual worldwide consumption of food goods, with an illustration of the method.
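A toy illustration of the proposed yardstick: price the daily 2800 kcal basket in a local currency and read its reciprocal as purchasing power. The basket shares, calorie densities and prices below are invented for illustration, not the paper's FAO-derived weights.

```python
# Minimal sketch: the replacement cost of the daily 2800 kcal food basket
# as a yardstick for the real value of a currency. All numbers illustrative.
basket = {                 # kcal share of the daily 2800 kcal intake
    "wheat": 0.45,
    "rice": 0.30,
    "vegetable_oil": 0.15,
    "sugar": 0.10,
}
kcal_per_kg = {"wheat": 3400, "rice": 3600, "vegetable_oil": 8800, "sugar": 3870}
price_per_kg = {"wheat": 0.25, "rice": 0.40, "vegetable_oil": 1.10, "sugar": 0.50}

DAILY_KCAL = 2800

def basket_cost():
    """Cost, in local currency, of one day's 2800 kcal food basket."""
    cost = 0.0
    for food, share in basket.items():
        kg = DAILY_KCAL * share / kcal_per_kg[food]  # kg of this food needed
        cost += kg * price_per_kg[food]
    return cost

cost = basket_cost()
print(f"daily basket cost: {cost:.3f}; purchasing power: {1/cost:.3f} baskets per currency unit")
```

Tracking this cost monthly across currencies gives the food-denominated exchange-rate series the paper uses as its reference point.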
Abstract:
IEEE 754 floating-point arithmetic is widely used in modern, general-purpose computers. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. Modifying the IEEE arithmetic so that it uses transreal arithmetic has a number of advantages. It removes one redundant binade from IEEE floating-point objects, doubling the numerical precision of the arithmetic. It removes eight redundant relational floating-point operations and removes the redundant total-order operation. It replaces the non-reflexive floating-point equality operator with a reflexive equality operator, and it indicates that some of the exceptions may be removed as redundant, subject to issues of backward compatibility and transient future compatibility as programmers migrate to the transreal paradigm.
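A minimal sketch of the transreal rules that drive these claims, written over Python floats with nullity modelled as a distinguished sentinel; note the reflexive equality, unlike IEEE NaN.

```python
# Sketch of total transreal division and reflexive equality, assuming the
# usual transreal rules: x/0 is +/-infinity for nonzero x, 0/0 is nullity,
# and nullity is absorbing and unordered.
INF = float("inf")
NULLITY = object()  # the single unordered transreal number, usually written PHI

def t_div(a, b):
    """Total transreal division."""
    if a is NULLITY or b is NULLITY:
        return NULLITY              # nullity absorbs
    if b == 0:
        if a == 0:
            return NULLITY          # 0/0 = nullity
        return INF if a > 0 else -INF
    if abs(a) == INF and abs(b) == INF:
        return NULLITY              # inf/inf = nullity
    return a / b

def t_eq(a, b):
    """Reflexive transreal equality: nullity equals itself."""
    if a is NULLITY or b is NULLITY:
        return a is b               # unlike IEEE NaN, nullity == nullity
    return a == b

print(t_div(1.0, 0.0), t_div(0.0, 0.0) is NULLITY, t_eq(NULLITY, NULLITY))
```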
Abstract:
The IEEE 754 standard for floating-point arithmetic is widely used in computing. It is based on real arithmetic and is made total by adding both a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. The IEEE infinities are said to have the behaviour of limits. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. We elucidate the transreal tangent and extend real limits to transreal limits. Arguing from this firm foundation, we maintain that there are three category errors in the IEEE 754 standard. Firstly, the claim that IEEE infinities are limits of real arithmetic confuses limiting processes with arithmetic. Secondly, a defence of IEEE negative zero confuses the limit of a function with the value of a function. Thirdly, the definition of IEEE NaNs confuses undefined with unordered. Furthermore, we prove that the tangent function, with the infinities given by geometrical construction, has a period of an entire rotation, not half a rotation as is commonly understood. This illustrates a category error, confusing the limit with the value of a function, in an important area of applied mathematics: trigonometry. We briefly consider the wider implications of this category error. Another paper proposes transreal arithmetic as a basis for floating-point arithmetic; here we take the profound step of proposing transreal arithmetic as a replacement for real arithmetic to remove the possibility of certain category errors in mathematics. Thus we propose both theoretical and practical advantages of transmathematics. In particular, we argue that implementing transreal analysis in trans-floating-point arithmetic would extend the coverage, accuracy and reliability of almost all computer programs that exploit real analysis: essentially all programs in science and engineering and many in finance, medicine and other socially beneficial applications.
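The limit-versus-value distinction can be seen numerically: the one-sided limits of the real tangent at π/2 disagree in sign, so no single real limit exists there, and assigning a value at that point is a separate choice of the kind the geometric transreal construction makes.

```python
import math

# Numerical illustration of the limit-versus-value distinction at pi/2:
# the one-sided limits of tan differ in sign, so no single real limit
# exists there; any value assigned at pi/2 is a separate definition.
for eps in (1e-3, 1e-6, 1e-9):
    below = math.tan(math.pi / 2 - eps)   # grows toward +infinity from the left
    above = math.tan(math.pi / 2 + eps)   # grows toward -infinity from the right
    print(f"eps={eps:.0e}: tan(pi/2 - eps)={below:.3e}, tan(pi/2 + eps)={above:.3e}")
```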
Abstract:
A statistical–dynamical downscaling (SDD) approach for the regionalization of wind energy output (Eout) over Europe, with special focus on Germany, is proposed. SDD uses an extended circulation weather type (CWT) analysis on global daily mean sea level pressure fields, with the central point located over Germany. Seventy-seven weather classes based on the associated CWT and the intensity of the geostrophic flow are identified. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. By using the weather class frequencies of different data sets, the simulated representatives are recombined into probability density functions (PDFs) of near-surface wind speed and finally into Eout of a sample wind turbine for present and future climates. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, results of SDD are compared to wind observations and to Eout simulated with purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10-m wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to DD-simulated Eout. In terms of decadal hindcasts, results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic and Benelux, for which high correlations between annual Eout time series of SDD and DD are detected for selected hindcasts. Lower correlations are found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the decadal prediction system of the Earth System Model of the Max Planck Institute (MPI-ESM). Long-term climate change projections of ECHAM5/MPI-OM under the Special Report on Emissions Scenarios (SRES), as obtained by SDD, agree well with the results of other studies using DD methods, with increasing Eout over northern Europe and a negative trend over southern Europe. Despite some biases, it is concluded that SDD is an adequate tool for assessing regional wind energy changes in large model ensembles.
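A minimal sketch of the recombination step, assuming per-class wind-speed PDFs from the dynamical runs are mixed with the class frequencies of a given data set and integrated against an idealized turbine power curve; the class count, PDF shapes and power curve are illustrative, not the paper's 77-class setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 5                                 # paper uses 77 weather classes
v = np.linspace(0.0, 30.0, 301)               # 10-m wind speed grid (m/s)
dv = v[1] - v[0]

# p_class[c]: downscaled wind-speed PDF for class c (stand-in Gaussians here;
# in the paper these come from COSMO-CLM runs of class representatives)
means = rng.uniform(4.0, 14.0, n_classes)
p_class = np.exp(-0.5 * ((v[None, :] - means[:, None]) / 2.5) ** 2)
p_class /= (p_class * dv).sum(axis=1, keepdims=True)   # normalize each class PDF

freq = rng.dirichlet(np.ones(n_classes))      # class frequencies of a data set

p_total = freq @ p_class                      # recombined wind-speed PDF

def power_curve(v):
    """Idealized sample turbine: cut-in 3, rated 12, cut-out 25 m/s."""
    p = np.clip((v - 3.0) / (12.0 - 3.0), 0.0, 1.0) ** 3
    return np.where((v < 3.0) | (v > 25.0), 0.0, p)

e_out = (p_total * power_curve(v) * dv).sum() # expected normalized Eout
print(f"expected normalized energy output: {e_out:.3f}")
```

Swapping in the class frequencies of another data set (reanalysis, hindcast or projection) changes `freq` only, which is what makes the approach cheap for large ensembles.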
Abstract:
The XWS (eXtreme WindStorms) catalogue consists of storm tracks and model-generated maximum 3 s wind-gust footprints for 50 of the most extreme winter windstorms to hit Europe in the period 1979–2012. The catalogue is intended to be a valuable resource for both academia and industries such as (re)insurance, for example allowing users to characterise extreme European storms, and validate climate and catastrophe models. Several storm severity indices were investigated to find which could best represent a list of known high-loss (severe) storms. The best-performing index was Sft, which is a combination of storm area calculated from the storm footprint and maximum 925 hPa wind speed from the storm track. All the listed severe storms are included in the catalogue, and the remaining ones were selected using Sft. A comparison of the model footprint to station observations revealed that storms were generally well represented, although for some storms the highest gusts were underestimated. Possible reasons for this underestimation include the model failing to simulate strong enough pressure gradients and not representing convective gusts. A new recalibration method was developed to estimate the true distribution of gusts at each grid point and correct for this underestimation. The recalibration model allows for storm-to-storm variation which is essential given that different storms have different degrees of model bias. The catalogue is available at www.europeanwindstorms.org.
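A hedged sketch of an Sft-style severity index: footprint area above a gust threshold combined with the cubed maximum 925 hPa track wind speed. The threshold, exponent and normalization here are assumptions; the catalogue's exact definition may differ.

```python
import numpy as np

def severity_index(gust_footprint, cell_area_km2, v925_track_max,
                   gust_threshold=25.0):
    """Sketch of an Sft-style storm severity index (assumed form).

    Combines storm area (footprint cells with gusts above a threshold)
    with the cubed maximum 925 hPa wind speed from the storm track.
    """
    area = cell_area_km2 * np.count_nonzero(gust_footprint >= gust_threshold)
    return area * v925_track_max ** 3

footprint = np.array([[27.0, 31.5, 24.0],
                      [22.0, 29.0, 26.5]])   # max 3 s gusts (m/s) per grid cell
print(severity_index(footprint, cell_area_km2=625.0, v925_track_max=42.0))
```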
Abstract:
The trajectories of pheromone plumes in canopied habitats, such as orchards, have been little studied. We documented the capture of male navel orangeworm moths, Amyelois transitella, in female-baited traps positioned at 5 levels, from ground level to the canopy top at approximately 6 m above ground, in almond orchards. Males were captured in similar proportions at all levels, suggesting that they do not favor a particular height during ranging flight. A 3-D sonic anemometer was used to establish patterns of wind flow and temperature at 6 heights, from 2.08 to 6.65 m, in an almond orchard with a 5 m high canopy, every 3 h over 72 h. The horizontal velocity of wind flow was highest above the canopy, where its directionality was also the most consistent. During the time of A. transitella mating (0300–0600), there was a net vertical displacement upward. Vertical buoyancy, combined with only minor reductions in the distance that plumes will travel in the lower compared to the upper canopy, suggests that the optimal height for release of pheromone from high-release-rate sources, such as aerosol dispensers ("puffers") deployed at low densities (e.g., 3 per ha), would be in the middle or lower canopy, thereby facilitating dispersion of the disruptant throughout the canopy. Optimal placement of aerosol dispensers will vary with the behavioral ecology of the target pest; however, our results suggest that current protocols, which generally propose dispenser placement in the upper third of the canopy, should be reevaluated.
Abstract:
This paper proposes a set of well-defined steps for designing functional verification monitors intended to verify Floating Point Units (FPUs) described in HDL. The first step consists of defining the input and output domain coverage. Next, the corner cases are defined. Finally, an already verified reference model is used to test the correctness of the Device Under Verification (DUV). As a case study, a monitor for an IEEE 754-2008 compliant design is implemented. This monitor is built to be easily instantiated into verification frameworks such as OVM. Two different designs were verified, reaching complete input coverage and successful compliance results.
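A minimal sketch of the reference-model comparison step, assuming a binary32 adder as the DUV; `duv_add` is a hypothetical stand-in for the HDL simulation interface, and Python's double-precision arithmetic rounded to binary32 plays the role of the verified reference model.

```python
import math
import struct

def to_f32(x):
    """Round a Python float to the nearest binary32 value."""
    try:
        return struct.unpack("f", struct.pack("f", x))[0]
    except OverflowError:
        return math.copysign(math.inf, x)   # binary32 overflow rounds to infinity

def reference_add(a, b):
    """Reference model: binary32 addition via Python's float arithmetic."""
    return to_f32(to_f32(a) + to_f32(b))

def duv_add(a, b):
    """Placeholder for the real DUV call into the HDL simulator."""
    return to_f32(to_f32(a) + to_f32(b))    # currently mirrors the reference

# Corner cases: signed zeros, infinities, NaN, subnormals, near-max values
CORNER_CASES = [0.0, -0.0, 1.0, -1.0, math.inf, -math.inf, math.nan,
                1e-45, -1e-45, 3.4e38]

def check(a, b):
    ref, out = reference_add(a, b), duv_add(a, b)
    ok = (math.isnan(ref) and math.isnan(out)) or ref == out
    if not ok:                              # note: sign of zero not distinguished here
        print(f"MISMATCH: {a} + {b}: duv={out}, ref={ref}")
    return ok

assert all(check(a, b) for a in CORNER_CASES for b in CORNER_CASES)
```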
Abstract:
The rise in boiling point of blackberry juice was experimentally measured at soluble solids concentrations in the range of 9.4 to 58.4 °Brix and pressures between 4.9 × 10³ and 9.0 × 10⁴ Pa (abs.). Different approaches to representing the experimental data, including Dühring's rule, a model similar to the Antoine equation and other empirical models proposed in the literature, were tested. In the range of 9.4 to 33.6 °Brix, the rise in boiling point was nearly independent of pressure, varying only with juice concentration. Considerable deviations from this behavior began to occur at concentrations higher than 39.1 °Brix. The experimental data could be best predicted by adjusting an empirical model, which consists of a single equation that takes into account the dependence of the rise in boiling point on pressure and concentration.
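For illustration, one empirical form used for fruit-juice boiling-point rise (a Crapiste-Lozano-type equation) can be fitted jointly in concentration and pressure; whether this matches the paper's adjusted equation is an assumption, and the data points below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def bpr_model(xc, a, b, c, d):
    """Boiling-point rise (degC) from concentration C (Brix) and pressure P (Pa)."""
    C, P = xc
    return a * C**b * np.exp(c * C) * P**d

# Illustrative data points (not the paper's measurements)
C = np.array([9.4, 20.0, 33.6, 45.0, 58.4])         # soluble solids (Brix)
P = np.array([4.9e3, 2.0e4, 5.0e4, 7.5e4, 9.0e4])   # absolute pressure (Pa)
bpr = np.array([0.3, 0.7, 1.5, 3.2, 7.8])           # measured rise (degC)

params, _ = curve_fit(bpr_model, (C, P), bpr,
                      p0=[0.01, 1.0, 0.05, 0.1], maxfev=10000)
print("fitted a, b, c, d:", params)
print("predicted BPR at 50 Brix, 6.0e4 Pa:", bpr_model((50.0, 6.0e4), *params))
```

A single equation of this kind captures both trends reported in the abstract: weak pressure dependence at low concentration and growing deviations at high concentration.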
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
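A toy numerical illustration of the underlying effect: for a diffraction-limited telescope the monochromatic PSF width scales as λ/D, so the band-averaged PSF of a red source is broader than that of a blue one. The aperture, bandpass and spectra below are illustrative assumptions.

```python
import numpy as np

D = 1.2                                   # telescope aperture (m), illustrative
lam = np.linspace(550e-9, 900e-9, 200)    # broad-band filter wavelengths (m)

def effective_fwhm(sed):
    """Flux-weighted diffraction-limited PSF FWHM (radians) over the band."""
    fwhm = 1.03 * lam / D                 # approximate Airy-core FWHM per wavelength
    weights = sed / sed.sum()
    return np.sum(weights * fwhm)

blue_sed = lam ** -2.0                    # blue, star-like spectrum (illustrative)
red_sed = lam ** 2.0                      # red, galaxy-like spectrum (illustrative)

rad2arcsec = 180.0 / np.pi * 3600.0
print(f"blue source PSF FWHM: {effective_fwhm(blue_sed) * rad2arcsec:.4f} arcsec")
print(f"red  source PSF FWHM: {effective_fwhm(red_sed) * rad2arcsec:.4f} arcsec")
```

The gap between the two printed widths is the colour-dependent PSF error that either same-colour stars or SED template fitting must correct before shear measurement.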