63 results for "Design verification of VLSI circuits"
in CentAUR: Central Archive University of Reading - UK
Abstract:
A wind catcher/tower natural ventilation system was installed in a seminar room in the building of the School of Construction Management and Engineering at the University of Reading in the UK. Performance was analysed by means of tracer-gas ventilation measurements, indoor climate measurements (temperature, humidity, CO2) and occupant surveys. In addition, the potential of simple design tools was evaluated by comparing observed ventilation results with those predicted by an explicit ventilation model and by the AIDA implicit ventilation model. To support this analysis, external climate parameters (wind speed and direction, solar radiation, external temperature and humidity) were also monitored. The results showed that the chosen ventilation design provided a substantially greater ventilation rate than an equivalent area of openable window. Air quality parameters also stayed within accepted norms, and occupants expressed general satisfaction with the system and with comfort conditions. Night cooling was maximised by using the system in combination with openable windows. Comparisons of calculations with ventilation rate measurements showed that while AIDA gave results reasonably well correlated with the monitored performance, the widely used industry explicit model was found to overestimate the monitored ventilation rate.
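Neither AIDA nor the industry explicit model is reproduced in the abstract. As a rough sketch of the kind of explicit calculation such design tools perform, the standard orifice-flow equation converts a wind-induced pressure difference into a volumetric flow rate; every coefficient below is an illustrative assumption, not a value from the study:

```python
import math

def wind_pressure(delta_cp, u, rho=1.2):
    """Wind-induced pressure difference (Pa) across an opening, from a
    pressure-coefficient difference delta_cp and wind speed u (m/s)."""
    return 0.5 * rho * delta_cp * u**2

def orifice_flow(cd, area, delta_p, rho=1.2):
    """Volumetric flow rate (m^3/s) through an opening of area (m^2),
    using the standard orifice equation with discharge coefficient cd."""
    return cd * area * math.sqrt(2.0 * delta_p / rho)

# Illustrative numbers only (not measurements from the study)
dp = wind_pressure(delta_cp=0.4, u=3.0)
q = orifice_flow(cd=0.61, area=0.25, delta_p=dp)
ach = q * 3600 / 100.0  # air changes per hour for a hypothetical 100 m^3 room
```

Implicit models such as AIDA instead solve a mass balance over several such openings iteratively, which is why the two approaches can diverge for multi-opening systems like a wind catcher.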
Abstract:
This paper formally derives a new path-based neural branch prediction algorithm (FPP) into blocks of size two, giving a lower-cost hardware solution while maintaining an input-output characteristic similar to that of the original algorithm. The blocked solution, referred to here as the B2P algorithm, is obtained using graph theory and retiming methods. Verification approaches were exercised to show that the prediction performances obtained from the FPP and B2P algorithms differ by less than one misprediction per thousand instructions, using a known framework for branch prediction evaluation. For a chosen FPGA device, circuits generated from the B2P algorithm showed average area savings of over 25% against circuits for the FPP algorithm, with similar timing performance, thus making the proposed blocked predictor superior from a practical viewpoint.
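The FPP and B2P designs themselves are hardware algorithms derived by retiming and are not given in the abstract. The toy sketch below only illustrates the equivalence property being verified: accumulating a perceptron-style path-based prediction in blocks of two terms leaves the input-output behaviour unchanged. The blocking here is a software analogy, not the paper's retimed circuit:

```python
def predict_serial(weights, history):
    """Perceptron-style path-based prediction: sign of w0 + sum(w_i * h_i),
    with history bits encoded as +1 (taken) / -1 (not taken)."""
    s = weights[0]
    for w, h in zip(weights[1:], history):
        s += w * h
    return 1 if s >= 0 else -1

def predict_blocked(weights, history):
    """The same dot product, accumulated in blocks of two terms."""
    terms = [w * h for w, h in zip(weights[1:], history)]
    s = weights[0]
    for i in range(0, len(terms), 2):
        s += sum(terms[i:i + 2])  # one block of (up to) two products
    return 1 if s >= 0 else -1
```

Because integer addition is associative, the two functions agree on every input; the substance of the paper is achieving the analogous equivalence in hardware while saving circuit area.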
Abstract:
The HIRDLS instrument contains 21 spectral channels spanning a wavelength range from 6 to 18 µm. For each of these channels the spectral bandwidth and position are isolated by an interference bandpass filter at 301 K placed at an intermediate focal plane of the instrument. A second filter, cooled to 65 K, positioned at the same wavelength but designed with a wider bandwidth, is placed directly in front of each cooled detector element to reduce stray radiation from internally reflected in-band signals and to improve the out-of-band blocking. This paper describes the process of determining the spectral requirements for the two bandpass filters and for the antireflection coatings used on the lenses and Dewar window of the instrument. This process uses a system-throughput performance approach, taking the instrument spectral specification as a target. It takes into account the spectral characteristics of the transmissive optical materials, the relative spectral response of the detectors, thermal emission from the instrument, and the predicted atmospheric signal to determine the radiance profile for each channel. Using this design approach, an optimal design for the filters can be achieved, minimising the number of layers to improve the in-band transmission and to aid manufacture. The use of this design method also permits the instrument spectral performance to be verified using the measured response from manufactured components. The spectral calculations for an example channel are discussed, together with the spreadsheet calculation method. All the contributions made by the spectrally active components to the resulting instrument channel throughput are identified and presented.
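The throughput calculation described can be pictured as a wavelength-by-wavelength product of component spectral curves. The sketch below uses entirely hypothetical Gaussian filter shapes and flat optics curves (none of these are HIRDLS data) purely to show the bookkeeping:

```python
import numpy as np

wl = np.linspace(6.0, 18.0, 601)  # wavelength grid, micrometres

def gaussian_band(wl, centre, width):
    """Hypothetical bandpass transmission profile."""
    return np.exp(-0.5 * ((wl - centre) / width) ** 2)

# All curves below are illustrative placeholders, not HIRDLS measurements
warm_filter = gaussian_band(wl, 12.0, 0.15)  # 301 K bandpass filter
cold_filter = gaussian_band(wl, 12.0, 0.40)  # wider 65 K blocking filter
optics = np.full_like(wl, 0.85)              # lenses + Dewar window transmission
detector = np.clip(1.0 - 0.02 * (wl - 6.0), 0.0, 1.0)  # relative detector response

# Channel throughput: product of every spectrally active component
throughput = warm_filter * cold_filter * optics * detector
peak = throughput.max()
peak_wl = wl[throughput.argmax()]
```

Integrating such a throughput curve against the predicted atmospheric radiance gives the in-band signal for a channel; repeating the product with measured component curves is what allows verification against the specification.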
Abstract:
1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies into the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for a duration of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples were sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure, provided sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers and spiders together requires a minimum sampling effort of 110 sub-samples, each of 16 seconds' duration. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimise sampling problems associated with this versatile technique. Suction sampling should remain an important component in the toolbox of techniques used during both experimental and management sampling regimes within agroecosystems, grasslands or other low-lying vegetation types.
Abstract:
This paper shows the virtual product development process for the mechanical connection between the top leaf of a dual composite leaf spring system and a shackle, using finite element methods. The commercial FEA package MSC/MARC was used for the analysis. In the original design the joint was based on a closed eye-end. Full-scale testing results showed that this configuration achieved the vertical proof load of 150 kN and 1 million cycles of fatigue load. However, delamination occurred at the interface between the fibres going around the eye and the main leaf body. To overcome this problem, a second design was tried, using transverse bandages of woven glass-fibre-reinforced tape to wrap the section prone to delamination. In this case the maximum interlaminar shear stress was reduced, but it was still higher than the material's shear strength. Since, even with delamination, the top leaf spring still sustained the required maximum static and fatigue loads, a third design was proposed with an open eye-end, eliminating altogether the interface where the maximum shear stress occurs. The maximum shear stress predicted by FEA is reduced significantly, and a safety factor of around 2 has been obtained. Thus a successful and safe design has been achieved.
Abstract:
Cloud radar and lidar can be used to evaluate the skill of numerical weather prediction models in forecasting the timing and placement of clouds, but care must be taken in choosing the appropriate metric of skill to use because of the non-Gaussian nature of cloud-fraction distributions. We compare the properties of a number of different verification measures and conclude that, of existing measures, the log of the odds ratio is the most suitable for cloud fraction. We also propose a new measure, the Symmetric Extreme Dependency Score, which has very attractive properties, being equitable (for large samples), difficult to hedge and independent of the frequency of occurrence of the quantity being verified. We then use data from five European ground-based sites and seven forecast models, processed using the 'Cloudnet' analysis system, to investigate the dependence of forecast skill on cloud-fraction threshold (for binary skill scores), height, horizontal scale and (for the Met Office and German Weather Service models) forecast lead time. The models are found to be least skilful at predicting the timing and placement of boundary-layer clouds and most skilful at predicting mid-level clouds, although in the latter case they tend to underestimate mean cloud fraction when cloud is present. It is found that skill decreases approximately inverse-exponentially with forecast lead time, enabling a forecast 'half-life' to be estimated; when considering the skill of instantaneous model snapshots, typical values range between 2.5 and 4.5 days. Copyright © 2009 Royal Meteorological Society.
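The log of the odds ratio is computed from the standard 2 × 2 contingency table of forecast against observation; a minimal sketch follows, with made-up counts for illustration (the paper's Symmetric Extreme Dependency Score is defined in the article itself and is not reproduced here):

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log of the odds ratio from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct negatives.
    Zero means no skill; larger positive values mean more skill."""
    return math.log((a * d) / (b * c))

# Hypothetical verification counts, not data from the study
lor = log_odds_ratio(a=30, b=10, c=5, d=55)
```

In practice each count comes from thresholding cloud fraction in both model and observations, so the score can be traced out as a function of that threshold, as the abstract describes.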
Abstract:
This paper presents a completely new design of a bogie frame made of glass-fibre-reinforced composites, together with its performance under various loading conditions as predicted by finite element analysis. The bogie consists of two frames, one placed on top of the other, and two axle ties connecting the axles. Each frame consists of two side arms with a transom between them. The top frame is thinner and more compliant and has a higher curvature than the bottom frame. Variable vertical stiffness can be achieved before and after contact between the two frames at the central section of the bogie, to cope with different load levels. Finite element analysis played a very important role in the design of this structure. The stiffness and stress levels of the full-scale bogie presented in this paper under various loading conditions were predicted using Marc, provided by MSC Software. In order to verify the finite element analysis (FEA) models, a fifth-scale prototype of the bogie was made and tested under quasi-static loading conditions. Results of testing on the fifth-scale bogie were used to fine-tune details such as contact and friction in the fifth-scale FEA models. These conditions were then applied to the full-scale models. Finite element analysis results show that the stress levels in all directions are low compared with the material strengths.
Abstract:
The development of NWP models with grid spacing down to 1 km should produce more realistic forecasts of convective storms. However, greater realism does not necessarily mean more accurate precipitation forecasts. The rapid growth of errors on small scales, in conjunction with preexisting errors on larger scales, may limit the usefulness of such models. The purpose of this paper is to examine whether improved model resolution alone is able to produce more skillful precipitation forecasts on useful scales, and how the skill varies with spatial scale. A verification method is described in which skill is determined from a comparison of rainfall forecasts with radar using fractional coverage over different-sized areas. The Met Office Unified Model was run with grid spacings of 12, 4, and 1 km for 10 days on which convection occurred during the summers of 2003 and 2004. All forecasts were run from 12-km initial states for a clean comparison. The results show that the 1-km model was the most skillful at all but the smallest scales (below approximately 10–15 km). A measure of acceptable skill was defined; this was attained by the 1-km model at scales around 40–70 km, some 10–20 km less than for the 12-km model. The biggest improvement occurred for heavier, more localized rain, despite it being more difficult to predict. The 4-km model did not improve much on the 12-km model because of the difficulties of representing convection at that resolution, which were accentuated by the spin-up from 12-km fields.
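The fractional-coverage comparison described corresponds, in its published form, to a fractions-based skill score. A minimal sketch of the usual formula, assuming forecast and observed rain-fraction fields have already been computed over equally sized neighbourhoods:

```python
import numpy as np

def fractions_skill_score(forecast_frac, observed_frac):
    """FSS = 1 - MSE / MSE_ref for fields of fractional rain coverage
    (values in [0, 1]) computed over neighbourhoods of a given size.
    1 is a perfect match; 0 means no overlap in coverage."""
    f = np.asarray(forecast_frac, dtype=float)
    o = np.asarray(observed_frac, dtype=float)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref

# Hypothetical fractional-coverage fields at one neighbourhood scale
perfect = fractions_skill_score([0.2, 0.5, 0.0], [0.2, 0.5, 0.0])
disjoint = fractions_skill_score([1.0, 0.0], [0.0, 1.0])
```

Recomputing the score while widening the neighbourhood used to form the fractions is what traces out the skill-versus-scale curves from which the "acceptable skill" scales above are read off.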