975 results for "Average nusselt number"
Abstract:
We use interferometric synthetic aperture radar observations recorded in a land-terminating sector of western Greenland to characterise the ice sheet surface hydrology and to quantify spatial variations in the seasonality of ice sheet flow. Our data reveal a non-uniform pattern of late-summer ice speedup that, in places, extends over 100 km inland. We show that the degree of late-summer speedup is positively correlated with modelled runoff within the 10 glacier catchments of our survey, and that the pattern of late-summer speedup follows that of water routed at the ice sheet surface. In late-summer, ice within the largest catchment flows on average 48% faster than during winter, whereas changes in smaller catchments are less pronounced. Our observations show that the routing of seasonal runoff at the ice sheet surface plays an important role in shaping the magnitude and extent of seasonal ice sheet speedup.
Abstract:
An inflatable drill-string packer was used at Site 839 to measure the bulk in-situ permeability within basalts cored in Hole 839B. The packer was inflated at two depths, 398.2 and 326.9 mbsf; all on-board information indicated that the packer mechanically closed off the borehole, although apparently the packer hydraulically sealed the borehole only at 398.2 mbsf. Two pulse tests were run at each depth, two constant-rate injection tests were run at the first set, and four were run at the second. Of these, only the constant-rate injection tests at the first set yielded a permeability, calculated as ranging from 1 × 10⁻¹² to 5 × 10⁻¹² m². Pulse tests and constant-rate injection tests for the second set did not yield valid data. The measured permeability is an upper limit; if the packer leaked during the experiments, the basalt would be less permeable. In comparison, permeabilities measured at other Deep Sea Drilling Project and Ocean Drilling Program sites in pillow basalts and flows similar to those measured in Hole 839B are mainly about 10⁻¹³ to 10⁻¹⁴ m². Thus, if our results are valid, the basalts at Site 839 are more permeable than ocean-floor basalts investigated elsewhere. Based on other supporting evidence, we consider these results to be a valid measure of the permeability of the basalts. Temperature data and the geochemical and geotechnical properties of the drilled sediments all indicate that the site is strongly affected by fluid flow. The heat flow is very much less than expected in young oceanic basalts, probably a result of rapid fluid circulation through the crust. The geochemistry of pore fluids is similar to that of seawater, indicating seawater flow through the sediments, and sediments are uniformly underconsolidated for their burial depth, again indicating probable fluid flow. The basalts are highly vesicular.
However, the vesicularity can only account for part of the average porosity measured on the neutron porosity well log; the remainder of the measured porosity is likely present as voids and fractures within and between thin-bedded basalts. Core samples, together with porosity, density, and resistivity well-log data, show locations where the basalt section is thin bedded and probably has from 15% to 35% void and fracture porosity. Thus, the measured permeability seems reasonable with respect to the high measured porosity. Much of the fluid flow at Site 839 could be directed through highly porous and permeable zones within and between the basalt flows and in the sediment layer just above the basalt. Thus, the permeability measurements give an indication of where and how fluid flow may occur within the oceanic crust of the Lau Basin.
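As a generic illustration of how a constant-rate injection test relates flow rate, pressure, and permeability, the sketch below applies one-dimensional Darcy's law. This is only a minimal linear-flow sketch: actual packer-test analysis uses radial-flow solutions, and all numerical inputs here are hypothetical, chosen merely to land in the 10⁻¹² m² range reported for Hole 839B.

```python
def darcy_permeability(q, mu, length, area, dp):
    """Permeability k [m^2] for steady one-dimensional Darcy flow.

    q      volumetric flow rate [m^3/s]
    mu     fluid dynamic viscosity [Pa*s]
    length flow-path length [m]
    area   cross-sectional area of the flow path [m^2]
    dp     pressure drop over the path [Pa]
    """
    return q * mu * length / (area * dp)

# Hypothetical inputs (not measured values from Hole 839B).
k = darcy_permeability(q=1e-4, mu=1e-3, length=10.0, area=1.0, dp=1e6)
print(k)  # ~1e-12 m^2
```

The point of the sketch is the proportionality: for a fixed geometry, a more permeable rock accepts the same injection rate at a lower pressure drop, which is why a leaking packer (lower effective dp across the formation) makes the computed k an upper limit.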
Abstract:
The effects of particulate matter on the environment and public health have been widely studied in recent years. A number of studies in the medical field have tried to identify the specific effect of particulate exposure on human health, but these studies still lack agreement on the relative importance of particle size and origin for health effects. Nevertheless, air quality standards, like epidemiological attention, are moving towards a greater focus on the smaller particles. Current air quality standards only regulate the mass of particulate matter less than 10 μm in aerodynamic diameter (PM10) and less than 2.5 μm (PM2.5). The most reliable method for measuring Total Suspended Particles (TSP), PM10, PM2.5 and PM1 is the gravimetric method, since it directly measures PM concentration, guaranteeing effective traceability to international standards. This technique, however, cannot capture short-term intra-day variations of the atmospheric parameters that influence ambient particle concentration and size distribution (emission strengths of particle sources, temperature, relative humidity, wind direction and speed, and mixing height), or of human activity patterns, which may also vary over time periods considerably shorter than 24 hours. A continuous method to measure the number size distribution and total number concentration in the range 0.014–20 μm is the tandem system constituted by a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). In this paper, an uncertainty budget model for the measurement of airborne particle number, surface area and mass size distributions is proposed and applied to several typical aerosol size distributions.
The estimation of such an uncertainty budget presents several difficulties due to: i) the complexity of the measurement chain, and ii) the fact that the SMPS and APS can properly guarantee traceability to the International System of Units only in terms of number concentration. The surface area and mass concentrations must instead be estimated on the basis of separately determined average density and particle morphology.
Keywords: SMPS-APS tandem system, gravimetric reference method, uncertainty budget, ultrafine particles.
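The number-to-mass conversion that the abstract identifies as the weak link of the traceability chain can be sketched as follows, assuming spherical particles of a single, separately determined density. The bin diameters, number concentrations, and density below are hypothetical, and real SMPS-APS inversions must also account for particle morphology (shape factor), which this sketch ignores.

```python
import math

def mass_concentration(diameters_m, number_conc_per_m3, density_kg_m3):
    """Convert a binned number size distribution to a mass concentration.

    Each bin contributes N_i * rho * (pi/6) * d_i^3, i.e. the number
    concentration times the mass of one sphere of the bin diameter.
    Returns kg/m^3.
    """
    return sum(n * density_kg_m3 * math.pi / 6.0 * d ** 3
               for d, n in zip(diameters_m, number_conc_per_m3))

# Hypothetical two-bin distribution: 100 nm and 1 um particles.
m = mass_concentration([1e-7, 1e-6], [1e10, 1e8], 1000.0)
print(m)  # kg/m^3
```

Because mass scales with d³, the few large particles dominate the mass even when the small particles dominate the count, which is exactly why an uncertainty in the assumed density or morphology propagates so strongly into the surface area and mass estimates.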
Abstract:
Background, Aim and Scope The impact of air pollution on school children’s health is currently one of the key foci of international and national agencies. Of particular concern are ultrafine particles, which are emitted in large quantities, contain large concentrations of toxins and are deposited deeply in the respiratory tract. Materials and methods In this study, an intensive sampling campaign of indoor and outdoor airborne particulate matter was carried out in a primary school in February 2006 to investigate indoor and outdoor particle number (PN) and mass concentrations (PM2.5), and particle size distribution, and to evaluate the influence of outdoor air pollution on the indoor air. Results For outdoor PN and PM2.5, early morning and late afternoon peaks were observed on weekdays, consistent with traffic rush hours, indicating the predominant effect of vehicular emissions. However, the temporal variations of outdoor PM2.5 and PN concentrations occasionally showed extremely high peaks, mainly due to human activities such as cigarette smoking and the operation of a mower near the sampling site. The indoor PM2.5 level was mainly affected by the outdoor PM2.5 (r = 0.68, p<0.01), whereas the indoor PN concentration had some association with outdoor PN values (r = 0.66, p<0.01), even though the indoor PN concentration was occasionally influenced by indoor sources, such as cooking, cleaning and floor polishing activities. Correlation analysis indicated that the outdoor PM2.5 was inversely correlated with the indoor to outdoor PM2.5 ratio (I/O ratio) (r = -0.49, p<0.01), while the indoor PN had a weak correlation with the I/O ratio for PN (r = 0.34, p<0.01). Discussion and Conclusions The results showed that occupancy did not cause any major changes to the modal structure of particle number and size distribution, even though the I/O ratio was different for different size classes.
The I/O curves had a maximum value for particles with diameters of 100 – 400 nm under both occupied and unoccupied scenarios, whereas no significant difference in I/O ratio for PM2.5 was observed between occupied and unoccupied conditions. Inspection of the size-resolved I/O ratios in the preschool centre and the classroom suggested that the I/O ratio in the preschool centre was the highest for accumulation mode particles at 600 nm after school hours, whereas the average I/O ratios of both nucleation mode and accumulation mode particles in the classroom were much lower than those of Aitken mode particles. Recommendations and Perspectives The findings obtained in this study are useful for epidemiological studies to estimate the total personal exposure of children, and to develop appropriate control strategies for minimizing the adverse health effects on school children.
Abstract:
An elevated particle number concentration (PNC) observed during nucleation events could contribute significantly to the total particle load, and therefore to air pollution, in urban environments. A field measurement study of PNC was therefore conducted to investigate the temporal and spatial variations of PNC within the urban airshed of Brisbane, Australia. PNC was monitored at urban (QUT), roadside (WOO) and semi-urban (ROC) stations around the Brisbane region during 2009. During the morning traffic peak period, the highest relative fraction of PNC reached about 5% at QUT and WOO on weekdays. PNC peaks were also observed around noon, correlated with the highest solar radiation levels at all three stations, suggesting that high PNC levels were likely associated with new particle formation caused by photochemical reactions. Wind rose plots showed relatively higher PNC for the NE direction, associated with industrial pollution, accounting for 12%, 9% and 14% of overall PNC at QUT, WOO and ROC, respectively. Although there was no significant correlation between PNC at each station, the variation of PNC was well correlated among the three stations during regional nucleation events. In addition, PNC at ROC was significantly influenced by upwind urban pollution during nucleation burst events, with an average enrichment factor of 15.4. This study provides insight into the influence of regional nucleation events on PNC in the Brisbane region and is the first study to quantify the effect of urban pollution on semi-urban PNC through nucleation events. © 2012 Author(s).
Abstract:
Numerical investigation of mixed convection in a two-dimensional incompressible laminar flow over a horizontal flat plate, with a streamwise sinusoidal distribution of surface temperature, has been performed for different values of the Rayleigh number, the Reynolds number and the frequency of the periodic temperature, at constant Prandtl number and constant amplitude of the periodic temperature. A finite element method on non-uniform rectangular mesh elements, with a non-linear parametric solution algorithm, has been employed. The effect of each investigating parameter on the mixed convection flow characteristics has been studied, varying one parameter while keeping the others constant, to observe the hydrodynamic and thermal behavior. The fluid considered in this study is air, with Prandtl number 0.72. Results are obtained for a Rayleigh number range of 10² to 10⁴, Reynolds numbers ranging from 1 to 100 and periodic-temperature frequencies from 1 to 5. Isotherms, streamlines, and average and local Nusselt numbers are presented to show the effect of the different values of the aforementioned investigating parameters on fluid flow and heat transfer.
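The relationship between the local and average Nusselt numbers reported in such studies is simply a length-average of the local distribution, Nu_avg = (1/L) ∫ Nu_x dx. The sketch below illustrates this post-processing step with the trapezoidal rule on a hypothetical local profile; it is a generic illustration, not the paper's numerical scheme, and the sample values are invented.

```python
def average_nusselt(x, nu_local):
    """Length-averaged Nusselt number from sampled local values.

    x        sample positions along the plate, ascending
    nu_local local Nusselt number at each position
    Integrates Nu_x over x with the trapezoidal rule and divides
    by the plate length (x[-1] - x[0]).
    """
    integral = sum((nu_local[i] + nu_local[i + 1]) / 2.0 * (x[i + 1] - x[i])
                   for i in range(len(x) - 1))
    return integral / (x[-1] - x[0])

# Hypothetical local profile along a unit-length plate, decaying
# downstream as the thermal boundary layer thickens.
x = [0.0, 0.25, 0.5, 0.75, 1.0]
nu = [8.0, 6.0, 5.0, 4.5, 4.2]
print(average_nusselt(x, nu))
```

A constant local profile averages to itself, which is a quick sanity check on any such integration routine.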
Abstract:
Average speed enforcement is a relatively new approach gaining popularity throughout Europe and Australia. This paper reviews the evidence regarding the impact of this approach on vehicle speeds, crash rates and a number of additional road safety and public health outcomes. The economic and practical viability of the approach as a road safety countermeasure is also explored. A literature review, with an international scope, of both published and grey literature was conducted. There is a growing body of evidence to suggest a number of road safety benefits associated with average speed enforcement, including high rates of compliance with speed limits, reductions in average and 85th percentile speeds and reduced speed variability between vehicles. Moreover, the approach has been demonstrated to be particularly effective in reducing excessive speeding behaviour. Reductions in crash rates have also been reported in association with average speed enforcement, particularly in relation to fatal and serious injury crashes. In addition, the approach has been shown to improve traffic flow, reduce vehicle emissions and has also been associated with high levels of public acceptance. Average speed enforcement offers a greater network-wide approach to managing speeds that reduces the impact of time and distance halo effects associated with other automated speed enforcement approaches. Although comparatively expensive, it represents a highly reliable approach to speed enforcement that produces considerable returns on investment through reduced social and economic costs associated with crashes.
Abstract:
Data on free convection heat transfer to water and mercury are collected using a test rig in vertical annuli of three radius ratios, the walls of which are maintained at uniform temperatures. A theoretical analysis of the boundary layer equations has been attempted using a local similarity transformation and a double boundary layer approach. Correlations derived from the present theoretical analysis are compared with the analyses and experimental data available in the literature for non-metallic fluids, and also with the present experimental data on water and mercury. Generalised correlations are set up expressing the ratio of heat transferred by convection to heat transferred by pure conduction, and the Nusselt number, in terms of the Grashof, Rayleigh and Prandtl numbers, based on the theoretical analysis and the present data on mercury and water. The present generalised correlations agree with the reported and present data for non-metallic fluids and liquid metals with an average deviation of 9% and a maximum deviation of ±13.7%.
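The dimensionless groups in which such correlations are expressed are related by Ra = Gr · Pr. A minimal sketch of the definitions follows; the property values in the example are hypothetical, roughly water-like numbers, not data from the study above.

```python
def grashof(g, beta, dT, L, nu):
    """Grashof number Gr = g * beta * dT * L^3 / nu^2.

    g     gravitational acceleration [m/s^2]
    beta  volumetric thermal expansion coefficient [1/K]
    dT    wall-to-fluid temperature difference [K]
    L     characteristic length [m]
    nu    kinematic viscosity [m^2/s]
    """
    return g * beta * dT * L ** 3 / nu ** 2

def rayleigh(gr, pr):
    """Rayleigh number Ra = Gr * Pr."""
    return gr * pr

# Hypothetical water-like properties in a 5 cm annular gap.
gr = grashof(g=9.81, beta=2.1e-4, dT=10.0, L=0.05, nu=1e-6)
ra = rayleigh(gr, pr=7.0)
print(gr, ra)
```

The Prandtl number is the reason water (Pr ≈ 7) and mercury (Pr ≈ 0.02) need a generalised correlation: at equal Gr their Ra values differ by more than two orders of magnitude.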
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with the intruder and with clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average case CC of general functions is compared against the performance of a greedy algorithm. The average case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
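The greater-than (GT) function at the heart of the bit-exchange step can be illustrated with a naive MSB-first protocol: the two parties reveal bits from most significant to least significant and stop at the first disagreement. This is only a simple illustration of the GT primitive, not the bounded-communication-complexity protocol analysed in the paper.

```python
def gt_exchange(a, b, k):
    """Naive two-party comparison of k-bit integers a and b.

    Bits are revealed from most significant to least significant,
    stopping at the first disagreement.  Returns (a > b, bits_exchanged);
    each round costs two bits, one from each party.
    """
    bits = 0
    for i in range(k - 1, -1, -1):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        bits += 2
        if bit_a != bit_b:
            # The party holding the 1-bit has the larger value.
            return bit_a > bit_b, bits
    return False, bits  # all bits agree: the values are equal

print(gt_exchange(13, 9, 4))  # (True, 4): 1101 vs 1001 differ at bit 2
```

In the worst case (equal or nearly equal inputs) all 2k bits are exchanged, which is why characterizing the *average-case* cost of GT, as the paper does, is the interesting question.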
Abstract:
Due to semiconductor technology development, fault tolerance is important not only for safety-critical systems but also for general-purpose (non-safety-critical) systems. However, for general-purpose systems, instead of guaranteeing that deadlines are always met, it is important to minimize the average execution time (AET) while ensuring fault tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for the AET that include bus communication overhead, both for voting (active replication) and for rollback-recovery with checkpointing (RRC). For a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize the AET, including bus communication overhead, when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and the job-to-processor assignment when using voting, and (3) selecting the fault-tolerance scheme (voting or RRC) per job and defining its usage for each job. Experiments demonstrate significant savings in AET.
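The checkpoint-count trade-off in RRC can be illustrated with a deliberately simplified model (it omits the bus communication overhead and other terms of the paper's formulas): with n equidistant checkpoints, each segment of length T/n plus checkpoint cost c is retried until it runs error-free, so more checkpoints mean more overhead but cheaper rollbacks. All numbers below are toy values.

```python
def aet_rrc(T, c, p, n):
    """Expected execution time with n checkpoints under rollback-recovery.

    T  fault-free execution time
    c  overhead of taking one checkpoint
    p  error probability per unit of execution time
    A segment survives error-free with probability (1 - p) ** (T / n),
    so the expected number of attempts per segment is its reciprocal.
    """
    seg = T / n + c
    return n * seg * (1.0 - p) ** -(T / n)

def best_checkpoints(T, c, p, n_max=100):
    """Brute-force the checkpoint count minimizing the simplified AET."""
    return min(range(1, n_max + 1), key=lambda n: aet_rrc(T, c, p, n))

# With no errors, checkpoints are pure overhead, so one segment is best;
# with a nonzero error rate, additional checkpoints pay off.
print(best_checkpoints(T=100.0, c=1.0, p=0.0))   # 1
print(best_checkpoints(T=100.0, c=1.0, p=0.05))  # > 1
```

The paper's contribution is to make this kind of selection, together with processor assignment and scheme selection, exact via ILP while accounting for the shared bus; the sketch only conveys why an optimum number of checkpoints exists.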
Abstract:
A $k$-box $B=(R_1,...,R_k)$, where each $R_i$ is a closed interval on the real line, is defined to be the Cartesian product $R_1\times R_2\times ...\times R_k$. If each $R_i$ is a unit length interval, we call $B$ a $k$-cube. Boxicity of a graph $G$, denoted as $\boxi(G)$, is the minimum integer $k$ such that $G$ is an intersection graph of $k$-boxes. Similarly, the cubicity of $G$, denoted as $\cubi(G)$, is the minimum integer $k$ such that $G$ is an intersection graph of $k$-cubes. It was shown in [L. Sunil Chandran, Mathew C. Francis, and Naveen Sivadasan: Representing graphs as the intersection of axis-parallel cubes. MCDES-2008, IISc Centenary Conference, available at CoRR, abs/cs/0607092, 2006.] that, for a graph $G$ with maximum degree $\Delta$, $\cubi(G)\leq \lceil 4(\Delta +1)\log n\rceil$. In this paper, we show that, for a $k$-degenerate graph $G$, $\cubi(G) \leq (k+2) \lceil 2e \log n \rceil$. Since $k$ is at most $\Delta$ and can be much lower, this clearly is a stronger result. This bound is tight. We also give an efficient deterministic algorithm that runs in $O(n^2k)$ time to output an $8k(\lceil 2.42 \log n\rceil + 1)$-dimensional cube representation for $G$. An important consequence of the above result is that if the crossing number of a graph $G$ is $t$, then $\boxi(G)$ is $O(t^{1/4}{\lceil\log t\rceil}^{3/4})$. This bound is tight up to a factor of $O((\log t)^{1/4})$. We also show that, if $G$ has $n$ vertices, then $\cubi(G)$ is $O(\log n + t^{1/4}\log t)$. Using our bound for the cubicity of $k$-degenerate graphs, we show that the cubicity of almost all graphs in the $\mathcal{G}(n,m)$ model is $O(d_{av}\log n)$, where $d_{av}$ denotes the average degree of the graph under consideration.
Abstract:
We consider a discrete time system with packets arriving randomly at rate λ per slot to a fading point-to-point link, for which the transmitter can control the number of packets served in a slot by varying the transmit power. We provide an asymptotic characterization of the minimum average delay of the packets when the average transmitter power is a small positive quantity V more than the minimum average power required for queue stability. We show that the minimum average delay grows either as log(1/V) or as 1/V as V ↓ 0, for certain sets of values of λ. These sets are determined by the distribution of the fading gain, the maximum number of packets which can be transmitted in a slot, and the assumed transmit power function, as a function of the fading gain and the number of packets transmitted. We identify a case where the above behaviour of the tradeoff differs from that obtained from a previously considered model, in which the random queue length process is assumed to evolve on the non-negative real line.