75 results for PJ7765.M3 A6 1884


Relevance:

10.00%

Publisher:

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance; capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models offer no insight into the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes while providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have their outputs validated against data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of the models, rather than the empiricism of the macroscopic models currently used, and the models needed to be adaptable to variable operating conditions so that they could be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for their calibration and validation across a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations owing to variability in road rules and driving cultures. Not all observed manoeuvres were modelled; some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream (the kerb-lane traffic) exercises only a limited priority over the minor stream (the on-ramp traffic). Theory was established to account for this activity. Kerb-lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major-stream flow rate which excludes lane changers. Cowan's M3 headway model was calibrated for both streams, with on-ramp and total upstream flow required as input. The relationship between flow and the proportion of headways greater than 1 s differed between on-ramps where traffic leaves signalised intersections and those where it leaves unsignalised intersections.
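Cowan's M3 headway distribution used above has a simple closed form: the probability of a headway exceeding t is alpha * exp(-lambda * (t - delta)) for t >= delta, with lambda = alpha * q / (1 - delta * q). A minimal Python sketch, with illustrative flow and bunching values rather than the parameters calibrated in the study:

    import math

    def cowan_m3_survival(t, q, alpha, delta=1.0):
        """P(headway > t) under Cowan's M3 headway model.

        q     : stream flow rate (veh/s)
        alpha : proportion of free (unbunched) vehicles
        delta : minimum bunched headway (s); 1 s in this study
        """
        if t < delta:
            return 1.0
        lam = alpha * q / (1.0 - delta * q)  # decay rate of free headways
        return alpha * math.exp(-lam * (t - delta))

    # Illustrative: kerb-lane flow of 1200 veh/h, 70% free vehicles
    q = 1200 / 3600.0
    print(cowan_m3_survival(4.0, q, alpha=0.7))  # chance a 4 s gap is exceeded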
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor-stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor- and major-stream delays across all minor- and major-stream flows, and pseudo-empirical relationships were established to predict average delays. Major-stream average delays are limited to 0.5 s, insignificant compared with minor-stream delays, which reach infinity at capacity. Minor-stream delays were shown to be smaller when unsignalised rather than signalised intersections are located upstream of on-ramps, and smaller still when ramp metering is installed; smaller delays correspond to improved merge-area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model, and merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
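For context, under M3 major-stream headways the standard (absolute-priority) gap-acceptance capacity of the minor stream takes the Tanner/Troutbeck form sketched below; the limited-priority model described above modifies this by using the reduced major-stream flow that excludes lane changers. Values are illustrative, not the study's calibrated inputs:

    import math

    def m3_merge_capacity(q, alpha, t_c, t_f, delta=1.0):
        """Minor-stream (on-ramp) capacity in veh/s for M3 major-stream headways.

        t_c : critical gap (s); t_f : minimum follow-on time (s).
        Absolute-priority form; limited priority would reduce q.
        """
        lam = alpha * q / (1.0 - delta * q)
        return (alpha * q * math.exp(-lam * (t_c - delta))
                / (1.0 - math.exp(-lam * t_f)))

    q = 1500 / 3600.0  # major (kerb-lane) flow, veh/s
    cap = m3_merge_capacity(q, alpha=0.65, t_c=2.0, t_f=1.1)
    print(round(cap * 3600), "veh/h")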

Relevance:

10.00%

Publisher:

Abstract:

Acute exercise has been shown to exhibit different effects on human sensorimotor behavior; however, the causes and mechanisms of the responses are often not clear. The primary aim of the present study was to determine the effects of incremental running until exhaustion on sensorimotor performance and adaptation in a tracking task. Subjects were randomly assigned to a running group (RG), a tracking group (TG), or a running followed by tracking group (RTG), with 10 subjects assigned to each group. Treadmill running velocity was initially set at 2.0 m/s, increasing by 0.5 m/s every 5 min until exhaustion. Tracking consisted of 35 episodes (each 40 s) where the subjects' task was to track a visual target on a computer screen while the visual feedback was veridical (performance) or left-right reversed (adaptation). Resting electroencephalographic (EEG) activity was recorded before and after each experimental condition (running, tracking, rest). Tracking performance and the final amount of adaptation did not differ between groups. However, task adaptation was significantly faster in RTG compared to TG. In addition, increased alpha and beta power were observed following tracking in TG but not RTG, although exhaustive running failed to induce significant changes in these frequency bands. Our results suggest that exhaustive running can facilitate adaptation processes in a manual tracking task. Attenuated cortical activation following tracking in the exercise condition was interpreted to indicate cortical efficiency and exercise-induced facilitation of selective central processes during actual task demands.
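Resting alpha and beta power of the kind reported here is typically quantified from the EEG power spectral density. A minimal sketch using Welch's method; the 256 Hz sampling rate and the synthetic signal are stand-ins, not details from the study:

    import numpy as np
    from scipy.signal import welch

    def band_power(signal, fs, lo, hi):
        """Integrate the Welch PSD over the [lo, hi] Hz band."""
        f, psd = welch(signal, fs=fs, nperseg=2 * fs)
        mask = (f >= lo) & (f <= hi)
        return np.trapz(psd[mask], f[mask])

    fs = 256                              # assumed sampling rate (Hz)
    eeg = np.random.randn(60 * fs)        # stand-in for one resting-state channel
    alpha = band_power(eeg, fs, 8, 12)    # alpha band
    beta = band_power(eeg, fs, 13, 30)    # beta band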

Relevance:

10.00%

Publisher:

Abstract:

Peeling is an essential phase of the post-harvest and processing industry; however, the undesirable losses and waste that occur during the peeling stage are a persistent concern of the food processing sector. There are three methods of peeling fruits and vegetables, namely mechanical, chemical and thermal, depending on the class and type of produce. By comparison, the mechanical method is the most preferred: it keeps the edible portion of the produce fresh and causes less damage. Reducing material losses and increasing the quality of the process has a direct effect on the overall efficiency of the food processing industry, which calls for more study of the technological aspects of this industrial segment. In order to enhance the effectiveness of food industrial practices, it is essential to have a clear understanding of material properties and the behaviour of tissues under industrial processes. This paper presents a scheme of research that seeks to examine tissue damage of tough-skinned vegetables under mechanical peeling by developing a novel finite element (FE) model of the process using an explicit dynamic finite element analysis approach. In the proposed study, a nonlinear model capable of specifically simulating the peeling process will be developed. It is expected that currently unavailable information such as cutting force, maximum shearing force, shear strength, tensile strength and rupture stress will be quantified using the new FEA model. The outcomes will be used to optimise and improve current mechanical peeling methods for this class of vegetables and thereby enhance the overall effectiveness of processing operations. The paper also reviews the available literature and previous work in this area of research and identifies the current gap in the modelling and simulation of food processes.
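At the core of the explicit dynamic FE approach proposed here is central-difference time integration of the equations of motion, M*a = f_ext - f_int(u). A minimal one-dimensional sketch on a toy spring chain, standing in for an element mesh rather than the proposed peeling model:

    import numpy as np

    n, k, m, dt = 20, 1.0e3, 1.0e-3, 1.0e-5  # nodes, stiffness, nodal mass, step
    u, v = np.zeros(n), np.zeros(n)          # displacements and velocities
    f_ext = np.zeros(n)
    f_ext[-1] = 5.0                          # peel-like pull on the free end

    for _ in range(2000):                    # explicit time-stepping loop
        f = f_ext.copy()
        stretch = np.diff(u)                 # element elongations
        f[:-1] += k * stretch                # each spring pulls its left node forward
        f[1:] -= k * stretch                 # ...and its right node backward
        a = f / m                            # lumped-mass accelerations
        v += a * dt                          # central-difference style update
        u += v * dt
        u[0] = v[0] = 0.0                    # fixed support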

Relevance:

10.00%

Publisher:

Abstract:

This paper reviews biomaterials used in manufacturing bone plates, including advances in recent years and prospects for the future. It was found that, among all biomaterials, titanium and stainless steel alloys are currently the most common in the production of bone plates. Other biomaterials such as Mg alloys, Ta alloys, shape memory alloys (SMAs), carbon fibre composites and bioceramics are potentially suitable for bone plates because of their advantages in biocompatibility, bioactivity and biodegradability. However, today they are either not used in bone plates at all or limited to a few flexible small-size implants. This is mainly related to their poor mechanical properties, although production processes also play an important role. Therefore, further studies should be conducted to solve these problems and make these materials feasible for heavy-duty bone plates.

Relevance:

10.00%

Publisher:

Abstract:

The true stress-strain curve of railhead steel is required to investigate the behaviour of the railhead under wheel loading through elasto-plastic finite element (FE) analysis. To reduce the rate of wear, the railhead material is hardened through annealing and quenching. Australian standard rail sections are not fully hardened and hence exhibit a non-uniform distribution of material properties; using average properties in FE modelling can potentially induce error in the predicted plastic strains. Coupons obtained at varying depths of the railhead were therefore tested under axial tension, and the strains were measured using strain gauges as well as an image analysis technique known as Particle Image Velocimetry (PIV). The head-hardened steel exhibits three distinct zones of yield strength; expressed as ratios of the average yield strength given in the standard (σyr = 780 MPa), with depth given as a fraction of the head-hardened zone along the axis of symmetry, these are: (1.17 σyr, 0-20%), (1.06 σyr, 20%-80%) and (0.71 σyr, > 80%). The stress-strain curves exhibit a limited plastic zone, with fracture occurring at strains less than 0.1.
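The measured engineering curves are converted to the true stress-strain input required for elasto-plastic FE analysis via the standard relations (valid up to the onset of necking). A minimal sketch:

    import numpy as np

    def true_stress_strain(eng_strain, eng_stress):
        """eps_true = ln(1 + eps_eng); sigma_true = sigma_eng * (1 + eps_eng)."""
        eps_true = np.log(1.0 + eng_strain)
        sig_true = eng_stress * (1.0 + eng_strain)
        return eps_true, sig_true

    # Illustrative coupon point: 5% engineering strain at 900 MPa
    eps, sig = true_stress_strain(np.array([0.05]), np.array([900.0]))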

Relevance:

10.00%

Publisher:

Abstract:

Changing sodium intake from 70 to 200 mmol/day elevates blood pressure in normotensive volunteers by 6/4 mmHg. Older people, people with reduced renal function on a low-sodium diet, and people with a family history of hypertension are more likely to show this effect. The rise in blood pressure was associated with a fall in plasma volume, suggesting that plasma volume changes do not initiate hypertension. In normotensive individuals, the most common abnormality in membrane sodium transport induced by an extra sodium load was an increased permeability of the red cell to sodium. Some normotensive individuals also had an increased level of a plasma inhibitor of Na-K ATPase, and these individuals also appeared to have a rise in blood pressure. Sodium intake and blood pressure are related; the relationship differs between people and is probably controlled by the genetically inherited capacity of the systems involved in membrane sodium transport.

Relevance:

10.00%

Publisher:

Abstract:

For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients (HOG) or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms, e.g., constrained local models (CLMs), have demonstrated both high reliability and accuracy for unseen subjects. This begs the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models (AAMs) versus subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
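As a point of reference, the raw-pixel and HOG representations compared here can be extracted as follows. A minimal sketch assuming scikit-image and an already-aligned face crop; the descriptor parameters are illustrative, not those of the paper:

    import numpy as np
    from skimage.feature import hog

    face = np.random.rand(96, 96)          # stand-in for an aligned face crop

    pixel_rep = face.ravel()               # standard pixel representation
    hog_rep = hog(face,
                  orientations=8,
                  pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2),
                  block_norm='L2-Hys')     # descriptor robust to small misalignment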

Relevance:

10.00%

Publisher:

Abstract:

Even though titanium dioxide photocatalysis has been promoted as a leading green technology for water purification, many issues have hindered its application on a large commercial scale. For the materials scientist, the main issues have centred on the synthesis of more efficient materials and the investigation of degradation mechanisms, whereas for the engineer the main issues have been the development of appropriate models and the evaluation of intrinsic kinetic parameters that allow the scale-up or re-design of efficient large-scale photocatalytic reactors. In order to obtain intrinsic kinetic parameters, the reaction must be analysed and modelled considering the influence of the radiation field, pollutant concentrations and fluid dynamics. In this way, the kinetic parameters obtained are independent of reactor size and configuration and can subsequently be used for scale-up or for the development of entirely new reactor designs. This work investigates the intrinsic kinetics of phenol degradation over a titania film, chosen for the practicality of a fixed-film configuration over a slurry. A flat plate reactor was designed to allow control of reaction parameters including UV irradiance, flow rate, pollutant concentration and temperature. Particular attention was paid to the investigation of the radiation field over the reactive surface and to the issue of mass-transfer-limited reactions. The ability of different emission models to describe the radiation field was investigated and compared with actinometric measurements; the RAD-LSI model was found to give the best predictions over the conditions tested. Mass transfer often limits fixed-film reactors. The influence of this phenomenon was investigated with specifically planned sets of benzoic acid experiments and the adoption of the stagnant film model. The phenol mass transfer coefficient in the system was calculated to be k_m,phenol = 8.5815 x 10^-7 Re^0.65 (m/s). The data obtained from a wide range of experimental conditions, together with an appropriate model of the system, enabled determination of the intrinsic kinetic parameters. The experiments were performed at four irradiance levels (70.7, 57.9, 37.1 and 20.4 W m^-2), combined with three initial phenol concentrations (20, 40 and 80 ppm), to give a wide range of final pollutant conversions (from 22% to 85%). The simple model adopted was able to fit this wide range of conditions with only four kinetic parameters: two reaction rate constants (one for phenol and one for the family of intermediates) and their corresponding adsorption constants. The intrinsic kinetic parameter values were k_ph = 0.5226 mmol m^-1 s^-1 W^-1, k_I = 0.120 mmol m^-1 s^-1 W^-1, K_ph = 8.5 x 10^-4 m^3 mmol^-1 and K_I = 2.2 x 10^-3 m^3 mmol^-1. The flat plate reactor allowed the reaction to be investigated under two light configurations: liquid-side and substrate-side illumination. The latter is of particular interest for real-world applications, where light absorption due to turbidity and pollutants in the water stream to be treated can be a significant issue. The two light configurations allowed investigation of the effect of film thickness and determination of the optimal catalyst thickness.
The experimental investigation confirmed the predictions of a porous medium model developed to investigate the influence of diffusion, advection and photocatalytic phenomena inside the porous titania film, with the optimal thickness identified at 5 μm. The model used the intrinsic kinetic parameters obtained from the flat plate reactor to predict the influence of thickness and transport phenomena on the final observed phenol conversion without any correction factor; the excellent match between predictions and experimental results provided further proof of the quality of the parameters obtained with the proposed method.
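The four reported parameters are consistent with a competitive Langmuir-Hinshelwood rate form; a minimal sketch under that assumption (the exact rate expression used in the work may differ):

    def lh_rates(c_ph, c_int, irradiance,
                 k_ph=0.5226, K_ph=8.5e-4, k_i=0.120, K_i=2.2e-3):
        """Assumed competitive L-H rates, mmol m^-3 s^-1.

        c_ph, c_int : phenol / intermediates concentration (mmol m^-3)
        irradiance  : UV irradiance at the film (W m^-2)
        """
        denom = 1.0 + K_ph * c_ph + K_i * c_int
        r_ph = k_ph * irradiance * K_ph * c_ph / denom
        r_int = k_i * irradiance * K_i * c_int / denom
        return r_ph, r_int

    # 40 ppm phenol is roughly 425 mmol m^-3; highest tested irradiance
    print(lh_rates(425.0, 0.0, 70.7))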

Relevance:

10.00%

Publisher:

Abstract:

The Soufrière Hills volcano, Montserrat, West Indies, has undergone a series of dome growth and collapse events since the eruption began in 1995. Over 90% of the pyroclastic material produced has been deposited into the ocean. Sampling of these submarine deposits reveals that the pyroclastic flows mix rapidly and violently with the water as they enter the sea. The coarse components (pebbles to boulders) are deposited proximally from dense basal slurries to form steep-sided, near-linear ridges that intercalate to form a submarine fan. The finer ash-grade components are mixed into the overlying water column to form turbidity currents that flow over distances >30 km from the source. The total volume of pyroclastic material off the east coast of Montserrat exceeds 280 x 10^6 m^3, with 65% deposited in proximal lobes and 35% deposited as distal turbidites.

Relevance:

10.00%

Publisher:

Abstract:

Although ambient air pollution exposure has been linked with poor health in many parts of the world, no previous study has investigated its effect on morbidity in the city of Adelaide, South Australia. Objective: To explore the association between particulate matter (PM) and hospitalisations, including respiratory and cardiovascular admissions, in Adelaide, South Australia. Methods: For the study period September 2001 to October 2007, daily counts of all-cause, cardiovascular and respiratory hospital admissions were collected, as well as daily air quality data including concentrations of particulates, ozone and nitrogen dioxide. Visibility codes for present weather conditions identified days when airborne dust or smoke was observed. The associations between PM and hospitalisations were estimated using time-stratified case-crossover analyses controlling for covariates including temperature, relative humidity, other pollutants, day of the week and public holidays. Results: Mean PM10 concentrations were higher in the warm season, whereas PM2.5 concentrations were higher in the cool season. Hospital admissions were associated with PM10 in the cool season and with PM2.5 in both seasons. No significant effect of PM on all-age respiratory admissions was detected; however, cardiovascular admissions were associated with both PM2.5 and PM10 in the cool season, with the largest effect for PM2.5 (a 4.48% increase per 10 μg/m^3 increase in PM2.5; 95% CI: 0.74%, 8.36%). These findings suggest that despite the city's relatively low levels of air pollution, PM concentrations are associated with increases in morbidity in Adelaide. Further studies are needed to investigate the sources of PM which may be contributing to the larger cool-season effects.
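In case-crossover (conditional logistic regression) analyses of this kind, the percent increase per 10 μg/m^3 follows directly from the fitted coefficient. A minimal sketch of the conversion, with a beta chosen to reproduce the reported estimate:

    import math

    def percent_increase(beta, delta=10.0):
        """Percent change in admissions per `delta` ug/m^3 rise in PM."""
        return (math.exp(beta * delta) - 1.0) * 100.0

    print(percent_increase(0.00438))  # ~4.48% per 10 ug/m^3 of PM2.5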

Relevance:

10.00%

Publisher:

Abstract:

The conversion of biomass waste, in the form of mahogany seed waste, into bio-fuel and activated carbon in a fixed bed pyrolysis reactor is considered in this study. Mahogany seed in particle form is pyrolysed in an externally heated fixed bed reactor with nitrogen as the carrier gas. The reactor is heated from 400°C to 600°C using an external heater in which rice husk and charcoal are used as the heater biomass fuel. Reactor bed temperature, running time and feed particle size were varied to find the optimum operating conditions of the system; these parameters were found to influence the product yields to a large extent. Maximum liquid and char yields of 49 wt% and 35 wt%, respectively, were obtained at a reactor bed temperature of 500°C with a running time of 90 minutes. The pyrolysis oil acquired at these optimal process conditions was analysed for some of its properties as an alternative fuel. The oil possesses a comparable flame temperature, favourable flash point and reasonable viscosity, along with a somewhat higher density: the kinematic viscosity of the derived fuel is 3.8 cSt and the density is 1525 kg/m^3. The higher calorific value is found to be 32.4 MJ/kg, which is significantly higher than other biomass-derived fuels. Moderate adsorption capacity of the prepared activated carbon for methyl blue and tea water was also revealed.

Relevance:

10.00%

Publisher:

Abstract:

Maize streak virus (MSV) contributes significantly to the problem of extremely low African maize yields. Whilst a diverse range of MSV and MSV-like viruses are endemic in sub-Saharan Africa and neighbouring islands, only a single group of maize-adapted variants, MSV subtypes A1-A6, causes severe enough disease in maize to influence yields substantially. In order to assist in designing effective strategies to control MSV in maize, a large survey covering 155 locations was conducted to assess the diversity, distribution and genetic characteristics of the Ugandan MSV-A population. PCR-restriction fragment-length polymorphism analyses of 391 virus isolates identified 49 genetic variants. Sixty-two full-genome sequences were determined, 52 of which were detectably recombinant. All but two recombinants contained predominantly MSV-A1-like sequences. Of the ten distinct recombination events observed, seven involved inter-MSV-A-subtype recombination and three involved intra-MSV-A1 recombination. One of the intra-MSV-A1 recombinants, designated MSV-A1 UgIII, accounted for >60% of all MSV infections sampled throughout Uganda. Although recombination may be an important factor in the emergence of novel geminivirus variants, its characteristics in MSV are demonstrated to be quite different from those observed in related African cassava-infecting geminivirus species.

Relevance:

10.00%

Publisher:

Abstract:

We conducted an in-situ X-ray micro-computed tomography heating experiment at the Advanced Photon Source (USA) to dehydrate an unconfined 2.3 mm diameter cylinder of Volterra gypsum. We used a purpose-built X-ray transparent furnace to heat the sample to 388 K for a total of 310 min, acquiring a three-dimensional time-series tomography dataset comprising nine time steps. The voxel size of 2.2 μm^3 proved sufficient to pinpoint reaction initiation and the organization of drainage architecture in space and time. We observed that dehydration commences across a narrow front, which propagates from the margins to the centre of the sample in more than four hours. The advance of this front can be fitted with a square-root function, implying that the initiation of the reaction in the sample can be described as a diffusion process. Novel parallelized computer codes allow the geometry of the porosity and the drainage architecture to be quantified from the very large tomographic datasets (2048^3 voxels) in unprecedented detail. We determined the position, volume, shape and orientation of each resolvable pore and tracked these properties over the duration of the experiment. We found that the pore-size distribution follows a power law. Pores tend to be anisotropic but rarely crack-shaped, and have a preferred orientation, likely controlled by a pre-existing fabric in the sample. With ongoing dehydration, pores coalesce into a single interconnected pore cluster that is connected to the surface of the sample cylinder and provides an effective drainage pathway. Our observations can be summarized in a model in which gypsum is stabilized by thermal expansion stresses and locally increased pore fluid pressures until the dehydration front approaches to within about 100 μm. Then the internal stresses are released and dehydration happens efficiently, producing new pore space. Pressure release, the production of pores and the advance of the front are coupled in a feedback loop.
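The square-root behaviour of the front advance can be checked with a one-parameter least-squares fit; a minimal sketch with illustrative (not measured) front positions:

    import numpy as np
    from scipy.optimize import curve_fit

    def front(t, a):
        """Diffusion-style advance: position grows with the square root of time."""
        return a * np.sqrt(t)

    t_min = np.array([40.0, 110.0, 190.0, 270.0, 310.0])   # time steps (min)
    x_um = np.array([260.0, 430.0, 565.0, 675.0, 720.0])   # illustrative only (um)
    (a_fit,), _ = curve_fit(front, t_min, x_um)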

Relevance:

10.00%

Publisher:

Abstract:

A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment by the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the clocks of different robots can disagree by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins by referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised, distributed image acquisition system over a large geographic area; a real-world application of this functionality is a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used to synchronise the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement: the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of the Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 μs, well below the 1 ms target.
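The clock offset estimation at the heart of IEEE-1588 PTP, elements of which the BabelFuse firmware protocol builds on, reduces to two-way time transfer over an assumed-symmetric path. A minimal sketch with illustrative timestamps:

    def ptp_offset_and_delay(t1, t2, t3, t4):
        """Two-way time transfer as used by IEEE-1588 PTP.

        t1: master sends Sync        t2: slave receives it
        t3: slave sends Delay_Req    t4: master receives it
        Assumes a symmetric network path between master and slave.
        """
        offset = ((t2 - t1) - (t4 - t3)) / 2.0
        delay = ((t2 - t1) + (t4 - t3)) / 2.0
        return offset, delay

    # Slave clock 250 us ahead over a 1.2 ms one-way path
    off, d = ptp_offset_and_delay(0.0, 0.00145, 0.00200, 0.00295)
    print(off, d)  # 0.00025 s offset, 0.0012 s path delay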