893 results for Visual Divided Field


Relevance:

20.00%

Publisher:

Abstract:

Over the last few decades, electric and electromagnetic fields have come to play an important role as stimulation and therapeutic tools in biology and medicine. In particular, low-magnitude, low-frequency pulsed electromagnetic fields have shown significant positive effects on bone fracture healing and on the treatment of some bone diseases. Nevertheless, to date, little attention has been paid to the possible effect of high-frequency, high-magnitude pulsed electromagnetic fields (pulse power) on the functional behaviour and biomechanical properties of bone tissue. Bone is a dynamic, complex organ made of bone material (organic components, inorganic mineral and water) known as the extracellular matrix, and bone cells (the living part). The cells give bone the capability of self-repair by adapting it to its mechanical environment. The particular composite of bone material, a collagen matrix reinforced with mineral apatite, gives bone its distinctive biomechanical properties in an anisotropic, inhomogeneous structure. This project investigated the possible effect of pulse power signals on cortical bone characteristics by evaluating the fundamental mechanical properties of the bone material. A positive buck-boost converter was used to generate adjustable high-voltage, high-frequency pulses of up to 500 V and 10 kHz. Because bone shows distinctive characteristics in different loading modes, its functional behaviour in response to pulse power excitation was elucidated using three conventional mechanical tests: three-point bending in the elastic region, and tensile and compressive loading until failure. Flexural stiffness, tensile and compressive strength, hysteresis and total fracture energy were determined as measures of the main bone characteristics.
To assess bone structure variation due to pulse power excitation in greater depth, a supplementary fractographic study was conducted using scanning electron micrographs of the tensile fracture surfaces. Furthermore, a non-destructive ultrasonic technique was applied to determine and compare bone elasticity before and after pulse power stimulation. This method made it possible to evaluate the stiffness of millimetre-sized bone samples in three orthogonal directions. According to the results of the non-destructive bending test, the flexural elasticity of cortical bone samples appeared to remain unchanged by pulse power excitation. Similar results were observed in the bone stiffness for all three orthogonal directions obtained from the ultrasonic technique and in the stiffness obtained from the compression test. In the tensile tests, no significant changes were found in the tensile strength or total strain energy absorption of the bone samples exposed to pulse power compared with the control samples. Likewise, the apparent microstructure of the fracture surfaces of the exposed samples (including porosity and microcrack diffusion) showed no significant variation due to pulse power stimulation. Nevertheless, the compressive strength and toughness of millimetre-sized samples appeared to increase when the samples were exposed for 66 hours to a high-power pulsed electromagnetic field applied through screws with a small contact cross-section (increasing the pulsed electric field intensity), compared with the control samples. This points to the different load-bearing characteristics of cortical bone tissue in response to pulse power excitation and to the effectiveness of this type of stimulation on smaller samples. Overall, these results suggest that although pulse power stimulation can influence the arrangement or quality of the collagen network, augmenting bone strength and toughness, it apparently did not affect the mineral phase of the cortical bone material.
The results also confirmed that the indirect application of a high-power pulsed electromagnetic field at 500 V and 10 kHz through capacitive coupling was athermal and did not damage the structure of the bone tissue.
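The flexural stiffness mentioned above is conventionally obtained from the slope of the load-deflection curve in a three-point bending test via standard beam theory. The sketch below shows that calculation for a rectangular beam; the sample dimensions and slope are illustrative assumptions, not figures from this thesis.

```python
# Flexural modulus from a three-point bending test (elastic region).
# Standard beam theory: E = (L^3 * m) / (48 * I), where m is the slope
# of the load-deflection curve and I the second moment of area.

def flexural_modulus(span_mm, width_mm, height_mm, slope_n_per_mm):
    """Flexural modulus in MPa for a rectangular beam."""
    I = width_mm * height_mm**3 / 12.0               # second moment of area, mm^4
    return span_mm**3 * slope_n_per_mm / (48.0 * I)  # N/mm^2 = MPa

# Example (hypothetical): 40 mm span, 4 mm x 2 mm cortical bone beam,
# measured slope of 50 N/mm in the linear elastic region.
E = flexural_modulus(40.0, 4.0, 2.0, 50.0)
print(f"Flexural modulus: {E / 1000:.1f} GPa")
```

The resulting value (25 GPa) is in the range typically reported for cortical bone, which is why such illustrative numbers were chosen.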


Almost every nation on the planet is experiencing increases in both the number and proportion of older adults. Research has shown that older adults use technology less intuitively than younger adults and have more difficulty using products effectively. With an ever-increasing population of older adults, it is necessary to understand why they often struggle with technology, which is becoming more and more important in day-to-day living. Intuitive use of products is grounded in familiarity and prior experience. The aims of this research were twofold: (i) to examine the differences in familiarity between younger and older adults, to see whether this could explain the difficulties faced by some older adults; and (ii) to develop investigational methods to assist designers in identifying familiarity in prospective users. Two empirical studies were conducted. The first experiment was conducted in the field with 32 participants, divided across four age groups (18-44, 45-59, 60-74 and 75+). It took place in the participants' homes, with a product they were familiar with. Familiarity was measured through analysis of data collected via interviews, observation and retrospective protocol. The results of this study show that the youngest group demonstrated significantly higher levels of familiarity with products they own than the 60-74 and 75+ age groups. There were no significant differences between the 18-44 and 45-59 age groups, nor among the three oldest age groups. The second experiment was conducted with 32 participants across the same four age groups, using four everyday products. The results of Experiment 2 show that, with previously unused products, younger adults demonstrate significantly higher levels of familiarity than the three older age groups, with no significant differences among the latter.
The results of these two studies show that younger adults are more familiar with contemporary products than older adults, and that in terms of familiarity, older adults do not differ significantly as they age. The results also show that the 45-59 age group demonstrated higher levels of familiarity with products they have owned than with those they have not, whereas the two older age groups showed no such difference. This suggests that interacting with products over time increases familiarity more for middle-aged adults than for older adults. As a result of this research, a method has been identified that designers can use to gauge potential users' product familiarity. The method is easy to use, quick, low cost, highly mobile and flexible, and allows for easy data collection and analysis. A tool has been designed that assists designers and researchers in applying the method. Designers can integrate the knowledge gained from this tool into the design process, resulting in more intuitive products. Such products may improve the quality of life of older adults through better societal integration, better health management and more widespread use of communications technology.
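A between-groups comparison of familiarity scores across four age groups, as described above, would conventionally use a one-way ANOVA followed by pairwise tests. The sketch below implements the F statistic directly; the scores are made up purely to illustrate the analysis and do not reproduce the study's data.

```python
# One-way ANOVA across four age groups (familiarity scores are
# illustrative, not the study's data).
def one_way_anova(*groups):
    """Return the F statistic for a one-way ANOVA."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical familiarity scores (0-10 scale) per age group.
f_stat = one_way_anova(
    [8.1, 7.9, 8.4, 7.6, 8.0, 8.2, 7.8, 8.3],   # 18-44
    [7.5, 7.2, 7.8, 7.0, 7.6, 7.3, 7.4, 7.1],   # 45-59
    [6.1, 5.8, 6.4, 5.9, 6.2, 6.0, 5.7, 6.3],   # 60-74
    [5.6, 5.2, 5.9, 5.4, 5.7, 5.3, 5.5, 5.8],   # 75+
)
print(f"F(3, 28) = {f_stat:.1f}")
```

A large F here would be followed up with post-hoc pairwise comparisons to locate which groups differ, as in the experiments reported above.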


The purpose of this study was to investigate the effect of very small air gaps (less than 1 mm) on the dosimetry of the small photon fields used for stereotactic treatments. Measurements were performed with optically stimulated luminescent dosimeters (OSLDs) for 6 MV photons on a Varian 21iX linear accelerator with a Brainlab μMLC attachment, for square field sizes down to 6 mm × 6 mm. Monte Carlo simulations were performed using the EGSnrc C++ user code cavity. The Monte Carlo model used in this study was found to accurately simulate the OSLD measurements on the linear accelerator. For the 6 mm field size, a 0.5 mm air gap upstream of the active area of the OSLD caused a 5.3% dose reduction relative to a Monte Carlo simulation with no air gap. A hypothetical 0.2 mm air gap caused a dose reduction of more than 2%, emphasizing that even the tiniest air gaps can cause a large reduction in measured dose. The negligible effect at an 18 mm field size illustrated that the electronic disequilibrium caused by such small air gaps affects only the dosimetry of very small fields. When performing small field dosimetry, care must be taken to avoid any air gaps, which can often be present when inserting detectors into solid phantoms; it is therefore recommended that very small field dosimetry be performed in liquid water. When using small photon fields, sub-millimetre air gaps can also affect patient dosimetry if they cannot be spatially resolved on a CT scan. However, the effect on the patient is debatable: the dose reduction caused by a 1 mm air gap, starting at 19% in the first 0.1 mm behind the gap, decreases to less than 5% after just 2 mm, and electronic equilibrium is fully re-established after just 5 mm.


Recent efforts in mission planning for underwater vehicles have utilised predictive ocean models to aid navigation and optimal path planning and to drive opportunistic sampling. Although these models provide information at unprecedented resolutions and have proven to increase accuracy and effectiveness in multiple campaigns, most are deterministic in nature. Thus, their predictions cannot be incorporated into probabilistic planning frameworks, nor do they provide any metric of the variance or confidence of the output variables. In this paper, we provide an initial investigation into determining the confidence of ocean model predictions based on the results of multiple field deployments of two autonomous underwater vehicles. For multiple missions conducted over a two-month period in 2011, we compare actual vehicle executions with simulations of the same missions through the Regional Ocean Modeling System (ROMS) in an ocean region off the coast of southern California. This comparison provides a qualitative analysis of the current velocity predictions for areas within the selected deployment region. Ultimately, we present a spatial heat map of the correlation between the ocean model predictions and the actual mission executions. Knowing where the model provides unreliable predictions can be incorporated into planners to increase the utility and applicability of the deterministic estimations.
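The heat-map construction described above amounts to computing, for each grid cell of the deployment region, the correlation between modelled and observed current velocities from the missions that crossed that cell. A minimal numpy sketch, with synthetic data standing in for the Regional Ocean Modeling System predictions and the AUV measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-cell samples of predicted and observed
# current speed over a 10 x 10 deployment grid (20 samples per cell).
predicted = rng.normal(0.3, 0.1, size=(10, 10, 20))
observed = predicted + rng.normal(0.0, 0.05, size=(10, 10, 20))

def correlation_heatmap(pred, obs):
    """Pearson correlation between prediction and observation per grid cell."""
    rows, cols, _ = pred.shape
    heat = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            heat[i, j] = np.corrcoef(pred[i, j], obs[i, j])[0, 1]
    return heat

heat = correlation_heatmap(predicted, observed)
print(f"mean per-cell correlation: {heat.mean():.2f}")
```

Cells with low correlation would then be flagged as regions where a planner should discount the deterministic model output.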


To develop a rapid, optimized technique for wide-field imaging of the human corneal subbasal nerve plexus, a dynamic fixation target was developed and, coupled with semiautomated tiling software, used to create a rapid method of capturing and montaging multiple corneal confocal microscopy images. To illustrate the utility of this technique, wide-field maps of the subbasal nerve plexus were produced in two participants with diabetes, one with and one without neuropathy. The technique produced montages of the central 3 mm of the subbasal corneal nerve plexus. The maps appear to show a general reduction in the number of nerve fibers and branches in the diabetic participant with neuropathy compared with the individual without neuropathy. This novel technique will allow more routine and widespread use of subbasal nerve plexus mapping in clinical and research settings. The significant reduction in the time needed to image the corneal subbasal nerve plexus should expedite studies of larger groups of diabetic patients and of those with other conditions affecting nerve fibers. The inferior whorl and surrounding areas may show the greatest loss of nerve fibers in individuals with diabetic neuropathy, but this should be investigated further in a larger cohort.


Background

The onsite treatment of sewage and disposal of effluent within the premises is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the seemingly low technology of these systems, failure is common and in many cases leads to adverse public health and environmental consequences. It is therefore important that careful consideration is given to the design and location of onsite sewage treatment systems, which requires an understanding of the factors that influence treatment performance. Subsurface effluent absorption is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Additionally, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process, so location-specific factors play a key role.

The project

The primary aims of the research project are:
• to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site;
• to identify important areas where there is currently a lack of relevant research knowledge and further investigation is needed.

These tasks were undertaken with the objective of facilitating the development of performance-based planning and management strategies for onsite sewage treatment. The primary focus of the research project has been on septic tanks, and by implication the investigation has been confined to subsurface soil absorption systems. The design of, and the treatment processes taking place within, the septic tank chamber itself did not form part of the investigation. In the evaluation, the treatment performance of soil absorption systems is related to the physico-chemical characteristics of the soil. Five broad categories of soil types were considered for this purpose. The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each soil type. In the initial phase of the investigation, although the majority of the systems evaluated were septic tanks, a small number of aerobic wastewater treatment systems (AWTS) were also included, primarily to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of systems of each type investigated was relatively small, which does not permit a statistical analysis of the results obtained. This is an important issue considering the large number of parameters that can influence treatment performance and their wide variability.

The report

This report is the second in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The work undertaken included site investigation and testing of sewage effluent and soil samples taken at distances of 1 and 3 m from the effluent disposal area. The project component discussed in the current report formed the basis for the more detailed investigation undertaken subsequently. The outcomes from the initial studies are discussed, enabling the identification of factors to be investigated further. Primarily, this report contains the results of the field monitoring program, the initial analysis undertaken and preliminary conclusions.

Field study and outcomes

Commencing with a list of 252 locations in 17 different suburbs, a total of 22 sites in 21 different locations were monitored. These sites were selected based on predetermined criteria. Obtaining house owners' agreement to participate in the monitoring study was not an easy task, and six of the sites had to be abandoned subsequently for various reasons. The remaining sites included eight septic systems with subsurface effluent disposal treating blackwater or combined black and greywater, two sites treating greywater only and six sites with AWTS. In addition to collecting effluent and soil samples from each site, a detailed field investigation, including a series of house owner interviews, was also undertaken. Significant observations were made during the field investigations. In addition to site-specific observations, the general observations include the following:
• Most house owners are unaware of the need for regular maintenance. Sludge removal had not been undertaken in any of the septic tanks monitored. Even in the case of aerated wastewater treatment systems, the regular inspections by the supplier are confined to the treatment system and do not include the effluent disposal system. As the investigations revealed, this is not a satisfactory situation.
• In the case of separate greywater systems, only one site had a suitably functioning disposal arrangement. The general practice is to employ a garden hose to siphon the greywater for surface irrigation of the garden.
• In most sites, the soil profile showed significant lateral percolation of effluent, so the flow of effluent to surface water bodies is a distinct possibility.
• The need to investigate subsurface conditions to a depth greater than that required for the standard percolation test was clearly evident. On occasion, seemingly permeable soil was found to have an underlying impermeable layer, or vice versa.

The important outcomes from the testing program include the following:
• Though effluent treatment is influenced by the physico-chemical characteristics of the soil, it was not possible to distinguish between the treatment performance of different soil types. This leads to the hypothesis that effluent renovation is significantly influenced by the combination of various physico-chemical parameters rather than by single parameters, which would make the processes involved strongly site specific.
• Generally, the improvement in effluent quality appears to take place only within the initial 1 m of travel, without any appreciable improvement thereafter. This relates only to the degree of improvement obtained and does not imply that the quality is satisfactory. It does, however, call into question the value of adopting setback distances from sensitive water bodies.
• Use of AWTS for sewage treatment may provide effluent of higher quality suitable for surface disposal. On the whole, however, after 1-3 m of travel through the subsurface it was not possible to distinguish any significant differences in quality between effluent originating from septic tanks and from AWTS.
• In comparison with effluent quality from a conventional wastewater treatment plant, most systems were found to perform satisfactorily with regard to total nitrogen. The success rate was much lower in the case of faecal coliforms. It is important to note that five of the systems exhibited problems with effluent disposal, resulting in surface flow, which could lead to contamination of surface water courses.
• The ratio of TDS to EC is about 0.42, whereas the optimum recommended value for the use of treated effluent in irrigation is about 0.64. This means a higher salt content in the effluent than is advisable for irrigation. A consequence would be the accumulation of salts to a concentration harmful to crops or the landscape unless adequate leaching is present. These relatively high EC values are present even in the case of AWTS where surface irrigation of effluent is being undertaken. It is important to note that this is not an artefact of the treatment process but rather an indication of the quality of the wastewater generated in the household. This clearly indicates the need for further research to evaluate the suitability of various soil types for the surface irrigation of effluent where the TDS/EC ratio is less than 0.64.
• Effluent percolating through the subsurface absorption field may travel in the form of dilute pulses, moving through the soil profile as fronts of elevated parameter levels.
• The downward flow of effluent and leaching of the soil profile is evident in the case of podsolic, lithosol and krasnozem soils. Lateral flow of effluent is evident in the case of prairie soils. Gleyed podsolic soils indicate poor drainage and ponding of effluent.

In the current phase of the research project, a number of chemical indicators such as EC, pH and chloride concentration were employed to investigate the extent of effluent flow and to understand how soil renovates effluent. The soil profile, especially its texture, structure and moisture regime, was examined in an engineering sense to determine the effect of the movement of water into and through the soil. However, not only the physical characteristics but also the chemical characteristics of the soil play a key role in the effluent renovation process. Therefore, in order to understand the complex processes taking place in a subsurface effluent disposal area, it is important that the identified influential parameters are evaluated using soil chemistry concepts. Consequently, the primary focus of the next phase of the research project will be to identify linkages between the various important parameters. The research thus envisaged will help develop robust criteria for evaluating the performance of subsurface disposal systems.
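The TDS-to-EC comparison reported above is a simple ratio check. The sketch below, with illustrative field values (the site names and readings are hypothetical), shows how effluent samples might be screened against the 0.64 guideline cited in the report:

```python
# Screen effluent samples against the recommended TDS/EC ratio of 0.64
# for irrigation reuse. Site names and readings are illustrative only.
RECOMMENDED_RATIO = 0.64

samples = {
    "site_A": {"tds_mg_per_l": 546, "ec_us_per_cm": 1300},
    "site_B": {"tds_mg_per_l": 896, "ec_us_per_cm": 1400},
}

for site, s in samples.items():
    ratio = s["tds_mg_per_l"] / s["ec_us_per_cm"]
    status = "within guideline" if ratio >= RECOMMENDED_RATIO else "outside guideline"
    print(f"{site}: TDS/EC = {ratio:.2f} -> {status}")
```

With these numbers, site_A reproduces the 0.42 ratio reported in the field study and would be flagged, while site_B sits at the recommended 0.64.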


This paper presents a reactive Sense and Avoid approach using spherical image-based visual servoing. Avoidance of point targets in the lateral or vertical plane is achieved without requiring an estimate of range. Simulated results for static and dynamic targets are provided using a realistic model of a small fixed wing unmanned aircraft.


This paper presents a reactive collision avoidance method for small unmanned rotorcraft using spherical image-based visual servoing. Only a single point feature is used to guide the aircraft in a safe spiral-like trajectory around the target, whilst a spherical camera model ensures the target always remains visible. A decision strategy to stop the avoidance control is derived based on the properties of spiral-like motion, and the effect of accurate range measurements on the control scheme is discussed. We show that using a poor range estimate does not significantly degrade collision avoidance performance, thus relaxing the need for accurate range measurements. We present simulated and experimental results using a small quadrotor to validate the approach.
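The spiral-like trajectory can be illustrated with a simple planar kinematic sketch: if the vehicle holds the target at a constant relative bearing slightly beyond 90 degrees, the range to the target grows while the vehicle circles it, and no range estimate is needed to generate the motion. This is an illustrative model of the geometry only, not the controller from the paper (which operates in the spherical image space).

```python
import math

# Planar sketch: hold the target (at the origin) at a fixed relative
# bearing beta. For beta > 90 degrees the range grows steadily while
# the vehicle circles the target -- a spiral-like avoidance path.
def simulate_spiral(beta_deg=100.0, speed=1.0, dt=0.05, steps=400):
    x, y = 5.0, 0.0                  # start 5 m from the target
    ranges = [math.hypot(x, y)]
    beta = math.radians(beta_deg)
    for _ in range(steps):
        bearing_to_target = math.atan2(-y, -x)   # world-frame bearing
        heading = bearing_to_target - beta       # keep target at +beta
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        ranges.append(math.hypot(x, y))
    return ranges

r = simulate_spiral()
print(f"range to target: {r[0]:.2f} m -> {r[-1]:.2f} m")
```

The continuous-time range rate for this geometry is -v·cos(beta), so any fixed bearing beyond 90 degrees guarantees the range increases regardless of how far away the target actually is.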


When wheels pass over insulated rail joints (IRJs), a vertical impact force is generated. The ability to measure this impact force is valuable, as the force signature helps in understanding the behaviour of IRJs, in particular their potential for failure. The impact forces are thought to be one of the main factors causing damage to the IRJ and track components, and study of the deterioration mechanism helps in finding new methods to improve the service life of IRJs in track. In this research, a strain-gage-based wheel load detector is employed, for the first time, to measure the wheel-rail contact-impact force at an IRJ in a heavy haul rail line. In this technique, the strain gages are installed within the IRJ assembly without disturbing its structural integrity and are arranged in a full Wheatstone bridge to form a wheel load detector. The instrumented IRJ was first tested and calibrated in the laboratory and then installed in the field. For comparison purposes, a reference rail section was also instrumented with the same strain gage pattern as the IRJ. This paper presents the measurement technique, the instrumentation process and the tests, as well as some typical data obtained from the field and the inferences drawn.
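For a full Wheatstone bridge wired so that all four active gage contributions add, the bridge output is, to first order, the gage factor times the strain times the excitation voltage. The sketch below uses nominal values (gage factor, excitation voltage, strain level) that are assumptions for illustration, not figures from the paper.

```python
# Full-bridge strain gage output, first-order approximation:
# Vout = GF * eps * Vex when all four arms contribute additively.
# Gage factor and excitation voltage are illustrative assumptions.
GAUGE_FACTOR = 2.0      # typical for metallic foil gages
V_EXCITATION = 5.0      # bridge excitation, volts

def bridge_output_mv(strain):
    """Bridge output in millivolts for a given mechanical strain."""
    return GAUGE_FACTOR * strain * V_EXCITATION * 1000.0

# 500 microstrain under a passing wheel (hypothetical):
print(f"bridge output: {bridge_output_mv(500e-6):.2f} mV")
```

The millivolt-level output is why such detectors are calibrated against known loads in the laboratory before field installation, as described above.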


Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions, suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
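One way to prototype the accuracy impact of a user-defined mantissa width, before committing a design to FPGA hardware, is to quantize standard doubles in software. A hedged sketch of that idea (not the authors' designs) using only the Python standard library:

```python
import math

def round_to_precision(x, mantissa_bits):
    """Round x to a floating-point value with the given number of
    mantissa (fraction) bits, keeping the full double exponent range."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)             # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# pi at a few custom precisions:
for bits in (8, 12, 24):
    approx = round_to_precision(math.pi, bits)
    print(f"{bits:2d} mantissa bits: error {abs(approx - math.pi):.2e}")
```

Sweeping the mantissa width like this gives a quick estimate of the precision a given application actually needs, which is the kind of trade-off a custom FPGA floating point format exploits.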


This is the sixth part of a Letter from the Editor series presenting the results of ongoing research into the dynamics of the evolution of the field of project management and its key trends. Network dynamics are a key feature of strategic diagram analysis: radical change in the configuration of a network between two periods, or change at the subnetwork level, reflects the dynamics of the science. I present here an example of subnetwork comparison over the four periods of time considered in this study. I will develop and discuss an example of subnetwork transformation in a future Letter from the Editor article.


This is the fifth part of a Letter from the Editor series presenting the results of ongoing research into the dynamics of the evolution of the field of project management and its key trends. I present some general findings and the strategic diagrams generated for each of the time periods introduced herein, and discuss what we can learn from them from a general standpoint. I will develop and discuss more detailed findings in future Letter from the Editor articles.


This paper documents the use of bibliometrics as a methodology offering a structured, systematic and rigorous way to analyse and evaluate a range of literature. When starting out and reading broadly for my doctoral studies, one article by Trigwell and Prosser (1996b) led me to reflect on my level of comprehension, as the content, concepts and methodology did not resonate with my epistemology; a disconnection between our paradigms emerged. Further reading unveiled the work of Doyle (1987), who categorised research in teaching and teacher education into three main areas: teacher characteristics, methods research and teacher behaviour. My growing concern that there were gaps in the knowledge also exposed the difficulty of documenting those gaps. As an early researcher who required support to locate myself in the field and to find my research voice, I identified bibliometrics (Budd, 1988; Yeoh & Kaur, 2007) as an appropriate methodology to add value and rigour in three ways. Firstly, the application of bibliometrics to analyse articles is systematic, builds a picture from the characteristics of the literature, and offers a way to elicit themes within the categories. Secondly, systematic analysis provides occasion to identify gaps within the body of work, limitations in methodology or areas in need of further research. Finally, extending and adapting the bibliometrics methodology beyond citation or content analysis, to investigate the merit of methodology, participants and instruments as determinants of research worth, allowed the researcher to build confidence and contribute new knowledge to the field. This paper therefore frames research in the pedagogic field of Higher Education through teacher characteristics, methods research and teacher behaviour, visually represents the literature analysis, and locates my research self within methods research.
Through my research voice I present the bibliometrics methodology and its outcomes, and document the landscape of pedagogy in the field of Higher Education.


PURPOSE: To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. METHODS: Thirty-six visually normal participants (aged 19-80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus, and sensitivity for displacement in a random-dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured response times to hazards embedded in video recordings of real-world driving and has been shown to be linked to crash risk. RESULTS: Dmin for the random-dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship between the HPT and motion sensitivity for the random-dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. CONCLUSION: These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards, independent of other areas of visual function, and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception in order to develop better interventions to improve road safety.
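Testing whether motion sensitivity predicts hazard response times after controlling for acuity, contrast sensitivity and visual fields is essentially a partial correlation: regress the covariates out of both variables and correlate the residuals. A numpy sketch with synthetic data (not the study's measurements; effect sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 36  # matching the sample size reported above

# Synthetic stand-ins for the measures (arbitrary units).
covariates = rng.normal(size=(n, 3))                    # acuity, CS, fields
motion = rng.normal(size=n) + covariates @ np.array([0.3, 0.2, 0.1])
hpt_rt = 0.6 * motion + rng.normal(scale=0.5, size=n)   # hazard response time

def residualize(y, X):
    """Residuals of y after least-squares regression on X (plus intercept)."""
    Xi = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    return y - Xi @ beta

# Partial correlation: correlate the residuals of both variables.
r_partial = np.corrcoef(residualize(hpt_rt, covariates),
                        residualize(motion, covariates))[0, 1]
print(f"partial correlation: {r_partial:.2f}")
```

If the partial correlation stays well above zero, the motion measure carries predictive information beyond the standard vision tests, which is the pattern the study reports for the drifting Gabor stimulus.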