945 results for (2D)2PCA
Abstract:
An experimental dataset representing a typical flow field in a stormwater gross pollutant trap (GPT) was visualised. A technique was developed to apply the image-based flow visualisation (IBFV) algorithm to the raw dataset. Particle image velocimetry (PIV) software had previously been used to capture the flow field data by tracking neutrally buoyant particles with a high-speed camera. The dataset consisted of scattered 2D point velocity vectors, and the IBFV visualisation facilitated flow feature characterisation within the GPT. The flow features played a pivotal role in understanding stormwater pollutant capture and retention behaviour within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate the possible flow paths of pollutants entering the GPT. The investigated flow paths were compared with the behaviour of pollutants monitored during experiments.
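As a rough illustration of the IBFV idea applied to scattered PIV vectors, the sketch below interpolates the measured velocities onto a regular grid and then repeatedly advects and blends a noise texture. The grid resolution, blending weight and the assumption that velocities are expressed in pixels per frame are illustrative choices, not the authors' implementation.

    # Minimal IBFV-style sketch for scattered 2D PIV vectors (illustrative only).
    import numpy as np
    from scipy.interpolate import griddata

    def ibfv_frames(points, vectors, nx=256, ny=256, n_frames=100, alpha=0.1):
        """points: (N, 2) measurement locations; vectors: (N, 2) velocities,
        assumed already scaled to pixels per frame."""
        gx, gy = np.meshgrid(np.linspace(points[:, 0].min(), points[:, 0].max(), nx),
                             np.linspace(points[:, 1].min(), points[:, 1].max(), ny))
        # interpolate the scattered vectors onto the regular grid
        u = griddata(points, vectors[:, 0], (gx, gy), method='linear', fill_value=0.0)
        v = griddata(points, vectors[:, 1], (gx, gy), method='linear', fill_value=0.0)
        img = np.random.rand(ny, nx)                      # initial noise texture
        jj, ii = np.meshgrid(np.arange(nx), np.arange(ny))
        for _ in range(n_frames):
            # backward advection: sample the previous frame upstream of each pixel
            src_i = np.clip(ii - v, 0, ny - 1).astype(int)
            src_j = np.clip(jj - u, 0, nx - 1).astype(int)
            img = (1 - alpha) * img[src_i, src_j] + alpha * np.random.rand(ny, nx)
            yield img

Animating the yielded frames produces the streaky texture that follows the streamlines, which is what makes flow features visible in the animations described above.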
Abstract:
The early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties due to such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks on soil stability is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including slope, roads, rivers, buildings, and boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks. The two ERT array models employed in this research were dipole-dipole and azimuthal. Next, bore-hole tests were conducted at different locations in the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, the Standard Penetration Test (SPT) was undertaken. Undisturbed soil samples taken from the bore-holes were tested in a laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties using direct shear apparatus.
- Soil water characteristic curves (SWCC) using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from bore-holes and visual observations of the cracks on the slope surface.
(3) The results of stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area.
It was assumed that the deep crack in the slope under investigation was generated by earthquakes. A good agreement was obtained when comparing the location and the orientation of the cracks detected by Method-1 and Method-2. However, the simulated cracks in Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis. From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling it with a transient seepage analysis of the slope using the finite-element-based software SEEP/W. A parametric study conducted on the stability of an investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks:
(a) Step-1: The transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the preferred date (current date), using measured rainfall data. The stability analyses are then continued for the next 12 months using annual rainfall predicted from the previous five years of rainfall data for the area.
(b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours, and the stability of the slope is calculated one hour or 24 hours in advance using the real-time rainfall data.
If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope. In this research, the results of applying Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for the year 2012 (until 31st December 2012) and therefore the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide event at Slope-2 occurred on 31st October 2010. The transient seepage and stability analyses of the slope, using data obtained from bore-hole, SPT, ERT and laboratory tests, were conducted following Step-1 for 12th June 2010 and showed that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure with sufficient warning time.
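The sketch below illustrates the factor-of-safety concept with a simple infinite-slope calculation using the extended Mohr-Coulomb criterion for unsaturated soil. It is only a conceptual stand-in for the SLOPE/W limit-equilibrium and SEEP/W transient seepage analyses used in the study, and every parameter value shown (cohesion, friction angles, unit weight, depth, slope angle, pore pressure) is hypothetical.

    # Illustrative infinite-slope FOS with unsaturated shear strength
    # (c' + net normal stress * tan(phi') + matric suction * tan(phi_b)).
    import math

    def infinite_slope_fos(c_eff, phi_eff, phi_b, gamma, depth, beta_deg, pore_pressure):
        """FOS = shear strength / driving shear stress on a plane parallel to the slope.
        pore_pressure < 0 is interpreted as matric suction (unsaturated soil)."""
        beta = math.radians(beta_deg)
        sigma_n = gamma * depth * math.cos(beta) ** 2          # normal stress, kPa
        tau = gamma * depth * math.sin(beta) * math.cos(beta)  # driving shear stress, kPa
        suction = max(-pore_pressure, 0.0)                     # (ua - uw), kPa
        uw = max(pore_pressure, 0.0)                           # positive pore-water pressure, kPa
        strength = (c_eff
                    + (sigma_n - uw) * math.tan(math.radians(phi_eff))
                    + suction * math.tan(math.radians(phi_b)))
        return strength / tau

    # Hypothetical example: rainfall infiltration destroys suction and the FOS drops.
    print(infinite_slope_fos(5.0, 30.0, 15.0, 18.0, 3.0, 35.0, pore_pressure=-40.0))  # dry, with suction
    print(infinite_slope_fos(5.0, 30.0, 15.0, 18.0, 3.0, 35.0, pore_pressure=10.0))   # wet, positive pressure

Repeating such a calculation as pore pressures evolve in time is, conceptually, what the Step-1 and Step-2 transient stability analyses automate with the full slip-surface search.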
Abstract:
The realistic strength and deflection behavior of industrial and commercial steel portal frame buildings are understood only if the effects of rigidity of end frames and profiled steel claddings are included. The conventional designs ignore these effects and are very much based on idealized two-dimensional (2D) frame behavior. Full-scale tests of a 12 m × 12 m steel portal frame building under a range of design load cases indicated that the observed deflections and bending moments in the portal frame were considerably different from those obtained from a 2D analysis of frames ignoring these effects. Three-dimensional (3D) analyses of the same building, including the effects of end frames and cladding, were carried out, and the results agreed well with full-scale test results. Results clearly indicated the need for such an analysis and for testing to study the true behavior of steel portal frame buildings. It is expected that such a 3D analysis will lead to lighter steel frames as the maximum moments and deflections are reduced.
Abstract:
Organ motion as a result of respiration is an important field of research for medical physics. Knowledge of the magnitude and direction of this motion is necessary to allow for more accurate radiotherapy treatment planning. This will result in higher doses to the tumour whilst sparing healthy tissue. This project involved human trials, where the radiation therapy patient's kidneys were CT scanned under three different conditions: whilst free breathing (FB), breath-hold at normal tidal inspiration (BHIN), and breath-hold at normal tidal expiration (BHEX). The magnitude of motion was measured by recording the outline of the kidney from a Beam's Eye View (BEV). The centre of mass of this 2D shape was calculated for each set using "ImageJ" software, and the magnitude of movement was determined from the change in the centroid's coordinates between the BHIN and BHEX scans. For the left and right kidneys respectively, the movement ranged from 4-46 mm and 2-44 mm in the superior/inferior (axial) plane, 1-21 mm and 2-16 mm in the anterior/posterior (coronal) plane, and 0-6 mm and 0-8 mm in the lateral/medial (sagittal) plane. From exhale to inhale, the kidneys tended to move inferiorly, anteriorly and laterally. A standard radiotherapy plan, designed to treat the para-aortics with opposed lateral fields, was performed on the free breathing (planning) CT set. The field size and arrangement were set up using the same parameters for each subject. The prescription was to deliver 45 Gray in 25 fractions. This field arrangement and prescription were then copied over to the breath-hold CT sets, and the dosimetric differences were compared using Dose Volume Histograms (DVH). The point of comparison for the three sets was recorded as the percentage volume of kidney receiving less than or equal to 10 Gray. The QUASAR respiratory motion phantom was used with the range of motion determined from the human study. The phantom was imaged, planned and treated with a linear accelerator, with dose determined by film. The effect of the motion was measured by the change in the penumbra of the film and compared to the penumbra from the treatment planning system.
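As an illustration of the centroid-based motion measurement described above, the sketch below computes the centre of mass of a binary kidney-outline mask for two breath-hold sets and the shift between them. The mask inputs and pixel spacing are assumptions for illustration, not the ImageJ workflow actually used.

    # Centroid displacement between breath-hold masks (illustrative sketch).
    import numpy as np

    def centroid(mask):
        """Centre of mass (row, col) of a 2D binary mask, in pixel units."""
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean()

    def kidney_motion_mm(mask_inhale, mask_exhale, pixel_spacing_mm=(1.0, 1.0)):
        """Shift of the kidney centroid between the BHIN and BHEX masks, in mm."""
        r0, c0 = centroid(mask_inhale)
        r1, c1 = centroid(mask_exhale)
        dy = (r1 - r0) * pixel_spacing_mm[0]
        dx = (c1 - c0) * pixel_spacing_mm[1]
        return dx, dy, float(np.hypot(dx, dy))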
Abstract:
The Taguchi method is applied for the first time to optimize the synthesis of graphene films by copper-catalyzed decomposition of ethanol. In order to find the most appropriate experimental conditions for the realization of thin high-grade films, six experiments were suitably designed and performed. The influence of temperature (1000–1070 °C), synthesis duration (1–30 min) and hydrogen flow (0–100 sccm) on the number of graphene layers and the defect density in the graphitic lattice was ranked by monitoring the intensity of the 2D- and D-bands relative to the G-band in the Raman spectra. After critical examination and adjustment of the conditions predicted to give optimal results, a continuous film consisting of 2–4 nearly defect-free graphene layers was obtained.
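As a rough illustration of a Taguchi-style analysis over the three factors named above, the sketch below computes a smaller-is-better signal-to-noise ratio for a Raman D/G response and ranks the factors by the spread of their mean S/N across levels. The design array and response values are entirely hypothetical, not the experimental data of the study.

    # Taguchi main-effects sketch with a smaller-is-better response (illustrative only).
    import numpy as np

    # factor level index per run: (temperature, duration, hydrogen flow)
    design = np.array([[0, 0, 0],
                       [0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0],
                       [2, 0, 0],
                       [2, 1, 1]])
    d_to_g = np.array([0.35, 0.20, 0.15, 0.28, 0.10, 0.22])  # hypothetical D/G ratios

    # smaller-is-better S/N; with one observation per run this is -10*log10(y**2)
    sn = -10.0 * np.log10(d_to_g ** 2)

    for factor, name in enumerate(["temperature", "duration", "hydrogen flow"]):
        levels = np.unique(design[:, factor])
        means = [sn[design[:, factor] == lv].mean() for lv in levels]
        # larger spread (delta) of mean S/N across levels => more influential factor;
        # the level with the highest mean S/N is taken as the preferred setting
        print(name, dict(zip(levels.tolist(), np.round(means, 2))),
              "delta:", round(max(means) - min(means), 2))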
Abstract:
The ability to automate forced landings in an emergency such as engine failure is an essential ability to improve the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. By using active vision to detect safe landing zones below the aircraft, the reliability and safety of such systems is vastly improved by gathering up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions using INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight showing the precision/recall of landing sites in comparison to a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.
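As an illustration of the precision/recall evaluation against a hand-classified ground truth mentioned above, a minimal sketch is given below. Representing landing sites by their centre points and the choice of matching radius are illustrative assumptions.

    # Precision/recall of detected landing sites vs. ground truth (illustrative sketch).
    import numpy as np

    def precision_recall(detected, ground_truth, match_radius=10.0):
        """detected, ground_truth: (N, 2) arrays of site centres in metres.
        Greedy matching: each ground-truth site may be claimed by only one detection."""
        matched_gt = set()
        tp = 0
        for d in detected:
            if len(ground_truth) == 0:
                break
            dists = np.linalg.norm(ground_truth - d, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= match_radius and j not in matched_gt:
                matched_gt.add(j)
                tp += 1
        precision = tp / len(detected) if len(detected) else 0.0
        recall = tp / len(ground_truth) if len(ground_truth) else 0.0
        return precision, recall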
Abstract:
The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators. This has made automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned ‘normal’ model. There is no precise and exact definition for an abnormal activity; it is dependent on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features to detect the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using different state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D Hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects of consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
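A minimal sketch of the novelty-detection formulation described above follows: a Gaussian mixture model is fitted to features extracted from normal footage only, and test samples with low log-likelihood under that model are flagged as abnormal. The feature arrays, component count and threshold choice are illustrative assumptions, not the configuration used in the paper.

    # GMM-based novelty detection on pre-extracted features (illustrative sketch).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_normal_model(normal_features, n_components=8):
        """normal_features: (N, D) array, e.g. optical-flow or texture descriptors."""
        gmm = GaussianMixture(n_components=n_components, covariance_type='full')
        gmm.fit(normal_features)
        return gmm

    def detect_abnormal(gmm, test_features, normal_features, quantile=0.01):
        """Flag test samples whose log-likelihood falls below a low quantile of
        the training log-likelihoods."""
        threshold = np.quantile(gmm.score_samples(normal_features), quantile)
        return gmm.score_samples(test_features) < threshold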
Abstract:
Australia is a high-potential country for geothermal power, with reserves currently estimated in the tens of millions of petajoules, enough to power the nation for at least 1000 years at current usage. However, these resources are mainly located in isolated arid regions where water is scarce. Therefore, wet cooling systems for geothermal plants in Australia are the least attractive solution, and thus air-cooled heat exchangers are preferred. In order to increase the efficiency of such heat exchangers, metal foams have been used. One issue raised by this solution is the fouling caused by dust deposition. In this case, the heat transfer characteristics of the metal foam heat exchanger can dramatically deteriorate. Exploring particle deposition behaviour in the metal foam heat exchanger therefore becomes crucial. This paper is a numerical investigation aimed at addressing this issue. Two-dimensional (2D) numerical simulations of a standard one-row tube bundle wrapped with metal foam in cross-flow are performed and highlight preferential particle deposition areas.
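To illustrate the kind of particle-deposition computation involved, the sketch below tracks a single particle with Stokes drag through a prescribed 2D cross-flow and records where it first reaches a foam-wrapped tube idealised as a circle. The flow field, particle response time and geometry are illustrative assumptions, not the CFD model of the study.

    # Lagrangian particle tracking with Stokes drag in a prescribed 2D flow (illustrative sketch).
    import numpy as np

    def track_particle(u_field, x0, v0, tube_centre, tube_radius,
                       tau_p=1e-3, dt=1e-4, n_steps=200000):
        """u_field(x) -> fluid velocity at position x; tau_p: particle response time (s)."""
        x, v = np.array(x0, float), np.array(v0, float)
        for _ in range(n_steps):
            u = u_field(x)
            v += (u - v) / tau_p * dt      # Stokes drag: dv/dt = (u - v) / tau_p
            x += v * dt
            if np.linalg.norm(x - tube_centre) <= tube_radius:
                return x                    # deposition location on the tube/foam surface
        return None                         # particle left the window without depositing

    # Hypothetical example: uniform upstream cross-flow approaching a tube at the origin.
    uniform = lambda x: np.array([1.0, 0.0])
    print(track_particle(uniform, x0=[-0.05, 0.002], v0=[1.0, 0.0],
                         tube_centre=np.array([0.0, 0.0]), tube_radius=0.01))

Repeating such trajectories for many seeded particles, with the velocity field taken from the 2D simulation, is how preferential deposition areas can be mapped.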
Abstract:
In vivo confocal microscopy (IVCM) is an emerging technology that provides minimally invasive, high-resolution, steady-state assessment of the ocular surface at the cellular level. Several challenges still remain but, at present, IVCM may be considered a promising technique for clinical diagnosis and management. This mini-review summarizes some key findings in IVCM of the ocular surface, focusing on recent and promising attempts to move “from bench to bedside”. IVCM allows prompt diagnosis, disease course follow-up, and management of potentially blinding atypical forms of infectious processes, such as Acanthamoeba and fungal keratitis. This technology has improved our knowledge of corneal alterations and some of the processes that affect the visual outcome after lamellar keratoplasty and excimer keratorefractive surgery. In dry eye disease, IVCM has provided new information on the whole-ocular surface morphofunctional unit. It has also improved understanding of pathophysiologic mechanisms and helped in the assessment of prognosis and treatment. IVCM is particularly useful in the study of corneal nerves, enabling description of the morphology, density, and disease- or surgery-induced alterations of nerves, particularly the subbasal nerve plexus. In glaucoma, IVCM constitutes an important aid to evaluate filtering blebs, to better understand the conjunctival wound healing process, and to assess corneal changes induced by topical antiglaucoma medications and their preservatives. IVCM has significantly enhanced our understanding of the ocular response to contact lens wear. It has provided new perspectives at a cellular level on a wide range of contact lens complications, revealing findings that were not previously possible to image in the living human eye. The final section of this mini-review provides a focus on advances in confocal microscopy imaging. These include 2D wide-field mapping, 3D reconstruction of the cornea, and automated image analysis.
Abstract:
This paper presents an approach to promote the integrity of perception systems for outdoor unmanned ground vehicles (UGV) operating in challenging environmental conditions (presence of dust or smoke). The proposed technique automatically evaluates the consistency of the data provided by two sensing modalities: a 2D laser range finder and a millimetre-wave radar, allowing for perceptual failure mitigation. Experimental results, obtained with a UGV operating in rural environments, and an error analysis validate the approach.
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. The paper provides a full description of the system used for data collection and of the types of environments and conditions in which these data sets have been gathered, which include the presence of airborne dust, smoke and rain.
Abstract:
This work aims to promote integrity in autonomous perceptual systems, with a focus on outdoor unmanned ground vehicles equipped with a camera and a 2D laser range finder. A method to check for inconsistencies between the data provided by these two heterogeneous sensors is proposed and discussed. First, uncertainties in the estimated transformation between the laser and camera frames are evaluated and propagated up to the projection of the laser points onto the image. Then, for each laser scan-camera image pair acquired, the information at the corners of the laser scan is compared with the content of the image, resulting in a likelihood of correspondence. The result of this process is then used to validate segments of the laser scan that are found to be consistent with the image, while inconsistent segments are rejected. Experimental results illustrate how this technique can improve the reliability of perception in challenging environmental conditions, such as in the presence of airborne dust.
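As an illustration of the first step of this consistency check, the sketch below projects 2D laser range finder points into the camera image with a pinhole model. The calibration inputs are assumptions, and the uncertainty propagation and likelihood computation of the paper are not reproduced here.

    # Projecting 2D laser scan points into a camera image (illustrative sketch).
    import numpy as np

    def project_laser_to_image(ranges, angles, T_cam_laser, K):
        """ranges, angles: polar laser scan arrays; T_cam_laser: 4x4 laser-to-camera
        transform; K: 3x3 camera intrinsics. Returns (M, 2) pixel coordinates."""
        # laser points in the laser frame (scan plane z = 0), homogeneous coordinates
        pts = np.stack([ranges * np.cos(angles),
                        ranges * np.sin(angles),
                        np.zeros_like(ranges),
                        np.ones_like(ranges)], axis=0)       # (4, N)
        pts_cam = (T_cam_laser @ pts)[:3]                    # (3, N) in the camera frame
        in_front = pts_cam[2] > 0                            # keep points ahead of the camera
        uvw = K @ pts_cam[:, in_front]
        return (uvw[:2] / uvw[2]).T                          # perspective division -> pixels

Once projected, the pixel locations of laser-scan corners can be compared with the local image content to score the consistency of the two sensors, as described above.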
Abstract:
Synthetic hydrogels selectively decorated with cell adhesion motifs are rapidly emerging as promising substrates for 3D cell culture. When cells are grown in 3D, they experience potentially more physiologically relevant cell-cell interactions and physical cues compared with traditional 2D cell culture on stiff surfaces. A newly developed polymer based on poly(2-oxazoline)s has been used for the first time to control the attachment of fibroblast cells and is discussed here for its potential use in 3D cell culture, with a particular focus on cancer cells, towards the ultimate aim of high-throughput screening of anti-cancer therapies. Advantages and limitations of using poly(2-oxazoline) hydrogels are discussed and compared with those of more established polymers, especially polyethylene glycol (PEG).
Abstract:
A sub-domain smoothed Galerkin method is proposed to integrate the advantages of the mesh-free Galerkin method and the FEM. Arbitrarily shaped sub-domains are predefined in the problem domain with mesh-free nodes. In each sub-domain, based on the mesh-free Galerkin weak formulation, the local discrete equations can be obtained by using moving Kriging interpolation, in a manner similar to the discretization of high-order finite elements. A strain smoothing technique is subsequently applied to the nodal integration of each sub-domain by dividing the sub-domain into several smoothing cells. Moreover, condensation of DOFs can also be introduced into the local discrete equations to improve the computational efficiency. The global governing equations of the present method are obtained, following the scheme of the FEM, by assembling all local discrete equations of the sub-domains. The mesh-free properties of the Galerkin method are retained in each sub-domain. Several 2D elastic problems have been solved with this newly proposed method to validate its computational performance. These numerical examples show that the proposed sub-domain smoothed Galerkin method is a robust technique for solving solid mechanics problems, offering high computational efficiency, good accuracy, and good convergence.
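For reference, the strain smoothing operation mentioned above is commonly written, for a smoothing cell Omega_C of area A_C with boundary Gamma_C and outward normal n, as the cell-averaged strain converted to a boundary integral by the divergence theorem. This is the standard smoothed-Galerkin form; the paper's exact notation may differ.

    \tilde{\varepsilon}_{ij}(\mathbf{x}_C)
      = \frac{1}{A_C} \int_{\Omega_C} \varepsilon_{ij}(\mathbf{u}) \, d\Omega
      = \frac{1}{2 A_C} \oint_{\Gamma_C} \left( u_i \, n_j + u_j \, n_i \right) d\Gamma

Because only boundary integrals over each smoothing cell are required, the shape functions (and not their derivatives) are evaluated on the cell boundaries, which is what makes the smoothed nodal integration attractive.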