211 results for Hydrus-2d
Abstract:
In this paper we construct earthwork allocation plans for a linear infrastructure road project. Fuel consumption metrics and an innovative block partitioning and modelling approach are applied to reduce costs. 2D and 3D variants of the problem were compared to assess the effect, if any, on solution quality, and the 3D variants were also examined for the additional complexities and difficulties they introduce. The numerical investigation shows a significant improvement and a reduction in fuel consumption, as theorised. The proposed solutions differ considerably from plans constructed for a distance-based metric, as commonly used in other approaches. Under certain conditions, 3D problem instances can be solved optimally as 2D problems.
Abstract:
ABSTRACT Objective: Ureaplasma parvum colonization in the setting of polymicrobial flora is common in women with chorioamnionitis, and is a risk factor for preterm delivery and neonatal morbidity. We hypothesized that ureaplasma colonization of amniotic fluid would modulate chorioamnionitis induced by E. coli lipopolysaccharide (LPS). Methods: Sheep received intra-amniotic (IA) injections of media (control) or live ureaplasma either 7 or 70d before delivery. Another group received IA LPS 2d before delivery. To test for interactions, U. parvum-exposed animals were challenged with IA LPS and delivered 2d later. All animals were delivered preterm at 125±1 days of gestation. Results: Both IA ureaplasma and LPS induced leukocyte infiltration of the chorioamnion. LPS greatly increased the expression of pro-inflammatory cytokines and myeloperoxidase in leukocytes, while ureaplasma alone caused modest responses. Interestingly, 7d but not 70d ureaplasma exposure significantly downregulated LPS-induced pro-inflammatory cytokine and myeloperoxidase expression in the chorioamnion. Conclusion: U. parvum can suppress LPS-induced experimental chorioamnionitis.
Abstract:
Management of groundwater systems requires realistic conceptual hydrogeological models, both as a framework for numerical simulation modelling and for understanding the system and communicating that understanding to stakeholders and the broader community. To help meet these needs we developed GVS (Groundwater Visualisation System), a stand-alone desktop software package that uses interactive 3D visualisation and animation techniques. The goal was a user-friendly groundwater management tool that could support a range of existing real-world and pre-processed data, both surface and subsurface, including geology and various types of temporal hydrological information. GVS allows these data to be integrated into a single conceptual hydrogeological model. In addition, 3D geological models produced externally using other software packages can readily be imported into GVS models, as can outputs of simulations (e.g. piezometric surfaces) produced by software such as MODFLOW or FEFLOW. Boreholes can be integrated, showing any down-hole data and properties, including screen information, intersected geology, water level data and water chemistry. Animation is used to display spatial and temporal changes, with time-series data such as rainfall, standing water levels and electrical conductivity displaying dynamic processes. Time and space variations can be presented using a range of contouring and colour mapping techniques, in addition to interactive plots of time-series parameters. Other types of data, for example demographics and cultural information, can also be readily incorporated. The GVS software can execute on a standard Windows or Linux-based PC with a minimum of 2 GB RAM, and the model output is easy and inexpensive to distribute, by download or via USB/DVD/CD.
Example models are described here for three groundwater systems in Queensland, northeastern Australia: two unconfined alluvial groundwater systems with intensive irrigation, the Lockyer Valley and the upper Condamine Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. This latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes and visualisation of simulated piezometric surfaces. Both alluvial system GVS models were developed during drought conditions to support government strategies to implement groundwater management. The Surat Basin model was industry-sponsored research, for coal seam gas groundwater management and community information and consultation. The “virtual” groundwater systems in these 3D GVS models can be interactively interrogated by standard functions, plus production of 2D cross-sections, data selection from the 3D scene, back-end database and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community/stakeholder understanding and knowledge of groundwater systems and is of value for training and educational purposes. Projects completed confirm that GVS provides powerful support for management and decision making, and serves as a tool for interpretation of groundwater system hydrological processes. A highly effective visualisation output is the production of short videos (e.g. 2–5 min) based on sequences of camera ‘fly-throughs’ and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering, and gestural interaction systems. To highlight the visualisation and animation capability of the GVS software, links to related multimedia hosted at online sites are included in the references.
Abstract:
Molecular modelling has become a useful and widely applied tool to investigate separation and diffusion behavior of gas molecules through nano-porous low dimensional carbon materials, including quasi-1D carbon nanotubes and 2D graphene-like carbon allotropes. These simulations provide detailed, molecular level information about the carbon framework structure as well as dynamics and mechanistic insights, i.e. size sieving, quantum sieving, and chemical affinity sieving. In this perspective, we revisit recent advances in this field and summarize separation mechanisms for multicomponent systems from kinetic and equilibrium molecular simulations, elucidating also anomalous diffusion effects induced by the confining pore structure and outlining perspectives for future directions in this field.
Abstract:
Background: Measurement accuracy is critical for biomechanical gait assessment. Very few studies have determined the accuracy of common clinical rearfoot variables between cameras with different collection frequencies. Research question: What is the measurement error for common rearfoot gait parameters when using a standard 30Hz digital camera compared to a 100Hz camera? Type of study: Descriptive. Methods: 100 footfalls were recorded from 10 subjects (10 footfalls per subject) running on a treadmill at 2.68m/s. A high-speed digital timer, accurate to within 1ms, served as an external reference. Markers were placed along the vertical axis of the heel counter and the long axis of the shank. 2D coordinates for the four markers were determined from heel strike to heel lift. Variables of interest included time of heel strike (THS), time of heel lift (THL), time to maximum eversion (TMax), and maximum rearfoot eversion angle (EvMax). Results: THS difference was 29.77ms (+/- 8.77), THL difference was 35.64ms (+/- 6.85), and TMax difference was 16.50ms (+/- 2.54). These temporal values represent differences equal to 11.9%, 14.3%, and 6.6% of the stance phase of running gait, respectively. EvMax difference was 1.02 degrees (+/- 0.46). Conclusions: A 30Hz camera is accurate, compared to a high-frequency camera, in determining TMax and EvMax during a clinical gait analysis. However, relatively large differences, in excess of 12% of the stance phase of gait, were measured for the THS and THL variables.
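As a quick sanity check (not part of the study itself), the reported millisecond differences and their stated percentages of the stance phase are mutually consistent: all three temporal variables imply a stance duration of roughly 250 ms for treadmill running at 2.68 m/s.

```python
# Back-of-envelope check (assumed, not from the paper): each reported
# difference divided by its stated fraction of stance should recover
# the same stance duration.
differences_ms = {"THS": 29.77, "THL": 35.64, "TMax": 16.50}
percent_of_stance = {"THS": 11.9, "THL": 14.3, "TMax": 6.6}

for var, diff in differences_ms.items():
    implied_stance_ms = diff / (percent_of_stance[var] / 100.0)
    print(f"{var}: implied stance duration ≈ {implied_stance_ms:.0f} ms")
```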
Abstract:
Introduction: The Trendelenburg Test (TT) is used to assess the functional strength of the hip abductor muscles (HABD), their ability to control frontal plane motion of the pelvis, and the ability of the lumbopelvic complex to transfer load into single leg stance. Rationale: Although a standard method to perform the test has been described for use within clinical populations, no study has directly investigated Trendelenburg’s hypotheses. Purpose: To investigate the validity of the TT using an ultrasound guided nerve block (UNB) of the superior gluteal nerve and determine whether the reduction in HABD strength would result in the theorized mechanical compensatory strategies measured during the TT. Methods: Quasi-experimental design using a convenience sample of nine healthy males. Only subjects with no current or previous injury to the lumbar spine, pelvis, or lower extremities, and no previous surgeries were included. Force dynamometry was used to evaluate HABD strength (%BW). 2D mechanics were used to evaluate contralateral pelvic drop (cMPD), change in contralateral pelvic drop (∆cMPD), ipsilateral hip adduction (iHADD) and ipsilateral trunk sway (TRUNK) measured in degrees (°). All measures were collected prior to and following a UNB of the superior gluteal nerve performed by an interventional radiologist. Results: Subjects’ median age was 31yrs (IQR: 22-32yrs) and median weight was 73kg (IQR: 67-81kg). An average 52% reduction in HABD strength (z=2.36, p=0.02) resulted following the UNB. No differences were found in cMPD or ∆cMPD (z=0.01, p=0.99; z=-0.67, p=0.49). Individual changes in biomechanics showed no consistency between subjects and non-systematic changes across the group. One subject demonstrated the mechanical compensations described by Trendelenburg. Discussion: The TT should not be used as a screening measure for HABD strength in populations demonstrating strength greater than 30%BW but should be reserved for use with populations with marked HABD weakness.
Importance: This study presents data regarding a critical level of HABD strength required to support the pelvis during the TT.
Abstract:
Video presented as part of the BPM2011 demonstration (France). In this video we show a prototype BPMN process modelling tool which uses Augmented Reality techniques to increase the sense of immersion when editing a process model. The avatar represents a remotely logged-in user, and facilitates greater insight into the editing actions of the collaborator than present 2D web-based approaches to collaborative process modelling. We modified the Second Life client to integrate the ARToolkit in order to support pattern-based AR.
Abstract:
In this study x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise-reduction achieved using the ‘zero-scan’ method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner’s x-ray tube, this study used a large slice thickness (1 cm), to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63), to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full, 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
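The zero-scan extrapolation described above can be sketched in a few lines. This is a minimal illustration of the idea only, not the paper's implementation: for each voxel, the Hounsfield unit is fitted linearly against scan number across the repeated scans, and the intercept at scan zero gives a noise-reduced estimate unaffected by the dose accumulated during scanning. The synthetic data and dimensions here are assumptions.

```python
import numpy as np

def zero_scan_image(scans: np.ndarray) -> np.ndarray:
    """scans: array of shape (n_scans, *voxel_dims) of HU values.
    Returns the per-voxel intercept of a linear fit HU ~ scan index."""
    n = scans.shape[0]
    x = np.arange(n, dtype=float)
    flat = scans.reshape(n, -1)        # (n_scans, n_voxels)
    # np.polyfit fits all voxel columns at once; row 1 holds intercepts.
    coeffs = np.polyfit(x, flat, deg=1)
    return coeffs[1].reshape(scans.shape[1:])

# Synthetic check: 63 repeated scans of a 4x4 slice whose HU values
# drift linearly with scan number, plus measurement noise.
rng = np.random.default_rng(0)
true_image = rng.uniform(0, 100, size=(4, 4))
scans = np.stack([true_image + 0.5 * i + rng.normal(0, 2, (4, 4))
                  for i in range(63)])
estimate = zero_scan_image(scans)
print(np.abs(estimate - true_image).max())  # small residual error
```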
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain but non-invertible meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one character change can be significant, as a result the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve robustness of the extracted features, most randomization methods are linear and this is detrimental to the security aspects required of hash functions. 
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
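The generic pipeline described above (feature extraction, key-dependent randomization, threshold quantization, binary encoding) can be sketched as follows. Every concrete choice here is an assumption for illustration, not the dissertation's algorithm: features are block means (coarse, so minor input changes move them little), the randomization is a key-seeded compressive random projection (linear, as the text notes most methods are), and the quantizer is trained as per-dimension medians over a synthetic training set.

```python
import numpy as np

def extract_features(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Block-mean features: robust to small per-pixel changes."""
    h, w = image.shape
    blocks = image[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)).ravel()

def robust_hash(image, key: int, thresholds) -> np.ndarray:
    feats = extract_features(image)
    rng = np.random.default_rng(key)           # key-dependent randomization
    proj = rng.normal(size=(32, feats.size))   # compressive linear projection
    randomized = proj @ feats
    return (randomized > thresholds).astype(np.uint8)  # 1-bit quantization

# Quantizer training: thresholds learnt as per-dimension medians over a
# (synthetic) training set -- the stage the text argues is critical.
rng = np.random.default_rng(1)
key = 42
train_projs = []
for _ in range(20):
    f = extract_features(rng.uniform(0, 255, (64, 64)))
    train_projs.append(np.random.default_rng(key).normal(size=(32, f.size)) @ f)
thresholds = np.median(np.array(train_projs), axis=0)

img = rng.uniform(0, 255, (64, 64))
h1 = robust_hash(img, key, thresholds)
h2 = robust_hash(img + rng.normal(0, 1, img.shape), key, thresholds)
print("Hamming distance under minor distortion:", int(np.sum(h1 != h2)))
```

The same image always yields the same hash under the same key, while a slightly perturbed image yields a nearby hash, which is exactly the robustness property that distinguishes this from cryptographic hashing.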
Abstract:
In this paper, we present an unsupervised graph cut based object segmentation method using 3D information provided by Structure from Motion (SFM), called GrabCutSFM. Rather than focusing on the segmentation problem using a trained model or human intervention, our approach aims to achieve meaningful segmentation autonomously with direct application to vision based robotics. Generally, object (foreground) and background have certain discriminative geometric information in 3D space. By exploring the 3D information from multiple views, our proposed method can segment potential objects correctly and automatically compared to conventional unsupervised segmentation using only 2D visual cues. Experiments with real video data collected from indoor and outdoor environments verify the proposed approach.
Abstract:
In this paper we propose a method to generate a large-scale and accurate dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes, which easily scales to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resultant 3D semantic model is improved by taking into account moving objects in the scene. We demonstrate our method on the publicly available KITTI dataset and evaluate the performance against manually generated ground truth.
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision based topometric map of its environment. The map consists of a globally-consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest path planning instead of following the nodes of the graph --- as is done with most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
Abstract:
Uncooperative iris identification systems at a distance suffer from poor resolution of the acquired iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, most existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values, rather than the actual features used for recognition. This paper thoroughly investigates transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain specific information from iris models, improved recognition performance compared to pixel domain super-resolution can be achieved. A framework for applying super-resolution to nonlinear features in the feature-domain is proposed. Based on this framework, a novel feature-domain super-resolution approach for the iris biometric employing 2D Gabor phase-quadrant features is proposed. The approach is shown to outperform its pixel domain counterpart, as well as other feature domain super-resolution approaches and fusion techniques.
Abstract:
Osteocyte cells are the most abundant cells in human bone tissue. Due to their unique morphology and location, osteocyte cells are thought to act as regulators in the bone remodelling process, and are believed to play an important role in astronauts’ bone mass loss after long-term space missions. There is increasing evidence that an osteocyte’s functions are highly affected by its morphology. However, changes in an osteocyte’s morphology under an altered gravity environment are still not well documented. Several in vitro studies have recently been conducted to investigate the morphological response of osteocyte cells to the microgravity environment, in which osteocyte cells were cultured on a two-dimensional flat surface for at least 24 hours before microgravity experiments. Morphology changes of osteocyte cells in microgravity were then studied by comparing the cell area to that of 1g control cells. However, osteocytes found in vivo have a more 3D morphology, and both the cell body and dendritic processes are sensitive to mechanical loading. Round-shaped osteocytes have a less stiff cytoskeleton and are more sensitive to mechanical stimulation than cells with a flat morphology. Thus, the relatively flat and spread shape of isolated osteocytes in 2D culture may greatly hamper their sensitivity to a mechanical stimulus, and the lack of knowledge of the osteocyte’s morphological characteristics in culture may lead to subjective and non-comprehensive conclusions about how altered gravity impacts an osteocyte’s morphology. Through this work, empirical models were developed to quantitatively predict the changes in morphology of osteocyte cell lines (MLO-Y4) in culture, and the response of osteocyte cells, which are relatively round in shape, to hyper-gravity stimulation has also been investigated.
The morphology changes of MLO-Y4 cells in culture were quantified by measuring cell area and three dimensionless shape features, namely aspect ratio, circularity and solidity, using widely accepted image analysis software (ImageJ™). MLO-Y4 cells were cultured at low density (5×10³ per well) and the changes in morphology were recorded over 10 hours. Based on the data obtained from the imaging analysis, empirical models were developed using the non-linear regression method. The developed empirical models accurately predict the morphology of MLO-Y4 cells for different culture times and can, therefore, be used as a reference model for analysing MLO-Y4 cell morphology changes within various biological/mechanical studies, as necessary. The morphological response of MLO-Y4 cells with a relatively round morphology to a hyper-gravity environment has been investigated using a centrifuge. After 2 hours of culture, MLO-Y4 cells were exposed to 20g for 30 mins. Changes in the morphology of MLO-Y4 cells were quantitatively analysed by measuring the average value of cell area and dimensionless shape factors such as aspect ratio, solidity and circularity. In this study, no significant morphology changes were detected in MLO-Y4 cells under a hyper-gravity environment (20g for 30 mins) compared with 1g control cells.
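The three dimensionless shape factors used above have standard image-analysis definitions (as implemented, for example, in ImageJ's shape descriptors). The cell-measurement pipeline itself is not reproduced here; this sketch only illustrates the formulas on a hypothetical elongated, convex "cell":

```python
import math

# Standard definitions:
#   aspect ratio = major axis / minor axis    (1 for a circle)
#   circularity  = 4*pi*area / perimeter^2    (1 for a circle)
#   solidity     = area / convex-hull area    (1 for a convex shape)
def shape_factors(area, perimeter, major, minor, convex_area):
    return {
        "aspect_ratio": major / minor,
        "circularity": 4 * math.pi * area / perimeter ** 2,
        "solidity": area / convex_area,
    }

# Example: a 20x10 rectangle standing in for an elongated, convex cell.
f = shape_factors(area=200, perimeter=60, major=20, minor=10, convex_area=200)
print(f)  # aspect_ratio 2.0, circularity ~0.70, solidity 1.0
```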
Abstract:
The representation of business process models has been a continuing research topic for many years now. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. In addition, very few of these representations have undergone a thorough analysis or design process with reference to psychological theories on data and process visualization. This dearth of visualization research, we believe, has led to problems with BPM uptake in some organizations, as the representations can be difficult for stakeholders to understand, and it remains an open research question for the BPM community. In addition, business analysts and process modelling experts themselves need visual representations that are able to assist with key BPM life cycle tasks in the process of generating optimal solutions. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. We believe this is a timely topic, with research emerging in a number of places around the globe, relevant to this workshop. This is the second TAProViz workshop being run at BPM. The intention this year is to consolidate on the results of last year's successful workshop by further developing this important topic, identifying the key research topics of interest to the BPM visualization community.