939 results for spatiotemporal entropic thresholding


Relevance: 10.00%

Abstract:

In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a complex and challenging task due to the intricate nature of the images. The brain has a particularly complicated structure, and its precise segmentation is very important for detecting tumors, edema, and necrotic tissue in order to prescribe appropriate therapy. Magnetic Resonance Imaging (MRI) is an important diagnostic imaging technique used for the early detection of abnormal changes in tissues and organs. It offers good contrast resolution for different tissues and is therefore preferred over Computerized Tomography for brain study, so the majority of research in medical image segmentation concerns MR images. As the core of this research, a set of MR images was segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. The resulting images from the different segmentation techniques were then compared with each other and analyzed by professional radiologists to determine which segmentation technique is the most accurate. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method for segmenting a brain tumor from a Magnetic Resonance Image.
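Otsu's method selects the threshold that maximizes the between-class variance of the grey-level histogram. A minimal NumPy sketch follows; the bimodal toy "image" (dark background around 50, bright lesion around 200) is illustrative, not data from the study:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()                      # probability of each grey-level bin
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                          # background class weight
    w1 = 1.0 - w0                              # foreground class weight
    mu = np.cumsum(p * centers)                # cumulative mean grey level
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * w0 - mu) ** 2 / (w0 * w1)  # between-class variance
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

# Toy bimodal image: 4000 background pixels near 50, 1000 "tumor" pixels near 200
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 10, 4000), rng.normal(200, 10, 1000)])
t = otsu_threshold(img)
```

The threshold `t` lands in the valley between the two modes, so `img > t` isolates the bright region.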

Relevance: 10.00%

Abstract:

In the ocean science community, researchers have begun employing novel sensor platforms as integral pieces in oceanographic data collection, which have significantly advanced the study and prediction of complex and dynamic ocean phenomena. These innovative tools are able to provide scientists with data at unprecedented spatiotemporal resolutions. This paper focuses on the newly developed Wave Glider platform from Liquid Robotics. This vehicle produces forward motion by harvesting abundant natural energy from ocean waves, and provides a persistent ocean presence for detailed ocean observation. This study is targeted at determining a kinematic model for offline planning that provides an accurate estimation of the vehicle speed for a desired heading and set of environmental parameters. Given the significant wave height, ocean surface and subsurface currents, wind speed and direction, we present the formulation of a system identification to provide the vehicle’s speed over a range of possible directions.
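The system-identification step can be illustrated with an ordinary least-squares fit of vehicle speed against environmental inputs. This is a sketch on synthetic data; the linear form, variable names, and coefficients are assumptions for illustration, not the paper's actual kinematic model:

```python
import numpy as np

# Synthetic training data for an assumed linear speed response.
rng = np.random.default_rng(1)
n = 200
wave_height = rng.uniform(0.5, 3.0, n)    # significant wave height (m)
wind_speed = rng.uniform(0.0, 10.0, n)    # wind speed (m/s)
current = rng.uniform(-0.5, 0.5, n)       # along-track surface current (m/s)
true_coef = np.array([0.4, 0.02, 1.0])    # assumed "true" response for the demo
X = np.column_stack([wave_height, wind_speed, current])
speed = X @ true_coef + 0.2 + rng.normal(0, 0.05, n)   # observed speed with noise

# System identification via ordinary least squares (with an intercept term).
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, speed, rcond=None)

def predict_speed(wh, ws, cur):
    """Predicted speed for given environmental conditions."""
    return coef[0] * wh + coef[1] * ws + coef[2] * cur + coef[3]
```

The fitted coefficients recover the assumed response, and `predict_speed` can then be queried over a range of headings and conditions for offline planning.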

Relevance: 10.00%

Abstract:

The delay stochastic simulation algorithm (DSSA) by Barrio et al. [PLoS Comput. Biol. 2, e117 (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, which are basic to the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overhead. To overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving the computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models: the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
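For reference, the exact SSA that τ-leap methods accelerate can be written in a few lines. This sketch simulates a plain (non-delayed) birth-death process; the DSSA would additionally maintain a queue of scheduled delayed-reaction completions. Rates and the reaction system are illustrative:

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Exact SSA for: 0 -> X at rate k_birth; X -> 0 at rate k_death * x."""
    t, x = 0.0, x0
    states = [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x      # reaction propensities
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)     # waiting time to the next reaction
        if rng.random() * a0 < a1:
            x += 1                          # birth fires
        else:
            x -= 1                          # death fires
        states.append(x)
    return np.array(states)

rng = np.random.default_rng(2)
xs = gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=200.0, rng=rng)
mean_tail = xs[len(xs) // 2 :].mean()   # approaches the stationary mean k_birth/k_death = 100
```

Note how the time step shrinks as the total propensity grows, which is exactly the overhead that motivates τ-leaping.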

Relevance: 10.00%

Abstract:

A system to segment and recognize Australian 4-digit postcodes from address labels on parcels is described. Images of address labels are preprocessed and adaptively thresholded to reduce noise. Projections are used to segment the line and then the characters comprising the postcode. Individual digits are recognized using bispectral features extracted from their parallel beam projections. These features are insensitive to translation, scaling and rotation, and robust to noise. Results on scanned images are presented. The system is currently being improved and implemented to work on-line.
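The projection-based segmentation step can be sketched simply: summing foreground pixels along one axis gives a profile whose zero-valued gaps separate lines or characters. The toy binary "label" below is illustrative:

```python
import numpy as np

def segment_by_projection(binary_image, axis=0):
    """Split a binary image into runs where the projection profile is non-zero.

    axis=0 sums over rows (column profile, for character segmentation);
    axis=1 sums over columns (row profile, for line segmentation).
    """
    profile = binary_image.sum(axis=axis)
    segments, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i                       # a run of ink begins
        elif v == 0 and start is not None:
            segments.append((start, i))     # the run ends at a blank gap
            start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

# Toy label: three "characters" separated by blank columns.
img = np.zeros((8, 12), dtype=int)
img[2:6, 1:3] = 1
img[2:6, 5:7] = 1
img[2:6, 9:11] = 1
chars = segment_by_projection(img, axis=0)
```

Each returned `(start, stop)` column range bounds one digit, which can then be cropped and passed to the feature extractor.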

Relevance: 10.00%

Abstract:

Approximately 20 years have now passed since the NTSB issued its original recommendation to expedite development, certification and production of low-cost proximity warning and conflict detection systems for general aviation [1]. While some systems are in place (TCAS [2]), "see-and-avoid" remains the primary means of separation between light aircraft sharing the national airspace. The requirement for a collision avoidance, or sense-and-avoid, capability onboard unmanned aircraft has been identified by leading government, industry and regulatory bodies as one of the most significant challenges facing the routine operation of unmanned aerial systems (UAS) in the national airspace system (NAS) [3, 4]. In this thesis, we propose and develop a novel image-based collision avoidance system to detect and avoid an upcoming conflict scenario (with an intruder) without first estimating or filtering range. The proposed collision avoidance system (CAS) uses the relative bearing and the angular area subtended by the intruder, both estimated from an image, to form a test statistic. This test statistic is used in a thresholding technique to decide whether a conflict scenario is imminent. If deemed necessary, the system commands the aircraft to perform a manoeuvre based on the estimated bearing and constrained by the CAS sensor field of view. Using a simulation environment in which the UAS is mathematically modelled and a flight controller is developed, we show that Monte Carlo simulations can estimate the risk ratio of a mid-air collision (MAC) or a near mid-air collision (NMAC). We also show the performance gain this system has over a simplified, bearings-only version, demonstrated in the form of a standard operating characteristic curve. Finally, it is shown that the proposed CAS performs at a level comparable to current manned aviation's equivalent level of safety (ELOS) expectations for Class E airspace. In some cases the CAS may be oversensitive, manoeuvring the owncraft when not strictly necessary, but this constitutes a more conservative, and therefore safer, flying procedure in most instances.
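The range-free conflict test can be illustrated with a toy decision rule: the angular area subtended by an intruder grows rapidly as range closes, so thresholding its relative growth gives a trigger that never estimates range explicitly. The growth statistic and threshold below are illustrative stand-ins, not the thesis's actual test statistic:

```python
import numpy as np

def conflict_detected(angular_areas, growth_threshold):
    """Declare a conflict when the intruder's subtended angular area grows
    faster (per frame, relative) than a fixed threshold."""
    growth = np.diff(angular_areas) / angular_areas[:-1]
    return bool(np.any(growth > growth_threshold))

# Approaching intruder: subtended area ~ 1/range^2, range shrinking linearly.
ranges = np.linspace(1000.0, 100.0, 50)
areas = 1.0 / ranges ** 2
approaching = conflict_detected(areas, growth_threshold=0.05)

# Non-closing intruder: constant range, constant area, no trigger.
steady = conflict_detected(np.full(50, 1e-6), growth_threshold=0.05)
```

The approaching case trips the threshold near the end of the run, while the steady case never does, which is the qualitative behaviour the CAS relies on.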

Relevance: 10.00%

Abstract:

The aim of this study was to determine whether spatiotemporal interactions between footballers and the ball in 1 vs. 1 sub-phases are influenced by their proximity to the goal area. Twelve participants (age 15.3 ± 0.5 years) performed as attackers and defenders in 1 vs. 1 dyads across three field positions: (a) attacking the goal, (b) in midfield, and (c) advancing away from the goal area. In each position, the dribbler was required to move beyond an immediate defender with the ball towards the opposition goal. Interactions of attacker-defender dyads were filmed with player and ball displacement trajectories digitized using manual tracking software. One-way repeated measures analysis of variance was used to examine differences in mean defender-to-ball distance after this value had stabilized. Maximum attacker-to-ball distance was also compared as a function of proximity-to-goal. Significant differences were observed for defender-to-ball distance between locations (a) and (c) at the moment when the defender-to-ball distance had stabilized (a: 1.69 ± 0.64 m; c: 1.15 ± 0.59 m; P < 0.05). Findings indicate that proximity-to-goal influenced the performance of players, particularly when attacking or advancing away from goal areas, providing implications for training design in football. In this study, the task constraints of football revealed subtly different player interactions than observed in previous studies of dyadic systems in basketball and rugby union.

Relevance: 10.00%

Abstract:

Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high-resolution aerial images and LiDAR point clouds is presented. A framework of road information modeling is proposed for rural and urban scenarios respectively, and an integrated system is developed for road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low-resolution images, both of which can be further employed to facilitate road information generation in high-resolution images. A histogram thresholding method is then used to classify road details in high-resolution images, with a color space transformation used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Pavement markings are then obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces; the digital terrain model (DTM) produced from LiDAR data can also be combined to obtain a 3D road model. As road extraction in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high-resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR.
Object-oriented image analysis methods are employed for feature classification and road detection in aerial images. In this process, we first use an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is used as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is evaluated on pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested on the Bundaberg datasets, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information has been carried out for both datasets. The experiments and evaluation results on the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, with false alarm rates for road surfaces and lane markings below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
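The nDSM filtering step can be sketched directly: subtracting the terrain model from the surface model leaves above-ground height, and a simple height threshold removes buildings and vehicles while keeping candidate road pixels. The 2 m threshold and the toy grids are illustrative assumptions:

```python
import numpy as np

def ndsm_ground_mask(dsm, dtm, height_threshold=2.0):
    """Return a boolean mask of ground-level pixels.

    nDSM = DSM - DTM gives height above terrain; pixels below the threshold
    are kept as candidate road surface, taller pixels (buildings, vehicles,
    trees) are filtered out.
    """
    ndsm = dsm - dtm
    return ndsm < height_threshold

# Toy 4x4 scene over flat terrain.
dtm = np.zeros((4, 4))
dsm = np.zeros((4, 4))
dsm[0:2, 0:2] = 8.0   # a building block, removed by the mask
dsm[3, 3] = 1.5       # a low object, still treated as ground level
mask = ndsm_ground_mask(dsm, dtm)
```

The mask is then intersected with the image-based road classification to suppress above-ground false positives.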

Relevance: 10.00%

Abstract:

The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient relies on 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the 30-and-under age groups, virtually no data exists from these younger age groups to inform the design of implants that optimally fit such patients. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that many researchers are not trained to perform; therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher-field 3T MRI. However, a quantification of the signal-to-noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One such artefact is the step artefact, believed to occur from random movements of the volunteer during a scan; this needs to be corrected before the models can be used for implant design.

As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to address the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contours of 2D images and then combining them to generate the 3D model. Models generated by these methods were compared to the reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method; a surface geometric comparison was conducted between CT-based, MRI-based and reference models. To quantitatively compare 1.5T and 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images, obtained using identical protocols, were compared by means of the SNR and the contrast-to-noise ratio (CNR) of muscle, bone marrow and bone.

To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an alignment method based on the iterative closest point (ICP) algorithm. The present study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm, a statistically significant difference in accuracy between the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited 0.23 mm average deviation compared with the 0.18 mm average deviation of CT-based models; the differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representation, and is therefore a potential alternative to the current gold standard, CT imaging.
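The core update inside each ICP iteration is the closed-form least-squares rigid transform between corresponding point sets (the Kabsch/SVD solution). A sketch with a synthetic 5-degree misalignment standing in for the step artefact; the point data are illustrative:

```python
import numpy as np

def best_fit_transform(source, target):
    """Closed-form least-squares rigid transform (R, t) mapping source onto
    target for known correspondences -- the update step inside each ICP iteration."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# A patch of surface points, misaligned by a small rotation and shift.
rng = np.random.default_rng(3)
source = rng.normal(size=(100, 3))
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.2, 0.1])
target = source @ R_true.T + t_true
R_est, t_est = best_fit_transform(source, target)
```

Full ICP simply alternates this solve with a nearest-neighbour correspondence search until the alignment error stops decreasing.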

Relevance: 10.00%

Abstract:

Establishing a persistent presence in the ocean with an Autonomous Underwater Vehicle capable of observing temporal variability of large-scale ocean processes requires a unique sensor platform. In this paper, we examine the utility of Lagrangian profiling floats for such extended deployments. We propose a strategy that utilises ocean model predictions to provide a basic level of autonomy and achieve general control of this minimally-actuated underwater vehicle. We extend experimentally validated techniques for using ocean current models to control under-actuated autonomous underwater vehicles by investigating the application of these methods to profiling floats. With appropriate vertical actuation, and by exploiting spatiotemporal variations in water speed and direction, we show that broad controllability results can be achieved. First, we apply an A* planner to a local controllability map generated from predictions of ocean currents, computing a path between start and goal waypoints that has the highest likelihood of successful execution over a given duration. A depth plan is then generated with a model predictive controller, which selects depths for the vehicle so that ambient currents guide it toward the goal. Mission constraints are included to simulate and motivate a practical data collection mission. Results are presented in simulation for a mission off the coast of Los Angeles, CA, USA, and show a surprising ability of a drifting vehicle to maintain a prescribed course and reach a desired location.
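The A* search over a controllability map can be sketched on a small grid where each cell's cost encodes how unfavourable the predicted current is there. The grid values, 4-neighbourhood, and Manhattan heuristic are illustrative choices, not the paper's actual planner:

```python
import heapq

def a_star(cost, start, goal):
    """A* over a grid; cost[r][c] is the expense of entering a cell
    (e.g. derived from how unfavourable the predicted current is)."""
    rows, cols = len(cost), len(cost[0])
    # Manhattan distance: admissible because every step costs at least 1 here.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    best = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue                      # already expanded with a cheaper cost
        best[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]
                heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

# 1 = favourable current, 10 = strong opposing current in the middle column.
grid = [[1, 10, 1],
        [1, 10, 1],
        [1, 1, 1]]
total, path = a_star(grid, (0, 0), (0, 2))
```

The planner detours around the high-cost column rather than pushing straight across it, which is the behaviour exploited to let ambient currents do the work.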

Relevance: 10.00%

Abstract:

Ocean processes are complex and have high variability in both time and space. Thus, ocean scientists must collect data over long time periods to obtain a synoptic view of ocean processes and resolve their spatiotemporal variability. One way to perform these persistent observations is to utilise an autonomous vehicle that can remain on deployment for long time periods. However, such vehicles are generally underactuated and slow moving. A challenge for persistent monitoring with these vehicles is dealing with currents while executing a prescribed path or mission. Here we present a path planning method for persistent monitoring that exploits ocean currents to increase navigational accuracy and reduce energy consumption.

Relevance: 10.00%

Abstract:

This study proposes a full Bayes (FB) hierarchical modeling approach to traffic crash hotspot identification. The FB approach accounts for all uncertainties associated with crash risk and various risk factors by estimating a posterior distribution of site safety, on which various ranking criteria can be based. Moreover, through hierarchical model specification, the FB approach can flexibly take into account various heterogeneities of crash occurrence due to spatiotemporal effects on traffic safety. Using Singapore intersection crash data (1997–2006), an empirical evaluation was conducted to compare the proposed FB approach with state-of-the-art approaches. Results show that the Bayesian hierarchical models accommodating site-specific effects and serial correlation have better goodness-of-fit than non-hierarchical models. Furthermore, all model-based approaches perform significantly better in safety ranking than the naive approach using raw crash counts. The FB hierarchical models were found to significantly outperform the standard empirical Bayes (EB) approach in correctly identifying hotspots.
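The posterior-ranking idea can be illustrated with a deliberately simplified, non-hierarchical Poisson-gamma model, which is conjugate and so has a closed-form posterior. The prior parameters and toy counts below are illustrative assumptions, not the study's model or data:

```python
import numpy as np

def posterior_crash_rate(counts, exposure, alpha, beta, n_draws=5000, rng=None):
    """Poisson-gamma sketch: counts ~ Poisson(rate * exposure), rate ~ Gamma(alpha, beta).
    Conjugacy gives posterior Gamma(alpha + y, beta + exposure) per site; returns
    posterior draws from which ranking criteria (posterior mean, exceedance
    probability, ...) can be computed."""
    if rng is None:
        rng = np.random.default_rng(0)
    counts = np.asarray(counts, float)
    exposure = np.asarray(exposure, float)
    return rng.gamma(alpha + counts[:, None],
                     1.0 / (beta + exposure[:, None]),
                     size=(len(counts), n_draws))

counts = [2, 15, 4]            # observed crashes at three toy sites
exposure = [1.0, 1.0, 1.0]
draws = posterior_crash_rate(counts, exposure, alpha=2.0, beta=1.0)
post_mean = draws.mean(axis=1)           # shrunk toward the prior mean alpha/beta = 2
ranking = np.argsort(-post_mean)         # hotspot ranking: highest posterior rate first
```

Ranking on the full posterior, rather than on raw counts, is what lets criteria such as exceedance probabilities reflect uncertainty; the hierarchical FB models in the study additionally let the prior itself be estimated from the data.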

Relevance: 10.00%

Abstract:

Finite element (FE) analysis is an effective method to study the strength and predict the fracture risk of endodontically-treated teeth. This paper presents a rapid method developed to generate a comprehensive tooth FE model using data retrieved from micro-computed tomography (μCT). With this method, the inhomogeneity of the material properties of teeth was included in the model without dividing the tooth into different regions; the material properties were assumed to be related to the mineral density. The fracture risk at different tooth portions was assessed for root canal treatments. The micro-CT images of a tooth were processed by a MATLAB program and the CT numbers retrieved. The tooth contours were obtained by thresholding segmentation using Amira. The inner and outer surfaces of the tooth were imported into SolidWorks and a three-dimensional (3D) tooth model was constructed. An assembly of the tooth model with the periodontal ligament (PDL) layer and surrounding bone was imported into ABAQUS. The material properties of the tooth were calculated from the retrieved CT numbers via ABAQUS user subroutines. Three root canal geometries (the original and two enlargements) were investigated. The proposed method can generate detailed 3D finite element models of a tooth with different root canal enlargements and filling materials, and should be very useful for assessing the fracture risk at different tooth portions after root canal treatments.
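The density-based material mapping can be sketched as a two-step relation: a linear CT-number-to-density calibration followed by a power law for the elastic modulus, the common form of such mappings. All coefficients below are illustrative placeholders, not calibrated values from the study:

```python
import numpy as np

def ct_to_modulus(hu, rho_a=0.0, rho_b=0.001, e_coeff=8.0, e_exp=2.0):
    """Map CT numbers (Hounsfield units) to an elastic modulus (GPa).

    Step 1: linear calibration, density = rho_a + rho_b * HU  (g/cm^3).
    Step 2: power law, E = e_coeff * density ** e_exp.
    Coefficients are illustrative assumptions.
    """
    density = rho_a + rho_b * np.asarray(hu, float)
    density = np.clip(density, 0.0, None)     # no negative densities
    return e_coeff * density ** e_exp

hu = np.array([500.0, 1000.0, 2000.0])        # three voxels of increasing mineral density
E = ct_to_modulus(hu)
```

Evaluating such a relation per element inside a user subroutine is what allows the inhomogeneous properties to be assigned without partitioning the tooth into discrete material regions.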

Relevance: 10.00%

Abstract:

A building information model (BIM) is an electronic repository of structured, three-dimensional data that captures both the physical and dynamic functional characteristics of a facility. In addition to its more traditional function as a tool to aid design and construction, a BIM can be used throughout the life cycle of a facility, functioning as a living database that places resources contained within the building in their spatial and temporal context. Through its comprehension of spatial relationships, a BIM can meaningfully represent and integrate previously isolated control and management systems and processes, and thereby provide a more intuitive interface to users. By placing processes in a spatial context, decision-making can be improved, with positive flow-on effects for security and efficiency. In this article, we systematically analyse the authorization requirements involved in the use of BIMs. We introduce the concept of using a BIM as a graphical tool to support spatial access control configuration and management (including physical access control). We also consider authorization requirements for regulating access to the structured data that exists within a BIM as well as to external systems and data repositories that can be accessed via the BIM interface. With a view to addressing these requirements we present a survey of relevant spatiotemporal access control models, focusing on features applicable to BIMs and highlighting capability gaps. Finally, we present a conceptual authorization framework that utilizes BIMs.

Relevance: 10.00%

Abstract:

The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industry, and the public. Case-crossover and time series models are the models most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality; few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality; and little evidence is available on the effects of temperature changes on mortality, or on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects on mortality of temperature changes between neighbouring days be assessed? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and the distributed lag non-linear model, datasets of deaths, weather conditions (minimum, mean, and maximum temperature, and relative humidity), and air pollution were acquired for Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model, which allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days.
Hot effects were acute, lasting for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposure produce better estimates of mortality risk than time series models that use a single site's temperature or temperature averaged over a network of sites. Daily mortality data were obtained for 163 locations across Brisbane city, Australia from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality, and a time series model was used to assess the effects on mortality of a single site's temperature and of temperature averaged from 3 monitoring sites. Squared Pearson scaled residuals were used to check model fit. The results show that although the spatiotemporal models gave a better model fit than the time series models, the two gave similar effect estimates: time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. Temperature change was calculated as the current day's mean temperature minus the previous day's mean.
In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in those aged 65–74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with mortality, with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into a "main effect" due to high temperatures, using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions, and overall effects at both regional and national levels. The effects of high temperature (both main and added) on elderly mortality varied greatly by year, city and region, and years with higher heat-related mortality were often followed by years with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems.
In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin, allowing the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. A time series model using a single site's temperature, or temperature averaged over several sites, can be used to examine the effects of temperature on mortality. Temperature change, whether a marked drop or a marked increase, raises the risk of mortality. The effect of high temperatures on mortality is highly variable from year to year.
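The temperature-change exposure and the log-linear risk interpretation used above can be sketched directly; the temperatures and the example RR are illustrative numbers:

```python
import numpy as np

def temperature_change(mean_temps):
    """Temperature change: each day's mean temperature minus the previous day's."""
    return np.diff(np.asarray(mean_temps, float))

def relative_risk(beta, delta=1.0):
    """RR for a `delta`-unit covariate increase in a log-linear (Poisson) model."""
    return float(np.exp(beta * delta))

temps = [24.0, 27.5, 23.9, 24.1]          # daily mean temperatures (degC)
changes = temperature_change(temps)        # between-day differences
flagged = np.abs(changes) > 3.0            # days with a swing of more than 3 degC
```

A fitted Poisson coefficient `beta` for the flagged-swing indicator then converts to a relative risk via `relative_risk(beta)`, which is how figures such as RR = 1.16 are reported.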