931 results for Real-time volume rendering
Abstract:
Coral reefs are biologically complex ecosystems that support a wide variety of marine organisms. These are fragile communities under enormous threat from natural and human influences. Properly assessing and measuring the growth and health of reefs is essential to understanding the impacts of ocean acidification, coastal urbanisation and global warming. In this paper, we present an innovative 3-D reconstruction technique based on visual imagery as a non-intrusive, repeatable, in situ method for estimating physical parameters, such as surface area and volume, for efficient assessment of long-term variability. The reconstruction algorithms are presented and benchmarked using an existing data set. We validate the technique underwater, utilising a commercial off-the-shelf camera and a piece of staghorn coral, Acropora cervicornis. The resulting reconstruction is compared with a laser scan of the coral piece for assessment and validation. The comparison shows that 77% of the pixels in the reconstruction are within 0.3 mm of the ground-truth laser scan. Reconstruction results from an unknown video camera are also presented as a segue to future applications of this research.
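As a rough illustration of the validation step above, the sketch below scores a reconstructed point cloud against a laser-scan ground truth by computing nearest-neighbour distances and reporting the fraction within a 0.3 mm tolerance. The point clouds are synthetic stand-ins; the paper's actual data and pipeline are assumed unavailable.

    # Fraction of reconstructed points within tolerance of a ground-truth scan.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    ground_truth = rng.uniform(0, 50, size=(5000, 3))               # laser scan (mm)
    reconstruction = ground_truth + rng.normal(0, 0.2, (5000, 3))   # noisy reconstruction

    tree = cKDTree(ground_truth)
    dist, _ = tree.query(reconstruction)    # nearest ground-truth point per vertex

    tolerance_mm = 0.3
    within = np.mean(dist <= tolerance_mm)
    print(f"{within:.0%} of reconstructed points within {tolerance_mm} mm")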
Abstract:
With the advent of live cell imaging microscopy, new types of mathematical analyses and measurements are possible. Many of the real-time movies of cellular processes are visually very compelling, but elementary analysis of changes over time in quantities such as surface area and volume often shows that there is more to the data than meets the eye. This unit outlines a geometric modeling methodology and applies it to the tubulation of vesicles during endocytosis. Using these principles, it has been possible to build better qualitative and quantitative understandings of the systems observed, as well as to make predictions about quantities such as ligand or solute concentration, vesicle pH, and the amount of membrane trafficked. The purpose is to outline a methodology for analyzing real-time movies that has led to a greater appreciation of the changes occurring during the time frame of real-time video microscopy, and to show how additional quantitative measurements allow further hypotheses to be generated and tested.
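To make the geometric bookkeeping concrete, the sketch below models a tubulating vesicle as a sphere with a growing cylindrical tube and tracks surface area and volume over time. The sphere-plus-cylinder idealization and all dimensions are assumptions for illustration, not the unit's exact model.

    import numpy as np

    def vesicle_geometry(r_sphere, r_tube, tube_length):
        """Surface area and volume of a sphere with an attached open cylinder."""
        area = 4 * np.pi * r_sphere**2 + 2 * np.pi * r_tube * tube_length
        volume = (4 / 3) * np.pi * r_sphere**3 + np.pi * r_tube**2 * tube_length
        return area, volume

    for t, length in enumerate(np.linspace(0, 2.0, 5)):    # tube grows over time (um)
        a, v = vesicle_geometry(r_sphere=0.5, r_tube=0.05, tube_length=length)
        print(f"frame {t}: area={a:.3f} um^2, volume={v:.4f} um^3, SA/V={a / v:.1f}")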
Abstract:
Popular wireless networks, such as IEEE 802.11/15/16, are not designed for real-time applications. Thus, supporting real-time quality of service (QoS) in wireless real-time control is challenging. This paper adopts the widely used IEEE 802.11 standard, focusing on its distributed coordination function (DCF), for soft real-time control systems. The concept of the critical real-time traffic condition is introduced to characterize the marginal satisfaction of real-time requirements. Mathematical models are then developed to describe the dynamics of DCF-based real-time control networks with periodic traffic, a distinctive feature of control systems. Performance indices such as throughput and packet delay are evaluated using the developed models, particularly under the critical real-time traffic condition. Finally, the proposed modelling is applied to traffic rate control for cross-layer networked control system design.
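The paper develops its own model for periodic traffic; as a flavour of DCF analysis, the sketch below instead solves Bianchi's well-known saturation model, a fixed point between the per-slot transmit probability tau and the conditional collision probability p. The parameter values (10 stations, minimum window 32, 5 backoff stages) are assumptions for illustration.

    from scipy.optimize import brentq

    def residual(tau, n=10, W=32, m=5):
        """Gap between an assumed tau and the tau Bianchi's model implies."""
        p = 1 - (1 - tau) ** (n - 1)            # conditional collision probability
        tau_model = (2 * (1 - 2 * p)) / (
            (1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
        return tau - tau_model

    tau = brentq(residual, 1e-6, 0.5)           # solve the fixed point numerically
    p = 1 - (1 - tau) ** 9                      # collision probability at n = 10
    print(f"transmit probability tau = {tau:.4f}, collision probability p = {p:.4f}")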
Abstract:
This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis, where risks are detected during process execution. The approach has been realized via a distributed, sensor-based architecture. At design time, sensors are defined to specify risk conditions which, when fulfilled, indicate that negative process states (faults) are likely to eventuate. Both historical and current process execution data can be used to compose such conditions. At run time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a business process management system to report the results to process administrators, who may take remedial actions. The proposed architecture has been implemented on top of the YAWL system and evaluated through performance measurements and usability tests with students. The results show that risk conditions can be computed efficiently and that the approach is perceived as useful by the participants in the tests.
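A minimal sketch of the sensor/manager split described above: each sensor evaluates a risk condition over process data and notifies the manager when it fires. Class names and the example condition are illustrative assumptions, not the YAWL-based implementation.

    class SensorManager:
        def notify(self, sensor_name, risk):
            # The real architecture would prompt the BPMS monitoring component;
            # here we just report to the console.
            print(f"[{sensor_name}] risk detected: {risk}")

    class RiskSensor:
        def __init__(self, name, condition, manager):
            self.name, self.condition, self.manager = name, condition, manager

        def check(self, process_data):
            risk = self.condition(process_data)     # None means no risk
            if risk:
                self.manager.notify(self.name, risk)

    manager = SensorManager()
    # Example condition: flag cases whose running time exceeds the historical mean.
    overrun = RiskSensor(
        "overrun",
        lambda d: f"elapsed {d['elapsed']} h > mean {d['mean']} h"
        if d["elapsed"] > d["mean"] else None,
        manager)
    overrun.check({"elapsed": 9.5, "mean": 6.0})    # fires
    overrun.check({"elapsed": 4.0, "mean": 6.0})    # silent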
Abstract:
Biomedical engineering solutions like surgical simulators need High Performance Computing (HPC) to achieve real-time performance. Graphics Processing Units (GPUs) offer HPC capabilities at low cost and low power consumption. In this work, it is demonstrated that a liver discretized by about 2500 finite element nodes can be graphically simulated in real time by making use of a GPU. The present work takes into consideration the time needed for the data transfer from CPU to GPU and back from GPU to CPU. Although the behaviour of the liver is very complicated, the present computer simulation assumes linear elastostatics. The commercial software ANSYS is used to obtain the global stiffness matrix of the liver. Results show that GPUs are useful for the real-time graphical simulation of the liver, which in turn is needed in simulators used for training surgeons in laparoscopic surgery. Although the computer simulation should also involve rendering, neither rendering nor the time needed for rendering and displaying the liver on a screen is considered in the present work. The present work is a demonstration of a concept; the concept is not fully implemented and validated. Future work is to develop software that accomplishes real-time and highly realistic graphical simulation of the liver, with the rendered image of the liver on the screen changing in real time according to the position of the surgical tool tip, approximated as the mouse cursor in 3D.
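Under linear elastostatics, each frame reduces to solving K u = f for the nodal displacements u, with the global stiffness matrix K fixed and factorized once offline. The sketch below shows that structure on a CPU with a random symmetric-positive-definite stand-in for K (the paper obtains K from ANSYS and solves on a GPU); the problem size is scaled down from the roughly 7500 DOFs a 2500-node mesh would have.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    n_dof = 300                             # scaled down for the sketch
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n_dof, n_dof))
    K = A @ A.T + n_dof * np.eye(n_dof)     # SPD stand-in for the stiffness matrix

    factor = cho_factor(K)                  # one-off Cholesky factorization (offline)

    f = np.zeros(n_dof)
    f[:3] = [0.0, -1.0, 0.0]                # tool tip pushes one node downward
    u = cho_solve(factor, f)                # fast per-frame triangular solves
    print("max displacement:", float(np.abs(u).max()))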
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 k samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering, and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
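A minimal sketch of the parallel layout described above: one 200 ms segment of 64 channels sampled at 20 kHz is fanned out across 8 workers. Python's multiprocessing stands in for the multi-node DSP system, and a simple threshold-crossing counter stands in for the full noise-removal/pattern-matching pipeline.

    import numpy as np
    from multiprocessing import Pool

    FS = 20_000                     # samples per second per electrode
    SEGMENT = int(0.2 * FS)         # 200 ms window

    def count_spikes(channel):
        """Toy detector: count upward crossings of 4x a robust noise estimate."""
        threshold = 4 * np.median(np.abs(channel)) / 0.6745
        above = channel > threshold
        return int(np.sum(above[1:] & ~above[:-1]))     # rising edges only

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        data = rng.standard_normal((64, SEGMENT))       # one segment, 64 channels
        with Pool(8) as pool:                           # 8 workers, as in the paper
            counts = pool.map(count_spikes, list(data))
        print("spikes per channel:", counts[:8], "...")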
Abstract:
Purpose: To extend the previously developed temporally constrained reconstruction (TCR) algorithm to allow for real-time availability of three-dimensional (3D) temperature maps capable of monitoring MR-guided high intensity focused ultrasound applications.
Methods: A real-time TCR (RT-TCR) algorithm is developed that only uses current and previously acquired undersampled k-space data from a 3D segmented EPI pulse sequence, with the image reconstruction done in a graphics processing unit implementation to overcome the computation burden. Simulated and experimental data sets of HIFU heating are used to evaluate the performance of the RT-TCR algorithm.
Results: The simulation studies demonstrate that the RT-TCR algorithm has subsecond reconstruction time and can accurately measure HIFU-induced temperature rises of 20 °C in 15 s for 3D volumes of 16 slices (RMSE = 0.1 °C), 24 slices (RMSE = 0.2 °C), and 32 slices (RMSE = 0.3 °C). Experimental results in ex vivo porcine muscle demonstrate that the RT-TCR approach can reconstruct temperature maps with 192 x 162 x 66 mm 3D volume coverage, 1.5 x 1.5 x 3.0 mm resolution, and 1.2-s scan time with an accuracy of 0.5 °C.
Conclusion: The RT-TCR algorithm offers an approach to obtaining large-coverage 3D temperature maps in real time for monitoring MR-guided high intensity focused ultrasound treatments. Magn Reson Med 71:1394-1404, 2014. © 2013 Wiley Periodicals, Inc.
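A 2-D CPU toy of the temporal constraint at the heart of TCR: reconstruct the current frame x from undersampled k-space data d by minimizing ||M F x - d||^2 + lam ||x - x_prev||^2, so unsampled regions are filled in from the previous frame. The sampling pattern, sizes, and gradient-descent solver are assumptions for illustration, not the paper's 3-D GPU algorithm.

    import numpy as np

    rng = np.random.default_rng(3)
    N = 64
    x_prev = rng.random((N, N))                 # previous temperature frame
    x_true = x_prev.copy()
    x_true[28:36, 28:36] += 0.5                 # localized HIFU heating

    mask = rng.random((N, N)) < 0.3             # 30% random k-space sampling
    d = mask * np.fft.fft2(x_true)              # undersampled measurements

    lam, step = 0.5, 0.5
    x = x_prev.copy()
    for _ in range(100):                        # gradient descent on the objective
        grad_data = np.fft.ifft2(mask * (mask * np.fft.fft2(x) - d)).real
        x -= step * (grad_data + lam * (x - x_prev))

    print("RMSE vs truth:", float(np.sqrt(np.mean((x - x_true) ** 2))))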
Abstract:
We describe developments in the integration of analyte-specific holographic sensors into PDMS-based microfluidic devices for the purpose of continuous, low-impact monitoring of extracellular change in micro-bioreactors. Holographic sensors respond to analyte concentration via volume change, which makes their reduction in size and integration into spatially confined fluidics difficult. Through design and process modification, many of these constraints have been addressed, and a microfluidics-based device capable of real-time monitoring of the pH change caused by Lactobacillus casei fermentation is presented as a general proof-of-concept for a wide array of possible devices.
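In use, such a sensor is read out by mapping the hologram's replay wavelength, which shifts as the polymer matrix swells or shrinks, back to analyte concentration through a calibration curve. The sketch below interpolates pH from wavelength; all calibration points are invented for illustration, as a real device would be calibrated against buffer standards.

    import numpy as np

    cal_wavelength_nm = np.array([520.0, 535.0, 552.0, 571.0, 590.0])   # assumed
    cal_ph = np.array([6.8, 6.4, 6.0, 5.6, 5.2])                        # assumed

    def ph_from_wavelength(wl_nm):
        # np.interp needs ascending x; the wavelengths already are.
        return float(np.interp(wl_nm, cal_wavelength_nm, cal_ph))

    # Simulated fermentation: the medium acidifies and the hologram red-shifts.
    for wl in [525.0, 545.0, 565.0, 585.0]:
        print(f"replay {wl:.0f} nm -> pH {ph_from_wavelength(wl):.2f}")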
Abstract:
This article presents results from conventional creep tests (CCT) and two accelerated test methods (the stepped isothermal method (SIM) and the stepped isostress method (SSM)) to determine the creep and creep-rupture behavior of two different aramid fibers, Kevlar 49 and Technora. CCT are regarded as the true behavior of the yarn, but they are impractical for long-term use where failures are expected only after many years. All the tests were carried out on the same batches of yarns, and using the same clamping arrangements, so the tests should be directly comparable. For both materials, SIM testing gives good agreement with CCT and gave stress-rupture lifetimes that followed the same trend. However, there was significant variation for SSM testing, especially when testing Technora fibers. The results indicate that Kevlar has a creep strain capacity that is almost independent of stress, whereas Technora shows a creep strain capacity that depends on stress. Its creep strain capacity is approximately two to three times that of Kevlar 49. The accelerated test methods give indirect estimates for the activation energy and the activation volume of the fibers. The activation energy for Technora is about 20% higher than that for Kevlar, meaning that it is less sensitive to the effects of increasing temperature. The activation volume for both materials was similar, and in both cases, stress dependent. Copyright © 2012 Wiley Periodicals, Inc.
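The acceleration step behind methods such as SIM can be illustrated with an Arrhenius shift factor: creep at an elevated temperature T is mapped to an equivalent, longer time at a reference temperature via a_T = exp[(Ea/R)(1/T_ref - 1/T)]. The activation energy below is an assumed placeholder, not one of the article's measured values.

    import math

    R = 8.314            # J/(mol K)
    Ea = 120e3           # J/mol, assumed activation energy for illustration
    T_ref = 293.15       # reference temperature, 20 C

    def shift_factor(T_kelvin):
        """Factor by which time at T_kelvin is compressed relative to T_ref."""
        return math.exp((Ea / R) * (1.0 / T_ref - 1.0 / T_kelvin))

    for T_c in [20, 40, 60, 80]:
        print(f"{T_c} C: shift factor a_T = {shift_factor(T_c + 273.15):.1f}")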
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
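A miniature of the genetic-algorithm core, evolving a dispatch sequence that minimizes total weighted tardiness on a single resource. The toy objective, operators, and parameters are assumptions; the thesis's incremental variant and its real objective (balancing utilization across RPI's production resources) differ.

    import random

    random.seed(4)
    orders = [(3, 9, 2), (5, 12, 1), (2, 6, 3), (7, 20, 1), (4, 10, 2)]  # (proc, due, weight)

    def tardiness(seq):
        t = cost = 0
        for i in seq:
            proc, due, w = orders[i]
            t += proc
            cost += w * max(0, t - due)
        return cost

    def crossover(a, b):
        """Order crossover: keep a slice of parent a, fill the rest in b's order."""
        i, j = sorted(random.sample(range(len(a)), 2))
        middle = a[i:j]
        rest = [g for g in b if g not in middle]
        return rest[:i] + middle + rest[i:]

    pop = [random.sample(range(len(orders)), len(orders)) for _ in range(30)]
    for _ in range(50):
        pop.sort(key=tardiness)
        parents = pop[:10]                          # truncation selection
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(20)]
        for c in children:                          # occasional swap mutation
            if random.random() < 0.2:
                i, j = random.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]
        pop = parents + children

    best = min(pop, key=tardiness)
    print("best sequence:", best, "weighted tardiness:", tardiness(best))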
We next discuss the analysis and prediction of different attributes involved in the hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce an enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
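A small sketch of that decompose-predict-aggregate strategy: split a daily series into a weekly-periodic component and a trend, forecast each separately, and sum the forecasts. The synthetic series and the naive per-component forecasters are stand-ins for the thesis's univariate and multivariate models.

    import numpy as np

    rng = np.random.default_rng(5)
    days = np.arange(120)
    series = (100 + 0.3 * days                       # trend
              + 10 * np.sin(2 * np.pi * days / 7)    # weekly cycle
              + rng.normal(0, 2, days.size))         # noise

    period = 7
    seasonal = np.array([series[p::period].mean() for p in range(period)])
    seasonal -= seasonal.mean()                      # zero-mean periodic component
    deseason = series - seasonal[days % period]

    slope, intercept = np.polyfit(days, deseason, 1) # linear trend forecast
    horizon = np.arange(120, 134)                    # two weeks ahead
    forecast = (intercept + slope * horizon) + seasonal[horizon % period]
    print("next 7 days:", np.round(forecast[:7], 1))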
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually done on a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality-reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose FPGA is based on Artix-7 programmable logic, and has been tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
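A simplified CPU rendition of the VCA idea: repeatedly project all pixels onto a direction orthogonal to the subspace spanned by the endmembers found so far, and take the most extreme pixel as the next endmember. The synthetic mixing model below is an assumption for illustration; the paper's FPGA pipeline differs in detail.

    import numpy as np

    rng = np.random.default_rng(6)
    bands, n_pix, p = 50, 2000, 3
    E_true = rng.random((bands, p))                       # endmember spectra
    abund = rng.dirichlet(np.ones(p), size=n_pix).T       # sum-to-one abundances
    X = E_true @ abund + 0.001 * rng.standard_normal((bands, n_pix))

    indices, E = [], np.zeros((bands, 0))
    for _ in range(p):
        w = rng.standard_normal(bands)                    # random direction
        if E.shape[1] > 0:
            # Remove the component of w lying in the span of found endmembers.
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        proj = w @ X                                      # score every pixel
        indices.append(int(np.argmax(np.abs(proj))))
        E = X[:, indices]                                 # grow the endmember set

    print("selected endmember pixels:", indices)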
Abstract:
In this paper, the effectiveness of a novel method of computer-assisted pedicle screw insertion was studied using a hypothesis-testing procedure with a sample size of 48. Pattern recognition based on geometric features of markers on the drill was performed on real-time optical video obtained from orthogonally placed CCD cameras. The study reveals the exactness of the calculated position of the drill using navigation based on a CT image of the vertebra and real-time optical video of the drill. The significance value is 0.424 at the 95% confidence level, which indicates good precision, with a standard error of the mean of only 0.00724. The virtual vision method is less hazardous to both the patient and the surgeon.
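One plausible reading of the reported statistics is a one-sample t-test: with n = 48 trials, test whether the mean deviation between the navigated and actual drill positions differs from zero. The simulated deviations below are placeholders for the study's measurements.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    deviation_mm = rng.normal(loc=0.005, scale=0.05, size=48)   # assumed data

    t_stat, p_value = stats.ttest_1samp(deviation_mm, popmean=0.0)
    sem = stats.sem(deviation_mm)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}, SEM = {sem:.5f} mm")
    # A large p-value (like the paper's 0.424) gives no evidence of systematic bias.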
Abstract:
Climate model projections show that climate change will further increase the risk of flooding in many regions of the world. There is a need for climate adaptation, but building new infrastructure or additional retention basins has its limits, especially in densely populated areas where open space is limited. Another solution is the more efficient use of existing infrastructure. This research investigates a method for real-time flood control by means of existing gated weirs and retention basins. The method was tested on the specific study area of the Demer basin in Belgium but is generally applicable. Today, retention basins along the Demer River are controlled by means of adjustable gated weirs based on fixed logic rules. However, because of the high complexity of the system, these rules achieve only suboptimal results. By making use of precipitation forecasts and combined hydrological-hydraulic river models, the state of the river network can be predicted. To speed up the calculations, a conceptual river model was used. The conceptual model was combined with a Model Predictive Control (MPC) algorithm and a Genetic Algorithm (GA). The MPC algorithm predicts the state of the river network depending on the positions of the adjustable weirs in the basin, while the GA generates these positions in a semi-random way. Cost functions based on water levels were introduced to evaluate the efficiency of each generation, with the aim of minimizing flood damage. In the final phase of this research, the influence of the most important MPC and GA parameters was investigated by means of a sensitivity study. The results show that the MPC-GA algorithm reduces the total flood volume during the historical event of September 1998 by 46% in comparison with the current regulation. Based on the MPC-GA results, some recommendations could be formulated to improve the logic rules.
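A miniature of the MPC-GA loop: a GA proposes gate openings over the prediction horizon, a conceptual model (here a deliberately trivial single-reservoir mass balance) simulates the system, and a flood-volume cost ranks each candidate. All numbers are invented; the Demer basin's conceptual river model is far richer.

    import random

    random.seed(8)
    inflow = [5, 8, 20, 35, 30, 15, 8, 5]      # forecast inflow per step
    CAPACITY, MAX_OUT = 100.0, 25.0            # storage cap, max gate discharge

    def flood_volume(gates):
        storage, flooded = 40.0, 0.0
        for q_in, g in zip(inflow, gates):
            storage += q_in - g * MAX_OUT      # g in [0, 1] is the gate opening
            if storage > CAPACITY:             # spill above capacity = flooding
                flooded += storage - CAPACITY
                storage = CAPACITY
            storage = max(storage, 0.0)
        return flooded

    pop = [[random.random() for _ in inflow] for _ in range(40)]
    for _ in range(60):
        pop.sort(key=flood_volume)
        survivors = pop[:10]                   # keep the best schedules
        pop = survivors + [
            [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
             for g in random.choice(survivors)]
            for _ in range(30)]

    best = min(pop, key=flood_volume)
    print("best gate schedule:", [round(g, 2) for g in best],
          "flooded volume:", round(flood_volume(best), 1))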