937 results for Space-Time Symmetries
Abstract:
Background and objective: The time course of cardiopulmonary alterations after pulmonary embolism has not been clearly demonstrated, nor has the role of systemic inflammation in the pathogenesis of the disease. This study aimed to evaluate, over 12 h, the effects of pulmonary embolism caused by polystyrene microspheres on haemodynamics, lung mechanics, gas exchange and interleukin-6 production. Methods: Ten Large White pigs (weight 35-42 kg) had arterial and pulmonary catheters inserted, and pulmonary embolism was induced in five pigs by injection of polystyrene microspheres (diameter ~300 μm) until mean pulmonary arterial pressure reached twice its baseline value. Five other animals received only saline. Haemodynamic and respiratory data and pressure-volume curves of the respiratory system were collected. Bronchoscopy was performed before and 12 h after embolism, when the animals were euthanized. Results: The embolism group developed hypoxaemia that was not corrected with high oxygen fractions, as well as higher dead space and airway resistance and lower respiratory compliance. Acute haemodynamic alterations included pulmonary arterial hypertension with preserved systemic arterial pressure and cardiac index. These derangements persisted until the end of the experiments. Plasma interleukin-6 concentrations were similar in both groups; however, an increase in core temperature and a nonsignificantly higher concentration of bronchoalveolar lavage proteins were found in the embolism group. Conclusion: Acute pulmonary embolism induced by polystyrene microspheres in pigs produces hypoxaemia lasting 12 h and high dead space, associated with high airway resistance and low compliance. There were no plasma systemic markers of inflammation, but a higher core temperature and a trend towards higher bronchoalveolar lavage protein levels were found. Eur J Anaesthesiol 27:67-76 © 2010 European Society of Anaesthesiology.
Abstract:
Introduction: This ex vivo study evaluated the heat release, time required, and cleaning efficacy of the MTwo (VDW, Munich, Germany) and ProTaper Universal Retreatment systems (Dentsply/Maillefer, Ballaigues, Switzerland) and of hand instrumentation in the removal of filling material. Methods: Sixty single-rooted human teeth with a single straight canal were obturated with gutta-percha and a zinc oxide-eugenol-based cement and randomly allocated to 3 groups (n = 20). After 30 days of storage at 37 °C and 100% humidity, the root fillings were removed using ProTaper UR, MTwo R, or hand files. Heat release, time required, and cleaning efficacy data were analyzed statistically (analysis of variance and the Tukey test, α = 0.05). Results: None of the techniques removed the root fillings completely. Filling material removal with ProTaper UR was faster but caused more heat release. MTwo R produced less heat release than the other techniques but was the least efficient in removing gutta-percha/sealer. Conclusions: ProTaper UR and MTwo R caused the greatest and lowest temperature increases on the root surface, respectively; regardless of the type of instrument, more heat was released in the cervical third. ProTaper UR needed less time to remove fillings than MTwo R. All techniques left filling debris in the root canals. (J Endod 2010;36:1870-1873)
Abstract:
Objectives: The purpose of this in vitro study was to evaluate the Vickers hardness (VHN) of a Light Core (Bisco) composite resin after root reinforcement, according to the light exposure time, region of intracanal reinforcement and lateral distance from the light-transmitting fibre post. Methods: Forty-five 17-mm-long roots were used. Twenty-four hours after obturation, the root canals were emptied to a depth of 12 mm and the root dentine was artificially flared to produce a 1 mm space between the fibre post and the canal walls. The roots were bulk restored with the composite resin, which was photoactivated through the post for 40 s (G1, control), 80 s (G2) or 120 s (G3). Twenty-four hours after post cementation, the specimens were sectioned transversely into three slices at depths of 2, 6 and 10 mm, corresponding to the coronal, middle and apical regions of the reinforced root. Composite VHN was measured as the average of three indentations (100 g/15 s) in each region at lateral distances of 50, 200 and 350 μm from the cement/post interface. Results: Three-way analysis of variance (α = 0.05) indicated that the factors time, region and distance influenced the hardness and that the interaction time × region was statistically significant (p = 0.0193). Tukey's test showed that the mean VHN values for G1 (76.37 ± 8.58) and G2 (74.89 ± 6.28) differed significantly from that for G3 (79.5 ± 5.18). Conclusions: Composite resin hardness was significantly lower in deeper regions of the root reinforcement and in lateral areas distant from the post. Overall, a light exposure time of 120 s provided higher composite hardness than the shorter times (40 and 80 s). © 2008 Elsevier Ltd. All rights reserved.
Abstract:
A scheme is presented to incorporate a mixed-potential integral equation (MPIE), using Michalski's formulation C, with the method of moments (MoM) for analyzing the scattering of a plane wave from conducting planar objects buried in a dielectric half-space. The robust complex image method with a two-level approximation is used to calculate the Green's functions for the half-space. To further speed up the computation, an interpolation technique is employed for filling the matrix. While the induced current distributions on the object's surface are obtained in the frequency domain, the corresponding time-domain responses are calculated via the inverse fast Fourier transform (FFT). The complex natural resonances of targets are then extracted from the late-time response using the generalized pencil-of-function (GPOF) method. We investigate the pole trajectories as we vary the distance between strips and the depth and orientation of single buried strips. The variation from the pole position of a single strip in a homogeneous dielectric medium was only a few percent for most of these parameter variations.
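The pole-extraction step can be illustrated with a minimal matrix-pencil (GPOF-style) sketch: the sampled late-time response is arranged into shifted Hankel matrices whose generalized eigenvalues encode the complex natural resonances. The signal, sample rate and pole values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def gpof_poles(y, n_poles, dt):
    """Estimate complex poles s of y(t) ~ sum a_k * exp(s_k t) (matrix pencil)."""
    N = len(y)
    L = N // 2                                   # pencil parameter
    # Hankel data matrix and its one-sample shift.
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # Dominant eigenvalues of pinv(Y1) @ Y2 are z_k = exp(s_k * dt).
    z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    z = sorted(z, key=abs, reverse=True)[:n_poles]
    return np.log(np.array(z)) / dt

# Synthetic late-time response: one damped resonance (conjugate pole pair).
dt = 0.01
t = np.arange(200) * dt
s_true = -0.5 + 2j * np.pi * 3.0                 # decay 0.5 s^-1, 3 Hz ringing
y = np.real(np.exp(s_true * t))
s_est = gpof_poles(y, n_poles=2, dt=dt)          # recovers -0.5 +/- j*6*pi
```

For noiseless data the two dominant eigenvalues match the true pole pair closely; with noise, a truncated-SVD variant of the pencil is normally used instead of the bare pseudoinverse.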
Abstract:
A data warehouse is a data repository that collects and maintains a large amount of data from multiple distributed, autonomous and possibly heterogeneous data sources. Often the data is stored in the form of materialized views in order to provide fast access to the integrated data. One of the most important decisions in designing a data warehouse is the selection of views for materialization. The objective is to select an appropriate set of views that minimizes the total query response time, with the constraint that the total maintenance time for these materialized views is within a given bound. This view selection problem is fundamentally different from the view selection problem under a disk-space constraint. In this paper, the view selection problem under the maintenance-time constraint is investigated. Two efficient heuristic algorithms for the problem are proposed. The key to devising the proposed algorithms is to define good heuristic functions and to reduce the problem to well-solved optimization problems. As a result, an approximate solution of the known optimization problem gives a feasible solution of the original problem. © 2001 Elsevier Science B.V. All rights reserved.
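The flavour of such a heuristic can be sketched with a simple benefit-per-cost greedy rule: materialize views in decreasing order of query time saved per unit of maintenance time, while the maintenance budget holds. This is an illustrative sketch only, not the paper's algorithms; the candidate views and their costs are made-up numbers.

```python
def select_views(views, budget):
    """views: list of (name, query_time_saved, maintenance_time).
    Greedily pick views by saved-time per maintenance-time ratio,
    subject to the total maintenance-time budget."""
    chosen, used = [], 0
    for name, saved, maint in sorted(views, key=lambda v: v[1] / v[2],
                                     reverse=True):
        if used + maint <= budget:      # keep total maintenance within bound
            chosen.append(name)
            used += maint
    return chosen, used

candidates = [("v1", 100, 10), ("v2", 80, 5), ("v3", 30, 20), ("v4", 20, 2)]
print(select_views(candidates, budget=15))   # → (['v2', 'v1'], 15)
```

A greedy ratio rule is only an approximation; the paper's reductions to well-solved optimization problems come with better-characterized solution quality.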
Abstract:
This note presents a method of evaluating the distribution of a path integral for Markov chains on a countable state space.
Abstract:
This paper presents a method of evaluating the expected value of a path integral for a general Markov chain on a countable state space. We illustrate the method with reference to several models, including birth-death processes and the birth, death and catastrophe process. © 2002 Elsevier Science Inc. All rights reserved.
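For a finite chain, one concrete way to compute such an expectation is as the solution of a linear system in the generator: under the assumed setup below (which is a standard construction, not the paper's own derivation), phi(i) = E_i[∫ f(X_t) dt until absorption] satisfies Q phi = -f on the transient states, where Q is the generator restricted to those states.

```python
import numpy as np

def expected_path_integral(birth, death, f):
    """Expected path integral of f until absorption at state 0, for a
    birth-death chain on transient states 1..n (rates given per state).
    Solves Q phi = -f with Q the generator restricted to 1..n."""
    n = len(f)
    Q = np.zeros((n, n))
    for k in range(n):
        lam, mu = birth[k], death[k]
        Q[k, k] = -(lam + mu)
        if k + 1 < n:
            Q[k, k + 1] = lam          # birth: state k+1 -> k+2 (0-indexed)
        if k - 1 >= 0:
            Q[k, k - 1] = mu           # death from state 1 hits the absorber
    return np.linalg.solve(Q, -np.asarray(f, float))

# Sanity check: f = 1 gives expected absorption times.  A pure-death chain
# with unit rates takes i unit-mean exponential steps from state i.
phi = expected_path_integral(birth=[0, 0, 0], death=[1, 1, 1], f=[1, 1, 1])
print(np.round(phi, 6))   # [1. 2. 3.]
```

Choosing other reward functions f recovers, e.g., the expected time spent above a threshold or the expected total population-time of the process.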
Abstract:
Image segmentation is a ubiquitous task in medical image analysis, required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains, to date, a supervised process in daily clinical practice. Indeed, challenging data often require user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to interact efficiently with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application in which user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, contributing to higher segmentation accuracy while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, in which the user input is mapped to a non-Cartesian space and used to drive the boundary towards the position indicated by the user. Additionally, we propose a shape regularization term that improves the interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance both in terms of segmentation accuracy and in terms of total analysis time, contributing to a more efficient use of existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire algorithm.
Abstract:
In this work, we present a neural network (NN) based method designed for 3D rigid-body registration of fMRI time series, which relies on a limited number of Fourier coefficients of the images to be aligned. These coefficients, comprised in a small cubic neighborhood located at the first octant of the 3D Fourier space (including the DC component), are fed into six NNs during the learning stage. Each NN yields the estimate of one registration parameter. The proposed method was assessed for 3D rigid-body transformations, using DC neighborhoods of different sizes. The mean absolute registration errors are approximately 0.030 mm in translations and 0.030 deg in rotations, for the typical motion amplitudes encountered in fMRI studies. The construction of the training set and the learning stage are fast, requiring 90 s and 1 to 12 s, respectively, depending on the number of input and hidden units of the NN. We believe that NN-based approaches to the problem of fMRI registration can be of great interest in the future. For instance, NNs relying on limited k-space data (possibly from navigator echoes) can be a valid solution to the problem of prospective (in-frame) fMRI registration.
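The feature-extraction step described above can be sketched as follows: take the 3D FFT of a volume and keep a small cube of low-frequency coefficients from the first octant, including the DC term, as the input vector for the per-parameter networks. The cube size, the real/imaginary flattening and the volume itself are illustrative assumptions; the paper's exact neighborhood sizes and network inputs may differ.

```python
import numpy as np

def fourier_features(volume, k=3):
    """Flatten a k x k x k first-octant neighborhood (with DC) of the
    3D spectrum into one real-valued feature vector."""
    spectrum = np.fft.fftn(volume)
    cube = spectrum[:k, :k, :k]          # low frequencies incl. DC component
    return np.concatenate([cube.real.ravel(), cube.imag.ravel()])

vol = np.random.default_rng(0).standard_normal((16, 16, 16))
feats = fourier_features(vol, k=3)
print(feats.shape)    # 2 * 3**3 = 54 features; feats[0] is the DC term
```

Keeping only these coefficients is what makes both training-set construction and inference cheap, since the full volumes never enter the networks.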
Abstract:
The paper formulates a genetic algorithm that evolves two types of objects in a plane. The fitness function promotes a relationship between the objects that is optimal when some kind of interface between them occurs. Furthermore, the algorithm adopts a hexagonal tessellation of the two-dimensional space, providing an efficient method of modelling neighbours. The genetic algorithm produces special patterns with resemblances to those revealed in percolation phenomena or in the symbiosis found in lichens. Besides the analysis of the spatial layout, the time evolution is modelled by adopting a distance measure and by modelling in the Fourier domain from the perspective of fractional calculus. The results reveal a consistent, and easy to interpret, set of model parameters for distinct operating conditions.
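One reason hexagonal tessellations make neighbour modelling efficient is that every cell has exactly six equidistant neighbours, which can be enumerated with constant offsets. The axial-coordinate encoding below is an assumed representation for illustration; the paper does not specify its grid encoding.

```python
# Axial coordinates (q, r): the six hexagonal neighbour offsets.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbours(q, r):
    """Return the six cells adjacent to (q, r) on a hexagonal grid."""
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

print(hex_neighbours(0, 0))
```

With a square grid one must choose between 4- and 8-neighbourhoods, whose neighbours are not all equidistant; the hexagonal case avoids that asymmetry.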
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another randomly selected busy core as a queue, stealing threads only from the top, whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may be enqueued after lower-priority tasks; this can lead to deadline misses, because in that case the lower-priority tasks are the candidates when a stealing operation occurs. Our proposal is to replace the single priority-unaware deque of work-stealing with per-processor priority-ordered deques of ready threads. The scheduling algorithm starts with a single deque per core, but unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
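The victim-selection change can be sketched in a simplified, single-threaded form: each core owns a priority-ordered deque, and an idle core steals from the top of whichever deque currently exposes the highest-priority task instead of picking a random victim. The data structures below are illustrative assumptions (lower number = higher priority); the actual scheduler manages deques concurrently.

```python
from collections import deque

class Core:
    def __init__(self):
        self.deque = deque()                 # leftmost = highest priority (top)

    def push(self, prio, task):
        self.deque.append((prio, task))
        # Keep the deque priority-ordered so the top is always the best task.
        self.deque = deque(sorted(self.deque))

def steal(cores):
    """Steal from the deque whose top task has the highest priority."""
    victims = [c for c in cores if c.deque]
    if not victims:
        return None
    victim = min(victims, key=lambda c: c.deque[0][0])
    return victim.deque.popleft()            # take from the top, as in stealing

cores = [Core(), Core()]
cores[0].push(5, "low");  cores[0].push(1, "high")
cores[1].push(3, "mid")
stolen = steal(cores)
print(stolen)    # the priority-1 task, not a task from a random victim
```

In classic work-stealing, the thief could have landed on core 1 and taken the mid-priority task while a high-priority task waited, which is exactly the inversion the ordered deques avoid.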
Abstract:
Real-time systems demand guaranteed and predictable run-time behaviour in order to ensure that no task misses its deadline. Over the years, we have witnessed an ever-increasing demand for functionality enhancements in embedded real-time systems. Along with the functionalities, the design itself grows more complex. Imposed constraints, such as energy consumption, time, and space bounds, also require attention and proper handling. Additionally, efficient scheduling algorithms, as proven through analyses and simulations, often impose requirements that have significant run-time cost, especially in the context of multi-core systems. To further investigate the behaviour of such systems and to quantify and compare the overheads involved, we have developed SPARTS, a simulator of a generic embedded real-time device. The tasks in the simulator are described by externally visible parameters (e.g. minimum inter-arrival time, sporadicity, WCET, BCET, etc.), rather than by the code of the tasks. While our current implementation is primarily focused on our immediate needs in the area of power-aware scheduling, it is designed to be extensible to accommodate different task properties, scheduling algorithms and/or hardware models, for application in a wide variety of simulations. The source code of SPARTS is available for download at [1].
Abstract:
Existing work in the context of energy management for real-time systems often ignores the substantial cost, in terms of time and energy, of making DVFS and sleep-state decisions, and/or assumes very simple models. In this paper, we explore the parameter space for such decisions and the possible constraints faced.
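The core of such a decision cost can be illustrated with a break-even check: entering a sleep state only pays off when the idle interval is long enough that the energy saved while asleep outweighs the transition overhead. All power and timing numbers below are hypothetical platform values, not measurements from the paper.

```python
def sleep_pays_off(idle_s, p_idle_w, p_sleep_w, transition_j, transition_s):
    """True if sleeping during an idle interval saves energy overall.
    idle_s: idle interval length (s); p_idle_w / p_sleep_w: idle and
    sleep power (W); transition_j / transition_s: energy (J) and time (s)
    spent entering and leaving the sleep state."""
    if idle_s <= transition_s:
        return False                       # no residency time in the state
    stay_idle = p_idle_w * idle_s          # energy if we simply stay idle
    sleep = transition_j + p_sleep_w * (idle_s - transition_s)
    return sleep < stay_idle

# Hypothetical platform: 2 W idle, 0.1 W asleep, 1.5 J / 5 ms to transition.
print(sleep_pays_off(0.010, 2.0, 0.1, 1.5, 0.005))   # 10 ms idle: not worth it
print(sleep_pays_off(2.0,   2.0, 0.1, 1.5, 0.005))   # 2 s idle: worth it
```

Ignoring the transition terms, as much prior work does, makes every idle interval look like a saving opportunity, which is precisely the modelling gap discussed above.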
Abstract:
Signal Processing, Vol. 83, No. 11
Abstract:
Dynamically reconfigurable systems have benefited from a new class of FPGAs recently introduced into the market, which allow partial and dynamic reconfiguration at run-time, enabling multiple independent functions from different applications to share the same device, swapping resources as needed. When the sequence of tasks to be performed is not predictable, resource allocation decisions have to be made on-line, fragmenting the FPGA logic space. A rearrangement may then be necessary to obtain enough contiguous space to implement incoming functions efficiently, avoiding spreading their components and thereby degrading their performance. This paper presents a novel active replication mechanism for configurable logic blocks (CLBs), able to implement on-line rearrangements, defragmenting the available FPGA resources without disturbing the functions that are currently running.
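The fragmentation problem motivating the rearrangement can be shown with a toy 1-D model (an illustrative sketch only; real placement is two-dimensional): after a series of on-line allocations, the free columns may be scattered, so a function needing several contiguous columns cannot be placed even though enough total space is free.

```python
def largest_free_run(columns):
    """Length of the longest run of contiguous free (0) columns."""
    best = run = 0
    for busy in columns:
        run = 0 if busy else run + 1
        best = max(best, run)
    return best

fpga = [1, 0, 1, 0, 0, 1, 0, 1]    # 1 = occupied column, 0 = free
free_total = sum(c == 0 for c in fpga)
print(free_total, largest_free_run(fpga))   # 4 columns free, max run only 2
```

Here a function needing 3 contiguous columns cannot be placed despite 4 free columns in total; compacting the occupied columns, as the proposed CLB replication mechanism does on-line, would create the required contiguous region.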