995 results for Patron driven acquisition
Abstract:
INTRODUCTION: The characterization of urinary calculi using noninvasive methods has the potential to affect clinical management. CT remains the gold standard for diagnosis of urinary calculi, but has not reliably differentiated varying stone compositions. Dual-energy CT (DECT) has emerged as a technology to improve CT characterization of anatomic structures. This study aims to assess the ability of DECT to accurately discriminate between different types of urinary calculi in an in vitro model using novel postimage acquisition data processing techniques. METHODS: Fifty urinary calculi were assessed, of which 44 had ≥60% composition of one component. DECT was performed utilizing 64-slice multidetector CT. The attenuation profiles of the lower-energy (DECT-Low) and higher-energy (DECT-High) datasets were used to investigate whether differences could be seen between different stone compositions. RESULTS: Postimage acquisition processing allowed for identification of the main different chemical compositions of urinary calculi: brushite, calcium oxalate-calcium phosphate, struvite, cystine, and uric acid. Statistical analysis demonstrated that this processing identified all stone compositions without obvious graphical overlap. CONCLUSION: Dual-energy multidetector CT with postprocessing techniques allows for accurate discrimination among the main different subtypes of urinary calculi in an in vitro model. The ability to better detect stone composition may have implications in determining the optimum clinical treatment modality for urinary calculi from noninvasive, preprocedure radiological assessment.
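The discrimination step described above can be pictured as nearest-centroid classification in the two-dimensional (DECT-Low, DECT-High) attenuation space. A minimal sketch follows; the centroid values are invented placeholders for illustration, not the study's measurements:

```python
# Hypothetical sketch: classifying stone types from dual-energy attenuation.
# The HU centroids below are illustrative placeholders, not measured values.
STONE_CENTROIDS = {
    "uric acid":       (410.0, 395.0),   # (DECT-Low HU, DECT-High HU)
    "cystine":         (700.0, 620.0),
    "struvite":        (900.0, 760.0),
    "calcium oxalate": (1400.0, 1100.0),
    "brushite":        (1700.0, 1300.0),
}

def classify_stone(low_hu: float, high_hu: float) -> str:
    """Assign the stone type whose centroid is nearest in (low, high) space."""
    def dist2(centroid):
        cl, ch = centroid
        return (low_hu - cl) ** 2 + (high_hu - ch) ** 2
    return min(STONE_CENTROIDS, key=lambda name: dist2(STONE_CENTROIDS[name]))

print(classify_stone(420.0, 400.0))   # nearest to the uric-acid placeholder centroid
```

In practice the study's postprocessing is statistical rather than a fixed lookup, but the geometric idea, separating compositions by their dual-energy attenuation signatures, is the same.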
Abstract:
We present theoretical, numerical, and experimental analyses on the non-linear dynamic behavior of superparamagnetic beads exposed to a periodic array of micro-magnets and an external rotating field. The agreement between theoretical and experimental results revealed that non-linear magnetic forcing dynamics are responsible for transitions between phase-locked orbits, sub-harmonic orbits, and closed orbits, representing different mobility regimes of colloidal beads. These results suggest that the non-linear behavior can be exploited to construct a novel colloidal separation device that can achieve effectively infinite separation resolution for different types of beads, by exploiting minor differences in their properties. We also identify a unique set of initial conditions, which we denote the "devil's gate," which can be used to expeditiously identify the full range of mobility for a given bead type.
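The transition between phase-locked and drifting (sub-harmonic-like) orbits mentioned above is a generic feature of periodically forced overdamped systems. A minimal sketch using the Adler equation, dphi/dt = delta_omega - eps*sin(phi), a standard stand-in and not the paper's actual bead model, exhibits both regimes:

```python
import math

def integrate_adler(delta_omega, eps, t_end=200.0, dt=0.001):
    """Euler integration of dphi/dt = delta_omega - eps*sin(phi)."""
    phi, t = 0.0, 0.0
    while t < t_end:
        phi += (delta_omega - eps * math.sin(phi)) * dt
        t += dt
    return phi

# Phase-locked regime: detuning below forcing strength, phi settles to a fixed point.
locked = integrate_adler(0.5, 1.0)
# Drifting regime: detuning exceeds forcing strength, phi grows without bound.
drifting = integrate_adler(2.0, 1.0)
print(round(locked, 3), drifting > 100.0)
```

In the locked case the phase converges to asin(delta_omega/eps); in the drifting case it slips continuously, which is the qualitative distinction the paper exploits to separate beads by mobility.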
Abstract:
In this paper, we propose generalized sampling approaches for measuring a multi-dimensional object using a compact compound-eye imaging system called thin observation module by bound optics (TOMBO). This paper shows the proposed system model, physical examples, and simulations to verify TOMBO imaging using generalized sampling. In the system, an object is modulated and multiplied by a weight distribution with physical coding, and the coded optical signal is integrated onto a detector array. A numerical estimation algorithm employing a sparsity constraint is used for object reconstruction.
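The sparsity-constrained estimation step can be illustrated with a generic compressed-sensing toy problem. Here a random matrix stands in for the physical coding, iterative soft-thresholding (ISTA) serves as a representative sparse solver (the paper's actual algorithm may differ), and all sizes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 64, 24, 3       # object size, coded measurements, nonzeros (illustrative)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the physical coding
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = np.array([3.0, -2.0, 4.0])
y = A @ x                  # coded signal integrated on the detector (noise-free here)

def ista(A, y, lam=0.02, iters=3000):
    """Iterative soft-thresholding for min_z 0.5*||A z - y||^2 + lam*||z||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        z = z - step * A.T @ (A @ z - y)                      # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return z

z = ista(A, y)
support = np.abs(z) > 0.1             # detected nonzero locations
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]  # debias
print(int(support.sum()))
```

The final least-squares refit on the detected support removes the shrinkage bias; in the noise-free case this recovers the sparse object exactly once the support is correct.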
Abstract:
A framework for adaptive and non-adaptive statistical compressive sensing is developed, where a statistical model replaces the standard sparsity model of classical compressive sensing. We propose within this framework optimal task-specific sensing protocols specifically and jointly designed for classification and reconstruction. A two-step adaptive sensing paradigm is developed, where online sensing is applied to detect the signal class in the first step, followed by a reconstruction step adapted to the detected class and the observed samples. The approach is based on information theory, here tailored for Gaussian mixture models (GMMs), where an information-theoretic objective relationship between the sensed signals and a representation of the specific task of interest is maximized. Experimental results using synthetic signals, Landsat satellite attributes, and natural images of different sizes and with different noise levels show the improvements achieved using the proposed framework when compared to more standard sensing protocols. The underlying formulation can be applied beyond GMMs, at the price of higher mathematical and computational complexity. © 1991-2012 IEEE.
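The two-step paradigm described above, detect the class first, then reconstruct with a class-adapted estimator, can be sketched for a GMM prior with linear Gaussian measurements. Everything below (dimensions, class means, covariances, noise level) is an invented toy configuration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, sigma = 8, 4, 0.05   # signal dim, measurement dim, noise std (illustrative)

# Two hypothetical signal classes, each a Gaussian component.
mus = [np.zeros(d), np.full(d, 2.0)]
Sigmas = [0.3 * np.eye(d), 0.3 * np.eye(d)]
A = rng.standard_normal((m, d)) / np.sqrt(m)   # random sensing matrix

def detect_and_reconstruct(y):
    """Step 1: pick the class with highest measurement likelihood.
       Step 2: class-conditional MMSE (posterior-mean) reconstruction."""
    best, best_ll = None, -np.inf
    for idx, (mu, Sig) in enumerate(zip(mus, Sigmas)):
        C = A @ Sig @ A.T + sigma ** 2 * np.eye(m)   # measurement covariance
        r = y - A @ mu
        ll = -0.5 * (r @ np.linalg.solve(C, r) + np.log(np.linalg.det(C)))
        if ll > best_ll:
            best, best_ll = idx, ll
    mu, Sig = mus[best], Sigmas[best]
    C = A @ Sig @ A.T + sigma ** 2 * np.eye(m)
    x_hat = mu + Sig @ A.T @ np.linalg.solve(C, y - A @ mu)
    return best, x_hat

x_true = mus[1] + np.sqrt(0.3) * rng.standard_normal(d)   # draw from class 1
y = A @ x_true + sigma * rng.standard_normal(m)
k_hat, x_hat = detect_and_reconstruct(y)
print(k_hat)
```

The adaptive element in the paper goes further, choosing the next measurements to maximize task-specific information, but the detect-then-reconstruct skeleton is the one shown here.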
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
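The core of a genetic-algorithm scheduler like the one described, evolving an order-dispatch sequence against a production objective, can be sketched in a few lines. This toy uses total tardiness as the objective and invented order data; the thesis's incremental variant additionally seeds the population from the previous schedule when new orders arrive, which is omitted here:

```python
import random

random.seed(1)

# Hypothetical orders: (processing_time, due_date); values are illustrative.
orders = [(4, 10), (2, 6), (7, 25), (3, 9), (5, 18), (6, 12)]

def total_tardiness(seq):
    """Sum of lateness over a dispatch sequence on a single resource."""
    t, tard = 0, 0
    for i in seq:
        p, due = orders[i]
        t += p
        tard += max(0, t - due)
    return tard

def evolve(pop_size=30, generations=120):
    """Elitist GA over permutations with swap mutation."""
    n = len(orders)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_tardiness)
        survivors = pop[: pop_size // 2]        # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            child = random.choice(survivors)[:]
            i, j = random.sample(range(n), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_tardiness)

best = evolve()
print(best, total_tardiness(best))
```

On this instance the GA should match or beat the earliest-due-date heuristic (tardiness 7); real production scheduling adds multi-resource routing and the incremental warm start that gives the IGA its scalability.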
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also perform a probabilistic estimation of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
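The decompose-predict-aggregate strategy for the service-level time series can be sketched on synthetic data. This toy uses a single weekly period, a per-phase seasonal estimate, and a linear trend extrapolation; the thesis's models are richer (multiple hierarchical periods, multivariate components), and all data here are simulated:

```python
import numpy as np

# Illustrative series: weekly periodicity plus a slow trend plus noise.
rng = np.random.default_rng(3)
period = 7
t = np.arange(140)
series = 0.1 * t + 5 * np.sin(2 * np.pi * t / period) \
         + 0.2 * rng.standard_normal(len(t))

# Decompose: per-phase seasonal means (centered), residual treated as trend.
seasonal = np.array([series[p::period].mean() for p in range(period)])
seasonal -= seasonal.mean()
deseasoned = series - seasonal[t % period]

# Predict each component separately, then aggregate the forecasts.
slope, intercept = np.polyfit(t, deseasoned, 1)   # trend: linear extrapolation
horizon = np.arange(len(t), len(t) + 14)
forecast = slope * horizon + intercept + seasonal[horizon % period]
print(forecast.round(2))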
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use them as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations or effective decisions.
Resumo:
Research on future episodic thought has produced compelling theories and results in cognitive psychology, cognitive neuroscience, and clinical psychology. In experiments aimed to integrate these with basic concepts and methods from autobiographical memory research, 76 undergraduates remembered past and imagined future positive and negative events that had or would have a major impact on them. Correlations of the online ratings of visual and auditory imagery, emotion, and other measures demonstrated that individuals used the same processes to the same extent to remember past and construct future events. These measures predicted the theoretically important metacognitive judgment of past reliving and future "preliving" in similar ways. On standardized tests of reactions to traumatic events, scores for future negative events were much higher than scores for past negative events. The scores for future negative events were in the range that would qualify for a diagnosis of posttraumatic stress disorder (PTSD); the test was replicated (n = 52) to check for order effects. Consistent with earlier work, future events had less sensory vividness. Thus, the imagined symptoms of future events were unlikely to be caused by sensory vividness. In a second experiment, to confirm this, 63 undergraduates produced numerous added details between 2 constructions of the same negative future events; deficits in rated vividness were removed with no increase in the standardized tests of reactions to traumatic events. Neuroticism predicted individuals' reactions to negative past events but did not predict imagined reactions to future events. This set of novel methods and findings is interpreted in the contexts of the literatures of episodic future thought, autobiographical memory, PTSD, and classic schema theory.
Resumo:
© 2015. American Geophysical Union. All Rights Reserved.The role of surface and advective heat fluxes on buoyancy-driven circulation was examined within a tropical coral reef system. Measurements of local meteorological conditions as well as water temperature and velocity were made at six lagoon locations for 2 months during the austral summer. We found that temperature rather than salinity dominated buoyancy in this system. The data were used to calculate diurnally phase-averaged thermal balances. A one-dimensional momentum balance developed for a portion of the lagoon indicates that the diurnal heating pattern and consistent spatial gradients in surface heat fluxes create a baroclinic pressure gradient that is dynamically important in driving the observed circulation. The baroclinic and barotropic pressure gradients make up 90% of the momentum budget in part of the system; thus, when the baroclinic pressure gradient decreases 20% during the day due to changes in temperature gradient, this substantially changes the circulation, with different flow patterns occurring during night and day. Thermal balances computed across the entire lagoon show that the spatial heating patterns and resulting buoyancy-driven circulation are important in maintaining a persistent advective export of heat from the lagoon and for enhancing ocean-lagoon exchange.
Resumo:
Observations of waves, setup, and wave-driven mean flows were made on a steep coral forereef and its associated lagoonal system on the north shore of Moorea, French Polynesia. Despite the steep and complex geometry of the forereef, and wave amplitudes that are nearly equal to the mean water depth, linear wave theory showed very good agreement with data. Measurements across the reef illustrate the importance of including both wave transport (owing to Stokes drift), as well as the Eulerian mean transport when computing the fluxes over the reef. Finally, the observed setup closely follows the theoretical relationship derived from classic radiation stress theory, although the two parameters that appear in the model-one reflecting wave breaking, the other the effective depth over the reef crest-must be chosen to match theory to data. © 2013 American Meteorological Society.
Resumo:
PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D+dual energy+time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image domain filtration approach, the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. 
The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. CONCLUSIONS: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
Resumo:
An MHD flow is considered which is relevant to horizontal Bridgman technique for crystal growth from a melt. In the unidirectional parallel flow approximation an analytical solution is found accounting for the finite rectangular cross section of the channel in the case of a vertical magnetic field. Numerical pseudo-spectral solutions are used in the cases of arbitrary magnetic field and gravity vector orientations. The vertical magnetic field (parallel to the gravity) is found to be he most effective to damp the flow, however, complicated flow profiles with "overvelocities" in the comers are typical in the case of a finite cross-section channel. The temperature distribution is shown to be dependent on the flow profile. The linear stability of the flow is investigated by use of the Chebyshev pseudospectral method. For the case of an infinite width channel the transversal rolls instability is investigated, and for the finite cross-section channel the longitudinal rolls instability is considered. The critical Gr number values are computed in the dependence of the Ha number and the wave number or the aspect ratio in the case of finite section.
Resumo:
This paper describes work towards the deployment of self-managing capabilities into an advanced middleware for automotive systems. The middleware will support a range of futuristic use-cases requiring context-awareness and dynamic system configuration. Several use-cases are described and their specific context-awareness requirements identified. The discussion is accompanied by a justification for the selection of policy-based computing as the autonomics technique to drive the self-management. The specific policy technology to be deployed is described briefly, with a focus on its specific features that are of direct relevance to the middleware project. A selected use-case is explored in depth to illustrate the extent of dynamic behaviour achievable in the proposed middleware architecture, which is composed of several policy-configured services. An early demonstration application which facilitates concept evaluation is presented and a sequence of typical device-discovery events is worked through
Resumo:
Optimisation in wireless sensor networks is necessary due to the resource constraints of individual devices, bandwidth limits of the communication channel, relatively high probably of sensor failure, and the requirement constraints of the deployed applications in potently highly volatile environments. This paper presents BioANS, a protocol designed to optimise a wireless sensor network for resource efficiency as well as to meet a requirement common to a whole class of WSN applications - namely that the sensor nodes are dynamically selected on some qualitative basis, for example the quality by which they can provide the required context information. The design of BioANS has been inspired by the communication mechanisms that have evolved in natural systems. The protocol tolerates randomness in its environment, including random message loss, and incorporates a non-deterministic ’delayed-bids’ mechanism. A simulation model is used to explore the protocol’s performance in a wide range of WSN configurations. Characteristics evaluated include tolerance to sensor node density and message loss, communication efficiency, and negotiation latency .
Resumo:
Key Terms in Second Language Acquisition includes definitions of key terms within second language acquisition, and also provides accessible summaries of the key issues within this complex area of study. The final section presents a list of key readings in second language acquisition that signposts the reader towards classic articles and also provides a springboard to further study.
Resumo:
Book review of: J. Liceras, H. Zobl, and H. Goodluck (eds.), 2008, The Role of Formal Features in Second Language Acquisition. London/New York: Lawrence Erlbaum Associates, 577 pages, ISBN: 0-8058-5354-5.
Resumo:
Research on Processing Instruction has so far investigated the primary effects of Processing Instruction. In this book, the results of a series of experimental studies investigating possible secondary and cumulative effects of Processing Instruction on the acquisition of French, Italian and English as a second language will be presented. The results of the three experiments have demonstrated that Processing Instruction not only provides learners the direct or primary benefit of learning to process and produce the morphological form on which they received instruction, but also a secondary benefit in that they transferred that training to processing and producing another morphological form on which they had received no instruction.