917 results for Experimental evaluation
Abstract:
The present work investigates the phase transition, dispersion and diffusion behavior of nanocomposites of carbon nanotubes (CNTs) and straight-chain alkanes. These materials are potential candidates for organic phase change materials (PCMs) and have attracted a flurry of research recently. Accurate experimental evaluation of the mass, thermal and transport properties of such composites is both difficult and economically taxing. Additionally, it is crucial to understand the factors that result in modification or enhancement of their characteristics at the atomic or molecular level. A classical molecular dynamics approach has been employed to elucidate these. Bulk atomistic models were generated and subjected to rigorous multistage equilibration. To reaffirm the approach, both canonical and constant-temperature, constant-pressure ensembles were employed to simulate the models under consideration. Explicit determination of kinetic, potential, non-bond and total energy assisted in understanding the enhanced thermal and transport properties of the nanocomposites from a molecular point of view. Crucial parameters including the mean square displacement and the simulated self-diffusion coefficient precisely define the balance of thermodynamic and hydrodynamic interactions. The radial distribution function also reflected the density variation, strength and mobility of the nanocomposites. It is expected that CNT functionalization could improve the dispersion within the n-alkane matrix and further ameliorate the mass and thermal properties of the composite. Additionally, the determined density was in good agreement with experimental data. Thus, molecular dynamics can be utilized as a high-throughput technique for theoretical investigation of nanocomposite PCMs. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
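The mean-square-displacement route to the self-diffusion coefficient mentioned in the abstract can be sketched as follows. This is a generic Einstein-relation estimator (in 3D, MSD(t) ~ 6 D t in the linear regime), not the specific analysis code used in the work; the synthetic random-walk trajectory is only for illustration.

```python
import numpy as np

def mean_square_displacement(traj):
    """MSD(lag) averaged over particles and all time origins.

    traj: unwrapped positions, shape (n_frames, n_particles, 3).
    """
    n_frames = traj.shape[0]
    msd = np.zeros(n_frames)
    for lag in range(1, n_frames):
        disp = traj[lag:] - traj[:-lag]          # displacements at this lag
        msd[lag] = np.mean(np.sum(disp ** 2, axis=-1))
    return msd

def self_diffusion_coefficient(msd, dt):
    """Einstein relation in 3D: D = slope of MSD(t) / 6."""
    t = np.arange(len(msd)) * dt
    lo, hi = len(msd) // 10, len(msd) // 2       # fit a mid-range window
    slope = np.polyfit(t[lo:hi], msd[lo:hi], 1)[0]
    return slope / 6.0

# Synthetic check: a 3D random walk with per-axis step std s has
# MSD(n) = 3 * s**2 * n, hence D = s**2 / (2 * dt) = 0.005 here.
rng = np.random.default_rng(0)
dt, s = 1.0, 0.1
traj = np.cumsum(rng.normal(0.0, s, size=(1000, 40, 3)), axis=0)
D = self_diffusion_coefficient(mean_square_displacement(traj), dt)
```

In a real MD analysis the fit window would be chosen to skip the short-time ballistic regime, and positions must be unwrapped across periodic boundaries before computing displacements.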
Abstract:
Breast cancer is one of the leading causes of cancer-related deaths in women, and early detection is crucial for reducing mortality rates. In this paper, we present a novel and fully automated approach based on tissue transition analysis for lesion detection in breast ultrasound images. Every candidate pixel is classified as belonging to the lesion boundary, the lesion interior or normal tissue based on its descriptor value. The tissue transitions are modeled using a Markov chain to estimate the likelihood of a candidate lesion region. Experimental evaluation on a clinical dataset of 135 images shows that the proposed approach achieves high sensitivity (95 %) with modest (3) false positives per image. The approach achieves very similar results (94 % at 3 false positives) on a completely different clinical dataset of 159 images without retraining, highlighting its robustness.
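The tissue-transition idea can be illustrated with a toy Markov chain. The states follow the abstract (normal tissue, lesion boundary, lesion interior), but the start and transition probabilities below are invented for illustration, not the values learned from the clinical data:

```python
import numpy as np

# States along a scan through a candidate region.
STATES = ["normal", "boundary", "interior"]
IDX = {s: i for i, s in enumerate(STATES)}

# Illustrative parameters: a true lesion scan tends to go
# normal -> boundary -> interior, then stay in the interior.
start = np.array([0.8, 0.15, 0.05])
trans = np.array([
    [0.85, 0.13, 0.02],   # from normal
    [0.10, 0.30, 0.60],   # from boundary
    [0.05, 0.15, 0.80],   # from interior
])

def log_likelihood(labels):
    """Log-probability of a pixel-label sequence under the chain."""
    ll = np.log(start[IDX[labels[0]]])
    for a, b in zip(labels, labels[1:]):
        ll += np.log(trans[IDX[a], IDX[b]])
    return ll

# A lesion-like transition pattern scores higher than an erratic one.
lesion_like = ["normal", "boundary", "interior", "interior", "interior"]
noise_like = ["normal", "interior", "normal", "interior", "normal"]
```

Ranking candidate regions by this likelihood is the essence of using the chain to separate plausible lesions from spurious detections.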
Abstract:
Clustering techniques which can handle incomplete data have become increasingly important due to varied applications in marketing research, medical diagnosis and survey data analysis. Existing techniques cope with missing values either by data modification/imputation or by partial distance computation, both of which can be unreliable depending on the number of features available. In this paper, we propose a novel approach for clustering data with missing values, which performs the task by Symmetric Non-Negative Matrix Factorization (SNMF) of a complete pairwise similarity matrix computed from the given incomplete data. To accomplish this, we define a novel similarity measure based on the Average Overlap similarity metric which can effectively handle missing values without modifying the data. Further, the similarity measure is more reliable than partial distances and inherently possesses the properties required to perform SNMF. Experimental evaluation on real-world datasets demonstrates that the proposed approach is efficient, scalable and performs significantly better than existing techniques.
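A minimal sketch of the pipeline: build a complete pairwise similarity matrix from incomplete data, then cluster it by symmetric NMF with multiplicative updates. The Gaussian kernel over commonly observed features below is a simple stand-in, not the paper's Average Overlap based measure:

```python
import numpy as np

def pairwise_similarity(X):
    """Complete similarity matrix from incomplete data (NaN = missing).

    Stand-in similarity: a Gaussian kernel over the features that both
    points have observed.
    """
    n = X.shape[0]
    S = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            both = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if both.any():
                d2 = np.mean((X[i, both] - X[j, both]) ** 2)
                S[i, j] = S[j, i] = np.exp(-d2)
    return S

def snmf(S, k, iters=500, seed=0):
    """Symmetric NMF, S ~ H @ H.T with H >= 0, via damped
    multiplicative updates; cluster of row i is argmax of H[i]."""
    H = np.random.default_rng(seed).random((S.shape[0], k))
    for _ in range(iters):
        H = H * (0.5 + 0.5 * (S @ H) / np.maximum(H @ (H.T @ H), 1e-12))
    return H

# Two obvious groups, with some entries missing.
X = np.array([
    [0.0, 0.1, np.nan],
    [0.1, 0.0, 0.1],
    [np.nan, 0.1, 0.0],
    [5.0, 5.1, np.nan],
    [5.1, np.nan, 5.0],
    [5.0, 5.0, 5.1],
])
labels = snmf(pairwise_similarity(X), k=2).argmax(axis=1)
```

Note that no value is imputed anywhere: missing entries only shrink the set of features a pair is compared on, which is the spirit of the paper's approach.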
Abstract:
Affine transformations have proven to be very powerful for loop restructuring due to their ability to model a very wide range of transformations. A single multi-dimensional affine function can represent a long and complex sequence of simpler transformations. Existing affine transformation frameworks like the Pluto algorithm, which include a cost function for modern multicore architectures where coarse-grained parallelism and locality are crucial, consider only a sub-space of transformations to avoid a combinatorial explosion in finding the transformations. The ensuing practical tradeoffs lead to the exclusion of certain useful transformations, in particular, transformation compositions involving loop reversals and loop skewing by negative factors. In this paper, we propose an approach to address this limitation by modeling a much larger space of affine transformations in conjunction with the Pluto algorithm's cost function. We perform an experimental evaluation of both the effect on compilation time and the performance of the generated code. The evaluation shows that our new framework, Pluto+, causes no performance degradation on any of the Polybench benchmarks. For Lattice Boltzmann Method (LBM) codes with periodic boundary conditions, it provides a mean speedup of 1.33x over Pluto. We also show that Pluto+ does not increase compile times significantly. Experimental results on Polybench show that Pluto+ increases overall polyhedral source-to-source optimization time by only 15%. In cases where it improves execution time significantly, it increases polyhedral optimization time by only 2.04x.
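The kind of affine composition at stake can be illustrated with loop skewing. The sketch below (in Python for illustration; Pluto itself transforms C loop nests) shows that the skewed schedule (i, j) -> (i + j, i) for a 2D stencil executes the same dependences as the original nest, while turning the inner loop into a parallel wavefront:

```python
import numpy as np

N = 6

def stencil_original():
    A = np.arange(N * N, dtype=float).reshape(N, N)
    for i in range(1, N):
        for j in range(1, N):
            A[i, j] = A[i - 1, j] + A[i, j - 1]
    return A

def stencil_skewed():
    # Affine schedule (i, j) -> (i + j, i): every point on wavefront
    # t = i + j depends only on points of wavefront t - 1, so the inner
    # i-loop iterations are mutually independent (parallelizable).
    A = np.arange(N * N, dtype=float).reshape(N, N)
    for t in range(2, 2 * N - 1):
        for i in range(max(1, t - N + 1), min(N, t)):
            j = t - i
            A[i, j] = A[i - 1, j] + A[i, j - 1]
    return A
```

Both orders produce identical results because the skewed schedule preserves the flow dependences (1, 0) and (0, 1); it is exactly this kind of legal-but-nontrivial composition that a restricted transformation sub-space can miss.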
Abstract:
We propose a completely automatic approach for recognizing low-resolution face images captured in uncontrolled environments. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low-resolution and high-resolution training images such that the distance between them approximates the distance had both images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken to compute the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost with respect to a few reference images. Experimental evaluation on challenging real-world databases and comparison with state-of-the-art super-resolution, classifier-based and cross-modal synthesis techniques show the effectiveness of the proposed algorithm.
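The reference-based idea can be sketched as follows: represent every face by a vector of matching costs to a small reference set, so the expensive cost is computed only against references rather than the whole gallery. For illustration the stereo matching cost is replaced with a plain Euclidean distance, and the feature vectors and names are synthetic:

```python
import numpy as np

def reference_descriptor(feat, refs, match_cost):
    """Represent an image by its matching cost to each reference image."""
    return np.array([match_cost(feat, r) for r in refs])

def l2(a, b):
    # Stand-in for the stereo matching cost used in the paper.
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(1)
refs = [rng.normal(size=16) for _ in range(5)]          # reference set

gallery = {"alice": rng.normal(size=16), "bob": rng.normal(size=16)}
probe = gallery["alice"] + rng.normal(scale=0.05, size=16)  # noisy re-capture

# Match in descriptor space: only len(refs) expensive costs per image.
g_desc = {n: reference_descriptor(f, refs, l2) for n, f in gallery.items()}
p_desc = reference_descriptor(probe, refs, l2)
best = min(g_desc, key=lambda n: l2(g_desc[n], p_desc))
```

The saving is that descriptor-space comparisons are cheap vector distances, while the costly matching is amortized over a fixed, small reference set.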
Abstract:
The central theme of this thesis is the use of imidazolium-based organic structure directing agents (OSDAs) in microporous materials synthesis. Imidazoliums are advantageous OSDAs as they are relatively inexpensive and simple to prepare, show robust stability under microporous material synthesis conditions, have led to a wide range of products, and have many permutations in structure that can be explored. The work I present involves the use of mono-, di-, and triquaternary imidazolium-based OSDAs in a wide variety of microporous material syntheses. Much of this work was motivated by successful computational predictions (Chapter 2) that led me to continue to explore these types of OSDAs. Some of the important discoveries with these OSDAs include the following: 1) Experimental evaluation and confirmation of a computational method that predicted a new OSDA for pure-silica STW, a desired framework containing helical pores that was previously very difficult to synthesize. 2) Discovery of a number of new imidazolium OSDAs to synthesize zeolite RTH, a zeolite desired for both the methanol-to-olefins reaction as well as NOx reduction in exhaust gases. This discovery enables the use of RTH for many additional investigations, as the previous OSDA used to make this framework was difficult to synthesize, such that no large-scale preparations would be practical. 3) The synthesis of pure-silica RTH by topotactic condensation from a layered precursor (denoted CIT-10), which can also be pillared to make a new framework material with an expanded pore system, denoted CIT-11, which can in turn be calcined to form a new microporous material, denoted CIT-12. CIT-10 is also interesting since it is the first layered material to contain 8-membered rings through the layers, making it potentially useful in separations if delamination methods can be developed.
4) The synthesis of a new microporous material, denoted CIT-7 (framework code CSV), that contains a 2-dimensional system of 8- and 10-membered rings with a large cage at channel intersections. This material is especially important since it can be synthesized as a pure-silica framework under low-water, fluoride-mediated synthesis conditions, and as an aluminosilicate material under hydroxide-mediated conditions. 5) The synthesis of high-silica heulandite (HEU) by topotactic condensation as well as direct synthesis, demonstrating new, more hydrothermally stable compositions of a previously known framework. 6) The synthesis of germanosilicate and aluminophosphate LTA using a triquaternary OSDA. All of these materials show the diverse range of products that can be formed from OSDAs that can be prepared by straightforward syntheses, and many of these materials have been made accessible for the first time under facile zeolite synthesis conditions.
Abstract:
This paper deals with the experimental evaluation of a flow analysis system based on the integration of an under-resolved Navier-Stokes simulation with experimental measurements through a feedback mechanism (referred to as Measurement-Integrated, or MI, simulation), applied to the case of a planar turbulent co-flowing jet. The experiments are performed with an inner-to-outer-jet velocity ratio of around 2 and a Reynolds number, based on the inner-jet height, of about 10000. The measurement system is a high-speed PIV, which provides time-resolved data of the flow field on a field of view extending to 20 jet heights downstream of the jet outlet. The experimental data can thus be used both to provide the feedback data for the simulations and to validate the MI-simulations over a wide region. The effect of reduced data-rate and spatial extent of the feedback (i.e. measurements are not available at each simulation time-step or discretization point) was investigated. At first, simulations were run with full information in order to obtain an upper limit on MI-simulation performance. The results show the potential of this methodology for reproducing first- and second-order statistics of the turbulent flow with good accuracy. Then, to deal with the reduced data, different feedback strategies were tested. It was found that for small data-rate reductions the results are essentially equivalent to the case of full-information feedback, but as the feedback data-rate is reduced further the error increases and tends to be localized in regions of high turbulent activity. Moreover, the spatial distribution of the error looks qualitatively different for different feedback strategies. Feedback gain distributions calculated by optimal control theory are presented and proposed as a means of making it possible to perform MI-simulations based on localized measurements only. So far, however, we have not been able to achieve low error between measurements and simulations by using these gain distributions.
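The feedback mechanism can be mimicked on a toy system: a "truth" run stands in for the PIV measurements, and an imperfect simulation is nudged toward it with gain K whenever a measurement is available. Reducing the feedback data-rate (the `every` parameter) degrades the tracking, as observed in the paper. The dynamics, gains and rates here are all illustrative:

```python
import numpy as np

def run(K=0.0, every=1, steps=2000, dt=0.01):
    """A 'truth' damped oscillator supplies measurements; an imperfect
    model (wrong frequency) is nudged toward them with gain K every
    `every` steps. Returns the RMS simulation error over the run."""
    def f(x, omega):
        return np.array([x[1], -omega ** 2 * x[0] - 0.1 * x[1]])
    truth = np.array([1.0, 0.0])
    sim = np.array([1.0, 0.0])
    sq_err = 0.0
    for n in range(steps):
        truth = truth + dt * f(truth, 2.0)    # stands in for measurements
        sim = sim + dt * f(sim, 2.2)          # model error: wrong omega
        if K and n % every == 0:
            sim = sim + K * (truth - sim)     # measurement feedback
        sq_err += np.sum((truth - sim) ** 2)
    return float(np.sqrt(sq_err / steps))

e_free = run(K=0.0)              # no feedback: phase drift accumulates
e_full = run(K=0.5, every=1)     # full-information feedback
e_sparse = run(K=0.5, every=50)  # reduced feedback data-rate
```

The full-information run bounds the achievable performance, and the sparse-feedback run sits between it and the free simulation, mirroring the data-rate study described above.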
Abstract:
This work uses a steady-addition reaction system to analyze the coprecipitation of rare earth elements (REEs) during the formation of calcite and aragonite in seawater, studying the migration, transformation and partitioning of REEs in the solid-liquid system. On the basis of a quantitative description, the response of REE coprecipitation to various reaction conditions was then investigated, and the mechanism of the coprecipitation behavior was explored.

The experiments first determined basic parameters such as [H+], alkalinity and [Ca2+] by pH measurement and high-precision titration; the carbonate-system parameters were obtained from these results, and on this basis precipitation kinetics equations were established for calcite and aragonite in seawater at 5 °C, 15 °C and 25 °C under pCO2 = 0.003 atm. The experimental results show that: 1) under all conditions, the precipitation rate (R) of calcite or aragonite correlates linearly with its supersaturation (Ω) in seawater, i.e., the precipitation kinetics of marine carbonates can be expressed as LogR = k*Log(Ω-1) + b; 2) excessively high REE concentrations inhibit the precipitation of aragonite or calcite and thereby influence the differentiation and fractionation of YREEs during coprecipitation; compared with calcite, the precipitation kinetics of aragonite tolerate REE interference better; 3) the kinetics equations obtained at different temperatures differ markedly, indicating that the process is controlled by thermodynamic factors; temperature affects the precipitation kinetics of aragonite more strongly than those of calcite.

Unlike previous studies, the YREE concentrations in these experiments were set within a very low range, avoiding interference of excessive YREE concentrations with the precipitation kinetics of calcite or aragonite; the conditions in the final reaction solutions were very close to those of natural environments. The main qualitative and quantitative conclusions on REE coprecipitation behavior are: 1) YREEs undergo strong differentiation during coprecipitation with calcite or aragonite. In the calcite experiments, the distribution of the YREE partition coefficients is convex, whereas in the aragonite experiments the partition coefficients decrease gradually with increasing atomic number, following the lanthanide contraction. Overall, the differentiation of REEs, especially the light REEs, is stronger in aragonite than in calcite. 2) For both calcite and aragonite, the precipitation rate clearly affects YREE differentiation. In calcite, the YREE partition coefficients decrease consistently with increasing precipitation rate, whereas in aragonite they respond quite differently: the partition coefficients of the light REEs (La, Ce, Nd, Sm, Eu, Gd) decrease with increasing aragonite precipitation rate, while those of the heavy REEs (Ho, Y, Tm, Yb, Lu) increase. 3) In calcite, the YREE partition coefficients correlate very well with one another, indicating that these elements coprecipitate in a proportional manner; the series shows moderate fractionation, with the MREEs enriched relative to the LREEs and HREEs. In aragonite, because of the differing effects of precipitation rate, only the partition coefficients of Y, Ho, Yb and Lu correlate well with one another; the YREEs show strong, differential fractionation, with the light REEs strongly enriched relative to the heavy REEs in the newly formed precipitate. 4) The carbonate coordination of YREEs in solution and on carbonate crystal surfaces is crucial to their differentiation during coprecipitation, and the adsorption of YREEs onto carbonate crystal surfaces is the key step in the fractionation of the series. For aragonite, the key factor in fractionation is the closeness between the effective radii of the YREE ions and of the Ca ion in the crystal; for calcite, the key control is the accommodation of YREEs in the calcite lattice, in which the decisive factor is the length of the M-O ionic bond between the YREEs and the O atoms of calcite, rather than the ionic radius. 5) Combining the differentiation and fractionation of YREEs in calcite, we consider M2(CO3)3-CaCO3 and MNa(CO3)2-CaCO3 to be the two most likely solid-solution formation modes.

Most importantly, our experimental results agree very well with previous studies of calcitic carriers such as limestone, stromatolites and microbially derived carbonates. We therefore consider calcitic carriers to be an important tool for reconstructing REE information in ancient seawater; by contrast, aragonitic carriers are not suitable for this purpose.
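The fitted rate law LogR = k*Log(Ω-1) + b can be evaluated directly; the constants below are illustrative placeholders, not the thesis's fitted values for any temperature or mineral phase:

```python
import math

def precipitation_rate(omega, k, b):
    """Rate law from the kinetics fit: log10 R = k * log10(omega - 1) + b.

    omega: supersaturation of seawater with respect to the mineral;
    k, b: fitted constants (temperature- and phase-specific).
    """
    if omega <= 1.0:
        raise ValueError("no net precipitation at or below saturation")
    return 10 ** (k * math.log10(omega - 1.0) + b)

# Illustrative constants only (not the thesis's fitted values):
k, b = 2.0, -7.0
r2 = precipitation_rate(2.0, k, b)   # omega = 2: R = 10**b = 1e-7
r5 = precipitation_rate(5.0, k, b)   # omega = 5: R = 4**k * 10**b
```

Because the law is a power law in (Ω - 1), raising Ω from 2 to 5 multiplies the rate by (5-1)**k = 16 with these constants, which is the kind of sensitivity the temperature- and phase-specific fits quantify.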
Abstract:
This thesis investigates the problem of estimating the three-dimensional structure of a scene from a sequence of images. Structure information is recovered from images continuously using shading, motion or other visual mechanisms. A Kalman filter represents structure in a dense depth map. With each new image, the filter first updates the current depth map with a minimum-variance estimate that best fits the new image data and the previous estimate. The structure estimate is then predicted for the next time step by a transformation that accounts for relative camera motion. Experimental evaluation shows the significant improvement in quality and computation time that can be achieved using this technique.
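The update step can be written as a scalar minimum-variance (Kalman) fusion applied independently at every pixel of the depth map. The sketch below shows only the measurement update (the camera-motion prediction step is omitted), with illustrative noise levels:

```python
import numpy as np

def fuse(depth, var, meas, meas_var):
    """Minimum-variance fusion of the prior depth map with a new
    measurement, applied element-wise (one scalar Kalman update
    per pixel)."""
    gain = var / (var + meas_var)
    return depth + gain * (meas - depth), (1.0 - gain) * var

# Four noisy 2x2 depth-map observations of a true depth of 5.0.
rng = np.random.default_rng(0)
true_depth = np.full((2, 2), 5.0)
depth = rng.normal(true_depth, 1.0)       # first observation as the prior
var = np.full((2, 2), 1.0)
for _ in range(3):
    depth, var = fuse(depth, var, rng.normal(true_depth, 1.0), 1.0)
```

With equal measurement variances the fused estimate reduces to the running average, and the per-pixel variance shrinks to 1/n of the measurement variance after n observations (0.25 here), which is the quality improvement the evaluation reports.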
Abstract:
T. G. Williams, J. J. Rowland, and M. H. Lee, 'Teaching from Examples in Assembly and Manipulation of Snack Food Ingredients by Robot', Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2001), Nov. 2001, pp. 2300-2305.
Abstract:
T. Boongoen and Q. Shen, 'Detecting False Identity through Behavioural Patterns', in Proceedings of the International Crime Science Conference, British Library, London, UK, 2008. Publisher's online version forthcoming. The full text is currently unavailable in CADAIR pending approval by the publisher. Sponsorship: UK EPSRC grant EP/D057086.
Abstract:
In this paper we discuss a new type of query in spatial databases, called the Trip Planning Query (TPQ). Given a set of points P in space, where each point belongs to a category, and given two points s and e, TPQ asks for the best trip that starts at s, passes through exactly one point from each category, and ends at e. An example of a TPQ is when a user wants to visit a set of different places while minimizing the total travelling cost, e.g. what is the shortest travelling plan for me to visit an automobile shop, a CVS pharmacy outlet, and a Best Buy shop along my trip from A to B? The trip planning query is an extension of the well-known TSP problem and is therefore NP-hard. The difficulty of this query lies in the existence of multiple choices for each category. In this paper, we first study fast approximation algorithms for the trip planning query in a metric space, assuming that the data set fits in main memory, and give a theoretical analysis of their approximation bounds. Then, the trip planning query is examined for data sets that do not fit in main memory and must be stored on disk. For the disk-resident data, we consider two cases. In one case, we assume that the points are located in Euclidean space and indexed with an R-tree. In the other case, we consider the problem of points that lie on the edges of a spatial network (e.g. a road network), where the distance between two points is defined using the shortest distance over the network. Finally, we give an experimental evaluation of the proposed algorithms using synthetic data sets generated on real road networks.
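A minimal nearest-neighbor style heuristic for TPQ can be sketched as follows; this is an illustrative greedy variant, not one of the paper's algorithms with proven approximation bounds:

```python
from math import dist

def greedy_trip(s, e, categories):
    """Greedy TPQ heuristic: from the current location, repeatedly visit
    the nearest point whose category is still uncovered, then end at e."""
    here, trip, remaining = s, [s], dict(categories)
    while remaining:
        cat, p = min(
            ((c, q) for c, pts in remaining.items() for q in pts),
            key=lambda cq: dist(here, cq[1]),
        )
        trip.append(p)
        here = p
        del remaining[cat]
    trip.append(e)
    return trip

def trip_length(trip):
    return sum(dist(a, b) for a, b in zip(trip, trip[1:]))

# Two categories, two candidate points each: the multiple choices per
# category are exactly what makes TPQ harder than plain shortest paths.
categories = {
    "pharmacy": [(1.0, 0.0), (4.0, 4.0)],
    "grocery":  [(2.0, 0.5), (0.0, 3.0)],
}
trip = greedy_trip((0.0, 0.0), (3.0, 0.0), categories)
```

On this instance the greedy choice happens to pick the near candidate in each category; in general, such greedy trips can be arbitrarily worse than the optimum, which is why the paper analyzes approximation bounds in a metric space.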
Abstract:
An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys. Each parameter is analyzed independently, because a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases.
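The peak-and-valley analysis can be sketched on a synthetic yaw (head rotation) trace. The amplitude, frequency and turn-count thresholds below are invented for illustration, not the system's tuned values:

```python
import numpy as np

def extrema(signal):
    """Indices of local peaks and valleys of a 1-D motion signal."""
    d = np.sign(np.diff(signal))
    return np.where(np.diff(d) != 0)[0] + 1

def is_head_shake(yaw, fps, min_amp=0.1, min_hz=1.0, min_turns=4):
    """Heuristic head-shake test on one rotation parameter: enough
    alternating swings that are both large and fast enough."""
    turns = extrema(yaw)
    if len(turns) < min_turns:
        return False
    swings = np.abs(np.diff(yaw[turns]))             # peak-to-valley sizes
    duration = (turns[-1] - turns[0]) / fps
    rate = (len(turns) - 1) / (2 * duration)         # half-cycles -> Hz
    return bool(np.all(swings >= min_amp) and rate >= min_hz)

fps = 30
t = np.arange(60) / fps
shake = 0.3 * np.sin(2 * np.pi * 2.0 * t)   # 2 Hz side-to-side rotation
still = 0.01 * np.sin(2 * np.pi * 0.2 * t)  # tiny slow drift
```

Running the same test independently on each of the six tracked parameters mirrors the per-axis analysis described above.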
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. 
In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
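One simplified variant of such a game can be sketched as a best-response loop: players with fixed resource demands pick among fixed-capacity, fixed-price resources and share each resource's price in proportion to their demand. The cost model and instance below are illustrative, not the paper's exact formulation:

```python
def best_response_dynamics(sizes, capacity, price, max_rounds=100):
    """Players repeatedly migrate to a cheaper feasible resource until no
    unilateral move helps: a Nash equilibrium of this collocation game.

    A player of size s on resource r with total load L pays
    price[r] * s / L (proportional sharing of the fixed resource price).
    """
    n_res = len(capacity)
    assign, loads = [], [0.0] * n_res
    for s in sizes:                                  # greedy initial placement
        r = next(r for r in range(n_res) if loads[r] + s <= capacity[r])
        assign.append(r)
        loads[r] += s
    for _ in range(max_rounds):
        moved = False
        for i, s in enumerate(sizes):
            cur = assign[i]
            for r in range(n_res):
                if r == cur or loads[r] + s > capacity[r]:
                    continue                         # infeasible or no-op
                if price[r] * s / (loads[r] + s) < price[cur] * s / loads[cur] - 1e-12:
                    loads[cur] -= s
                    loads[r] += s
                    assign[i] = r
                    moved = True
                    break
        if not moved:
            return assign, loads, True               # no improving move left
    return assign, loads, False

sizes = [4.0, 3.0, 3.0, 2.0]
assign, loads, converged = best_response_dynamics(
    sizes, capacity=[10.0, 10.0], price=[10.0, 10.0])
```

With proportional sharing, players prefer well-packed resources (each extra tenant lowers everyone's share), so the dynamics here consolidate the first three players on one resource and leave the last on the other, at which point no unilateral feasible move reduces anyone's cost.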
Abstract:
The data streaming model provides an attractive framework for one-pass summarization of massive data sets at a single observation point. However, in an environment where multiple data streams arrive at a set of distributed observation points, sketches must be computed remotely and then aggregated through a hierarchy before queries may be conducted. As a result, many sketch-based methods for the single-stream case do not apply directly, either because the error introduced becomes large, or because the methods assume that the streams are non-overlapping. These limitations hinder the application of these techniques to practical problems in network traffic monitoring and aggregation in sensor networks. To address this, we develop a general framework for evaluating and enabling robust computation of duplicate-sensitive aggregate functions (e.g., SUM and QUANTILE) over data produced by distributed sources. We instantiate our approach by augmenting the Count-Min and Quantile-Digest sketches to apply in this distributed setting, and analyze their performance. We conclude with an experimental evaluation to validate our analysis.
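A minimal mergeable Count-Min sketch illustrates the hierarchy-friendly property the paper builds on: summing two sketches cell-wise equals sketching the concatenated streams, which is exact for SUM over non-overlapping streams. (The paper's contribution is an augmentation that also tolerates overlapping, duplicate-bearing streams, which this plain version does not handle.)

```python
import hashlib

class CountMin:
    """Basic Count-Min sketch: point queries never undercount, and
    overcount by at most a fraction of the total stream mass with high
    probability (controlled by width and depth)."""

    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        # One hashed column per row; the row index salts the hash.
        for row in range(self.depth):
            h = hashlib.blake2b(item.encode(), salt=bytes([row])).digest()
            yield row, int.from_bytes(h[:8], "big") % self.width

    def update(self, item, count=1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def query(self, item):
        return min(self.table[row][col] for row, col in self._cells(item))

    def merge(self, other):
        """Cell-wise sum: identical to sketching both streams directly,
        so sketches can be aggregated up a hierarchy before querying."""
        for r in range(self.depth):
            for c in range(self.width):
                self.table[r][c] += other.table[r][c]

# Two observation points see the same flow id; merge their sketches.
a, b = CountMin(), CountMin()
for _ in range(3):
    a.update("flow-1")
for _ in range(5):
    b.update("flow-1")
b.update("flow-2")
a.merge(b)
```

After the merge, the query for "flow-1" reflects both streams' counts (8, plus any collision overcount), which is correct for SUM only because the example streams carry distinct observations; duplicated items across streams would be double-counted, motivating the duplicate-sensitive constructions above.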