999 results for Set


Relevance:

20.00%

Publisher:

Abstract:

A three-dimensional digital model of a representative human kidney is needed for a surgical simulator capable of simulating laparoscopic kidney surgery. Buying a three-dimensional computer model of a representative human kidney, or reconstructing one from an image sequence using commercial software, costs money (sometimes a significant amount). In this paper, the author shows that one can obtain a three-dimensional surface model of a human kidney using images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP, and MeshLab in particular). Neither the images from the Visible Human Data Set nor the software packages used here cost anything. Hence, the workflow for extracting the geometry of a representative human kidney illustrated in the present work is a free alternative to expensive commercial software or to the purchase of a digital model.
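As an illustration of the final step of such a pipeline, the surface of an already-segmented kidney volume can also be extracted with free Python tooling. This is a minimal sketch under assumed inputs; the file names, and the use of scikit-image in place of the paper's ImageJ/ITK-SNAP/MeshLab chain, are ours:

```python
# A minimal sketch of the surface-extraction step, assuming the kidney has
# already been segmented into a binary 3-D volume stored as "kidney_mask.npy"
# (a hypothetical file name, not from the paper).
import numpy as np
from skimage import measure

mask = np.load("kidney_mask.npy")   # binary volume: 1 inside kidney, 0 outside

# Extract a triangulated isosurface at the mask boundary (marching cubes).
verts, faces, normals, values = measure.marching_cubes(mask.astype(float), level=0.5)

# Write a simple OBJ file that MeshLab can open for cleanup and smoothing.
with open("kidney.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for tri in faces:
        f.write(f"f {tri[0]+1} {tri[1]+1} {tri[2]+1}\n")  # OBJ indices are 1-based
```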

Relevance:

20.00%

Publisher:

Abstract:

Analysis of high-resolution satellite images has been an important research topic for urban analysis, and automatic road-network extraction is one of its central tasks. Two approaches to road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from the original image is difficult and computationally expensive owing to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing noise (buildings, parking lots, vegetation regions, and other open spaces); roads are first extracted as elongated regions, and nonlinear noise segments are removed with a median filter (exploiting the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1 m resolution IKONOS data set is used for the experiments.
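As a rough illustration of the level set step, here is a minimal sketch using the morphological Chan-Vese level set from scikit-image; the input file name, iteration count, and smoothing weight are our assumptions, not the paper's settings:

```python
# Minimal level-set-based region extraction on a grayscale satellite image.
# "ikonos.png" and all parameter values below are illustrative.
import numpy as np
from scipy.ndimage import median_filter
from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese

img = img_as_float(io.imread("ikonos.png", as_gray=True))

# Median filter to suppress small nonlinear noise segments before segmentation.
img = median_filter(img, size=3)

# Morphological Chan-Vese: a level-set evolution without explicit PDE solves.
# The zero level set settles on region boundaries (candidate road regions).
seg = morphological_chan_vese(img, 200, init_level_set="checkerboard", smoothing=2)

# 'seg' is a binary mask; elongated connected components are road candidates.
```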

Relevance:

20.00%

Publisher:

Abstract:

We present an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and carbon isotopic ratio of air-CO2. The set-up includes traps for the collection and cryogenic extraction of CO2 from air samples, followed by measurement of the CO2 mixing ratio with an MKS Baratron gauge and analysis of isotopic ratios using the dual-inlet peripheral of a high-sensitivity isotope ratio mass spectrometer (IRMS), a MAT 253. The internal reproducibility (precision) of the δ13C measurement, established from repeat analyses of CO2, is ±0.03‰. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, 'OASIS AIRMIX', is prepared by mixing CO2 from a high-purity cylinder with O2 and N2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the CO2 mixing ratio and the carbon isotopic ratio are ±7 μmol mol⁻¹ (n = 169) and ±0.05‰ (n = 169), respectively, based on the mean difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction for the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions; a correction of +0.2‰ in the δ13C value is determined for an N2O concentration of 329 ppb. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio and the mixing ratio of CO2 in air samples collected during the event differ from those of neighbouring samples, suggesting the role of atmospheric inversion in trapping CO2 emitted from the urban atmosphere during the eclipse.
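The N2O correction can be expressed as a simple scaling. The following rendering assumes the correction is linear in the N2O mixing ratio, which is our reading rather than an equation quoted from the paper:

$$\delta^{13}\mathrm{C}_{\text{corrected}} \approx \delta^{13}\mathrm{C}_{\text{measured}} + 0.2\,\text{‰} \times \frac{[\mathrm{N_2O}]}{329\ \text{ppb}},$$

so that a sample at the reported N2O level of 329 ppb receives the full +0.2‰ correction.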

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual-information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements suffice. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse-signal setup, so they are applicable to other sparse-signal applications, such as compressive sensing, as well.
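To make the gap concrete, here is an illustrative instance with numbers of our own choosing (ignoring the constants hidden in the O-notation). For N = 10000 items, K = 100 defectives, and L = 1000 healthy items to be identified,

$$K \log_2\frac{N}{K} = 100\,\log_2 100 \approx 664 \qquad \text{versus} \qquad \frac{K(L-1)}{N-K} = \frac{100 \times 999}{9900} \approx 10,$$

a reduction of roughly a factor of 66 in this regime.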

Relevance:

20.00%

Publisher:

Abstract:

The contour tree is a topological abstraction of a scalar field that captures the evolution of level set connectivity. It is an effective representation for visual exploration and analysis of scientific data. We describe a work-efficient, output-sensitive, and scalable parallel algorithm for computing the contour tree of a scalar field defined on a domain represented by either an unstructured mesh or a structured grid. A hybrid implementation of the algorithm using the GPU and a multi-core CPU can compute the contour tree of an input containing 16 million vertices in less than ten seconds, with a speedup factor of up to 13. Experiments based on an implementation in a multi-core CPU environment show near-linear speedup for large data sets.
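Contour trees are typically assembled from a join tree and a split tree, each computable by a union-find sweep over vertices in sorted order. The following is our minimal serial sketch of that standard building block, not the paper's parallel algorithm:

```python
# Minimal join-tree sweep: visit vertices from highest to lowest scalar value
# (ties broken by sort order); a union-find tracks connected components of the
# superlevel set, and a tree arc is recorded whenever two components merge.
def join_tree(values, neighbors):
    """values: dict vertex -> scalar. neighbors: dict vertex -> iterable."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    lowest = {}   # component root -> lowest vertex swept so far
    edges = []    # join-tree arcs (upper endpoint, lower endpoint)

    for v in sorted(values, key=values.get, reverse=True):
        parent[v], lowest[v] = v, v
        for w in neighbors[v]:
            if w in parent:                  # w was swept earlier (higher value)
                rv, rw = find(v), find(w)
                if rv != rw:                 # two components join at v
                    edges.append((lowest[rw], v))
                    parent[rw] = rv
                    lowest[rv] = v
    return edges

# Tiny example: a path a-b-c with values 3, 1, 2 has its two maxima join at b.
print(join_tree({"a": 3, "b": 1, "c": 2},
                {"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
```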

Relevance:

20.00%

Publisher:

Abstract:

Bulk Ge15Te85-xIn5Agx glasses are shown to exhibit electrical switching, with switching/threshold voltages in the range 70-120 V for a sample thickness of 0.3 mm. Further, the samples exhibit threshold or memory behavior depending on the ON-state current. The compositional studies confirm the presence of an intermediate phase in the range 8 <= x <= 16, revealed earlier by thermal studies. Further, SET-RESET studies have been performed on these glasses using a triangular pulse of 6 mA amplitude (for SET) and 21 mA amplitude (for RESET). Raman studies of the samples after the SET and RESET operations reveal that the SET state is a crystalline phase obtained by thermal annealing, while the RESET state is the glassy state, similar to the as-quenched samples. Interestingly, samples in the intermediate phase, especially the compositions x = 10, 12, and 14, withstand more SET-RESET cycles, indicating that compositions in the intermediate phase are better suited for phase-change memory applications. (C) 2014 AIP Publishing LLC.

Relevance:

20.00%

Publisher:

Abstract:

It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning cache among virtual machines or reducing the leakage power dissipated in an over-allocated cache by switching part of it off. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) correlate poorly with the WSS of an application, owing to 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform; estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the "tagged WSS (TWSS)" estimation method. We demonstrate the use of TWSS to switch off over-allocated cache ways in Static and Dynamic Non-Uniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, the approach scales with the number of cores on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings than the AMAL and CMR heuristics on SNUCA, respectively.
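As a software analogue of the idea (the paper's TWSS mechanism tags cache lines in hardware at the L2 controllers), the working set over an interval can be estimated by counting the distinct cache lines touched in that interval. A minimal sketch, with an assumed 64-byte line size and window length:

```python
# Estimate working-set size as the number of distinct cache lines touched
# within each fixed-length window of an address trace. The 64-byte line and
# the window length are illustrative choices, not values from the paper.
LINE_BYTES = 64

def wss_per_window(addresses, window=100_000):
    """addresses: iterable of byte addresses accessed, in program order."""
    sizes = []
    lines = set()
    for i, addr in enumerate(addresses):
        lines.add(addr // LINE_BYTES)              # tag = cache-line number
        if (i + 1) % window == 0:
            sizes.append(len(lines) * LINE_BYTES)  # bytes touched this window
            lines.clear()
    if lines:
        sizes.append(len(lines) * LINE_BYTES)
    return sizes
```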

Relevance:

20.00%

Publisher:

Abstract:

Mass balance between the metal and the electrolytic solution, separated by a moving interface, in stable pit growth results in a set of governing equations which are solved for the concentration field and the interface position (pit boundary evolution). The model requires only three inputs: the solid metal concentration, the saturation concentration of the dissolved metal ions, and the diffusion coefficient. A combined eXtended Finite Element Model (XFEM) and level set method is developed in this paper. The extended finite element model handles the jump discontinuity in the metal concentration at the interface using a discontinuous-derivative enrichment formulation. This eliminates the need for a front-conforming mesh and for re-meshing after each time step, as required in the conventional finite element method. The level set method tracks the position of the moving interface and updates it over time. A numerical analysis of pitting corrosion of stainless steel 304 is presented. The proposed method is validated by comparing the numerical results with experimental results, exact solutions, and other approximate solutions.
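The level set ingredient can be illustrated in one dimension: the pit boundary is the zero crossing of a signed distance function phi advected by the front velocity. A minimal sketch with a constant stand-in velocity (the actual model derives the velocity from the concentration field):

```python
# 1-D level set: evolve phi_t + V*|phi_x| = 0 with a Godunov upwind scheme.
# The interface (pit boundary) is the zero crossing of phi; the constant V
# stands in for the concentration-driven front velocity of the paper.
import numpy as np

n, dx, dt, V = 200, 0.01, 0.002, 1.0
x = np.arange(n) * dx
phi = x - 0.5                            # signed distance; interface at x = 0.5

for _ in range(100):
    dm = (phi - np.roll(phi, 1)) / dx    # backward difference
    dp = (np.roll(phi, -1) - phi) / dx   # forward difference
    # Godunov upwinding for an outward-moving front (V > 0)
    grad = np.sqrt(np.maximum(dm, 0.0)**2 + np.minimum(dp, 0.0)**2)
    phi = phi - dt * V * grad
    phi[0], phi[-1] = phi[1] - dx, phi[-2] + dx   # simple extrapolation BCs

i = np.where(np.diff(np.sign(phi)))[0][0]          # sign change = interface
print("interface near x =", x[i])                  # ~0.7 after 100 steps
```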

Relevance:

20.00%

Publisher:

Abstract:

Mass balance between the metal and the electrolytic solution, separated by a moving interface, in stable pit growth results in a set of governing equations which are solved for the concentration field and the interface position (pit boundary evolution). The interface exhibits a jump discontinuity in metal concentration. The extended finite-element model (XFEM) handles this jump discontinuity using a discontinuous-derivative enrichment formulation, eliminating the need for a front-conforming mesh and for re-meshing after each time step, as in the conventional finite-element method. However, the prior interface location is required to solve the governing equations for the concentration field; for this, the level set method is used to track the interface explicitly and update it over time. The level set method is chosen because it is independent of the shape and location of the interface. Thus, a combined XFEM and level set method is developed in this paper. A numerical analysis of pitting corrosion of stainless steel 304 is presented. The proposed model is validated by comparing the numerical results with experimental results, exact solutions, and other approximate solutions. An empirical model for the pitting potential is also derived from the finite-element results. The studies show that the pitting profile depends to a large extent on factors such as ion concentration, solution pH, and temperature. Studying the individual and combined effects of these factors on the pitting potential is worthwhile, since the pitting potential directly influences the corrosion rate.

Relevance:

20.00%

Publisher:

Abstract:

The tetrablock, roughly speaking, is the set of all linear fractional maps that map the open unit disc to itself. A formal definition of this inhomogeneous domain is given below. This paper considers triples of commuting bounded operators (A, B, P) that have the tetrablock as a spectral set; such a triple is called a tetrablock contraction. The motivation comes from the success of model theory in another inhomogeneous domain, namely the symmetrized bidisc Γ. A pair of commuting bounded operators (S, P) with Γ as a spectral set is called a Γ-contraction, and always has a dilation. The two domains are related intricately, as Lemma 3.2 below shows. Given a triple (A, B, P) as above, we associate with it a pair (F1, F2), called its fundamental operators. We show that (A, B, P) dilates if the fundamental operators F1 and F2 satisfy certain commutativity conditions. Moreover, the dilation space is no bigger than the minimal isometric dilation space of the contraction P. Whether these commutativity conditions are also necessary is not known; what we have shown is that if there is a tetrablock isometric dilation on the minimal isometric dilation space of P, then those commutativity conditions are necessarily imposed on the fundamental operators. En route, we decipher the structure of a tetrablock unitary (the candidate dilation triple) and a tetrablock isometry (the restriction of a tetrablock unitary to a joint invariant subspace). We derive new results about Γ-contractions and apply them to tetrablock contractions. The methods applied are motivated by [11]. Although the calculations are lengthy and more complicated, they reveal that the dilation depends on the mutual relationship of the two fundamental operators, so that certain conditions need to be satisfied. The question of whether all tetrablock contractions dilate is unresolved.
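For the reader's convenience, the standard definition of the tetrablock from the literature (due to Abouhajar, White, and Young; supplied by us, not quoted from this abstract) is

$$\mathbb{E} = \bigl\{ (x_1, x_2, x_3) \in \mathbb{C}^3 : 1 - x_1 z - x_2 w + x_3 z w \neq 0 \ \text{whenever } |z| \le 1 \text{ and } |w| \le 1 \bigr\}.$$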

Relevance:

20.00%

Publisher:

Abstract:

Detailed pedofacies characterization, along with lithofacies investigations, of the Mio-Pleistocene Siwalik sediments exposed in the Ramnagar sub-basin has been carried out to elucidate the variability in time and space of fluvial processes and the role of intra- and extra-basinal controls on fluvial sedimentation during the evolution of the Himalayan foreland basin (HFB). The dominance of multiple, moderately to strongly developed palaeosol assemblages during deposition of the Lower Siwalik (~12-10.8 Ma) sediments suggests that the HFB was marked by the Upland set-up of Thomas et al. (2002). Activity of intra-basinal faults on the uplands and deposition of terminal fans at different times caused the development of multiple soils. Detailed pedofacies and lithofacies studies further indicate the prevalence of stable tectonic conditions and the development of meandering streams with broad floodplains. The Middle Siwalik (~10.8-4.92 Ma) sub-group, however, is marked by multistoried sandstones, minor mudstone, and mainly weakly developed palaeosols, indicating deposition by large braided rivers in the form of megafans in the Lowland set-up of Thomas et al. (2002). A significant change in the nature and size of the rivers from the Lower to the Middle Siwalik at ~10 Ma is found throughout almost the entire basin, from the Kohat Plateau (Pakistan) to Nepal, because the Himalayan orogeny witnessed its greatest tectonic upheaval at this time, leading to the attainment of great heights by the Himalaya, intensification of the monsoon, development of large river systems, and a high rate of sedimentation, thereby a major change from the Upland set-up to the Lowland set-up over major parts of the HFB. An interesting geomorphic environmental set-up prevailed in the Ramnagar sub-basin during deposition of the studied Upper Siwalik (~4.92 to <1.68 Ma) sediments, as observed from the degree of pedogenesis and the type of palaeosols. In general, the Upper Siwalik sub-group in the Ramnagar sub-basin is subdivided, from bottom to top, into the Purmandal sandstone (4.92-4.49 Ma), Nagrota (4.49-1.68 Ma), and Boulder Conglomerate (<1.68 Ma) formations on the basis of sedimentological characters and changes in dominant lithology. The presence of mudstone, a few thin gravel beds, and a dominant sandstone lithology with weakly to moderately developed palaeosols in the Purmandal sandstone Fm. indicates deposition by shallow braided streams. Deposition of the mudstone-dominated Nagrota Fm., with moderately to well developed palaeosols and a zone of gleyed palaeosols with laminated mudstones and thin sandstones, took place in an environment marked by numerous small lakes, water-logged regions, and small streams just south of the Piedmont zone, perhaps similar to what is happening presently in the Upland region/the Upper Gangetic plain, an area locally called the 'Trai region' (Pascoe, 1964). Deposition of the Boulder Conglomerate Fm. took place in a gravelly braided river system close to the Himalayan Ranges. Activity along the Main Boundary Fault led to distal-ward progradation of these environments and to the development of an overall coarsening-upward sequence. (C) 2014 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

It has been shown earlier [1] that relaxed force constants (RFCs) can be used as a measure of bond strength only when the bonds form part of a complete valence internal coordinate (VIC) basis. If a bond is not part of the complete VIC basis, its RFC is not necessarily a measure of bond strength. Sometimes it is possible to have a complete VIC basis that does not contain the intramolecular hydrogen bond (IMHB) as part of the basis, in which case the RFC of the IMHB is not necessarily a measure of bond strength. However, we know that an IMHB is a weak bond, and hence its RFC ought to be a measure of bond strength. We resolve this problem of the IMHB not being part of the complete basis by postulating 'equivalent' basis sets, in which the IMHB is part of the basis in at least one of the equivalent sets of VIC. As long as a given IMHB appears in one of the equivalent complete VIC basis sets, its RFC can be used as a bond strength parameter.
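As a reminder of the standard compliance-matrix definition (our addition, following Decius' formalism, not an equation from this abstract): with F the force constant (Hessian) matrix expressed in a complete VIC basis,

$$\mathbf{C} = \mathbf{F}^{-1}, \qquad k_i^{\text{relaxed}} = \frac{1}{C_{ii}},$$

which makes explicit why the RFC of a coordinate is defined only relative to a complete basis that contains that coordinate, the very issue the postulated equivalent bases resolve.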

Relevance:

20.00%

Publisher:

Abstract:

The 3-Hitting Set problem involves a family F of subsets, each of size at most three, over a universe U. The goal is to find a subset of U of the smallest possible size that intersects every set in F. The version of the problem with parity constraints asks for a subset S of size at most k that, in addition to being a hitting set, also satisfies certain parity constraints on the sizes of the intersections of S with each set in the family F. In particular, an odd (even) set is a hitting set that hits every set in either one or three (exactly two) elements, and a perfect code is a hitting set that intersects every set in exactly one element. These questions are of fundamental interest in many contexts for general set systems. Just as for Hitting Set, we find these questions interesting for the case of families consisting of sets of size at most three. In this work, we initiate an algorithmic study of these problems in this special case, focusing on a parameterized analysis. For each problem, we give efficient fixed-parameter tractable algorithms using search trees tailor-made to the constraints in question, and also polynomial kernels using sunflower-like arguments, in a manner that accounts for equivalence under the additional parity constraints.
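The search-tree idea underlying such FPT algorithms is easy to sketch for plain 3-Hitting Set (without the parity constraints the paper tailors its trees to): pick any set not yet hit and branch on its at most three elements, decrementing the budget k, for O(3^k) branches. A minimal sketch in Python, with a made-up example family:

```python
# Classic O(3^k) search tree for 3-Hitting Set: branch on an unhit set,
# trying each of its <= 3 elements as the next member of the hitting set.
def hitting_set(sets, k, chosen=frozenset()):
    """Return a hitting set of size <= k extending 'chosen', or None."""
    unhit = [s for s in sets if not (s & chosen)]
    if not unhit:
        return chosen                  # every set is hit
    if k == 0:
        return None                    # budget exhausted but sets remain
    for v in unhit[0]:                 # |unhit[0]| <= 3, so <= 3 branches
        result = hitting_set(sets, k - 1, chosen | {v})
        if result is not None:
            return result
    return None

# Example family over U = {1..5}; a hitting set of size 2 exists.
family = [frozenset(s) for s in ([1, 2, 3], [2, 4], [3, 5], [4, 5])]
print(hitting_set(family, 2))          # e.g. frozenset({2, 5}) or similar
```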

Relevance:

20.00%

Publisher:

Abstract:

The ATLAS and CMS collaborations at the LHC have performed analyses on the existing data sets, studying the case of one vector-like fermion or multiplet coupling to the standard model Yukawa sector. In the near future, with more data available, these experimental collaborations will start to investigate more realistic cases. The presence of more than one extra vector-like multiplet is indeed a common situation in many extensions of the standard model. The interplay of these vector-like multiplets with precision electroweak bounds and with flavour and collider phenomenology is an important question, both for establishing bounds and for the discovery of physics beyond the standard model. In this work we study the phenomenological consequences of the presence of two vector-like multiplets. We analyse the constraints on such scenarios from tree-level data and oblique corrections for the case of mixing with each of the SM generations. In the present work, we restrict ourselves to scenarios with two top-like partners and no mixing in the down sector.

Relevance:

20.00%

Publisher:

Abstract:

Support vector machines (SVMs) are a popular class of supervised models in machine learning, but the associated compute-intensive learning algorithm limits their use in real-time applications. This paper presents a fully scalable architecture for a coprocessor that can compute multiple rows of the kernel matrix in parallel. Further, we propose an extended variant of the popular decomposition technique, sequential minimal optimization, which we call the hybrid working set (HWS) algorithm, to effectively exploit both cached kernel columns and the parallel computational power of the coprocessor. The coprocessor is implemented on the Xilinx Virtex-7 field-programmable gate array-based VC707 board and achieves a speedup of up to 25x for kernel computation over single-threaded computation on an Intel Core i5. An application speedup of up to 15x over a software implementation of LIBSVM, and of up to 23x over SVMLight, is achieved using the HWS algorithm in conjunction with the coprocessor. The reduction in the number of iterations and the sensitivity of the optimization time to cache size under the HWS algorithm are also shown.
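The coprocessor's central primitive, evaluating several kernel-matrix rows at once, can be mimicked in software as a batched computation. A minimal NumPy sketch for an RBF kernel (the kernel choice, gamma, and data shapes are our illustrations, not the paper's):

```python
# Compute multiple rows of an RBF kernel matrix in one batched operation,
# mirroring what the coprocessor parallelizes in hardware.
import numpy as np

def rbf_kernel_rows(X, row_idx, gamma=0.1):
    """Rows K[i, :] = exp(-gamma * ||x_i - x_j||^2) for i in row_idx."""
    Xi = X[row_idx]                                   # (r, d) working-set rows
    # squared Euclidean distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (Xi**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * Xi @ X.T
    return np.exp(-gamma * np.maximum(sq, 0.0))       # clamp tiny negatives

X = np.random.rand(1000, 20)
K_rows = rbf_kernel_rows(X, [3, 17, 42])              # three rows in one batch
```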