82 results for Dynamic Calibration
Abstract:
In Information Visualization, adding and removing data elements can strongly impact the underlying visual space. We have developed an inherently incremental technique (incBoard) that maintains a coherent disposition of elements from a dynamic multidimensional data set on a 2D grid as the set changes. Here, we introduce a novel layout that uses pairwise similarity from grid neighbors, as defined in incBoard, to reposition elements on the visual space, free from constraints imposed by the grid. The board continues to be updated and can be displayed alongside the new space. As similar items are placed together, while dissimilar neighbors are moved apart, it supports users in the identification of clusters and subsets of related elements. Densely populated areas identified in the incSpace can be efficiently explored with the corresponding incBoard visualization, which is not susceptible to occlusion. The solution remains inherently incremental and maintains a coherent disposition of elements, even for fully renewed sets. The algorithm considers relative positions for the initial placement of elements, and raw dissimilarity to fine-tune the visualization. It has low computational cost, with complexity depending only on the size of the currently viewed subset, V. Thus, a data set of size N can be sequentially displayed in O(N) time, reaching O(N²) only if the complete set is simultaneously displayed.
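A minimal sketch of the incremental idea above, assuming a user-supplied `dissim` callback; the drop-near-nearest placement and the stress-style refinement are illustrative guesses at the approach, not the authors' actual incSpace procedure. Note that the refinement loop is quadratic in the displayed subset, consistent with the O(N²) bound above.

```python
import numpy as np

def place_incrementally(items, dissim, positions=None, steps=50, lr=0.05):
    """Place each new item near its most similar already-placed item,
    then fine-tune so that screen distance tracks raw dissimilarity."""
    positions = {} if positions is None else positions
    for item in items:
        if not positions:                        # first element at the origin
            positions[item] = np.zeros(2)
            continue
        nearest = min(positions, key=lambda p: dissim(item, p))
        positions[item] = positions[nearest] + np.random.uniform(-0.1, 0.1, 2)
    keys = list(positions)
    for _ in range(steps):                       # stress-style fine-tuning
        for i, a in enumerate(keys):
            for b in keys[i + 1:]:
                delta = positions[a] - positions[b]
                d = np.linalg.norm(delta) + 1e-9
                step = lr * (d - dissim(a, b)) * delta / d
                positions[a] -= step             # pull together / push apart
                positions[b] += step
    return positions
```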
Abstract:
This paper presents a new technique and two algorithms to bulk-load data into multi-way dynamic metric access methods, based on the covering radius of the representative elements employed to organize data in hierarchical data structures. The proposed algorithms are sample-based, and they always build a valid and height-balanced tree. We compare the proposed algorithms with existing ones, showing their behavior when bulk-loading data into the Slim-tree metric access method. After identifying the worst case of our first algorithm, we describe adequate countermeasures in an elegant way, creating the second algorithm. Experiments performed to evaluate their performance show that our bulk-loading methods build trees faster than the sequential insertion method regarding construction time, and that they also significantly improve search performance. (C) 2009 Elsevier B.V. All rights reserved.
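To make the strategy concrete, here is an illustrative sample-based bulk-loading sketch; the sampling, nearest-representative grouping and node layout are simplifying assumptions, and the paper's algorithms additionally guarantee a valid, height-balanced Slim-tree.

```python
import random

def bulk_load(objects, dist, fanout):
    """Recursively partition `objects` around sampled representatives;
    each entry keeps its covering radius for search-time pruning.
    Assumes `dist` is a metric and objects are mostly distinct."""
    if len(objects) <= fanout:                     # small enough: leaf node
        return {"leaf": True, "objects": objects}
    reps = random.sample(objects, fanout)          # sampled representatives
    buckets = [(r, []) for r in reps]
    for obj in objects:
        _, members = min(buckets, key=lambda b: dist(obj, b[0]))
        members.append(obj)                        # nearest representative wins
    entries = []
    for rep, members in buckets:
        if not members:                            # degenerate bucket: skip
            continue
        radius = max(dist(rep, m) for m in members)   # covering radius
        entries.append({"rep": rep, "radius": radius,
                        "child": bulk_load(members, dist, fanout)})
    return {"leaf": False, "entries": entries}
```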
Abstract:
A finite difference technique, based on a projection method, is developed for solving the dynamic three-dimensional Ericksen-Leslie equations for nematic liquid crystals subject to a strong magnetic field. The governing equations in this situation are derived using primitive variables and are solved using the ideas behind the GENSMAC methodology (Tome and McKee [32]; Tome et al. [34]). The resulting numerical technique is then validated by comparing the numerical solution against an analytic solution for steady three-dimensional flow between two parallel plates subject to a strong magnetic field. The validated code is then employed to solve channel flow for which there is no analytic solution. (C) 2009 Elsevier B.V. All rights reserved.
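For orientation, a projection method of this kind advances a tentative velocity and then corrects it with an auxiliary potential so the velocity field stays divergence-free. The generic two-step form below is a sketch only; the paper's discretization also involves the director dynamics and the Leslie stress terms, which are omitted here.

```latex
\tilde{\mathbf{u}} = \mathbf{u}^{(n)}
  + \Delta t\,\big[\text{convective} + \text{viscous} + \text{magnetic terms}\big]^{(n)},
\qquad
\nabla^{2}\psi = \frac{\nabla\cdot\tilde{\mathbf{u}}}{\Delta t},
\qquad
\mathbf{u}^{(n+1)} = \tilde{\mathbf{u}} - \Delta t\,\nabla\psi .
```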
Abstract:
The main objective of this paper is to discuss maximum likelihood inference for the comparative structural calibration model (Barnett, in Biometrics 25:129-142, 1969), which is frequently used in the problem of assessing the relative calibrations and relative accuracies of a set of p instruments, each designed to measure the same characteristic on a common group of n experimental units. We consider asymptotic tests to address these questions. The methodology is applied to a real data set and a small simulation study is presented.
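In common notation (ours, not necessarily the paper's), Barnett's comparative structural calibration model for p instruments measuring n units can be written as

```latex
x_{ij} = \alpha_i + \beta_i \xi_j + \varepsilon_{ij},
\qquad i = 1,\dots,p,\quad j = 1,\dots,n,
\qquad \varepsilon_{ij} \sim N(0,\sigma_i^{2}),
```

where xi_j is the latent true value for unit j (itself random in the structural version). Relative calibration then corresponds to hypotheses on the pairs (alpha_i, beta_i), such as alpha_i = 0 and beta_i = 1, while relative accuracy compares the error variances sigma_i^2.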
Abstract:
Low-density lipoprotein (LDL) particles are the major cholesterol-carrying lipoproteins in the human circulation, transporting cholesterol from the liver to peripheral tissues. High levels of LDL cholesterol (LDL-C) are a known risk factor for the development of coronary artery disease (CAD). The most common approach to determining LDL-C in the clinical laboratory involves the Friedewald formula. However, in certain situations, this approach is inadequate. In this paper we report on the enhancement of the Europium emission band of the Europium chlortetracycline complex (CTEu) in the presence of LDL. The emission intensity at 615 nm of the CTEu increases with increasing amounts of LDL. This phenomenon allowed us to propose a method to determine the LDL concentration in a sample composed of an aqueous solution of LDL. With this result we obtained an LDL calibration curve, with an LOD (limit of detection) of 0.49 mg/mL and an SD (standard deviation) of 0.003. We observed that the CTEu complex provides a wider dynamic concentration range for LDL determination than that previously obtained with Eu-tetracycline. The averaged emission lifetimes of the CTEu and CTEu with LDL (1.5 mg/mL) complexes were measured as 15 and 46 μs, respectively. A study with some metallic interferents is presented. (C) 2010 Elsevier Inc. All rights reserved.
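For reference, the Friedewald formula mentioned above estimates LDL-C from total cholesterol (TC), HDL cholesterol and triglycerides (TG); it is known to become unreliable, for example, at triglyceride levels above roughly 400 mg/dL, one of the situations where the approach is inadequate.

```latex
\text{LDL-C} = \text{TC} - \text{HDL-C} - \frac{\text{TG}}{5}
\qquad \text{(all concentrations in mg/dL)}
```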
Abstract:
The traditional reduction methods to represent the fusion cross sections of different systems are flawed when attempting to completely eliminate the geometrical aspects, such as the heights and radii of the barriers, and the static effects associated with the excess neutrons or protons in weakly bound nuclei. We remedy this by introducing a new dimensionless universal function, which allows the separation and disentanglement of the static and dynamic aspects of the breakup coupling effects connected with the excess nucleons. Applying this new reduction procedure to fusion data of several weakly bound systems, we find a systematic suppression of complete fusion above the Coulomb barrier and enhancement below it. Different behaviors are found for the total fusion cross sections. They are appreciably suppressed in collisions of neutron-halo nuclei, while they are practically not affected by the breakup coupling in cases of stable weakly bound nuclei. (C) 2009 Elsevier B.V. All rights reserved.
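One concrete way to build such a dimensionless reduction is sketched below, under the assumption that it follows Wong's barrier-penetration formula (barrier height V_B, radius R_B, curvature hbar*omega); the paper's exact definition may differ in detail.

```latex
\sigma_F^{\mathrm{Wong}} = \frac{\hbar\omega R_B^{2}}{2E}
  \ln\!\left[1 + \exp\!\left(\frac{2\pi(E - V_B)}{\hbar\omega}\right)\right]
\;\Longrightarrow\;
F(x) \equiv \frac{2E}{\hbar\omega R_B^{2}}\,\sigma_F
  = \ln\!\left(1 + e^{2\pi x}\right) \equiv F_0(x),
\qquad x = \frac{E - V_B}{\hbar\omega} .
```

All systems then collapse onto the system-independent F_0(x) when only static and geometric effects are present, so deviations from F_0 isolate the dynamic breakup coupling effects.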
Abstract:
A new technique to analyze fusion data is developed. From experimental cross sections and results of coupled-channel calculations a dimensionless function is constructed. In collisions of strongly bound nuclei this quantity is very close to a universal function of a variable related to the collision energy, whereas for weakly bound projectiles the effects of breakup coupling are measured by the deviations with respect to this universal function. This technique is applied to collisions of stable and unstable weakly bound isotopes.
Abstract:
A structure-dynamic approach to cortical systems is reported which is based on the number of paths and the accessibility of each node. The latter measurement is obtained by performing self-avoiding random walks in the respective networks, so as to simulate dynamics, and then calculating the entropies of the transition probabilities for walks starting from each node. Cortical networks of three species, namely cat, macaque and human, are studied considering structural and dynamical aspects. It is verified that the human cortical network presents the highest accessibility and number of paths (in terms of z-scores). The correlation between the number of paths and accessibility is also investigated as a means to quantify the level of independence between paths connecting pairs of nodes in cortical networks. By comparing the cortical networks of cat, macaque and human, it is verified that the human cortical network tends to present the largest number of independent paths of length larger than four. These results suggest that the human cortical network is potentially the most resilient to brain injuries. (C) 2009 Elsevier B.V. All rights reserved.
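A minimal sketch of the accessibility measurement described above: run self-avoiding random walks from a node, estimate the probability of each end node, and exponentiate the entropy of that distribution. The walk length h and the Monte Carlo sampling scheme are assumptions, not the paper's exact protocol.

```python
import math
import random
from collections import Counter

def accessibility(graph, start, h=3, walks=10000):
    """graph: dict node -> list of neighbours; returns exp(entropy) of
    the end-node distribution of self-avoiding walks from `start`."""
    ends = Counter()
    for _ in range(walks):
        node, visited = start, {start}
        for _ in range(h):                    # one self-avoiding step
            options = [n for n in graph[node] if n not in visited]
            if not options:                   # trapped: stop early
                break
            node = random.choice(options)
            visited.add(node)
        ends[node] += 1
    entropy = -sum((c / walks) * math.log(c / walks) for c in ends.values())
    return math.exp(entropy)                  # effective number of reachable nodes
```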
Abstract:
We present a minor but essential modification to the CODEX 1D-MAS exchange experiment. The new CONTRA method, which requires only minor changes to the original sequence, has advantages over the previously introduced S-CODEX, since it is less sensitive to artefacts caused by finite pulse lengths. The performance of this variant, including the finite pulse effect, was confirmed by SIMPSON calculations and demonstrated on a number of dynamic systems. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
Dynamic Time Warping (DTW), a pattern matching technique traditionally used for restricted-vocabulary speech recognition, is based on a temporal alignment of the input signal with the template models. The principal drawback of DTW is its high computational cost as the lengths of the signals increase. This paper extends the results of our previously published conference paper, which introduced an optimized version of the DTW that is based on the Discrete Wavelet Transform (DWT). (C) 2008 Elsevier B.V. All rights reserved.
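For reference, the textbook DTW recurrence that the paper accelerates is shown below; this is the standard O(len(a) * len(b)) dynamic program, not the authors' DWT-based optimization.

```python
def dtw(a, b):
    """Classic dynamic-programming alignment cost between two sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])        # local distance
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```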
Abstract:
The Corumba Group, cropping out in the southern Paraguay Belt in Brazil, is one of the most complete Ediacaran sedimentary archives of palaeogeographic, climatic, biogeochemical and biotic evolution in southwestern Gondwana. The unit hosts a rich fossil record including acritarchs, vendotaenids (Vendotaenia, Eoholynia), soft-bodied metazoans (Corumbella) and skeletal fossils (Cloudina, Titanotheca). The Tamengo Formation, made up mainly of limestones and marls, provides a rich bio- and chemostratigraphic record. Several outcrops formerly assigned to the Cuiaba Group are here included in the Tamengo Formation on the basis of lithological and chemostratigraphical criteria. High-resolution carbon isotopic analyses are reported for the Tamengo Formation showing (from base to top): (1) a positive delta(13)C excursion to +4 parts per thousand PDB above post-glacial negative values; (2) a negative excursion to -3.5 parts per thousand associated with a marked regression and subsequent transgression; (3) a positive excursion to +5.5 parts per thousand; and (4) a plateau characterized by delta(13)C around +3 parts per thousand. A U-Pb SHRIMP zircon age of an ash bed interbedded in the upper part of the delta(13)C positive plateau yielded 543 +/- 3 Ma, which is considered as the depositional age (Babinski et al., 2008a). The positive plateau in the upper Tamengo Formation and the preceding positive excursion are ubiquitous features in several successions worldwide, including the Nama Group (Namibia), the Dengying Formation (South China) and the Nafun and Ara groups (Oman). This plateau is constrained between 542 and 551 Ma, thus consistent with the age of the upper Tamengo Formation. The negative excursion of the lower Tamengo Formation may be correlated to the Shuram-Wonoka negative anomaly, although delta(13)C values do not fall below -3.5 parts per thousand in the Brazilian sections. Sedimentary breccias occur just beneath this negative excursion in the lower Tamengo Formation. One possible interpretation of the origin of these breccias is a glacioeustatic sea-level fall, but a tectonic interpretation cannot be completely ruled out. Published by Elsevier B.V.
Abstract:
The Patino Formation sandstones, which crop out in the Aregua neighborhood in Eastern Paraguay and show columnar joints near the contact zone with a nephelinite dyke, have as their main characteristics a high proportion of syntaxial quartz overgrowth and a porosity originating from different processes, initially by dissolution and later by partial filling and fracturing. Features like the presence of floating grains in the syntaxial cement, the transitional interpenetrative contact between the silica-rich cement and grains, as well as the intense fracture porosity, are strong indications that the cement was formed by dissolution and reprecipitation of quartz from the framework under the effect of thermal expansion followed by rapid contraction. The increase of silica-rich cement towards the dyke, in association with the orthogonal disposition of the columns relative to the dyke walls, indicates that the igneous body may represent the main heat source for the interstitial aqueous solutions previously existing in the sediments. At the macroscopic scale, the increase of internal tensions in the sandstones is responsible for the nucleation of polygons, leading to the individualization of prisms, which are interconnected by a system of joints, formed first on low-temperature isotherm surfaces and later on successive adjacent planes towards the dyke heat source.
Abstract:
We investigate several two-dimensional guillotine cutting stock problems and their variants in which orthogonal rotations are allowed. We first present two dynamic programming based algorithms for the Rectangular Knapsack (RK) problem and its variants in which the patterns must be staged. The first algorithm solves the recurrence formula proposed by Beasley; the second algorithm - for staged patterns - also uses a recurrence formula. We show that if the items are not too small compared to the dimensions of the bin, then these algorithms require polynomial time. Using these algorithms we solved all instances of the RK problem found in the OR-LIBRARY, including one for which no optimal solution was previously known. We also consider the Two-dimensional Cutting Stock problem. We present a column generation based algorithm for this problem that uses the first algorithm mentioned above to generate the columns. We propose two strategies to tackle the residual instances. We also investigate a variant of this problem where the bins have different sizes. Finally, we study the Two-dimensional Strip Packing problem. We also present a column generation based algorithm for this problem that uses the second algorithm mentioned above, where staged patterns are imposed. In this case we solve instances for two-, three- and four-staged patterns. We report on computational experiments with the various algorithms we propose in this paper. The results indicate that these algorithms seem to be suitable for solving real-world instances. We give a detailed description (a pseudo-code) of all the algorithms presented here, so that the reader may easily implement them. (c) 2007 Elsevier B.V. All rights reserved.
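As a rough illustration of the first kind of algorithm, the dynamic program below computes the optimal value of an unconstrained guillotine pattern for a single plate: a plate's value is the best of one fitting item or any vertical/horizontal guillotine cut. It follows the general shape of Beasley's recurrence; the discretization-point and staged-pattern refinements from the paper are omitted.

```python
from functools import lru_cache

def guillotine_value(W, H, items):
    """items: list of (w, h, value) with integer dimensions; returns the
    best value obtainable from a W x H plate using guillotine cuts."""
    @lru_cache(maxsize=None)
    def best(w, h):
        # Option 1: keep the plate whole and place the best-fitting item.
        v = max([val for iw, ih, val in items if iw <= w and ih <= h],
                default=0)
        for x in range(1, w // 2 + 1):        # vertical cuts (symmetric)
            v = max(v, best(x, h) + best(w - x, h))
        for y in range(1, h // 2 + 1):        # horizontal cuts (symmetric)
            v = max(v, best(w, y) + best(w, h - y))
        return v
    return best(W, H)
```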
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template's solution, provided by the author of the exercise. Each solution is a geometric construction which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent or not. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template's solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves the non-trivial problem of checking student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
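A minimal sketch of the checking idea in the paragraph above: treat each construction as a function and test equivalence by comparing outputs on randomly sampled inputs. Point-only inputs, a fixed tolerance and pure random sampling are simplifying assumptions; iGeom compares distances between full geometric objects.

```python
import random

def equivalent(student, template, n_inputs, trials=100, tol=1e-9):
    """student, template: construction functions mapping a list of
    (x, y) input points to a list of (x, y) output points."""
    for _ in range(trials):
        pts = [(random.uniform(-10, 10), random.uniform(-10, 10))
               for _ in range(n_inputs)]
        out_s, out_t = student(pts), template(pts)
        if len(out_s) != len(out_t):
            return False
        for (xs, ys), (xt, yt) in zip(out_s, out_t):
            if abs(xs - xt) > tol or abs(ys - yt) > tol:
                return False                  # outputs differ on this input
    return True                               # agreed on all sampled inputs
```

Agreement on sampled inputs certifies equivalence only probabilistically, but a single disagreement is conclusive evidence that the student's construction is not equivalent to the template.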
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, this is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those of the response variable. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are presented to verify the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
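A plausible formalization of such a model (notation ours, not necessarily the paper's): the true covariate x_i is observed with an error whose variance changes from observation to observation,

```latex
Y_i = \alpha + \beta x_i + e_i,
\qquad X_i = x_i + u_i,
\qquad e_i \sim N(0,\sigma^{2}),\quad u_i \sim N(0,\sigma_{u_i}^{2}),
\qquad i = 1,\dots,n,
```

so the usual calibration model is recovered in the limit sigma_{u_i}^2 -> 0, i.e., when the measurement errors in the independent variables are negligible.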