980 results for size accuracy
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomials-based basis. To obtain the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. Thus, we propose to use a weighted linear regression approach, where all k-order polynomials are used as predictor variables and weights are proportional to the reference density. Finally, for the case of second-order Hermite polynomials (normal reference) and first-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
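To make the regression-based coordinate estimation more concrete, the following is a minimal Python sketch under illustrative assumptions (the grid, the target density, the polynomial order and the use of the probabilists' Hermite basis without normalisation are choices made here, not taken from the contribution): the log-ratio of a density to a normal reference is regressed on Hermite polynomials with weights proportional to the reference density.

```python
# Minimal sketch (not the authors' implementation): coordinates of a density
# against a standard-normal reference, estimated by weighted least squares on
# Hermite polynomials; grid, density and order K are illustrative assumptions.
import numpy as np
from numpy.polynomial.hermite_e import hermevander  # probabilists' Hermite He_0..He_K
from scipy.stats import norm

x = np.linspace(-5, 5, 401)              # evaluation grid
ref = norm.pdf(x)                        # reference density (standard normal)
f = norm.pdf(x, loc=0.4, scale=1.3)      # density whose coordinates we want

K = 4
X = hermevander(x, K)                    # design matrix: He_k(x) for k = 0..K
y = np.log(f / ref)                      # log-ratio expanded in the basis
w = ref / ref.sum()                      # weights proportional to the reference density

Xw = X * w[:, None]                      # weighted least squares normal equations
beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
print("estimated coordinates:", beta.round(3))
```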
Abstract:
With the rapid development in technology over recent years, construction, in common with many areas of industry, has become increasingly complex. It would, therefore, seem important to develop and extend the understanding of complexity so that industry in general, and in this case the construction industry, can work with greater accuracy and efficiency to provide clients with a better service. This paper aims to generate a definition of complexity and a method for its measurement in order to assess its influence upon the accuracy of the quantity surveying profession in UK new-build office construction. Quantitative data came from an analysis of twenty projects of varying size and value, and qualitative data came from interviews with professional quantity surveyors. The findings highlight the difficulty in defining and measuring project complexity. The correlation between accuracy and complexity was not straightforward, being subject to many extraneous variables, particularly the impact of project size. Further research is required to develop a better measure of complexity, in order to improve the response of quantity surveyors so that an appropriate level of effort can be applied to individual projects, permitting greater accuracy and enabling better resource planning within the profession.
Abstract:
1. Suspension feeding by caseless caddisfly larvae (Trichoptera) constitutes a major pathway for energy flow, and strongly influences productivity, in streams and rivers. 2. Consideration of the impact of these animals on lotic ecosystems has been strongly influenced by a single study investigating the efficiency of particle capture of nets built by one species of hydropsychid caddisfly. 3. Using water sampling techniques at appropriate spatial scales, and taking greater consideration of local hydrodynamics than previously, we examined the size-frequency distribution of particles captured by the nets of Hydropsyche siltalai. Our results confirm that capture nets are selective in terms of particle size, and in addition suggest that this selectivity is for particles likely to provide the most energy. 4. By incorporating estimates of flow diversion around the nets of caseless caddisfly larvae, we show that capture efficiency (CE) is considerably higher than previously estimated, and conclude that more consideration of local hydrodynamics is needed to evaluate the efficiency of particle capture. 5. We use our results to postulate a mechanistic explanation for a recent example of interspecific facilitation, whereby a reduction of near-bed velocities seen in single species monocultures leads to increased capture rates and local depletion of seston within the region of reduced velocity.
Abstract:
It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than in physical world conditions. On the other hand, estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of the objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere controlled using the wand joystick to the position where they thought a previously displayed virtual cube (stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference of real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor cue one with limited background cues, and a rich cue one with textured background surfaces. It was found that distances were underestimated in both poor and rich cue conditions, with greater underestimation in the poor cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least-squares fitting of Stevens' power law indicated a high degree of accuracy during the estimation of object locations. This accuracy was higher than in other studies which were not based on a size-estimation paradigm. Thus, as an indirect result, this study appears to show that accuracy when estimating egocentric distances may be increased by using an experimental method that provides information on the relative size of the objects used.
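As a concrete illustration of the power-law fit mentioned above, the short sketch below fits Stevens' law on log-log axes by ordinary least squares; the depth values echo the 0.6–3 m range of the study, but the perceived estimates are hypothetical numbers, not the experimental data.

```python
# Illustrative least-squares fit of Stevens' power law, perceived = k * d**a,
# on log-log axes; the perceived-distance values are hypothetical.
import numpy as np

physical = np.array([0.6, 1.0, 1.5, 2.0, 2.5, 3.0])         # cube depths in metres
perceived = np.array([0.55, 0.88, 1.30, 1.68, 2.05, 2.40])  # hypothetical mean estimates

a, log_k = np.polyfit(np.log(physical), np.log(perceived), 1)  # slope = exponent a
print(f"exponent a = {a:.2f}, scale k = {np.exp(log_k):.2f}")  # a < 1 implies compression
```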
Abstract:
Purpose: To quantify to what extent the new registration method, DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra), may reduce the smoothing kernel width required and to investigate the minimum group size necessary for voxel-based morphometry (VBM) studies. Materials and Methods: A simulated atrophy approach was employed to explore the role of smoothing kernel, group size, and their interactions on VBM detection accuracy. Group sizes of 10, 15, 25, and 50 were compared for kernels between 0 and 12 mm. Results: A smoothing kernel of 6 mm achieved the highest atrophy detection accuracy for groups with 50 participants, and 8–10 mm for the groups of 25, at P < 0.05 with familywise correction. The results further demonstrated that a group size of 25 was the lower limit when two different groups of participants were compared, whereas a group size of 15 was the minimum for longitudinal comparisons, but at P < 0.05 with false discovery rate correction. Conclusion: Our data confirmed that DARTEL-based VBM generally benefits from smaller kernels and that different kernels perform best for different group sizes, with a tendency towards smaller kernels for larger groups. Importantly, the kernel selection was also affected by the threshold applied. This highlights that the choice of kernel in relation to group size should be considered with care.
Abstract:
Expressions for the viscosity correction function, and hence bulk complex impedance, density, compressibility, and propagation constant, are obtained for a rigid frame porous medium whose pores are prismatic with fixed cross-sectional shape, but of variable pore size distribution. The low- and high-frequency behavior of the viscosity correction function is derived for the particular case of a log-normal pore size distribution, in terms of coefficients which can, in general, be computed numerically, and are given here explicitly for the particular cases of pores of equilateral triangular, circular, and slit-like cross-section. Simple approximate formulae, based on two-point Padé approximants for the viscosity correction function, are obtained, which avoid a requirement for numerical integration or evaluation of special functions, and their accuracy is illustrated and investigated for the three pore shapes already mentioned.
Abstract:
This paper presents an approximate closed-form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed-form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed-form formula is reasonably accurate when non-inferiority margins are based on odds ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed-form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed-form formula is also reasonably accurate in the cases where the score test closed-form formula works well. Outside these scenarios, the Wald test closed-form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
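For orientation, the sketch below shows a generic Wald-type sample size calculation on the log-odds-ratio scale; it follows the textbook variance approximation and is not necessarily the exact closed-form formulae compared in the paper, and the control-arm response rate, margin and error rates are assumed values.

```python
# Generic Wald-type sample size sketch for non-inferiority on the odds ratio
# (textbook log-odds-ratio variance); not necessarily the paper's formulae.
from math import log
from scipy.stats import norm

def n_per_arm(p_control, or_alt, or_margin, alpha=0.025, power=0.9):
    """Equal-allocation sample size per arm for H0: OR <= or_margin."""
    odds_trt = or_alt * p_control / (1 - p_control)
    p_trt = odds_trt / (1 + odds_trt)
    var_terms = 1 / (p_control * (1 - p_control)) + 1 / (p_trt * (1 - p_trt))
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return z ** 2 * var_terms / (log(or_alt) - log(or_margin)) ** 2

print(round(n_per_arm(p_control=0.6, or_alt=1.0, or_margin=0.5)))  # assumed inputs
```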
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognises no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best in comparison with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and is therefore what we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
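To make the two-part query concrete, here is a minimal Python sketch of the yes-no membership test; the hash construction and filter sizes are illustrative, and the selection of false positives for the no-filter (the paper's ILP/ADP contribution) is simply taken as given.

```python
# Minimal sketch of the yes-no query logic: two ordinary Bloom filters, with
# the no-filter holding chosen false positives of the yes-filter.  Hashing
# scheme and sizes are illustrative assumptions, not the paper's construction.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    """Yes-filter stores the set; no-filter stores selected false positives."""
    def __init__(self, members, known_false_positives):
        self.yes, self.no = BloomFilter(), BloomFilter()
        for x in members:
            self.yes.add(x)
        for x in known_false_positives:   # must contain no true positives
            self.no.add(x)

    def __contains__(self, item):
        # Accept only if the yes-filter matches and the no-filter does not.
        return item in self.yes and item not in self.no
```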
Abstract:
Scintimammography using Tc-99m-sestamibi is a noninvasive and painless diagnostic imaging method that is used to detect breast cancer when mammography is inconclusive. Because of the advantages of labeling with Tc-99m-sestamibi and its high efficiency in detecting carcinomas, it is the most widespread agent for this purpose. Its accumulation in the tumor has multifactorial causes and does not depend on the presence of architectural distortion or local or diffuse density variation in the breast. The objective of this study was to evaluate the accuracy of scintimammography for detecting breast cancer. One hundred and fifty-seven patients presenting 158 palpable and non-palpable breast nodules were evaluated. Three patients were male and 154 were female, aged between 14 and 81 years. All patients underwent scintimammography, and the nodule was subjected to cytological or histological study, i.e., the gold standard for diagnosing cancer. One hundred and eleven malignant and 47 benign nodules were detected, with a predominance of ductal carcinomas (n=94) and fibroadenoma/fibrocystic condition (n=11/n=11), respectively. The mean size was 3.11 cm (0.7-10 cm) among the malignant nodules and 2.07 cm (0.5-10 cm) among the benign nodules. The sensitivity, specificity, positive predictive value, negative predictive value and accuracy were 89, 89, 95, 78 and 89%, respectively. Analysis of the histological types showed that the technique was more effective on tumors that were more aggressive, such as ductal carcinoma. In this study, Tc-99m-sestamibi scintimammography was shown to be an important tool for diagnosing breast cancer when mammography was inconclusive.
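The reported figures follow from the usual 2x2 definitions; the sketch below spells them out with counts chosen only to be consistent with the percentages quoted above, not taken from the study's data table.

```python
# Standard diagnostic metrics from a 2x2 table; the counts are illustrative,
# chosen to be consistent with the percentages quoted above.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

print(diagnostic_metrics(tp=99, fp=5, tn=42, fn=12))  # 111 malignant, 47 benign nodules
```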
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Dispersion of the soil sample is a fundamental step in particle-size analysis, performed by means of chemical dispersants and mechanical agitation. The objective of this study was to evaluate the efficiency of a low-speed reciprocating shaker table for the mechanical dispersion of soil samples from different textural classes. Particle-size analyses were carried out on 61 samples with four replicates, using the pipette method to determine the clay fraction and sieving to determine the coarse sand, fine sand and total sand fractions, with silt determined by difference. For the performance evaluation, the results obtained with the reciprocating shaker table (MAR) were compared with data available for the same samples from reports of the IAC Proficiency Test for Soil Analysis Laboratories - Prolab/IAC. Accuracy analyses were performed based on the confidence intervals defined for each particle-size fraction of each sample tested. Graphical indicators were also used for data comparison, by means of scatter plots and linear fitting. Descriptive statistics indicated a predominance of low variability in more than 90% of the results obtained for the sandy, medium-textured and clayey samples and in 68% of those obtained for the very clayey samples, indicating good repeatability of the results obtained with the MAR. Medium variability was most frequently associated with the silt fraction, followed by the fine sand fraction. The results of the sensitivity analyses indicate 100% accuracy for the three particle-size fractions - total sand, silt and clay - for all analysed samples belonging to the very clayey, clayey and medium textural classes. For the nine sandy samples, the mean accuracy was 85.2%, and the largest deviations occurred for the silt fraction. In the linear fits, correlation coefficients equal to (silt) or higher than (total sand and clay) 0.93, together with differences smaller than 0.16 between the slopes of the fitted lines and unity, indicate high correlation between the reference results (Prolab/IAC) and those obtained in the tests with the MAR. It is concluded that the low-speed reciprocating shaker table performs satisfactorily for the mechanical dispersion of soil samples from different textural classes for particle-size analysis, allowing the equipment to be recommended as an alternative when slow agitation is used. The advantages of using this nationally produced equipment include its low cost, the possibility of analysing a large number of samples simultaneously, and the use of ordinary glass or plastic bottles, which are cheap and easy to replace.
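As a small illustration of the bookkeeping involved, the sketch below computes silt by difference and flags a result as accurate when it falls inside the reference confidence interval; all values are hypothetical, not Prolab/IAC data.

```python
# Silt by difference from measured sand and clay percentages, and a simple
# accuracy check against reference confidence intervals (hypothetical values).
def silt_by_difference(total_sand_pct, clay_pct):
    return 100.0 - total_sand_pct - clay_pct

def within_interval(value, interval):
    low, high = interval
    return low <= value <= high

sample = {"total sand": 62.0, "clay": 21.5}
sample["silt"] = silt_by_difference(sample["total sand"], sample["clay"])

reference_ci = {"total sand": (59.0, 65.0), "clay": (19.0, 24.0), "silt": (13.0, 19.0)}
accuracy = {frac: within_interval(val, reference_ci[frac]) for frac, val in sample.items()}
print(sample, accuracy)
```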
Abstract:
The use of adequate sample sizes in the experimental units improves research efficiency. An experiment was conducted in the 2004/2005 growing season in Santa Maria, Rio Grande do Sul, with the objective of estimating the sample size for ear length, ear and cob diameter, weight of the ear, of the grains per ear, of the cob and of 100 grains, number of grain rows per ear, number of grains per ear, and grain length of two single-cross (P30F33 and P Flex), two three-way-cross (AG8021 and DG501) and two double-cross (AG2060 and DKB701) maize hybrids. For a precision of 5% (D5), weight traits (weight of the husked ear, of the grains, of the cob and of 100 grains) can be sampled with 21 ears, size traits (ear and grain length, ear and cob diameter) with eight ears, and count data (number of grains and of rows) with 13 ears. The sample size varies according to the ear trait and the type of hybrid: single-, three-way- or double-cross. The genetic variability among the maize hybrids, increasing from single- to three-way- to double-cross, is not reflected in the same order in the sample size for ear traits.
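One common way such D5 sample sizes are obtained is by iterating n = (t·CV/D)²; the sketch below shows that generic calculation with an assumed coefficient of variation, and is not necessarily the estimator used in the study.

```python
# Generic textbook sample size so that the confidence half-width is D% of the
# mean: n = (t(alpha/2, n-1) * CV / D)^2, iterated because t depends on n.
# The CV value is an assumed placeholder, not data from the experiment.
import math
from scipy.stats import t

def ears_needed(cv_percent, d_percent, alpha=0.05):
    n = 30.0                                    # starting guess
    for _ in range(25):                         # iterate until n stabilises
        t_crit = t.ppf(1 - alpha / 2, df=max(n - 1, 1))
        n = (t_crit * cv_percent / d_percent) ** 2
    return math.ceil(n)

print(ears_needed(cv_percent=10.0, d_percent=5.0))  # ears for an assumed CV of 10%
```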
Abstract:
Satellite remote sensing of ocean colour is the only method currently available for synoptically measuring wide-area properties of ocean ecosystems, such as phytoplankton chlorophyll biomass. Recently, a variety of bio-optical and ecological methods have been established that use satellite data to identify and differentiate between either phytoplankton functional types (PFTs) or phytoplankton size classes (PSCs). In this study, several of these techniques were evaluated against in situ observations to determine their ability to detect dominant phytoplankton size classes (micro-, nano- and picoplankton). The techniques are applied to a 10-year ocean-colour data series from the SeaWiFS satellite sensor and compared with in situ data (6504 samples) from a variety of locations in the global ocean. Results show that spectral-response, ecological and abundance-based approaches can all perform with similar accuracy. Detection of microplankton and picoplankton was generally better than detection of nanoplankton. Abundance-based approaches were shown to provide better spatial retrieval of PSCs. Individual model performance varied according to PSC, input satellite data sources and in situ validation data types. Uncertainty in the comparison procedure and data sources was considered. Improved availability of in situ observations would aid ongoing research in this field.
Abstract:
A rapid and simple method was developed for quantitation of polar compounds in fats and oils using monostearin as internal standard. Starting from 50 mg of oil sample, polar compounds were obtained by solid-phase extraction (silica cartridges) and subsequently separated by high-performance size-exclusion chromatography into triglyceride polymers, triglyceride dimers, oxidized triglyceride monomers, diglycerides, internal standard and fatty acids. Quantitation of total polar compounds was achieved through the internal standard method and then amounts of each group of compounds could be calculated. A pool of polar compounds was used to check linearity, precision and accuracy of the method, as well as the solid-phase extraction recovery. The procedure was applied to samples with different content of polar compounds and good quantitative results were obtained, especially for samples of low alteration level.
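The sketch below illustrates the internal-standard arithmetic outlined above: the total polar-compound mass is obtained from the peak-area ratio to monostearin and then split across the size-exclusion groups; areas, response factor and masses are hypothetical values, not the paper's data.

```python
# Hedged sketch of single-point internal-standard quantitation followed by a
# split over HPSEC groups; all numerical values are hypothetical.
def total_polar_mg(area_polar, area_is, mass_is_mg, response_factor=1.0):
    """Mass of polar compounds from the internal-standard area ratio."""
    return response_factor * mass_is_mg * area_polar / area_is

group_area_fraction = {          # share of the polar-compound peak area per HPSEC group
    "TG polymers": 0.08, "TG dimers": 0.20, "oxidised TG monomers": 0.42,
    "diglycerides": 0.22, "fatty acids": 0.08,
}
polar_mg = total_polar_mg(area_polar=5.4e6, area_is=1.2e6, mass_is_mg=1.0)
amounts = {g: f * polar_mg for g, f in group_area_fraction.items()}
print(f"total polar compounds: {polar_mg:.2f} mg", amounts)
```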
Abstract:
This paper presents a mixed-integer linear programming model to solve the conductor size selection and reconductoring problem in radial distribution systems. In the proposed model, the steady-state operation of the radial distribution system is modeled through linear expressions. The use of a mixed-integer linear model guarantees convergence to optimality using existing optimization software. The proposed model and a heuristic are used to obtain the Pareto front of the conductor size selection and reconductoring problem considering two different objective functions. The results of one test system and two real distribution systems are presented in order to show the accuracy as well as the efficiency of the proposed solution technique.
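For readers unfamiliar with this class of model, the sketch below shows the flavour of a conductor-selection MILP using the PuLP library: each segment is assigned exactly one conductor size, subject to a thermal limit, minimising investment plus a crude linearised loss term. It is a deliberately simplified illustration with hypothetical data; the paper's linearised power-flow equations and Pareto-front construction are omitted.

```python
# Simplified conductor-selection MILP sketch with hypothetical data (PuLP);
# not the paper's full linearised power-flow model.
import pulp

segments = {"S1": {"length_km": 1.2, "current_A": 180},
            "S2": {"length_km": 0.8, "current_A": 90}}
conductors = {"4/0": {"cost_per_km": 9000, "ampacity_A": 230, "r_ohm_km": 0.30},
              "1/0": {"cost_per_km": 6000, "ampacity_A": 150, "r_ohm_km": 0.55}}
loss_cost = 0.12  # cost per (A^2 * ohm), a crude linear proxy for the loss term

prob = pulp.LpProblem("conductor_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("use", (segments, conductors), cat="Binary")

# Objective: investment cost plus loss-cost proxy over the chosen conductors
prob += pulp.lpSum(
    x[s][c] * (conductors[c]["cost_per_km"] * segments[s]["length_km"]
               + loss_cost * segments[s]["current_A"] ** 2
               * conductors[c]["r_ohm_km"] * segments[s]["length_km"])
    for s in segments for c in conductors)

for s in segments:
    prob += pulp.lpSum(x[s][c] for c in conductors) == 1            # one size per segment
    prob += pulp.lpSum(x[s][c] * conductors[c]["ampacity_A"]
                       for c in conductors) >= segments[s]["current_A"]  # thermal limit

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: next(c for c in conductors if x[s][c].value() == 1) for s in segments})
```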