977 results for approximate KNN query


Relevance:

10.00%

Publisher:

Abstract:

The pace of development of new healthcare technologies and related knowledge is very fast. Implementation of high-quality evidence-based knowledge is thus mandatory to ensure an effective healthcare system and patient safety. However, even though only a small fraction of the approximately 2500 scientific publications indexed daily in Medline is actually useful to clinical practice, the amount of new information is much too large to allow busy healthcare professionals to stay aware of possibly important evidence-based information.

Relevance:

10.00%

Publisher:

Abstract:

In many European countries, image quality for digital x-ray systems used in screening mammography is currently specified using a threshold-detail detectability method. This is a two-part study that proposes an alternative method based on calculated detectability for a model observer: the first part of the work presents a characterization of the systems. Eleven digital mammography systems were included in the study: four computed radiography (CR) systems and a group of seven digital radiography (DR) detectors, composed of three amorphous selenium-based detectors, three caesium iodide scintillator systems and a silicon wafer-based photon counting system. The technical parameters assessed included the system response curve, detector uniformity error, pre-sampling modulation transfer function (MTF), normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). The approximate quantum-noise-limited exposure range was examined using a separation of noise sources based upon standard deviation. Noise separation showed that electronic noise was the dominant noise at low detector air kerma for three systems; the remaining systems showed quantum-noise-limited behaviour between 12.5 and 380 µGy. Greater variation in detector MTF was found for the DR group compared to the CR systems; MTF at 5 mm^-1 varied from 0.08 to 0.23 for the CR detectors against a range of 0.16-0.64 for the DR units. The needle CR detector had a higher MTF, lower NNPS and higher DQE at 5 mm^-1 than the powder CR phosphors. DQE at 5 mm^-1 ranged from 0.02 to 0.20 for the CR systems, while DQE at 5 mm^-1 for the DR group ranged from 0.04 to 0.41, indicating higher DQE for the DR detectors and needle CR system than for the powder CR phosphor systems. The technical evaluation section of the study showed that the digital mammography systems were well set up and exhibited typical performance for the detector technology employed in the respective systems.
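As a rough illustration of how the quantities named in the abstract relate, the frequency-dependent DQE can be computed from the MTF, the normalized NPS and the incident photon fluence as DQE(f) = MTF(f)^2 / (q · NNPS(f)). The curves and the fluence value below are invented for illustration, not measurements from the study:

```python
import numpy as np

# Hypothetical detector curves; DQE(f) = MTF(f)^2 / (q * NNPS(f)),
# with q the incident photon fluence (photons/mm^2) at the given
# air kerma. All numbers below are assumptions, not measured data.

f = np.linspace(0.25, 5.0, 20)           # spatial frequency, mm^-1
mtf = np.exp(-0.2 * f)                   # assumed MTF curve
nnps = 2.5e-5 * (1 + 0.1 * f)            # assumed normalized NPS, mm^2
q = 5.0e4                                # assumed photon fluence, mm^-2

dqe = mtf**2 / (q * nnps)
print(f"DQE at 5 mm^-1: {dqe[-1]:.2f}")  # prints 0.07
```

With these made-up curves the DQE at 5 mm^-1 falls in the range reported for the CR systems; the point is only to show how the three measured quantities combine.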

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this study was to examine the relationship between skeletal muscle monocarboxylate transporters 1 and 4 (MCT1 and MCT4) expression, skeletal muscle oxidative capacity and endurance performance in trained cyclists. Ten well-trained cyclists (mean ± SD; age 24.4 ± 2.8 years, body mass 73.2 ± 8.3 kg, VO2max 58 ± 7 ml kg^-1 min^-1) completed three endurance performance tasks [incremental exercise test to exhaustion, 2 and 10 min time trial (TT)]. In addition, a muscle biopsy sample from the vastus lateralis muscle was analysed for MCT1 and MCT4 expression levels together with the activity of citrate synthase (CS) and 3-hydroxyacyl-CoA dehydrogenase (HAD). There was a tendency for VO2max and peak power output obtained in the incremental exercise test to be correlated with MCT1 (r = -0.71 to -0.74; P < 0.06), but not MCT4. The average power output (P_average) in the 2 min TT was significantly correlated with MCT4 (r = -0.74; P < 0.05) and HAD (r = -0.92; P < 0.01). The P_average in the 10 min TT was only correlated with CS activity (r = 0.68; P < 0.05). These results indicate that the relationship between MCT1 and MCT4 expression and cycle TT performance may be influenced by the length and intensity of the task.

Relevance:

10.00%

Publisher:

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest-scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest-scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest-scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest-scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest-scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
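The linear-time scan can be sketched roughly as follows. This is a deliberately simplified illustration (single strand, no reading-frame or Gene Model constraints): while sweeping exons by increasing acceptor (start) position, a running best-chain score over exons whose donor (end) lies strictly upstream lets each exon be extended in O(1). The exon tuples and function name are invented, not the paper's implementation:

```python
# Exon = (start, end, score). Compatibility here is simply
# "preceding exon ends before the current exon starts".

def best_chain(exons):
    by_start = sorted(exons, key=lambda e: e[0])   # acceptor order
    by_end = sorted(exons, key=lambda e: e[1])     # donor order
    best = {}           # exon -> best chain score ending at that exon
    best_prefix = 0.0   # best score among exons already "closed"
    j = 0
    for start, end, score in by_start:
        # Close every exon whose end lies strictly upstream of `start`.
        while j < len(by_end) and by_end[j][1] < start:
            best_prefix = max(best_prefix, best[by_end[j]])
            j += 1
        best[(start, end, score)] = best_prefix + score
    return max(best.values())

exons = [(1, 100, 3.0), (150, 300, 2.0), (120, 400, 4.5), (450, 600, 1.0)]
print(best_chain(exons))   # -> 8.5 (chain: exons 1, 3 and 4)
```

Both sorts cost O(n log n); the two-pointer sweep itself is linear, mirroring the stored-and-updated running maximum described in the abstract.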

Relevance:

10.00%

Publisher:

Abstract:

In the context of fading channels it is well established that, with a constrained transmit power, the bit rates achievable by signals that are not peaky vanish as the bandwidth grows without bound. Stepping back from the limit, we characterize the highest bit rate achievable by such non-peaky signals and the approximate bandwidth where that apex occurs. As it turns out, the gap between the highest rate achievable without peakedness and the infinite-bandwidth capacity (with unconstrained peakedness) is small for virtually all settings of interest to wireless communications. Thus, although strictly achieving capacity in wideband fading channels does require signal peakedness, bit rates not far from capacity can be achieved with conventional signaling formats that do not exhibit the serious practical drawbacks associated with peakedness. In addition, we show that the asymptotic decay of bit rate in the absence of peakedness usually takes hold at bandwidths so large that wideband fading models are called into question. Rather, ultrawideband models ought to be used.

Relevance:

10.00%

Publisher:

Abstract:

The linearly approximated version of the Almost Ideal Demand System (AIDS) model is estimated using data from the Lithuanian household budget survey covering the period from July 1992 to December 1994. Price and real expenditure elasticities for twelve food groups were estimated based on the estimated coefficients of the model. Very little or nothing is known about the demand parameters of Lithuania and other former socialist countries, so the results are of intrinsic interest. Estimated expenditure elasticities were positive and statistically significant for all food groups, while all own-price elasticities were negative and statistically significant, except for that of eggs, which was insignificant. Results suggest that Lithuanian household consumption did respond to price and real income changes during the transition to a market-oriented economy.
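A minimal sketch of estimating one share equation of the LA/AIDS model by least squares, using Stone's price index as the deflator. The data are synthetic, endogeneity of the index is ignored, and the elasticity formulas are common textbook approximations rather than the paper's exact specification:

```python
import numpy as np

# One LA/AIDS share equation:
#   w_i = alpha_i + sum_j gamma_ij ln p_j + beta_i ln(x / P*),
# with Stone's index ln P* = sum_k w_k ln p_k. Synthetic data.

rng = np.random.default_rng(0)
n, goods = 200, 3
ln_p = rng.normal(0.0, 0.2, (n, goods))    # log prices
w = rng.dirichlet(np.ones(goods), n)       # observed budget shares
ln_x = rng.normal(5.0, 0.3, n)             # log total expenditure
ln_P = (w * ln_p).sum(axis=1)              # Stone's price index

X = np.column_stack([np.ones(n), ln_p, ln_x - ln_P])
coef, *_ = np.linalg.lstsq(X, w[:, 0], rcond=None)
alpha, gammas, beta = coef[0], coef[1:1 + goods], coef[-1]

# Common approximate elasticities for good 1 at the mean share:
w_bar = w[:, 0].mean()
print("expenditure elasticity:", round(1 + beta / w_bar, 2))
print("own-price elasticity:", round(-1 + gammas[0] / w_bar - beta, 2))
```

With purely random data both elasticities come out near their neutral values (1 and -1); with real survey data the coefficients would carry the demand information the abstract reports.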

Relevance:

10.00%

Publisher:

Abstract:

The paper presents a competence-based instructional design system and a way to personalize navigation through the course content. The navigation aid tool builds on the competence graph and the student model, which includes elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is most prepared to study. We use fuzzy set theory for dealing with uncertainty. The marks of the assessment tests are transformed into linguistic terms and used for assigning values to linguistic variables. For each competence, the level of difficulty and the level of knowledge of its prerequisites are calculated based on the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence regarding its level of recommendation.
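A toy sketch of the approximate-reasoning step: marks become degrees of membership in linguistic terms via triangular membership functions, and IF-THEN rules map (difficulty, prerequisite knowledge) to a crisp recommendation category. The terms, rules and thresholds below are invented for illustration, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership of x in the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommend(difficulty, prereq_knowledge):
    # Degrees for linguistic terms on a 0..1 scale (assumed shapes).
    easy = tri(difficulty, -0.5, 0.0, 0.6)
    hard = tri(difficulty, 0.4, 1.0, 1.5)
    known = tri(prereq_knowledge, 0.4, 1.0, 1.5)
    unknown = tri(prereq_knowledge, -0.5, 0.0, 0.6)
    # Rule strengths (min as fuzzy AND, max as OR); the crisp
    # category is the consequent of the strongest rule.
    rules = {
        "recommended": min(easy, known),
        "neutral": max(min(easy, unknown), min(hard, known)),
        "not recommended": min(hard, unknown),
    }
    return max(rules, key=rules.get)

print(recommend(0.2, 0.9))   # -> 'recommended'
```

Picking the strongest rule yields the crisp category directly; a fuller system would defuzzify an aggregated output set instead.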

Relevance:

10.00%

Publisher:

Abstract:

Two different approaches currently prevail for predicting spatial patterns of species assemblages. The first approach (macroecological modelling, MEM) focuses directly on realised properties of species assemblages, whereas the second approach (stacked species distribution modelling, S-SDM) starts with constituent species to approximate assemblage properties. Here, we propose to unify the two approaches in a single 'spatially-explicit species assemblage modelling' (SESAM) framework. This framework uses relevant species source pool designations, macroecological factors, and ecological assembly rules to constrain predictions of the richness and composition of species assemblages obtained by stacking predictions of individual species distributions. We believe that such a framework could prove useful in many theoretical and applied disciplines of ecology and evolution, both for improving our basic understanding of species assembly across spatio-temporal scales and for anticipating expected consequences of local, regional or global environmental changes. In this paper, we propose such a framework and call for further developments and testing across a broad range of community types in a variety of environments.
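The stacking-plus-constraint logic can be sketched roughly as follows, assuming binary assemblages are obtained by keeping, at each site, the species with the highest stacked SDM probabilities up to a macroecological richness cap (one simple "assembly rule" among those the framework allows); all values are synthetic:

```python
import numpy as np

# Stack per-species occurrence probabilities from individual SDMs,
# then constrain predicted richness per site with a macroecological
# estimate, retaining the most probable species at each site.

rng = np.random.default_rng(1)
n_sites, n_species = 4, 6
p = rng.uniform(0.0, 1.0, (n_sites, n_species))   # stacked S-SDM probabilities
richness_cap = np.array([2, 3, 1, 4])              # MEM richness constraint

assemblage = np.zeros_like(p, dtype=bool)
for s in range(n_sites):
    top = np.argsort(p[s])[::-1][: richness_cap[s]]   # keep most probable
    assemblage[s, top] = True                         # species up to the cap
print(assemblage.sum(axis=1))                         # equals richness_cap
```

Raw stacking of thresholded SDMs tends to overpredict richness; the cap is what brings the assemblage prediction in line with the macroecological model.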

Relevance:

10.00%

Publisher:

Abstract:

Genome-wide scans of genetic differentiation between hybridizing taxa can identify genome regions with unusual rates of introgression. Regions of high differentiation might represent barriers to gene flow, while regions of low differentiation might indicate adaptive introgression-the spread of selectively beneficial alleles between reproductively isolated genetic backgrounds. Here we conduct a scan for unusual patterns of differentiation in a mosaic hybrid zone between two mussel species, Mytilus edulis and M. galloprovincialis. One outlying locus, mac-1, showed a characteristic footprint of local introgression, with abnormally high frequency of edulis-derived alleles in a patch of M. galloprovincialis enclosed within the mosaic zone, but low frequencies outside of the zone. Further analysis of DNA sequences showed that almost all of the edulis allelic diversity had introgressed into the M. galloprovincialis background in this patch. We then used a variety of approaches to test the hypothesis that there had been adaptive introgression at mac-1. Simulations and model fitting with maximum-likelihood and approximate Bayesian computation approaches suggested that adaptive introgression could generate a "soft sweep," which was qualitatively consistent with our data. Although the migration rate required was high, it was compatible with the functioning of an effective barrier to gene flow as revealed by demographic inferences. As such, adaptive introgression could explain both the reduced intraspecific differentiation around mac-1 and the high diversity of introgressed alleles, although a localized change in barrier strength may also be invoked. Together, our results emphasize the need to account for the complex history of secondary contacts in interpreting outlier loci.
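A toy version of the approximate Bayesian computation step mentioned (rejection sampling) might look like this; the forward model, prior and tolerance are invented and far simpler than the demographic models actually fitted to the mussel data:

```python
import numpy as np

# ABC rejection: draw a migration rate m from the prior, simulate a
# summary statistic (introgressed-allele frequency), keep draws whose
# simulation lands close to the observed value. Toy model only.

rng = np.random.default_rng(2)
observed_freq = 0.6                      # "observed" introgressed frequency

def simulate(m, n=200):
    # Invented forward model: frequency rises with migration rate,
    # with binomial sampling noise from n sampled alleles.
    return rng.binomial(n, min(1.0, 5.0 * m)) / n

prior = rng.uniform(0.0, 0.3, 20000)     # uniform prior on m
sims = np.array([simulate(m) for m in prior])
posterior = prior[np.abs(sims - observed_freq) < 0.02]
print("posterior mean m:", round(posterior.mean(), 3))
```

Accepted draws approximate the posterior of the migration rate; real analyses compare many summary statistics and far richer models, but the accept/reject skeleton is the same.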

Relevance:

10.00%

Publisher:

Abstract:

Biplots are graphical displays of data matrices based on the decomposition of a matrix as the product of two matrices. Elements of these two matrices are used as coordinates for the rows and columns of the data matrix, with an interpretation of the joint presentation that relies on the properties of the scalar product. Because the decomposition is not unique, there are several alternative ways to scale the row and column points of the biplot, which can cause confusion amongst users, especially when software packages are not united in their approach to this issue. We propose a new scaling of the solution, called the standard biplot, which applies equally well to a wide variety of analyses such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. The standard biplot also handles data matrices with widely different levels of inherent variance. Two concepts taken from correspondence analysis are important to this idea: the weighting of row and column points, and the contributions made by the points to the solution. In the standard biplot one set of points, usually the rows of the data matrix, optimally represent the positions of the cases or sample units, which are weighted and usually standardized in some way unless the matrix contains values that are comparable in their raw form. The other set of points, usually the columns, is represented in accordance with their contributions to the low-dimensional solution. As for any biplot, the projections of the row points onto vectors defined by the column points approximate the centred and (optionally) standardized data. 
The method is illustrated with several examples to demonstrate how the standard biplot copes in different situations to give a joint map which needs only one common scale on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot readable. The proposal also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important.
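For readers unfamiliar with the underlying decomposition, a bare-bones biplot can be built from the SVD as below. The even split of the singular values between row and column coordinates is just one common scaling choice and is not claimed to be the paper's standard-biplot scaling:

```python
import numpy as np

# Biplot from the SVD: Xc ~ F G', where F holds row (case) coordinates
# and G holds column (variable) coordinates; here each side takes the
# square root of the singular values. Data values are arbitrary.

X = np.array([[2.0, 0.5, 1.0],
              [1.0, 1.5, 0.0],
              [0.0, 1.0, 2.0],
              [1.5, 0.5, 1.5]])
Xc = X - X.mean(axis=0)                    # column-centre the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

F = U[:, :2] * np.sqrt(s[:2])              # row coordinates (2-D map)
G = Vt[:2].T * np.sqrt(s[:2])              # column coordinates
# F G' equals the rank-2 truncated SVD of Xc, so projections of row
# points onto column vectors approximate the centred data:
print(np.allclose(F @ G.T, (U[:, :2] * s[:2]) @ Vt[:2]))   # True
```

Other scalings (all singular values to the rows, or all to the columns) change the picture but not the approximated scalar products, which is exactly the source of user confusion the standard biplot aims to remove.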

Relevance:

10.00%

Publisher:

Abstract:

Geoelectrical techniques are widely used to monitor groundwater processes, while surprisingly few studies have considered audio (AMT) and radio (RMT) magnetotellurics for such purposes. In this numerical investigation, we analyze to what extent inversion results based on AMT and RMT monitoring data can be improved by (1) time-lapse difference inversion; (2) incorporation of statistical information about the expected model update (i.e., the model regularization is based on a geostatistical model); (3) using alternative model norms to quantify temporal changes (i.e., approximations of l1 and Cauchy norms using iteratively reweighted least-squares); and (4) constraining model updates to predefined ranges (i.e., using Lagrange multipliers to only allow either increases or decreases of electrical resistivity with respect to background conditions). To do so, we consider a simple illustrative model and a more realistic test case related to seawater intrusion. The results are encouraging and show significant improvements when using time-lapse difference inversion with non-l2 model norms. Artifacts that may arise when imposing compactness of regions with temporal changes can be suppressed through inequality constraints to yield models without oscillations outside the true region of temporal changes. Based on these results, we recommend approximate l1-norm solutions as they can resolve both sharp and smooth interfaces within the same model. (C) 2012 Elsevier B.V. All rights reserved.
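A small sketch of the iteratively reweighted least-squares (IRLS) idea behind the approximate l1 model norm: the l1 penalty is replaced at each iteration by a weighted l2 penalty with weights 1/(|m_k| + eps), which favours sparse model updates. The linear forward operator and sparse "update" below are synthetic stand-ins, not a magnetotelluric forward model:

```python
import numpy as np

# Minimize ||d - G m||_2^2 + lam * sum_k |m_k| via IRLS:
# at each step solve (G'G + lam * diag(1/(|m_k|+eps))) m = G'd.

rng = np.random.default_rng(3)
G = rng.normal(size=(40, 20))                     # toy forward operator
m_true = np.zeros(20); m_true[[4, 12]] = [1.0, -2.0]   # sparse change
d = G @ m_true + 0.01 * rng.normal(size=40)       # noisy data

lam, eps = 0.1, 1e-6
m = np.linalg.lstsq(G, d, rcond=None)[0]          # l2 starting model
for _ in range(30):
    W = np.diag(lam / (np.abs(m) + eps))          # reweighting step
    m = np.linalg.solve(G.T @ G + W, G.T @ d)
print(np.round(m, 2))   # near m_true; small entries driven toward 0
```

The same machinery approximates a Cauchy norm by swapping the weight formula; the eps floor keeps the weights finite as entries shrink to zero.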

Relevance:

10.00%

Publisher:

Abstract:

The general purpose of this work is to contribute to the knowledge and practical application of the concept of impairment of assets, seeking to bring the values in a company's financial statements closer to its economic value. The approach is directed at the Cape Verdean business environment, where we aim to draw attention to the changes that will occur at the accounting and tax level, in particular with respect to the impairment of assets. The work was prepared on the basis of a review of specialized literature and of the standards established in Cape Verde, together with opinions collected from professionals in the field.

Relevance:

10.00%

Publisher:

Abstract:

In this work I have sought to carry out an approximate study of the music of the Romantic period whose themes revolve around the world of the night. I have done so through the era's most representative composers of piano music, with a brief sketch of symphonic music and the performing arts. I have sought to show how each composer reflected his personality through a concept as abstract and Romantic as the world of the night.

Relevance:

10.00%

Publisher:

Abstract:

This brochure identifies each scenic byway route and the approximate mileage in terms of hard-surfaced and gravel roadways. Estimated driving times range from one and one-half to three and one-half hours, depending on your speed and the number of stops. These routes are offered for those of you who want to relax and stop often to enjoy the sights.

Relevance:

10.00%

Publisher:

Abstract:

The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the products sold use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance and their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
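For concreteness, a toy instance of the exact network-RM dynamic program (two resources, two products) is sketched below; it is tractable only because the example is tiny, which is exactly why the approximations the paper discusses are needed at realistic scale. All numbers are invented:

```python
from functools import lru_cache

# Bellman recursion for network RM:
#   V(t, x) = V(t-1, x)
#           + sum_j lam_j * max(0, f_j + V(t-1, x - A_j) - V(t-1, x)),
# i.e., accept request j iff its fare exceeds the opportunity cost.

A = {1: (1, 0), 2: (1, 1)}          # product -> resource consumption
fares = {1: 100.0, 2: 150.0}
arrival = {1: 0.3, 2: 0.2}          # per-period request probabilities
T = 10                              # periods to go

@lru_cache(maxsize=None)
def V(t, x1, x2):
    if t == 0:
        return 0.0
    v = V(t - 1, x1, x2)
    for j, (a1, a2) in A.items():
        if x1 >= a1 and x2 >= a2:
            gain = fares[j] + V(t - 1, x1 - a1, x2 - a2) - V(t - 1, x1, x2)
            v += arrival[j] * max(0.0, gain)
    return v

print(round(V(T, 3, 2), 2))         # expected revenue from capacity (3, 2)
```

With n resources the state is an n-vector of remaining capacities, so the table grows exponentially in n; affine or piecewise-linear value-function approximations and Lagrangian decompositions replace this table with per-resource structures.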