992 results for implicit techniques


Relevance:

30.00%

Publisher:

Abstract:

A number of mathematical models investigating certain aspects of the complicated process of wound healing have been reported in the literature in recent years. However, effective numerical methods, and the supporting error analysis, for the fractional equations that describe the process of wound healing are still limited. In this paper, we consider the numerical simulation of a fractional mathematical model of epidermal wound healing (FMM-EWH), which is based on coupled advection-diffusion equations for cell and chemical concentration in a polar coordinate system. The space fractional derivatives are defined in the left and right Riemann-Liouville sense, with fractional orders in the advection and diffusion terms belonging to the intervals (0, 1) and (1, 2], respectively. Several numerical techniques are used. First, the coupled advection-diffusion equations are decoupled into a single space fractional advection-diffusion equation in a polar coordinate system. Second, we propose a new implicit difference method for simulating this equation, using the equivalence of the Riemann-Liouville and Grünwald-Letnikov fractional derivative definitions. Third, its stability and convergence are discussed. Finally, some numerical results are given to demonstrate the theoretical analysis.
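As an illustration of the implicit difference approach described in this abstract, the following sketch (our own, assuming a uniform grid, a left Riemann-Liouville derivative, homogeneous boundaries, and a simple backward-Euler time discretization; the function names are hypothetical) builds the shifted Grünwald-Letnikov weights via the standard recurrence and performs one implicit step of a 1D space-fractional diffusion equation:

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov coefficients g_k = (-1)^k * C(alpha, k),
    # computed with the stable recurrence g_0 = 1, g_k = g_{k-1}*(1 - (alpha+1)/k).
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def implicit_fractional_diffusion_step(u, alpha, d, dt, dx):
    # One backward-Euler step of u_t = d * D^alpha u (left derivative),
    # using the shifted Grünwald formula
    #   D^alpha u(x_i) ~ dx^(-alpha) * sum_{k=0}^{i+1} g_k u_{i-k+1},
    # and solving the implicit linear system (I - mu*A) u_new = u.
    n = len(u)
    g = gl_weights(alpha, n + 1)
    mu = d * dt / dx**alpha
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(0, i + 2):
            if j < n:
                A[i, j] = g[i - j + 1]  # shifted Grünwald stencil entry
    return np.linalg.solve(np.eye(n) - mu * A, u)
```

A quick sanity check on the recurrence: for alpha = 2 the weights reduce to the classical second-difference stencil (1, -2, 1), so the scheme degenerates to an ordinary implicit heat-equation step.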

Relevance:

30.00%

Publisher:

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) uses a context tree weighting algorithm to efficiently sum over all variable-length Markov chains in order to evaluate the posterior probability of a haplotype configuration, and algorithms are presented that find haplotype configurations with high posterior probability.
BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities of two sequences are due to the sequences being related or merely due to chance. Similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
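To make the local alignment significance problem concrete, here is a minimal sketch (our own illustration, not the upper-bound framework of the thesis) of the sampling approach that the thesis improves upon: the best local alignment score is computed with the Smith-Waterman recurrence, and a naive null-model p-value is estimated by shuffling one sequence and counting how often the shuffled pair scores at least as well:

```python
import random

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # Best local alignment score via the Smith-Waterman recurrence,
    # keeping only one previous row of the dynamic-programming table.
    n = len(b)
    best = 0
    prev = [0] * (n + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

def empirical_pvalue(a, b, trials=200, seed=0):
    # Naive sampling-based p-value: the fraction of shuffled-sequence pairs
    # scoring >= the observed score (with a +1 pseudocount so p > 0).
    rng = random.Random(seed)
    obs = smith_waterman(a, b)
    hits = sum(
        smith_waterman(a, "".join(rng.sample(b, len(b)))) >= obs
        for _ in range(trials)
    )
    return (hits + 1) / (trials + 1)
```

This shuffling estimator is exactly the kind of "troublesome sampling" mentioned above: it is slow, and small p-values cannot be resolved without an enormous number of trials.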

Relevance:

30.00%

Publisher:

Abstract:

Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2015

Relevance:

30.00%

Publisher:

Abstract:

One group of 12 non-learning-disabled students and two groups of 12 learning-disabled students between the ages of 10 and 12 were measured on implicit and explicit knowledge acquisition. Students in each group implicitly acquired knowledge about 1 of 2 vocabulary rules. The vocabulary rules governed the pronunciation of 2 types of pseudowords. After completing the implicit acquisition phase, all groups were administered a test of implicit knowledge. The non-learning-disabled group and 1 learning-disabled group were then asked to verbalize the knowledge acquired during the initial phase; this was a test of explicit knowledge. All 3 groups were then given a posttest of implicit knowledge. This test was a measure of the effectiveness of the verbalization technique. Results indicate that implicit knowledge capabilities for both the learning-disabled and non-learning-disabled groups were intact. However, there were significant differences between groups in explicit knowledge capabilities. This led to the conclusion that implicit functions show few individual differences, whereas explicit functions are affected by ability differences. Furthermore, the use of the verbalization technique significantly increased posttest scores for learning-disabled students. This suggests that metacognitive techniques are a beneficial learning tool for learning-disabled students.

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

This section presents abstracts of three studies on how consumer choices can be influenced by the name letter effect of brands without decision makers being aware of this influence. The first paper examined whether making brand names similar to consumers' names increases the likelihood that consumers will choose the brand. One prediction is that people will prefer, and be more likely to choose, products or services whose names prominently feature the letters in their own first or last names. The results showed that subjects' preference rankings and evaluations of name-letter-matching brands were higher than those of non-matching brands. The second paper tested the possibility of using subliminal priming to activate a concept that a persuasive communicator could take advantage of. To examine this idea, two experiments were presented. In the first experiment, participants' level of thirst was manipulated, and they were then subliminally presented with either thirst-related words or control words. While the manipulations had no effect on participants' self-reported, conscious ratings of thirst, there was a significant interactive effect of the two factors on how much of the drink provided in the taste test was consumed. In a second, follow-up experiment, thirsty participants were subliminally presented with either thirst-related words or control words, after which they viewed advertisements for two new sports beverages. In conclusion, the research demonstrates that, under certain conditions, subliminal priming techniques can enhance persuasion. The third paper hypothesized that the lack of correlation between implicit and explicit evaluations is due to measurement error.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we show how the efficiency of MBS simulations can be improved in two different ways, by considering both an explicit and an implicit semi-recursive formulation. The explicit method is based on a double velocity transformation that involves the solution of a redundant but compatible system of equations. The high computational cost of this operation has been drastically reduced by taking into account the sparsity pattern of the system; to this end, the method introduces MA48, a high-performance mathematical library provided by the Harwell Subroutine Library. The second method proposed in this paper has the particularity that, depending on the case, between 70 and 85% of the computation time is devoted to the evaluation of the derivatives of the forces with respect to the relative position and velocity vectors. Since the evaluation of these derivatives can be decomposed into concurrent tasks, the main goal of this paper is a successful and straightforward parallel implementation that has led to a substantial improvement, with a speedup of 3.2 obtained by keeping all the cores of a quad-core processor busy and distributing the workload among them, thereby achieving a large time reduction through near-ideal CPU usage.
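The parallelization idea, that the columns of the force-derivative matrices are independent and can therefore be evaluated concurrently, can be sketched as follows (a toy force model and thread-based concurrency of our own, standing in for the paper's quad-core implementation; all names are hypothetical):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def force(q, qd):
    # Toy force model (hypothetical): nonlinear springs plus viscous damping.
    return -np.sin(q) - 0.1 * qd

def force_jacobians(q, qd, h=1e-6, workers=4):
    # Central finite differences of dF/dq and dF/dqd, one column per task.
    # Each column depends only on (q, qd), so the tasks are independent and
    # can be dispatched to a pool of workers and gathered in order.
    n = len(q)

    def col(k):
        e = np.zeros(n)
        e[k] = h
        dq = (force(q + e, qd) - force(q - e, qd)) / (2 * h)
        dv = (force(q, qd + e) - force(q, qd - e)) / (2 * h)
        return k, dq, dv

    Kq = np.empty((n, n))
    Kv = np.empty((n, n))
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for k, dq, dv in ex.map(col, range(n)):
            Kq[:, k] = dq
            Kv[:, k] = dv
    return Kq, Kv
```

In pure Python the threads mostly illustrate the task decomposition; the speedup reported in the paper comes from running such independent column evaluations on real cores (e.g. OpenMP threads in a compiled implementation).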

Relevance:

30.00%

Publisher:

Abstract:

Image segmentation is an important field in computer vision and one of its most active research areas, with applications in image understanding, object detection, face recognition, video surveillance and medical image processing. Image segmentation is a challenging problem in general, but especially in the biological and medical image fields, where the imaging techniques usually produce cluttered and noisy images and near-perfect accuracy is required in many cases. In this thesis we first review and compare some standard techniques widely used for medical image segmentation. These techniques use pixel-wise classifiers and introduce weak pairwise regularization, which is insufficient in many cases. We study their difficulties in capturing high-level structural information about the objects to segment. This deficiency leads to many erroneous detections, ragged boundaries, incorrect topological configurations and wrong shapes. To deal with these problems, we propose a new regularization method that learns shape and topological information from training data in a nonparametric way using high-order potentials. High-order potentials are becoming increasingly popular in computer vision. However, the exact representation of a general higher-order potential defined over many variables is computationally infeasible. We use a compact representation of the potentials based on a finite set of patterns learned from training data that, in turn, depends on the observations. Thanks to this representation, high-order potentials can be converted into pairwise potentials with some added auxiliary variables and minimized with tree-reweighted message passing (TRW) and belief propagation (BP) techniques.
Both synthetic and real experiments confirm that our model fixes the errors of weaker approaches. Even with high-level regularization, perfect accuracy is still unattainable, and human editing of the segmentation results is necessary. Manual editing is tedious and cumbersome, and tools that assist the user are greatly appreciated. These tools need to be precise, but also fast enough to be used interactively. Active contours are a good solution: they are well suited to precise boundary detection and, instead of finding a global solution, they provide a fine-tuning of previously existing results. However, they require an implicit representation to deal with topological changes of the contour, and this leads to PDEs that are computationally costly to solve and may present numerical stability issues. We present a morphological approach to contour evolution based on a new curvature morphological operator valid for surfaces of any dimension. We approximate the numerical solution of the contour evolution PDE by the successive application of a set of morphological operators defined on a binary level set. These operators are very fast, do not suffer from numerical stability issues, and do not degrade the level-set function, so there is no need to reinitialize it. Moreover, their implementation is much easier than that of their PDE counterparts, since they do not require the use of sophisticated numerical algorithms. From a theoretical point of view, we delve into the connections between differential and morphological operators, and introduce novel results in this area. We validate the approach by providing a morphological implementation of geodesic active contours, active contours without edges, and turbopixels. In the experiments conducted, the morphological implementations converge to solutions equivalent to those achieved by traditional numerical solutions, but with significant gains in simplicity, speed, and stability.
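The idea of replacing a curvature PDE by a fast morphological operator on a binary level set can be illustrated with a simpler, related operator (a plain 3x3 median filter, our own simplification of the operator compositions developed in the thesis). A median filter is the classical morphological counterpart of mean curvature motion, and it keeps the level-set function strictly binary, so it never needs reinitialization:

```python
import numpy as np

def curvature_morph_step(u):
    # One morphological curvature-smoothing step on a binary level-set
    # function u (values 0/1): replace each pixel by the median of its
    # 3x3 neighborhood.  For 9 binary values the median is simply
    # "1 iff at least 5 neighbors are 1", so the output stays binary.
    p = np.pad(u, 1, mode="edge")
    rows, cols = u.shape
    stack = np.stack([p[i:i + rows, j:j + cols]
                      for i in range(3) for j in range(3)])
    return (stack.sum(axis=0) >= 5).astype(u.dtype)
```

Applied repeatedly, this operator rounds off sharp corners and removes speckle noise from the contour, mimicking curvature flow without solving any PDE.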

Relevance:

30.00%

Publisher:

Abstract:

Comparison of explicit and implicit time-integration schemes in the simulation of blood flow and its interaction with the arterial wall. There are two major strategies in FSI coupling techniques: implicit and explicit. The general difference between these methodologies is how many times data is exchanged between the fluid and solid domains at each FSI time-step. In both coupling strategies, the pressure values coming from the fluid domain calculations at each time-step are exported to the solid domain, and the solid domain is then analyzed with these imported forces. In contrast to explicit coupling, in the implicit approach the fluid and solid domains' data are exchanged several times until convergence is achieved. Although this method may improve numerical stabilization, it increases the computational cost due to the extra data exchanges. In cardiovascular simulations, depending on the analysis objectives, one may choose an explicit or an implicit approach. In the current work, the advantage of an explicit coupling strategy is highlighted when simulation of pulsatile blood flow in elastic arteries is desired.
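The difference between the two coupling strategies can be sketched with a toy partitioned exchange (a hypothetical scalar fluid/solid pair of our own; real FSI solvers exchange interface fields, not scalars):

```python
def fsi_step(p0, k=10.0, c=2.0, implicit=True, tol=1e-10, max_iter=50):
    # Toy partitioned FSI exchange within one time-step:
    #   fluid solve:  p = p0 - c*x   (pressure depends on wall position x)
    #   solid solve:  x = p / k      (wall responds to imported pressure)
    # Explicit coupling performs this exchange once; implicit coupling
    # sub-iterates the exchange until the interface position converges.
    x = 0.0
    for _ in range(max_iter if implicit else 1):
        p = p0 - c * x      # fluid domain evaluated at current interface
        x_new = p / k       # solid domain loaded with the exported pressure
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The implicit variant converges to the coupled fixed point x = p0/(k + c) at the cost of several exchanges per step; the explicit variant performs a single exchange and accepts the splitting error, which is why it is cheaper per time-step.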

Relevance:

30.00%

Publisher:

Abstract:

RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows specifying rules for transforming relational databases into RDF. This RDF data can be materialized and stored in a triple store, so that SPARQL queries can be evaluated by the triple store. However, there are several cases where materialization is not adequate or possible, for example, if the underlying relational database is updated frequently. In those cases, the RDF data is better kept virtual, and hence SPARQL queries over it have to be translated into SQL queries against the underlying relational database system, with the translation process taking into account the specified R2RML mappings. The first part of this thesis focuses on query translation. We discuss the formalization of the translation from SPARQL to SQL queries that takes R2RML mappings into account. Furthermore, we propose several optimization techniques so that the translation procedure generates SQL queries that can be evaluated more efficiently over the underlying databases. We evaluate our approach using a synthetic benchmark and several real cases, and report positive results. Direct Mapping (DM) is another W3C recommendation for the generation of RDF data from relational databases.
While R2RML allows users to specify their own transformation rules, DM establishes fixed transformation rules. Although both recommendations were published at the same time, in September 2012, there has not been any formal study of the relationship between them. The second part of this thesis focuses on the study of the relationship between R2RML and DM. We divide this study into two directions: from R2RML to DM, and from DM to R2RML. From R2RML to DM, we study a fragment of R2RML having the same expressive power as DM. From DM to R2RML, we represent the DM transformation rules as R2RML mappings, and also add the implicit semantics encoded in databases, such as subclass, 1-N and M-N relationships. This thesis shows that, by formalizing and optimizing the R2RML-based SPARQL-to-SQL query translation, it is possible to use R2RML engines in real cases, as the resulting SQL is efficient enough to be evaluated by the underlying relational databases. In addition, this thesis facilitates the understanding of the bidirectional relationship between the two W3C recommendations, something that had not been studied before.
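A minimal sketch of Direct Mapping-style triple generation (our own simplification: a single `id` key column, untyped string literals, and a hypothetical base IRI; the real DM recommendation uses the table's declared primary key and typed literals) conveys the flavor of the fixed rules that DM imposes and that R2RML generalizes:

```python
def direct_mapping(table, rows, base="http://example.com/"):
    # Fixed, schema-driven rules in the spirit of the W3C Direct Mapping:
    #   - each row becomes a subject IRI built from the table name and key;
    #   - each column becomes a predicate IRI scoped to the table;
    #   - each cell value becomes a (here untyped) literal object.
    triples = []
    for row in rows:
        subj = f"<{base}{table}/id={row['id']}>"
        for col, val in row.items():
            triples.append((subj, f"<{base}{table}#{col}>", f'"{val}"'))
    return triples
```

Because the rules are fixed by the schema, DM needs no mapping document at all, whereas an R2RML mapping can rename predicates, reshape subjects, or omit columns, which is exactly the expressivity gap the thesis studies.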

Relevance:

30.00%

Publisher:

Abstract:

Using generalized collocation techniques based on fitting functions that are trigonometric (rather than algebraic, as in classical integrators), we develop a new class of multistage, one-step, variable-stepsize, variable-coefficient implicit Runge-Kutta methods to solve oscillatory ODE problems. The coefficients of the methods are functions of the frequency and the stepsize. We refer to this class as trigonometric implicit Runge-Kutta (TIRK) methods. They integrate an equation exactly if its solution is a trigonometric polynomial with a known frequency. We characterize the order and A-stability of the methods and establish results similar to those of classical algebraic collocation RK methods. (c) 2006 Elsevier B.V. All rights reserved.
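The defining property, coefficients that depend on the frequency and the stepsize, and exactness whenever the solution is trigonometric with a known frequency, can be illustrated with a much simpler frequency-fitted one-step method for the linear oscillator (our own illustration, not a TIRK method itself):

```python
import math

def trig_step(x, v, w, h):
    # One step of a frequency-fitted integrator for x'' = -w^2 x.
    # The coefficients cos(w*h) and sin(w*h)/w are functions of the
    # frequency w and the stepsize h, and the step reproduces solutions
    # of frequency w exactly -- the property that TIRK methods extend
    # to implicit Runge-Kutta collocation for general oscillatory ODEs.
    c, s = math.cos(w * h), math.sin(w * h)
    return c * x + (s / w) * v, -w * s * x + c * v
```

A classical algebraic RK method of any fixed order incurs a truncation error on this problem; the frequency-fitted step is exact (up to roundoff) for every stepsize, as long as the frequency is known.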

Relevance:

30.00%

Publisher:

Abstract:

A typical liquid-state NMR spectrum is composed of a number of discrete absorptions which can be readily interpreted to yield detailed information about the chemical environment of the nuclei found within the sample. The same cannot be said of the spectra of solid samples. For these, the absorptions are typically broad, featureless, and yield little information directly. This situation may be further exacerbated by the characteristically long T1 values of nuclei bound within a solid lattice, which consequently require long inter-sequence delays that necessitate lengthy experiments. This work attempts to address both of these inherent problems. Classically, the resolution of the broad-line spectra of solids into discrete resonances has been achieved by imparting to the sample coherent rotation about specific axes in relation to the polarising magnetic field, as implemented in the magic-angle spinning (MAS) [1], dynamic-angle spinning (DAS) [2] and double rotation (DOR) [3] NMR experiments. Recently, an alternative method, sonically induced narrowing of the NMR spectra of solids (SINNMR) [4], has been reported which yields the same well-resolved solid-state spectra as the classic solid-state NMR experiments, but which achieves the resolution of the broad-line spectra through the promotion of incoherent motion in a suspension of solid particles. The first part of this work examines SINNMR and, in particular, concentrates on ultrasonically induced cavitation, a phenomenon which is thought to be essential to the incoherent averaging mechanism. The second part of this work extends the principle of incoherent motion, implicit in SINNMR, to a new genre of particulate systems, air-fluidized beds, and examines the feasibility of such systems to provide well-resolved solid-state NMR spectra. Samples of trisodium phosphate dodecahydrate and of aluminium granules are examined using the new method, with partially resolved spectra being reported in the case of the latter.

Relevance:

30.00%

Publisher:

Abstract:

The aims of this study were to investigate the beliefs concerning the philosophy of science held by practising science teachers and to relate those beliefs to their pupils' understanding of the philosophy of science. Three philosophies of science, differing in the way they relate experimental work to other parts of the scientific enterprise, are described. Using questionnaire techniques, teachers of four extreme types were identified. These are: the H type, or hypothetico-deductivist teacher, who sees experiments as potential falsifiers of hypotheses or of logical deductions from them; the I type, or inductivist teacher, who regards experiments mainly as a way of increasing the range of observations available for recording before patterns are noted and inductive generalisation is carried out; the V type, or verificationist teacher, who expects experiments to provide proof and to demonstrate the truth or accuracy of scientific statements; and the O type, who has no discernible philosophical beliefs about the nature of science or its methodology. Following interviews with selected teachers to check their responses to the questionnaire and to determine their normal teaching methods, an experiment was organised in which parallel groups were given H-, I- and V-type teaching in the normal school situation during most of one academic year. Using pre-test and post-test scores on a specially developed test of pupil understanding of the philosophy of science, it was shown that pupils were positively affected by their teacher's implied philosophy of science. There was also some indication that V-type teaching improved marks obtained in school science examinations, but appeared to discourage the more able from continuing the study of science. Effects were also noted on the vocabulary used by pupils to describe scientists and their activities.