938 results for Quasi-analytical algorithms
Abstract:
The Mathematica system (version 4.0) is employed in the solution of nonlinear diffusion and convection-diffusion problems, formulated as transient one-dimensional partial differential equations with potential-dependent equation coefficients. The Generalized Integral Transform Technique (GITT) is first implemented for the hybrid numerical-analytical solution of such classes of problems, through the symbolic integral transformation and elimination of the space variable, followed by the utilization of the built-in Mathematica function NDSolve for handling the resulting transformed ODE system. This approach offers an error-controlled final numerical solution, through the simultaneous control of local errors in this reliable ODE solver and of the proposed eigenfunction expansion truncation order. For covalidation purposes, the same built-in function NDSolve is employed in the direct solution of these partial differential equations, as made possible by the algorithms implemented in Mathematica (versions 3.0 and up), based on application of the method of lines. Various numerical experiments are performed and the relative merits of each approach are critically pointed out.
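The direct NDSolve route mentioned above rests on the method of lines: discretize space, then hand the resulting ODE system to an error-controlled stiff solver. A minimal sketch of that idea in Python (not the paper's Mathematica code; the grid size, the coefficient k(u) = 1 + u, and the tolerances are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch for a 1D nonlinear diffusion problem
#   u_t = (k(u) u_x)_x,  k(u) = 1 + u,  u(0,t) = u(1,t) = 0
# (coefficient and boundary data assumed for illustration).

N = 101                      # number of spatial grid points (assumed)
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def rhs(t, u):
    """Finite-difference semi-discretization; only interior points evolve."""
    k = 1.0 + u                              # potential-dependent coefficient
    k_half = 0.5 * (k[1:] + k[:-1])          # k at cell interfaces
    flux = k_half * (u[1:] - u[:-1]) / dx    # k u_x at interfaces
    du = np.zeros_like(u)
    du[1:-1] = (flux[1:] - flux[:-1]) / dx   # conservative (k u_x)_x
    return du                                # boundaries held at 0

u0 = np.sin(np.pi * x)                       # smooth initial condition
# BDF mimics the role of a reliable stiff ODE solver with local error control
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-8, atol=1e-10)
u_final = sol.y[:, -1]
```

With k ≡ 1 the problem reduces to the linear heat equation with exact solution exp(-π²t)·sin(πx), which gives a convenient covalidation target in the spirit of the paper's cross-checks.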
Abstract:
In two-phase miniature and microchannel flows, when capillary forces drive the working fluid, the meniscus shape must be considered, since it is affected by condensation and/or evaporation and coupled with the transport phenomena in the thin film on the microchannel wall. This investigation presents an analytical model for microchannel condensers with a porous boundary, where capillary forces pump the fluid. Methanol was selected as the working fluid. Very low liquid Reynolds numbers were obtained (Re~6), but very high Nusselt numbers (Nu~150) could be found due to the channel size (1.5 mm) and the presence of the porous boundary. The meniscus calculation provided consistent results for the vapor interface temperature and pressure, as well as the meniscus curvature. The obtained results show that microchannel condensers with a porous boundary can be used for heat dissipation with reduced heat transfer area and very high heat dissipation capabilities.
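To give a feel for the regime described, the reported Re ~ 6 and the 1.5 mm channel imply a very slow liquid flow, while a Young-Laplace estimate bounds the capillary pumping pressure. The sketch below uses textbook methanol properties and a zero contact angle; these are assumptions for illustration, not values from the paper:

```python
# Back-of-the-envelope numbers (not the paper's model): methanol properties
# near room temperature are assumed; only the working fluid, Re ~ 6, and the
# 1.5 mm channel size come from the abstract.
rho   = 791.0     # density of methanol, kg/m^3 (assumed)
mu    = 5.4e-4    # dynamic viscosity, Pa*s (assumed)
sigma = 0.0226    # surface tension, N/m (assumed)
D     = 1.5e-3    # channel size, m (from the abstract)

# Liquid velocity implied by Re = rho * v * D / mu with Re = 6
Re = 6.0
v = Re * mu / (rho * D)

# Young-Laplace capillary pressure for a hemispherical meniscus spanning
# the channel (contact angle ~ 0 assumed): dP = 2 * sigma / r
r = D / 2.0
dP = 2.0 * sigma / r

print(f"liquid velocity ~ {v*1000:.2f} mm/s, capillary pressure ~ {dP:.0f} Pa")
# → liquid velocity ~ 2.73 mm/s, capillary pressure ~ 60 Pa
```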
Abstract:
Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N×N height fields from O(N) of the previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm to determine this intervisibility in a time complexity that matches the space complexity of the produced visibility information, which is in contrast to previous methods which scale in the height field size. As a result the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications. 
They work by sampling the screen-space geometry around each receiver point but have been previously limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be efficiently queried. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray traced screen-space reference are obtained at real-time render times.
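For a single 1D slice of a height field, the O(N)-per-receiver baseline that the thesis's sweeps improve on can be sketched as follows (illustrative Python; the amortized O(1) incremental traversal itself is the thesis's contribution and is not reproduced here):

```python
import math

def horizon_angles(h, spacing=1.0):
    """For each receiver i of a 1D height-field slice, return the maximal
    elevation angle toward points on its left. This is the straightforward
    O(N)-per-receiver scan; the thesis reduces the per-receiver cost to
    amortized O(1) by reusing information gathered along the sweep."""
    n = len(h)
    angles = []
    for i in range(n):
        best = -math.pi / 2          # empty horizon: looking straight down
        for j in range(i):
            # elevation angle from receiver i toward occluder candidate j
            ang = math.atan2(h[j] - h[i], (i - j) * spacing)
            best = max(best, ang)
        angles.append(best)
    return angles

# Example: points behind a tall spike see a raised horizon; the horizon
# angle falls off with distance from the spike.
angles = horizon_angles([0.0, 3.0, 0.0, 0.0, 0.0])
```

Light directions below the receiver's horizon angle are occluded, which is exactly the information the horizon map stores per receiver.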
Abstract:
Paper-based analytical technologies enable quantitative and rapid analysis of analytes from various application areas including healthcare, environmental monitoring and food safety. Because paper is a planar, flexible and lightweight substrate, the devices can be transported and disposed of easily. Diagnostic devices are especially valuable in resource-limited environments where diagnosis as well as monitoring of therapy can be carried out even without electricity by using e.g. colorimetric assays. On the other hand, platforms including printed electrodes can be coupled with hand-held readers. They enable electrochemical detection with improved reliability, sensitivity and selectivity compared with colorimetric assays. In this thesis, different roll-to-roll compatible printing technologies were utilized for the fabrication of low-cost paper-based sensor platforms. The platforms intended for colorimetric assays and microfluidics were fabricated by patterning the paper substrates with a hydrophobic vinyl-substituted polydimethylsiloxane (PDMS)-based ink. Depending on the barrier properties of the substrate, the ink either penetrates into the paper structure, creating e.g. microfluidic channel structures, or remains on the surface, creating a 2D analog of a microplate. The printed PDMS can be cured by a roll-to-roll compatible infrared (IR) sintering method. The performance of these platforms was studied by printing glucose oxidase-based ink on the PDMS-free reaction areas. The subsequent application of the glucose analyte changed the colour of the white reaction area to purple, with the colour density and intensity depending on the concentration of the glucose solution. Printed electrochemical cell platforms were fabricated on paper substrates with appropriate barrier properties by inkjet-printing metal nanoparticle based inks and by IR sintering them into conducting electrodes. 
Printed PDMS arrays were used for directing the liquid analyte onto predetermined spots on the electrodes. Various electrochemical measurements were carried out both with the bare electrodes and with electrodes functionalized with e.g. self-assembled monolayers. An electrochemical glucose sensor was selected as a proof-of-concept device to demonstrate the potential of the printed electronic platforms.
Abstract:
Cardiac troponin (cTn) I and T are the recommended biomarkers for the diagnosis and risk stratification of patients with suspected acute coronary syndrome (ACS), a major cause of cardiovascular death and disability worldwide. It has recently been demonstrated that cTn-specific autoantibodies (cTnAAb) can negatively interfere with cTnI detection by immunoassays to the extent that cTnAAb-positive patients may be falsely designated as cTnI-negative. The aim of this thesis was to develop and optimize immunoassays for the detection of both cTnI and cTnAAb, which would eventually enable exploring the clinical impact of these autoantibodies on cTnI testing and subsequent patient management. The extent of cTnAAb interference in different cTnI assay configurations and the molecular characteristics of cTnAAbs were investigated in publications I and II, respectively. The findings showed that the cTnI midfragment-targeting immunoassays used predominantly in clinical practice are affected by cTnAAb interference, which can be circumvented by using a novel 3+1-type assay design with three capture antibodies against the N-terminus, midfragment and C-terminus and one tracer antibody against the C-terminus. The use of this assay configuration was further supported by the epitope specificity study, which showed that although the midfragment is most commonly targeted by cTnAAbs, the interference basically encompasses the whole molecule, and there may be remarkable individual variation at the affected sites. In publications III and IV, all the data obtained in previous studies were utilized to develop an improved version of an existing cTnAAb assay and a sensitive cTnI assay free of this specific analytical interference. 
The results of the thesis showed that approximately one in 10 patients with suspected ACS have detectable amounts of cTnAAbs in their circulation and that cTnAAbs can inhibit cTnI determination when targeted against the binding sites of assay antibodies used in its immunological detection. In the light of these observations, the risk of clinical misclassification caused by the presence of cTnAAbs remains a valid and reasonable concern. Because the titers, affinities and epitope specificities of cTnAAbs and the concentration of endogenous cTnI determine the final effect of circulating cTnAAbs, appropriately sized studies on their clinical significance are warranted. The new cTnI and cTnAAb assays could serve as analytical tools for establishing the impact of cTnAAbs on cTnI testing and also for unraveling the etiology of cTn-related autoimmune responses.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model, where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications, where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. 
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. Asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
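As a concrete, much simplified illustration of climbing a Gaussian kernel density toward such a feature, a mean-shift style fixed-point iteration finds a density maximum, i.e. a 0-dimensional ridge. This is a simpler relative of the trust-region Newton projection developed in the thesis, not the thesis's method; the data set and bandwidth below are invented for illustration:

```python
import numpy as np

# Climb a Gaussian kernel density estimate to a maximum via mean shift.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.3, size=(500, 2))  # one cluster near (2, 2)
h = 0.5                                               # kernel bandwidth (assumed)

def mean_shift_step(x, data, h):
    """One fixed-point step: the Gaussian-kernel weighted mean of the data."""
    w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2.0 * h * h))
    return (w[:, None] * data).sum(axis=0) / w.sum()

x = np.array([0.0, 0.0])                 # start away from the mode
for _ in range(100):
    x_new = mean_shift_step(x, data, h)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
# x now approximates the density maximum near the cluster center (2, 2)
```

Ridge projection generalizes this idea: instead of following the full gradient to a maximum, the iterate is confined to the subspace of leading density Hessian eigenvectors.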
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of ELISA anti-HCV samples that show s/co ratios with ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, and 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated costs of algorithms A and B were US$21,299.39 and US$32,397.40, respectively, which were 43.5% and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
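The quoted savings follow directly from the cost totals, as a quick check shows:

```python
# Cost totals as quoted in the abstract (US$)
cost_A = 21299.39   # algorithm A (s/co ratio based)
cost_B = 32397.40   # algorithm B (reflex PCR based)
cost_C = 37673.79   # conventional algorithm C

saving_A = (cost_C - cost_A) / cost_C * 100   # percent saved vs. C
saving_B = (cost_C - cost_B) / cost_C * 100

print(f"A saves {saving_A:.1f}% vs. C, B saves {saving_B:.1f}%")
# → A saves 43.5% vs. C, B saves 14.0%
```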
Abstract:
Recent advances have raised hope that transplantation of adherent somatic cells could provide dramatic new therapies for various diseases. However, current methods for transplanting adherent somatic cells are not efficient enough for therapeutic applications. Here, we report the development of a novel method to generate quasi-natural cell blocks for high-efficiency transplantation of adherent somatic cells. The blocks were created by providing a unique environment in which cultured cells generated their own extracellular matrix. Initially, stromal cells isolated from mice were expanded in vitro in liquid cell culture medium followed by transferring the cells into a hydrogel shell. After incubation for 1 day with mechanical agitation, the encapsulated cell mass was perforated with a thin needle and then incubated for an additional 6 days to form a quasi-natural cell block. Allograft transplantation of the cell block into C57BL/6 mice resulted in perfect adaptation of the allograft and complete integration into the tissue of the recipient. This method could be widely applied for repairing damaged cells or tissues, stem cell transplantation, ex vivo gene therapy, or plastic surgery.
Abstract:
Arkit: A-B8.
Abstract:
Our objective is to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) the V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%). The latter finding had a maximal area under the ROC curve of 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%). The former finding had a maximal area under the ROC curve of 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs; the V2 transition ratio and the V2 R wave duration and R/S wave amplitude indices are the most sensitive and the most specific algorithms, respectively. Amongst all of the patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the maximum area under the ROC curve.
Abstract:
Many industrial applications need object recognition and tracking capabilities. The algorithms developed for these purposes are computationally expensive. Yet real-time performance, high accuracy and low power consumption are essential requirements for such systems. When all these requirements are combined, hardware acceleration of these algorithms becomes a feasible solution. The purpose of this study is to analyze the current state of these hardware acceleration solutions: which algorithms have been implemented in hardware, and what modifications have been made to adapt these algorithms to hardware.
Abstract:
Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
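As a flavor of what the simplest members of this algorithm family do, here is a sketch of vertex clustering, a classic simplification technique (not necessarily one of the algorithms surveyed in the paper); the mesh and grid size are invented for illustration:

```python
import numpy as np

def vertex_cluster_simplify(vertices, triangles, cell=1.0):
    """Simplify a triangle mesh by snapping vertices to a uniform grid
    (vertex clustering). Vertices falling in the same grid cell are merged
    into their average, and triangles whose corners collapse together are
    dropped. Deliberately simple; quality-driven methods such as edge
    collapse with error quadrics are usually preferred for CAD models."""
    keys = np.floor(vertices / cell).astype(int)
    cell_to_new = {}
    remap = np.empty(len(vertices), dtype=int)
    sums, counts = [], []
    for i, key in enumerate(map(tuple, keys)):
        if key not in cell_to_new:
            cell_to_new[key] = len(sums)
            sums.append(np.zeros(3))
            counts.append(0)
        j = cell_to_new[key]
        remap[i] = j
        sums[j] += vertices[i]
        counts[j] += 1
    new_vertices = np.array(sums) / np.array(counts)[:, None]
    new_triangles = [tuple(remap[c] for c in tri) for tri in triangles]
    # drop degenerate triangles (corners merged into the same cluster)
    new_triangles = [t for t in new_triangles if len(set(t)) == 3]
    return new_vertices, new_triangles

# Example: two nearly coincident corners merge, collapsing one triangle
verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                  [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
tris = [(0, 1, 2), (0, 2, 3)]
new_v, new_t = vertex_cluster_simplify(verts, tris, cell=1.0)
```

The grid cell size directly trades geometric fidelity against the reduction rate, which is one of the axes along which surveyed algorithms differ.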