983 results for "Calibration plot"
Abstract:
In cameras with radial distortion, straight lines in space are in general mapped to curves in the image. Although the epipolar geometry is also distorted, a set of special epipolar lines remains straight, namely those that pass through the distortion center. By finding these straight epipolar lines in camera pairs, we can obtain constraints on the distortion center(s) without any calibration object or plumb-line assumptions about the scene. Although this holds for all radial distortion models, we demonstrate the idea using the division distortion model and the radial fundamental matrix, which allow a very simple closed-form solution for the distortion center from two views (same distortion) or three views (different distortions). The non-iterative nature of our approach makes it immune to local minima and allows the distortion center to be found even for cropped images or images for which no good prior exists. In addition, we give comprehensive relations between different undistortion models and discuss their advantages and drawbacks.
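As an aside, the fixed point at the distortion center is easy to see in the one-parameter division model. The Python sketch below assumes the common formulation p_u = c + (p_d - c) / (1 + λ‖p_d - c‖²); the paper's exact conventions may differ. It shows that the center is unmoved and that points displaced radially from it stay on the same ray through the center, which is why epipolar lines through the center remain straight:

```python
def undistort_division(pd, center, lam):
    """Map a distorted pixel pd to its undistorted position under the
    one-parameter division model.

    pd, center: (x, y) tuples; lam: distortion coefficient (lambda).
    The distortion center is a fixed point of this map, and points are
    moved purely radially with respect to it.
    """
    dx, dy = pd[0] - center[0], pd[1] - center[1]
    r2 = dx * dx + dy * dy
    s = 1.0 / (1.0 + lam * r2)  # radial scaling factor
    return (center[0] + dx * s, center[1] + dy * s)

# The center maps to itself; a point to its right stays on the same ray.
print(undistort_division((320.0, 240.0), (320.0, 240.0), -1e-6))
print(undistort_division((330.0, 240.0), (320.0, 240.0), -1e-6))
```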
Abstract:
Background: A common task in analyzing microarray data is to determine which genes are differentially expressed across two (or more) kinds of tissue samples, or samples subjected to different experimental conditions. Several statistical methods have been proposed to accomplish this goal, generally based on measures of distance between classes. It is well known that biological samples are heterogeneous because of factors such as molecular subtypes or genetic background that are often unknown to the experimenter. For instance, in experiments involving molecular classification of tumors it is important to identify significant subtypes of cancer. Bimodal or multimodal distributions often reflect the presence of mixtures of subsamples. Consequently, there can be genes differentially expressed in sample subgroups that are missed if the usual statistical approaches are used. In this paper we propose a new graphical tool that identifies not only genes with up- and down-regulation, but also genes with differential expression in different subclasses, which are usually missed by current statistical methods. The tool is based on two measures of distance between samples, namely the overlapping coefficient (OVL) between two densities and the area under the receiver operating characteristic (ROC) curve. The proposed methodology was implemented in the open-source R software. Results: The method was applied to a publicly available dataset, as well as to a simulated dataset. We compared our results with those obtained using some of the standard methods for detecting differentially expressed genes, namely the Welch t-statistic, fold change (FC), rank products (RP), average difference (AD), weighted average difference (WAD), moderated t-statistic (modT), intensity-based moderated t-statistic (ibmT), significance analysis of microarrays (samT) and area under the ROC curve (AUC).
On both datasets, the differentially expressed genes with bimodal or multimodal distributions were missed by all of the standard selection procedures. We also compared our results with (i) the area between the ROC curve and rising diagonal (ABCR) and (ii) the test for not proper ROC curves (TNRC). We found our methodology more comprehensive, because it detects both bimodal and multimodal distributions and allows different variances in the two samples. Another advantage of our method is that the behavior of different kinds of differentially expressed genes can be analyzed graphically. Conclusion: Our results indicate that the arrow plot is a new, flexible and useful tool for the analysis of gene expression profiles from microarrays.
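For illustration, the two distance measures underlying the arrow plot can be sketched as follows. This is a minimal Python sketch, assuming a Mann-Whitney estimate of the AUC from the two sample groups and a grid approximation of the OVL for normal densities; it is not the authors' R implementation:

```python
import numpy as np

def auc_mann_whitney(x, y):
    """AUC estimated as P(X < Y) + 0.5 * P(X == Y) over all pairs
    (the Mann-Whitney statistic), for expression values of one gene
    in two sample groups x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    gt = (x[:, None] < y[None, :]).mean()
    eq = (x[:, None] == y[None, :]).mean()
    return gt + 0.5 * eq

def ovl_normal(m1, s1, m2, s2, npts=20001):
    """Overlapping coefficient of two normal densities:
    the integral of min(f, g), approximated on a uniform grid."""
    lo = min(m1 - 5 * s1, m2 - 5 * s2)
    hi = max(m1 + 5 * s1, m2 + 5 * s2)
    grid = np.linspace(lo, hi, npts)
    f = np.exp(-0.5 * ((grid - m1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    g = np.exp(-0.5 * ((grid - m2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
    dx = grid[1] - grid[0]
    return float(np.minimum(f, g).sum() * dx)
```

Identical densities give OVL near 1, while well-separated groups drive OVL toward 0 and the AUC toward 0 or 1; the arrow plot positions each gene by such coordinates.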
Abstract:
Microarrays allow thousands of genes to be monitored simultaneously, quantifying the abundance of their transcripts under the same experimental condition at the same time. Among the various available array technologies, two-channel cDNA microarray experiments, the focus of this work, appear in numerous technical protocols associated with genomic studies. Microarray experiments involve many steps, and each one can affect the quality of the raw data. Background correction and normalization are preprocessing techniques used to clean and correct the raw data when undesirable fluctuations arise from technical factors. Several recent studies showed that no preprocessing strategy outperforms the others in all circumstances, so it seems difficult to provide general recommendations. In this work, we propose using exploratory techniques to visualize the effects of preprocessing methods on the statistical analysis of two-channel cancer microarray data sets in which the cancer types (classes) are known. The arrow plot was used to select differentially expressed genes, and the graph of profiles resulting from correspondence analysis was used to visualize the results. Six background-correction methods and six normalization methods were combined, yielding 36 preprocessing strategies, which were analyzed on a published cDNA microarray database (Liver), available at http://genome-www5.stanford.edu/, whose microarrays were already classified by cancer type. All statistical analyses were performed using the R statistical software.
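The factorial design behind the 36 strategies is simply every background-correction method paired with every normalization method. A minimal Python sketch with placeholder method names (the abstract does not list the actual six methods of each kind, so these labels are illustrative):

```python
from itertools import product

# Placeholder method names; the study's actual methods are not given here.
background_methods = ["bg1", "bg2", "bg3", "bg4", "bg5", "bg6"]
normalization_methods = ["norm1", "norm2", "norm3", "norm4", "norm5", "norm6"]

# Cartesian product: each background method paired with each normalization.
pipelines = list(product(background_methods, normalization_methods))
print(len(pipelines))  # 6 x 6 = 36 preprocessing strategies
```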
Abstract:
An electrochemical method is proposed for the determination of maltol in food. Microwave-assisted extraction procedures were developed to assist the sample pre-treatment steps. Cyclic voltammetry experiments showed an irreversible, adsorption-controlled reduction of maltol. A cathodic peak was observed at -1.0 V on a hanging mercury drop electrode versus an AgCl/Ag electrode (in saturated KCl), and the peak potential was pH independent. Square-wave voltammetric procedures were selected to plot calibration curves. These procedures were carried out under the optimum conditions: pH 6.5, frequency 50 Hz, deposition potential 0.6 V and deposition time 10 s. Linear behaviour was observed between 5.0 × 10⁻⁸ and 3.5 × 10⁻⁷ M. The proposed method was applied to the analysis of cakes, and the results were compared with those obtained by an independent method. The voltammetric procedure proved suitable for the analysis of cakes and offered environmental and economic advantages, including reduced toxicity and volume of effluents and decreased consumption of reagents.
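A calibration curve of this kind is typically a straight-line fit of peak current against concentration within the linear range, inverted to quantify the analyte in a sample. The following Python sketch uses synthetic, illustrative data spanning the reported linear range (the slope, intercept and currents are assumptions, not the paper's measurements):

```python
import numpy as np

# Hypothetical calibration points within the reported linear range
# (5.0e-8 to 3.5e-7 M); the currents are synthetic, not measured values.
conc = np.array([5.0e-8, 1.0e-7, 1.5e-7, 2.0e-7, 2.5e-7, 3.0e-7, 3.5e-7])
peak_current = 2.0e6 * conc + 1.0e-3  # assumed linear response (A)

# Ordinary least squares: slope (sensitivity) and intercept of the line.
slope, intercept = np.polyfit(conc, peak_current, 1)

def concentration_from_current(i):
    """Invert the calibration line to quantify maltol in an extract."""
    return (i - intercept) / slope
```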
Abstract:
In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters with Gaussian process regression. The calibration maximizes the likelihood of zero-coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization method used is conjugate gradients. The only prices needed for calibration are zero-coupon bond prices, and the parameters are obtained directly in the arbitrage-free risk-neutral measure.
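For context, the Vasicek zero-coupon bond price has a well-known closed form, whose log would enter such a likelihood. A minimal Python sketch, assuming the parameterization dr = a(b - r)dt + σ dW under the risk-neutral measure (the paper's notation may differ):

```python
import math

def vasicek_zcb_price(r0, tau, a, b, sigma):
    """Zero-coupon bond price P(0, tau) under the Vasicek model
    dr = a*(b - r) dt + sigma dW with risk-neutral parameters a, b, sigma
    and short rate r0. Uses the affine form P = exp(lnA - B * r0)."""
    B = (1.0 - math.exp(-a * tau)) / a
    lnA = (b - sigma**2 / (2.0 * a**2)) * (B - tau) - sigma**2 * B**2 / (4.0 * a)
    return math.exp(lnA - B * r0)
```

Because the price is an explicit function of (a, b, σ), the log-price likelihood and its parameter derivatives can be evaluated analytically, which is what makes gradient-based maximization such as conjugate gradients convenient.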
Abstract:
Electronics Letters, Vol. 38, No. 19
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 232–235, Seattle, USA
Abstract:
Proceedings of IEEE ISCAS 2003, Vol. I, pp. 877–880
Abstract:
For research purposes, a large quantity of anti-measles IgG working reference serum was needed. A pool of sera from five teenagers was prepared and named Alexandre Herculano (AH). In order to calibrate the AH serum, 18 EIA assays were performed, testing AH in parallel with the 2nd International Standard 1990, Anti-Measles Antibody, 66/202 (IS), over a range of dilutions (from 1/50 to 1/25600). The potency of AH relative to IS was estimated with a method that compares the parallel lines obtained from the graphical representation of the laboratory results; a computer programme written by one of the authors was used to analyze the data and produce the potency estimates. A second method of analysis was also used, comparing logistic curves relating serum concentration to optical density by EIA, for which an existing computer programme (WRANL) was used. By either method, the potency of AH relative to IS was estimated to be 2.4. As IS has 5000 milli international units (mIU) of anti-measles IgG per millilitre (ml), we concluded that AH has 12000 mIU/ml.
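The parallel-line estimate can be illustrated with a short sketch: when the regressions of response on log dilution for the two sera are parallel, the horizontal shift between the lines gives the log potency, which is then converted to units. The Python sketch below is illustrative; the slope and intercepts are hypothetical, and only the 2.4 × 5000 conversion comes from the text:

```python
import math

def relative_potency(slope, intercept_std, intercept_test):
    """Parallel-line assay sketch: for parallel regressions of response
    on log10(dilution), y = a + b*x, the horizontal shift between the
    test and standard lines gives log10 of the relative potency."""
    return 10 ** ((intercept_test - intercept_std) / slope)

# Converting relative potency to units, as in the abstract:
IS_UNITS = 5000                   # mIU/ml assigned to the IS
potency = 2.4                     # estimated potency of AH relative to IS
print(round(potency * IS_UNITS))  # 12000 mIU/ml for AH
```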
Abstract:
This dissertation describes the development and evaluation of a Numerical Site Calibration (NSC) procedure for a wind farm in the south of Portugal, using Computational Fluid Dynamics (CFD). NSC is based on Site Calibration (SC), a measurement method standardized by the International Electrotechnical Commission in IEC 61400. The method aims to quantify and reduce the effects of the terrain and of possible obstacles on the measurement of the power performance of wind turbines. In SC, measurements are therefore taken at two points: at the reference mast and at the turbine location (temporary mast). In wind farms that are already built, however, this method is not applicable, since a measurement mast would have to be installed at the turbine location; the appropriate procedure in these circumstances is NSC. The method is developed with a CFD code named WINDIE™, created by a research team at the Instituto Superior de Engenharia do Porto and used extensively by the company Megajoule Inovação, Lda in wind energy applications worldwide. This code is a tool for simulating three-dimensional flows over complex terrain. The flow simulations are run in the transient regime using the Reynolds-averaged Navier-Stokes equations with the Boussinesq approximation and the TKE 1.5 turbulence model. The boundary conditions come from the results of a simulation performed with the Weather Research and Forecasting (WRF) model. The simulations are divided into two groups: one set uses the Upwind convective scheme and the other a 4th-order convective scheme. The method is assessed by comparing the data obtained from the WINDIE™ simulations with the data measured during the SC process.
In summary, WINDIE™ and the chosen configurations are found to produce good calibration results, with global errors on the order of two percentage points relative to the SC performed for the same site.
Abstract:
Oceans - San Diego, 2013
Abstract:
This work presents an automatic calibration method for a vision-based external underwater ground-truth positioning system. Such systems are a relevant tool for benchmarking and assessing the quality of research in underwater robotics applications. In suitable environments, such as test tanks or clear-water conditions, a stereo vision system can provide accurate positioning with low cost and flexible operation. We present a two-step extrinsic camera calibration procedure designed to reduce the setup time and provide accurate results. The proposed method uses a planar homography decomposition to determine the relative camera poses, and the vanishing points of lines detected in the image to obtain the global pose of the stereo rig in the reference frame. The method was applied to our external vision-based ground-truth system at the INESC TEC/Robotics test tank. Results are presented in comparison with a precise calibration performed using points obtained from an accurate 3D LIDAR model of the environment.
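The vanishing-point step can be illustrated with a short sketch: the images of parallel scene lines meet at a vanishing point v, and K⁻¹v gives the 3D direction of those lines in the camera frame, from which the rig's orientation can be recovered. A minimal Python sketch with assumed intrinsics (illustrative, not the rig described here):

```python
import numpy as np

# Hypothetical camera intrinsics (focal length 800 px, principal point
# at (320, 240)); these are assumptions for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def line_through(p, q):
    """Homogeneous image line through two pixels (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Intersection of two image lines, normalized so the last
    homogeneous coordinate is 1."""
    v = np.cross(line1, line2)
    return v / v[2]

def direction_from_vp(v, K):
    """Unit 3D direction (camera frame) of the parallel scene lines
    whose images meet at vanishing point v."""
    d = np.linalg.inv(K) @ v
    return d / np.linalg.norm(d)
```

For example, two image lines that are projections of scene lines parallel to the optical axis intersect at the principal point, and the recovered direction is (0, 0, 1).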
Abstract:
The process of visually exploring underwater environments is still a complex problem. Underwater vision systems require complementary sources of sensor information to help overcome water disturbances. This work proposes calibration methods for a structured-light system consisting of a camera and a laser with a line beam. Two calibration procedures, each requiring only two images taken from different viewpoints, were developed and tested in dry and underwater environments. The results show an accurate calibration of the camera/projector pair, with errors close to 1 mm even in the presence of a small stereo baseline.
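Once the camera and laser plane are calibrated, a 3D point is recovered by intersecting the back-projected ray of each laser pixel with the laser plane. A minimal Python sketch with assumed intrinsics and plane parameters (illustrative values, not the calibrated ones from this work):

```python
import numpy as np

# Hypothetical intrinsics and laser-plane parameters in the camera frame
# (assumed for illustration; not the values calibrated in this work).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([0.0, 0.6, 0.8])  # unit normal of the laser plane
plane_d = 0.4                        # plane equation: n . X = d (metres)

def triangulate_laser_pixel(u, v, K, n, d):
    """Back-project pixel (u, v) to a ray through the camera centre and
    intersect it with the laser plane n . X = d, returning the 3D point
    in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    t = d / (n @ ray)  # ray parameter at the plane
    return t * ray
```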
Abstract:
Dissertation for the degree of Master in Electrical and Computer Engineering