997 results for "2D Bar Code"


Relevance:

20.00%

Publisher:

Abstract:

Nowadays, several sensors and mechanisms are available to estimate a mobile robot's trajectory and location with respect to its surroundings. Absolute positioning mechanisms are usually the most accurate, but they are also the most expensive and require pre-installed equipment in the environment. A system capable of measuring its own motion and location within the environment (relative positioning) has therefore been a research goal since the beginning of autonomous vehicles. With increasing computational performance, computer vision has become faster, making it possible to incorporate it into a mobile robot. In feature-based visual odometry approaches, model estimation requires the absence of feature-association outliers for accurate motion estimation. Outlier rejection is a delicate process, as there is always a trade-off between the speed and the reliability of the system. This dissertation proposes an indoor 2D positioning system using visual odometry. The mobile robot has a camera pointed at the ceiling for image analysis; as requirements, the ceiling and the floor (where the robot moves) must be planar. In the literature, RANSAC is a widely used method for outlier rejection, but it can be slow in critical circumstances. A new algorithm is therefore proposed that accelerates RANSAC while maintaining its reliability. The algorithm, called FMBF, consists of comparing image texture patterns between pictures and preserving the most similar ones. There are several types of comparisons, with different computational costs and reliability; FMBF manages those comparisons to optimize the trade-off between speed and reliability.
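The dissertation's FMBF algorithm is not reproduced here, but the RANSAC baseline it accelerates can be sketched. The following is a minimal illustration, assuming 2D point correspondences between consecutive ceiling images and a rigid (rotation plus translation) motion model; all names and parameters are illustrative.

```python
import math
import random

def estimate_rigid_2d(p, q):
    """Least-squares 2D rigid transform (rotation + translation) from paired points."""
    n = len(p)
    cpx = sum(x for x, _ in p) / n; cpy = sum(y for _, y in p) / n
    cqx = sum(x for x, _ in q) / n; cqy = sum(y for _, y in q) / n
    sxx = sxy = 0.0
    for (px, py), (qx, qy) in zip(p, q):
        ax, ay = px - cpx, py - cpy
        bx, by = qx - cqx, qy - cqy
        sxx += ax * bx + ay * by      # cosine component of the rotation
        sxy += ax * by - ay * bx      # sine component of the rotation
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cqx - (c * cpx - s * cpy)
    ty = cqy - (s * cpx + c * cpy)
    return theta, tx, ty

def ransac_motion(matches, iters=200, tol=2.0, seed=0):
    """Classic RANSAC: sample minimal sets, keep the model with most inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        sample = rng.sample(matches, 2)   # 2 pairs determine a rigid 2D motion
        theta, tx, ty = estimate_rigid_2d([m[0] for m in sample],
                                          [m[1] for m in sample])
        c, s = math.cos(theta), math.sin(theta)
        inliers = []
        for (px, py), (qx, qy) in matches:
            ex = c * px - s * py + tx - qx
            ey = s * px + c * py + ty - qy
            if math.hypot(ex, ey) < tol:  # reprojection error below threshold
                inliers.append(((px, py), (qx, qy)))
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit on all inliers of the best model
    return estimate_rigid_2d([m[0] for m in best_inliers],
                             [m[1] for m in best_inliers])
```

The speed/reliability trade-off the abstract mentions lives in `iters` and `tol`: more iterations raise the chance of an all-inlier sample but cost time, which is exactly the cost FMBF attacks by pre-filtering candidate matches.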

Relevance:

20.00%

Publisher:

Abstract:

Eradication of code smells is often pointed out as a way to improve readability, extensibility and design in existing software. However, code smell detection remains time-consuming and error-prone, partly due to the inherent subjectivity of the detection processes presently available. To mitigate this subjectivity problem, this dissertation presents a tool, developed as an Eclipse plugin, that automates a technique for the detection and assessment of code smells in Java source code. The technique is based on a binary logistic regression model that uses complexity metrics as independent variables and is calibrated by experts' knowledge. An overview of the technique is provided, and the tool is described and validated with an example case study.
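The abstract does not give the model's actual coefficients or metric set, so the sketch below uses hypothetical values purely to show the shape of the technique: a binary logistic regression over complexity metrics that yields a smell probability.

```python
import math

# Hypothetical expert-calibrated coefficients, for illustration only;
# the plugin's real model and metric set are not reproduced here.
INTERCEPT = -8.0
COEFFS = {"LOC": 0.010, "CC": 0.25, "WMC": 0.12}  # complexity metrics

def smell_probability(metrics):
    """Binary logistic regression: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = INTERCEPT + sum(COEFFS[name] * metrics[name] for name in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

def is_smelly(metrics, threshold=0.5):
    """Flag a class as smelly when the modeled probability crosses a threshold."""
    return smell_probability(metrics) >= threshold
```

A small, simple class scores a near-zero probability, while a large, complex one scores close to one; calibrating the coefficients against expert judgments is what turns this generic model into the detection technique the dissertation describes.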

Relevance:

20.00%

Publisher:

Abstract:

This work presents a prototype methodology for the automatic 2D modeling of the morphology of vein-type mineralizations. First, the number of vein occurrences is estimated for each block of a block grid into which the studied volume was subdivided. This intensity of occurrences is quantified by a variable representing the number of veins per linear meter (NFM) intersected by a vertical borehole, and constitutes the intensity target to be met. Next, polygonal lines or arcs corresponding to the positioning of the veins in the cross-section are generated by simulation. These arcs connect some pairs of vein intersections sampled in the boreholes and are chosen at random according to orientation and distance rules. The local vein intensity of the model is then evaluated and, for locations where there is a deficit of veins relative to the target, virtual intersections (i.e., intersections not recognized by boreholes) are added. This procedure continues until the vein model approaches the previously defined target. The set of arcs in each cross-section, associated with the intersection thicknesses observed in the boreholes, constitutes a morphological model of the veins in a vector structure. Finally, a quantitative evaluation of the model and of its uncertainty is performed. The data for the practical study that motivated the development of this methodology were collected at the Minas da Panasqueira mineral deposit. The results show that introducing an automatic methodology for the vector modeling of mineralized veins is an asset, because it generates more realistic models with better geological control and resolution than the classical thickness-and-accumulation approaches, making it a valuable aid in the evaluation of ore reserves.
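As a rough illustration of the "fill to the intensity target" step described above, the toy sketch below adds virtual intersections to blocks whose vein count falls short of a target NFM. The real procedure works on 2D cross-sections with orientation and distance rules, all of which are omitted here; every name and number is illustrative.

```python
import random

def fill_to_target(observed, target, seed=0):
    """Add 'virtual' intersections to blocks whose vein count falls short of
    the target NFM intensity (a toy 1D analogue of the 2D procedure)."""
    rng = random.Random(seed)
    model = [list(block) for block in observed]   # copy; leave input untouched
    for i, block in enumerate(model):
        while len(block) < target[i]:
            # place a virtual intersection at a random relative depth in the block
            block.append(("virtual", round(rng.uniform(0.0, 1.0), 3)))
    return model
```

The loop mirrors the described iteration: evaluate the local intensity, and wherever the model is short of the target, inject intersections that no borehole sampled until the target is met.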

Relevance:

20.00%

Publisher:

Abstract:

Rupture of aortic aneurysms (AA) is a major cause of death in the Western world. Currently, the clinical decision on surgical intervention is based on the diameter of the aneurysm; however, this method is not fully adequate. Noninvasive assessment of the elastic properties of the arterial wall can be a better predictor of AA growth and rupture risk. The purpose of this study is to estimate the mechanical properties of the aortic wall using in vitro inflation testing and 2D ultrasound (US) elastography, and to investigate the performance of the proposed methodology under physiological conditions. Two different inflation experiments were performed on twelve porcine aortas: 1) a static experiment over a large pressure range (0–140 mmHg); 2) a dynamic experiment closely mimicking the in vivo hemodynamics at physiological pressures (70–130 mmHg). 2D raw radiofrequency (RF) US datasets were acquired for one longitudinal and two cross-sectional imaging planes, for both experiments. The RF data were manually segmented and a 2D vessel wall displacement tracking algorithm was applied to obtain the aortic diameter-time behavior. The shear modulus G was estimated assuming a Neo-Hookean material model. In addition, an incremental study based on the static data was performed to: 1) investigate the changes in G for increasing mean arterial pressure (MAP), for a given pressure difference (30, 40, 50 and 60 mmHg); 2) compare the results with those from the dynamic experiment, for the same pressure range. The resulting shear modulus G was 94 ± 16 kPa for the static experiment, which is in agreement with the literature. A linear dependency of G on MAP was found, yet the effect of the pressure difference was negligible. The dynamic data revealed a G of 250 ± 20 kPa. For the same pressure range, the incremental shear modulus (Ginc) was 240 ± 39 kPa, in agreement with the former.
In general, for all experiments, no significant differences in the values of G were found between different image planes. This study shows that 2D US elastography of aortas during inflation testing is feasible under controlled and physiological circumstances. In future studies, the in vivo, dynamic experiment should be repeated for a range of MAPs, and pathological vessels should be examined. Furthermore, the use of more complex material models needs to be considered to describe the non-linear behavior of the vascular tissue.
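The study's Neo-Hookean parameter estimation is not reproduced here. As a simpler, related illustration, the sketch below computes the pressure-strain (Peterson) modulus, a standard first-order stiffness descriptor, from the kind of pressure-diameter pairs the tracking algorithm produces. The diameters in the usage example are made up; only the 70-130 mmHg pressure range comes from the text.

```python
MMHG_TO_KPA = 0.133322  # 1 mmHg expressed in kPa

def peterson_modulus_kpa(p_sys_mmhg, p_dia_mmhg, d_sys, d_dia):
    """Pressure-strain (Peterson) modulus Ep = dP * d_dia / dd, in kPa.
    Diameters may be in any consistent unit, since they cancel."""
    ep_mmhg = (p_sys_mmhg - p_dia_mmhg) * d_dia / (d_sys - d_dia)
    return ep_mmhg * MMHG_TO_KPA

# Hypothetical numbers: 70-130 mmHg pulse, diameter 20.0 -> 21.0 mm
ep = peterson_modulus_kpa(130, 70, 21.0, 20.0)
```

Unlike this structural descriptor, the shear modulus G reported above is a material parameter: extracting it additionally requires the wall geometry and a constitutive model, which is why the paper fits a Neo-Hookean law to the full diameter-time data.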

Relevance:

20.00%

Publisher:

Abstract:

The theme of this dissertation is the finite element method applied to mechanical structures. A new finite element program is developed that, besides executing different types of structural analysis, also allows the calculation of the derivatives of structural performances using the continuum method of design sensitivity analysis, with the purpose of allowing, in combination with the mathematical programming algorithms found in the commercial software MATLAB, the solution of structural optimization problems. The program is called EFFECT (Efficient Finite Element Code). The object-oriented programming paradigm, and specifically the C++ programming language, is used for the program's development. The main objective of this dissertation is to design EFFECT so that it can constitute, at this stage of development, the foundation for a program with analysis capacities similar to other open-source finite element programs. In this first stage, six elements are implemented for linear analysis: 2-dimensional truss (Truss2D), 3-dimensional truss (Truss3D), 2-dimensional beam (Beam2D), 3-dimensional beam (Beam3D), triangular shell element (Shell3Node) and quadrilateral shell element (Shell4Node). The shell elements combine two distinct elements, one simulating the membrane behavior and the other the plate bending behavior. A non-linear analysis capability is also developed, combining the corotational formulation with the Newton-Raphson iterative method, but at this stage it is only available for problems modeled with Beam2D elements subject to large displacements and rotations (geometrically nonlinear problems). The design sensitivity analysis capability is implemented in two elements, Truss2D and Beam2D, which include the procedures and the analytic expressions for calculating the derivatives of displacement, stress and volume performances with respect to five different design variable types.
Finally, a set of test examples was created to validate the accuracy and consistency of the results obtained from EFFECT, by comparing them with results published in the literature or obtained with the ANSYS commercial finite element code.
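To give a flavor of what an element like Truss2D involves, the sketch below builds the 4x4 global stiffness matrix of a single 2D bar element. This is the textbook formulation (EFFECT itself is written in C++ and is not reproduced here); material and section values in the test are illustrative.

```python
import math

def truss2d_stiffness(x1, y1, x2, y2, E, A):
    """4x4 global stiffness matrix of a 2D truss (bar) element.
    DOF order: (ux1, uy1, ux2, uy2). E: Young's modulus, A: cross-section area."""
    L = math.hypot(x2 - x1, y2 - y1)
    c, s = (x2 - x1) / L, (y2 - y1) / L     # direction cosines of the bar axis
    k = E * A / L                            # axial stiffness
    pattern = [[ c*c,  c*s, -c*c, -c*s],
               [ c*s,  s*s, -c*s, -s*s],
               [-c*c, -c*s,  c*c,  c*s],
               [-c*s, -s*s,  c*s,  s*s]]
    return [[k * v for v in row] for row in pattern]
```

Assembling these element matrices into a global system and solving K u = F is the linear-analysis core; the sensitivity module then differentiates quantities such as displacements and stresses through this same relation with respect to the design variables.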

Relevance:

20.00%

Publisher:

Abstract:

INTRODUCTION: With the ease provided by current computational programs, medical and scientific journals use bar graphs to describe continuous data. METHODS: This manuscript discusses the inadequacy of bar graphs for presenting continuous data. RESULTS: Simulated data show that box plots and dot plots are more suitable tools for describing continuous data. CONCLUSIONS: These plots are preferable for representing continuous variables, since they effectively describe the range, shape, and variability of observations and clearly identify outliers, whereas bar graphs address only measures of central tendency. Bar graphs should be used only to describe qualitative data.
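The point can be illustrated numerically: two samples with the same mean (all a bar graph of means shows) can have very different spread (which a box plot exposes through its quartiles). The samples are invented, and the quartile convention below is one of several in common use.

```python
def quartiles(xs):
    """Q1, median, Q3 using the 'exclude the median' convention (one of
    several common ones) -- the core numbers a box plot encodes."""
    s = sorted(xs)
    n = len(s)
    def med(a):
        m = len(a) // 2
        return a[m] if len(a) % 2 else (a[m - 1] + a[m]) / 2
    return med(s[:n // 2]), med(s), med(s[(n + 1) // 2:])

def mean(xs):
    return sum(xs) / len(xs)

a = [9, 10, 10, 10, 11]   # tightly clustered observations
b = [2, 6, 10, 14, 18]    # same mean, widely spread observations
```

A bar graph would draw two identical bars of height 10 for `a` and `b`; the interquartile ranges (1.0 versus 12.0) show how much information that display discards.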

Relevance:

20.00%

Publisher:

Abstract:

This case study concerns the analysis of the sale of Banif Mais, the Banif Group's sub-holding for specialized credit activity, taking into account the bank's financial situation in 2014. In 2011, Portugal was subjected to an external financing programme carried out by the troika, which imposed very strict measures on the financial sector. Subsequently, Banif was not able to achieve the required results and had to resort to Government financing, remaining under a recapitalization plan since 2012.

Relevance:

20.00%

Publisher:

Abstract:

This work was supported by FCT (Fundação para a Ciência e Tecnologia) within Project Scope (UID/CEC/00319/2013), by LIP (Laboratório de Instrumentação e Física Experimental de Partículas) and by Project Search-ON2 (NORTE-07-0162- FEDER-000086), co-funded by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework, through the European Regional Development Fund.

Relevance:

20.00%

Publisher:

Abstract:

The normalized differential cross section for top-quark pair production in association with at least one jet is studied as a function of the inverse of the invariant mass of the tt̄ + 1-jet system. This distribution can be used for a precise determination of the top-quark mass, since gluon radiation depends on the mass of the quarks. The experimental analysis is based on proton-proton collision data collected by the ATLAS detector at the LHC at a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.6 fb^-1. The selected events were identified using the lepton+jets top-quark-pair decay channel, where lepton refers to either an electron or a muon. The observed distribution is compared to a theoretical prediction at next-to-leading-order accuracy in quantum chromodynamics using the pole-mass scheme. With this method, the measured value of the top-quark pole mass is m_t^pole = 173.7 ± 1.5 (stat.) ± 1.4 (syst.) +1.0/−0.5 (theory) GeV. This result represents the most precise measurement of the top-quark pole mass to date.

Relevance:

20.00%

Publisher:

Abstract:

A search for new particles that decay into top quark pairs is reported. The search is performed with the ATLAS experiment at the LHC using an integrated luminosity of 20.3 fb^-1 of proton-proton collision data collected at a centre-of-mass energy of √s = 8 TeV. The lepton-plus-jets final state is used, where the top pair decays to W⁺b W⁻b̄, with one W boson decaying leptonically and the other hadronically. The invariant mass spectrum of top quark pairs is examined for local excesses or deficits that are inconsistent with the Standard Model predictions. No evidence for a top quark pair resonance is found, and 95% confidence-level limits on the production rate are determined for massive states in benchmark models. The upper limits on the cross-section times branching ratio of a narrow Z′ boson decaying to top pairs range from 4.2 pb to 0.03 pb for resonance masses from 0.4 TeV to 3.0 TeV. A narrow leptophobic topcolour Z′ boson with mass below 1.8 TeV is excluded. Upper limits are set on the cross-section times branching ratio for a broad colour-octet resonance with Γ/m = 15% decaying to tt̄; these range from 4.8 pb to 0.03 pb for masses from 0.4 TeV to 3.0 TeV. A Kaluza-Klein excitation of the gluon in a Randall-Sundrum model is excluded for masses below 2.2 TeV.

Relevance:

20.00%

Publisher:

Abstract:

The distribution and orientation of energy inside jets is predicted to be an experimental handle on colour connections between the hard-scatter quarks and gluons initiating the jets. This Letter presents a measurement of the distribution of one such variable, the jet pull angle. The pull angle is measured for jets produced in tt̄ events with one W boson decaying leptonically and the other decaying to jets, using 20.3 fb^-1 of data recorded with the ATLAS detector at a centre-of-mass energy of √s = 8 TeV at the LHC. The jet pull angle distribution is corrected for detector resolution and acceptance effects and is compared to various models.

Relevance:

20.00%

Publisher:

Abstract:

The volume of data from genomics- and proteomics-based experiments is large and has a complex structure. Only through efficient bioinformatic/biostatistical analysis is it possible to identify and characterize expression profiles of genes and proteins that are differentially expressed under different experimental conditions (EC). The main objective is to extend the computational and analytical capabilities of the available software for analyzing this type of data, especially software applicable to two-dimensional difference gel electrophoresis (2D-DIGE) data. In DIGE, the most widely used statistical method is Student's t-test, whose application presupposes a single source of variation and the fulfillment of certain distributional assumptions about the data (such as independence and homogeneity of variances), which are not always met in practice and can lead to errors in the estimates and inferences of the effects of interest. Generalized linear mixed models (GLMMs) make it possible not only to incorporate the effects assumed to drive the variation of the response, but also to model covariance and correlation structures closer to those found in reality, freeing the analysis from the assumptions of independence and normality. These models, although more complex in essence, will simplify the analysis by modeling the raw data directly, without applying transformations to achieve more symmetric distributions, and will also produce a statistically more efficient estimation of the effects present and hence a more accurate detection of the genes/proteins involved in the biological processes of interest. The relevant characteristic of this technology is that the proteins present are not known a priori; they are identified by other, more expensive techniques once a set of differential spots has been detected on the 2DE gels.
Reducing false positives is therefore fundamental in the identification of such spots, since they lead to erroneous results and fictitious biological associations. This will be achieved not only by developing normalization techniques that explicitly incorporate the EC, but also by developing methods that allow departing from the Gaussianity assumption and evaluating other distributional assumptions more suitable for this type of data. Machine learning techniques will also be developed that, through the optimization of specific cost functions, allow the identification of the subset of proteins with the greatest diagnostic potential. This project has a strong statistical/bioinformatic component, but we believe that it is the application fields, namely genomics and proteomics, that will benefit most from the expected results. To that end, several databases from different experiments, provided by various national and international research centers, will be used.
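A toy numerical sketch of the independence problem described above: when each gel carries both conditions plus a large gel-to-gel random effect, an analysis that ignores the gel (as a plain two-sample t-test would) sees variance dominated by the gels, while accounting for the gel effect (here via within-gel differences, the simplest analogue of a GLMM random intercept) isolates the treatment effect. All numbers are simulated and illustrative.

```python
import random
import statistics

def simulate_spot(n_gels=200, delta=1.0, sd_gel=3.0, sd_noise=0.5, seed=42):
    """Toy DIGE-like intensities for one protein spot: each gel carries both
    conditions, plus a large shared gel-to-gel random effect."""
    rng = random.Random(seed)
    ctrl, treat = [], []
    for _ in range(n_gels):
        gel = rng.gauss(0.0, sd_gel)                 # random intercept per gel
        ctrl.append(10.0 + gel + rng.gauss(0.0, sd_noise))
        treat.append(10.0 + delta + gel + rng.gauss(0.0, sd_noise))
    return ctrl, treat

ctrl, treat = simulate_spot()
diffs = [t - c for t, c in zip(treat, ctrl)]         # gel effect cancels out
sd_ignoring_gel = statistics.stdev(ctrl)             # dominated by gel variance
sd_within_gel = statistics.stdev(diffs)              # only measurement noise left
```

A full GLMM generalizes this beyond paired designs: it estimates the gel (and other) variance components explicitly instead of requiring them to cancel, which is what enables the richer covariance structures mentioned above.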

Relevance:

20.00%

Publisher:

Abstract:

Surgeons may use a number of cutting instruments, such as osteotomes and chisels, to cut bone during an operative procedure. The initial loading of cortical bone during the cutting process results in the formation of microcracks in the vicinity of the cutting zone, with main crack propagation to failure occurring under continued loading. When a material cracks, energy is emitted in the form of Acoustic Emission (AE) signals that spread in all directions; AE transducers can therefore be used to monitor the occurrence and development of microcracking and crack propagation in cortical bone. In this research, the number of AE signals (hits) and related parameters, including amplitude, duration and absolute energy (abs-energy), were recorded during the indentation cutting process by a wedge blade on cortical bone specimens. The cutting force was also measured to correlate the load-displacement curves with the output from the AE sensor. The experimental results show that AE signals increase substantially during loading just prior to fracture, between 90% and 100% of the maximum fracture load. Furthermore, an amplitude threshold value of 64 dB (with an approximate abs-energy of 1500 aJ) was established to separate AE signals associated with microcracking (41-64 dB) from fracture-related signals (65-98 dB). The results also demonstrated that the complete fracture event, which had the highest duration value, can be distinguished from other growing macrocracks that did not lead to catastrophic fracture. It was observed that the main crack initiation may be detected by capturing a high-amplitude signal at a mean load of 87% of the maximum load, and that unsteady crack propagation may occur just prior to the final fracture event at a mean load of 96% of the maximum load. The author concludes that the AE method is useful for understanding crack initiation and fracture during the indentation cutting process.
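The 64 dB amplitude threshold reported above can be sketched as a simple classification rule over recorded hits. Only the threshold and the two amplitude ranges come from the text; the hit amplitudes in the example are hypothetical.

```python
# The 64 dB boundary comes from the study; the hit amplitudes are made up.
def classify_hit(amplitude_db, threshold_db=64):
    """Label an AE hit: microcracking (41-64 dB) vs fracture-related (65-98 dB)."""
    return "microcrack" if amplitude_db <= threshold_db else "fracture"

hits = [45, 58, 64, 70, 92]                  # hypothetical amplitudes in dB
labels = [classify_hit(a) for a in hits]
```

In practice such a rule would run over the hit stream alongside the duration and abs-energy parameters, which the study uses to single out the complete fracture event itself.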