820 results for Best match
Abstract:
Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter, is a challenging task for state-of-the-art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these "local best matches". This approach removes the need for global matching performance by the vision front-end; instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme: repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
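The sequence-scoring idea can be pictured with a short sketch. The following is a minimal, illustrative rendering (not the authors' released code): given a matrix D of image difference scores between recent query frames and all database frames, sum the differences along candidate constant-velocity trajectories and keep, for each database location, its local best match.

```python
import numpy as np

def seqslam_score(D, seq_len=10, v_min=0.8, v_max=1.2, n_vel=5):
    """Score each database location by the best constant-velocity
    alignment of the last `seq_len` query frames. D[i, j] is the
    difference between query frame i and database frame j; lower is
    a better match. Returns (best database index, per-location scores)."""
    n_q, n_db = D.shape
    assert n_q >= seq_len, "need at least seq_len query frames"
    q_idx = np.arange(seq_len)                  # offsets within the window
    rows = n_q - seq_len + q_idx                # the last seq_len query frames
    scores = np.full(n_db, np.inf)
    max_span = int(np.ceil(v_max * (seq_len - 1))) + 1
    for v in np.linspace(v_min, v_max, n_vel):  # candidate trajectory slopes
        for start in range(n_db - max_span):
            cols = (start + v * q_idx).astype(int)
            s = D[rows, cols].sum()
            scores[start] = min(scores[start], s)
    return int(np.argmin(scores)), scores
```

In the paper, localization additionally requires the winning trajectory score to be sufficiently better than the best score outside a window around it; that thresholding step is omitted here.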
Abstract:
Optimisation of Organic Rankine Cycles (ORCs) for binary-cycle geothermal applications could play a major role in the competitiveness of low to moderate temperature geothermal resources. Part of this optimisation process is matching cycles to a given resource such that power output can be maximised. Two major and largely interrelated components of the cycle are the working fluid and the turbine, and both need careful consideration. Because of the temperature differences between geothermal resources, a one-size-fits-all approach to surface power infrastructure is not appropriate. Furthermore, the traditional use of steam as a working fluid is impractical at the low temperatures of many resources. A variety of organic fluids with low boiling points may be utilised as ORC working fluids in binary power cycle loops. Owing to differences in thermodynamic properties, certain fluids are able to extract more heat from a given resource than others over certain temperature and pressure ranges. This enables the power cycle infrastructure to be tailored to best match the geothermal resource through careful selection of the working fluid and turbine design optimisation, yielding optimum overall cycle performance. This paper presents the rationale for the use of radial-inflow turbines for ORC applications and the preliminary design of several radial-inflow turbines based on a selection of promising ORC cycles using five different high-density working fluids, R134a, R143a, R236fa, R245fa and n-Pentane, at sub- or trans-critical conditions. Numerous published studies compare a variety of working fluids for various ORC configurations; however, there is little information specifically pertaining to the design and implementation of ORCs using realistic radial turbine designs in terms of pressure ratios, inlet pressure, rotor size and rotational speed. Preliminary 1D analysis leads to turbine designs for the various cycles with similar efficiencies (77%) but large differences in dimensions (139–289 mm rotor diameter). The highest performing cycle (R134a) was found to produce 33% more net power from a 150°C resource flowing at 10 kg/s than the lowest performing cycle (n-Pentane).
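As a rough illustration of why fluid choice matters, the sketch below (not the paper's design code) uses the open-source CoolProp property library to compare ideal turbine specific work for the five fluids under a deliberately simplified subcritical, saturated-vapour cycle. The condensing temperature, pinch margin, and subcritical cap are assumptions for the example; only the 150°C resource and the ~77% turbine efficiency come from the abstract.

```python
# Minimal sketch, assuming a subcritical saturated-vapour cycle; the
# temperatures and the 15 K pinch margin are illustrative assumptions.
from CoolProp.CoolProp import PropsSI

T_cond = 310.0     # K, assumed condensing temperature
T_source = 423.15  # K, the 150 degC resource quoted in the abstract

for fluid in ["R134a", "R143a", "R236fa", "R245fa", "n-Pentane"]:
    T_crit = PropsSI("Tcrit", "", 0, "", 0, fluid)
    T_evap = min(T_source - 15.0, 0.95 * T_crit)       # stay safely subcritical
    P_cond = PropsSI("P", "T", T_cond, "Q", 1, fluid)  # condenser pressure
    h1 = PropsSI("H", "T", T_evap, "Q", 1, fluid)      # saturated vapour inlet
    s1 = PropsSI("S", "T", T_evap, "Q", 1, fluid)
    h2s = PropsSI("H", "P", P_cond, "S", s1, fluid)    # isentropic outlet state
    w = 0.77 * (h1 - h2s) / 1e3   # kJ/kg, with the ~77% turbine efficiency
    print(f"{fluid:>9s}: ideal-cycle specific work ~ {w:5.1f} kJ/kg")
```

Real designs (as in the paper) must also respect pressure ratios, rotor size, and rotational speed, which this property-level comparison ignores.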
Abstract:
We consider the problem of matching people to jobs, where each person ranks a subset of jobs in an order of preference, possibly involving ties. There are several notions of optimality about how to best match each person to a job; in particular, popularity is a natural and appealing notion of optimality. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: given a graph G = (A ∪ J, E), where A is the set of people and J is the set of jobs, and a list ⟨c_1, ..., c_|J|⟩ denoting upper bounds on the capacities of each job, does there exist ⟨x_1, ..., x_|J|⟩ such that setting the capacity of the i-th job to x_i, where 1 ≤ x_i ≤ c_i for each i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard. We show that the problem is NP-hard even when each c_i is 1 or 2.
Abstract:
We consider the problem of matching people to items, where each person ranks a subset of items in an order of preference, possibly involving ties. There are several notions of optimality about how to best match a person to an item; in particular, popularity is a natural and appealing notion of optimality. A matching M* is popular if there is no matching M such that the number of people who prefer M to M* exceeds the number who prefer M* to M. However, popular matchings do not always provide an answer to the problem of determining an optimal matching, since there are simple instances that do not admit popular matchings. This motivates the following extension of the popular matchings problem: given a graph G = (A ∪ B, E), where A is the set of people and B is the set of items, and a list ⟨c_1, ..., c_|B|⟩ denoting upper bounds on the number of copies of each item, does there exist ⟨x_1, ..., x_|B|⟩ such that for each i, having x_i copies of the i-th item, where 1 ≤ x_i ≤ c_i, enables the resulting graph to admit a popular matching? In this paper we show that the above problem is NP-hard. We show that the problem is NP-hard even when each c_i is 1 or 2. We show a polynomial time algorithm for a variant of the above problem where the total increase in copies is bounded by an integer k. (C) 2011 Elsevier B.V. All rights reserved.
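For intuition about popularity, and why small instances can fail to admit a popular matching, here is a brute-force check. It is illustrative only and exponential in the instance size; the paper's contributions (the NP-hardness proof and the polynomial algorithm for the bounded-increase variant) are not reproduced here.

```python
def vote(prefs, p, a, b):
    """+1 if person p prefers item a to b, -1 if b to a, 0 on a tie.
    Being unmatched (None) is worse than any acceptable item."""
    rank = lambda x: prefs[p].get(x, float("inf")) if x is not None else float("inf")
    return (rank(b) > rank(a)) - (rank(a) > rank(b))

def all_matchings(people, prefs, used=frozenset(), i=0):
    """Yield every matching (person -> item or None) over acceptable pairs."""
    if i == len(people):
        yield {}
        return
    p = people[i]
    for rest in all_matchings(people, prefs, used, i + 1):
        yield {p: None, **rest}                    # leave p unmatched
    for item in prefs[p]:
        if item not in used:
            for rest in all_matchings(people, prefs, used | {item}, i + 1):
                yield {p: item, **rest}

def is_popular(M, people, prefs):
    """M is popular iff no rival matching N wins the head-to-head vote."""
    return all(
        sum(vote(prefs, p, N[p], M[p]) for p in people) <= 0
        for N in all_matchings(people, prefs)
    )

# Three people all ranking the same two items first and second:
# no popular matching exists, so every candidate fails the check.
people = ["p1", "p2", "p3"]
prefs = {p: {"a": 1, "b": 2} for p in people}
print(any(is_popular(M, people, prefs) for M in all_matchings(people, prefs)))  # False
```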
Abstract:
This paper presents a general methodology for the synthesis of the external boundary of the workspaces of a planar manipulator with arbitrary topology. Both the desired workspace and the manipulator workspaces are identified by their boundaries and are treated as simple closed polygons. The paper introduces the concept of the best match configuration and shows that the corresponding transformation can be obtained by using the concept of shape normalization available in the image processing literature. Introducing the concept of shape into workspace synthesis allows highly accurate synthesis with fewer design variables. This paper uses a new global-property-based vector representation for the shape of the workspaces, which is computationally efficient because six of the seven elements of this vector are obtained as a by-product of the shape normalization procedure. The synthesis of workspaces is formulated as an optimization problem in which the distance between the shape vector of the desired workspace and that of the workspace of the manipulator at hand is minimized by changing the dimensional parameters of the manipulator. In view of the irregular nature of the error manifold, the statistical optimization procedure of simulated annealing has been used. A number of worked-out examples illustrate the generality and efficiency of the present method. (C) 1998 Elsevier Science Ltd. All rights reserved.
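The optimization loop the abstract describes can be sketched generically. The simulated-annealing skeleton below is illustrative, assuming an `objective` that builds the manipulator's workspace from its dimensional parameters, extracts its shape vector, and returns the distance to the desired workspace's shape vector; that geometric machinery is the paper's and is not reproduced here.

```python
import math, random

def anneal(x0, objective, step=0.05, T0=1.0, cooling=0.995, iters=20000):
    """Generic simulated annealing over a real-valued parameter vector."""
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    T = T0
    for _ in range(iters):
        cand = [xi + random.gauss(0.0, step) for xi in x]   # random perturbation
        fc = objective(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling                                        # geometric cooling
    return best, fbest
```

The irregular error manifold mentioned in the abstract is exactly why a statistical method like annealing is preferred here over gradient-based search.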
Abstract:
With the development of oil and gas exploration in China, continental exploration has shifted from structural oil and gas reservoirs to subtle oil and gas reservoirs. The reserves of the subtle reservoirs discovered so far account for more than 60 percent of the discovered oil and gas reserves, so exploration of subtle reservoirs is becoming increasingly important and can be taken as the main direction for growth in oil and gas reserves. The characteristics of continental sedimentary facies determine the complexity of lithological exploration. Most of the continental rift basins in East China have entered exploration stages of medium to high maturity. Although the quality of the seismic data there is relatively good, these areas are characterized by thin sand bodies, small faults, and strata of limited extent, which demands seismic data of high resolution; an important task is how to improve the signal-to-noise ratio of the high-frequency components of the seismic data. West China presents complex landforms, deeply buried prospecting targets, complex geological structures, many ruptures, traps of small extent, poor rock properties, many high-pressure strata, and drilling difficulties. These conditions produce low signal-to-noise ratios and complex kinds of noise in the seismic records, which calls for the development of noise-attenuation methods and techniques in data acquisition and processing. Oil and gas exploration therefore needs high-resolution geophysical techniques in order to implement the oil-resources strategy of keeping oil production and reserves stable in East China while developing crude production and reserves in West China. A high signal-to-noise ratio of the seismic data is the basis: high resolution and high fidelity are impossible without it. We emphasize research based on structural analysis for improving the signal-to-noise ratio in complex areas. Several noise-attenuation methods are put forward that truly reflect the geological features: they preserve the geological structures, keep the edges of geological features, and improve the identification of oil and gas traps. The ideas of emphasizing fundamentals, highlighting innovation, and focusing on application run through this work. Conventional dip scanning, which centers the scan on the analysis point, inevitably blurs the edges of geological features such as faults and fractures. We develop a new dip-scanning method that scans from the two sides of an endpoint to solve this problem. Using this new dip-scanning method, we put forward coherence-based signal estimation, coherence-based seismic-wave characterization, and most-homogeneous dip-scanning methods for noise attenuation; they preserve the geological character, suppress random noise, and improve the signal-to-noise ratio and resolution. Routine dip scanning operates in the time-space domain; we also propose a new dip-scanning method in the frequency-wavenumber domain for noise attenuation. It exploits the separation of reflection events of different dips in the f-k domain, reducing noise while recovering dip information. We describe a methodology for studying and developing filtering methods based on differential equations.
It transforms the filtering equations in the frequency domain or the f-k domain into the time or time-space domain and uses a finite-difference algorithm to solve these equations. This method does not require the seismic data to be stationary, so the filter parameters can vary at every temporal and spatial point, which enhances the adaptability of the filter; it is also computationally efficient. We put forward a matching-pursuit method for noise suppression. This method decomposes a signal into a linear expansion of waveforms selected from a redundant dictionary of functions, chosen to best match the signal structures; it can extract the effective signal from the noisy signal and reduce the noise. We also introduce a beamforming filtering method for noise elimination. Processing of real seismic data shows that it is effective in attenuating multiples and internal multiples; the signal-to-noise ratio and resolution are improved, and the effective signals retain high fidelity. Tests on theoretical models and application to real seismic data processing prove that the methods in this paper can effectively suppress random noise, eliminate coherent noise, and improve the resolution of seismic data; they are practical and their effect is evident.
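The matching-pursuit decomposition mentioned above follows a standard greedy scheme, which can be sketched generically; the toy Gabor-like dictionary and noise levels below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def matching_pursuit(signal, D, n_atoms=5):
    """Greedy matching pursuit: D has unit-norm atoms as columns.
    Repeatedly pick the atom that best matches the current residual
    and subtract its contribution. Returns (coefficients, residual)."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual                   # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))        # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# Redundant dictionary of Gaussian-windowed cosines (Gabor-like atoms)
n, t = 256, np.arange(256)
atoms = []
for f in np.linspace(0.01, 0.2, 40):
    for c in range(0, n, 32):
        g = np.exp(-0.5 * ((t - c) / 16.0) ** 2) * np.cos(2 * np.pi * f * t)
        atoms.append(g / np.linalg.norm(g))
D = np.stack(atoms, axis=1)

noisy = 3.0 * D[:, 100] + 0.3 * np.random.randn(n)   # one true atom plus noise
coeffs, res = matching_pursuit(noisy, D)
print("residual energy fraction:", np.linalg.norm(res) / np.linalg.norm(noisy))
```

In the seismic setting, the retained atoms reconstruct the effective signal while the residual carries the noise.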
Abstract:
A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose, probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. The performance of this clutter-tolerant approach is demonstrated in quantitative experiments with hundreds of real hand images.
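For context, the directed chamfer distance that the paper approximates can be computed exactly with a distance transform; the sketch below shows that baseline formulation (the paper's contribution, the clutter-tolerant embedding into a high-dimensional Euclidean space, is not reproduced).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_chamfer(model_edges, input_edges):
    """Mean distance from each model edge pixel to the nearest input
    edge pixel. Both inputs are equal-shape boolean edge maps."""
    # distance_transform_edt returns, per pixel, the distance to the
    # nearest zero; invert the mask so input edge pixels become zeros.
    dist_to_input = distance_transform_edt(~input_edges)
    return float(dist_to_input[model_edges].mean())

# Tiny synthetic example: two slightly offset horizontal "edges"
a = np.zeros((16, 16), bool); a[4, 2:14] = True
b = np.zeros((16, 16), bool); b[6, 2:14] = True
print(directed_chamfer(a, b))   # 2.0: every model pixel is two rows away
```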
Abstract:
PURPOSE: To demonstrate the feasibility of using a knowledge base of prior treatment plans to generate new prostate intensity modulated radiation therapy (IMRT) plans. Each new case would be matched against others in the knowledge base. Once the best match is identified, that clinically approved plan is used to generate the new plan. METHODS: A database of 100 prostate IMRT treatment plans was assembled into an information-theoretic system. An algorithm based on mutual information was implemented to identify similar patient cases by matching 2D beam's eye view projections of contours. Ten randomly selected query cases were each matched with the most similar case from the database of prior clinically approved plans. Treatment parameters from the matched case were used to develop new treatment plans. Differences in the dose-volume histograms between the new and the original treatment plans were analyzed. RESULTS: On average, the new knowledge-based plan achieves planning target volume coverage very comparable to that of the original plan, to within 2% as evaluated for D98, D95, and D1. Similarly, the doses to the rectum and bladder are also comparable to the original plan. For the rectum, the mean and standard deviation of the dose percentage differences for D20, D30, and D50 are 1.8% +/- 8.5%, -2.5% +/- 13.9%, and -13.9% +/- 23.6%, respectively. For the bladder, the mean and standard deviation of the dose percentage differences for D20, D30, and D50 are -5.9% +/- 10.8%, -12.2% +/- 14.6%, and -24.9% +/- 21.2%, respectively. A negative percentage difference indicates that the new plan has greater dose sparing than the original plan. CONCLUSIONS: The authors demonstrate a knowledge-based approach that uses prior clinically approved treatment plans to generate clinically acceptable treatment plans of high quality. This semiautomated approach has the potential to improve the efficiency of the treatment planning process while ensuring that high quality plans are developed.
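A histogram-based mutual-information score of the kind the abstract describes can be sketched as follows; the binning and the use of raw grayscale intensities are assumptions for the example, not details from the paper.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equal-size images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of img_b
    nz = p_ab > 0                              # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

# A projection scores higher against a noisy copy of itself than against
# an unrelated image, which is what drives the case matching.
rng = np.random.default_rng(0)
bev = rng.random((64, 64))                     # stand-in beam's-eye-view image
print(mutual_information(bev, bev + 0.05 * rng.standard_normal((64, 64))))
print(mutual_information(bev, rng.random((64, 64))))
```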
Abstract:
This work project illustrates the strategic issues that There App, a mobile application, faces regarding the opportunity to expand from its current state as a product into a multisided platform. Initially, a market analysis is performed to identify the ideal customer groups to be integrated into the platform. Strategic design issues are then discussed, addressing how to best match the app's value proposition with the identified market opportunity. Suggestions on how the company should organize its resources and operational processes to best deliver on its value proposition complete the work.
Abstract:
The growth in the number of Internet users has led to exponential growth in routing tables, whose size is expected to reach one million prefixes within the next few years. Likewise, routers at the Internet core can easily maintain several hundred simultaneous BGP connections with neighboring routers. In a classical router architecture, the BGP protocol runs as a single entity inside the router. This architecture has two major drawbacks: scalability and reliability. On one hand, BGP scalability can be measured in terms of the number of connections and the maximum routing-table size the control plane can support. On the other hand, reliability is a critical issue in core Internet routers: if the BGP instance stops, all connections are lost and the new state of the routing table is propagated across the Internet with a non-trivial convergence delay. Although core routers are highly reliable, their fault resilience is raised considerably, in most cases through passive redundancy, which can limit the router's scalability. In this thesis, we address both drawbacks by proposing a new distributed approach to BGP that increases its scalability as well as its reliability without changing the semantics of the protocol. The distributed BGP architecture proposed in the first contribution is designed to satisfy both constraints, scalability and reliability, by suitably exploiting parallelism and distributing the BGP modules over several control cards. In this contribution, the BGP functions are divided according to the master-slave paradigm and the RIB (Routing Information Base) is replicated on several control cards. In the second contribution, we address fault tolerance in the architecture developed in the first contribution by proposing a mechanism that increases reliability; moreover, we prove analytically that adopting such a distributed architecture considerably increases BGP availability compared with a monolithic architecture. In the third contribution, we propose a routing-table partitioning method, which we call DRTP, to divide the BGP table over several control cards. This contribution aims to increase the scalability of the routing table and to parallelize the Best Match Prefix lookup algorithm by partitioning the routing table over several physically distributed nodes.
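For reference, the Best Match Prefix (longest-prefix match) lookup that DRTP parallelizes can be sketched with a binary trie; DRTP itself and its partitioning across control cards are not shown.

```python
import ipaddress

class PrefixTrie:
    def __init__(self):
        self.root = {}   # node: optional '0'/'1' children plus optional 'hop'

    def insert(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop

    def lookup(self, address):
        bits = format(int(ipaddress.ip_address(address)), "032b")
        node, best = self.root, None
        for b in bits:
            if "hop" in node:
                best = node["hop"]    # remember deepest matching prefix so far
            node = node.get(b)
            if node is None:
                break
        else:
            if "hop" in node:         # a full /32 match
                best = node["hop"]
        return best

rib = PrefixTrie()
rib.insert("10.0.0.0/8", "A")
rib.insert("10.1.0.0/16", "B")
print(rib.lookup("10.1.2.3"))   # "B": the longest (best) matching prefix wins
print(rib.lookup("10.9.9.9"))   # "A"
```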
Abstract:
This paper evaluates the relationship between the cloud modification factor (CMF) in the ultraviolet erythemal range and the cloud optical depth (COD) retrieved from the Aerosol Robotic Network (AERONET) "cloud mode" algorithm under overcast cloudy conditions (confirmed with sky images) at Granada, Spain, mainly for non-precipitating, overcast and relatively homogeneous water clouds. The empirical CMF showed a clear exponential dependence on experimental COD values, decreasing approximately from 0.7 for COD = 10 to 0.25 for COD = 50. In addition, these COD measurements were used as input to the LibRadtran radiative transfer code, allowing the simulation of CMF values for the selected overcast cases. The modeled CMF exhibited a dependence on COD similar to the empirical CMF, but the modeled values substantially underestimate the empirical factors (mean bias of 22%). To explain this high bias, an exhaustive comparison between modeled and experimental UV erythemal irradiance (UVER) data was performed. The comparison revealed that the radiative transfer simulations were 8% higher than the observations for clear-sky conditions. The rest of the bias (~14%) may be attributed to the substantial underestimation of modeled UVER with respect to experimental UVER under overcast conditions, although the correlation between the two datasets was high (R² ~ 0.93). A sensitivity test showed that the main cause of this underestimation is the experimental AERONET COD used as input to the simulations, which is retrieved from zenith radiances in the visible range. Accordingly, effective COD values in the erythemal interval were derived from an iteration procedure based on searching for the best match between modeled and experimental UVER values for each selected overcast case. These effective COD values were smaller than the AERONET COD data in about 80% of the overcast cases, with a mean relative difference of 22%.
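The iteration procedure can be pictured as a one-dimensional search. The sketch below is illustrative only: `modeled_uver` stands in for a LibRadtran run, and the constants in the toy model are fitted to the two endpoint CMF values quoted above, not taken from the paper.

```python
import math

def effective_cod(measured_uver, modeled_uver, lo=0.1, hi=100.0, tol=1e-3):
    """Bisect for the COD at which the model reproduces the measurement;
    assumes modeled UVER decreases monotonically with COD."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if modeled_uver(mid) > measured_uver:
            lo = mid               # model still too bright: need a thicker cloud
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy stand-in for a radiative-transfer run; the exponential constants are
# fitted to the two CMF endpoints above (0.7 at COD=10, 0.25 at COD=50).
clear_sky_uver = 100.0   # hypothetical clear-sky UVER, arbitrary units
model = lambda cod: clear_sky_uver * 0.9 * math.exp(-0.026 * cod)
print(round(effective_cod(60.0, model), 2))   # ~15.6
```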
Abstract:
Iron is hypothesized to be an important micronutrient for ocean biota, thus modulating carbon dioxide uptake by the ocean biological pump. Studies have assumed that atmospheric deposition of iron to the open ocean is predominantly from mineral aerosols. For the first time we model the source, transport, and deposition of iron from combustion sources. Iron is produced in small quantities during fossil fuel burning, incinerator use, and biomass burning. The sources of combustion iron are concentrated in the industrialized regions and biomass burning regions, largely in the tropics. Model results suggest that combustion iron can represent up to 50% of the total iron deposited, but over open ocean regions it is usually less than 5% of the total iron, with the highest values (<30%) close to the East Asian continent in the North Pacific. For ocean biogeochemistry the bioavailability of the iron is important, and this is often estimated by the fraction which is soluble (Fe(II)). Previous studies have argued that atmospheric processing of the relatively insoluble Fe(III) occurs to make it more soluble (Fe(II)). Modeled estimates of soluble iron amounts based solely on atmospheric processing as simulated here cannot match the variability in daily averaged in situ concentration measurements in Korea, which is located close to both combustion and dust sources. The best match to the observations is obtained when there are substantial direct emissions of soluble iron from combustion processes. If we assume observed soluble Fe/black carbon ratios in Korea are representative of the whole globe, we obtain the result that deposition of soluble iron from combustion contributes 20-100% of the soluble iron deposition over many ocean regions. This implies that more work should be done refining the emissions and deposition of combustion sources of soluble iron globally.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The mechanical stability of a well can be analyzed by computing elastic parameters of the formation from the density of the medium and the propagation velocities of the compressional and shear waves in the rock formation, which can be obtained from geophysical well logs. In poorly consolidated sedimentary formations, conventional (monopole) sonic logging tools cannot accurately record the shear-wave velocity, because the first arrival of that wave is masked by the arrival of other waves that may be faster than the shear wave in a borehole drilled through this type of formation. Sonic velocities are measured in the laboratory on formation samples under conditions similar to the in situ conditions, serving to calibrate the velocities recorded in the well by the sonic logging tool. For the stability analysis of the formation, auxiliary logs are needed, such as porosity, fluid saturation, and mineralogical composition logs of the rock formation. Formation evaluation tests and reservoir condition data are also required, but these are common in oil wells, such as the formation test and well pressurization tests like the hydraulic micro-fracturing test or the leak-off test. The effective principal stresses acting far from the well, unaffected by its presence, are evaluated by combining an appropriate elastic deformation model with the result of the pressurization test available for the well under study. Using classical results from general elasticity theory, the stress field modified in the vicinity of the borehole wall can be calculated, accounting for the presence of the borehole itself and for the pressure difference between the borehole interior and the rock formation. Determining the mechanical properties of the formation from sonic velocities and evaluating the stress field under an elastic deformation model assume that the rock medium through which the waves propagate is elastic, homogeneous, and isotropic; this assumption is the main approximation of the methodology described in this work. With the mechanical properties of the formation and the stress field acting in the vicinity of the well in hand, it remains to define the criterion by which the rock becomes mechanically unstable when subjected to that stress field. This makes it possible to determine whether, under the evaluated well and formation conditions, the borehole wall will fail due to excess stress and, if so, to what extent. The problem, then, is how to analyze the mechanical behavior of a well in a poorly consolidated formation from geophysical logs, which may have trouble recording the physical properties of the medium in formations of this type. The proposed methodology is applied to two depth intervals from two wells where sandstones and shales are interbedded and for which all the necessary data are available. The results show that, except where other borehole-wall failure mechanisms act on the formation, the proposed methodology successfully detects zones of borehole mechanical instability caused by a stress field that exceeds the mechanical strength of the formation.
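The first step of the methodology rests on the standard isotropic-elasticity relations between log measurements and dynamic moduli, which can be sketched directly (the example values are illustrative, not from the studied wells):

```python
def dynamic_moduli(rho, vp, vs):
    """rho in kg/m^3, vp/vs in m/s. Returns moduli in Pa plus Poisson's ratio,
    assuming an elastic, homogeneous, isotropic medium."""
    G = rho * vs**2                                       # shear modulus
    K = rho * (vp**2 - (4.0 / 3.0) * vs**2)               # bulk modulus
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))  # Poisson's ratio
    E = 2.0 * G * (1.0 + nu)                              # Young's modulus
    return {"G": G, "K": K, "nu": nu, "E": E}

# Example: a poorly consolidated sandstone (illustrative values)
print(dynamic_moduli(rho=2200.0, vp=2500.0, vs=1100.0))
```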
Abstract:
Delivering to customers a product or service with the expected quality, given the intense competitiveness of today's market, has been making organizations focus increasingly on quality planning, using techniques directed toward continuous improvement and production optimization. This paper therefore aims to improve a machining process using design-of-experiments techniques for optimization, including an analysis of the measurement system. For this purpose, the alloy Nimonic 80A, a nickel-base superalloy in widespread use at high temperatures, was machined, applying the robust method proposed by Genichi Taguchi to determine the influence of the input variables (cutting speed, feed rate, depth of cut, type of tool, lubrication, and material hardness) on the output or response variable, surface roughness. Using a Taguchi L16 orthogonal array and ANOVA, it was concluded that feed rate is significant and has the greatest effect on the response variable, and should be set to 0.12 mm/rev. Moreover, the type of tool has more influence on the process than the other factors, with CP250 being the most suitable tool. Finally, the feed rate x cutting speed interaction is the most significant for the surface roughness variable, the best match between them being a feed rate of 0.12 mm/rev and a cutting speed of 90 m/min. To evaluate the measurement system, the repeatability and reproducibility (R&R) method was applied, which showed that the system needs improvement: the R&R value of 88.04% is far above the 30% limit beyond which a measurement system is classified as inappropriate.
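For reference, Taguchi's robust method ranks factor levels with a signal-to-noise ratio; for a minimize-the-roughness response, the "smaller-is-better" form applies. The sketch below is illustrative, with made-up roughness replicates rather than the paper's measurements.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better S/N = -10*log10(mean(y^2)); a larger
    S/N means lower and more stable roughness."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical roughness (Ra, um) replicates at two feed-rate levels
print(sn_smaller_is_better([0.42, 0.45, 0.40]))  # feed 0.12 mm/rev (higher S/N)
print(sn_smaller_is_better([0.95, 1.10, 1.02]))  # feed 0.25 mm/rev
```

The level with the larger S/N ratio is preferred, which is how the 0.12 mm/rev setting would be selected in an analysis of this kind.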