930 results for Edge detectors


Relevance:

20.00%

Publisher:

Abstract:

This study evaluated the safety impact of the Safety Edge treatment on construction projects completed in 2010 and 2011 in Iowa, in order to assess its effectiveness in reducing crashes.

Relevance:

20.00%

Publisher:

Abstract:

Quenched and tempered high-speed steels obtained by powder metallurgy are commonly used in automotive components, such as the valve seats of combustion engines. Machining these components requires tools with high wear resistance and an appropriate cutting edge geometry. This work investigates the influence of the edge preparation of polycrystalline cubic boron nitride (PCBN) tools on wear behavior in the orthogonal longitudinal turning of quenched and tempered M2 high-speed steel obtained by powder metallurgy. PCBN tools with high and low CBN content were used. Two cutting edge geometries, both with a honed radius, were tested: with a ground land (S shape) and without it (E shape). The cutting speed was varied from 100 to 220 m/min on a rigid CNC lathe. The results showed that the high-CBN, E-shaped tool had the longest life at a cutting speed of 100 m/min. High-CBN tools with a ground land and honed edge radius (S-shaped) showed edge damage and shorter tool life. Low-CBN, S-shaped tools showed similar results, but performed worse than the high-CBN tools in both forms of edge preparation.
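The speed-life trade-off observed in such turning tests is often summarized by Taylor's tool life equation, V·T^n = C. A minimal sketch with illustrative constants; the exponent n and constant C below are assumptions for illustration, not values fitted to this study's data:

```python
def taylor_tool_life(v, n, C):
    """Taylor's tool life equation: v * T**n = C  =>  T = (C / v)**(1 / n).

    v: cutting speed (m/min), T: tool life (min).
    """
    return (C / v) ** (1.0 / n)

# Illustrative constants only -- not from this study.
n, C = 0.5, 1500.0
for v in (100, 160, 220):  # the range of cutting speeds tested (m/min)
    print(f"v = {v:3d} m/min -> T = {taylor_tool_life(v, n, C):6.1f} min")
```

As the abstract reports, lower cutting speeds yield longer tool life; the exponent n controls how steep that trade-off is.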

Relevance:

20.00%

Publisher:

Abstract:

Sharp edges were first used for field ionisation mass spectrometry by Beckey. Although Cross and Robertson found that etched metal foils were more effective than razor blades for field ionisation, blades are very convenient for the determination of field ionisation mass spectra, as reported by Robertson and Viney. The electric field at the vertex of a sharp edge can be calculated by the method of conformal transformation. Here we give equations for the field, deduced on the assumption that the edge surface can be approximated by a hyperbola. We also compare two hyperbolae, with radii of curvature at the vertex of 500 Angstrom and 1000 Angstrom, with the profile of a commercial carbon-steel razor blade.

Relevance:

20.00%

Publisher:

Abstract:

On the presumption that a sharp edge may be represented by a hyperbola, a conformal transformation method is used to derive electric field equations for a sharp edge suspended above a flat plate. A further transformation is then introduced to give electric field components for a sharp edge suspended above a thin slit. Expressions are deduced for the field strength at the vertex of the edge in both arrangements. The calculated electric field components are used to compute ion trajectories in the simple edge/flat-plate case. The results are considered in relation to future study of ion focusing and unimolecular decomposition of ions in field ionization mass spectrometers.
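For a sense of the field magnitudes involved, the familiar tip approximation E ≈ 2V/(r·ln(4d/r)) for a hyperboloidal point held above a grounded plane gives a quick estimate. This is the well-known 3D-tip analogue of the 2D edge geometry treated here, shown for illustration only; the paper's own edge/flat-plate and edge/slit formulas are not reproduced:

```python
import math

def tip_field(V, r, d):
    """Approximate field at the vertex of a hyperboloidal tip of radius r (m)
    at potential V (volts), a distance d (m) above a grounded plane:
    E ~ 2V / (r * ln(4d / r)).  The 3D-tip analogue, for illustration only.
    """
    return 2.0 * V / (r * math.log(4.0 * d / r))

# e.g. a 500 Angstrom (5e-8 m) vertex radius, 1 mm gap, 5 kV
print(f"{tip_field(5e3, 5e-8, 1e-3):.2e} V/m")  # fields of order 1e10 V/m
```

Fields of this order (a few V/Angstrom) are what field ionisation requires, which is why the vertex radius of curvature matters so much.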

Relevance:

20.00%

Publisher:

Abstract:

Sharpening is a powerful image transformation because sharp edges bring out image detail. Sharpness is achieved by increasing local contrast and reducing edge widths. We present a method that enhances the sharpness of images and thereby their perceptual quality. Most existing enhancement techniques require user input to tune the result to what is most pleasing to the particular user; our goal is to improve the perception of sharpness in digital images for human viewers automatically. We consider two parameters in order to exaggerate the differences between local intensities: local contrast and edge width. We start from the assumption that color, texture, or objects of focus such as faces affect the human perception of photographs. When human raters were presented with a collection of images of different sharpness and asked to rank them by perceived sharpness, the results showed a statistical consensus among the raters. We introduce a ramp enhancement technique that modifies the optimal overshoot in the ramp for different region contrasts, as well as the new ramp width. Optimal parameter values are searched for and applied to regions under the criteria mentioned above. In this way, we aim to enhance digital images automatically to produce pleasing output for common users.
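The core idea of adding controlled overshoot at a ramp edge can be illustrated with classic unsharp masking. This is a minimal 1D sketch of the general principle only, not the authors' ramp enhancement method:

```python
def unsharp_1d(signal, radius=2, amount=1.0):
    """Classic 1D unsharp masking: sharpened = x + amount * (x - blur(x)).

    The subtraction of a box blur adds overshoot/undershoot around ramps,
    increasing local contrast -- the same effect the paper's ramp
    enhancement controls explicitly.  NOT the authors' method.
    """
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        blur = sum(signal[lo:hi]) / (hi - lo)  # local box blur
        out.append(signal[i] + amount * (signal[i] - blur))
    return out

ramp = [0, 0, 0, 1, 2, 3, 4, 4, 4]  # a soft edge (a ramp)
print(unsharp_1d(ramp))  # undershoot before the ramp, overshoot after it
```

The `amount` and `radius` parameters here play roughly the roles of overshoot height and ramp width in the paper's formulation, but the paper searches for optimal values per region rather than fixing them globally.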

Relevance:

20.00%

Publisher:

Abstract:

Numerous applications within the mid- and long-wavelength infrared are driving the search for efficient and cost-effective detection technologies in this regime. Theoretical calculations have predicted high performance for InAs/GaSb type-II superlattice structures, which rely on the mature growth of III-V semiconductors and offer many degrees of freedom in design through band structure engineering. This work focuses on the fabrication and characterization of type-II superlattice infrared detectors. Standard UV-based photolithography was combined with chemical wet or dry etching techniques to fabricate antimony-based type-II superlattice infrared detectors. Subsequently, Fourier transform infrared spectroscopy and radiometric techniques were applied for optical characterization, in order to obtain a detector's spectrum and response, as well as the overall detectivity in combination with electrical characterization. Temperature-dependent electrical characterization was used to extract information about the limiting dark current processes. This work resulted in the first demonstration of an InAs/GaSb type-II superlattice infrared photodetector grown by metalorganic chemical vapor deposition. A peak detectivity of 1.6x10^9 Jones at 78 K was achieved for this device, with an 11 micrometer zero cutoff wavelength. Furthermore, an interband tunneling detector designed for the mid-wavelength infrared regime was studied, with results similar to those previously published.
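The detectivity figure quoted above follows from the standard radiometric definition D* = R·sqrt(A·Δf)/i_n. A sketch with illustrative values only; these are not the device parameters of this work:

```python
import math

def specific_detectivity(responsivity, area_cm2, noise_current, bandwidth=1.0):
    """Specific detectivity D* = R * sqrt(A * df) / i_n, in cm*sqrt(Hz)/W
    ("Jones").  R in A/W, A in cm^2, i_n in A/sqrt(Hz), df in Hz.
    Values used below are illustrative, not from this work."""
    return responsivity * math.sqrt(area_cm2 * bandwidth) / noise_current

# e.g. R = 1 A/W, a 100 um x 100 um mesa (1e-4 cm^2), i_n = 5 pA/sqrt(Hz)
print(f"{specific_detectivity(1.0, 1e-4, 5e-12):.2e} Jones")
```

With these hypothetical numbers D* comes out around 2x10^9 Jones, i.e. the same order of magnitude as the 1.6x10^9 Jones reported; the definition normalizes out detector area and bandwidth so that devices of different sizes can be compared.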

Relevance:

20.00%

Publisher:

Abstract:

The most promising concept for low-frequency (millihertz to hertz) gravitational wave observatories is a laser interferometric detector in space. It is usually assumed that the noise floor for such a detector is dominated by optical shot noise in the signal readout. For this to be true, a careful balance of mission parameters is crucial to keep all other parasitic disturbances below shot noise. We developed a web application that takes over 30 input parameters and considers many important technical noise sources and noise suppression techniques to derive a realistic position noise budget. It optimizes free parameters automatically and generates a detailed report on all individual noise contributions. Thus one can easily explore the entire parameter space and design a realistic gravitational wave observatory. In this document we describe the different parameters, present all underlying calculations, and compare the final observatory's sensitivity with astrophysical sources of gravitational waves. As an example, we use parameters currently assumed likely to apply to a space mission proposed for launch in 2034 by the European Space Agency. The web application itself is publicly available on the Internet at http://spacegravity.org/designer. Future versions of the web application will incorporate the frequency dependence of different noise sources and include a more detailed model of the observatory's residual acceleration noise.
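For a feel of why shot noise sets the floor, here is a back-of-envelope sketch using the textbook shot-noise-limited displacement sensitivity of a simple Michelson readout, δx = (λ/4π)·sqrt(2hc/(λP)). The web application's actual noise model is far more detailed; this is only the leading-order estimate:

```python
import math

h = 6.62607015e-34  # Planck constant, J*s
c = 299792458.0     # speed of light, m/s

def shot_noise_displacement(power_w, wavelength_m):
    """Textbook shot-noise-limited displacement amplitude spectral density
    of a simple Michelson readout, in m/sqrt(Hz):
        delta_x = (lambda / (4*pi)) * sqrt(2*h*c / (lambda * P)).
    A back-of-envelope sketch, not the web application's model."""
    return (wavelength_m / (4.0 * math.pi)) * math.sqrt(
        2.0 * h * c / (wavelength_m * power_w))

# e.g. 1 W of 1064 nm light reaching the photodetector
print(f"{shot_noise_displacement(1.0, 1064e-9):.1e} m/sqrt(Hz)")
```

The 1/sqrt(P) scaling is the reason received optical power is such a central mission parameter: every other disturbance in the budget must be pushed below this curve.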

Relevance:

20.00%

Publisher:

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with predicates. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMS can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model called Substitution Importance Query (SIQ) identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. 
The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great deal of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used to answer SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
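The flavor of ranked answering over an edge-labeled graph can be illustrated with a toy top-k match pruned by a bounded min-heap. The data, the scoring function, and the pruning below are illustrative only and are not the thesis's SIQ/VIQ/AIQ/PIQ algorithms:

```python
import heapq

# Toy edge-labeled graph as (subject, predicate, object) triples,
# plus a numeric 'importance' property per vertex.  Data is made up.
triples = [("alice", "knows", "bob"), ("alice", "knows", "carol"),
           ("bob", "knows", "dave"), ("carol", "worksWith", "dave")]
importance = {"alice": 5, "bob": 3, "carol": 8, "dave": 1}

def top_k_matches(label, k):
    """Return the k highest-scoring edges carrying `label`, where an
    answer's score is the summed importance of its matched vertices.
    A toy illustration of ranked subgraph answers with heap pruning."""
    heap = []  # min-heap of (score, answer), kept at size <= k
    for s, p, o in triples:
        if p != label:
            continue  # label mismatch: prune without scoring
        score = importance[s] + importance[o]
        heapq.heappush(heap, (score, (s, o)))
        if len(heap) > k:
            heapq.heappop(heap)  # discard the current worst answer
    return sorted(heap, reverse=True)

print(top_k_matches("knows", 2))
```

The bounded heap keeps memory at O(k) regardless of how many answers match, which is the basic motivation behind top-k pruning; the thesis's contribution is doing this at scale for full pattern queries rather than single edges.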

Relevance:

20.00%

Publisher:

Abstract:

One of the most significant research topics in computer vision is object detection. Most reported object detection results localise the detected object within a bounding box, but do not explicitly label the edge contours of the object. Since object contours provide a fundamental diagnostic of object shape, some researchers have initiated work on linear contour feature representations for object detection and localisation. However, linear contour feature-based localisation is highly dependent on the performance of linear contour detection within natural images, which can be perturbed significantly by a cluttered background. In addition, the conventional approach to achieving rotation-invariant features is to rotate the feature receptive field to align with the local dominant orientation before computing the feature representation. Grid resampling after rotation adds extra computational cost and increases the total time needed to compute the feature descriptor. Although this is not an expensive process on current computers, it is still desirable for each step of the implementation to be fast, especially when the number of local features grows and the application must run in real time on resource-limited "smart devices" such as mobile phones.

Motivated by these issues, this thesis proposes a 2D object localisation system that matches features of edge contour points, an alternative method that exploits shape information for object localisation. It is inspired by the fact that edge contour points are the basic components of shape contours, and edge point detection is usually simpler to achieve than linear edge contour detection. The proposed localisation system therefore avoids the need for linear contour detection and reduces pathological disruption from the image background. Moreover, since natural images usually contain many more edge contour points than interest points (i.e. corner points), we also propose new methods to generate rotation-invariant local feature descriptors without pre-rotating the feature receptive field, improving the computational efficiency of the whole system. In detail, the 2D object localisation system matches edge contour point features in a constrained search area based on the initial pose estimate produced by a prior object detection process. The local feature descriptor obtains rotation invariance by exploiting the rotational symmetry of the hexagonal structure, and a set of local feature descriptors is proposed based on a hierarchically hexagonal grouping structure. Ultimately, the 2D object localisation system achieves very promising performance based on matching the proposed features of edge contour points, with a mean correct labelling rate of 0.8654 and a mean false labelling rate of 0.0314 on data from the Amsterdam Library of Object Images (ALOI). Furthermore, the proposed descriptors are evaluated against state-of-the-art descriptors and achieve competitive performance in pose estimation, with around half a pixel of pose error.
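The edge point detection step that the system builds on can be sketched as a simple gradient-magnitude threshold; the thesis's hexagonal rotation-invariant descriptors are not reproduced here, and the image and threshold below are illustrative:

```python
def edge_points(img, thresh):
    """Mark pixels whose central-difference gradient magnitude exceeds
    `thresh` as edge contour points.  A minimal sketch of edge point
    detection, not the thesis's pipeline."""
    h, w = len(img), len(img[0])
    pts = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]  # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                pts.append((x, y))
    return pts

# A 5x5 image with a vertical step edge between columns 1 and 2
img = [[0, 0, 9, 9, 9] for _ in range(5)]
print(edge_points(img, 5.0))  # edge contour points along the step
```

Even this crude detector shows why edge contour points are so plentiful compared with corner points: every pixel along the step qualifies, which is what makes point-wise matching in a constrained search area attractive.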

Relevance:

20.00%

Publisher:

Abstract:

A dense grid of high- and very-high-resolution seismic data, together with piston cores and borehole data providing time constraints, enables us to reconstruct the history of the Bourcart canyon head in the western Mediterranean Sea during the last glacial/interglacial cycle. The canyon fill is composed of confined channel-levee systems fed by a series of successively active shelf fluvial systems, originating from the west and north. Most of the preserved infill corresponds to the interval between Marine Isotope Stage (MIS) 3 and the early deglacial (19 cal ka BP). Its deposition was strongly controlled by relative sea level, which governed the direct fluvial/canyon connection. Over a period of around 100 kyr between MIS 6 and MIS 2, the canyon "prograded" by about 3 km. More precisely, several parasequences can be identified within the canyon fill. They correspond to forced-regressed parasequences (linked to punctuated sea-level falls) topped by a progradational-aggradational parasequence (linked to a hypothetical 19-ka meltwater pulse (MWP)). The bounding surfaces between forced-regressed parasequences are condensed intervals formed during periods of relative sediment starvation associated with flooding episodes. The meandering pattern of the axial incision visible within the canyon head, which can be traced landward up to the Agly paleo-river, is interpreted as the result of hyperpycnal flows initiated at the river mouth in a context of increased rainfall and mountain glacier flushing during the early deglacial.