995 results for distances


Relevance:

10.00%

Publisher:

Abstract:

This study is both theoretical and empirical in character. Its main aim is to verify two hypotheses fundamental to the Austrian business cycle theory. The first hypothesis states that restrictive monetary policy shortens the structure of production; the second, that interest rate increases lead to relative differences in the output declines of industries producing goods at different distances from the consumer. The analysis carried out in the study, based on the adopted indicators, allows both hypotheses to be verified positively. Interest rate increases by Narodowy Bank Polski led to the emergence of downward phases of the cycle. During these periods, the production time of final goods in the Polish economy decreased, and relative differences appeared in the output declines of industries at varying distances from the final consumer.


Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Dental Medicine.


This technical report presents a combined solution to two problems: first, tracking objects in 3D space and estimating their trajectories; second, computing the similarity between the estimated trajectories and clustering them according to those similarities. For the first problem, trajectories are estimated using an EKF formulation that provides the 3D trajectory up to a constant. To improve accuracy when occlusions appear, multiple hypotheses are followed. For the second problem, we compute distances between trajectories using a similarity measure based on the LCSS formulation; similarities are computed between the projections of trajectories onto the coordinate axes. Finally, we group trajectories based on the previously computed distances using a clustering algorithm. To validate our approach, several experiments were performed on real data.
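The LCSS-based similarity above can be sketched in a few lines of dynamic programming. This is an illustrative implementation for 1-D projections of trajectories; the matching thresholds `eps` (value) and `delta` (time index) are assumptions, not the report's actual parameters:

```python
def lcss_length(a, b, eps, delta):
    """Length of the longest common subsequence of two 1-D trajectories
    a and b, where samples match if they differ by less than eps in
    value and at most delta in time index."""
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) < eps and abs(i - j) <= delta:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]

def lcss_distance(a, b, eps, delta):
    """Distance in [0, 1]: 0 for trajectories that match everywhere."""
    return 1.0 - lcss_length(a, b, eps, delta) / min(len(a), len(b))
```

A distance of this form can be fed directly to a standard clustering algorithm, as the report describes.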


As distributed information services like the World Wide Web become increasingly popular on the Internet, problems of scale are clearly evident. A promising technique that addresses many of these problems is service (or document) replication. However, when a service is replicated, clients then need the additional ability to find a "good" provider of that service. In this paper we report on techniques for finding good service providers without a priori knowledge of server location or network topology. We consider the use of two principal metrics for measuring distance in the Internet: hops and round-trip latency. We show that these two metrics yield very different results in practice. Surprisingly, we show data indicating that the number of hops between two hosts in the Internet is not strongly correlated with round-trip latency. Thus, the distance in hops between two hosts is not necessarily a good predictor of the expected latency of a document transfer. Instead of using known or measured distances in hops, we show that the extra cost at runtime incurred by dynamic latency measurement is well justified by the resulting improved performance. In addition we show that selection based on dynamic latency measurement performs much better in practice than any static selection scheme. Finally, the difference between the distribution of hops and latencies is fundamental enough to suggest differences in algorithms for server replication. We show that conclusions drawn about service replication based on the distribution of hops need to be revised when the distribution of latencies is considered instead.
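A minimal sketch of dynamic latency-based replica selection follows. It is not the paper's measurement method; it simply assumes that the time to complete a TCP handshake is an adequate round-trip probe:

```python
import socket
import time

def rtt(host, port=80, timeout=2.0):
    """Crude latency probe: time to complete a TCP handshake.
    Returns None if the host is unreachable within the timeout."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_replica(hosts, probe=rtt):
    """Dynamic selection: probe every replica and choose the one with
    the lowest measured latency, ignoring unreachable hosts."""
    timed = [(probe(h), h) for h in hosts]
    reachable = [(t, h) for t, h in timed if t is not None]
    return min(reachable)[1] if reachable else None
```

The `probe` parameter makes the selection policy testable with recorded or simulated latencies, which is useful when comparing dynamic selection against a static scheme.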


We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular using landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method in two problems arising naturally in large-scale networks, namely social search and community detection.
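The offline/online split described above can be sketched as follows. BFS suffices here because the example graph is unweighted; the landmarks are simply given, so none of the paper's selection heuristics are reproduced. The estimate is an upper bound on the true distance by the triangle inequality, exact whenever a landmark lies on a shortest path between the two query nodes:

```python
from collections import deque

def bfs_distances(adj, src):
    """Single-source hop distances in an unweighted graph
    given as an adjacency dict {node: [neighbors]}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_index(adj, landmarks):
    """Offline phase: distances from every node to each landmark."""
    return [bfs_distances(adj, l) for l in landmarks]

def estimate(index, u, v):
    """Online phase: upper bound on d(u, v) via the triangle
    inequality, minimised over the landmarks."""
    return min(d[u] + d[v] for d in index)
```

On a path graph 0-1-2-3-4, landmark 2 gives the exact answer for the query (0, 4), while landmark 0 overestimates the query (1, 3); this is exactly why landmark placement matters.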


A common problem in many types of databases is retrieving the most similar matches to a query object. Finding those matches in a large database can be too slow to be practical, especially in domains where objects are compared using computationally expensive similarity (or distance) measures. This paper proposes a novel method for approximate nearest neighbor retrieval in such spaces. Our method is embedding-based, meaning that it constructs a function that maps objects into a real vector space. The mapping preserves a large amount of the proximity structure of the original space, and it can be used to rapidly obtain a short list of likely matches to the query. The main novelty of our method is that it constructs, together with the embedding, a query-sensitive distance measure that should be used when measuring distances in the vector space. The term "query-sensitive" means that the distance measure changes depending on the current query object. We report experiments with an image database of handwritten digits, and a time-series database. In both cases, the proposed method outperforms existing state-of-the-art embedding methods, meaning that it provides significantly better trade-offs between efficiency and retrieval accuracy.
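As an illustration of the filter-and-refine idea behind embedding-based retrieval, the sketch below maps each object to a vector of its distances to a few reference objects and ranks candidates by L1 distance in the embedded space. The query-sensitive weighting that is the paper's actual contribution is not reproduced; reference objects and distance function here are illustrative assumptions:

```python
def embed(x, refs, dist):
    """Map an object to the vector of its distances to reference objects."""
    return [dist(x, r) for r in refs]

def shortlist(query, db, refs, dist, k):
    """Filter step: rank database objects by L1 distance between
    embeddings and return the k most promising candidates, which a
    refine step would then compare to the query exactly."""
    qe = embed(query, refs, dist)
    def l1(e):
        return sum(abs(a - b) for a, b in zip(qe, e))
    return sorted(db, key=lambda x: l1(embed(x, refs, dist)))[:k]
```

Only the short list is compared with the expensive exact measure, which is where the efficiency/accuracy trade-off discussed in the abstract arises.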


CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.


We present two algorithms for computing distances along a non-convex polyhedral surface. The first algorithm computes exact minimal-geodesic distances and the second algorithm combines these distances to compute exact shortest-path distances along the surface. Both algorithms have been extended to compute the exact minimal-geodesic paths and shortest paths. These algorithms have been implemented and validated on surfaces for which the correct solutions are known, in order to verify the accuracy and to measure the run-time performance, which is cubic or less for each algorithm. The exact-distance computations carried out by these algorithms are feasible for large-scale surfaces containing tens of thousands of vertices, and are a necessary component of near-isometric surface flattening methods that accurately transform curved manifolds into flat representations.
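The exact algorithms of the report are not reproduced here; a common baseline, sketched below, approximates surface distance by running Dijkstra's algorithm over the mesh edge graph. Because paths are restricted to mesh edges, this can only overestimate the true geodesic distance, which is the gap the exact methods close:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted graph given
    as an adjacency dict {node: [(neighbor, edge_length), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Running this from every vertex of interest yields the all-pairs edge-graph distances that surface flattening methods consume, at the cost of the edge-restriction error noted above.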


An improved Boundary Contour System (BCS) and Feature Contour System (FCS) neural network model of preattentive vision is applied to large images containing range data gathered by a synthetic aperture radar (SAR) sensor. The goal of processing is to make structures such as motor vehicles, roads, or buildings more salient and more interpretable to human observers than they are in the original imagery. Early processing by shunting center-surround networks compresses signal dynamic range and performs local contrast enhancement. Subsequent processing by filters sensitive to oriented contrast, including short-range competition and long-range cooperation, segments the image into regions. The segmentation is performed by three "copies" of the BCS and FCS, of small, medium, and large scales, wherein the "short-range" and "long-range" interactions within each scale occur over smaller or larger distances, corresponding to the size of the early filters of each scale. A diffusive filling-in operation within the segmented regions at each scale produces coherent surface representations. The combination of BCS and FCS helps to locate and enhance structure over regions of many pixels, without the resulting blur characteristic of approaches based on low spatial frequency filtering alone.
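The shunting center-surround stage mentioned above admits a closed-form equilibrium, evaluated by the sketch below; the parameter values are illustrative, not those of the BCS/FCS model:

```python
def shunting_steady_state(center, surround, A=1.0, B=1.0, D=1.0):
    """Equilibrium of a shunting on-center off-surround cell,
        dx/dt = -A*x + (B - x)*center - (x + D)*surround,
    solved for dx/dt = 0. The divisive denominator compresses the
    dynamic range while the numerator preserves local contrast."""
    return (B * center - D * surround) / (A + center + surround)
```

Note that scaling both inputs by a large factor barely changes the output (compression), while balanced center and surround inputs cancel exactly (contrast enhancement), which is the behaviour the abstract attributes to the early processing stage.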


This article introduces an unsupervised neural architecture for the control of a mobile robot. The system allows incremental learning of the plant during robot operation, with robust performance despite unexpected changes in robot parameters such as wheel radius and inter-wheel distance. The model combines Vector Associative Map (VAM) learning and associative learning, enabling the robot to reach targets at arbitrary distances without knowledge of the robot kinematics and without trajectory recording, by relating wheel velocities to robot movements.


This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm-deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control.
These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
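The core VITE dynamics can be sketched with a simple Euler integration. The equations follow the published form (a difference vector V that tracks the target error, and a present-position command P that integrates the rectified difference vector gated by a GO signal), but the scalar one-joint setting, gains, and step size below are illustrative assumptions chosen for stable convergence:

```python
def vite_step(V, P, T, go, alpha=1.0, dt=0.01):
    """One Euler step of the VITE trajectory generator:
      dV/dt = alpha * (-V + T - P)   # difference vector tracks target error
      dP/dt = go * max(V, 0)         # rectified V, gated by the GO signal
    """
    dV = alpha * (-V + T - P)
    dP = go * max(V, 0.0)
    return V + dt * dV, P + dt * dP

def reach(T=1.0, go=0.25, steps=5000, V=0.0, P=0.0):
    """Integrate until the movement toward target T is complete."""
    for _ in range(steps):
        V, P = vite_step(V, P, T, go)
    return P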


Anisotropic specimens of MoS2 are obtained by pressing the microcrystalline powder into a special die. This inelastic compression results in a rearrangement of the disulfide microplatelets, observed by Atomic Force Microscopy and reflected in the macroscopic anisotropy of electrical conductivity in these samples. The conductivity measured parallel and perpendicular to the direction of applied pressure exhibits an anisotropy factor of ∼10 at 1 GPa. This behaviour of the conductivity as a function of applied pressure is explained as the result of the simultaneous influence of a rearrangement of the microplatelets in the solid and the change in the inter-grain distances.


The development of ultra high speed (~20 Gsamples/s) analogue to digital converters (ADCs), and the delayed deployment of 40 Gbit/s transmission due to the economic downturn, has stimulated the investigation of digital signal processing (DSP) techniques for compensation of optical transmission impairments. In the future, DSP will offer an entire suite of tools to compensate for optical impairments and facilitate the use of advanced modulation formats. Chromatic dispersion is a very significant impairment for high speed optical transmission. This thesis investigates a novel electronic method of dispersion compensation which allows for cost-effective, accurate detection of the amplitude and phase of the optical field in the radio frequency domain. The first electronic dispersion compensation (EDC) schemes accessed only the amplitude information using square-law detection and achieved an increase in transmission distances. This thesis presents a method of using a frequency-sensitive filter to estimate the phase of the received optical field; in conjunction with the amplitude information, the entire field can then be digitised using ADCs. This allows DSP technologies to take the next step in optical communications without requiring complex coherent detection, which is of particular interest in metropolitan area networks. The full-field receiver investigated requires only an additional asymmetrical Mach-Zehnder interferometer and balanced photodiode to achieve a 50% increase in EDC reach compared to amplitude-only detection.


Integrated nanowire electrodes that permit direct, sensitive and rapid electrochemical based detection of chemical and biological species are a powerful emerging class of sensor devices. As critical dimensions of the electrodes enter the nanoscale, radial analyte diffusion profiles to the electrode dominate, with a corresponding enhancement in mass transport, steady-state sigmoidal voltammograms, low depletion of target molecules and faster analysis. To optimise these sensors it is necessary to fully understand the factors that influence performance limits, including: electrode geometry, electrode dimensions, electrode separation distances (within nanowire arrays) and diffusional mass transport. Therefore, in this thesis, theoretical simulations of analyte diffusion occurring at a variety of electrode designs were undertaken using Comsol Multiphysics®. Sensor devices were fabricated and corresponding experiments were performed to challenge simulation results. Two approaches for the fabrication and integration of metal nanowire electrodes are presented: Template Electrodeposition and Electron-Beam Lithography. These approaches allow for the fabrication of nanowires which may be subsequently integrated at silicon chip substrates to form fully functional electrochemical devices. Simulated and experimental results were found to be in excellent agreement, validating the simulation model. The electrochemical characteristics exhibited by nanowire electrodes fabricated by electron-beam lithography were directly compared against the electrochemical performance of a commercial ultra-microdisc electrode. Steady-state cyclic voltammograms in ferrocenemonocarboxylic acid at single ultra-microdisc electrodes were observed at low to medium scan rates (≤ 500 mV.s-1). At nanowires, steady-state responses were observed at ultra-high scan rates (up to 50,000 mV.s-1), thus allowing for much faster analysis (20 ms).
Approaches for elucidating the faradaic signal without the requirement for background subtraction were also developed. Furthermore, diffusional processes occurring at arrays with increasing inter-electrode distance and increasing numbers of nanowires were explored. Diffusion profiles existing at nanowire arrays were simulated with Comsol Multiphysics®. A range of scan rates were modelled, and experiments were undertaken at 5,000 mV.s-1, since this allows the rapid data capture required for, e.g., biomedical, environmental and pharmaceutical diagnostic applications.


Polymorphic microsatellite DNA loci were used here in three studies, one on Salmo salar and two on S. trutta. In the case of S. salar, the survival of native fish, non-natives from a nearby catchment, and their hybrids was compared in a freshwater common-garden experiment and subsequently in ocean ranching, with parental assignment utilising microsatellites. Overall survival of non-natives was 35% of natives. This differential survival occurred mainly in the oceanic phase. These results imply a genetic basis and suggest that local adaptation can occur in salmonids across relatively small geographic distances, which may have important implications for the management of salmon populations. In the first case study with S. trutta, the species was investigated throughout its spread as an invasive in Newfoundland, eastern Canada. Genetic investigation confirmed historical records that the majority of introductions were from a Scottish hatchery and provided a clear example of the structure of two expanding waves of spread along coasts, probably by natural straying of anadromous individuals, to the north and south of the point of human introduction. This study showed a clearer example of the genetic anatomy of an invasion than previous studies with brown trout, and may have implications for the management of invasive species in general. Finally, the genetics of anadromous S. trutta from the Waterville catchment in south-western Ireland were studied. Two significantly different population groupings, from tributaries in geographically distinct locations entering the largest lake in the catchment, were identified. These results were then used to assign very large rod-caught sea trout individuals (so-called “specimen” sea trout) back to region of origin, in a Genetic Stock Identification exercise. This suggested that the majority of these large sea trout originated from one of the two tributary groups.
These results are relevant for the understanding of sea trout population dynamics and for the future management of this and other sea trout producing catchments. This thesis has demonstrated new insights into the population structuring of salmonids both between and within catchments. While these chapters look at the existence and scale of genetic variation from different angles, it might be concluded that the overarching message from this thesis should be to highlight the importance of maintaining genetic diversity in salmonid populations as vital for their long-term productivity and resilience.