914 results for search and matching
Abstract:
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory to study the universe in the very-high-energy domain. The observatory will rely on a Science Alert Generation (SAG) system to analyze the real-time data from the telescopes and generate science alerts. The SAG system will play a crucial role in the search for and follow-up of transients from external alerts, enabling multi-wavelength and multi-messenger collaborations. It will maximize the potential for the detection of the rarest phenomena, such as gamma-ray bursts (GRBs), which are the science case for this study. This study presents an anomaly detection method based on deep learning for detecting gamma-ray burst events in real time. The performance of the proposed method is evaluated and compared against the standard Li&Ma technique in two use cases, serendipitous discoveries and follow-up observations, using short exposure times. The method shows promising results in detecting GRBs and is flexible enough to allow a real-time search for transient events on multiple time scales. The method assumes neither a background nor a source model and does not require a minimum number of photon counts to perform the analysis, making it well suited for real-time analysis. Future improvements involve further tests, relaxing some of the assumptions made in this study, as well as post-trials correction of the detection significance. Moreover, the ability to detect other transient classes in different scenarios must be investigated for completeness. The system can be integrated within the SAG system of CTA and deployed on the onsite computing clusters. This would provide valuable insights into the method's performance in a real-world setting and offer another tool for discovering new transient events in real time. Overall, this study makes a significant contribution to the field of astrophysics by demonstrating the effectiveness of deep learning-based anomaly detection techniques for real-time source detection in gamma-ray astronomy.
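For reference, the Li&Ma baseline named above is the standard significance of Li & Ma (1983, Eq. 17), computed from the on-source counts N_on, the off-source counts N_off, and the ratio alpha of on-source to off-source exposure:

S = \sqrt{2}\left\{ N_{\mathrm{on}} \ln\!\left[\frac{1+\alpha}{\alpha}\,\frac{N_{\mathrm{on}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right] + N_{\mathrm{off}} \ln\!\left[(1+\alpha)\,\frac{N_{\mathrm{off}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right] \right\}^{1/2}

Unlike the proposed deep-learning method, this statistic presumes a counting analysis with well-defined on/off regions and sufficient counts for its asymptotic validity.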
Abstract:
Unmanned Aerial Vehicles (UAVs) equipped with cameras have been rapidly deployed in a wide range of applications, such as smart cities, agriculture, or search and rescue. Although UAV datasets exist, the number of open, high-quality UAV datasets is limited. We aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format. The input specifies which environment preset is used and the objects to be placed in the environment, along with their position and orientation as well as additional information such as object color and size. The result is an environment that is able to produce typical UAV data: RGB images from the UAV's camera and the altitude, roll, pitch and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data by using synthetic-to-real transfer learning methods. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different, although related, problem. This approach has been widely researched in related fields, and results demonstrate it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good-quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently given, so we are capable of extending the already existing UAV datasets with realistic-quality images and high-resolution metadata. During the development of this thesis we achieved a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
Abstract:
In many developing countries, clusters of small shops are the typical marketplace. We investigate an economic model in which a circular causality between buyers and sellers in a marketplace, including the search process, produces agglomeration forces, with the initial location of the marketplace given exogenously in a linear city. We conclude that the initial number of buyers and sellers is important in forming a large marketplace.
Abstract:
We develop a neoclassical trade model with heterogeneous factors of production. We consider a world with two factors, labor and "managers", each with a distribution of ability levels. Production combines a manager of some type with a group of workers. The output of a unit depends on the types of the two factors, with complementarity between them, while exhibiting diminishing returns to the number of workers. We examine the sorting of factors to sectors and the matching of factors within sectors, and we use the model to study the determinants of the trade pattern and the effects of trade on the wage and salary distributions. Finally, we extend the model to include search frictions and consider the distribution of employment rates.
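The abstract does not report the production unit's functional form; a minimal illustrative specification consistent with the description (complementarity between manager ability q_M and worker ability q_L, diminishing returns to the number of workers n) would be, for example,

y = q_M^{\alpha}\, q_L^{\beta}\, n^{\gamma}, \qquad \alpha,\beta>0,\quad 0<\gamma<1,

so that \partial^2 y/\partial q_M\,\partial q_L>0 (the two factor types are complements) while \partial^2 y/\partial n^2<0 (diminishing returns to group size). This parameterization is only a hypothetical illustration, not the paper's specification.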
Abstract:
We consider a frictional two-sided matching market in which one side uses public cheap talk announcements so as to attract the other side. We show that if the first-price auction is adopted as the trading protocol, then cheap talk can be perfectly informative, and the resulting market outcome is efficient, constrained only by search frictions. We also show that the performance of an alternative trading protocol in the cheap-talk environment depends on the level of price dispersion generated by the protocol: If a trading protocol compresses (spreads) the distribution of prices relative to the first-price auction, then an efficient fully revealing equilibrium always (never) exists. Our results identify the settings in which cheap talk can serve as an efficient competitive instrument, in the sense that the central insights from the literature on competing auctions and competitive search continue to hold unaltered even without ex ante price commitment.
Abstract:
A welfare analysis of unemployment insurance (UI) is performed in a general equilibrium job search model. Finitely-lived, risk-averse workers smooth consumption over time by accumulating assets, choose search effort when unemployed, and suffer disutility from work. Firms hire workers, purchase capital, and pay taxes to finance worker benefits; their equity is the asset accumulated by workers. A matching function relates unemployment, hiring expenditure, and search effort to the formation of jobs. The model is calibrated to US data; the parameters relating job search effort to the probability of job finding are chosen to match microeconomic studies of unemployment spells. Under logarithmic utility, numerical simulation shows rather small welfare gains from UI. Even without UI, workers smooth consumption effectively through asset accumulation. Greater risk aversion leads to substantially larger welfare gains from UI; however, even in this case much of its welfare impact is due not to consumption smoothing effects, but rather to decreased work disutility, or to a variety of externalities.
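The abstract does not state the matching function's form; a common illustrative choice in this literature, assumed here purely for concreteness, is a Cobb-Douglas function of aggregate search effort and hiring expenditure,

M = A\,(s\,u)^{\eta}\,h^{1-\eta}, \qquad 0<\eta<1,

where u is unemployment, s is average search effort per unemployed worker, h is hiring expenditure, and an individual worker's job-finding probability increases with her own effort, e.g. in proportion to s_i M/(s u).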
Abstract:
We consider two-sided many-to-many matching markets in which each worker may work for multiple firms and each firm may hire multiple workers. We study individual and group manipulations in centralized markets that employ (pairwise) stable mechanisms and that require participants to submit rank order lists of agents on the other side of the market. We are interested in simple preference manipulations that have been reported and studied in empirical and theoretical work: truncation strategies, which are the lists obtained by removing a tail of least preferred partners from a preference list, and the more general dropping strategies, which are the lists obtained by only removing partners from a preference list (i.e., no reshuffling). We study when truncation / dropping strategies are exhaustive for a group of agents on the same side of the market, i.e., when each match resulting from preference manipulations can be replicated or improved upon by some truncation / dropping strategies. We prove that for each stable mechanism, truncation strategies are exhaustive for each agent with quota 1 (Theorem 1). We show that this result can be extended neither to group manipulations (even when all quotas equal 1 - Example 1), nor to individual manipulations when the agent's quota is larger than 1 (even when all other agents' quotas equal 1 - Example 2). Finally, we prove that for each stable mechanism, dropping strategies are exhaustive for each group of agents on the same side of the market (Theorem 2), i.e., independently of the quotas.
Abstract:
Image and video compression play a major role in the world today, allowing the storage and transmission of large multimedia content volumes. However, the processing of this information requires high computational resources, hence improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia contents, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes some time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU, respectively. In this dissertation, to complement the referred work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU version into OpenCL-CPU. The proposed solutions improve the computational performance of MMP by factors of 3 and 2.7, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for the compression of image and video. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic image/video (or light field) processing. Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensations (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These compression algorithms for holoscopic images based on HEVC implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but considerably slower. In order to achieve better execution times, we chose to use the OpenCL API as the GPU-enabling language to increase the module's performance. For its most costly setting, we are able to reduce the GT module execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45.
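As a rough illustration of the OpenMP route (a generic sketch with assumed types, not the MMP implementation from the dissertation), the search for the best dictionary match of each image block is independent across blocks, so the outer loop parallelizes with a single pragma (ignored if compiled without OpenMP support):

// Illustrative only: generic parallel block-to-dictionary matching.
// Block, ssd and the brute-force scan are hypothetical stand-ins for
// MMP's adaptive dictionary and segmentation logic.
#include <cstddef>
#include <limits>
#include <vector>

struct Block { std::vector<float> pixels; };

// Sum of squared differences between two equally sized blocks.
static float ssd(const Block& a, const Block& b) {
    float d = 0.0f;
    for (std::size_t i = 0; i < a.pixels.size(); ++i) {
        const float e = a.pixels[i] - b.pixels[i];
        d += e * e;
    }
    return d;
}

// For each block, return the index of the best-matching dictionary entry.
std::vector<int> matchBlocks(const std::vector<Block>& blocks,
                             const std::vector<Block>& dictionary) {
    std::vector<int> best(blocks.size(), -1);
    #pragma omp parallel for schedule(dynamic)
    for (long i = 0; i < static_cast<long>(blocks.size()); ++i) {
        float bestCost = std::numeric_limits<float>::max();
        for (std::size_t j = 0; j < dictionary.size(); ++j) {
            const float c = ssd(blocks[i], dictionary[j]);
            if (c < bestCost) { bestCost = c; best[i] = static_cast<int>(j); }
        }
    }
    return best;
}

The dynamic schedule is a judgment call: with variable block sizes, as in MMP's multiscale segmentation, iteration costs are uneven and static partitioning would likely leave threads idle.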
Abstract:
The expansion of the three-term contingency into units with four or more elements has opened new perspectives for understanding complex behaviors, such as the emergence of responses derived from the formation of equivalent stimulus classes, which model symbolic and conceptual behaviors. In experimental research, the matching-to-sample procedure has frequently been employed to establish conditional discriminations. In particular, obtaining generalized identity matching is considered demonstrative of the acquisition of the concepts of sameness and difference. We argue that seeking to understand these concepts through conditional discriminative processes may have been responsible for the frequent failures to demonstrate them in non-human subjects. The lack of correspondence between the discriminative processes responsible for establishing the reflexivity relation among stimuli that form equivalent classes and generalized identity matching is reviewed here across empirical studies and discussed with respect to its implications.
Abstract:
We searched for a sidereal modulation in the MINOS far detector neutrino rate. Such a signal would be a consequence of Lorentz and CPT violation as described by the standard-model extension framework. It also would be the first detection of a perturbative effect to conventional neutrino mass oscillations. We found no evidence for this sidereal signature, and the upper limits placed on the magnitudes of the Lorentz and CPT violating coefficients describing the theory are an improvement by factors of 20-510 over the current best limits found by using the MINOS near detector.
Abstract:
In this article, we evaluate the use of simple Lee-Goldburg cross-polarization (LG-CP) NMR experiments for obtaining quantitative information on molecular motion in the intermediate regime. In particular, we introduce the measurement of Hartmann-Hahn matching profiles for the assessment of heteronuclear dipolar couplings as well as dynamics, as a reliable and robust alternative to the more common analysis of build-up curves. We have carried out spin dynamics simulations in order to test the method's sensitivity to intermediate motion and to address its limitations concerning possible experimental imperfections. We further demonstrate the successful use of simple theoretical concepts, most prominently Anderson-Weiss (AW) theory, to analyze the data. We also propose an alternative way to estimate activation energies of molecular motions, based upon the acquisition of only two LG-CP spectra per temperature at different temperatures. As experimental tests, molecular jumps in imidazole methyl sulfonate, trimethylsulfoxonium iodide, and bisphenol A polycarbonate were investigated with the new method.
Abstract:
In this paper, we consider a real-life heterogeneous fleet vehicle routing problem with time windows and split deliveries that occurs in a major Brazilian retail group. A single depot serves 519 stores of the group, distributed over 11 Brazilian states. To find good solutions to this problem, we propose heuristics for building initial solutions and a scatter search (SS) approach. The produced solutions are then compared with the routes actually covered by the company. Our results show that the total distribution cost can be reduced significantly when such methods are used. Experimental testing with benchmark instances is used to assess the merit of our proposed procedure.
Abstract:
We consider brightness/contrast-invariant and rotation-discriminating template matching that searches an image to analyze A for a query image Q. We propose to use the complex coefficients of the discrete Fourier transform of the radial projections to compute new rotation-invariant local features. These coefficients can be efficiently obtained via FFT. We classify templates into "stable" and "unstable" ones and argue that any local feature-based template matching may fail to find unstable templates. We extract several stable sub-templates of Q and find them in A by comparing the features. The matches of the sub-templates are combined using the Hough transform. As the features of A are computed only once, the algorithm can quickly find many different sub-templates in A, and it is suitable for finding many query images in A, multi-scale searching and partial-occlusion-robust template matching.
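A minimal sketch of the rotation-invariance idea (with an assumed image type and parameters, not the authors' implementation): rotating the image circularly shifts the vector of radial projections over the angle index, so DFT magnitudes taken over that index are unchanged; the complex phases, which the paper additionally exploits to discriminate the rotation angle, are discarded here.

// Illustrative only: rotation-invariant local features from the DFT of
// radial projections around a pixel (cx, cy).
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> px;                       // row-major grayscale
    float at(int x, int y) const { return px[static_cast<std::size_t>(y) * w + x]; }
};

std::vector<float> radialDftFeatures(const Image& img, int cx, int cy,
                                     int K, int rayLen) {
    const double kPi = 3.14159265358979323846;
    // Radial projection k: mean intensity along a ray at angle 2*pi*k/K.
    std::vector<float> proj(K, 0.0f);
    for (int k = 0; k < K; ++k) {
        const double ang = 2.0 * kPi * k / K;
        float sum = 0.0f;
        for (int r = 1; r <= rayLen; ++r) {
            const int x = cx + static_cast<int>(std::lround(r * std::cos(ang)));
            const int y = cy + static_cast<int>(std::lround(r * std::sin(ang)));
            if (x >= 0 && x < img.w && y >= 0 && y < img.h) sum += img.at(x, y);
        }
        proj[k] = sum / static_cast<float>(rayLen);
    }
    // Naive DFT over the angular index (an FFT would be used in practice);
    // magnitudes are invariant to circular shifts, i.e. to image rotation.
    std::vector<float> feat;
    for (int m = 0; m <= K / 2; ++m) {
        std::complex<double> c(0.0, 0.0);
        for (int k = 0; k < K; ++k)
            c += std::polar(static_cast<double>(proj[k]), -2.0 * kPi * m * k / K);
        feat.push_back(static_cast<float>(std::abs(c)));
    }
    return feat;
}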
Impact of Commercial Search Engines and International Databases on Engineering Teaching and Research
Abstract:
For the last three decades, engineering higher education and professional environments have been completely transformed by the "electronic/digital information revolution", which has included the introduction of the personal computer, the development of email and the World Wide Web, and broadband Internet connections at home. Herein the writer compares the performance of several digital tools with traditional library resources. While new specialised search engines and open-access digital repositories may fill a gap between conventional search engines and traditional references, they should not be confused with real libraries and international scientific databases that encompass textbooks and peer-reviewed scholarly works. An absence from some Internet search listings, databases and repositories is not an indication of standing. Researchers, engineers and academics should remember these key differences when assessing the quality of bibliographic "research" based solely upon Internet searches.
Abstract:
Published in the final months of 1891, Architecture, Mysticism and Myth was the first architectural treatise written by the late nineteenth-century English architect and theorist William Richard Lethaby (1857-1931). Documenting the characteristic attributes of the architectural myth of the "temple idea", and its presence amongst the architectures of multiple ancient cultures, the text was endowed with a distinctly historical tone. In examining the motives behind myth, which Lethaby defined as the interaction and reaction between the natural universe and the built environment, Lethaby also injected a series of theoretical considerations into the text. It is clear that Lethaby's interest in the temple idea was not limited to its curious, prolific presence in past architectures, but also embraced a consideration of what lessons the temple idea may contribute to the struggle of the late nineteenth-century English architect to define an "art of the future".