964 results for Fast track
Abstract:
The algorithm developed uses an octree pyramid in which noise is reduced at the expense of spatial resolution. At a certain level, unsupervised clustering without spatial connectivity constraints is applied. After classification, isolated voxels and insignificant regions are removed by assigning them to their neighbours. The spatial resolution is then increased by down-projecting the regions, level by level. At each level, the uncertainty of the boundary voxels is minimised by dynamically selecting and classifying these voxels using adaptive 3D filtering. The algorithm is tested on different data sets, including NMR data.
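To make the pipeline concrete, here is a minimal Python sketch of the three stages the abstract names: pyramid construction by block averaging, unsupervised clustering at a coarse level, and level-by-level down-projection of labels. The function names, the level count and the plain intensity k-means are illustrative assumptions; the removal of insignificant regions and the adaptive 3D boundary filtering are omitted.

```python
import numpy as np

def downsample2x(vol):
    """Average non-overlapping 2x2x2 blocks: one pyramid level up
    (noise goes down at the expense of spatial resolution)."""
    z, y, x = (s // 2 * 2 for s in vol.shape)
    v = vol[:z, :y, :x]
    return v.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

def kmeans1d(values, k, iters=20):
    """Tiny k-means on voxel intensities, with no spatial constraints."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels

vol = np.random.rand(64, 64, 64)     # stand-in for an NMR volume
pyramid = [vol]
for _ in range(3):                   # 3 levels up; a real choice would be data-driven
    pyramid.append(downsample2x(pyramid[-1]))

coarse = pyramid[-1]
labels = kmeans1d(coarse.ravel(), k=3).reshape(coarse.shape)

for _ in range(3):                   # down-projection: nearest-neighbour label upsampling
    labels = labels.repeat(2, 0).repeat(2, 1).repeat(2, 2)
```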
Abstract:
Increasingly many applications in computer vision employ interest points. Algorithms like SIFT and SURF are based on partial derivatives of images smoothed with Gaussian filter kernels. These algorithms are fast and therefore very popular.
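A brief sketch of the shared building block mentioned: partial derivatives of a Gaussian-smoothed image, here combined into a SURF-style determinant-of-Hessian response. The image and the choice of sigma are placeholders; this is not the full SIFT or SURF pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(128, 128)   # stand-in grayscale image
sigma = 1.6                      # illustrative base scale

# Partial derivatives of the Gaussian-smoothed image: order=(dy, dx)
# selects the derivative order per axis.
Lx  = gaussian_filter(img, sigma, order=(0, 1))
Ly  = gaussian_filter(img, sigma, order=(1, 0))
Lxx = gaussian_filter(img, sigma, order=(0, 2))
Lyy = gaussian_filter(img, sigma, order=(2, 0))
Lxy = gaussian_filter(img, sigma, order=(1, 1))

# SURF-like response: determinant of the Hessian;
# local maxima of this map are interest point candidates.
doh = Lxx * Lyy - Lxy ** 2
```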
Abstract:
A feature detection system has been developed for real-time identification of lines, circles and people's legs from laser range data. A new method suitable for arc/circle detection is proposed: the Inscribed Angle Variance (IAV). Lines are detected using a recursive line-fitting method.
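The inscribed-angle idea can be sketched as follows: by the inscribed angle theorem, points on a circular arc see the chord between the cluster's endpoints under a constant angle, so the variance of that angle is low for arcs, while for straight segments the angle sits near 180°. The thresholds and the scan-segmentation step of the actual detector are not reproduced here.

```python
import numpy as np

def inscribed_angle_variance(pts):
    """Mean and variance of the angle at each interior point, subtending
    the chord from the first to the last point of the cluster."""
    a, b = pts[0], pts[-1]
    angles = []
    for p in pts[1:-1]:
        v1, v2 = a - p, b - p
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    angles = np.array(angles)
    return angles.mean(), angles.var()

# A circular arc: low variance, mean angle well below pi.
t = np.linspace(0.3, 2.8, 40)
arc = np.c_[np.cos(t), np.sin(t)] + 0.005 * np.random.randn(40, 2)
mean_arc, var_arc = inscribed_angle_variance(arc)

# A straight segment: angles cluster near pi instead.
line = np.c_[np.linspace(0, 1, 40), np.linspace(0, 2, 40)] + 0.005 * np.random.randn(40, 2)
mean_line, var_line = inscribed_angle_variance(line)
```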
Abstract:
Doctoral thesis (co-tutelle), Psychology (Educational Psychology), Faculdade de Psicologia da Universidade de Lisboa, Faculdade de Psicologia e de Ciências da Educação da Universidade de Coimbra, Technical University of Darmstadt, 2014
Abstract:
This paper describes in detail the design of a CMOS custom fast Fourier transform (FFT) processor for computing a 256-point complex FFT. The FFT is well suited for real-time spectrum analysis in instrumentation and measurement applications. The FFT butterfly processor reported here consists of one parallel-parallel multiplier and two adders. It is capable of performing one butterfly computation every 100 ns; it can therefore compute a 256-point complex FFT in 102.4 μs, excluding data input and output processes.
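The arithmetic checks out: a radix-2, 256-point FFT requires (N/2)·log2 N = 128 × 8 = 1024 butterflies, and 1024 × 100 ns = 102.4 μs. Below is a minimal Python model of the butterfly and the resulting FFT, as a generic textbook radix-2 formulation rather than the paper's hardware architecture.

```python
import numpy as np

def butterfly(a, b, w):
    """Radix-2 DIT butterfly: one complex multiply (w*b) feeding two adds."""
    t = w * b
    return a + t, a - t

def fft_recursive(x):
    """Textbook radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return x
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    top, bot = butterfly(even, odd, w)
    return np.concatenate([top, bot])

x = np.random.randn(256) + 1j * np.random.randn(256)
assert np.allclose(fft_recursive(x), np.fft.fft(x))

# Throughput check against the abstract: 1024 butterflies at 100 ns each.
print(128 * 8 * 100e-9)  # 1.024e-04 s = 102.4 us
```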
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
Central obesity is the hallmark of a number of non-inheritable disorders. The advent of imaging techniques such as MRI has allowed for a fast and accurate assessment of body fat content and distribution. However, image analysis continues to be one of the major obstacles to the use of MRI in large-scale studies. In this study we assess the validity of the recently proposed fat–muscle quantitation system (AMRA™ Profiler) for the quantification of intra-abdominal adipose tissue (IAAT) and abdominal subcutaneous adipose tissue (ASAT) from abdominal MR images. Abdominal MR images were acquired from 23 volunteers with a broad range of BMIs and analysed using sliceOmatic, the current gold standard, and the AMRA™ Profiler, based on a non-rigid image registration of a library of segmented atlases. The results show that there was a highly significant correlation between the fat volumes generated by the two analysis methods (Pearson correlation r = 0.97, p < 0.001), with the AMRA™ Profiler analysis being significantly faster (~3 min) than the conventional sliceOmatic approach (~40 min). There was also excellent agreement between the methods for the quantification of IAAT (AMRA 4.73 ± 1.99 versus sliceOmatic 4.73 ± 1.75 l, p = 0.97). For the AMRA™ Profiler analysis, the intra-observer coefficient of variation was 1.6% for IAAT and 1.1% for ASAT, the inter-observer coefficient of variation was 1.4% for IAAT and 1.2% for ASAT, the intra-observer correlation was 0.998 for IAAT and 0.999 for ASAT, and the inter-observer correlation was 0.999 for both IAAT and ASAT. These results indicate that precise and accurate measures of body fat content and distribution can be obtained in a fast and reliable form by the AMRA™ Profiler, opening up the possibility of large-scale human phenotypic studies.
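For reference, the agreement statistics quoted above are straightforward to compute; the sketch below shows the formulas on made-up numbers (the arrays are purely illustrative, not data from the study).

```python
import numpy as np

# Hypothetical paired IAAT volumes (litres) from the two analysis methods.
slice_omatic = np.array([3.1, 4.8, 6.2, 2.9, 5.5, 7.4])
amra         = np.array([3.0, 4.9, 6.4, 2.8, 5.6, 7.2])

# Pearson correlation between methods (the abstract reports r = 0.97).
r = np.corrcoef(slice_omatic, amra)[0, 1]

# Intra-observer coefficient of variation from repeated measurements:
# per-subject SD over per-subject mean, averaged and expressed in percent.
read1 = np.array([3.0, 4.9, 6.4, 2.8, 5.6, 7.2])
read2 = np.array([3.1, 4.8, 6.3, 2.9, 5.5, 7.3])
pairs = np.stack([read1, read2])
cv = 100 * (pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)).mean()
```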
Abstract:
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
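The paper's exact algorithm is not reproduced here, but a preconditioning in the same spirit is easy to sketch: with integer x-coordinates bounded by p, bucketing by column replaces sorting, and keeping only each column's lowest and highest point yields a simple polygonal chain that still contains every hull vertex (a hull vertex must be a column extreme, since any other point lies on the vertical segment between two same-column points). Names and the interface are illustrative.

```python
import numpy as np

def precondition(points, p):
    """Keep only each x-column's lowest and highest point. Bucketing by the
    bounded integer x-coordinate replaces sorting, so this runs in O(n + p).
    Walking the survivors left-to-right along the bottom and right-to-left
    along the top yields a simple polygonal chain containing the hull."""
    lo, hi = {}, {}
    for x, y in points:
        if x not in lo or y < lo[x]:
            lo[x] = y
        if x not in hi or y > hi[x]:
            hi[x] = y
    xs = [x for x in range(p + 1) if x in lo]   # O(p) column sweep, no sort
    bottom = [(x, lo[x]) for x in xs]
    top = [(x, hi[x]) for x in reversed(xs) if hi[x] != lo[x]]
    return bottom + top

pts = np.random.randint(0, 100, size=(10000, 2))   # min(p, q) <= n holds
chain = precondition(pts, p=100)                   # s points, ready to pipeline
```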
Abstract:
The top velocity of high-speed trains is generally limited by the ability to supply the proper amount of energy through the pantograph-catenary interface. The deterioration of this interaction can lead to loss of contact, which interrupts the energy supply and causes arcing between the pantograph and the catenary, or to excessive contact forces that promote wear between the contacting elements. Another important issue is assessing how the front pantograph influences the dynamic performance of the rear one in trainsets with two pantographs. In this work, the influence of track and environmental conditions on the pantograph-catenary interaction is addressed, with particular emphasis on multiple-pantograph operations. These studies are performed for high-speed trains running at 300 km/h, in relation to the separation between pantographs. Such studies help identify the service conditions and the external factors influencing the contact quality on the overhead system. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Hand-off (or hand-over), the process by which mobile nodes select the best access point available to transfer data, has been well studied in wireless networks. The performance of a hand-off process depends on the specific characteristics of the wireless links. In the case of low-power wireless networks, hand-off decisions must be taken carefully, considering the unique properties of inexpensive low-power radios. This paper addresses the design, implementation and evaluation of smart-HOP, a hand-off mechanism tailored for low-power wireless networks. This work has three main contributions. First, it formulates the hard hand-off process for low-power networks (such as typical wireless sensor networks - WSNs) with a probabilistic model, to investigate the impact of the most relevant channel parameters through an analytical approach. Second, it confirms the probabilistic model through simulation and further elaborates on the impact of several hand-off parameters. Third, it fine-tunes the most relevant hand-off parameters via an extended set of experiments in a realistic experimental scenario. The evaluation shows that smart-HOP performs well in the transitional region, achieving a relative delivery ratio above 98 percent and hand-off delays on the order of a few tens of milliseconds.
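A minimal sketch of a hard hand-off rule of the kind studied is given below: stay on the serving access point until its windowed link quality drops below a threshold, then switch only to a candidate that beats it by a hysteresis margin. The class, parameter names and values are illustrative placeholders, not the tuned smart-HOP parameters.

```python
from collections import deque

THRESHOLD = -90.0   # dBm: below this the serving link is considered unusable
HYSTERESIS = 5.0    # dB margin a candidate must exceed to trigger a switch
WINDOW = 4          # samples averaged per link, to damp ping-pong effects

class HandOff:
    def __init__(self):
        self.history = {}

    def report(self, ap, rssi):
        """Record a link-quality sample for an access point."""
        self.history.setdefault(ap, deque(maxlen=WINDOW)).append(rssi)

    def average(self, ap):
        h = self.history.get(ap)
        return sum(h) / len(h) if h else float("-inf")

    def decide(self, serving):
        if self.average(serving) >= THRESHOLD:
            return serving                      # link still good: no hand-off
        best = max(self.history, key=self.average)
        if self.average(best) >= self.average(serving) + HYSTERESIS:
            return best                         # hard hand-off to the better AP
        return serving
```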
Abstract:
Zara was founded in 1975 by Amancio Ortega Gaona, soon becoming the largest and most successful chain of the Galician group Inditex (Industria de Diseño Textil) and a pioneer of the rising fashion category of Fast Fashion. Its innovative vertically integrated strategies, combined with its emphasis on quality and demand-based offering, have shaped the world of fashion and brought forth many questions about its future sustainability and growth. Zara has always relied on its store network to advertise its product offering, allowing its garments to “speak for themselves”. With the continued pressure felt in the industry, management has expressed some concerns about future company growth, and creative, innovative solutions must be implemented to guarantee Zara’s future growth. The case-study narrative focuses on these issues and leaves readers with an open question regarding which decision to implement.
Abstract:
The year is 2015 and the startup and tech business ecosphere has never seen more activity. In New York City alone, the tech startup industry is on track to amass $8 billion in total funding – the highest in 7 years (CB Insights, 2015). According to the Kauffman Index of Entrepreneurship (2015), this figure represents just 20% of the total funding in the United States. Thanks to platforms that link entrepreneurs with investors, there are simply more funding opportunities than ever, and funding can be initiated in a variety of ways (angel investors, venture capital firms, crowdfunding). And yet, in spite of all this, according to Forbes Magazine (2015), nine out of ten startups will fail. Because of the unpredictable nature of the modern tech industry, it is difficult to pinpoint exactly why 90% of startups fail – but the general consensus amongst top tech executives is that “startups make products that no one wants” (Fortune, 2014). In 2011, author Eric Ries wrote a book called The Lean Startup in an attempt to solve this all-too-familiar problem. In this book he developed the framework for the Hypothesis-Driven Entrepreneurship Process, an iterative process that aims to prove a market before actually launching a product. Ries discusses concepts such as the Minimum Viable Product, the smallest set of activities necessary to disprove a hypothesis (or business model characteristic). Ries encourages acting quickly and often: if you are to fail, then fail fast. In today’s fast-moving economy, an entrepreneur cannot afford to waste his own time, nor his customer’s time. The purpose of this thesis is to conduct an in-depth analysis of the Hypothesis-Driven Entrepreneurship Process, in order to test the market viability of a real-life startup idea, ShowMeAround. This analysis will follow the scientific Lean Startup approach, with the purpose of developing a functional business model and business plan. The objective is to conclude with an investment-ready startup idea, backed by rigorous entrepreneurial study.
Abstract:
Astrocytes are the most abundant glial cell type in the brain. Although not apposite for long-range rapid electrical communication, astrocytes share with neurons the capacity for chemical signaling via Ca(2+)-dependent transmitter exocytosis. Despite this recent finding, little is known about the specific properties of regulated secretion and vesicle recycling in astrocytes. Important differences may exist with neuronal exocytosis, starting from the fact that stimulus-secretion coupling in astrocytes is voltage independent, mediated by G-protein-coupled receptors and the release of Ca(2+) from internal stores. Elucidating the spatiotemporal properties of astrocytic exo-endocytosis is, therefore, of primary importance for understanding the mode of communication of these cells and their role in brain signaling. Here we take advantage of fluorescent tools recently developed for studying recycling of glutamatergic vesicles at synapses (Voglmaier et al., 2006; Balaji and Ryan, 2007); we combine epifluorescence and total internal reflection fluorescence imaging to investigate, with unprecedented temporal and spatial resolution, the stimulus-secretion coupling underlying exo-endocytosis of glutamatergic synaptic-like microvesicles (SLMVs) in astrocytes. Our main findings indicate that (1) exo-endocytosis in astrocytes proceeds with a time course on the millisecond time scale (tau(exocytosis) = 0.24 +/- 0.017 s; tau(endocytosis) = 0.26 +/- 0.03 s) and (2) exocytosis is controlled by local Ca(2+) microdomains. We identified submicrometer cytosolic compartments, delimited by endoplasmic reticulum tubuli reaching beneath the plasma membrane and containing SLMVs, at which fast (time-to-peak, approximately 50 ms) Ca(2+) events occurred in precise spatial-temporal correlation with exocytic fusion events. Overall, the above characteristics of transmitter exocytosis from astrocytes support a role of this process in fast synaptic modulation.
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach; data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. This allows us to show the advantages of the proposed FastGAF method. It compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
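The splitting idea can be illustrated on a toy 1D problem: one copy of the field is fit to the data, a twin copy is regularized, and equality is enforced through an augmented Lagrangian. Gaussian smoothing stands in for the regularity step; the real GAF functional couples the terms multiplicatively, so this is a schematic of the optimization structure only, with illustrative parameter values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

d = np.random.randn(200)     # noisy point-wise displacement estimates (data)
u = np.zeros_like(d)         # data-driven copy of the deformation field
v = np.zeros_like(d)         # regularity-driven copy, constrained to u = v
lam = np.zeros_like(d)       # Lagrange multipliers for the constraint
mu, sigma = 1.0, 3.0

for _ in range(50):
    # u-step: minimize ||u - d||^2 + lam.(u - v) + (mu/2)||u - v||^2 (closed form)
    u = (2 * d + mu * v - lam) / (2 + mu)
    # v-step: proximal step of the regularizer, approximated by smoothing
    v = gaussian_filter1d(u + lam / mu, sigma)
    # dual ascent on the constraint u = v
    lam += mu * (u - v)
```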
Abstract:
The purpose of this study is to examine whether Corporate Social Responsibility (CSR) announcements of the three biggest American fast food companies (McDonald’s, YUM! Brands and Wendy’s) have any effect on their stock returns, as well as on the returns of the industry index (Dow Jones Restaurants and Bars). The time period under consideration starts on 1 May 2001 and ends on 17 October 2013. The stock market reaction is tested with an event study utilizing the CAPM. The research employs the daily stock returns of the companies, the index and the benchmarks (NASDAQ and NYSE). The test of combined announcements did not reveal any significant effect on the index or McDonald’s; however, the stock returns of Wendy’s and YUM! Brands reacted negatively. Moreover, the company-level analyses showed that McDonald’s stock returns respond positively to its own CSR releases, YUM! Brands reacts negatively, and Wendy’s shows no reaction. In addition, it was found that the competitors of the announcing company tend to react negatively to all the events. Furthermore, the division of the events into sustainability categories showed a statistically significant negative reaction from the index, McDonald’s and YUM! Brands towards social announcements. At the same time, only the index was positively affected by the economic and environmental CSR news releases.
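For readers unfamiliar with the machinery, a minimal sketch of a market-model event study of this kind follows: fit alpha and beta over an estimation window, then compute abnormal returns and their cumulative sum around the announcement. The returns are simulated and the windows arbitrary; this is not the study's data or exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0, 0.01, 260)                 # ~1 trading year of benchmark returns
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.008, 260)

est, event = slice(0, 250), slice(250, 260)       # estimation window, event window

# Market-model (CAPM-style) fit over the estimation window.
beta, alpha = np.polyfit(market[est], stock[est], 1)

# Abnormal returns in the event window and their cumulative sum (CAR).
abnormal = stock[event] - (alpha + beta * market[event])
car = abnormal.sum()

# Simple significance check: scale CAR by the estimation-window residual sd.
resid = stock[est] - (alpha + beta * market[est])
t_stat = car / (resid.std(ddof=2) * np.sqrt(abnormal.size))
```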