38 results for "find it fast"
Abstract:
We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners, which operate on discriminative image-intensity features that are defined on small patches and are fast to compute. A database designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, which requires a ribbon-like search region, and it can handle complex texture structures without requiring a large number of observations. We demonstrate results both in the context of interactive 2D delineation and of fast 3D tracking, and we compare the method's performance with that of other existing methods for line-search boundary detection.
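A minimal Python sketch of the line-search step described above, assuming a trained patch classifier is available as a function `score_patch` (a stand-in for the weak-learner ensemble; the function name, search radius, and patch size are illustrative, not from the paper):

```python
import numpy as np

def line_search_boundary(image, point, normal, score_patch,
                         search_radius=10, patch_size=7):
    """Search along the normal to an initial boundary estimate and
    return the offset with the highest texture-transition probability."""
    half = patch_size // 2
    best_offset, best_prob = 0, -np.inf
    for t in range(-search_radius, search_radius + 1):
        x, y = (point + t * normal).round().astype(int)
        # Skip candidates whose patch would fall outside the image.
        if not (half <= y < image.shape[0] - half and
                half <= x < image.shape[1] - half):
            continue
        patch = image[y - half:y + half + 1, x - half:x + half + 1]
        prob = score_patch(patch)   # probability of a texture transition
        if prob > best_prob:
            best_offset, best_prob = t, prob
    return best_offset, best_prob
```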
Abstract:
A fast backward elimination algorithm is introduced based on a QR decomposition and Givens transformations to prune radial-basis-function networks. Nodes are sequentially removed using an increment of error variance criterion. The procedure is terminated by using a prediction risk criterion so as to obtain a model structure with good generalisation properties. The algorithm can be used to postprocess radial basis centres selected using a k-means routine and, in this mode, it provides a hybrid supervised centre selection approach.
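A hedged sketch of the backward-elimination loop, written with naive least-squares refits for readability; the paper's QR decomposition and Givens transformations compute the same error-variance increments far more cheaply, and the FPE risk used here is just one common choice of prediction-risk criterion:

```python
import numpy as np

def backward_eliminate(Phi, y,
                       risk=lambda rss, n, p: rss / n * (n + p) / (n - p)):
    """Prune columns (RBF centres) of the design matrix Phi one at a time.

    At each step the column whose removal gives the smallest increase in
    residual (error) variance is deleted; elimination stops when the
    prediction-risk criterion (here Akaike's FPE) stops decreasing."""
    n = len(y)
    active = list(range(Phi.shape[1]))

    def rss_of(cols):
        w, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        r = y - Phi[:, cols] @ w
        return float(r @ r)

    best_risk = risk(rss_of(active), n, len(active))
    while len(active) > 1:
        # Try removing each remaining centre; keep the cheapest removal.
        rss, i = min((rss_of(active[:i] + active[i + 1:]), i)
                     for i in range(len(active)))
        new_risk = risk(rss, n, len(active) - 1)
        if new_risk > best_risk:
            break  # generalisation risk would worsen; stop pruning
        best_risk = new_risk
        del active[i]
    return active
```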
Abstract:
In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
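A minimal sketch of the look-up-table spatial index the abstract reports as most efficient, assuming an unstructured source grid given as arrays of longitudes and latitudes; the cell count and the neighbour-scan query are illustrative choices, and longitude wraparound at the date line is not handled:

```python
import numpy as np

class LookupTableIndex:
    """Regular lon-lat bucket grid over a (possibly curvilinear)
    source grid, supporting fast nearest-point queries."""

    def __init__(self, lons, lats, ncells=360):
        self.pts = np.column_stack([np.ravel(lons), np.ravel(lats)])
        self.ncells = ncells
        self.buckets = {}
        for i, (lon, lat) in enumerate(self.pts):
            self.buckets.setdefault(self._cell(lon, lat), []).append(i)

    def _cell(self, lon, lat):
        return (int((lon + 180) / 360 * self.ncells),
                int((lat + 90) / 180 * self.ncells))

    def nearest(self, lon, lat):
        cx, cy = self._cell(lon, lat)
        # Scan the query cell and its 8 neighbours for candidates.
        cand = [i for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                for i in self.buckets.get((cx + dx, cy + dy), [])]
        if not cand:
            cand = list(range(len(self.pts)))  # fall back to brute force
        d = self.pts[cand] - [lon, lat]
        return cand[int(np.argmin((d ** 2).sum(axis=1)))]
```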
Abstract:
Older adult computer users often lose track of the mouse cursor and so resort to methods such as shaking the mouse or searching the screen to find the cursor again. Hence, this paper describes how a standard optical mouse was modified to include a touch sensor, activated by releasing and touching the mouse, which automatically centers the mouse cursor on the screen, potentially making it easier to find a 'lost' cursor. Six older adult computer users and six younger computer users were asked to compare the touch-sensitive mouse with cursor centering against two alternative techniques for locating the mouse cursor: manually shaking the mouse and using the Windows sonar facility. The time taken to click on a target following a distractor task was recorded, and results show that centering the mouse was the fastest to use, with a 35% improvement over shaking the mouse. Five out of six older participants ranked the touch-sensitive mouse with cursor centering as the easiest to use.
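A hypothetical sketch of the centering behaviour, assuming the touch sensor fires a callback such as the `on_touch` below (the actual device implements this at the hardware/driver level); the Win32 calls are real, but they make this sketch Windows-only:

```python
import ctypes

user32 = ctypes.windll.user32  # Windows-only

def on_touch():
    """Hypothetical touch-sensor callback: warp the cursor to the
    centre of the primary screen."""
    w = user32.GetSystemMetrics(0)  # SM_CXSCREEN: screen width (px)
    h = user32.GetSystemMetrics(1)  # SM_CYSCREEN: screen height (px)
    user32.SetCursorPos(w // 2, h // 2)
```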
Abstract:
The adiabatic transit time of wave energy radiated by an Agulhas ring released in the South Atlantic Ocean to the North Atlantic Ocean is investigated in a two-layer ocean model. Of particular interest is the arrival time of baroclinic energy in the northern part of the Atlantic, because it is related to variations in the meridional overturning circulation. The influence of the Mid-Atlantic Ridge is also studied, because it allows for the conversion from barotropic to baroclinic wave energy and the generation of topographic waves. Barotropic energy from the ring is present in the northern part of the model basin within 10 days. From that time, the barotropic energy keeps rising to attain a maximum 500 days after initiation. This is independent of the presence or absence of a ridge in the model basin. Without a ridge in the model, the travel time of the baroclinic signal is 1300 days. This time is similar to the transit time of the ring from the eastern to the western coast of the model basin. In the presence of the ridge, the baroclinic signal arrives in the northern part of the model basin after approximately 10 days, which is the same time scale as that of the barotropic signal. It is apparent that the ridge can facilitate the energy conversion from barotropic to baroclinic waves and the slow baroclinic adjustment can be bypassed. The meridional overturning circulation, parameterized in two ways as either a purely barotropic or a purely baroclinic phenomenon, also responds after 1300 days. The ring temporarily increases the overturning strength. The presence of the ridge does not alter the time scales.
Abstract:
The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for multidimensional input datasets. In this paper, we present an application of the simulated annealing procedure to the SOM learning algorithm, with the aim of obtaining faster learning and better performance in terms of quantization error. The proposed learning algorithm is called Fast Learning Self-Organized Map, and it does not affect the simplicity of the standard SOM's basic learning algorithm. The proposed learning algorithm also improves the quality of the resulting maps by providing better clustering quality and topology preservation of multi-dimensional input data. Several experiments are used to compare the proposed approach with the original algorithm and some of its modifications and speed-up techniques.
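For reference, a compact sketch of standard SOM training with an annealed learning rate and neighbourhood radius; the schedule here is a plain exponential decay, whereas the paper's Fast Learning SOM derives its schedule from the simulated annealing procedure:

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=5000, a0=0.5, s0=5.0):
    """Online SOM training on data of shape (n_samples, n_features)."""
    rng = np.random.default_rng(0)
    gx, gy = np.meshgrid(range(grid[0]), range(grid[1]), indexing='ij')
    coords = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    W = rng.random((grid[0] * grid[1], data.shape[1]))
    for t in range(iters):
        frac = t / iters
        alpha = a0 * np.exp(-3 * frac)      # annealed learning rate
        sigma = s0 * np.exp(-3 * frac)      # shrinking neighbourhood
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood kernel
        W += alpha * h[:, None] * (x - W)
    return W.reshape(grid[0], grid[1], -1)
```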
Abstract:
The ever-increasing demand for high image quality requires fast and efficient methods for noise reduction. The best-known order-statistics filter is the median filter. A method is presented to calculate the median of a set of N W-bit integers in W/B time steps. Blocks containing B-bit slices are used to find B bits of the median at a time; a novel quantum-like representation allows the median to be computed in an accelerated manner compared to the best-known method (W time steps). The general method allows a variety of designs to be synthesised systematically. A further novel architecture to calculate the median for a moving set of N integers is also discussed.
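A software analogue of the bit-slice idea, for the B = 1 case: the median of N W-bit integers is recovered one bit per step, MSB first, by counting how many still-eligible values have a 0 in the current bit (the quantum-like hardware representation is not modelled here; wider B-bit slices would cut the step count to W/B):

```python
def radix_select(values, k, W=8):
    """Select the k-th smallest (0-indexed) of a list of W-bit
    integers by scanning one bit slice per step, MSB first."""
    prefix, fixed = 0, 0  # bits of the answer decided so far
    for b in range(W - 1, -1, -1):
        bit = 1 << b
        # Candidates are values matching the decided high bits;
        # count those with a 0 in the current bit.
        zeros = sum(1 for v in values
                    if (v & fixed) == prefix and not (v & bit))
        if k >= zeros:
            prefix |= bit   # answer has a 1 here
            k -= zeros      # skip the zero-bit candidates
        fixed |= bit
    return prefix

def bitwise_median(values, W=8):
    return radix_select(values, len(values) // 2, W)
```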
Abstract:
Climate models provide compelling evidence that if greenhouse gas emissions continue at present rates, then key global temperature thresholds (such as the European Union limit of two degrees of warming since pre-industrial times) are very likely to be crossed in the next few decades. However, there is relatively little attention paid to whether, should a dangerous temperature level be exceeded, it is feasible for the global temperature to then return to safer levels in a usefully short time. We focus on the timescales needed to reduce atmospheric greenhouse gases and associated temperatures back below potentially dangerous thresholds, using a state-of-the-art general circulation model. This analysis is extended with a simple climate model to provide uncertainty bounds. We find that even for very large reductions in emissions, temperature reduction is likely to occur at a low rate. Policy-makers need to consider such very long recovery timescales implicit in the Earth system when formulating future emission pathways that have the potential to 'overshoot' particular atmospheric concentrations of greenhouse gases and, more importantly, related temperature levels that might be considered dangerous.
Abstract:
Myostatin is a member of the transforming growth factor-β (TGF-β) superfamily of proteins and is produced almost exclusively in skeletal muscle tissue, where it is secreted and circulates as a serum protein. Myostatin acts as a negative regulator of muscle mass through the canonical SMAD2/3/4 signaling pathway. Naturally occurring myostatin mutants exhibit a 'double muscling' phenotype in which muscle mass is dramatically increased as a result of both hypertrophy and hyperplasia. Myostatin is naturally inhibited by its own propeptide; therefore, we assessed the impact of adeno-associated virus-8 (AAV8) myostatin propeptide vectors when systemically introduced in MF-1 mice. We noted a significant systemic increase in muscle mass in both slow and fast muscle phenotypes, with no evidence of hyperplasia; however, the nuclei-to-cytoplasm ratio in all myofiber types was significantly reduced. An increase in muscle mass in slow (soleus) muscle led to an increase in force output; however, an increase in fast (extensor digitorum longus [EDL]) muscle mass did not increase force output. These results suggest that the use of gene therapeutic regimens of myostatin inhibition for age-related or disease-related muscle loss may have muscle-specific effects.
Abstract:
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Keywords: haldanes, biological time, scaling, pedomorphosis
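The generation counts quoted above imply very small per-generation changes; a quick arithmetic check of the implied maximum rates (in natural-log units of body mass per generation):

```python
import math

# Implied maximum per-generation change in ln(body mass), from the
# generation counts quoted in the abstract for terrestrial mammals.
for factor, gens in [(100, 1.6e6), (1000, 5.1e6), (5000, 10e6)]:
    rate = math.log(factor) / gens
    print(f"{factor}-fold in {gens:.1e} generations: "
          f"{rate:.2e} ln-units/generation")
```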
Abstract:
Geophysical fluid models often support both fast and slow motions. As the dynamics are often dominated by the slow motions, it is desirable to filter out the fast motions by constructing balance models. An example is the quasi-geostrophic (QG) model, which is used widely in meteorology and oceanography for theoretical studies, in addition to practical applications such as model initialization and data assimilation. Although the QG model works quite well in the mid-latitudes, its usefulness diminishes as one approaches the equator. Thus far, attempts to derive similar balance models for the tropics have not been entirely successful, as the models generally filter out Kelvin waves, which contribute significantly to tropical low-frequency variability. There is much theoretical interest in the dynamics of planetary-scale Kelvin waves, especially for atmospheric and oceanic data assimilation, where observations are generally only of the mass field and thus do not constrain the wind field without some kind of diagnostic balance relation. As a result, estimates of Kelvin wave amplitudes can be poor. Our goal is to find a balance model that includes Kelvin waves for planetary-scale motions. Using asymptotic methods, we derive a balance model for the weakly nonlinear equatorial shallow-water equations. Specifically, we adopt the 'slaving' method proposed by Warn et al. (Q. J. R. Meteorol. Soc., vol. 121, 1995, pp. 723–739), which avoids secular terms in the expansion and thus can in principle be carried out to any order. Unlike previous approaches, our expansion is based on a long-wave scaling, and the slow dynamics is described using the height field instead of potential vorticity. The leading-order model is equivalent to the truncated long-wave model considered previously (e.g. Heckley & Gill, Q. J. R. Meteorol. Soc., vol. 110, 1984, pp. 203–217), which retains Kelvin waves in addition to equatorial Rossby waves. Our method allows for the derivation of higher-order models which significantly improve the representation of Rossby waves in the isotropic limit. In addition, the 'slaving' method is applicable even when the weakly nonlinear assumption is relaxed, and the resulting nonlinear model encompasses the weakly nonlinear model. We also demonstrate that the method can be applied to more realistic stratified models, such as the Boussinesq model.
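For reference, a hedged sketch of the nondimensional linear equatorial long-wave system that the leading-order model is equivalent to (meridional momentum reduced to geostrophic balance, as in Heckley & Gill 1984); the paper's weakly nonlinear and higher-order terms are not reproduced here:

```latex
% Nondimensional linear equatorial shallow-water equations under the
% long-wave scaling; this system retains Kelvin and long Rossby waves
% while filtering inertia-gravity and short Rossby waves.
\begin{align}
  \partial_t u - y\,v + \partial_x h &= 0, \\
  y\,u + \partial_y h &= 0, \\
  \partial_t h + \partial_x u + \partial_y v &= 0.
\end{align}
```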
Abstract:
Organisations need the right business and IT capabilities in order to achieve future business success. It follows that the sourcing of these capabilities is an important decision. Yet, there is a lack of consensus on the approach to deciding where and how to source the core operational capabilities. Furthermore, developing its dynamic capability enables an organisation to effectively manage change in its operational capabilities. Recent research has proposed that analysing business capabilities is a key prerequisite to defining an organisation's Information Technology (IT) solutions. This research builds on these findings by considering the interdependencies between the dynamic business change capability and the sourcing of IT capabilities. Further, it examines the decision-making oversight of these areas as implemented through IT governance. There is a good understanding of the direct impact of IT sourcing decisions on operational capabilities. However, there is a lack of research on the indirect impact on the capability of managing business change. Through a review of prior research and initial pilot field research, a capability framework and three main propositions are proposed, each examining a two-way interdependency. This paper describes the development of the integrated capability framework and the rationale for the propositions. These respectively cover managing business change, IT sourcing and IT governance. Firstly, the sourcing of IT affects both the operational capabilities and the capability to manage business change. Similarly, a business change may result in new or revised operational capabilities, which can influence the IT sourcing decision, resulting in a two-way relationship. Secondly, this IT sourcing is directed under IT governance, which provides a decision-making framework for the organisation. At the same time, the IT sourcing can have an impact on the IT governance capability, for example by outsourcing key capabilities; hence this is potentially again a two-way relationship. Finally, there is a postulated two-way relationship between IT governance and managing business change, in that IT governance provides an oversight of managing business change through portfolio management, while IT governance is a key element of the business change capability. Given the nature and novelty of this framework, a philosophical paradigm of constructivism is preferred. To illustrate and explore the theoretical perspectives provided, this paper reports on the findings of a case study incorporating eight high-level interviews with senior executives in a German bank with 2300 employees. The collected data also include organisational charts, annual reports, project and activity portfolios, and benchmark reports for the IT budget. Recommendations are made for practitioners. An understanding of the interdependencies can support professionals in improving business success through effectively managing business change. Additionally, they can be assisted to evaluate the impact of IT sourcing decisions on the organisation's operational and dynamic capabilities, using an appropriate IT governance framework.
Abstract:
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal-to-decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to the source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of the advent of the biases, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSL-CM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears within a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of an equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step to move from the current random, ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.