878 results for "Vision-based row tracking algorithm"
Abstract:
We present an assessment of how tropical cyclone activity might change due to the influence of increased atmospheric carbon dioxide concentrations, using the UK's High Resolution Global Environment Model (HiGEM) with N144 resolution (~90 km in the atmosphere and ~40 km in the ocean). Tropical cyclones are identified using a feature tracking algorithm applied to model output. Tropical cyclones from idealized 30-year 2×CO2 (2CO2) and 4×CO2 (4CO2) simulations are compared to those identified in a 150-year present-day simulation, which is separated into a 5-member ensemble of 30-year integrations. Tropical cyclones are shown to decrease in frequency globally by 9% in the 2CO2 and 26% in the 4CO2. Tropical cyclones only become more intense in the 4CO2; however, uncoupled time-slice experiments reveal an increase in intensity in the 2CO2. An investigation into the large-scale environmental conditions known to influence tropical cyclone activity in the main development regions is used to determine the response of tropical cyclone activity to increased atmospheric CO2. A weaker Walker circulation and a reduction in zonally averaged regions of updrafts lead to a shift in the location of tropical cyclones in the northern hemisphere. A decrease in mean ascent at 500 hPa contributes to the reduction of tropical cyclones in the 2CO2 in most basins. The larger reduction of tropical cyclones in the 4CO2 arises from a further reduction of mean ascent at 500 hPa and a large enhancement of vertical wind shear, especially in the southern hemisphere, North Atlantic and North East Pacific.
Abstract:
The ability of the climate models participating in phase 5 of the Coupled Model Intercomparison Project (CMIP5) to simulate North Atlantic extratropical cyclones in winter [December–February (DJF)] and summer [June–August (JJA)] is investigated in detail. Cyclones are identified as maxima in T42 vorticity at 850 hPa and their propagation is tracked using an objective feature-tracking algorithm. By comparing the historical CMIP5 simulations (1976–2005) and the ECMWF Interim Re-Analysis (ERA-Interim; 1979–2008), the authors find that systematic biases affect the number and intensity of North Atlantic cyclones in CMIP5 models. In DJF, the North Atlantic storm track tends to be either too zonal or displaced southward, thus leading to too few and too weak cyclones over the Norwegian Sea and too many cyclones in central Europe. In JJA, the position of the North Atlantic storm track is generally well captured but some CMIP5 models underestimate the total number of cyclones. The dynamical intensity of cyclones, as measured by either T42 vorticity at 850 hPa or mean sea level pressure, is too weak in both DJF and JJA. The intensity bias has a hemispheric character, and it cannot be simply attributed to the representation of the North Atlantic large-scale atmospheric state. Despite these biases, the representation of Northern Hemisphere (NH) storm tracks has improved since CMIP3, and some CMIP5 models are able to represent well both the number and the intensity of North Atlantic cyclones. In particular, some of the higher-atmospheric-resolution models tend to have a better representation of the tilt of the North Atlantic storm track and of the intensity of cyclones in DJF.
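The identification step described above lends itself to a compact illustration. Below is a minimal sketch, assuming a gridded 850 hPa relative vorticity field held in a NumPy array; the neighbourhood size and vorticity threshold are illustrative assumptions, not the settings of the objective tracking algorithm used in the study.

```python
# Sketch: identify cyclone candidates as local maxima of 850 hPa vorticity
# that exceed a threshold. Threshold and neighbourhood size are assumed values.
import numpy as np
from scipy.ndimage import maximum_filter

def find_vorticity_maxima(vort850, threshold=1e-5, size=5):
    """Return (lat_idx, lon_idx) pairs where vort850 (s^-1) is a local maximum
    within a size x size neighbourhood and exceeds `threshold`."""
    local_max = vort850 == maximum_filter(vort850, size=size, mode="nearest")
    return np.argwhere(local_max & (vort850 > threshold))

# Synthetic test field with one Gaussian "cyclone" centred near 55N, 30W:
lat = np.linspace(20, 80, 61)
lon = np.linspace(-80, 20, 101)
LON, LAT = np.meshgrid(lon, lat)
vort = 3e-5 * np.exp(-((LAT - 55)**2 + (LON + 30)**2) / 50.0)
print(find_vorticity_maxima(vort))  # one maximum at the grid point (55N, 30W)
```

A full tracker would then link these maxima across consecutive time steps into cyclone trajectories; only the detection step is sketched here.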
Abstract:
The response of North Atlantic and European extratropical cyclones to climate change is investigated in the climate models participating in phase 5 of the Coupled Model Intercomparison Project (CMIP5). In contrast to previous multimodel studies, a feature-tracking algorithm is here applied to separately quantify the responses in the number, the wind intensity, and the precipitation intensity of extratropical cyclones. Moreover, a statistical framework is employed to formally assess the uncertainties in the multimodel projections. Under the midrange representative concentration pathway (RCP4.5) emission scenario, the December–February (DJF) response is characterized by a tripolar pattern over Europe, with an increase in the number of cyclones in central Europe and a decreased number in the Norwegian and Mediterranean Seas. The June–August (JJA) response is characterized by a reduction in the number of North Atlantic cyclones along the southern flank of the storm track. The total number of cyclones decreases in both DJF (24%) and JJA (22%). Classifying cyclones according to their intensity indicates a slight basinwide reduction in the number of cyclones associated with strong winds, but an increase in those associated with strong precipitation. However, in DJF, a slight increase in the number and intensity of cyclones associated with strong wind speeds is found over the United Kingdom and central Europe. The results are confirmed under the high-emission RCP8.5 scenario, where the signals tend to be larger. The sources of uncertainty in these projections are discussed.
Abstract:
There are a range of studies in the low carbon arena which use various ‘futures’-based techniques as ways of exploring uncertainties. These techniques range from ‘scenarios’ and ‘roadmaps’ through to ‘transitions’ and ‘pathways’, as well as ‘vision’-based techniques. The overall aim of the paper is therefore to compare and contrast these techniques to develop a simple working typology, with the further objective of identifying the implications of this analysis for RETROFIT 2050. Using recent examples of city-based and energy-based studies throughout, the paper compares and contrasts these techniques and finds that the distinctions between them have often been blurred in the field of low carbon. Visions, for example, have been used in both transition theory and futures/Foresight methods, and scenarios have also been used in transition-based studies as well as futures/Foresight studies. Moreover, Foresight techniques which capture expert knowledge and map existing knowledge to develop a set of scenarios and roadmaps that can inform the development of transitions and pathways can not only help overcome any ‘disconnections’ that may exist between the social and technical lenses through which such future trajectories are mapped, but can also promote a strong ‘co-evolutionary’ content.
Abstract:
ERA-Interim reanalysis data from the past 35 years have been used with a newly-developed feature tracking algorithm to identify Indian monsoon depressions originating in or near the Bay of Bengal. These were then rotated, centralised and combined to give a fully three-dimensional 106-depression composite structure – a considerably larger sample than any previous detailed study on monsoon depressions and their structure. Many known features of depression structure are confirmed, particularly the existence of a maximum to the southwest of the centre in rainfall and other fields, and a westward axial tilt in others. Additionally, the depressions are found to have significant asymmetry due to the presence of the Himalayas; a bimodal mid-tropospheric potential vorticity core; a separation into thermally cold- (~–1.5K) and neutral- (~0K) cores near the surface with distinct properties; and that the centre has very large CAPE and very small CIN. Variability as a function of background state has also been explored, with land/coast/sea, diurnal, ENSO, active/break and Indian Ocean Dipole contrasts considered. Depressions are found to be markedly stronger during the active phase of the monsoon, as well as during La Niña. Depressions on land are shown to be more intense and more tightly constrained to the central axis. A detailed schematic diagram of a vertical cross-section through a composite depression is also presented, showing its inherent asymmetric structure.
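The "centralised and combined" compositing step can be sketched as follows. The window half-width and the input layout are illustrative assumptions, and the rotation applied in the study before combining is omitted here for brevity.

```python
# Sketch: composite a field (e.g. rainfall) in depression-centred coordinates
# by cutting a fixed window around each tracked centre and averaging.
import numpy as np

def composite(field_snapshots, centres, half=20):
    """field_snapshots: list of 2-D arrays, one per depression time step;
    centres: list of (i, j) grid indices of the depression centre;
    half: window half-width in grid points. Returns the composite mean."""
    windows = []
    for field, (i, j) in zip(field_snapshots, centres):
        win = field[i - half:i + half + 1, j - half:j + half + 1]
        if win.shape == (2 * half + 1, 2 * half + 1):  # skip edge-clipped cases
            windows.append(win)
    return np.mean(windows, axis=0)  # depression-relative composite field
```

Averaging in storm-relative coordinates is what lets structural features, such as the southwestward rainfall maximum, survive the averaging instead of being smeared out.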
Abstract:
One of the top ten most influential data mining algorithms, k-means, is known for being simple and scalable. However, it is sensitive to initialization of prototypes and requires that the number of clusters be specified in advance. This paper shows that evolutionary techniques conceived to guide the application of k-means can be more computationally efficient than systematic (i.e., repetitive) approaches that try to get around the above-mentioned drawbacks by repeatedly running the algorithm from different configurations for the number of clusters and initial positions of prototypes. To do so, a modified version of a (k-means based) fast evolutionary algorithm for clustering is employed. Theoretical complexity analyses for the systematic and evolutionary algorithms under interest are provided. Computational experiments and statistical analyses of the results are presented for artificial and text mining data sets.
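For context, here is a minimal sketch of the systematic (repetitive) baseline that the evolutionary approach is compared against: k-means is rerun over a grid of cluster counts and random initialisations, and the best partition is kept. The k range, restart count, validity criterion (silhouette) and use of scikit-learn are illustrative assumptions; the evolutionary algorithm instead evolves a population of partitions, mutating k and the prototypes, which avoids this exhaustive sweep.

```python
# Sketch of the systematic baseline: exhaustive sweep over k and restarts.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def systematic_kmeans(X, k_range=range(2, 11), restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_score = None, -1.0
    for k in k_range:
        for _ in range(restarts):  # cost grows as |k_range| * restarts
            km = KMeans(n_clusters=k, n_init=1,
                        random_state=int(rng.integers(1 << 31))).fit(X)
            score = silhouette_score(X, km.labels_)
            if score > best_score:
                best_model, best_score = km, score
    return best_model, best_score  # best partition and its silhouette
```

The complexity argument in the paper hinges on exactly this multiplicative cost: every candidate k and every initialisation pays a full k-means run, whereas the evolutionary search shares work across the population.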
Abstract:
Wooden railway sleeper inspections in Sweden are currently performed manually by a human operator and are based on visual analysis. A machine vision based approach has been developed to emulate the visual abilities of the human operator and enable automation of the process. Through this process bad sleepers are identified and marked on the rail with a spot of a specific color (blue in the current case), so that maintenance operators can locate the spot and replace the sleeper. The motive of this thesis is to help the operators identify those sleepers marked by colored spots, using an "Intelligent Vehicle" capable of running on the track. By capturing video while running on the track and segmenting the object of interest (the spot), we can automate this work and minimize human intervention. Since the video acquisition process depends on camera position and light source to obtain adequate brightness, we have tested 4 different combinations of camera position and light source to record the video and test the validity of the proposed method. A sequence of real-time rail frames is extracted from these videos and further processing (depending upon the data acquisition process) is done to identify the spots. After identification of a spot, each frame is divided into 9 regions, to determine the particular region in which the spot lies and to avoid overlap with noise. The proposed method thus reports which of the nine regions of each frame contains the spot. From the generated results we have drawn some conclusions regarding data collection techniques, efficiency, time and speed. In this report, extensive experiments using image sequences from a particular camera are reported; the experiments were done using the intelligent vehicle as well as a test vehicle. The results show that we achieved 95% success in identifying the spots when the video is used as it is; in the alternative method, where some frames are skipped in pre-processing to increase the speed of the video, the segmentation success is reduced to 85% but the processing time is far shorter. This shows the validity of the proposed method for identifying spots lying on wooden railway sleepers, where one can trade off time against efficiency to obtain the desired result.
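A minimal sketch of the two core steps described above, color segmentation of the blue spot followed by the 3×3 region localisation, assuming an OpenCV pipeline. The HSV bounds are hypothetical values; in practice they would depend on the camera position and light source combination being tested.

```python
# Sketch: segment a blue marker spot by HSV thresholding, then report which of
# the nine (3x3) frame regions contains its centroid. HSV bounds are assumed.
import cv2
import numpy as np

def locate_spot(frame_bgr, lower=(100, 120, 60), upper=(130, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                       # no spot detected in this frame
    cx, cy = xs.mean(), ys.mean()         # centroid of the segmented spot
    h, w = mask.shape
    row, col = int(3 * cy / h), int(3 * cx / w)
    return 3 * row + col                  # region index 0..8
```

Processing every frame corresponds to the slower, 95%-success configuration; skipping frames before calling such a routine is what trades segmentation success for speed.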
Abstract:
This paper presents a computer-vision based marker-free method for gait-impairment detection in patients with Parkinson's disease (PWP). The system is based upon the idea that a normal human body attains equilibrium during gait by aligning the body posture with the axis of gravity (AOG), using the feet as the base of support. In contrast, PWP appear to be falling forward, as they are less able to align their body with the AOG due to rigid muscular tone. A normal gait exhibits periodic stride-cycles with a stride-angle of around 45° between the legs, whereas PWP walk with a shortened stride-angle and high variability between stride-cycles. In order to analyze Parkinsonian gait (PG), subjects were videotaped over several gait-cycles. The subject's body was segmented using a color-segmentation method to form a silhouette. The silhouette was skeletonized for motion cue extraction. The motion cues analyzed were stride-cycles (based on the cyclic leg motion of the skeleton) and posture lean (based on the angle between the leaned torso of the skeleton and the AOG). Cosine similarity between an imaginary perfect gait pattern and the subject gait patterns produced a 100% recognition rate of PG for 4 normal controls and 3 PWP. The results suggest that the method is a promising tool for PG assessment in the home environment.
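The final classification step can be sketched in a few lines. The feature layout (stride angles plus posture lean) and the decision threshold are illustrative assumptions, not the exact features or cut-off used in the study.

```python
# Sketch: cosine similarity between an idealised "perfect gait" vector and a
# subject's gait vector. Feature layout and 0.98 threshold are assumed values.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ideal = np.array([45.0, 45.0, 45.0, 0.0])     # stride angles (deg), zero lean
subject = np.array([28.0, 31.0, 26.0, 12.0])  # shortened strides, forward lean
sim = cosine_similarity(ideal, subject)       # ~0.969 for these values
print("Parkinsonian gait" if sim < 0.98 else "normal gait", round(sim, 3))
```

Because cosine similarity is scale invariant, uniformly shorter strides alone would not lower the score; it is the stride variability and the forward lean component that pull the subject vector away from the template's direction.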
Abstract:
In this paper the state of the art of a system of automated deduction called SAD is described. The architecture of SAD corresponds well to a modern vision of the Evidence Algorithm programme initiated by Academician V. Glushkov.
Abstract:
The pumping of fluids in pipelines is the most economic and safe form of transporting fluids. That explains why in 1999 there were about 30,000 km [7] of pipelines of several diameters in Europe, transporting millions of cubic meters of crude oil and refined products, belonging to CONCAWE (the European oil companies' association for health, environment and safety, which joins several petroleum companies). In Brazil there are about 18,000 km of pipelines transporting millions of cubic meters of liquids and gases. In 1999, nine accidents were reported to CONCAWE, one of which caused a fatality. The oil loss was 171 m3, equivalent to 0.2 parts per million of the total transported volume. Even so, the costs involved in an accident can be high. An accident of great proportions can bring loss of human lives, severe environmental damage, loss of drained product, loss of profit, damage to the company's image and high recovery costs. In consonance with that, and in some cases because of legal requirements, companies are investing more and more in pipeline leak detection systems based on computer algorithms that operate in real time, seeking to further minimize the drained volumes. This decreases both the environmental impact and the costs. In a general way, all software-based systems present some type of false alarm, and a trade-off exists between the sensitivity of the system and the number of false alarms. This work aims to review the existing methods and to concentrate on the analysis of a specific system, namely the system based on hydraulic noise, Pressure Point Analysis (PPA). We show the most important aspects that must be considered in the implementation of a Leak Detection System (LDS), from the initial phase of risk analysis, passing through the project basis, design, and choice of the field instrumentation necessary for the various LDS, to implementation and tests. We analyze events (noises) originating from the flow system that can generate false alarms, and we present a computer algorithm that rejects those noises automatically.
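As a rough illustration of the PPA idea, a leak shows up as a sustained drop of the pressure signal below its recent baseline. The sketch below, with assumed window lengths and threshold, flags such drops while averaging out sensor noise; a real LDS must additionally suppress operational transients (pump starts, valve moves), which is exactly the false-alarm problem discussed above.

```python
# Sketch: flag a leak when the short-window mean pressure falls below the
# long-window baseline by more than a threshold. All parameters are assumed.
import numpy as np

def ppa_leak_alarm(pressure, short=10, long=100, drop=0.5):
    """pressure: 1-D array of samples (bar). Returns sample indices where the
    short-window mean is more than `drop` bar below the long-window mean."""
    alarms = []
    for t in range(long, len(pressure)):
        p_long = pressure[t - long:t].mean()   # slow baseline
        p_short = pressure[t - short:t].mean() # fast tracker
        if p_long - p_short > drop:
            alarms.append(t)
    return alarms

# Synthetic test: steady 60 bar with sensor noise, leak-induced drop at t=500.
rng = np.random.default_rng(1)
p = 60 + 0.05 * rng.standard_normal(1000)
p[500:] -= 1.0
print(ppa_leak_alarm(p)[0])  # first alarm shortly after sample 500
```

Raising `drop` or lengthening `short` reduces false alarms at the cost of slower, less sensitive detection, which is the sensitivity/false-alarm trade-off described in the text.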
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained using traditional vector quantization techniques. Some approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. This new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link the auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in the choice of the correct threshold. Some analyses are made and the results are compared with those of traditional clustering methods. Different dissimilarity metrics are analyzed and a new one is proposed, based on the concept of negentropy. Besides grouping the points of a set into classes, a method is proposed to statistically model the classes, aiming to obtain an expression for the probability that a point belongs to one of the classes. Experiments with several values of Na and dt are carried out on test sets and the results are analyzed in order to study the robustness of the method and to devise heuristics for the choice of the correct threshold. Throughout this work, aspects of information theory applied to the calculation of the divergences are explored, specifically the different measures of information and divergence based on the Rényi entropy. The results using the different metrics are compared and commented upon. The work also has an appendix presenting real applications of the proposed method.
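A minimal sketch of the linkage scheme, assuming k-means as the vector quantization step and plain Euclidean distance in place of the divergence measures studied in the work:

```python
# Sketch: obtain Na auxiliary clusters by vector quantization (k-means here),
# link any two whose centroids lie closer than dt, and let connected groups of
# linked auxiliary clusters become the final classes.
import numpy as np
from scipy.cluster.vq import kmeans2

def link_clusters(X, na=20, dt=1.0, seed=0):
    centroids, labels = kmeans2(X, na, seed=seed, minit="++")
    parent = list(range(na))            # union-find over auxiliary clusters

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(na):
        for j in range(i + 1, na):
            if np.linalg.norm(centroids[i] - centroids[j]) < dt:
                parent[find(i)] = find(j)   # link: same final class

    # Class label per data point; number of distinct roots = number of classes,
    # found automatically from the chosen dt.
    return np.array([find(l) for l in labels])
```

With a small dt, few auxiliary clusters merge and many classes survive; with a large dt, most merge into a handful of classes, which is why the threshold alone controls the number of classes found.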
Abstract:
A challenge that remains in the robotics field is how to make a robot react in real time to visual stimuli. Traditional computer vision algorithms used to overcome this problem are still very expensive, taking too long when run on common computer processors; even very simple algorithms like image filtering or mathematical morphology operations may take too long. Researchers have implemented image processing algorithms in highly parallel hardware devices in order to cut down the time spent on algorithm processing, with good results. By using hardware-implemented image processing techniques and a platform-oriented system built around the Nios II processor, we propose an approach that uses hardware processing and event-based programming to simplify vision-based systems while at the same time accelerating some parts of the algorithms used.
Abstract:
We propose in this work a software architecture for robotic boats intended to act fully autonomously in diverse aquatic environments, performing telemetry to a base station while carrying out their missions. This proposal is meant to be applied within the N-Boat project of the NatalNet DCA laboratory, which aims to enable a sailboat to navigate autonomously. The constituent components of this architecture are the memory, strategy, communication, sensing, actuation, energy, security and surveillance modules, which together make up the boat and base station systems. For validation, a simulator was developed in the C language and implemented using the resources of the OpenGL graphics API. The main results were obtained in the implementation of the memory, actuation and strategy modules, more specifically data sharing, control of sails and rudder, and planning of short routes based on a navigation algorithm, respectively. The experimental results shown in this study indicate the feasibility of the actual use of the developed software architecture and its application in the area of autonomous mobile robotics.
Abstract:
The study of aerodynamic loading variations has many engineering applications, including helicopter rotor blades, wind turbines and turbomachinery. This work uses a Vortex Method to make a Lagrangian description of the two-dimensional interaction between an airfoil and an incident wake vortex. The flow is incompressible, Newtonian and homogeneous, and the Reynolds number is 5×10^5. The airfoil is a NACA 0018 placed at angles of attack of 0° and 5°, simulated with the Panel Method using constant-density vorticity panels, with a generation point near each panel. A protective layer is created so that no vortex penetrates the body. The convection of the Lamb vortices is computed with the Euler method (first order) and the Adams-Bashforth method (second order). The Random Walk Method is used to simulate diffusion. The circular wake has 366 vortices, of either positive or negative vorticity, located at different heights with respect to the airfoil chord. The lift was calculated based on the algorithm created by Ricci (2002). This simulation uses a ready-made algorithm previously validated for a single body without an incident wake. The results are compared with an experimental work; the comparison shows that the experimental results are in good agreement with this paper.
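For reference, the standard forms of the convection and diffusion steps named above, written as a sketch in nondimensional variables with ν = 1/Re (an assumption of this sketch, not a detail taken from the work):

```latex
% Convection of vortex i: first-order Euler, and second-order Adams-Bashforth.
\mathbf{x}_i^{\,n+1} = \mathbf{x}_i^{\,n} + \mathbf{u}(\mathbf{x}_i^{\,n})\,\Delta t,
\qquad
\mathbf{x}_i^{\,n+1} = \mathbf{x}_i^{\,n}
  + \Delta t\left[\tfrac{3}{2}\,\mathbf{u}(\mathbf{x}_i^{\,n})
                - \tfrac{1}{2}\,\mathbf{u}(\mathbf{x}_i^{\,n-1})\right].
% Random Walk diffusion: each position component also receives an independent
% Gaussian increment with zero mean and variance 2\nu\,\Delta t:
\eta \sim \mathcal{N}\!\left(0,\; 2\nu\,\Delta t\right), \qquad \nu = 1/Re.
```

The random-walk variance reproduces, in the mean, the viscous spreading of vorticity, which is why the stochastic step stands in for the diffusion term of the vorticity transport equation.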