891 results for Automatic forecasting
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
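To make the decomposition concrete, here is a minimal sketch that splits one day's realized variance into continuous and jump components using bipower variation, the standard model-free estimator in this literature (Barndorff-Nielsen and Shephard); the abstract does not name the exact estimator used, and the function name and synthetic returns below are illustrative.

```python
import numpy as np

def jump_decomposition(intraday_returns):
    """Split one day's realized variance into continuous and jump parts.

    Sketch of the standard bipower-variation decomposition; illustrative,
    since the abstract does not name its exact estimator.
    """
    r = np.asarray(intraday_returns, dtype=float)
    rv = np.sum(r**2)                                  # realized variance
    bv = (np.pi / 2) * np.sum(np.abs(r[1:] * r[:-1]))  # bipower variation
    jump = max(rv - bv, 0.0)                           # discontinuous (jump) part
    return rv - jump, jump                             # continuous part, jump part

# Example on synthetic five-minute returns (78 intervals per trading day)
rng = np.random.default_rng(seed=1)
cont, jmp = jump_decomposition(rng.normal(0.0, 1e-3, size=78))
print(f"continuous: {cont:.3e}  jump: {jmp:.3e}")
```

In HAR-type models of the kind the abstract describes, future realized variance is then regressed on daily, weekly and monthly averages of such components.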
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium-range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
Abstract:
During the eruption of Eyjafjallajökull in April and May 2010, the London Volcanic Ash Advisory Centre demonstrated the importance of infrared (IR) satellite imagery for monitoring volcanic ash and validating the Met Office operational model, NAME. This model is used to forecast ash dispersion and forms much of the basis of the advice given to civil aviation. NAME requires a source term describing the properties of the eruption plume at the volcanic source. Elements of the source term are often highly uncertain and significant effort has therefore been invested into the use of satellite observations of ash clouds to constrain them. This paper presents a data insertion method, where satellite observations of downwind ash clouds are used to create effective ‘virtual sources’ far from the vent. Uncertainty in the model output is known to increase over the duration of a model run, as inaccuracies in the source term, meteorological data and the parameterizations of the modelled processes accumulate. This new technique, where the dispersion model (DM) is ‘reinitialized’ part-way through a run, could go some way to addressing this.
Abstract:
The most damaging winds in a severe extratropical cyclone often occur just ahead of the evaporating ends of cloud filaments emanating from the so-called cloud head. These winds are associated with low-level jets (LLJs), sometimes occurring just above the boundary layer. The question then arises as to how the high momentum is transferred to the surface. An opportunity to address this question arose when the severe ‘St Jude's Day’ windstorm travelled across southern England on 28 October 2013. We have carried out a mesoanalysis of a network of 1 min resolution automatic weather stations and high-resolution Doppler radar scans from the sensitive S-band Chilbolton Advanced Meteorological Radar (CAMRa), along with satellite and radar network imagery and numerical weather prediction products. We show that, although the damaging winds occurred in a relatively dry region of the cyclone, there was evidence within the LLJ of abundant precipitation residues from shallow convective clouds that were evaporating in a localized region of descent. We find that pockets of high momentum were transported towards the surface by the few remaining actively precipitating convective clouds within the LLJ and also by precipitation-free convection in the boundary layer that was able to entrain evaporatively cooled air from the LLJ. The boundary-layer convection was organized in along-wind rolls separated by 500 to about 3000 m, the spacing varying according to the vertical extent of the convection. The spacing was greatest where the strongest winds penetrated to the surface. A run with a medium-resolution version of the Weather Research and Forecasting (WRF) model was able to reproduce the properties of the observed LLJ. It confirmed the LLJ to be a sting jet, which descended over the leading edge of a weaker cold-conveyor-belt jet.
Abstract:
On 23 November 1981, a strong cold front swept across the U.K., producing tornadoes from the west to the east coasts. An extensive campaign to collect tornado reports by the Tornado and Storm Research Organisation (TORRO) resulted in 104 reports, the largest U.K. outbreak. The front was simulated with a convection-permitting numerical model down to 200-m horizontal grid spacing to better understand its evolution and meteorological environment. The event was typical of tornadoes in the U.K., with convective available potential energy (CAPE) less than 150 J kg⁻¹, 0-1-km wind shear of 10-20 m s⁻¹, and a narrow cold-frontal rainband forming precipitation cores and gaps. A line of cyclonic absolute vorticity existed along the front, with maxima as large as 0.04 s⁻¹. Some hook-shaped misovortices bore kinematic similarity to supercells. The narrow swath along which the line was tornadic was bounded on the equatorward side by weak vorticity along the line and on the poleward side by zero CAPE, enclosing a region where the environment was otherwise favorable for tornadogenesis. To determine whether the 104 tornado reports were plausible, possible duplicate reports were first eliminated, leaving an estimated 58 to 90 tornadoes. Second, the number of possible parent misovortices that may have spawned tornadoes was estimated from model output. The number of plausible tornado reports within the 200-m grid-spacing domain was between 22 and 44, whereas the model simulation yielded an estimate of 30 possible parent misovortices within this domain. These results suggest that the figure of 90 reports is plausible.
Abstract:
The Plant–Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochasticity comes from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant–Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant–Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
Abstract:
There is evidence that automatic visual attention favors the right side. This study investigated whether this lateral asymmetry interacts with the right hemisphere dominance for visual location processing and left hemisphere dominance for visual shape processing. Volunteers were tested in a location discrimination task and a shape discrimination task. The target stimuli (S2) could occur in the left or right hemifield. They were preceded by an ipsilateral, contralateral or bilateral prime stimulus (S1). The attentional effect produced by the right S1 was larger than that produced by the left S1. This lateral asymmetry was similar between the two tasks suggesting that the hemispheric asymmetries of visual mechanisms do not contribute to it. The finding that it was basically due to a longer reaction time to the left S2 than to the right S2 for the contralateral S1 condition suggests that the inhibitory component of attention is laterally asymmetric.
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The current simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received by each point is computed taking the center of the field as reference. In the digitized mammography, the percentages of the optical density of all the pixels of the analyzed image are also calculated. The Heel effect produces a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing for the automatic determination of the correlation between these two sets of data. The measurements obtained with our proposed method differ on average from those of commercial equipment by 2.49 mm in the direction perpendicular to the anode-cathode axis and by 2.02 mm parallel to it. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their true x-ray absorption. To evaluate this method, experimental data were taken from known objects; the evaluation could also be performed with clinical and digital images.
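A minimal sketch of the correction idea, assuming the anode-cathode axis runs along the image's x direction and using the two distributions named in the abstract; the parameterization and the default values of sigma, a and b are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def correct_heel_effect(image, field_center, sigma=180.0, a=1.0, b=-0.05):
    """Divide out a simulated Heel-effect field, normalized to 1 at the
    field center, so each pixel is compensated for the excess or deficit
    of radiation it received.

    Assumptions (not from the paper): anode-cathode axis along x;
    Gaussian profile across the axis, logarithmic profile along it;
    sigma, a, b would in practice be fitted to the simulated field.
    """
    rows, cols = image.shape
    cy, cx = field_center
    y, x = np.mgrid[0:rows, 0:cols].astype(float)
    across = np.exp(-((y - cy) ** 2) / (2.0 * sigma**2))  # Gaussian around the axis
    along = np.clip(a + b * np.log1p(np.abs(x - cx)), 1e-3, None)  # log parallel to it
    field = across * along
    field /= field[int(cy), int(cx)]   # percentage relative to the field center
    return image / field
```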
Abstract:
An entropy-based image segmentation approach is introduced and applied to color images obtained from Google Earth. Segmentation refers to the process of partitioning a digital image in order to locate different objects and regions of interest. The application to satellite images paves the way to automated monitoring of ecological catastrophes, urban growth, agricultural activity, maritime pollution, climate change and general surveillance. Regions representing aquatic, rural and urban areas are identified and the accuracy of the proposed segmentation methodology is evaluated. The comparison with gray-level images revealed that the color information is fundamental to obtaining an accurate segmentation.
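A minimal sketch of one way to build the local entropy maps such a segmentation relies on, using Shannon entropy of windowed intensity histograms; the abstract does not specify the paper's exact entropy functional or windowing scheme, so everything below is illustrative.

```python
import numpy as np

def local_entropy(channel, win=16, bins=32):
    """Shannon entropy of the intensity histogram in each non-overlapping
    win x win window of one image channel (values assumed in 0..255)."""
    h, w = channel.shape
    ent = np.zeros((h // win, w // win))
    for i in range(ent.shape[0]):
        for j in range(ent.shape[1]):
            block = channel[i * win:(i + 1) * win, j * win:(j + 1) * win]
            counts, _ = np.histogram(block, bins=bins, range=(0, 255))
            p = counts[counts > 0] / counts.sum()
            ent[i, j] = -np.sum(p * np.log2(p))   # high entropy = textured region
    return ent

# For an RGB Google Earth image `img` of shape (H, W, 3), one entropy map per
# channel keeps the color information the paper found essential:
# maps = np.stack([local_entropy(img[..., c]) for c in range(3)], axis=-1)
```

Thresholding or clustering the stacked maps could then yield the aquatic, rural and urban labels the abstract mentions.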
Abstract:
This work describes a novel methodology for automatic contour extraction from 2D images of 3D neurons (e.g. camera lucida images and other types of 2D microscopy). Most contour-based shape analysis methods cannot be used to characterize such cells because of overlaps between neuronal processes. The proposed framework is specifically aimed at the problem of contour following even in the presence of multiple overlaps. First, the input image is preprocessed in order to obtain an 8-connected skeleton with one-pixel-wide branches, as well as a set of critical regions (i.e., bifurcations and crossings). Next, for each subtree, the tracking stage iteratively labels all valid branch pixels up to a critical region, where it determines the suitable direction to proceed. Finally, the labeled skeleton segments are followed in order to yield the parametric contour of the neuronal shape under analysis. The reported system was successfully tested on several images; results from a set of three neuron images are presented here, each pertaining to a different class (alpha, delta and epsilon ganglion cells) and containing a total of 34 crossings. The algorithm successfully resolved all of these overlaps, and the method has been found to be robust even for images with close parallel segments. The proposed method may also be implemented in an efficient manner. The introduction of this approach should pave the way for more systematic application of contour-based shape analysis methods in neuronal morphology.
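The preprocessing stage lends itself to a short sketch: skeletonize the binary neuron image and classify skeleton pixels by their 8-neighbour count to find tips and critical regions. This assumes scikit-image is available and covers only the first step; the paper's tracking and contour-following stages are not reproduced here.

```python
import numpy as np
from skimage.morphology import skeletonize  # assumed dependency

def skeleton_critical_regions(binary_image):
    """Return the 8-connected one-pixel-wide skeleton plus boolean masks of
    tips (1 neighbour) and critical regions, i.e. bifurcations and
    crossings (3+ neighbours)."""
    skel = skeletonize(binary_image.astype(bool))
    padded = np.pad(skel, 1).astype(int)
    # neighbour count: sum of the 8 shifted copies of the skeleton
    nbrs = sum(np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))[1:-1, 1:-1]
    tips = skel & (nbrs == 1)
    critical = skel & (nbrs >= 3)
    return skel, tips, critical
```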
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and those of the template solution provided by the author of the exercise. Each solution is a geometric construction which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent or not. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time.
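The equivalence test reduces to a short sketch: apply both constructions to the same randomly generated inputs and compare the distances between corresponding outputs. The callable interface below (lists of (x, y) points in, lists of (x, y) points out) is an illustrative simplification; iGeom's checker handles richer geometric objects.

```python
import math
import random

def equivalent(student, template, n_inputs, trials=20, tol=1e-6):
    """Probabilistically decide whether two constructions compute the same
    function: equal outputs (within tol) on every shared random input."""
    for _ in range(trials):
        pts = [(random.uniform(-10, 10), random.uniform(-10, 10))
               for _ in range(n_inputs)]
        out_s, out_t = student(pts), template(pts)
        if len(out_s) != len(out_t):
            return False
        if any(math.dist(p, q) > tol for p, q in zip(out_s, out_t)):
            return False
    return True

# Two different midpoint constructions are judged equivalent:
mid1 = lambda pts: [((pts[0][0] + pts[1][0]) / 2, (pts[0][1] + pts[1][1]) / 2)]
mid2 = lambda pts: [(pts[1][0] + (pts[0][0] - pts[1][0]) / 2,
                     pts[1][1] + (pts[0][1] - pts[1][1]) / 2)]
print(equivalent(mid1, mid2, n_inputs=2))  # True
```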
Abstract:
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple-sparse-camera free-view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768 x 576 with several moving objects at about 11 fps.
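A minimal sketch of the billboard idea: place a quad at a tracked object's 3D position and rotate it to face the virtual camera, so a 2D texture of the object can stand in for full 3D geometry. The vector math is standard; the function and its parameters are illustrative, not the prototype's API.

```python
import numpy as np

def billboard_quad(obj_pos, cam_pos, up=(0.0, 1.0, 0.0), width=1.0, height=2.0):
    """Four corners (counter-clockwise) of a camera-facing quad centred on
    obj_pos; assumes `up` is not parallel to the view direction."""
    obj, cam, up = (np.asarray(v, dtype=float) for v in (obj_pos, cam_pos, up))
    view = cam - obj
    view /= np.linalg.norm(view)            # unit vector towards the camera
    right = np.cross(up, view)
    right /= np.linalg.norm(right)          # billboard's horizontal axis
    v_up = np.cross(view, right)            # billboard's vertical axis
    hw, hh = width / 2.0, height / 2.0
    return [obj - right * hw - v_up * hh, obj + right * hw - v_up * hh,
            obj + right * hw + v_up * hh, obj - right * hw + v_up * hh]
```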
Abstract:
This article describes the integration of the LSD (Logic for Structure Determination) and SISTEMAT expert systems, both designed for the computer-assisted structure elucidation of small organic molecules. A first step has been achieved towards linking the SISTEMAT database with the LSD structure generator. The skeletal descriptions found by the SISTEMAT programs are now easily transferred to LSD as substructural constraints. Examples of the synergy between these expert systems are given for recently reported natural products.
Abstract:
This paper describes an automatic device for in situ and continuous monitoring of the ageing process occurring in natural and synthetic resins widely used in art and in the conservation and restoration of cultural artefacts. The results of tests carried out under accelerated ageing conditions are also presented. This easy-to-assemble palm-top device essentially consists of oscillators based on quartz crystal resonators coated with films of the organic materials whose response to environmental stress is to be addressed. The device contains a microcontroller which selects the oscillators at pre-defined time intervals and records and stores their oscillation frequency. The ageing of the coatings, caused by environmental stress and resulting in a shift in the oscillation frequency of the modified crystals, can be straightforwardly monitored in this way. The kinetics of this process reflects the level of damage risk associated with a specific microenvironment. In this case, natural and artificial resins broadly employed in art and in the restoration of artistic and archaeological artefacts (dammar and Paraloid B72) were applied onto the crystals. The environmental stress was represented by visible and UV radiation, since the chosen materials are known to be photochemically active to different extents. In the case of dammar, the results obtained are consistent with previous data acquired using bench-top equipment by impedance analysis through discrete measurements, and confirm that the ageing of this material is reflected in the gravimetric response of the modified quartz crystals. As for Paraloid B72, the outcome of the assays indicates that the resin is resistant to visible light but very sensitive to UV irradiation. The use of a continuous monitoring system, apart from being more practical, is essential to identify short-term (i.e. reversible) events, like water vapour adsorption/desorption processes, and to highlight ageing trends or sudden changes in such trends.
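As an illustration of the post-processing such continuous records enable, the sketch below estimates the long-term drift rate of one crystal's frequency series and flags sudden departures from the trend; this is illustrative analysis code, not the device firmware, and the threshold is an assumption.

```python
import numpy as np

def drift_and_events(t_hours, freq_hz, jump_sigma=5.0):
    """Fit a linear ageing trend (Hz per hour) to a crystal's frequency
    record and flag reading-to-reading steps more than jump_sigma standard
    deviations from the mean step (e.g. vapour adsorption/desorption)."""
    t = np.asarray(t_hours, dtype=float)
    f = np.asarray(freq_hz, dtype=float)
    drift = np.polyfit(t, f, 1)[0]          # slope of the ageing trend
    steps = np.diff(f)
    events = np.where(np.abs(steps - steps.mean())
                      > jump_sigma * steps.std())[0] + 1
    return drift, events
```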