891 results for Automatic forecasting


Relevance: 20.00%

Abstract:

The problem of monitoring digital television broadcasts across Europe for the development of robust and reliable receivers is increasingly significant; hence the need to automate the process of analysing and monitoring these signals. This project presents the software development of an application that aims to solve part of this problem. The application analyses, manages and captures digital television signals. This document introduces the central subject matter, digital television and the information carried by television signals, specifically as defined by the "Digital Video Broadcasting" standard. The text then concentrates on explaining and describing the functionality the application needs to cover, as well as introducing and explaining each stage of a software development process. Finally, it summarises the advantages of creating this program for the automation of digital signal analysis, based on an optimisation of resources.

Relevance: 20.00%

Abstract:

In recent years, traditional inequality measures have been used quite extensively to examine the international distribution of environmental indicators. One of their main characteristics is that each assigns different weights to changes occurring in different sections of the distribution of the variable and, consequently, the results they yield can be very different. Hence, we argue for using a range of well-recommended measures to achieve more robust results. We also provide an empirical test of the comparative behaviour of several suitable inequality measures and environmental indicators. Our findings support the hypothesis that in some cases the measures differ in both the sign of the evolution and its size. JEL codes: D39; Q43; Q56. Keywords: international environment factor distribution; Kaya factors; inequality measurement
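To illustrate why the choice of measure matters, here is a minimal Python sketch of two common inequality measures reacting differently to the same distributions. The Gini coefficient and Theil T index are chosen purely as examples; the abstract does not list the measures actually used in the paper.

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted-weights formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * x.sum())

def theil(x):
    """Theil T index; more sensitive to the top of the distribution."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return np.mean(r * np.log(r))

equal = [1.0, 1.0, 1.0, 1.0]
skewed = [1.0, 1.0, 1.0, 7.0]
print(gini(equal), theil(equal))    # both 0 under perfect equality
print(gini(skewed), theil(skewed))  # both positive, but on different scales
```

Because the two indices weight the tails of the distribution differently, their rankings of two distributions can disagree, which is the motivation for using several measures at once.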

Relevance: 20.00%

Abstract:

Purpose: Morphometric measurements of the ascending aorta have recently been performed with ECG-gated MDCT to support the development of future endovascular therapies (TCT) [1]. However, the variability of these measurements remains unknown. It would be interesting to know the impact of computer-aided diagnosis (CAD), with automated segmentation of the vessel and automatic diameter measurements, on the management of ascending aorta aneurysms. Methods and materials: Thirty patients referred for ECG-gated CT thoracic angiography (64-row CT scanner) were evaluated. Measurements of the maximum and minimum ascending aorta diameters were obtained automatically with a commercially available CAD and semi-manually by two observers separately. The CAD algorithms segment the iv-enhanced lumen of the ascending aorta into planes perpendicular to the centreline; the CAD then determines the largest and smallest diameters. Both observers repeated the automatic and semi-manual measurements in a separate session at least one month after the first. The Bland and Altman method was used to study inter- and intraobserver variability, and a Wilcoxon signed-rank test was used to analyse differences between observers. Results: Interobserver variability for semi-manual measurements between the first and second observers was 1.2 and 1.0 mm for the maximal and minimal diameter, respectively. Intraobserver variability of each observer ranged from 0.8 to 1.2 mm, with the lowest variability produced by the more experienced observer. CAD variability could be as low as 0.3 mm, showing that it can perform better than human observers. However, under non-optimal conditions (streak artefacts from contrast in the superior vena cava, or weak lumen enhancement), CAD variability can be as high as 0.9 mm, reaching that of semi-manual measurements.
Furthermore, there were significant differences between the two observers for maximal and minimal diameter measurements (p<0.001). There was also a significant difference between the first observer and CAD for maximal diameter measurements, with the former underestimating the diameter compared to the latter (p<0.001). Minimal diameters were higher when measured by the second observer than by CAD (p<0.001). Neither the difference in mean minimal diameter between the first observer and CAD nor the difference in mean maximal diameter between the second observer and CAD was significant (p=0.20 and 0.06, respectively). Conclusion: CAD algorithms can reduce the variability of diameter measurements in the follow-up of ascending aorta aneurysms. Nevertheless, under non-optimal conditions it may be necessary to correct the measurements manually; improvements to the algorithms should help avoid this situation.
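The Bland and Altman agreement analysis referred to above can be sketched as follows. The paired diameter values are hypothetical, chosen only for illustration, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement sets."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired maximal-diameter measurements (mm) from two observers
obs1 = [34.1, 35.0, 33.8, 36.2, 34.9]
obs2 = [34.4, 34.7, 34.1, 36.0, 35.3]
bias, (lo, hi) = bland_altman(obs1, obs2)
print(f"bias={bias:.2f} mm, limits of agreement=({lo:.2f}, {hi:.2f}) mm")
```

The width of the limits of agreement is what the abstract summarises as inter/intraobserver "variability".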

Relevance: 20.00%

Abstract:

Law and science have partnered in the recent past to solve major public health issues, ranging from asbestos to averting the threat of a nuclear holocaust. This paper travels to a legal and health policy frontier where no one has gone before, examining the role of precautionary principles under international law as a matter of codified international jurisprudence, drawing on draft terminology from prominent sources including the Royal Commission on Environmental Pollution (UK), the Swiss Confederation, the USA (NIOSH) and the OECD. The research questions addressed are: how can the benefits of nanotechnology be realized while minimizing the risk of harm? What law, if any, applies to protect consumers (who comprise the general public, nanotechnology workers and their corporate social partners) and other stakeholders within civil society from liability? What law, if any, applies to prevent harm?

Relevance: 20.00%

Abstract:

Background: A software-based tool (Optem) has been developed to automate the recommendations of the Canadian Multiple Sclerosis Working Group for optimizing MS treatment, in order to avoid subjective interpretation. Methods: Treatment Optimization Recommendations (TORs) were applied to our database of patients treated with IFN beta1a IM. Patient data were assessed during year 1 for disease activity, and patients were assigned to two groups according to TOR: "change treatment" (CH) and "no change treatment" (NCH). These assessments were then compared to observed clinical outcomes for disease activity over the following years. Results: We have data on 55 patients. "Change treatment" status was assigned to 22 patients, and "no change treatment" to 33 patients. The estimated sensitivity and specificity according to last-visit status were 73.9% and 84.4%. During the following years, the relapse rate was always higher in the "change treatment" group than in the "no change treatment" group (5 y: CH 0.7, NCH 0.07, p < 0.001; 12 m to last visit: CH 0.536, NCH 0.34). We obtained the same results with the EDSS (4 y: CH 3.53, NCH 2.55; annual progression rate from 12 m to last visit: CH 0.29, NCH 0.13). Conclusion: Applying TOR at the first year of therapy allowed accurate prediction of continued disease activity in relapses and disability progression.
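The sensitivity and specificity quoted above follow directly from a confusion matrix. The sketch below uses hypothetical counts (17/23 and 27/32), chosen only because they reproduce the quoted 73.9% and 84.4% over 55 patients; the paper's actual confusion matrix is not given in this abstract:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 17 of 23 patients with continued activity were
# assigned "change treatment", 27 of 32 stable patients "no change".
sens, spec = sens_spec(tp=17, fn=6, tn=27, fp=5)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```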

Relevance: 20.00%

Abstract:

Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
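The transform-then-back-transform step can be sketched with the centred log-ratio (clr), one of the standard Aitchison transforms; the paper may use a different log-ratio variant, so take this as a minimal illustration of the round trip, not the paper's exact method:

```python
import numpy as np

def clr(p):
    """Centred log-ratio: maps a composition into unconstrained real space."""
    p = np.asarray(p, dtype=float)
    g = np.exp(np.mean(np.log(p)))  # geometric mean of the parts
    return np.log(p / g)

def clr_inv(y):
    """Back-transform; the closure renormalises to the unit sum."""
    e = np.exp(y)
    return e / e.sum()

d = np.array([0.1, 0.3, 0.6])       # a toy death-density composition
y = clr(d)                          # apply multivariate statistics here
print(np.allclose(clr_inv(y), d))   # round trip honours the constraint
print(np.isclose(y.sum(), 0.0))     # clr coordinates sum to zero
```

Forecasting is done on the unconstrained coordinates `y`, and the back-transform guarantees the forecast densities remain positive and sum to one.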

Relevance: 20.00%

Abstract:

Named entity recognizers are unable to distinguish whether a term is a general concept, such as "scientist", or an individual, such as "Einstein". In this paper we explore the possibility of reaching this goal by combining two basic approaches: (i) Super Sense Tagging (SST) and (ii) YAGO. Thanks to these two powerful tools we were able to automatically create a corpus with which to train the SuperSense Tagger. The overall F1 is over 76% and the model is publicly available.

Relevance: 20.00%

Abstract:

We present a system for dynamic network resource configuration in environments with bandwidth reservation. The proposed system is completely distributed and automates the mechanisms for adapting the logical network to the offered load. The system is able to dynamically manage a logical network such as a virtual path network in ATM or a label switched path network in MPLS or GMPLS. The system design and implementation are based on a multi-agent system (MAS) which makes the decisions of when and how to change a logical path. Despite the lack of a centralised global network view, results show that the MAS manages the network resources effectively, reducing the connection blocking probability and therefore achieving better utilisation of network resources. We also include details of its architecture and implementation.

Relevance: 20.00%

Abstract:

A recent trend in digital mammography is computer-aided diagnosis systems: computerised tools designed to assist radiologists, most of which are used for the automatic detection of abnormalities. However, recent studies have shown that their sensitivity decreases significantly as the density of the breast increases, and this dependence is method specific. In this paper we propose a new approach to the classification of mammographic images according to their breast parenchymal density. Our classification uses information extracted from segmentation results and is based on the underlying breast tissue texture. Classification performance was evaluated on a large set of digitised mammograms using different classifiers and a leave-one-out methodology. Results demonstrate the feasibility of estimating breast density using image processing and analysis techniques.
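A leave-one-out evaluation of the kind described can be sketched as below. The 1-nearest-neighbour classifier and the synthetic feature vectors are stand-ins for illustration; the paper's actual texture features and classifiers differ:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out: classify each sample with 1-NN on the remaining ones."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the held-out sample
        correct += y[int(np.argmin(d))] == y[i]
    return correct / len(X)

# Synthetic "texture features" for two well-separated density classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
print(f"leave-one-out accuracy: {loo_accuracy(X, y):.2f}")
```

Leave-one-out wastes no data on a held-out split, which suits moderate-sized medical image sets like the one described.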

Relevance: 20.00%

Abstract:

Automatically obtaining the 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible, from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. Range finders can be used to acquire such 3D data, and coded structured light is one of the most widely used techniques to retrieve the 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. These systems belong to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between the camera(s) and the projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
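For the simplest case of a rectified camera/projector pair, the classical triangulation step reduces to a stereo-like relation; the sketch below assumes this simplified geometry with an illustrative focal length and baseline (real setups use full calibrated projection matrices):

```python
def depth_from_disparity(x_cam, x_proj, f, baseline):
    """Rectified camera/projector pair: once a coded pattern gives the
    correspondence between camera column x_cam and projector column x_proj
    (in pixels), depth follows from z = f * b / (x_cam - x_proj)."""
    return f * baseline / (x_cam - x_proj)

# Assumed values: focal length 500 px, baseline 0.1 m, 20 px disparity
z = depth_from_disparity(x_cam=120.0, x_proj=100.0, f=500.0, baseline=0.1)
print(f"reconstructed depth: {z:.2f} m")
```

This is exactly why the coding matters: without a decodable pattern, the correspondence `x_cam <-> x_proj` cannot be established reliably on a textureless surface.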

Relevance: 20.00%

Abstract:

This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and non-nasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
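GMM-based classification of this kind can be sketched in miniature with one Gaussian per class and a likelihood comparison (a 1-component GMM). The one-dimensional "nasalization score" feature and its values are hypothetical; the real system models full speech spectra with multi-component mixtures:

```python
import math

class GaussianModel:
    """One Gaussian per class -- a 1-component GMM, enough to sketch
    likelihood-based classification of a scalar acoustic feature."""
    def fit(self, xs):
        n = len(xs)
        self.mu = sum(xs) / n
        self.var = sum((x - self.mu) ** 2 for x in xs) / n
        return self

    def loglik(self, x):
        return -0.5 * (math.log(2 * math.pi * self.var)
                       + (x - self.mu) ** 2 / self.var)

# Hypothetical training values of the feature for each class
healthy = GaussianModel().fit([0.1, 0.2, 0.15, 0.25])
apnoea = GaussianModel().fit([0.8, 0.9, 0.85, 0.75])

def classify(x):
    """Assign the class whose model gives the higher likelihood."""
    return "apnoea" if apnoea.loglik(x) > healthy.loglik(x) else "healthy"

print(classify(0.9), classify(0.1))  # apnoea healthy
```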

Relevance: 20.00%

Abstract:

A Web-based tool developed to automatically correct relational database schemas is presented. This tool has been integrated into a more general e-learning platform and is used to reinforce teaching and learning in database courses. The platform assigns each student a set of database problems selected from a common repository. The student has to design a relational database schema and enter it into the system through a user-friendly interface specifically designed for this purpose. The correction tool corrects the design and shows the detected errors. The student then has the chance to correct them and submit a new solution, repeating these steps as many times as required until a correct solution is obtained. Currently, this system is being used in different introductory database courses at the University of Girona with very promising results.

Relevance: 20.00%

Abstract:

In several countries, surveillance of insect vectors is accomplished with automatic traps. This study assessed the performance of the Mosquito Magnet® Independence (MMI) in comparison with a CDC trap with CO2 and lactic acid (CDC-A) and a CDC light trap (CDC-LT). The collection sites were in a rural region located in a fragment of secondary tropical Atlantic rainforest, southeastern Brazil. Limatus durhami and Limatus flavisetosus were the dominant species in the MMI, whereas Ochlerotatus scapularis was most abundant in the CDC-A; Culex ribeirensis and Culex sacchettae were the dominant species in the CDC-LT. Comparisons among traps were based on diversity indices. The diversity analyses showed that the MMI captured a higher abundance of mosquitoes and that the species richness estimated with it was higher than with the CDC-LT. In contrast, the difference between the MMI and the CDC-A was not statistically significant. Consequently, the latter trap seems to be both an alternative to the MMI and complementary to it for ecological studies and entomological surveillance.
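Trap comparisons of this kind typically rest on indices such as Shannon's H'; the abstract does not name the specific indices used, so the sketch below is illustrative, with hypothetical per-species counts rather than the study's data:

```python
import math

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c > 0)

# Hypothetical per-species capture counts for two traps
trap_a = [40, 30, 20, 10]  # more even community
trap_b = [85, 10, 4, 1]    # dominated by one species
print(shannon(trap_a) > shannon(trap_b))  # True: trap_a is more diverse
```

H' increases both with the number of species and with the evenness of their abundances, which is why it separates abundance from richness when comparing traps.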