837 results for semi binary based feature detector descriptor


Relevance:

30.00%

Publisher:

Abstract:

Stratospheric ozone is of major interest as it absorbs most harmful UV radiation from the sun, allowing life on Earth. Ground-based microwave remote sensing is the only method that allows for the measurement of ozone profiles up to the mesopause, over 24 hours and under different weather conditions with high time resolution. In this paper a novel ground-based microwave radiometer is presented. It is called GROMOS-C (GRound based Ozone MOnitoring System for Campaigns), and it has been designed to measure the vertical profile of ozone distribution in the middle atmosphere by observing ozone emission spectra at a frequency of 110.836 GHz. The instrument is designed in a compact way which makes it transportable and suitable for outdoor use in campaigns, an advantageous feature that is lacking in present day ozone radiometers. It is operated through remote control. GROMOS-C is a total power radiometer which uses a pre-amplified heterodyne receiver, and a digital fast Fourier transform spectrometer for the spectral analysis. Among its main new features, the incorporation of different calibration loads stands out; this includes a noise diode and a new type of blackbody target specifically designed for this instrument, based on Peltier elements. The calibration scheme does not depend on the use of liquid nitrogen; therefore GROMOS-C can be operated at remote places with no maintenance requirements. In addition, the instrument can be switched in frequency to observe the CO line at 115 GHz. A description of the main characteristics of GROMOS-C is included in this paper, as well as the results of a first campaign at the High Altitude Research Station at Jungfraujoch (HFSJ), Switzerland. 
The validation is performed by comparing the retrieved profiles against equivalent profiles from MLS (Microwave Limb Sounder) satellite data and ECMWF (European Centre for Medium-Range Weather Forecasts) model data, as well as against measurements from our nearby NDACC (Network for the Detection of Atmospheric Composition Change) ozone radiometer in Bern.

Relevance:

30.00%

Publisher:

Abstract:

Index tracking has become one of the most common strategies in asset management. The index-tracking problem consists of constructing a portfolio that replicates the future performance of an index by including only a subset of the index constituents in the portfolio. Finding the most representative subset is challenging when the number of stocks in the index is large. We introduce a new three-stage approach that at first identifies promising subsets by employing data-mining techniques, then determines the stock weights in the subsets using mixed-binary linear programming, and finally evaluates the subsets based on cross validation. The best subset is returned as the tracking portfolio. Our approach outperforms state-of-the-art methods in terms of out-of-sample performance and running times.
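The three-stage idea can be sketched in a minimal form. The example below uses synthetic returns and a hypothetical candidate subset, and replaces the paper's mixed-binary linear program with a simple least-squares weighting, so it illustrates only the weight-fitting and tracking-error evaluation step:

```python
import numpy as np

# Illustrative index-tracking sketch (hypothetical data, NOT the paper's
# mixed-binary LP): given daily returns of the index constituents and a
# candidate subset, fit subset weights by least squares and measure the
# in-sample tracking error.

rng = np.random.default_rng(0)
T, n = 250, 20                          # trading days, index constituents
R = rng.normal(0.0005, 0.01, (T, n))    # constituent returns (synthetic)
index = R.mean(axis=1)                  # equal-weighted index return

subset = [0, 3, 7, 12, 18]              # a candidate subset (hypothetical)
X = R[:, subset]

# Least-squares weights, then clip and renormalise to sum to one -- a
# simplification of the paper's mixed-binary linear programming stage.
w, *_ = np.linalg.lstsq(X, index, rcond=None)
w = np.clip(w, 0, None)
w = w / w.sum()

tracking_error = np.std(X @ w - index)  # daily tracking-error volatility
```

In the paper's approach, many such subsets would be generated by data mining, weighted by the exact optimisation, and ranked by cross-validated out-of-sample tracking error.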

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Anxiety disorders have been linked to an increased risk of incident coronary heart disease, in which inflammation plays a key pathogenic role. To date, no studies have looked at the association between proinflammatory markers and agoraphobia. METHODS In a random Swiss population sample of 2890 persons (35-67 years, 53% women), we diagnosed a total of 124 individuals (4.3%) with agoraphobia using a validated semi-structured psychiatric interview. We also assessed socioeconomic status, traditional cardiovascular risk factors (i.e., body mass index, hypertension, blood glucose levels, total cholesterol/high-density lipoprotein-cholesterol ratio), health behaviors (i.e., smoking, alcohol consumption, and physical activity), and other major psychiatric disorders (other anxiety disorders, major depressive disorder, drug dependence), all of which were treated as covariates in linear regression models. Circulating levels of inflammatory markers, statistically controlled for the baseline demographic and health-related measures, were determined at a mean follow-up of 5.5 ± 0.4 years (range 4.7-8.5). RESULTS Individuals with agoraphobia had significantly higher follow-up levels of C-reactive protein (p = 0.007) and tumor necrosis factor-α (p = 0.042), as well as lower levels of the cardioprotective marker adiponectin (p = 0.032), than their non-agoraphobic counterparts. Follow-up levels of interleukin (IL)-1β and IL-6 did not significantly differ between the two groups. CONCLUSIONS Our results suggest an increase in chronic low-grade inflammation in agoraphobia over time. Such a mechanism might link agoraphobia with an increased risk of atherosclerosis and coronary heart disease, and needs to be tested in longitudinal studies.

Relevance:

30.00%

Publisher:

Abstract:

Sediments can act as long-term sinks for environmental pollutants. Within the past decades, dioxin-like compounds (DLCs) such as polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs), polychlorinated biphenyls (PCBs), and polycyclic aromatic hydrocarbons (PAHs) have attracted significant attention in the scientific community. To investigate the time- and concentration-dependent uptake of DLCs and PAHs in rainbow trout (Oncorhynchus mykiss) and their associated toxicological effects, we conducted exposure experiments using suspensions of three field-collected sediments from the rivers Rhine and Elbe, which were chosen to represent different contamination levels. Five serial dilutions of contaminated sediments were tested; these originated from the Prossen and Zollelbe sampling sites (both in the river Elbe, Germany) and from Ehrenbreitstein (Rhine, Germany), with lower levels of contamination. Fish were exposed to suspensions of these dilutions under semi-static conditions for 90 days. Analysis of muscle tissue by high resolution gas chromatography and mass spectrometry and of bile liquid by high-performance liquid chromatography showed that particle-bound PCDD/Fs, PCBs and PAHs were readily bioavailable from re-suspended sediments. Uptake of these contaminants and the associated toxicological effects in fish were largely proportional to their sediment concentrations. The changes in the investigated biomarkers closely reflected the different sediment contamination levels: cytochrome P450 1A mRNA expression and 7-ethoxyresorufin-O-deethylase activity in fish livers responded immediately and with high sensitivity, while increased frequencies of micronuclei and other nuclear aberrations, as well as histopathological and gross pathological lesions, were strong indicators of the potential long-term effects of re-suspension events. 
Our study clearly demonstrates that sediment re-suspension can lead to accumulation of PCDD/Fs and PCBs in fish, resulting in potentially adverse toxicological effects. For a sound risk assessment within the implementation of the European Water Framework Directive and related legislation, we propose a strong emphasis on sediment-bound contaminants in the context of integrated river basin management plans.

Relevance:

30.00%

Publisher:

Abstract:

An extension of k-ratio multiple comparison methods to rank-based analyses is described. The new method is analogous to the Duncan-Godbold approximate k-ratio procedure for unequal sample sizes or correlated means. The close parallel of the new methods to the Duncan-Godbold approach is shown by demonstrating that they are based upon different parameterizations as starting points. A semi-parametric basis for the new methods is established by starting from the Cox proportional hazards model and using Wald statistics; from there, the log-rank and Gehan-Breslow-Wilcoxon methods may be seen as score-statistic-based methods. Simulations and the analysis of a published data set are used to show the performance of the new methods.

Relevance:

30.00%

Publisher:

Abstract:

Logistic regression is one of the most important tools in the analysis of epidemiological and clinical data. Such data often contain missing values for one or more variables. Common practice is to eliminate all individuals for whom any information is missing. This deletion approach does not make efficient use of available information and often introduces bias. Two methods were developed to estimate logistic regression coefficients for mixed dichotomous and continuous covariates, including partially observed binary covariates. The data were assumed missing at random (MAR). One method (PD) used the predictive distribution as a weight to average the logistic regressions performed over all possible values of the missing observations, and the second method (RS) used a variant of a resampling technique. Seven additional methods were compared with these two approaches in a simulation study: (1) analysis based on only the complete cases, (2) substituting the mean of the observed values for the missing value, (3) an imputation technique based on the proportions of the observed data, (4) regressing the partially observed covariates on the remaining continuous covariates, (5) regressing the partially observed covariates on the remaining continuous covariates conditional on the response variable, (6) regressing the partially observed covariates on the remaining continuous covariates and the response variable, and (7) the EM algorithm. Both proposed methods showed smaller standard errors (s.e.) for the coefficient involving the partially observed covariate, and for the other coefficients as well. However, both methods, especially PD, are computationally demanding; thus, for the analysis of large data sets with partially observed covariates, further refinement of these approaches is needed.
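The idea of weighting completions of a missing binary covariate by its predictive distribution can be sketched as follows. All data are synthetic, and the crude constant predictive probability and the Newton fit are illustrative assumptions, not the dissertation's exact PD estimator:

```python
import numpy as np

# Sketch: each record with a missing binary covariate is expanded into its
# two possible completions, weighted by an estimated predictive probability,
# and a weighted logistic regression is fitted to the augmented data.

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)                                     # observed continuous covariate
x2 = (rng.random(n) < 1 / (1 + np.exp(-x1))).astype(float)  # binary covariate
eta = -0.5 + 1.0 * x1 + 0.8 * x2
y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

miss = rng.random(n) < 0.3                 # 30% of x2 missing at random

# Predictive distribution of x2, here crudely estimated as the empirical
# rate among complete cases (a real analysis would model P(x2 | x1)).
p_hat = np.full(n, x2[~miss].mean())

X_rows, y_rows, w_rows = [], [], []
for i in range(n):
    if miss[i]:
        for val, wt in ((0.0, 1 - p_hat[i]), (1.0, p_hat[i])):
            X_rows.append([1.0, x1[i], val]); y_rows.append(y[i]); w_rows.append(wt)
    else:
        X_rows.append([1.0, x1[i], x2[i]]); y_rows.append(y[i]); w_rows.append(1.0)
X, yv, w = np.array(X_rows), np.array(y_rows), np.array(w_rows)

# Weighted logistic regression fitted by Newton-Raphson.
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (w * (yv - p))
    H = (X * (w * p * (1 - p))[:, None]).T @ X
    beta = beta + np.linalg.solve(H, grad)
```

The complete-case alternative (method 1 above) would simply drop the rows with `miss[i]` true, discarding roughly a third of the sample.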

Relevance:

30.00%

Publisher:

Abstract:

Medical errors and close calls are pervasive in health care. It is hypothesized that the causes of close calls are the same as for medical errors; therefore, learning about close calls can help prevent errors and increase patient safety. Yet despite efforts to encourage close call reporting, close calls as well as medical errors are under-reported in health care. The purpose of this dissertation was to implement and evaluate a web-based anonymous close call reporting system in three units at an urban hospital. The study participants were physicians, nurses and medical technicians (N = 187) who care for patients in the Medical Intermediate Care Unit, the Surgical Intermediate Care Unit, and the Coronary Catheterization Laboratory in the hospital. We provided educational information to the participants on how to use the system, and e-mailed and delivered paper reminders to report to the participants throughout the 19-month project. We surveyed the participants at the beginning and at the end of the study to assess their attitudes and beliefs regarding incident reporting. We found that the majority of the health care providers in our study are supportive of incident reporting in general, but in practice very few had actually reported an error or a close call. We also conducted semi-structured interviews 20 weeks after we made the close call reporting system available. The purpose of the interviews was to further assess the participants' attitudes regarding incident reporting and the reporting system. Our findings suggest that the health care providers are supportive of medical error reporting in general, but are not convinced of the benefit of reporting close calls. Barriers to close call reporting cited include lack of time, heavy workloads, preferring to take care of close calls "on the spot", and not seeing the benefits of close call reporting. Consequently, only two close calls were reported via the system by two separate caregivers during the project.
The findings suggest that future efforts to increase close call reporting must address barriers to reporting, especially the belief among caregivers that it is not worth taking time from their already busy schedules to report close calls.

Relevance:

30.00%

Publisher:

Abstract:

The combination of two research projects offered us the opportunity to perform a comprehensive study of the seasonal evolution of the hydrological structure and the circulation of the North Aegean Sea, at the northern extremes of the eastern Mediterranean. The combination of brackish water inflow from the Dardanelles and the sea-bottom relief dictates the significant differences between the North and South Aegean water columns. The relatively warm and highly saline South Aegean waters enter the North Aegean through the dominant cyclonic circulation of the basin. In the North Aegean, three layers of distinct water masses with very different properties are observed: The 20-50 m thick surface layer is occupied mainly by Black Sea Water, modified on its way through the Bosphorus, the Sea of Marmara and the Dardanelles. Below the surface layer there is warm and highly saline water originating in the South Aegean and the Levantine, extending down to 350-400 m depth. Below this layer, the deeper-than-400 m basins of the North Aegean contain locally formed, very dense water with different θ/S characteristics in each subbasin. The circulation is characterised by a series of permanent, semi-permanent and transient mesoscale features, superimposed on the general slow cyclonic circulation of the Aegean. The mesoscale activity, while not necessarily important in enhancing isopycnal mixing in the region, in combination with the very high stratification of the upper layers increases the residence time of the water of the upper layers in the general area of the North Aegean. As a result, water having out-flowed from the Black Sea in the winter forms a separate distinct layer in the region in spring (lying between "younger" BSW and the Levantine-origin water), and is still traceable in the water column in late summer.

Relevance:

30.00%

Publisher:

Abstract:

The presence of sea-ice leads represents a key feature of the Arctic sea ice cover. Leads promote the flux of sensible and latent heat from the ocean to the cold winter atmosphere and are thereby crucial for air-sea-ice-ocean interactions. We here apply a binary segmentation procedure to identify leads from MODIS thermal infrared imagery on a daily time scale. The method separates identified leads into two uncertainty categories, with the high uncertainty being attributed to artifacts that arise from warm signatures of unrecognized clouds. Based on the obtained lead detections, we compute quasi-daily pan-Arctic lead maps for the months of January to April, 2003-2015. Our results highlight the marginal ice zone in the Fram Strait and Barents Sea as the primary region for lead activity. The spatial distribution of the average pan-Arctic lead frequencies reveals, moreover, distinct patterns of predominant fracture zones in the Beaufort Sea and along the shelf-breaks, mainly in the Siberian sector of the Arctic Ocean as well as the well-known polynya and fast-ice locations. Additionally, a substantial inter-annual variability of lead occurrences in the Arctic is indicated.
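The binary-segmentation idea can be illustrated with a minimal sketch: leads show up as warm anomalies against the colder surrounding ice, so pixels that exceed their local background temperature by a margin are flagged. The window size, threshold, and scene below are hypothetical choices for illustration, not the paper's MODIS retrieval:

```python
import numpy as np

def detect_leads(tsurf, window=5, delta=2.0):
    """Return a boolean lead mask for a 2-D surface-temperature field (K).

    A pixel is flagged as a lead candidate when it is more than `delta`
    kelvin warmer than the mean of its window x window neighbourhood.
    """
    pad = window // 2
    padded = np.pad(tsurf, pad, mode="edge")
    h, w = tsurf.shape
    background = np.zeros_like(tsurf)
    for i in range(h):
        for j in range(w):
            background[i, j] = padded[i:i + window, j:j + window].mean()
    return tsurf > background + delta

# Synthetic scene: cold ice at 245 K with a warm 250 K lead running through it.
scene = np.full((40, 40), 245.0)
scene[20, :] = 250.0
mask = detect_leads(scene)
```

A real retrieval would additionally have to separate true leads from the warm signatures of unrecognized clouds, which is what drives the paper's two uncertainty categories.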

Relevance:

30.00%

Publisher:

Abstract:

The early to mid-Holocene thermal optimum is a well-known feature in a wide variety of paleoclimate archives from the Northern Hemisphere. Reconstructed summer temperature anomalies from across northern Europe show a clear maximum around 6000 years before present (6 ka). For the marine realm, Holocene trends in sea-surface temperature reconstructions for the North Atlantic and Norwegian Sea do not exhibit a consistent pattern of early to mid-Holocene warmth. Sea-surface temperature records based on alkenones and diatoms generally show the existence of a warm early to mid-Holocene optimum. In contrast, several foraminifer- and radiolarian-based temperature records from the North Atlantic and Norwegian Sea show a cool mid-Holocene anomaly and a trend towards warmer temperatures in the late Holocene. In this paper, we revisit the foraminifer record from the Vøring Plateau in the Norwegian Sea. We also compare this record with published foraminifer-based temperature reconstructions from the North Atlantic and with modelled (CCSM3) upper ocean temperatures. Model results indicate that while the seasonal summer warming of the sea surface was stronger during the mid-Holocene, sub-surface depths experienced a cooling. This hydrographic setting can explain the discrepancies between the Holocene trends exhibited by phytoplankton and zooplankton based temperature proxy records.

Relevance:

30.00%

Publisher:

Abstract:

This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
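A descriptor built from gradient orientations accumulated over concentric rectangles can be sketched as follows; the ring layout, bin count and normalisation below are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def concentric_descriptor(patch, n_rings=3, n_bins=8):
    """Orientation histograms over concentric rectangular rings (sketch)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # A Chebyshev-like distance assigns each pixel to a rectangular ring.
    d = np.maximum(np.abs(yy - cy) / (h / 2.0), np.abs(xx - cx) / (w / 2.0))
    ring = np.minimum((d * n_rings).astype(int), n_rings - 1)
    bins = np.minimum((ori / np.pi * n_bins).astype(int), n_bins - 1)
    desc = np.zeros(n_rings * n_bins)
    for r in range(n_rings):
        sel = ring == r
        np.add.at(desc, r * n_bins + bins[sel], mag[sel])
    # L2 normalisation keeps the descriptor contrast-invariant.
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

patch = np.outer(np.arange(16), np.ones(16))   # synthetic patch: horizontal edges only
desc = concentric_descriptor(patch)
```

With 3 rings and 8 bins the feature vector has only 24 components, which illustrates why such a layout is far cheaper than dense cell-grid descriptors for real-time classification.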

Relevance:

30.00%

Publisher:

Abstract:

This study presents a robust method for ground plane detection in vision-based systems with a non-stationary camera. The proposed method is based on the reliable estimation of the homography between ground planes in successive images. This homography is computed using a feature matching approach, which in contrast to classical approaches to on-board motion estimation does not require explicit ego-motion calculation. Instead, a novel homography calculation method based on a linear estimation framework is presented. This framework provides predictions of the ground plane transformation matrix that are dynamically updated with new measurements. The method is especially suited for challenging environments, in particular traffic scenarios, in which the information is scarce and the homography computed from the images is usually inaccurate or erroneous. The proposed estimation framework is able to remove erroneous measurements and to correct those that are inaccurate, hence producing a reliable homography estimate at each instant. It is based on the evaluation of the difference between the predicted and the observed transformations, measured according to the spectral norm of the associated matrix of differences. Moreover, an example is provided on how to use the information extracted from ground plane estimation to achieve object detection and tracking. The method has been successfully demonstrated for the detection of moving vehicles in traffic environments.
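The gating step can be sketched as follows: the homography observed from feature matches is accepted only when the spectral norm of its difference from the predicted transformation is small. The threshold value and test matrices below are assumptions for illustration:

```python
import numpy as np

def accept_homography(H_pred, H_obs, tol=0.05):
    """Gate a measured homography against the predicted one (sketch).

    Both matrices are normalised so the comparison is insensitive to the
    homography's arbitrary scale; ord=2 gives the spectral norm, i.e. the
    largest singular value of the difference matrix.
    """
    H_pred = H_pred / H_pred[2, 2]
    H_obs = H_obs / H_obs[2, 2]
    return np.linalg.norm(H_pred - H_obs, 2) <= tol

H_pred = np.eye(3)                                           # predicted transformation
H_good = np.eye(3) + 0.01 * np.array([[0, 1, 0],             # slightly noisy measurement
                                      [0, 0, 0],
                                      [0, 0, 0]])
H_bad = np.eye(3) + np.array([[0, 0.5, 0],                   # grossly erroneous measurement
                              [0.2, 0, 0],
                              [0, 0, 0]])
```

In the full framework, rejected measurements would be replaced by the prediction itself, which is what keeps the homography estimate reliable when the image-based computation fails.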

Relevance:

30.00%

Publisher:

Abstract:

The electroencephalogram (EEG) signal is one of the most widely used signals in the biomedical field owing to the rich information it carries about human tasks. This research study describes a new approach based on: i) building reference models from a set of time series through the analysis of the events that they contain, which is suitable for domains where the relevant information is concentrated in specific regions of the time series, known as events; in order to deal with events, each event is characterized by a set of attributes; and ii) applying the discrete wavelet transform to the EEG data in order to extract temporal information in the form of changes in the frequency domain over time, i.e., to extract non-stationary signals embedded in the noisy background of the human brain. The performance of the model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed scheme has potential for classifying EEG signals.
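Wavelet-based feature extraction of this kind can be sketched with a plain Haar DWT in NumPy; the wavelet choice and the energy-per-level features are assumptions for illustration, since the text does not specify them:

```python
import numpy as np

def haar_dwt_energies(signal, levels=4):
    """Multi-level Haar DWT; returns the detail-coefficient energy per level.

    Each level halves the signal: the low-pass branch carries the slower
    trend on to the next level, while the high-pass (detail) branch captures
    the faster fluctuations at that scale.
    """
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass branch
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass branch
        energies.append(float(np.sum(detail ** 2)))
        x = approx
    return energies

# Synthetic "EEG-like" trace: a slow rhythm plus a short fast transient.
t = np.arange(256)
sig = np.sin(2 * np.pi * t / 64.0)
sig[100:110] += np.sin(2 * np.pi * t[100:110] / 4.0)
features = haar_dwt_energies(sig)
```

Because the transient only raises the energy at the finest levels, such per-level energies localise non-stationary activity in both time scale and strength, which is what makes them usable as classifier inputs.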

Relevance:

30.00%

Publisher:

Abstract:

The focus of this chapter is to study feature extraction and pattern classification methods from two medical areas, Stabilometry and Electroencephalography (EEG). Stabilometry is the branch of medicine responsible for examining balance in human beings. Balance and dizziness disorders are probably two of the most common illnesses that physicians have to deal with. In Stabilometry, the key nuggets of information in a time series signal are concentrated within definite time periods known as events. In this chapter, two feature extraction schemes have been developed to identify and characterise the events in Stabilometry and EEG signals. Based on these extracted features, an Adaptive Fuzzy Inference Neural Network has been applied for the classification of Stabilometry and EEG signals.

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses the automatic detection and tracking of vehicles using computer vision techniques with an on-board monocular camera. This problem has attracted great interest from the automotive industry and the scientific community, since it is the first step towards driver assistance, accident prevention and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been found, and it therefore remains an open research topic. The main challenges posed by vision-based detection and tracking are the great variability among vehicles, a background that changes dynamically due to camera motion, and the need to operate in real time. In this context, this thesis proposes a unified framework for vehicle detection and tracking that addresses the problems described above through a statistical approach. The framework consists of three main blocks, i.e., hypothesis generation, hypothesis verification, and vehicle tracking, which are carried out sequentially. Nevertheless, the exchange of information between the different blocks is encouraged in order to achieve the highest possible degree of adaptation to changes in the environment and to reduce the computational cost. To address the first task, hypothesis generation, two complementary methods are proposed, based respectively on the analysis of appearance and of scene geometry. For this purpose, the use of a transformed domain in which the perspective of the original image is removed proves especially interesting, since this domain allows a fast search within the image and hence an efficient generation of hypotheses for vehicle locations. 
The final candidates are obtained by means of a collaborative framework between the original and the transformed domains. For hypothesis verification, a supervised learning method is adopted. Thus, some of the most popular feature extraction methods are evaluated, and new descriptors are proposed according to knowledge of vehicle appearance. To evaluate the effectiveness of these descriptors in the classification task, and given that no public databases suited to the described problem exist, a new database has been generated, on which extensive tests have been carried out. Finally, a methodology for the fusion of the different classifiers is presented, together with a discussion of the combinations that offer the best results. The core of the proposed framework is a Bayesian tracking method based on particle filters. Contributions are made to the three fundamental elements of these filters: the inference algorithm, the dynamic model, and the observation model. In particular, the use of an MCMC-based sampling method is proposed that avoids the high computational cost of traditional particle filters and therefore makes the joint modelling of multiple vehicles computationally feasible. Furthermore, the aforementioned transformed domain allows the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, an observation model is proposed that integrates different features. In particular, in addition to vehicle appearance, the model also takes into account all the information received from the previous processing blocks. The proposed method runs in real time on a general-purpose computer and delivers outstanding results compared with traditional methods. 
ABSTRACT
This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community, as it is the first step for driver assistance and collision avoidance systems and for eventual autonomous driving. Although much effort has been devoted to addressing it in recent years, no satisfactory solution has yet been devised, and thus it remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. 
Due to the lack of appropriate public databases, a new database is generated, and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is shown to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
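The MCMC sampling idea behind the tracker can be illustrated with a toy Metropolis chain: instead of importance sampling, states are explored by a random walk whose stationary distribution is the posterior. A 1-D vehicle position with a Gaussian observation model stands in for the full multi-vehicle state, and all values below are hypothetical:

```python
import numpy as np

# Toy Metropolis sampler for a single vehicle's position posterior.
rng = np.random.default_rng(42)
z, sigma = 12.0, 1.0                      # observation and its noise (assumed)

def log_post(x):
    # Flat prior over a 50 m road segment, Gaussian likelihood around z.
    return -0.5 * ((x - z) / sigma) ** 2 if 0.0 <= x <= 50.0 else -np.inf

x, samples = 25.0, []                     # deliberately poor starting state
for _ in range(5000):
    prop = x + rng.normal(0, 1.0)         # random-walk proposal
    # Metropolis acceptance: always move uphill, sometimes downhill.
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)
estimate = float(np.mean(samples[1000:]))  # posterior mean after burn-in
```

Because only the acceptance ratio is needed, the chain never evaluates normalising constants, which is what keeps the cost manageable when the state is extended to several interacting vehicles.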