980 results for Source wavelet estimation
Abstract:
The comparison of cancer prevalence with cancer mortality can, under certain hypotheses, lead to an estimate of the registration rate. A method is proposed in which cases with cancer as a cause of death are divided into three categories: (1) cases already known to the registry; (2) unknown cases having occurred before the registry's creation date; (3) unknown cases occurring while the registry is in operation. The estimate is then the number of cases in the first category divided by the total of those in categories 1 and 3 (only these are expected to be registered). An application is performed on data from the Canton de Vaud. Survival rates from the Norwegian Cancer Registry are used to compute the number of unknown cases to be included in the second and third categories, respectively. The discussion focuses on the possible determinants of the completeness rates obtained for various cancer sites.
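The completeness estimate described above reduces to a simple ratio; a minimal sketch in Python, with invented counts rather than the Canton de Vaud figures:

    def completeness(known_cases, unknown_during_operation):
        """Registration completeness: category 1 / (categories 1 + 3).

        Category 2 (cases predating the registry's creation) is excluded,
        since the registry was never expected to record them.
        """
        return known_cases / (known_cases + unknown_during_operation)

    print(completeness(430, 70))  # -> 0.86, i.e. 86% completeness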
Abstract:
This paper presents a very fine grid hydrological model based on the spatiotemporal distribution of precipitation and on the topography. The goal is to estimate the flood on a catchment area, using a Probable Maximum Precipitation (PMP) leading to a Probable Maximum Flood (PMF). The spatiotemporal distribution of the precipitation was realized using six clouds modeled by the advection-diffusion equation, which describes the movement of the clouds over the terrain and also gives the evolution of the rain intensity in time. This hydrological modeling is followed by a hydraulic modeling of the surface and subterranean flows, carried out considering the factors that contribute to the hydrological cycle, such as infiltration, exfiltration and snowmelt. The model was applied to several Swiss basins using measured rain, with results showing a good correlation between the simulated and observed flows. This good correlation validates the model and gives us confidence that the results can be extrapolated to phenomena of extreme rainfall of the PMP type. In this article we present some results obtained using a PMP rainfall and the developed model.
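The abstract does not give the discretization the authors used; as a rough illustration of the advection-diffusion dynamics driving each cloud, here is a minimal 1-D finite-difference sketch in Python (grid size, wind speed and diffusivity are invented):

    import numpy as np

    nx, dx, dt = 200, 100.0, 1.0   # cells, cell size [m], time step [s]
    u, D = 5.0, 50.0               # advection speed [m/s], diffusivity [m^2/s]
    x = np.arange(nx) * dx
    c = np.exp(-((x - 5e3) / 1e3) ** 2)  # initial rain-intensity "cloud"

    for _ in range(500):
        # upwind advection + central-difference diffusion, explicit Euler,
        # periodic boundaries via np.roll
        adv = -u * (c - np.roll(c, 1)) / dx
        dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
        c += dt * (adv + dif)

    print(f"peak intensity {c.max():.3f} at x = {c.argmax() * dx:.0f} m")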
Abstract:
The aim of this paper is to describe the process and challenges of building exposure scenarios for engineered nanomaterials (ENM), using an exposure scenario format similar to that used for the European Chemicals Regulation (REACH). Over 60 exposure scenarios were developed based on information from publicly available sources (literature, books, and reports), publicly available exposure estimation models, occupational sampling campaign data from partnering institutions, and information from industrial partners regarding their own facilities. The primary focus was on carbon-based nanomaterials, nano-silver (nano-Ag) and nano-titanium dioxide (nano-TiO2), covering occupational and consumer uses of these materials with consideration of the associated environmental release. The process of building exposure scenarios illustrated the availability and limitations of existing information and exposure assessment tools for characterizing exposure to ENM, particularly as it relates to risk assessment. This article describes the gaps in the information reviewed, recommends future areas of ENM exposure research, and proposes the types of information that should, at a minimum, be included when reporting the results of such research, so that the information is useful in a wider context.
Abstract:
Chromosomal anomalies such as Robertsonian and reciprocal translocations represent a major problem in cattle breeding, as their presence induces a well-documented fertility reduction in carrier subjects. In cattle, reciprocal translocations (RCPs, chromosome abnormalities caused by an exchange of material between nonhomologous chromosomes) are considered rare: to date only 19 have been described. It is common knowledge that Robertsonian translocations are the most common cytogenetic anomalies in cattle, probably due to the existence of the endemic 1;29 Robertsonian translocation. However, these considerations are based on data obtained with techniques that cannot identify all reciprocal translocations, so their frequency is clearly underestimated. The purpose of this work is to provide a first realistic estimate of the impact of RCPs in the cattle population studied, attempting to eliminate the factors that have so far caused their frequency to be underestimated. We used a mathematical as well as a simulation approach and, as biological data, the cytogenetic results obtained over the last 15 years. The results show that only 16% of reciprocal translocations can be detected using simple Giemsa techniques; consequently, RCPs could be present in no less than 0.14% of cattle subjects, a frequency five times higher than that of de novo Robertsonian translocations. These data are useful for opening a debate on the need to introduce a more efficient method to identify RCPs in cattle.
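The correction implied by the 16% figure is a one-line rescaling; a hedged sketch (the observed frequency below is illustrative, chosen so the output matches the 0.14% quoted above, and is not the study's raw figure):

    detectable_fraction = 0.16     # share of RCPs visible with plain Giemsa staining
    observed_frequency = 0.000224  # illustrative observed RCP frequency

    true_frequency = observed_frequency / detectable_fraction
    print(f"estimated true RCP frequency: {true_frequency:.2%}")  # -> 0.14%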
Abstract:
A parametric procedure for the blind inversion of nonlinear channels is proposed, based on a recent method of blind source separation in nonlinear mixtures. Experiments show that the proposed algorithms perform efficiently, even in the presence of hard distortion. The method, based on the minimization of the output mutual information, requires knowledge of the log-derivative of the input distribution (the so-called score function). Each algorithm consists of three adaptive blocks: one devoted to the adaptive estimation of the score function, and two others estimating the inverses of the linear and nonlinear parts of the channel, (quasi-)optimally adapted using the estimated score functions. This paper is mainly concerned with the nonlinear part, for which we propose two parametric models, the first based on a polynomial model and the second on a neural network, whereas [14, 15] proposed non-parametric approaches.
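A minimal sketch of the score function at the heart of the method, psi(x) = d/dx log p(x), estimated here with a plain Gaussian kernel density; the paper's adaptive estimator may differ:

    import numpy as np

    def score_estimate(samples, x, bandwidth=0.3):
        """Estimate psi(x) = p'(x)/p(x) from samples via a Gaussian KDE."""
        d = (x[:, None] - samples[None, :]) / bandwidth
        k = np.exp(-0.5 * d ** 2)                # kernel values
        p = k.sum(axis=1)                        # unnormalized density
        dp = (-d / bandwidth * k).sum(axis=1)    # its derivative
        return dp / p                            # normalization cancels

    s = np.random.laplace(size=5000)             # heavy-tailed source
    print(score_estimate(s, np.linspace(-3, 3, 7)))  # Laplace score is -sign(x)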
Abstract:
Translator training involves the use of procedures and tools that allow students to become familiar with professional contexts. Specialized free software includes professional-quality tools and procedures that are accessible to academic institutions and to distance-learning students working from home. Real projects that use free software and collaborative translation (crowdsourcing) constitute indispensable resources in translator training.
Abstract:
EEG recordings are usually corrupted by spurious extra-cerebral artifacts, which should be rejected or cleaned up by the practitioner. Since manual screening of human EEGs is inherently error-prone and may induce experimental bias, automatic artifact detection is an important issue and the best guarantee of objective and clean results. We present a new approach, based on the time-frequency shape of muscular artifacts, to achieve reliable and automatic scoring. The methodology evaluates the impact of muscular activity on the signal while placing the emphasis on the analysis of EEG activity. The method is used to discriminate evoked potentials from several types of recorded muscular artifacts, with a sensitivity of 98.8% and a specificity of 92.2%. Automatic cleaning of EEG data is then successfully performed using this method combined with independent component analysis. The outcome of the automatic cleaning is then compared with the Slepian multitaper spectrum based technique introduced by Delorme et al (2007 Neuroimage 34 1443-9).
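The abstract does not spell out the time-frequency criterion; as a hedged stand-in, muscular (EMG) components can be flagged by their broadband high-frequency power, as in this Python sketch:

    import numpy as np
    from scipy.signal import welch

    def muscle_score(component, fs=256.0):
        """Fraction of spectral power above 20 Hz (high for EMG artifacts)."""
        f, pxx = welch(component, fs=fs, nperseg=512)
        return pxx[f > 20.0].sum() / pxx.sum()

    fs = 256.0
    t = np.arange(0, 4, 1 / fs)
    alpha = np.sin(2 * np.pi * 10 * t)          # clean 10 Hz EEG-like activity
    emg = 0.5 * np.random.randn(t.size)         # broadband muscle-like noise
    print(muscle_score(alpha, fs), muscle_score(alpha + emg, fs))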
Abstract:
The relationship between source separation and blind deconvolution is well known: if a filtered version of an unknown i.i.d. signal is observed, temporal independence between samples can be used to retrieve the original signal, in the same way that spatial independence is used for source separation. In this paper we propose the use of a Genetic Algorithm (GA) to blindly invert linear channels. The use of a GA is justified when the number of samples is small, where gradient-like methods fail because of the poor estimation of statistics.
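A toy version of the idea in Python: a GA searches for FIR inverse-filter taps that maximize the non-Gaussianity of the output (absolute excess kurtosis is used here as a simple fitness proxy; the paper's cost function may differ):

    import numpy as np

    rng = np.random.default_rng(0)
    s = rng.laplace(size=4000)                   # unknown i.i.d. source
    x = np.convolve(s, [1.0, 0.5], mode="same")  # channel distortion

    def fitness(w):
        y = np.convolve(x, w, mode="same")
        y = (y - y.mean()) / y.std()
        return abs((y ** 4).mean() - 3.0)        # |excess kurtosis|

    pop = rng.normal(size=(40, 4))               # 40 candidate 4-tap filters
    for _ in range(60):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]  # keep the 10 fittest
        children = parents[rng.integers(10, size=30)] \
            + 0.1 * rng.normal(size=(30, 4))     # mutated offspring
        pop = np.vstack([parents, children])

    print(max(fitness(w) for w in pop))          # grows as the inverse improves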
Abstract:
Although sources in general nonlinear mixtures are not separable using only statistical independence, a special and realistic case of nonlinear mixtures, the post-nonlinear (PNL) mixture, is separable with a suitably chosen separating system. A natural approach is then based on the estimation of the separating system parameters by minimizing an independence criterion, such as the estimated source mutual information. This class of methods requires higher (than 2) order statistics and cannot separate Gaussian sources. However, the use of (weak) priors, like source temporal correlation or nonstationarity, leads to other source separation algorithms, which are able to separate Gaussian sources and can even, for a few of them, work with second-order statistics. Recently, modeling time-correlated sources by Markov models, we proposed very efficient algorithms based on the minimization of the conditional mutual information. Currently, using the prior of temporally correlated sources, we are investigating the feasibility of inverting PNL mixtures with non-bijective nonlinearities, like quadratic functions. In this paper, we review the main ICA and BSS results for nonlinear mixtures, present PNL models and algorithms, and finish with advanced results using temporally correlated sources.
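A minimal generative sketch of the PNL model reviewed above: linear mixing A followed by component-wise invertible nonlinearities, and a separating system mirroring that structure (here with the ideal inverses plugged in; in practice B and the inverse nonlinearity are adapted by minimizing the mutual information of y):

    import numpy as np

    rng = np.random.default_rng(1)
    s = rng.uniform(-1, 1, size=(2, 5000))     # independent sources
    A = np.array([[1.0, 0.6], [0.4, 1.0]])     # linear mixing
    x = np.tanh(A @ s)                         # PNL observation, f = tanh (bijective)

    g = np.arctanh(np.clip(x, -0.999, 0.999))  # inverse nonlinearity
    B = np.linalg.inv(A)                       # inverse of the mixing
    y = B @ g

    print(np.corrcoef(s[0], y[0])[0, 1], np.corrcoef(s[1], y[1])[0, 1])  # ~1.0 each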
Abstract:
In this paper we present a method for the blind deconvolution of linear channels based on source separation techniques, for real-world signals. Applied to blind deconvolution problems, this technique exploits not the spatial independence between signals but the temporal independence between samples of the signal. Our objective is to minimize the mutual information between samples of the output in order to retrieve the original signal. To make use of this idea, the input signal must be non-Gaussian and i.i.d. Because most real-world signals do not have this i.i.d. nature, we need to preprocess the original signal before transmission into the channel; likewise, we must ensure that the transmitted signal has non-Gaussian statistics so that the algorithm functions correctly. The strategy used for this preprocessing is presented in this paper. If the receiver has the inverse of the preprocessing, the original signal can be reconstructed without the convolutive distortion.
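The preprocessing itself is only described in the body of the paper; purely as one invertible illustration (an assumption, not necessarily the authors' strategy), a fixed pseudo-random permutation shared with the receiver removes temporal correlation and is exactly undone after deconvolution:

    import numpy as np

    rng = np.random.default_rng(2)
    sig = np.convolve(rng.laplace(size=4096), np.ones(8) / 8, mode="same")  # correlated signal
    perm = rng.permutation(sig.size)       # known to transmitter and receiver

    tx = sig[perm]                         # preprocessed, near-i.i.d. signal
    rx = np.empty_like(tx)
    rx[perm] = tx                          # receiver inverts the permutation

    lag1 = lambda v: np.corrcoef(v[:-1], v[1:])[0, 1]
    print(lag1(sig), lag1(tx), np.allclose(rx, sig))  # high, ~0, True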
Abstract:
Lutetium zoning in garnet within eclogites from the Zermatt-Saas Fee zone, Western Alps, reveals sharp, exponentially decreasing central peaks, which can be used to constrain the maximum rate of Lu volume diffusion in garnet. A prograde garnet growth temperature interval of 450-600 °C has been estimated based on pseudosection calculations and garnet-clinopyroxene thermometry. The maximum pre-exponential diffusion coefficient that fits the measured central peak is on the order of D0 = 5.7 × 10⁻⁶ m²/s, taking an estimated activation energy of 270 kJ/mol based on diffusion experiments for other rare earth elements in garnet. This corresponds to a maximum diffusion rate of D(600 °C) = 4.0 × 10⁻²² m²/s. The diffusion estimate for Lu can be used to estimate the minimum closure temperature, Tc, for Sm-Nd and Lu-Hf age data obtained in eclogites of the Western Alps, postulating, based on a literature review, that D(Hf) < D(Nd) < D(Sm) ≤ D(Lu). Tc calculations using the Dodson equation yielded minimum closure temperatures of about 630 °C, assuming rapid initial exhumation with a cooling rate of 50 °C/m.y. and an average garnet crystal size of r = 1 mm. This suggests that Sm/Nd and Lu/Hf isochron age differences in eclogites from the Western Alps, where peak temperatures rarely exceeded 600 °C, must be interpreted in terms of prograde metamorphism.
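The quoted minimum closure temperature can be reproduced with a fixed-point iteration of Dodson's equation, using the values given above and a spherical geometry factor A = 55 (an assumption standard for garnet):

    import math

    E, R, D0 = 270e3, 8.314, 5.7e-6           # J/mol, J/(mol K), m^2/s
    a, A = 1e-3, 55.0                         # grain radius [m], sphere factor
    dTdt = 50.0 / (1e6 * 365.25 * 24 * 3600)  # 50 °C/m.y. in K/s

    Tc = 900.0                                # initial guess [K]
    for _ in range(50):  # Dodson: Tc = (E/R) / ln(A R Tc^2 D0 / (a^2 E dT/dt))
        Tc = (E / R) / math.log(A * R * Tc ** 2 * D0 / (a ** 2 * E * dTdt))

    print(f"Tc = {Tc - 273.15:.0f} °C")       # ~626 °C, matching the ~630 °C above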
Abstract:
The aim of this study was to compare the diagnostic value of post-mortem computed tomography angiography (PMCTA) with conventional ante-mortem computed tomography (CT), CT angiography (CTA) and digital subtraction angiography (DSA) in the detection and localization of the source of bleeding in cases of acute hemorrhage with fatal outcomes. The medical records and imaging scans of nine individuals who underwent a conventional ante-mortem CT scan, CTA or DSA and later died in the hospital as a result of an acute hemorrhage were reviewed. Multi-phase PMCTA and medico-legal autopsies were then performed. The localization accuracy of the bleeding was assessed by comparing the diagnostic findings of the different techniques. The results revealed that data from the ante-mortem and post-mortem radiological examinations were similar, though PMCTA showed a higher sensitivity for detecting the hemorrhage source than the ante-mortem radiological investigations. Comparing the results of PMCTA and conventional autopsy, PMCTA showed much higher sensitivity in identifying the source of the bleeding: the vessels involved were identified in eight of nine cases using PMCTA but in only three cases through conventional autopsy. Our study showed that PMCTA, like clinical radiological investigations, can precisely identify lesions of arterial and/or venous vessels and thus determine the source of bleeding in cases of acute hemorrhage with fatal outcomes.
Abstract:
A clear and rigorous definition of muscle moment-arms in the context of musculoskeletal systems modelling is presented, using classical mechanics and screw theory. The definition provides an alternative to the tendon excursion method, which can lead to incorrect moment-arms if used inappropriately due to its dependency on the choice of joint coordinates. The definition of moment-arms, and the presented construction method, apply to musculoskeletal models in which the bones are modelled as rigid bodies, the joints are modelled as ideal mechanical joints and the muscles are modelled as massless, frictionless cables wrapping over the bony protrusions, approximated using geometric surfaces. In this context, the definition is independent of any coordinate choice. It is then used to solve a muscle-force estimation problem for a simple 2D conceptual model and compared with an incorrect application of the tendon excursion method. The relative errors between the two solutions vary between 0% and 100%.
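One coordinate-free construction consistent with the definition above treats the moment arm as the moment of a unit force along the muscle's line of action, projected onto the joint axis; a minimal sketch for a straight-line muscle, with invented geometry:

    import numpy as np

    axis_point = np.array([0.0, 0.0, 0.0])    # a point on the joint axis
    axis_dir = np.array([0.0, 0.0, 1.0])      # unit vector along the axis

    origin = np.array([0.05, 0.0, 0.0])       # muscle origin (illustrative)
    insertion = np.array([0.05, -0.3, 0.0])   # muscle insertion (illustrative)

    u = (insertion - origin) / np.linalg.norm(insertion - origin)  # line of action
    r = origin - axis_point                   # lever from the axis to the line
    moment_arm = np.dot(np.cross(r, u), axis_dir)  # signed moment arm [m]
    print(moment_arm)                         # -0.05: magnitude 5 cm, sign = rotation sense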
Abstract:
Although extensive research has been conducted on urban freeway capacity estimation methods, minimal research has been carried out for rural highway sections, especially sections within work zones. This study attempted to fill that void by estimating the capacity of rural highway work zones in Kansas. Six work zone locations were selected for data collection and analysis, with an average of six days of field data collected at each site from mid-October 2013 to late November 2013. Two capacity estimation methods were utilized: the Maximum Observed 15-minute Flow Rate Method and the Platooning Method divided into 15-minute intervals. The Maximum Observed 15-minute Flow Rate Method provided an average capacity of 1469 passenger cars per hour per lane (pcphpl) with a standard deviation of 141 pcphpl, while the Platooning Method provided a maximum average capacity of 1195 pcphpl with a standard deviation of 28 pcphpl. Based on the observed data and the analysis carried out in this study, a maximum capacity of 1500 pcphpl is suggested when designing work zones for rural highways in Kansas. This proposed standard value of rural highway work zone capacity could be utilized by engineers and planners to effectively mitigate congestion at or near work zones that would otherwise occur due to construction or maintenance.
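For concreteness, the Maximum Observed 15-minute Flow Rate method amounts to expanding each 15-minute count to an hourly rate and taking the maximum; the counts below are invented, not the Kansas field data, and the adjustment of heavy vehicles to passenger-car equivalents is omitted:

    counts_15min = [310, 342, 355, 328, 295]    # vehicles per 15 min, one open lane

    flow_rates = [4 * c for c in counts_15min]  # expand to hourly flow rates
    capacity_pcphpl = max(flow_rates)           # per-lane capacity estimate
    print(capacity_pcphpl, "pcphpl")            # -> 1420 pcphpl for these counts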