970 results for frequency analysis problem
Abstract:
We present a new global method for the identification of hotspots in conservation and ecology. The method identifies spatial structure properties through cumulative relative frequency distribution curves and is tested on two case studies: the identification of fish density hotspots and of terrestrial vertebrate species diversity hotspots. Results from the frequency distribution method are compared with those from standard local, partially local and global techniques. The main advantage of our approach is that it is independent of the choice of threshold, neighborhood, or other parameters that affect most currently available methods for hotspot analysis. The two case studies show how such elements of arbitrariness in the traditional methods influence both the size and the location of the identified hotspots, and how this new global method can be used for a more objective selection of hotspots.
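The abstract does not spell out the selection rule, but a threshold-free hotspot pick driven by the shape of a cumulative relative frequency curve can be sketched as follows. The "knee" criterion (maximum distance from the chord of the curve) and the sample densities are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def hotspots_from_crf(values):
    """Flag cells whose values lie beyond the point where the cumulative
    relative frequency (CRF) curve bends most sharply.
    Illustrative criterion: maximum vertical distance from the chord."""
    v = np.sort(np.asarray(values, dtype=float))
    crf = np.arange(1, v.size + 1) / v.size       # cumulative relative frequency
    x = (v - v[0]) / (v[-1] - v[0])               # rescale values to [0, 1]
    chord = crf[0] + x * (crf[-1] - crf[0])       # straight line first->last point
    knee = v[np.argmax(crf - chord)]              # value at the greatest bend
    return np.asarray(values) > knee              # hotspot mask, no user threshold

# hypothetical fish-density grid values: many low cells, a few extreme ones
dens = np.array([1, 2, 2, 3, 3, 3, 4, 50, 60, 80])
mask = hotspots_from_crf(dens)                    # flags the three extreme cells
```

The point of the sketch is that the cut-off emerges from the curve's own geometry rather than from a user-supplied percentile or neighborhood size.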
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than previously suspected, because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been the calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways.
Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
Abstract:
Left rostral dorsal premotor cortex (rPMd) and supramarginal gyrus (SMG) have been implicated in the dynamic control of actions. In 12 right-handed healthy individuals, we applied 30 min of low-frequency (1 Hz) repetitive transcranial magnetic stimulation (rTMS) over left rPMd to investigate the involvement of left rPMd and SMG in the rapid adjustment of actions guided by visuospatial cues. After rTMS, subjects underwent functional magnetic resonance imaging while making spatially congruent button presses with the right or left index finger in response to a left- or right-sided target. Subjects were asked to covertly prepare motor responses as indicated by a directional cue presented 1 s before the target. On 20% of trials, the cue was invalid, requiring subjects to readjust their motor plan according to the target location. Compared with sham rTMS, real rTMS increased the number of correct responses in invalidly cued trials. After real rTMS, the stimulated left rPMd showed increased task-related coupling with activity in the ipsilateral SMG and the adjacent anterior intraparietal area (AIP). Individuals who showed a stronger increase in left-hemispheric premotor-parietal connectivity also made fewer errors on invalidly cued trials after rTMS. The results suggest that rTMS over left rPMd improved the ability to dynamically adjust visuospatial response mapping by strengthening left-hemispheric connectivity between rPMd and the SMG-AIP region. These results support the notion that left rPMd and SMG-AIP contribute toward the dynamic control of actions and demonstrate that low-frequency rTMS can enhance functional coupling between task-relevant brain regions and improve some aspects of motor performance.
Abstract:
Background: The 22q11.2 deletion syndrome is the most frequent genomic disorder, with an estimated frequency of 1/4000 live births. The majority of patients (90%) have the same deletion of 3 Mb (Typically Deleted Region, TDR) that results from aberrant recombination at meiosis between region-specific low-copy repeats (LCRs). Methods: As a first step towards the characterization of recombination rates and breakpoints within the 22q11.2 region, we have constructed a high-resolution recombination breakpoint map based on pedigree analysis and a population-based historical recombination map based on LD analysis. Results: Our pedigree map allows the location of recombination breakpoints with high resolution (potential recombination hotspots), and this approach has led to the identification of 5 breakpoint segments of 50 kb or less (8.6 kb the smallest) that coincide with historical hotspots. It has been suggested that aberrant recombination leading to deletion (and duplication) is caused by low rates of Allelic Homologous Recombination (AHR) within the affected region. However, recombination rate estimates for the 22q11.2 region show that neither the average recombination rate in the 22q11.2 region nor that within LCR22-2 (the LCR implicated in most deletions and duplications) is significantly below the chromosome 22 average. Furthermore, LCR22-2, the repeat most frequently implicated in rearrangements, is also the LCR22 with the highest levels of AHR. In addition, we find recombination events in the 22q11.2 region to cluster within families. Within this context, the same chromosome recombines twice in one family; first by AHR and, in the next generation, by non-allelic homologous recombination (NAHR), resulting in an individual affected by the del22q11.2 syndrome. Conclusion: We show, in the context of a first high-resolution pedigree map of the 22q11.2 region, that NAHR within LCR22 leading to duplications and deletions cannot be explained exclusively under a hypothesis of low AHR rates.
In addition, we find that AHR recombination events cluster within families. If normal and aberrant recombination are mechanistically related, the fact that LCR22s undergo frequent AHR and that we find familial differences in recombination rates within the 22q11.2 region would have obvious health-related implications.
Abstract:
Analyzing the type and frequency of patient-specific mutations that give rise to Duchenne muscular dystrophy (DMD) is an invaluable tool for diagnostics, basic scientific research, trial planning, and improved clinical care. Locus-specific databases allow for the collection, organization, storage, and analysis of genetic variants of disease. Here, we describe the development and analysis of the TREAT-NMD DMD Global database (http://umd.be/TREAT_DMD/). We analyzed genetic data for 7,149 DMD mutations held within the database. A total of 5,682 large mutations were observed (80% of total mutations), of which 4,894 (86%) were deletions (1 exon or larger) and 784 (14%) were duplications (1 exon or larger). There were 1,445 small mutations (smaller than 1 exon, 20% of all mutations), of which 358 (25%) were small deletions, 132 (9%) were small insertions, and 199 (14%) affected the splice sites. Point mutations totalled 756 (52% of small mutations), comprising 726 (50%) nonsense mutations and 30 (2%) missense mutations. Finally, 22 (0.3%) mid-intronic mutations were observed. In addition, mutations were identified within the database that would potentially benefit from novel genetic therapies for DMD, including stop codon read-through therapies (10% of total mutations) and exon skipping therapy (80% of deletions and 55% of total mutations).
Abstract:
Background: Studies conducted internationally confirm that child sexual abuse is a much more widespread problem than previously thought, with even the lowest prevalence rates implying a large number of victims who need to be taken into account. Objective: To carry out a meta-analysis of the prevalence of child sexual abuse in order to establish an overall international figure. Methods: Studies were retrieved from various electronic databases. The measure of interest was the prevalence of abuse reported in each article, these values being combined via a random effects model. A detailed analysis was conducted of the effects of various moderator variables. Results: Sixty-five articles covering 22 countries were included. The analysis showed that 7.9% of men (7.4% without outliers) and 19.7% of women (19.2% without outliers) had suffered some form of sexual abuse prior to the age of eighteen. Conclusions: The results of the present meta-analysis indicate that child sexual abuse is a serious problem in the countries analysed.
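The random-effects pooling step described above can be sketched with a DerSimonian-Laird estimator. The choice of estimator and all the numbers below are assumptions for illustration; the paper does not state which estimator it used:

```python
import numpy as np

def pool_random_effects(p, n):
    """DerSimonian-Laird random-effects pooling of proportions.
    p: observed prevalences per study, n: sample sizes.
    Returns the pooled prevalence."""
    p, n = np.asarray(p, float), np.asarray(n, float)
    var = p * (1 - p) / n                      # within-study variance
    w = 1.0 / var                              # fixed-effect weights
    p_fe = np.sum(w * p) / np.sum(w)           # fixed-effect pooled estimate
    q = np.sum(w * (p - p_fe) ** 2)            # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)    # between-study variance estimate
    w_re = 1.0 / (var + tau2)                  # random-effects weights
    return np.sum(w_re * p) / np.sum(w_re)

# three hypothetical studies: prevalence and sample size
pooled = pool_random_effects([0.10, 0.20, 0.25], [500, 300, 200])
```

Compared with fixed-effect pooling, the extra between-study variance `tau2` pulls the weights of small and large studies closer together, which is why a random-effects model is the usual choice when study populations differ as much as they do here.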
Abstract:
OBJECTIVE: In contrast to conventional (CONV) neuromuscular electrical stimulation (NMES), the use of "wide-pulse, high-frequency" (WPHF) stimulation can generate higher forces than expected from the direct activation of motor axons alone. We aimed to investigate the occurrence, magnitude, variability and underlying neuromuscular mechanisms of these "Extra Forces" (EF). METHODS: Electrically evoked isometric plantar flexion force was recorded in 42 healthy subjects. Additionally, twitch potentiation, H-reflex and M-wave responses were assessed in 13 participants. CONV (25 Hz, 0.05 ms) and WPHF (100 Hz, 1 ms) NMES consisted of five stimulation trains (20 s on, 90 s off). RESULTS: K-means clustering analysis disclosed a responder rate of almost 60%. Within this group of responders, force significantly increased from 4% to 16% of the maximal voluntary contraction force, and H-reflexes were depressed after WPHF NMES. In contrast, non-responders showed neither EF nor H-reflex depression. Twitch potentiation and resting EMG data were similar between groups. Interestingly, a large inter- and intrasubject variability of EF was observed. CONCLUSION: The responder percentage was overestimated in previous studies. SIGNIFICANCE: This study proposes a novel methodological framework for unraveling the neurophysiological mechanisms involved in EF and provides further evidence for a central contribution to EF in responders.
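A responder/non-responder split via k-means clustering, as used above, can be sketched like this. The feature (per-subject force gain in % of maximal voluntary contraction) and all numbers are synthetic assumptions, not the study's data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# synthetic WPHF "extra force" gains (% MVC): a responder subgroup around
# 10% and a non-responder subgroup near zero (illustrative numbers only)
gains = np.concatenate([rng.normal(10.0, 3.0, 24),    # responders
                        rng.normal(0.5, 0.5, 18)])    # non-responders
gains = gains.reshape(-1, 1)                           # 1 feature per subject

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(gains)
responder_cluster = int(np.argmax(km.cluster_centers_))  # higher-gain cluster
responder_rate = float(np.mean(km.labels_ == responder_cluster))
```

With one feature, k-means simply finds the natural gap between the two subgroups, which avoids having to pick an arbitrary "extra force" cut-off by hand.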
Abstract:
The objective of this thesis is to study wavelets and their role in turbulence applications. Under scrutiny is the intermittency in turbulence models; wavelets are used as a mathematical tool to study the intermittent activity that turbulence models produce. The first section introduces wavelets and wavelet transforms as a mathematical tool. The basic properties of turbulence are then discussed, and classical methods for modeling turbulent flows are explained. Wavelets are used both to model turbulence and to analyze turbulent signals. The model studied here is the GOY (Gledzer 1973, Ohkitani & Yamada 1989) shell model of turbulence, a popular model for explaining intermittency based on the cascade of kinetic energy. The goal is to introduce a better quantification method for the intermittency obtained in a shell model. Because wavelets are localized in both space (time) and scale, they are suitable candidates for studying the singular bursts that interrupt the calm periods of energy flow through the various scales. The study addresses two questions: the frequency of occurrence and the intensity of the singular bursts at various Reynolds numbers. The results indicate that singularities become more local as the Reynolds number increases, and also when the shell number is increased at a given Reynolds number. The singular bursts are more frequent at Re ~ 10^7 than in the cases with lower Re. The intermittency of bursts was similar for Re ~ 10^6 and Re ~ 10^5, but at Re ~ 10^4 bursts occurred after long waiting times in a different fashion, so that the behaviour could not be scaled to the higher-Re cases.
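How a wavelet transform localizes a singular burst in an otherwise calm signal can be illustrated with a naive continuous wavelet transform. This is a hand-rolled complex Morlet correlation on a toy signal, not the thesis's implementation:

```python
import numpy as np

def morlet_cwt(sig, scales, w0=6.0):
    """Naive continuous wavelet transform with a complex Morlet wavelet,
    computed as a direct correlation at each scale."""
    n = len(sig)
    t = np.arange(n) - n // 2                    # wavelet support centred at 0
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        x = t / s
        psi = np.exp(1j * w0 * x - x**2 / 2) / np.sqrt(s)  # scaled Morlet
        # correlate the signal with the wavelet (convolve with its reverse)
        out[i] = np.convolve(sig, np.conj(psi[::-1]), mode="same")
    return out

# a calm signal interrupted by one localized oscillatory burst
n = 512
sig = np.zeros(n)
sig[240:272] = np.sin(2 * np.pi * 0.2 * np.arange(32))
coefs = morlet_cwt(sig, scales=[4, 8, 16])
burst_loc = int(np.argmax(np.abs(coefs).max(axis=0)))  # where CWT energy peaks
```

Because the Morlet wavelet is localized in both time and scale, the coefficient magnitude peaks at the burst location, which is exactly the property the thesis exploits to count and grade singular bursts.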
Abstract:
Flood simulation studies use spatial-temporal rainfall data as input to distributed hydrological models. A correct description of rainfall in space and in time contributes to improvements in hydrological modelling and design. This work focuses on the analysis of 2-D convective structures (rain cells), whose contribution is especially significant in most flood events. The objective of this paper is to provide statistical descriptors and distribution functions for the characteristics of convective structures in precipitation systems producing floods in Catalonia (NE Spain). To this end, heavy rainfall events recorded between 1996 and 2000 have been analysed. By means of weather radar and 2-D radar algorithms, a distinction between convective and stratiform precipitation is made. These data are then introduced into and analysed with a GIS. In a first step, different groups of connected pixels with convective precipitation are identified, and only convective structures with an area greater than 32 km² are selected. Then, geometric characteristics (area, perimeter, orientation and dimensions of the ellipse) and rainfall statistics (maximum, mean, minimum, range, standard deviation, and sum) of these structures are obtained and stored in a database. Finally, descriptive statistics for selected characteristics are calculated and statistical distributions are fitted to the observed frequency distributions. The statistical analyses reveal that the Generalized Pareto distribution for the area, and the Generalized Extreme Value distribution for the perimeter, dimensions, orientation and mean areal precipitation, are the distributions that best fit the observations. The statistical descriptors and probability distribution functions obtained are of direct use as input to spatial rainfall generators.
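The distribution-fitting step can be sketched with scipy, whose `genpareto` and `genextreme` families correspond to the Generalized Pareto and Generalized Extreme Value distributions named above. The data are synthetic stand-ins and the parameters are assumptions, not the paper's estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic stand-ins for convective-cell areas (km^2, above the 32 km^2
# selection threshold) and perimeters (km); parameters are illustrative
areas = stats.genpareto.rvs(c=0.3, loc=32, scale=40, size=500, random_state=rng)
perims = stats.genextreme.rvs(c=-0.1, loc=30, scale=8, size=500, random_state=rng)

# fit each family by maximum likelihood, then check the fit with a KS test;
# the area fit fixes the location at the 32 km^2 threshold
gp_params = stats.genpareto.fit(areas, floc=32)
ks_area = stats.kstest(areas, "genpareto", args=gp_params)

gev_params = stats.genextreme.fit(perims)
ks_perim = stats.kstest(perims, "genextreme", args=gev_params)
```

Comparing KS statistics (or p-values) across several fitted candidate families is one standard way to decide which distribution "best fits the observations", as the paper does for each cell characteristic.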
Abstract:
This study examines the transport and logistics cluster of Northwest Russia. The aim is to determine the cluster's current structure and competitiveness, as well as the business opportunities it offers to Finnish logistics companies. Four basic transport modes are covered: rail, road, sea and inland waterway, and air transport. The material was collected through surveys and interviews designed for this study, together with previously published sources. Russia has planned to develop its transport infrastructure vigorously, for instance by publishing a protectionist transport strategy plan. The problem has been implementation, which has generally remained incomplete. At present only rail transport is genuinely competitive; the other three modes have potential competitiveness. Russia has the opportunity to benefit from its vast land area as a link between Asian and European traffic. One of the most concrete examples is the Trans-Siberian Railway, which would still need further development. Finland has served as a transit country for high-value goods in Russian trade; in 2003 roughly 30–40% of the value of Russian imports passed through Finland. High-value goods will continue to be imported into Russia for years to come, but competition between routes has tightened. Two models are proposed for the business opportunities of Finnish companies: value-added logistics (VAL) operations for transit traffic in Finland, or establishment within Russian logistics chains. Finnish companies should improve cooperation between firms, universities and other educational institutions. Seeking partners, for example from Sweden, could also bring significant benefits. Finnish expertise could best be exploited by entering the Russian market, for example by concentrating on the management of Russian logistics chains. Managing VAL services in Russia would also present a very good opportunity, since Russia's own logistics know-how has not yet reached the international level while its cost level is lower than Finland's.
Abstract:
The study analyses the evolution of greenhouse gas (GHG) and acidifying emissions for Italy over the period 1995-2005. The data show that while the emissions contributing to acidification have decreased steadily, GHG emissions have increased, driven by rising carbon dioxide. The aim of this study is to highlight how different economic factors, in particular economic growth, the development of cleaner technology and the structure of consumption, have driven the evolution of emissions. The proposed methodology is structural decomposition analysis (SDA), a method that decomposes changes in the variable of interest among the different driving forces and reveals the importance of each factor. The study also considers the importance of international trade and attempts to address the "responsibility problem": through international trade relations, a country may be exporting polluting production processes without any real reduction in the pollution embodied in its consumption pattern. To this end, following first a "producer responsibility" approach, the SDA is applied to the emissions caused by domestic production. The analysis then moves to a "consumer responsibility" approach, and the decomposition is applied to the emissions associated with the domestic or foreign production that satisfies domestic demand. In this way, the exercise provides a first check of the importance of international trade and highlights a number of results at both the global and the sectoral level.
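A minimal numeric sketch shows the core idea of structural decomposition analysis: an emission change splits exactly into a technology effect (intensity change) and a scale effect (output change) under the two-pole ("polar") average. The two-sector data are hypothetical, and real SDA decomposes into more factors (including consumption structure) than this two-factor toy:

```python
import numpy as np

# Two-sector, two-period sketch: emissions E = sum over sectors of
# emission intensity e (tonnes per unit output) times output y.
e0, y0 = np.array([2.0, 1.0]), np.array([10.0, 5.0])   # base year (hypothetical)
e1, y1 = np.array([1.5, 0.9]), np.array([14.0, 6.0])   # final year (hypothetical)

delta_E = e1 @ y1 - e0 @ y0   # total change in emissions

# polar-average SDA: weight each factor's change by the other factor's
# base-year and final-year values and average the two forms; this makes
# the decomposition exact (no residual)
tech_effect = 0.5 * ((e1 - e0) @ y0 + (e1 - e0) @ y1)   # cleaner technology
scale_effect = 0.5 * (e0 @ (y1 - y0) + e1 @ (y1 - y0))  # economic growth
```

In this toy, emissions rise overall even though intensity falls in every sector, because the scale effect outweighs the technology effect; that is precisely the pattern the study reports for Italian GHG emissions.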
Abstract:
The purpose of this work is to compile the measurement problems of the pulping process together with the measurement techniques that could solve them. The emphasis is on online measurement techniques. The work consists of three parts. The first part is a literature review that presents the basic measurements and control needs of a modern pulping process. It covers the entire fibre line from wood handling to bleaching, as well as the chemical recovery cycle: the evaporation plant, recovery boiler, causticizing plant and lime kiln. In the second part, the measurement problems and candidate measurement techniques are compiled into a "road map". The information was gathered by visiting three Finnish pulp mills and by interviewing experts in equipment and measurement technology. The interviews indicate a need for a better understanding of process chemistry, and concentration measurements were therefore chosen for further study. The final part presents measurement techniques that could solve the concentration measurement problems. The selected techniques are near-infrared spectroscopy (NIR), Fourier transform infrared spectroscopy (FTIR), online capillary electrophoresis (CE) and laser-induced plasma emission spectroscopy (LIPS). All of the techniques can be used online as process development tools. Development costs were estimated for an online device connected to process control; they range from zero person-years for the FTIR technique to five person-years for the CE device, depending on the maturity of the technique and its readiness for solving the particular problem. The final part also assesses the techno-economic feasibility of solving one measurement problem: washing loss measurement. Lignin content would describe the true washing loss better than the current measurements, which record either sodium or COD washing loss. Lignin content can be measured with UV absorption, and the CE device could also be used for washing loss measurement, at least in the process development phase. The economic analysis rests on many simplifications and is not directly suitable for supporting investment decisions. A better measurement and control system could stabilize the operation of the washing plant. Investing in a stabilizing system pays off if the actual operating point is far enough from the cost minimum or if washer operation fluctuates, i.e. the standard deviation of the washing loss is large. A measurement and control system costing €50,000 has a payback time of under 0.5 years in unstable operation if the COD washing loss varies between 5.2 and 11.6 kg/odt around a setpoint of 8.4 kg/odt; the dilution factor then varies between 1.7 and 3.6 m³/odt around a setpoint of 2.5 m³/odt.
Abstract:
BACKGROUND AND PURPOSE: Transgenic mice overexpressing Notch2 in the uvea exhibit a hyperplastic ciliary body leading to increased IOP and glaucoma. The aim of this study was to investigate the possible presence of NOTCH2 variants in patients with primary open-angle glaucoma (POAG). METHODS: We screened DNA samples from 130 patients with POAG for NOTCH2 variants by denaturing high-performance liquid chromatography after PCR amplification and validated our data by direct Sanger sequencing. RESULTS: No mutations were observed in the coding regions of NOTCH2 or in the splice sites. Nineteen known single nucleotide polymorphisms (SNPs) were detected. An SNP located in intron 24, c.[4005+45A>G], was seen in 28.5% of the patients (37/130 patients). As this SNP is reported to have a minor allele frequency of 7% in the 1000 Genomes database, it could have been associated with POAG. However, we evaluated its frequency in an ethnically matched control group of 96 subjects unaffected by POAG and observed a frequency of 29%, indicating that it is not related to POAG. CONCLUSION: NOTCH2 seemed to be a good candidate for POAG, as it is expressed in the anterior segment of the human eye. However, mutational analysis did not reveal any causative mutation. This study also shows that properly ethnically matched control groups are essential in association studies and that values given in databases can be misleading.
Abstract:
Several clinical studies have reported that EEG synchrony is affected by Alzheimer's disease (AD). In this paper, a frequency band analysis of AD EEG signals is presented, with the aim of improving the diagnosis of AD from EEG signals. Multiple synchrony measures, including correlation, phase synchrony and Granger causality measures, are assessed through statistical tests (Mann-Whitney U test), and linear discriminant analysis (LDA) is conducted with those synchrony measures as features. For the data set at hand, the frequency range 5–6 Hz, which lies within the classical theta band (4–8 Hz), yields the best accuracy for diagnosing AD. The corresponding classification error is 4.88% for the directed transfer function (DTF) Granger causality measure. Interestingly, the results show that the EEG of AD patients is more synchronous than that of healthy subjects within the optimized 5–6 Hz range, in sharp contrast with the loss of synchrony in AD EEG reported in many earlier studies. This new finding may provide new insights into the neurophysiology of AD. Additional testing on larger AD datasets is required to verify the effectiveness of the proposed approach.
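The screening-plus-classification pipeline (Mann-Whitney U test on a synchrony feature, then a linear discriminant) can be sketched on synthetic data. The feature values, group sizes, and the one-dimensional simplification are all assumptions; the real study used EEG-derived measures such as DTF across many channel pairs:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
# synthetic theta-band (5-6 Hz) synchrony feature: AD group more synchronous,
# mimicking the direction of the effect reported above (values are made up)
ad = rng.normal(0.60, 0.10, 25)
ctrl = rng.normal(0.45, 0.10, 25)

# step 1: screen the feature for a group difference
u_stat, p_val = mannwhitneyu(ad, ctrl, alternative="two-sided")

# step 2: classify; with a single feature and equal class variances, LDA
# reduces to a threshold midway between the class means
thr = (ad.mean() + ctrl.mean()) / 2
scores = np.concatenate([ad, ctrl])
truth = np.concatenate([np.ones(25, bool), np.zeros(25, bool)])
error_rate = float(np.mean((scores > thr) != truth))
```

The Mann-Whitney U test is a sensible screen here because synchrony measures are bounded and need not be normally distributed; the LDA step then turns the screened feature into an actual diagnostic error rate, as the paper reports.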