832 results for Recursive Filtering
Abstract:
The vulnerability to pollution and the hydrochemical variation of groundwater in the mid-west karstic lowlands of Ireland were investigated from October 1992 to September 1993, as part of an EU STRIDE project at Sligo Regional Technical College. Eleven springs were studied in the three local authority areas of Co. Galway, Co. Mayo, and Co. Roscommon. Nine of the springs drain locally or regionally important karstic aquifers and two drain locally important sand and gravel aquifers. The maximum average daily discharge of any of the springs was 16,000 m³/day. Determination of the vulnerability of groundwater to pollution relies heavily on an examination of subsoil deposits in an area, since they can act as a protecting or filtering layer over groundwater. Within aquifers/spring catchments, chemical reactions such as adsorption, solution-precipitation or acid-base reactions occur and modify the hydrochemistry of groundwater (Lloyd and Heathcote, 1985). The hydrochemical processes that predominate depend on the mineralogy of the aquifer, the hydrogeological environment, the overlying subsoils, and the history of groundwater movement. The aim of this MSc research thesis was to investigate the hydrochemical variation of spring outflow and to assess the relationship between these variations and the intrinsic vulnerability of the springs and their catchments. If such a relationship can be quantified, it is hoped that the hydrochemical variation of a spring may indicate the vulnerability of a spring catchment without the need to determine it by field mapping. Such a method would be invaluable to any of the three local authorities, since they would be able to prioritise the sources most at risk from pollution using simple techniques of chemical sampling and statistical analysis. For each spring, a detailed geological, hydrogeological and hydrochemical study was carried out. Individual catchment areas were determined with a water balance/budget and groundwater tracing. The subsoil geology of each spring catchment was mapped at the 1:10,560 scale and digitised to the 1:25,000 scale with AutoCAD™ and ArcInfo™. The vulnerability of each spring was determined using the Geological Survey's vulnerability guidelines. Field measurements and laboratory-based chemical analyses of the springs were undertaken by personnel from both the EPA Regional Laboratory in Castlebar, Co. Mayo, and the Environment Section of Roscommon Co. Council. Electrical conductivity and temperature (°C) were measured fortnightly in the field using a WTW microprocessor conductivity meter. A percentage (%) vulnerability was applied to each spring in order to indicate the areal extent of the four main classes of vulnerability (Extreme, High, Moderate, and Low) occurring within the confines of each spring catchment. Hydrochemical variation for the springs was presented as the coefficient of variation of electrical conductivity (EC). The results of this study show that a clear relationship exists between the degree of vulnerability of each catchment area, as defined by the subsoil cover, and the coefficient of variation of EC, with the coefficient of variation increasing as the vulnerability increases. The coefficient of variation of electrical conductivity is considered to be a parameter that gives a good general reflection of the degree of vulnerability occurring in a spring catchment in Ireland's karstic lowlands.
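To make the statistic concrete, a minimal Python sketch of the coefficient of variation of electrical conductivity; the fortnightly EC readings below are hypothetical illustrations, not data from the study:

    import statistics

    def coefficient_of_variation(values):
        # Coefficient of variation: sample standard deviation divided by the mean.
        return statistics.stdev(values) / statistics.mean(values)

    # Hypothetical fortnightly EC readings (uS/cm) for one spring over a year.
    ec_readings = [610, 595, 640, 720, 680, 565, 550, 600, 630, 710, 690, 620]

    print("CV of EC: %.3f" % coefficient_of_variation(ec_readings))

Under the study's finding, a higher coefficient of variation would point to a more vulnerable catchment.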
Abstract:
This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway-Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. The project aimed to develop a semi-automatic, self-learning pattern recognition system capable of detecting defects on printed circuit boards, such as component vacancy, component misalignment, incorrect component orientation, component errors, and weld defects. The research was conducted in three directions: image acquisition, image filtering/recognition, and software development. The image acquisition work studied the process of forming and digitizing images and some fundamental aspects of human visual perception. The importance of choosing the right camera and illumination system for a given type of problem is highlighted. Probably the most important step towards image recognition is image filtering. Filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalisation, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters, and image compression are some examples of the filters that have been studied and successfully implemented in the software application. The software application developed during the research is customized to meet the requirements of the industrial partner. The application is able to analyze pictures, perform filtering, build libraries, process images, and generate log files. It incorporates most of the filters studied and, together with the illumination system and the camera, provides a fully integrated framework able to analyze defects on printed circuit boards.
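As a hedged illustration of one filter family mentioned above (convolution-based edge detection), a minimal Python/NumPy sketch; the Sobel kernel and the random placeholder image are illustrative choices, not the industrial application's actual filters:

    import numpy as np
    from scipy.signal import convolve2d

    # Illustrative 3x3 Sobel kernel that responds to vertical edges.
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    # Placeholder 8-bit grayscale image of a board region.
    image = np.random.randint(0, 256, size=(64, 64)).astype(float)

    # Convolution highlights intensity transitions such as component outlines or pads.
    edges = convolve2d(image, sobel_x, mode="same", boundary="symm")
    print(edges.shape, edges.min(), edges.max())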
Abstract:
We analyzed the effects of environmental factors on the abundance, species richness, and functional-group richness of Leptophlebiidae at 16 sampling points along four Cerrado streams. Across three sampling periods in 2005, we collected 5,492 larvae of 14 species from streambed substrate. These species belong to three functional feeding groups: scrapers, filtering collectors, and shredders. Abundance and species richness were not affected by water quality, but habitat quality related to the presence of riparian vegetation had positive effects on the abundance of shredders. Our results add important information on the natural history of the species and functional groups of aquatic insects and also provide relevant data for the monitoring and conservation of streams in the Brazilian Cerrado.
Abstract:
We propose a new solution concept to address the problem of sharing a surplus among the agents generating it. The sharing problem is formulated in the preferences-endowments space. The solution is defined in a recursive manner incorporating notions of consistency and fairness and relying on properties satisfied by the Shapley value for Transferable Utility (TU) games. We show a solution exists, and refer to it as an Ordinal Shapley value (OSV). The OSV associates with each problem an allocation as well as a matrix of concessions "measuring" the gains each agent foregoes in favor of the other agents. We analyze the structure of the concessions, and show they are unique and symmetric. Next we characterize the OSV using the notion of coalitional dividends, and furthermore show it is monotone in an agent's initial endowments and satisfies anonymity. Finally, similarly to the weighted Shapley value for TU games, we construct a weighted OSV as well.
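For background, the Shapley value for TU games whose properties the OSV construction relies on is standardly written as (this is textbook material, not a formula taken from the abstract):

    \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr), \qquad i \in N,

where N is the player set of size n and v the characteristic function.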
Abstract:
BACKGROUND: Cone-beam computed tomography (CBCT) image-guided radiotherapy (IGRT) systems are widely used tools to verify and correct the target position before each fraction, allowing treatment accuracy and precision to be maximized. In this study, we evaluate automatic three-dimensional intensity-based rigid registration (RR) methods for prostate setup correction using CBCT scans and study the impact of rectal distension on registration quality. METHODS: We retrospectively analyzed 115 CBCT scans of 10 prostate patients. CT-to-CBCT registration was performed using (a) global RR, (b) bony RR, or (c) bony RR refined by a local prostate RR using the CT clinical target volume (CTV) expanded with margins varying from 1 to 20 mm. After propagation of the manual CT contours, automatic CBCT contours were generated. For evaluation, a radiation oncologist manually delineated the CTV on the CBCT scans. The propagated and manual CBCT contours were compared using the Dice similarity coefficient and a measure based on the bidirectional local distance (BLD). We also conducted a blind visual assessment of the quality of the propagated segmentations. Moreover, we automatically quantified rectal distension between the CT and CBCT scans without using the manual CBCT contours, and we investigated its correlation with the registration failures. To improve the registration quality, the air in the rectum was replaced with soft tissue using a filter. The results with and without filtering were compared. RESULTS: The statistical analysis of the Dice coefficients and the BLD values showed highly significant differences (p < 10⁻⁶) for the 5-mm and 8-mm local RRs versus the global, bony, and 1-mm local RRs. The 8-mm local RR provided the best compromise between accuracy and robustness (median Dice of 0.814 and a 97% success rate with filtering of the air in the rectum). We observed that all failures were due to high rectal distension. Moreover, the visual assessment confirmed the superiority of the 8-mm local RR over the bony RR. CONCLUSION: The most successful CT-to-CBCT RR method proved to be the 8-mm local RR. We have shown the correlation between its registration failures and rectal distension. Furthermore, we have provided a simple (easily applicable in routine) and automatic method to quantify rectal distension and to predict registration failure using only the manual CT contours.
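As a sketch of the overlap measure used for evaluation, the Dice similarity coefficient between a propagated and a manual contour can be computed from binary masks as below; the masks are placeholders, not the study's data:

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        # Dice similarity coefficient: 2|A n B| / (|A| + |B|) for binary masks.
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        total = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0

    # Placeholder 3D masks standing in for propagated and manual CTV contours.
    propagated = np.zeros((32, 32, 16), dtype=bool); propagated[8:20, 8:20, 4:12] = True
    manual = np.zeros((32, 32, 16), dtype=bool); manual[10:22, 9:21, 5:13] = True
    print("Dice: %.3f" % dice_coefficient(propagated, manual))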
Abstract:
Severini and Mansour introduced in [4] square polygons as graphical representations of square permutations, that is, permutations such that all entries are records (left or right, minimum or maximum), and they obtained a nice formula for their number. In this paper we give a recursive construction for this class of permutations, which allows us to simplify the derivation of their formula and to enumerate the subclass of square permutations with a simple record polygon. We also show that the generating function of these permutations with respect to the number of records of each type is algebraic, answering a question of Wilf in a particular case.
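A minimal Python sketch (an assumed helper, not code from the paper) that tests whether a permutation is square in the sense above, i.e. every entry is a left-to-right or right-to-left minimum or maximum:

    def is_square_permutation(perm):
        # Every entry must be a record: a left or right minimum or maximum.
        for i, v in enumerate(perm):
            left, right = perm[:i], perm[i + 1:]
            is_record = (all(v > x for x in left) or all(v < x for x in left) or
                         all(v > x for x in right) or all(v < x for x in right))
            if not is_record:
                return False
        return True

    print(is_square_permutation([3, 1, 2, 5, 4]))   # True: every entry is a record
    print(is_square_permutation([2, 5, 3, 1, 4]))   # False: the entry 3 is not a record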
Abstract:
We consider cooperative environments with externalities (games in partition function form) and provide a recursive definition of dividends for each coalition and any partition of the players it belongs to. We show that with this definition and equal sharing of these dividends the averaged sum of dividends for each player, over all the coalitions that contain the player, coincides with the corresponding average value of the player. We then construct weighted Shapley values by departing from equal division of dividends and finally, for each such value, provide a bidding mechanism implementing it.
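For context, in TU games without externalities the recursive dividend definition and the equal-sharing property that the paper generalizes are the Harsanyi dividends and the Shapley value; a standard statement (background, not the paper's partition-function definition) is:

    \Delta_v(S) \;=\; v(S) \;-\; \sum_{\emptyset \neq T \subsetneq S} \Delta_v(T),
    \qquad
    \phi_i(v) \;=\; \sum_{S \ni i} \frac{\Delta_v(S)}{|S|},

so equal division of each coalition's dividend among its members recovers the Shapley value.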
Abstract:
Brain metastases (BM) occur in 20-50% of NSCLC and 50-80% of SCLC. In this review, we look at evidence-based medicine data and give some perspectives on the management of BM. We address the problems of multiple BM, single BM, and prophylactic cranial irradiation. Recursive Partitioning Analysis (RPA) is a powerful prognostic tool to facilitate treatment decisions. For multiple BM, the use of corticosteroids was established more than 40 years ago by a single randomized controlled trial (RCT). The palliative effect is high (around 80%), as is the rate of side-effects. Whole brain radiotherapy (WBRT) was evaluated in many RCTs with a high (60-90%) response rate; several RT regimens are equivalent, but a very high dose per fraction should be avoided. In multiple BM from SCLC, the effect of WBRT is comparable to that in NSCLC, but chemotherapy (CXT), although advocated, is probably less effective than RT. Single BM from NSCLC occurs in 30% of all BM cases; several prognostic classifications, including RPA, are very useful. Several options are available in single BM: WBRT, surgery (SX), radiosurgery (RS), or any combination of these. All were studied in RCTs and are reviewed: the addition of WBRT to SX or RS gives better neurological tumour control, has little or no impact on survival, and may be more toxic; however, omitting WBRT after SX alone gives a higher risk of cerebro-spinal fluid dissemination. Prophylactic cranial irradiation (PCI) has a major role in SCLC. In limited disease, meta-analyses have shown a positive impact of PCI on the decrease of brain relapse and on survival improvement, especially for patients in complete remission. Surprisingly, this has recently been confirmed also in extensive disease. Experience with PCI for NSCLC is still limited, but RCTs suggest a reduction of BM with no impact on survival. The toxicity of PCI is a matter of debate, as neurological or neurocognitive impairment is already present prior to PCI in almost half of patients; however, RT toxicity is probably related to total dose and dose per fraction. Perspectives: future research should concentrate on (1) combined modalities in multiple BM; (2) exploration of treatments in oligo-metastases; (3) further exploration of PCI in NSCLC; and (4) exploration of new, toxicity-sparing radiotherapy techniques (IMRT, tomotherapy, etc.).
Abstract:
We study how the use of judgement or “add-factors” in forecasting may disturb the set of equilibrium outcomes when agents learn using recursive methods. We isolate conditions under which new phenomena, which we call exuberance equilibria, can exist in a standard self-referential environment. Local indeterminacy is not a requirement for existence. We construct a simple asset pricing example and find that exuberance equilibria, when they exist, can be extremely volatile relative to fundamental equilibria.
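For concreteness, the recursive method typically assumed in this learning literature is least-squares updating of the agents' forecast parameters; a standard recursion (background from the adaptive-learning literature, not a formula stated in the abstract) is:

    \hat{\phi}_t \;=\; \hat{\phi}_{t-1} + \frac{1}{t}\, R_t^{-1} x_{t-1}\bigl(y_t - \hat{\phi}_{t-1}' x_{t-1}\bigr),
    \qquad
    R_t \;=\; R_{t-1} + \frac{1}{t}\bigl(x_{t-1} x_{t-1}' - R_{t-1}\bigr),

where \hat{\phi}_t are the estimated forecast coefficients, x_{t-1} the regressors, and y_t the outcome being forecast.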
Abstract:
This paper demonstrates that an asset pricing model with least-squares learning can lead to bubbles and crashes as endogenous responses to the fundamentals driving asset prices. When agents are risk-averse they need to make forecasts of the conditional variance of a stock’s return. Recursive updating of both the conditional variance and the expected return implies several mechanisms through which learning impacts stock prices. Extended periods of excess volatility, bubbles and crashes arise with a frequency that depends on the extent to which past data is discounted. A central role is played by changes over time in agents’ estimates of risk.
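A hedged sketch of the kind of recursion the abstract describes, in which both the expected return and the conditional variance are updated with a constant gain g so that past data are geometrically discounted (the paper's exact specification may differ):

    \hat{\mu}_t \;=\; \hat{\mu}_{t-1} + g\,\bigl(r_t - \hat{\mu}_{t-1}\bigr),
    \qquad
    \hat{\sigma}^2_t \;=\; \hat{\sigma}^2_{t-1} + g\,\bigl((r_t - \hat{\mu}_{t-1})^2 - \hat{\sigma}^2_{t-1}\bigr),

where r_t is the realized return and a larger gain g discounts past observations more heavily, the channel linked above to the frequency of excess volatility, bubbles, and crashes.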
Abstract:
The human auditory system comprises specialized but interacting anatomic and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent there are common mechanisms subserving such plasticity. A first series of experiments establishes a temporal hierarchy along which sounds of objects are discriminated along basic to fine-grained categorical boundaries and learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for their accurate ordered perception. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported, wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ∼300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences based on steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by a high-throughput DNA sequencing procedure. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome does not follow a uniform distribution, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
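A minimal Python sketch of the two ideas above, flagging hot-spot bins with extreme tag accumulation and unbiased random down-sampling of the tag set; the bin size, fold threshold, and simulated positions are hypothetical, not the thesis' actual parameters:

    import random
    from collections import Counter

    def flag_hotspot_bins(tag_positions, bin_size=200, fold_threshold=50):
        # Flag bins whose tag count exceeds fold_threshold times the mean bin count;
        # such bins are candidate artifactual hot-spots that produce false peaks.
        counts = Counter(pos // bin_size for pos in tag_positions)
        mean_count = sum(counts.values()) / len(counts)
        return {b for b, c in counts.items() if c > fold_threshold * mean_count}

    def random_sample_tags(tags, fraction, seed=0):
        # Unbiased random sub-sample of a fixed fraction of the sequence tags.
        rng = random.Random(seed)
        return rng.sample(tags, int(len(tags) * fraction))

    # Simulated tag positions on one chromosome, with an artificial pile-up at 5,000.
    tags = [random.randint(0, 1_000_000) for _ in range(10_000)] + [5_000] * 2_000
    print(len(flag_hotspot_bins(tags)), len(random_sample_tags(tags, 0.1)))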
Abstract:
The ability to model biodiversity patterns is of prime importance in this era of severe environmental crisis. Species assemblage along environmental gradients is subject to the interplay of biotic interactions in addition to abiotic environmental filtering. Accounting for complex biotic interactions across a wide array of species has so far remained challenging. Here, we propose to use food web models that can infer the potential interaction links between species as a constraint in species distribution models. Using a plant-herbivore (butterfly) interaction dataset, we demonstrate that this combined approach is able to improve both species distribution and community forecasts. Most importantly, this combined approach is very useful for modelling more generalist species that have multiple potential interaction links, for which gaps in the literature may be recurrent. Our combined approach points to a promising direction for modelling the spatial variation of entire species interaction networks. Our work has implications for studies of range-shifting species and invasive species biology, where it may be unknown how a given biota might interact with a potential invader or under a future climate.
Abstract:
This project studied and developed a new technique for the detection of gases with range resolution. This technique, called FMCW lidar, evolves from the FMCW radar technique applied to lidar systems. It takes advantage of the spectral absorption lines that arise from the interaction between light and gases: the wavelength of a laser emitter is tuned to one of these spectral lines, and the backscattered light is then detected and analyzed in order to obtain gas concentration measurements. The first part of the project consisted of an analysis of the WMS technique, a technique for the in-situ measurement of gases. A complete theoretical analysis was performed and some experiments were carried out in order to test the technique and to validate its application to an FMCW-modulated system for the detection of gases. The second part of the project consisted of an analysis of the FMCW lidar technique for solid target detection and its extension to continuous media. The classical form of this technique was analyzed for a distributed medium, and a filtering effect was found which prevents the accurate acquisition of the medium response. A modification of the technique was proposed and a validation via simulations and experiments was carried out. Following these tests, a novel system is proposed for development and testing in order to perform the indicated gas detection with range resolution.
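For background, the generic FMCW relations that give range resolution (standard radar/lidar expressions, not values or notation taken from the project) link the beat frequency of the returned signal to the target range, and the sweep bandwidth to the achievable range resolution:

    f_b \;=\; \frac{2\,B\,R}{c\,T_m},
    \qquad
    \Delta R \;=\; \frac{c}{2\,B},

where B is the modulation bandwidth, T_m the sweep period, R the range, and c the speed of light. For a distributed medium the return is a superposition of such beat contributions from all ranges, the setting in which the abstract reports the filtering effect.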
Abstract:
An increasing number of studies have sprung up in recent years seeking to identify individual inventors from patent data. Different heuristics have been suggested to use their names and other information disclosed in patent documents in order to find out “who is who” in patents. This paper contributes to this literature by setting forth a methodology to identify them using patent applications to the European Patent Office (EPO hereafter). As in a large part of this literature, we basically follow a three-step procedure: (1) the parsing stage, aimed at reducing the noise in the inventor’s name and other fields of the patent; (2) the matching stage, where name-matching algorithms are used to group possibly similar names; (3) the filtering stage, where additional information and different scoring schemes are used to decide which of these candidate matches refer to the same inventor. The paper includes some figures resulting from applying the algorithms to the set of European inventors applying to the EPO over a long period of time.
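A minimal Python sketch of the matching stage in step (2), grouping parsed inventor names whose string similarity exceeds a threshold; the normalisation, similarity measure, and threshold are illustrative assumptions, not the paper's actual algorithm or scoring scheme:

    from difflib import SequenceMatcher

    def normalize(name):
        # Parsing-style clean-up: lowercase, drop punctuation, collapse whitespace.
        kept = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
        return " ".join(kept.split())

    def match_names(names, threshold=0.85):
        # Group names whose pairwise similarity ratio exceeds the threshold.
        groups = []
        for name in names:
            clean = normalize(name)
            for group in groups:
                if SequenceMatcher(None, clean, normalize(group[0])).ratio() >= threshold:
                    group.append(name)
                    break
            else:
                groups.append([name])
        return groups

    inventors = ["Mueller, Hans", "Muller, Hans", "Garcia, Maria J.", "Garcia Lopez, Maria"]
    print(match_names(inventors))

In the full methodology, the filtering stage would then apply additional patent information and scoring schemes to confirm or reject these candidate groups.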