855 results for variance change point detection
Abstract:
Four models are employed in the landscape change detection of the newly created wetland. The models include ones for patch connectivity, ecological diversity, human impact intensity, and mean center of land cover. The landscape data of the newly created wetland in the Yellow River Delta in 1984, 1991, and 1996 are produced from unsupervised and supervised classification based on integrating Landsat TM images of the newly created wetland from the four seasons of each year. Applying the models to these data shows that the newly created wetland landscape in the Yellow River Delta underwent great change. The driving forces of the change are mainly natural evolution of the newly created wetland and rapid population growth, especially non-peasant population growth, in the Yellow River Delta, because a considerable number of oil and gas fields have been found there. To prevent further destruction of the newly created wetland and to conserve benign succession of its ecosystems, six measures are suggested on the basis of the research results. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
The color change induced by triple hydrogen-bonding recognition between melamine and a cyanuric acid derivative grafted on the surface of gold nanoparticles can be used for reliable detection of melamine. Since such a color change can be readily seen by the naked eye, the method enables on-site and real-time detection of melamine in raw milk and infant formula even at a concentration as low as 2.5 ppb without the aid of any advanced instruments.
Abstract:
A specific impairment in phoneme awareness has been hypothesized as one of the current explanations for dyslexia. We examined attentional shifts towards phonological information as indexed by event-related potentials (ERPs) in normal readers and dyslexic adults. Participants performed a lexical decision task on spoken stimuli of which 80% started with a standard phoneme and 20% with a deviant phoneme. A P300 modulation was expected for deviants in control adults, indicating that the phonological change had been detected. A mild and right-lateralized P300 was observed for deviant stimuli in controls, but was absent in dyslexic adults. This result suggests that dyslexic adults fail to make shifts of attention to phonological cues in the same way that normal adult readers do. (C) 2003 Elsevier Ireland Ltd. All rights reserved.
Abstract:
To determine the age-related change in the peripheral short-wavelength-sensitive (SWS) grating contrast sensitivity function (CSF), cut-off spatial frequency (acuity) and contrast sensitivity for both a detection and resolution task were measured at 8 degrees eccentricity under conditions of SWS-cone isolation for 51 subjects (19-72 years). The acuity for both the detection and resolution task declined with age, the detection acuity being significantly higher than the resolution acuity at all ages (p
Abstract:
Loop-mediated isothermal amplification (LAMP) is an innovative technique that allows the rapid detection of target nucleic acid sequences under isothermal conditions without the need for complex instrumentation. The development, optimization, and clinical validation of a LAMP assay targeting the ctrA gene for the rapid detection of capsular Neisseria meningitidis are described. Highly specific detection of capsular N. meningitidis type strains and clinical isolates was demonstrated, with no cross-reactivity with other Neisseria spp. or with a comprehensive panel of other common human pathogens. The lower limit of detection was 6 ctrA gene copies, detectable in 48 min, with positive reactions readily identifiable visually via a simple color change. Higher copy numbers could be detected in as little as 16 min. When applied to a total of 394 clinical specimens, the LAMP assay, in comparison to a conventional TaqMan®-based real-time polymerase chain reaction system, demonstrated a sensitivity of 100% and a specificity of 98.9% with a κ coefficient of 0.942. The LAMP method represents a rapid, sensitive, and highly specific technique for the detection of N. meningitidis and has the potential to be used as a point-of-care molecular test and in resource-poor settings.
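As an aside on how the reported agreement statistics fit together, the sketch below computes sensitivity, specificity, and Cohen's kappa from a 2x2 confusion matrix. The counts used are hypothetical placeholders, not the actual cross-tabulation of the 394 specimens, so the printed values only approximate the figures quoted above.

```python
# Illustrative computation of sensitivity, specificity, and Cohen's kappa
# from a 2x2 confusion matrix (index test vs. reference test). The counts
# below are hypothetical placeholders, not the study's actual data.

def agreement_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    observed = (tp + tn) / n              # observed agreement p_o
    # chance agreement p_e from the row/column marginals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Hypothetical split of 394 specimens: 60 TP, 4 FP, 0 FN, 330 TN.
se, sp, k = agreement_stats(tp=60, fp=4, fn=0, tn=330)
print(f"sensitivity={se:.3f}, specificity={sp:.3f}, kappa={k:.3f}")
```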
Abstract:
This work describes a novel use for the polymeric film poly(o-aminophenol) (PAP), which was made responsive to a specific protein. This was achieved through templated electropolymerization of aminophenol (AP) in the presence of the protein. The procedure involved adsorbing the protein on the electrode surface and thereafter electropolymerizing the aminophenol. Proteins embedded at the outer surface of the polymeric film were digested by proteinase K and then washed away, thereby creating vacant sites. The capacity of the templated film to specifically rebind protein was tested with myoglobin (Myo), a cardiac biomarker for ischemia. The films acted as biomimetic artificial antibodies and were produced on a gold (Au) screen-printed electrode (SPE), as a step towards disposable sensors to enable point-of-care applications. Raman spectroscopy was used to follow the surface modification of the Au-SPE. The ability of the material to rebind Myo was measured by electrochemical techniques, namely electrochemical impedance spectroscopy (EIS) and square wave voltammetry (SWV). The devices displayed linear responses to Myo in EIS and SWV assays down to 4.0 and 3.5 μg/mL, respectively, with detection limits of 1.5 and 0.8 μg/mL. Good selectivity was observed in the presence of troponin T (TnT) and creatine kinase (CKMB) in SWV assays, and accurate results were obtained in applications to spiked serum. The sensor described in this work is a potential tool for screening Myo at the point of care due to its simplicity of fabrication, disposability, short response time, low cost, good sensitivity, and selectivity.
Abstract:
Among PET radiotracers, FDG is quite well accepted as an accurate oncology diagnostic tool, frequently helpful also in the evaluation of treatment response and in radiation therapy treatment planning for several cancer sites. By contrast, the reliability of Choline as a tracer for prostate cancer (PC) still remains an object of debate for clinicians, including radiation oncologists. This review focuses on the available data about the potential impact of Choline-PET in the daily clinical practice of radiation oncologists managing PC patients. In summary, routine Choline-PET is not indicated for initial local T staging, but it seems better than conventional imaging for nodal staging and for all patients with suspected metastases. In these settings, Choline-PET showed the potential to change patient management. A critical limit remains spatial resolution, which limits accuracy and reliability for small lesions. After a PSA rise, the problem of the trigger PSA value remains crucial. Indeed, the overall detection rate of Choline-PET increases significantly when the trigger PSA, or the doubling time, increases, but higher PSA levels are often a sign of metastatic spread, a contraindication for potentially curative local treatments such as radiation therapy. Even if several published data seem promising, the current role of PET in treatment planning for PC patients to be irradiated still remains under investigation. Based on the available literature data, all these issues are addressed and discussed in this review.
Abstract:
Changes are continuously made to the source code of software systems to take customer needs into account and to correct faults. Continuous changes can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, typically in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without design defects. Yet only a few of these authors have carried out an empirical study of the impact of design defects on comprehension, and none of them has studied the impact of design defects on the effort developers need to correct faults. In this thesis, we make three main contributions. The first contribution is an empirical study that provides evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to assess the impact of the composition of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developer performance using (1) the NASA task load index for their effort, (2) the time they spent completing their tasks, and (3) the percentages of correct answers. The results of the two experiments show that two occurrences of Blob or of spaghetti code are a significant obstacle to developer performance during comprehension and change tasks. These results justify earlier research on the specification and detection of design defects. Software development teams should warn developers against high numbers of design defect occurrences and recommend refactorings at each step of the development process to remove these design defects when possible. In the second contribution, we study the relation between design defects and faults. We study the impact of the presence of design defects on the effort required to correct faults. We measure the effort to correct faults using three indicators: (1) the duration of the correction period, (2) the number of fields and methods touched by the fault correction, and (3) the entropy of the fault corrections in the source code. We conduct an empirical study with 12 design defects detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results show that the duration of the correction period is longer for faults involving classes with design defects. Moreover, correcting faults in classes with design defects changes more files, more fields, and more methods.
We also observed that, after a fault is corrected, the number of design defect occurrences in the classes involved in the correction decreases. Understanding the impact of design defects on the effort developers need to correct faults is important to help development teams better assess and predict the impact of their design decisions, and thus channel their efforts toward improving the quality of their systems. Development teams should monitor and remove design defects from their systems, because these defects are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool able to detect design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and take into account design defect occurrences as they find them during comprehension and change. Researchers have proposed approaches to detect design defect occurrences, but these approaches currently have four limitations: (1) they require in-depth knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach for detecting design defects, based on a machine learning technique, support vector machines, and taking practitioners' feedback into account. Through an empirical study involving three systems and four design defects, we show that the precision and recall of SMURF are higher than those of DETEX and BDTEX when detecting design defect occurrences. We also show that SMURF can be applied in both intra-system and inter-system configurations. Finally, we show that the precision and recall of SMURF improve when practitioners' feedback is taken into account.
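The thesis text does not include code here; purely as an illustration of the kind of approach SMURF is described as taking (a support vector machine classifier over per-class measurements, retrained as practitioners confirm or reject detected occurrences), a minimal scikit-learn sketch might look as follows. The metric features, numeric values, and feedback loop are invented for illustration and are not SMURF's actual design.

```python
# Illustrative sketch only (not SMURF itself): an SVM classifier over
# class-level metrics, retrained as practitioners confirm or reject
# detected anti-pattern occurrences. Features and values are invented.
import numpy as np
from sklearn.svm import SVC

# Hypothetical metric vectors per class: [LOC, #methods, #attributes, coupling]
X_train = np.array([
    [1200, 85, 40, 30],   # labelled as a Blob occurrence
    [150, 10, 5, 4],      # labelled as clean
    [900, 60, 35, 25],    # labelled as a Blob occurrence
    [200, 12, 8, 6],      # labelled as clean
], dtype=float)
y_train = np.array([1, 0, 1, 0])  # 1 = anti-pattern occurrence, 0 = not

model = SVC(kernel="rbf", gamma="scale")
model.fit(X_train, y_train)

# Rank new candidate classes; positive decision scores suggest occurrences.
X_new = np.array([[1100, 70, 30, 28], [180, 9, 6, 3]], dtype=float)
print("decision scores:", model.decision_function(X_new))

# Practitioner feedback: first candidate confirmed, second rejected.
X_train = np.vstack([X_train, X_new])
y_train = np.concatenate([y_train, np.array([1, 0])])
model.fit(X_train, y_train)   # iterative, incremental retraining with feedback
```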
Abstract:
In this paper we examine the concept of nonviolent resistance by inquiring into a rural experience from the Rio Negro steppe. The initiative highlights the need to recognize the context in which resistance is exercised and to consider three aspects: the evaluation and interpretation of space, the dispute over public policy, and the restructuring of the family order. These three elements, which overlap material and symbolic aspects, are discussed through an organization that trades domestic craft production. The notion of "development" is examined on the basis of the frameworks of values through which the reproduction of subordination becomes associated with this idea; the very challenge of change underlies the proposal that reviews that idea of development and, from this complexity, illuminates the notion of "nonviolent resistance".
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture–recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. Singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
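The Markovian singing model is only described verbally above; a minimal sketch of such a two-state process (singing bouts followed by silences), generating a per-interval detection history for one bird, is given below. The transition and detection probabilities, and the four 2-min interval structure, are illustrative assumptions rather than the study's actual settings.

```python
# Minimal sketch of a two-state Markov singing process: a bird alternates
# between singing bouts and silence, and it is "detected" in a 2-min
# subinterval only if it sings (and is heard) at least once during it.
# All probabilities and the 4 x 2-min structure are illustrative only.
import random

def detection_history(n_intervals=4, minutes_per_interval=2,
                      p_start_bout=0.3, p_stay_in_bout=0.7, p_hear=0.9):
    history = []
    singing = False
    for _ in range(n_intervals):
        detected = False
        for _ in range(minutes_per_interval):
            # Markov transition between "silent" and "singing bout" states
            if singing:
                singing = random.random() < p_stay_in_bout
            else:
                singing = random.random() < p_start_bout
            if singing and random.random() < p_hear:
                detected = True
        history.append(1 if detected else 0)
    return history   # e.g. [0, 1, 1, 0], a capture-recapture style record

random.seed(1)
print([detection_history() for _ in range(5)])
```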
Abstract:
Urban surveillance footage can be of poor quality, partly due to the low quality of the camera and partly due to harsh lighting and heavily reflective scenes. For some computer surveillance tasks very simple change detection is adequate, but sometimes a more detailed change detection mask is desirable, e.g., for accurately tracking identity when faced with multiple interacting individuals and in pose-based behaviour recognition. We present a novel technique for enhancing a low-quality change detection into a better segmentation using an image combing estimator in an MRF-based model.
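The image combing estimator itself is not reproduced here; for reference, the kind of very simple change detection the abstract treats as a starting point (background differencing with a fixed threshold) can be sketched as follows. The threshold value and the tiny synthetic frames are arbitrary illustrative choices.

```python
# Basic change-detection mask by background differencing; the paper's
# MRF-based refinement is not reproduced here. Threshold is illustrative.
import numpy as np

def change_mask(frame, background, threshold=25):
    """Return a binary mask of pixels that differ from the background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    if diff.ndim == 3:                  # colour frames: take max over channels
        diff = diff.max(axis=2)
    return (diff > threshold).astype(np.uint8)

# Tiny synthetic example: a 4x4 grey background with one changed region.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 180                   # a "moving object"
print(change_mask(frame, background))
```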
Abstract:
We bridge the properties of the regular triangular, square, and hexagonal honeycomb Voronoi tessellations of the plane to the Poisson-Voronoi case, thus analyzing in a common framework symmetry breaking processes and the approach to uniform random distributions of tessellation-generating points. We resort to ensemble simulations of tessellations generated by points whose regular positions are perturbed through a Gaussian noise, whose variance is given by the parameter α² times the square of the inverse of the average density of points. We analyze the number of sides, the area, and the perimeter of the Voronoi cells. For all values α > 0, hexagons constitute the most common class of cells, and 2-parameter gamma distributions provide an efficient description of the statistical properties of the analyzed geometrical characteristics. The introduction of noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous in α = 0. On the contrary, the honeycomb hexagonal tessellation is topologically stable and, experimentally, all Voronoi cells are hexagonal for small but finite noise with α < 0.12. For all tessellations and for small values of α, we observe a linear dependence on α of the ensemble mean of the standard deviation of the area and perimeter of the cells. Already for a moderate amount of Gaussian noise (α > 0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations are indistinguishable. When α > 2, results converge to those of Poisson-Voronoi tessellations. The geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α > 2; in this limit the Desch law for perimeters is shown to be not valid and a square root dependence on n is established. This law allows for an easy link to the Lewis law for areas and agrees with exact asymptotic results. Finally, for α > 1, the ensemble mean of the cells' area and perimeter restricted to the hexagonal cells agrees remarkably well with the full ensemble mean; this reinforces the idea that hexagons, beyond their ubiquitous numerical prominence, can be interpreted as typical polygons in 2D Voronoi tessellations.
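As a rough illustration of the perturbation experiment described (regular lattice points displaced by Gaussian noise and then tessellated), the following sketch uses scipy to count the sides of interior Voronoi cells for a few values of α. The use of a square lattice at unit point density, the lattice size, and the chosen α values are simplifying assumptions, not the paper's actual ensemble setup.

```python
# Sketch of the perturbation experiment: perturb a regular square lattice
# with Gaussian noise of standard deviation alpha (unit point density),
# build the Voronoi tessellation, and count sides of bounded cells.
# Lattice size and alpha values are illustrative.
import numpy as np
from scipy.spatial import Voronoi
from collections import Counter

def side_counts(alpha, n=20, seed=0):
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    points = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    points += rng.normal(scale=alpha, size=points.shape)  # Gaussian perturbation
    vor = Voronoi(points)
    counts = Counter()
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if region and -1 not in region:       # keep only bounded (interior) cells
            counts[len(region)] += 1
    return counts

for alpha in (0.1, 0.5, 2.0):
    print(alpha, side_counts(alpha))
```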
Abstract:
The Intergovernmental Panel on Climate Change fourth assessment report, published in 2007, came to a more confident assessment of the causes of global temperature change than previous reports and concluded that 'it is likely that there has been significant anthropogenic warming over the past 50 years averaged over each continent except Antarctica.' Since then, warming over Antarctica has also been attributed to human influence, and further evidence has accumulated attributing a much wider range of climate changes to human activities. Such changes are broadly consistent with theoretical understanding, and climate model simulations, of how the planet is expected to respond. This paper reviews this evidence from a regional perspective to reflect a growing interest in understanding the regional effects of climate change, which can differ markedly across the globe. We set out the methodological basis for detection and attribution and discuss the spatial scales on which it is possible to make robust attribution statements. We review the evidence showing significant human-induced changes in regional temperatures, and for the effects of external forcings on changes in the hydrological cycle, the cryosphere, circulation changes, oceanic changes, and changes in extremes. We then discuss future challenges for the science of attribution. To better assess the pace of change, and to understand more about the regional changes to which societies need to adapt, we will need to refine our understanding of the effects of external forcing and internal variability.