981 results for Geometry processing
Abstract:
Drainage-basin and channel-geometry multiple-regression equations are presented for estimating design-flood discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years at stream sites on rural, unregulated streams in Iowa. Design-flood discharge estimates determined by Pearson Type-III analyses using data collected through the 1990 water year are reported for the 188 streamflow-gaging stations used in either the drainage-basin or channel-geometry regression analyses. Ordinary least-squares multiple-regression techniques were used to identify selected drainage-basin and channel-geometry regions. Weighted least-squares multiple-regression techniques, which account for differences in the variance of flows at different gaging stations and for variable lengths in station records, were used to estimate the regression parameters. Statewide drainage-basin equations were developed from analyses of 164 streamflow-gaging stations. Drainage-basin characteristics were quantified using a geographic-information-system (GIS) procedure to process topographic maps and digital cartographic data. The significant characteristics identified for the drainage-basin equations included contributing drainage area, relative relief, drainage frequency, and 2-year, 24-hour precipitation intensity. The average standard errors of prediction for the drainage-basin equations ranged from 38.6% to 50.2%. The GIS procedure expanded the capability to quantitatively relate drainage-basin characteristics to the magnitude and frequency of floods for stream sites in Iowa and provides a flood-estimation method that is independent of hydrologic regionalization. Statewide and regional channel-geometry regression equations were developed from analyses of 157 streamflow-gaging stations. Channel-geometry characteristics were measured on site and on topographic maps. Statewide and regional channel-geometry regression equations that are dependent on whether a stream has been channelized were developed on the basis of bankfull and active-channel characteristics. The significant channel-geometry characteristics identified for the statewide and regional regression equations included bankfull width and bankfull depth for natural channels unaffected by channelization, and active-channel width for stabilized channels affected by channelization. The average standard errors of prediction ranged from 41.0% to 68.4% for the statewide channel-geometry equations and from 30.3% to 70.0% for the regional channel-geometry equations. Procedures provided for applying the drainage-basin and channel-geometry regression equations depend on whether the design-flood discharge estimate is for a site on an ungaged stream, an ungaged site on a gaged stream, or a gaged site. When both a drainage-basin and a channel-geometry regression-equation estimate are available for a stream site, a procedure is presented for determining a weighted average of the two flood estimates.
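The weighting procedure itself is given in the report; as a rough illustration of the general idea, the sketch below combines a drainage-basin and a channel-geometry estimate by weighting each inversely to its error variance in log space. The function name, the use of the average standard errors of prediction as weights, and the example numbers are illustrative assumptions, not the report's exact procedure.

```python
import math

def weighted_flood_estimate(q_db, se_db_pct, q_cg, se_cg_pct):
    """Illustrative variance-weighted average of a drainage-basin (db) and a
    channel-geometry (cg) design-flood estimate, combined in log10 space.

    q_*      : discharge estimates (e.g., cubic feet per second)
    se_*_pct : average standard errors of prediction, in percent
    (Assumed weighting scheme; the report's own procedure should be used in practice.)
    """
    # Convert percent standard errors to an approximate variance in log10 units.
    var_db = math.log10(1.0 + se_db_pct / 100.0) ** 2
    var_cg = math.log10(1.0 + se_cg_pct / 100.0) ** 2
    # Weight each log-estimate by the inverse of its variance.
    w_db, w_cg = 1.0 / var_db, 1.0 / var_cg
    log_q = (w_db * math.log10(q_db) + w_cg * math.log10(q_cg)) / (w_db + w_cg)
    return 10.0 ** log_q

# Invented example: a 100-year estimate of 5,000 ft^3/s (SE 45%) combined with
# an independent estimate of 4,200 ft^3/s (SE 60%).
print(round(weighted_flood_estimate(5000, 45, 4200, 60)))
```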
Abstract:
Forensic science is generally defined as the application of science to address questions related to the law. Too often, this view restricts the contribution of science to a single process which eventually aims at bringing individuals to court while minimising the risk of miscarriage of justice. In order to go beyond this paradigm, we propose to refocus attention towards traces themselves, as remnants of a criminal activity, and their information content. We postulate that traces contribute effectively to a wide variety of other informational processes that support decision making in many situations. In particular, they inform the actors of new policing strategies, who place the treatment of information and intelligence at the centre of their systems. This contribution of forensic science to these security-oriented models is still not well identified and captured. In order to create the best conditions for the development of forensic intelligence, we suggest a framework that connects forensic science to intelligence-led policing (part I). Crime scene attendance and processing can be envisaged within this view. This approach gives indications about how to structure the knowledge used by crime scene examiners in their effective practice (part II).
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach; the data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. This allows us to show the advantages of the proposed FastGAF method. It compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
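The splitting idea sketched above, where a data term and a regularity term live on two separate variables tied together through an augmented Lagrangian, can be illustrated on a toy quadratic problem. The sketch below is not the FastGAF scheme itself; it only shows the generic alternating structure (minimize over u, then over v, then update the multiplier), and the operator, signal, and parameters are invented for illustration.

```python
import numpy as np

# Toy 1-D splitting: fit u to noisy data b while keeping v smooth, with u = v
# enforced by an augmented Lagrangian (ADMM-style alternation). Illustrative only.
rng = np.random.default_rng(0)
n = 100
b = np.concatenate([np.zeros(n // 2), np.ones(n // 2)]) + 0.1 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)          # forward-difference operator (regularity term)
lam, rho = 5.0, 1.0                     # smoothing weight and penalty parameter (assumed)
u = v = np.zeros(n)
y = np.zeros(n)                         # Lagrange multiplier for the constraint u = v

for _ in range(200):
    # Data step: argmin_u 0.5*||u - b||^2 + y·u + (rho/2)*||u - v||^2
    u = (b - y + rho * v) / (1.0 + rho)
    # Regularity step: argmin_v 0.5*lam*||D v||^2 - y·v + (rho/2)*||u - v||^2
    v = np.linalg.solve(lam * D.T @ D + rho * np.eye(n), y + rho * u)
    # Multiplier update pushes u and v toward agreement.
    y = y + rho * (u - v)

print("max |u - v| after splitting:", float(np.max(np.abs(u - v))))
```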
Abstract:
Deficits in the processing of sensory reafferences have been suggested to account for age-related decline in motor coordination. Whether sensory reafferences are accurately processed can be assessed based on the bimanual advantage in tapping: because tapping with an additional hand increases kinesthetic reafferences, bimanual tapping is characterized by lower inter-tap interval variability than unimanual tapping. A suppression of the bimanual advantage would thus indicate a deficit in sensory reafference. We tested whether elderly adults indeed show a reduced bimanual advantage by measuring unimanual (UM) and bimanual (BM) self-paced tapping performance in groups of young (n = 29) and old (n = 27) healthy adults. The electroencephalogram was recorded to assess the underlying patterns of oscillatory activity, a neurophysiological mechanism advanced to support the integration of sensory reafferences. Behaviorally, there was a significant interaction between the factors tapping condition and age group at the level of the inter-tap interval variability, driven by a lower variability in BM than UM tapping in the young, but not in the elderly group. This result indicates that in self-paced tapping, the bimanual advantage is absent in the elderly. Electrophysiological results revealed an interaction between tapping condition and age group on low beta band (14–20 Hz) activity. Beta activity varied depending on the tapping condition in the elderly but not in the young group. Source estimations localized this effect within left superior parietal and left occipital areas. We interpret our results in terms of the engagement of different mechanisms in the elderly depending on the tapping mode: a 'kinesthetic' mechanism for UM and a 'visual imagery' mechanism for BM tapping movements.
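As a point of reference for the inter-tap interval variability measure used above, the short sketch below computes the coefficient of variation of inter-tap intervals from a series of tap times; the tap-time arrays and period values are invented for illustration and are not the study's data.

```python
import numpy as np

def iti_cv(tap_times_s):
    """Coefficient of variation of inter-tap intervals (lower = more regular tapping)."""
    itis = np.diff(np.asarray(tap_times_s))
    return float(np.std(itis) / np.mean(itis))

# Invented example: self-paced taps near a 600 ms period, one hand vs. two hands.
rng = np.random.default_rng(1)
unimanual = np.cumsum(0.6 + 0.05 * rng.standard_normal(50))
bimanual = np.cumsum(0.6 + 0.03 * rng.standard_normal(50))
print(f"UM CV: {iti_cv(unimanual):.3f}  BM CV: {iti_cv(bimanual):.3f}")
```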
Abstract:
Résumé: The construction of a second metro line (M2), starting in 2004 and passing through the centre of Lausanne, provided the opportunity to develop a methodology for microgravimetric surveys in a disturbed urban environment. Terrain corrections take on a particular importance in such a setting, because many non-geological objects of anthropogenic origin, such as all kinds of empty basements, perturb the gravity measurements. The preliminary civil-engineering design studies for this metro provided us with a large amount of cadastral information, notably the outlines of buildings, the planned position of the M2 tube, the depths of basements in the vicinity of the tube, and also the geology encountered along the M2 corridor (derived from the lithological data of geotechnical boreholes). The plan geometry of the basements was derived from the building outlines in a GIS (Geographic Information System), while a door-to-door survey was necessary to measure the basement heights. Starting from an existing DTM (Digital Terrain Model) on a one-metre grid, it was then possible to update it with the voids that these basements represent. The gravity measurement cycles were processed in Access databases, allowing greater control of the data, faster processing, and easier retroactive terrain corrections, notably when the topography is updated during the construction work. The Caroline district (between the Bessières bridge and Place de l'Ours) was chosen as the study area. This district was selected because, during this thesis work, both the pre- and post-excavation phases of the M2 tunnel occurred. This allowed us to carry out two gravity surveys (before excavation in summer 2005 and after excavation in summer 2007). These repeat surveys allowed us to test our model of the tunnel: by comparing the measurements of the two campaigns with the gravity response of the tube model discretised into rectangular prisms, we were able to validate our modelling method. The modelling approach we developed allows us to construct in detail the shape of the object under consideration, with the possibility of intersecting geological interfaces and the topographic surface several times. This type of modelling can be applied to any anthropogenic construction of linear shape.

Abstract: The construction of a second underground line (M2) in 2004 in downtown Lausanne was the opportunity to develop a methodology for microgravity surveying in an urban environment. Terrain corrections take on special meaning in such an environment: many non-geological anthropogenic objects, such as basements, perturb the gravity measurements. Civil-engineering studies provided a large amount of cadastral information, including outlines of buildings, the M2 tube position, the depths of some basements in the vicinity of the M2 corridor, and the geology encountered along the M2 corridor (from lithological borehole data). The geometry of the basements was deduced from building outlines in a GIS (Geographic Information System), and a field investigation was carried out to measure or estimate basement heights. A DEM (Digital Elevation Model) of the city of Lausanne was updated with the basement voids. Gravity cycles were processed in an Access database to enable greater control of the data, faster processing, and easier retroactive terrain corrections when updates of the topographic surface become available. The Caroline area (between the Saint-Martin bridge and Place de l'Ours) was chosen as the study area. This area was of particular interest because both the pre- and post-excavation phases fell within this thesis, which allowed us to conduct two gravity surveys (before excavation during summer 2005 and after excavation during summer 2007). These re-occupations enabled us to test our model of the tube: by comparing the difference between the two surveys with the gravity response of our model (built from rectangular prisms), we were able to validate our modelling. The modelling method we developed allows us to construct the detailed shape of an object, with the possibility of crossing geological interfaces and the surface topography. This type of modelling can be applied to any such anthropogenic structure.
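To illustrate the kind of forward model mentioned above (the gravity response of a structure discretised into rectangular prisms), the sketch below evaluates the classical closed-form vertical attraction of a single right rectangular prism at an observation point. The corner-sign convention, axis orientation, and example geometry are assumptions for illustration and are not taken from the thesis.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(obs, x_lim, y_lim, z_lim, density):
    """Vertical gravity effect (m/s^2) of a homogeneous rectangular prism.

    Classical corner-summation formula (Nagy-type); z is taken positive downward,
    and the overall sign convention should be checked against a reference
    implementation before use.
    obs: (x, y, z) of the observation point; *_lim: (min, max) prism bounds.
    """
    gz = 0.0
    for i, xb in enumerate(x_lim):
        for j, yb in enumerate(y_lim):
            for k, zb in enumerate(z_lim):
                x, y, z = xb - obs[0], yb - obs[1], zb - obs[2]
                r = np.sqrt(x * x + y * y + z * z)
                sign = (-1.0) ** (i + j + k)
                gz += sign * (x * np.log(y + r) + y * np.log(x + r)
                              - z * np.arctan2(x * y, z * r))
    return -G * density * gz  # sign depends on the chosen axis convention

# Invented example: an air-filled basement (density contrast ~ -2000 kg/m^3)
# of 5 m x 4 m x 2.5 m whose roof lies 2 m below the gravity station.
dgz = prism_gz((0.0, 0.0, 0.0), (-2.5, 2.5), (-2.0, 2.0), (2.0, 4.5), -2000.0)
print(f"gravity effect: {dgz * 1e8:.1f} microGal")  # 1 microGal = 1e-8 m/s^2
```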
Abstract:
Remote sensing image processing is nowadays a mature research area. The techniques developed in the field allow many real-life applications with great societal value. For instance, urban monitoring, fire detection or flood prediction can have a great impact on economic and environmental issues. To attain such objectives, remote sensing has turned into a multidisciplinary field of science that embraces physics, signal theory, computer science, electronics, and communications. From a machine learning and signal/image processing point of view, all the applications are tackled under specific formalisms, such as classification and clustering, regression and function approximation, image coding, restoration and enhancement, source unmixing, data fusion, or feature selection and extraction. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in remote sensing image processing.
Abstract:
The structural saturation and stability, the energy gap, and the density of states of a series of small, silicon-based clusters have been studied by means of PM3 and ab initio (HF/6-31G* and 6-311++G**, CIS/6-31G* and MP2/6-31G*) calculations. It is shown that, in order to maintain a stable nanometric and tetrahedral silicon crystallite and remove the gap states, a saturating atom or species such as H, F, Cl, OH, O, or N is necessary, and that both the cluster size and the surface species affect the energetic distribution of the density of states. This research suggests that the visible luminescence in silicon-based nanostructured materials essentially arises from the nanometric and crystalline silicon domains but is affected and protected by the surface species; we have thus linked most of the proposed mechanisms of luminescence for porous silicon, e.g., the quantum confinement effect due to the cluster size and the effect of Si-based surface complexes.
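The energy-gap calculations mentioned above can be illustrated with a minimal HF/6-31G* computation of the HOMO–LUMO gap for the smallest hydrogen-saturated silicon cluster, SiH4. The use of PySCF and the geometry below are assumptions made purely for illustration; they are unrelated to the software and structures actually used in the study.

```python
from pyscf import gto, scf

# SiH4 with an approximately tetrahedral geometry (Si-H ~ 1.48 Angstrom);
# the coordinates are an illustrative guess, not an optimized structure.
d = 1.48 / 3 ** 0.5
mol = gto.M(
    atom=f"""
    Si 0 0 0
    H  {d} {d} {d}
    H  {d} {-d} {-d}
    H  {-d} {d} {-d}
    H  {-d} {-d} {d}
    """,
    basis="6-31g*",
)

mf = scf.RHF(mol).run()

# HOMO = highest occupied orbital energy, LUMO = lowest unoccupied orbital energy.
homo = max(e for e, occ in zip(mf.mo_energy, mf.mo_occ) if occ > 0)
lumo = min(e for e, occ in zip(mf.mo_energy, mf.mo_occ) if occ == 0)
print(f"HF/6-31G* HOMO-LUMO gap: {(lumo - homo) * 27.2114:.2f} eV")
```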
Abstract:
In this paper, we describe several techniques for detecting the tonic pitch value in Indian classical music. In Indian music, the raga is the basic melodic framework and it is built on the tonic. Tonic detection is therefore fundamental for any melodic analysis in Indian classical music. This work explores detection of the tonic by processing the pitch histograms of Indian classical music. The processing of pitch histograms using group-delay functions, and its ability to amplify certain traits of Indian music in the pitch histogram, is discussed. Three different strategies to detect the tonic are proposed, namely the concert method, the template-matching method, and the segmented-histogram method. The concert method exploits the fact that the tonic is constant over a piece/concert. The template-matching and segmented-histogram methods use the properties that (i) the tonic is always present in the background and (ii) some notes are less inflected and dominant, to detect the tonic of individual pieces. All three methods yield good results for Carnatic music (90–100% accuracy), while for Hindustani music the template method works best, provided the vādi and samvādi notes for a given piece are known (85%).
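As a rough illustration of histogram-based tonic analysis (not the group-delay processing or the three specific methods proposed in the paper), the sketch below folds a pitch track into an octave-independent histogram in cents and reports its dominant peaks as tonic candidates; the synthetic pitch track, reference frequency, and bin settings are assumptions.

```python
import numpy as np

def pitch_class_histogram(freqs_hz, ref_hz=55.0, bins_per_octave=120):
    """Octave-folded pitch histogram in cents relative to an arbitrary reference."""
    cents = 1200.0 * np.log2(np.asarray(freqs_hz) / ref_hz)
    folded = np.mod(cents, 1200.0)
    hist, edges = np.histogram(folded, bins=bins_per_octave, range=(0.0, 1200.0))
    return hist, edges

# Synthetic pitch track: a drone at the tonic plus a few other notes, with jitter.
rng = np.random.default_rng(2)
tonic = 146.8  # assume D3 as the tonic, purely for illustration
track = np.concatenate([
    tonic * 2 ** (rng.normal(0, 5, 3000) / 1200),            # tonic drone
    tonic * 2 ** ((700 + rng.normal(0, 10, 1000)) / 1200),    # fifth above the tonic
    tonic * 2 ** ((400 + rng.normal(0, 10, 500)) / 1200),     # third above the tonic
])
hist, edges = pitch_class_histogram(track)
top_bins = np.argsort(hist)[-3:][::-1]
print("strongest pitch-class bins (cents above reference):",
      [float(edges[b]) for b in top_bins])
```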
Abstract:
The efficiency of combining high-pressure processing (HPP) and active packaging technologies to control Listeria monocytogenes growth during the shelf life of artificially inoculated cooked ham was assessed. Three lots of cooked ham were prepared: control, packaging with alginate films, and packaging with antimicrobial alginate films containing enterocins. After packaging, half of the samples were pressurized. Sliced cooked ham stored at 6 °C experienced a quick growth of L. monocytogenes. Both antimicrobial packaging and pressurization delayed the growth of the pathogen. However, at 6 °C the combination of antimicrobial packaging and HPP was necessary to achieve a reduction of inoculated levels without recovery during 60 days of storage. Further storage at 6 °C of pressurized antimicrobial packed cooked ham resulted in L. monocytogenes levels below the detection limit (day 90). On the other hand, storage at 1 °C controlled the growth of the pathogen until day 39 in non-pressurized ham, while antimicrobial packaging and storage at 1 °C exerted a bacteriostatic effect for 60 days. All HPP lots stored at 1 °C led to counts <100 CFU/g at day 60. Similar results were observed when combining both technologies. After a cold chain break no growth of L. monocytogenes was observed in pressurized ham packed with antimicrobial films, showing the efficiency of combining both technologies.
Abstract:
The effect of high-pressure processing (400 MPa for 10 min) and natural antimicrobials (enterocins and lactate-diacetate) on the behaviour of L. monocytogenes in sliced cooked ham during refrigerated storage (1 °C and 6 °C) was assessed. The efficiency of the treatments after a cold chain break was evaluated. Lactate-diacetate exerted a bacteriostatic effect against L. monocytogenes during the whole storage period (3 months) at 1 °C and 6 °C, even after temperature abuse. The combination of low storage temperature (1 °C), high-pressure processing (HPP) and addition of lactate-diacetate reduced the levels of L. monocytogenes during storage by 2.7 log CFU/g. The most effective treatment was the combination of HPP, enterocins and refrigeration at 1 °C, which reduced the population of the pathogen to final counts of 4 MPN/g after 3 months of storage, even after the cold chain break.