986 results for depth-first


Relevance:

30.00%

Publisher:

Abstract:

Three-dimensional (3-D) spatial data of a transportation infrastructure contain useful information for civil engineering applications, including as-built documentation, on-site safety enhancements, and progress monitoring. Several techniques have been developed for acquiring 3-D point coordinates of infrastructure, such as laser scanning. Although the method yields accurate results, the high device costs and human effort required render the process infeasible for generic applications in the construction industry. A quick and reliable approach, which is based on the principles of stereo vision, is proposed for generating a depth map of an infrastructure. Initially, two images are captured by two similar stereo cameras at the scene of the infrastructure. A Harris feature detector is used to extract feature points from the first view, and an innovative adaptive window-matching technique is used to compute feature point correspondences in the second view. A robust algorithm computes the nonfeature point correspondences. Thus, the correspondences of all the points in the scene are obtained. After all correspondences have been obtained, the geometric principles of stereo vision are used to generate a dense depth map of the scene. The proposed algorithm has been tested on several data sets, and results illustrate its potential for stereo correspondence and depth map generation.
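The final triangulation step described above follows standard rectified-stereo geometry. A minimal sketch of that step; the focal length and baseline values are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Rectified-stereo triangulation, the final geometric step described above.
# Focal length (pixels) and baseline (metres) are illustrative assumptions.
FOCAL_PX = 1200.0
BASELINE_M = 0.25

def depth_from_disparity(disparity_px):
    """Z = f * B / d for a rectified stereo pair; non-positive disparities
    (no match / point at infinity) are mapped to inf."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, FOCAL_PX * BASELINE_M / np.maximum(d, 1e-9), np.inf)

# Nearer points have larger disparity and hence smaller depth.
print(depth_from_disparity([60.0, 30.0, 15.0, 0.0]))
```

Once correspondences are dense, applying this to the per-pixel disparities yields the dense depth map the abstract describes.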

Relevance:

30.00%

Publisher:

Abstract:

In a search for the mechanism of the induced reduction reaction observed in X-ray photoelectron spectroscopy (XPS) depth profiles measured experimentally on CeO2/Si epilayers grown by ion beam epitaxy (IBE), several possibilities have been checked. The first possibility, that the X-rays induce the reaction, has been ruled out experimentally. Two other models for the incident-ion-induced reaction, one based on short-range interaction (direct collision) and the other on the long-range potential accompanying the incident ions, have been tested by computer simulation. The results proved that the main mechanism is the former, not the latter. (C) 1998 Elsevier Science Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Often it is assumed that absorbance decays in photochromic materials with the time dependence of the photochemical kinetics, i.e. exponentially for first order kinetics. Although this may hold in the limiting case of vanishing absorbance, deviations are to be expected for realistic samples, because the local photochemical kinetics slows down with increasing initial absorption and penetration depth of the radiation. We discuss the theory of the kinetics of initially homogeneous photochromic samples and derive analytical solutions. In extension of Tomlinson's theory we find an analytical solution that holds with good approximation even for samples that exhibit a small residual absorption in the saturation limit. The theoretical time dependence of the absorbance originating from photochemical first order kinetics of dye-doped systems is compared with experimental data published by Lafond et al. for fulgides doped in different polymer matrices. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
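The slowdown described above can be reproduced numerically by coupling local first-order photokinetics to Beer-Lambert attenuation through the layers above; the discretisation scheme and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

nz, nt, dt = 200, 4000, 0.005
dz = 1.0 / nz
eps, k = 3.0, 1.0            # initial decadic absorbance; photoreaction rate
c = np.ones(nz)              # normalised photochromic concentration profile
absorbance = []

for _ in range(nt):
    # Decadic Beer-Lambert attenuation of the light reaching each layer
    # (optical depth evaluated at the layer midpoint).
    tau = np.cumsum(eps * c * dz) - 0.5 * eps * c * dz
    I = 10.0 ** (-tau)
    c = c * (1.0 - dt * k * I)       # local first-order photoreaction
    absorbance.append(eps * c.sum() * dz)
```

In the thin-sample limit (eps → 0) the loop reduces to A(t) = A(0)·e^(-kt); for finite initial absorbance the interior sees attenuated light and the total absorbance decays visibly more slowly than exponentially, which is the deviation discussed above.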

Relevance:

30.00%

Publisher:

Abstract:

Wavefront coding is a powerful technique that can be used to extend the depth of field of an incoherent imaging system. By adding a suitable phase mask to the aperture plane, the optical transfer function of a conventional imaging system can be made defocus invariant. Since 1995, when a cubic phase mask was first suggested, many kinds of phase masks have been proposed to achieve the goal of depth extension. In this Letter, a phase mask based on a sinusoidal function is designed to enrich the family of phase masks. Numerical evaluation demonstrates that the proposed mask is not only less sensitive to focus errors than the cubic, exponential, and modified logarithmic masks, but also has a smaller point-spread-function shifting effect. (C) 2010 Optical Society of America
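For context, a wavefront-coding pupil can be sketched as follows. The cubic mask is the classic 1995 design; the sinusoid-based form and all coefficients here are illustrative assumptions, not the mask published in the Letter:

```python
import numpy as np

# Normalised pupil coordinates in [-1, 1].
n = 257
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)

# The classic cubic wavefront-coding mask: phi = alpha * (x**3 + y**3).
alpha = 30.0
phi_cubic = alpha * (X**3 + Y**3)

# A sinusoid-based mask of the general kind the Letter proposes; this
# functional form and its coefficient are illustrative assumptions.
beta = 25.0
phi_sin = beta * (np.sin(0.5 * np.pi * X) ** 3 + np.sin(0.5 * np.pi * Y) ** 3)

# Generalised pupil with defocus parameter psi; depth-of-field extension
# means the resulting OTF varies little with psi.
psi = 5.0
pupil = np.exp(1j * (phi_cubic + psi * (X**2 + Y**2)))
```

Evaluating the OTF of `pupil` over a range of `psi` values is how defocus sensitivity of a candidate mask is typically compared numerically.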

Relevance:

30.00%

Publisher:

Abstract:

The mixed layer depth (MLD) in the upper ocean is an important physical parameter for describing the upper ocean mixed layer. We analyzed several major factors influencing the climatological mixed layer depth (CMLD), and established a numerical simulation of the South China Sea (SCS) using the Regional Ocean Model System (ROMS) with a high-resolution (1/12° × 1/12°) grid nesting method and 50 vertical layers. Several idealized numerical experiments were run by modifying the existing sea surface boundary conditions. In particular, we analyzed the sensitivity of the simulated CMLD to sea surface wind stress (SSWS), sea surface net heat flux (SSNHF), and the difference between evaporation and precipitation (DEP). The results show that SSWS is the most important of the three factors: ignoring SSWS changes the CMLD by 26% on average, and its effect is always to deepen the CMLD. SSNHF comes second (13%), deepening the CMLD from October to January and shallowing it from February to September, and DEP comes third (only 2%). Moreover, we analyzed the temporal and spatial characteristics of the CMLD and compared the simulation with ARGO observational data. The results indicate that ROMS is applicable for studying the CMLD in the SCS area.
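The MLD itself is typically diagnosed from a vertical profile with a threshold criterion. The sketch below uses a common temperature-threshold convention (0.2 °C below the value at 10 m), which is an assumption here and not necessarily the criterion used in the paper:

```python
import numpy as np

def mixed_layer_depth(depth, temp, ref_depth=10.0, dT=0.2):
    """Shallowest depth at which temperature falls dT below its value at
    ref_depth (profile ordered surface -> bottom). The 0.2 degC threshold
    is a common convention, assumed here, not taken from the paper."""
    t_ref = np.interp(ref_depth, depth, temp)
    below = np.where(temp <= t_ref - dT)[0]
    if below.size == 0:
        return float(depth[-1])          # mixed down to the profile bottom
    i = int(below[0])
    if i == 0:
        return float(depth[0])
    # Linear interpolation between the two bracketing levels.
    frac = (t_ref - dT - temp[i - 1]) / (temp[i] - temp[i - 1])
    return float(depth[i - 1] + frac * (depth[i] - depth[i - 1]))

z = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 50.0, 80.0])     # depth, m
t = np.array([28.0, 28.0, 28.0, 27.95, 27.2, 25.0, 22.0])  # temp, degC
mld = mixed_layer_depth(z, t)   # 22.0 m for this synthetic profile
```

Applying such a criterion to monthly climatological profiles at each grid point is what produces a CMLD field of the kind analyzed above.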

Relevance:

30.00%

Publisher:

Abstract:

The relationship between date of first description and size, geographic range and depth of occurrence is investigated for 18 orders of marine holozooplankton (comprising over 4000 species). Results of multiple regression analyses suggest that all attributes are linked, which reflects the complex interplay between them. Partial correlation coefficients suggest that geographic range is the most important predictor of description date, and shows an inverse relationship. By contrast, size is generally a poor indicator of description date, which probably mirrors the size-independent way in which specimens are collected, though there is clearly a positive relationship between both size and depth (for metabolic/trophic reasons), and size and geographic range. There is also a positive relationship between geographic range and depth that probably reflects the near constant nature of the deep-water environment and the wide-ranging currents to be found there. Although we did not explicitly incorporate either abundance or location into models predicting the date of first description, neither should be ignored.
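A partial correlation of the kind used above can be computed by correlating regression residuals; a minimal sketch with synthetic data (the variables here are illustrative, not the study's dataset):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the least-squares effect of z
    from each (residual-based partial correlation)."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Synthetic example: z drives both x and y, so the raw correlation is
# strong but the partial correlation controlling for z is near zero.
rng = np.random.default_rng(1)
z = rng.normal(size=500)
x = z + 0.1 * rng.normal(size=500)
y = -z + 0.1 * rng.normal(size=500)
raw = float(np.corrcoef(x, y)[0, 1])
partial = partial_corr(x, y, z)
```

This separation of direct from mediated association is what lets the study rank geographic range above size as a predictor of description date.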

Relevance:

30.00%

Publisher:

Abstract:

This article provides an in-depth analysis of selective land use and resource management policies in the Province of Ontario, Canada. It examines their relative capacity to recognize the rights of First Nations and Aboriginal peoples and their treaty rights, as well as their embodiment of past Crown–First Nations relationships. An analytical framework was developed to evaluate the manifest and latent content of 337 provincial texts, including 32 provincial acts, 269 regulatory documents, 16 policy statements, and 5 provincial plans. This comprehensive document analysis classified and assessed how current provincial policies address First Nation issues and identified common trends and areas of improvement. The authors conclude that there is an immediate need for guidance on how provincial authorities can improve policy to make relationship-building a priority to enhance and sustain relationships between First Nations and other jurisdictions.

Relevance:

30.00%

Publisher:

Abstract:

The measured angular differential cross section (DCS) for the elastic scattering of electrons from Ar+(3s2 3p5 2P) at the collision energy of 16 eV is presented. By solving the Hartree-Fock equations, we calculate the corresponding theoretical DCS including the coupling between the orbital angular momenta and spin of the incident electron and those of the target ion and also relaxation effects. Since the collision energy is above one inelastic threshold for the transition 3s2 3p5 2P–3s 3p6 2S, we consider the effects on the DCS of inelastic absorption processes and elastic resonances. The measurements deviate significantly from the Rutherford cross section over the full angular range observed, especially in the region of a deep minimum centered at approximately 75°. Our theory and an uncoupled, unrelaxed method using a local, spherically symmetric potential by Manson [Phys. Rev. 182, 97 (1969)] both reproduce the overall shape of the measured DCS, although the coupled Hartree-Fock approach describes the depth of the minimum more accurately. The minimum is shallower in the present theory owing to our lower average value for the d-wave non-Coulomb phase shift s2, which is due to the high sensitivity of s2 to the different scattering potentials used in the two models. The present measurements and calculations therefore show the importance of including coupling and relaxation effects when accurately modeling electron-ion collisions. The phase shifts obtained by fitting to the measurements are compared with the values of Manson and the present method.
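The Rutherford baseline against which the measurements deviate is the textbook point-Coulomb cross section; a short sketch in atomic units, where only the 16 eV collision energy comes from the abstract:

```python
import numpy as np

# Textbook Rutherford (point-Coulomb) DCS in atomic units, for comparison
# with the measured curve; only the 16 eV energy comes from the abstract.
E_HARTREE = 16.0 / 27.211          # collision energy in hartree

def rutherford_dcs(theta_deg):
    """d(sigma)/d(Omega) = (Z1*Z2 / (4*E))**2 / sin(theta/2)**4 for an
    electron (Z1 = -1) on a singly charged ion (Z2 = +1), atomic units."""
    th = np.radians(theta_deg)
    return (1.0 / (4.0 * E_HARTREE)) ** 2 / np.sin(th / 2.0) ** 4

# Rutherford falls smoothly and monotonically with angle; it cannot
# produce the deep minimum near 75 deg seen in the measurements.
dcs_75 = rutherford_dcs(75.0)
```

The structure in the measured DCS relative to this smooth baseline is what the non-Coulomb phase shifts of the coupled Hartree-Fock calculation account for.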

Relevance:

30.00%

Publisher:

Abstract:

A new species of fossil polyplacophoran from the Danian (Lower Palaeocene) of Denmark is described from over 450 individual disarticulated plates. The polyplacophorans originate from the 'nose-chalk' in the classical Danish locality of Fakse Quarry, an unconsolidated coral limestone in which aragonitic mollusc shells are preserved through transformation into calcite. In plate architecture and sculpture, the new Danish material is similar to Recent Leptochiton spp., but differs in its underdeveloped apophyses and high dorsal elevation (height/width ca. 0.54). Cladistic analysis of 55 original shell characters coded for more than 100 Recent and fossil species in the order Lepidopleurida shows very high resolution of interspecific relationships, but does not consistently recover traditional genera or subgenera. Inter-relationships within the suborder Lepidopleurina are of particular interest as it is often considered the most 'basal' neoloricate lineage. In a local context, the presence of chitons in the faunal assemblage of Fakse contributes evidence of shallow depositional depth for at least some elements of this Palaeocene seabed, a well-studied formation of azooxanthellate coral limestones. This new record for Denmark represents a well-dated and ecologically well-understood fossil chiton with potential value for understanding the radiation of the Neoloricata.

Relevance:

30.00%

Publisher:

Abstract:

The A-level Mathematics qualification is based on a compulsory set of pure maths modules and a selection of applied maths modules. The flexibility in choice of applied modules has led to concerns that many students would proceed to study engineering at university with little background in mechanics. A survey of aerospace and mechanical engineering students in our university revealed that a combination of mechanics and statistics (the basic module in both) was by far the most popular choice of optional modules in A-level Mathematics, meaning that only about one-quarter of the class had studied mechanics beyond the basic module within school mathematics. Investigation of student performance in two core, first-year engineering courses, which build on a mechanics foundation, indicated that any benefits for students who studied the extra mechanics at school were small. These results give concern about the depth of understanding in mechanics gained during A-level Mathematics.

Relevance:

30.00%

Publisher:

Abstract:

We present optical and near-infrared (NIR) photometry and spectroscopy of the Type IIb supernova (SN) 2011dh for the first 100 days. We complement our extensive dataset with Swift ultra-violet (UV) and Spitzer mid-infrared (MIR) data to build a UV to MIR bolometric lightcurve using both photometric and spectroscopic data. Hydrodynamical modelling of the SN based on this bolometric lightcurve has been presented in Bersten et al. (2012, ApJ, 757, 31). We find that the absorption minimum for the hydrogen lines is never seen below ~11 000 km s-1 but approaches this value as the lines get weaker. This suggests that the interface between the helium core and the hydrogen-rich envelope is located near this velocity, in agreement with the Bersten et al. (2012) He4R270 ejecta model. Spectral modelling of the hydrogen lines using this ejecta model supports this conclusion, and we find a hydrogen mass of 0.01-0.04 M⊙ to be consistent with the observed spectral evolution. We estimate that the photosphere reaches the helium core at 5-7 days, whereas the helium lines appear between ~10 and ~15 days, close to the photosphere, and then move outward in velocity until ~40 days. This suggests that increasing non-thermal excitation due to decreasing optical depth for the γ-rays is driving the early evolution of these lines. The Spitzer 4.5 μm band shows a significant flux excess, which we attribute to CO fundamental band emission or a thermal dust echo, although further work using late-time data is needed. The distance, and in particular the extinction, on which we use spectral modelling to put further constraints, is discussed in some detail, as well as the sensitivity of the hydrodynamical modelling to errors in these quantities. We also provide and discuss pre- and post-explosion observations of the SN site, which show a reduction by ~75 percent in flux at the position of the yellow supergiant coincident with SN 2011dh.
The B, V and r band decline rates of 0.0073, 0.0090 and 0.0053 mag day-1, respectively, are consistent with the remaining flux being emitted by the SN. Hence we find that the star was indeed the progenitor of SN 2011dh, as previously suggested by Maund et al. (2011, ApJ, 739, L37), which is also consistent with the results from the hydrodynamical modelling. Figures 2, 3, Tables 3-10, and Appendices are available in electronic form at http://www.aanda.org. The photometric tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/562/A17

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, (outlier-)robust estimators and tests for the unknown parameter of a continuous density function are developed using likelihood depths, introduced by Mizera and Müller (2004). The methods are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in a data set is computed as the minimum of two fractions: the fraction of the data for which the derivative of the log-likelihood function with respect to the parameter is non-negative, and the fraction for which this derivative is non-positive. The parameter for which both counts are equal thus has the greatest depth; it is initially chosen as the estimator, since the likelihood depth is meant to measure how well a parameter fits the data set. Asymptotically, the parameter of greatest depth is the one for which the probability that the derivative of the log-likelihood function with respect to the parameter is non-negative for an observation equals one half. If this does not hold for the underlying parameter, the estimator based on the likelihood depth is biased. This thesis shows how this bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplex likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which the likelihood depth yields biased estimators, the simplex likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is therefore known, and tests for various hypotheses can be formulated. For some hypotheses, however, the shift in depth leads to poor power of the corresponding test. Corrected tests are therefore introduced, together with conditions under which they are consistent. The thesis consists of two parts.
The first part presents the general theory of the estimators and tests and proves their consistency. The second part applies the theory to three different distributions: the Weibull distribution and the Gaussian and Gumbel copulas. This shows how the methods of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, robust estimators and tests based on likelihood depths can be found for all three distributions. On uncontaminated data, existing standard methods are partly superior, but the advantage of the new methods shows in contaminated data and data with outliers.
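A minimal numerical sketch of the one-dimensional likelihood depth described above, using an exponential density as a stand-in example (the thesis treats the Weibull distribution and two copulas; the grid search, sample, and correction factor here are illustrative):

```python
import numpy as np

def likelihood_depth(lam, x):
    """One-dimensional likelihood depth for an exponential density
    f(x; lam) = lam * exp(-lam * x): the minimum of the data fractions for
    which d/dlam log f = 1/lam - x is non-negative resp. non-positive."""
    score = 1.0 / lam - x
    return min(np.mean(score >= 0), np.mean(score <= 0))

def max_depth_estimator(x, grid):
    depths = [likelihood_depth(lam, x) for lam in grid]
    return float(grid[int(np.argmax(depths))])

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=5000)          # true lambda = 1
lam_hat = max_depth_estimator(x, np.linspace(0.2, 5.0, 2000))

# The raw max-depth estimator solves median(x) = 1/lam, i.e. it targets
# lambda / ln 2 rather than lambda: exactly the bias discussed above.
# Multiplying by ln 2 is the corresponding correction for this example.
lam_corrected = np.log(2) * lam_hat
```

Here the probability that the score is non-negative equals one half at 1/lam = median, not at the true parameter, which is why the uncorrected estimator is biased and a distribution-specific correction is needed.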

Relevance:

30.00%

Publisher:

Abstract:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent –essential zeros– or because it is below detection limit –rounded zeros. Because the second kind of zeros is usually understood as “a trace too small to measure”, it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g. by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts –and thus the metric properties– should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003) where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is “natural” in the sense that it recovers the “true” composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. 
As a generalization of the multiplicative replacement, the same paper also introduces a substitution method for missing values in compositional data sets.
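The multiplicative replacement described above can be sketched as follows; δ and the example composition are illustrative, and the full treatment is in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003):

```python
import numpy as np

def multiplicative_replacement(x, delta, total=1.0):
    """Multiplicative rounded-zero replacement: zeros become delta and the
    non-zero parts are rescaled by a common factor so the composition still
    sums to `total`; ratios among non-zero parts (and hence the covariance
    structure of zero-free subcompositions) are preserved."""
    x = np.asarray(x, dtype=float)
    zero = x == 0.0
    return np.where(zero, delta, x * (1.0 - delta * zero.sum() / total))

# Illustrative composition with one rounded zero; delta is an assumed
# small replacement value, e.g. a fraction of the detection limit.
comp = np.array([0.6, 0.3, 0.1, 0.0])
r = multiplicative_replacement(comp, delta=0.005)
```

Because the non-zero parts are all scaled by the same factor, their logratios are unchanged, which is the coherence property that the additive replacement of Aitchison (1986) lacks.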