82 results for TRANSVERSE-MOMENTUM DISTRIBUTIONS
Abstract:
The coupling between topography, waves and currents in the surf zone may self-organize to produce the formation of shore-transverse or shore-oblique sand bars on an otherwise alongshore uniform beach. In the absence of shore-parallel bars, this has been shown by previous studies of linear stability analysis, but is now extended to the finite-amplitude regime. To this end, a nonlinear model coupling wave transformation and breaking, a shallow-water equations solver, sediment transport and bed updating is developed. The sediment flux consists of a stirring factor multiplied by the depth-averaged current plus a downslope correction. It is found that the cross-shore profile of the ratio of stirring factor to water depth together with the wave incidence angle primarily determine the shape and the type of bars, either transverse or oblique to the shore. In the latter case, they can open an acute angle against the current (up-current oriented) or with the current (down-current oriented). At the initial stages of development, both the intensity of the instability which is responsible for the formation of the bars and the damping due to downslope transport grow at a similar rate with bar amplitude, the former being somewhat stronger. As bars keep on growing, their finite-amplitude shape either enhances downslope transport or weakens the instability mechanism so that an equilibrium between both opposing tendencies occurs, leading to a final saturated amplitude. The overall shape of the saturated bars in plan view is similar to that of the small-amplitude ones. However, the final spacings may be up to a factor of 2 larger and final celerities can also be about a factor of 2 smaller or larger. In the case of alongshore migrating bars, the asymmetry of the longshore sections, the lee being steeper than the stoss, is well reproduced. Complex dynamics with merging and splitting of individual bars sometimes occur. Finally, in the case of shore-normal incidence the rip currents in the troughs between the bars are jet-like while the onshore return flow is wider and weaker, as is observed in nature.
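A schematic form of the transport law described above (the symbols are assumptions for illustration, not necessarily the paper's notation): the sediment flux is a wave-driven stirring factor times the depth-averaged current plus a downslope correction,

\[ \mathbf{q} = \alpha(x)\,\mathbf{v} \;-\; \Gamma\,\nabla z_b , \]

where \alpha is the stirring factor, \mathbf{v} the depth-averaged current, \Gamma a downslope diffusivity and z_b the bed level. In this framework the cross-shore profile of \alpha/D, with D the water depth, is the quantity that, together with the wave incidence angle, selects transverse versus up- or down-current oriented oblique bars.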
Abstract:
The formation and development of transverse and crescentic sand bars in the coastal marine environment has been investigated by means of a nonlinear numerical model based on the shallow-water equations and on a simplified sediment transport parameterization. By assuming normally approaching waves and a saturated surf zone, rhythmic patterns develop from a planar slope where random perturbations of small amplitude have been superimposed. Two types of bedforms appear: one is a crescentic bar pattern centred around the breakpoint and the other, herein modelled for the first time, is a transverse bar pattern. The feedback mechanism related to the formation and development of the patterns can be explained by coupling the water and sediment conservation equations. Basically, the waves stir up the sediment and keep it in suspension with a certain cross-shore distribution of depth-averaged concentration. Then, a current flowing with (against) the gradient of sediment concentration produces erosion (deposition). It is shown that inside the surf zone, these currents may occur due to the wave refraction and to the redistribution of wave breaking produced by the growing bedforms. Numerical simulations have been performed in order to understand the sensitivity of the pattern formation to the parameterization and to relate the hydro-morphodynamic input conditions to which of the patterns develops. It is suggested that crescentic bar growth would be favoured by high-energy conditions and fine sediment while transverse bars would grow for milder waves and coarser sediment. In intermediate conditions mixed patterns may occur.
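The feedback mechanism described above can be written in one hedged step (suspended-load flux and notation assumed for illustration): with a flux \mathbf{q} = C\,D\,\mathbf{v}, where C is the depth-averaged concentration, D the water depth and \mathbf{v} the depth-averaged current, sediment conservation combined with water mass conservation, \nabla\cdot(D\mathbf{v}) = 0, gives

\[ (1-p)\,\frac{\partial z_b}{\partial t} \;=\; -\,\nabla\cdot(C\,D\,\mathbf{v}) \;=\; -\,D\,\mathbf{v}\cdot\nabla C , \]

with p the bed porosity and z_b the bed level, so the bed erodes where the current flows up the concentration gradient and accretes where it flows down it, which is the erosion/deposition rule stated above.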
Abstract:
This paper introduces a mixture model based on the beta distribution, without pre-established means and variances, to analyze a large set of Beauty-Contest data obtained from diverse groups of experiments (Bosch-Domenech et al. 2002). This model gives a better fit of the experimental data, and more precision to the hypothesis that a large proportion of individuals follow a common pattern of reasoning, described as iterated best reply (degenerate), than mixture models based on the normal distribution. The analysis shows that the means of the distributions across the groups of experiments are pretty stable, while the proportions of choices at different levels of reasoning vary across groups.
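A hedged sketch of this kind of model (the paper's exact parameterization may differ): with the Beauty-Contest choices rescaled to the unit interval, a K-component beta mixture has density

\[ f(x) \;=\; \sum_{k=1}^{K} \pi_k \,\frac{x^{\alpha_k-1}(1-x)^{\beta_k-1}}{B(\alpha_k,\beta_k)}, \qquad \sum_{k=1}^{K}\pi_k = 1, \quad 0 < x < 1, \]

where the component means \mu_k = \alpha_k/(\alpha_k+\beta_k) are estimated rather than fixed in advance, and the mixing weights \pi_k give the proportions of choices attributed to each level of reasoning.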
Abstract:
This correspondence studies the formulation of members of the Cohen-Posch class of positive time-frequency energy distributions. Minimization of cross-entropy measures with respect to different priors and the case of no prior or maximum entropy were considered. It is concluded that, in general, the information provided by the classical marginal constraints is very limited, and thus, the final distribution heavily depends on the prior distribution. To overcome this limitation, joint time and frequency marginals are derived based on a "direction invariance" criterion on the time-frequency plane that are directly related to the fractional Fourier transform.
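In schematic form, the construction discussed above selects a positive distribution P(t,\omega) closest in cross-entropy to a prior Q subject to the classical marginal constraints (a standard formulation, stated here for orientation):

\[ \min_{P \ge 0} \iint P(t,\omega)\,\log\frac{P(t,\omega)}{Q(t,\omega)}\,dt\,d\omega \quad \text{subject to} \quad \int P(t,\omega)\,d\omega = |s(t)|^2, \qquad \int P(t,\omega)\,dt = |S(\omega)|^2 . \]

Because these two marginals constrain P only weakly, the solution inherits most of its structure from Q, which is why the joint marginals derived from the fractional Fourier transform are introduced.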
Abstract:
In the classical theorems of extreme value theory the limits of suitably rescaled maxima of sequences of independent, identically distributed random variables are studied. The vast majority of the literature on the subject deals with affine normalization. We argue that more general normalizations are natural from a mathematical and physical point of view and work them out. The problem is approached using the language of renormalization-group transformations in the space of probability densities. The limit distributions are fixed points of the transformation and the study of its differential around them allows a local analysis of the domains of attraction and the computation of finite-size corrections.
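A hedged sketch of the renormalization-group picture (the paper's precise transformation may differ): for a cumulative distribution \mu and a general, not necessarily affine, normalization g_n, block maxima define the transformation

\[ (\mathcal{R}_n \mu)(x) \;=\; \mu\big(g_n(x)\big)^{\,n} , \]

whose fixed points \mu^* = \mathcal{R}_n\mu^* are the limit laws; linearizing \mathcal{R}_n around a fixed point gives the local structure of its domain of attraction and the leading finite-size corrections.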
Abstract:
The GS-distribution is a family of distributions that provides an accurate representation of any unimodal univariate continuous distribution. In this contribution we explore the utility of this family as a general model in survival analysis. We show that the survival function based on the GS-distribution is able to provide a model for univariate survival data and that appropriate estimates can be obtained. We develop some hypothesis tests that can be used for checking the underlying survival model and for comparing the survival of different groups.
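For orientation, the survival model referred to above is built in the usual way from the GS distribution function F and density f (generic relations, not specific to the GS family):

\[ S(t) = 1 - F(t), \qquad h(t) = \frac{f(t)}{S(t)} , \]

so that fitting the GS parameters to the data yields the survival and hazard functions used in the model-checking and group-comparison tests.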
Abstract:
We clarify some issues related to the evaluation of the mean value of the energy-momentum tensor for quantum scalar fields coupled to the dilaton field in two-dimensional gravity. Because of this coupling, the energy-momentum tensor for matter is not conserved and therefore it is not determined by the trace anomaly. We discuss different approximations for the calculation of the energy-momentum tensor and show how to obtain the correct amount of Hawking radiation. We also compute cosmological particle creation and quantum corrections to the Newtonian potential.
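The non-conservation mentioned above follows from diffeomorphism invariance when the matter action S_m depends on the dilaton \phi as an external field; schematically (conventions assumed),

\[ \nabla^{\mu} T_{\mu\nu} \;=\; \frac{1}{\sqrt{-g}}\,\frac{\delta S_m}{\delta \phi}\,\partial_{\nu}\phi \;\neq\; 0 , \]

so, unlike the minimally coupled case, the trace anomaly alone no longer determines \langle T_{\mu\nu}\rangle and additional input is needed to fix the Hawking flux.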
Abstract:
The production of φ mesons in proton collisions with C, Cu, Ag, and Au targets has been studied via the φ → K⁺K⁻ decay at an incident beam energy of 2.83 GeV using the ANKE detector system at COSY. For the first time, the momentum dependence of the nuclear transparency ratio, the in-medium φ width, and the differential cross section for φ-meson production at forward angles have been determined for these targets over the momentum range of 0.6-1.6 GeV/c. There are indications of a significant momentum dependence in the value of the extracted φ width, which corresponds to an effective φN absorption cross section in the range of 14-21 mb.
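For context, the nuclear transparency ratio quoted above is conventionally defined as the production cross section per target nucleon relative to a free-nucleon or light-nucleus reference, for example

\[ R \;=\; \frac{\sigma_{pA\to\phi X}}{A\,\sigma_{pN\to\phi X}} \qquad\text{or, normalized to carbon,}\qquad R_{C} \;=\; \frac{12\,\sigma_{pA\to\phi X}}{A\,\sigma_{pC\to\phi X}} , \]

and its dependence on the mass number A is converted into an in-medium φ width, or equivalently an effective φN absorption cross section, through a model of φ propagation and absorption in the nucleus.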
Abstract:
We present a dual-trap optical tweezers setup which directly measures forces using linear momentum conservation. The setup uses a counter-propagating geometry, which allows momentum measurement on each beam separately. The experimental advantages of this setup include low drift due to all-optical manipulation, and a robust calibration (independent of the features of the trapped object or buffer medium) due to the force measurement method. Although this design does not attain the high resolution of some co-propagating setups, we show that it can be used to perform different single-molecule measurements: fluctuation-based molecular stiffness characterization at different forces and hopping experiments on molecular hairpins. Remarkably, in our setup it is possible to manipulate very short tethers (such as molecular hairpins with short handles) down to the limit where beads are almost in contact. The setup is used to illustrate a novel method for measuring the stiffness of optical traps and tethers on the basis of equilibrium force fluctuations, i.e., without the need to measure the force versus molecular extension curve. This method is of general interest for dual-trap optical tweezers setups and can be extended to setups which do not directly measure forces.
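In its simplest form, the fluctuation-based stiffness estimate mentioned above rests on equipartition for a harmonic degree of freedom (a single-trap sketch; the dual-trap analysis combines trap and tether stiffnesses):

\[ \langle \delta F^2 \rangle \;=\; k_{\mathrm{eff}}\, k_B T \qquad\Longrightarrow\qquad k_{\mathrm{eff}} \;=\; \frac{\langle \delta F^2 \rangle}{k_B T} , \]

so recording equilibrium force fluctuations at fixed trap positions yields a stiffness without tracing the force versus extension curve.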
Abstract:
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions. Following one method or another affects the results obtained after the randomization test has been applied. Therefore, the goal of the study was to examine these effects in more detail. The discrepancies in these approaches are obvious when data with zero treatment effect are considered and such approaches have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
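A minimal sketch of the kind of randomization test described above, assuming a mean-difference test statistic, a minimum phase length, and Monte Carlo sampling of the admissible change points (function and variable names are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def abab_statistic(data, change_points):
    """Mean difference between B-phase and A-phase observations
    for an ABAB design defined by three change points."""
    c1, c2, c3 = change_points
    a_phase = np.concatenate([data[:c1], data[c2:c3]])
    b_phase = np.concatenate([data[c1:c2], data[c3:]])
    return b_phase.mean() - a_phase.mean()

def randomization_test(data, observed_cps, n_min=3, n_perm=2000):
    """Compare the observed statistic with its distribution over
    randomly re-drawn admissible change points."""
    n = len(data)
    observed = abs(abab_statistic(data, observed_cps))
    count = 0
    for _ in range(n_perm):
        # draw change points keeping at least n_min points per phase
        while True:
            cps = np.sort(rng.choice(np.arange(n_min, n - n_min + 1),
                                     size=3, replace=False))
            c1, c2, c3 = cps
            if c2 - c1 >= n_min and c3 - c2 >= n_min:
                break
        if abs(abab_statistic(data, cps)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # Monte Carlo p-value

# hypothetical series of 32 behaviour measurements with an effect in B phases
data = rng.normal(0.0, 1.0, 32)
data[8:16] += 1.0   # first B phase
data[24:] += 1.0    # second B phase
print(randomization_test(data, observed_cps=(8, 16, 24)))
```

Repeating this over many simulated series, generated either from data-division-specific or from common distributions, is the kind of comparison examined in the study.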
Abstract:
We develop a method for generating focused vector beams with circular polarization at any transverse plane. Based on the Richards-Wolf vector model, we derive analytical expressions to describe the propagation of this set of beams near the focal area. Since the polarization and the amplitude of the input beam are not uniform, an interferometric system capable of generating spatially variant polarized beams has to be used. In particular, this wavefront is manipulated by means of spatial light modulators displaying computer-generated holograms and subsequently focused using a high numerical aperture objective lens. Experimental results using an NA=0.85 system are provided: irradiance and Stokes images of the focused field at different planes near the focal plane are presented and compared with those obtained by numerical simulation.
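For reference, the Richards-Wolf model invoked above expresses the focal field as an angular superposition of plane waves over the lens aperture; schematically (apodization and constant prefactors omitted),

\[ \mathbf{E}(\rho,\varphi,z) \;\propto\; \int_0^{\theta_{\max}}\!\!\int_0^{2\pi} \mathbf{A}(\theta,\phi)\, e^{\,ik\left[z\cos\theta \,+\, \rho\sin\theta\cos(\phi-\varphi)\right]} \sin\theta \,d\theta\, d\phi , \]

where \mathbf{A}(\theta,\phi) carries the amplitude and spatially variant polarization of the input beam mapped onto the reference sphere, and \sin\theta_{\max} = \mathrm{NA} = 0.85 (in air) for the experiments reported.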
Abstract:
One of the most important problems in optical pattern recognition by correlation is the appearance of sidelobes in the correlation plane, which causes false alarms. We present a method that eliminates sidelobes of up to a given height if certain conditions are satisfied. The method can be applied to any generalized synthetic discriminant function filter and is capable of rejecting lateral peaks that are even higher than the central correlation peak. Satisfactory results were obtained in both computer simulations and optical implementation.
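As background for the filters discussed above, a minimal sketch of the conventional synthetic discriminant function construction (the generalized, sidelobe-rejecting filters of the paper go beyond this; names are illustrative):

```python
import numpy as np

def sdf_filter(training_images, peak_values):
    """Conventional SDF filter: a linear combination of the training
    images whose central correlation with each training image equals
    the prescribed peak value."""
    # stack vectorized training images as columns of X
    X = np.stack([img.ravel() for img in training_images], axis=1)
    c = np.asarray(peak_values, dtype=complex)
    # h = X (X^H X)^{-1} c  satisfies the constraints  X^H h = c
    h = X @ np.linalg.solve(X.conj().T @ X, c)
    return h.reshape(training_images[0].shape)

# usage with two hypothetical 64x64 training images
rng = np.random.default_rng(1)
imgs = [rng.random((64, 64)), rng.random((64, 64))]
h = sdf_filter(imgs, peak_values=[1.0, 1.0])
# the central correlation values reproduce the prescribed peaks
print([np.vdot(img, h).real for img in imgs])
```

The generalized SDF filters referenced above build on the same linear-constraint idea, with additional conditions aimed at controlling lateral peaks in the correlation plane.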
Abstract:
A comparison is established between the contributions of transverse and longitudinal components of both the propagating and the evanescent waves associated with freely propagating radially polarized nonparaxial beams. Attention is focused on those fields that remain radially polarized upon propagation. In terms of the plane-wave angular spectrum of these fields, analytical expressions are given for determining both the spatial shape of the above components and their relative weight integrated over the whole transverse plane. The results are applied to two kinds of doughnut-like beams with radial polarization, and we compare the behavior of such fields at two transverse planes.
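The propagating/evanescent split used above comes from the plane-wave angular spectrum representation; in standard form, for a monochromatic field propagating towards z > 0,

\[ \mathbf{E}(x,y,z) = \iint \tilde{\mathbf{E}}(k_x,k_y)\, e^{\,i(k_x x + k_y y + k_z z)}\, dk_x\, dk_y , \qquad k_z = \begin{cases} \sqrt{k^2 - k_t^2}, & k_t \le k \ \ \text{(propagating)},\\ i\sqrt{k_t^2 - k^2}, & k_t > k \ \ \text{(evanescent)}, \end{cases} \]

with k_t^2 = k_x^2 + k_y^2. The transverse (E_x, E_y) and longitudinal (E_z) contributions compared in the paper follow from the corresponding components of this superposition, the longitudinal one being tied to the others by the transversality condition \mathbf{k}\cdot\tilde{\mathbf{E}} = 0.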
Abstract:
Many European states apply score systems to evaluate the disability severity of non-fatal motor victims under the law of third-party liability. The score is a non-negative integer with an upper bound at 100 that increases with severity. It may be automatically converted into financial terms and thus also reflects the compensation cost for disability. In this paper, discrete regression models are applied to analyze the factors that influence the disability severity score of victims. Standard and zero-altered regression models are compared from two perspectives: an interpretation of the data generating process and the level of statistical fit. The results have implications for traffic safety policy decisions aimed at reducing accident severity. An application using data from Spain is provided.
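A schematic of the zero-altered (hurdle) specification compared above, with the two parts linked to covariates x_i (notation assumed; the upper bound at 100 is ignored in this sketch):

\[ P(Y_i = 0) = \pi_i , \qquad P(Y_i = y) = (1-\pi_i)\,\frac{f(y;\lambda_i)}{1 - f(0;\lambda_i)} , \quad y = 1, 2, \dots , \]

with \operatorname{logit}(\pi_i) = x_i'\gamma and a count density f, for example Poisson or negative binomial with \log\lambda_i = x_i'\beta, so zero scores and positive severity scores are allowed to respond to different processes, in contrast with a standard single-equation count model.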
Abstract:
This study examined the independent effect of skewness and kurtosis on the robustness of the linear mixed model (LMM), with the Kenward-Roger (KR) procedure, when group distributions are different, sample sizes are small, and sphericity cannot be assumed. Methods: A Monte Carlo simulation study considering a split-plot design involving three groups and four repeated measures was performed. Results: The results showed that when group distributions are different, the effect of skewness on KR robustness is greater than that of kurtosis for the corresponding values. Furthermore, the pairings of skewness and kurtosis with group size were found to be relevant variables when applying this procedure. Conclusions: With sample sizes of 45 and 60, KR is a suitable option for analyzing data when the distributions are: (a) mesokurtic and not highly or extremely skewed, and (b) symmetric with different degrees of kurtosis. With total sample sizes of 30, it is adequate when group sizes are equal and the distributions are: (a) mesokurtic and slightly or moderately skewed, and sphericity is assumed; and (b) symmetric with a moderate or high/extreme violation of kurtosis. Alternative analyses should be considered when the distributions are highly or extremely skewed and sample sizes are small.