967 results for Measurement range
Abstract:
Previous research on pavement markings from a safety perspective has tackled various issues such as pavement marking retroreflectivity variability, the relationship between pavement marking retroreflectivity and driver visibility, and pavement marking improvements and safety. A recent research interest in this area has been to find a correlation between retroreflectivity and crashes, but a significant statistical relationship has not yet been found. This study investigates such a possible statistical relationship by analyzing five years of pavement marking retroreflectivity data collected by the Iowa Department of Transportation (DOT) on all state primary roads, together with the corresponding crash and traffic data. The study developed a spatial-temporal database using measured retroreflectivity data, to account for the deterioration of pavement markings over time, along with statewide crash data, to attempt to quantify a relationship between crash occurrence probability and pavement marking retroreflectivity. First, logistic regression analyses were performed on the whole data set to find a statistical relationship between crash occurrence probability and the identified variables: road type, line type, retroreflectivity, and traffic (vehicle miles traveled). The analysis then looked into subsets of the data set such as road type, retroreflectivity measurement source, high-crash routes, retroreflectivity range, and line types. Retroreflectivity was found to have a significant effect on crash occurrence probability for four data subsets: interstate, white edge line, yellow edge line, and yellow center line data. For white edge line and yellow center line data, crash occurrence probability was found to increase with decreasing values of retroreflectivity.
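A minimal sketch of the kind of logistic regression described above, assuming a segment-level table with illustrative column names (crash, road_type, line_type, retro, vmt); the file and field names are placeholders, not the Iowa DOT data set:

```python
# Hypothetical sketch of a crash-probability logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

segments = pd.read_csv("retro_crash_segments.csv")  # placeholder file name

# Binary response: whether at least one crash occurred on the segment-period.
model = smf.logit("crash ~ C(road_type) + C(line_type) + retro + vmt",
                  data=segments).fit()
print(model.summary())

# Odds ratio for retroreflectivity: a value below 1 would mean crash odds
# rise as retroreflectivity falls.
print("odds ratio (retro):", np.exp(model.params["retro"]))
```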
Abstract:
The nucleon spectral function in nuclear matter fulfills an energy-weighted sum rule. Comparing two different realistic potentials, these sum rules are studied for Green's functions that are derived self-consistently within the T-matrix approximation at finite temperature.
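For orientation, one common statement of the sum rules in question (normalization and energy-weighted first moment of the single-particle spectral function) is sketched below; factors of 2π and sign conventions vary by author, so this is a schematic form rather than the paper's exact normalization:

```latex
% Schematic zeroth and first energy moments of the spectral function A(k,\omega).
\int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, A(k,\omega) = 1 ,
\qquad
\int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, \omega\, A(k,\omega)
   = \frac{k^{2}}{2m} + \Sigma^{\mathrm{HF}}(k) ,
```

where Σ^HF(k) denotes the energy-independent (generalized Hartree-Fock) part of the self-energy computed from the chosen realistic potential.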
Abstract:
Accurately calibrated effective field theories are used to compute atomic parity nonconserving (APNC) observables. Although accurately calibrated, these effective field theories predict a large spread in the neutron skin of heavy nuclei. While the neutron skin is strongly correlated with numerous physical observables, in this contribution we focus on its impact on new physics through APNC observables. The addition of an isoscalar-isovector coupling constant to the effective Lagrangian generates a wide range of values for the neutron skin of heavy nuclei without compromising the success of the model in reproducing well-constrained nuclear observables. Earlier studies have suggested that the use of isotopic ratios of APNC observables may eliminate their sensitivity to atomic structure. This leaves nuclear structure uncertainties as the main impediment to identifying physics beyond the standard model. We establish that uncertainties in the neutron skin of heavy nuclei are at present too large to measure isotopic ratios to better than the 0.1% accuracy required to test the standard model. However, we argue that such uncertainties will be significantly reduced by the upcoming measurement of the neutron radius in 208Pb at the Jefferson Laboratory.
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability. The choice of variables remains an issue even when data is available. As a result, the choice of technique, model and variables is ultimately a political judgement. Multi-criteria decision analysis methods can help the decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school's operations are spread over multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions for disadvantaged children.
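A minimal sketch of a two-stage analysis in the spirit of Ray (1991): stage 1 computes DEA efficiency scores (here an input-oriented, constant-returns formulation solved as a linear program), and stage 2 regresses the scores on an environmental variable, here a hypothetical multi-site dummy. The toy data, variable names, and returns-to-scale choice are assumptions, not the thesis's specification:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each school (rows of X: inputs, Y: outputs)."""
    n, m = X.shape          # n schools, m inputs
    s = Y.shape[1]          # s outputs
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1 .. lambda_n], minimize theta.
        c = np.zeros(1 + n); c[0] = 1.0
        # Inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
        # Outputs: -sum_j lambda_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[o] = res.x[0]
    return scores

# Toy data: 2 inputs (staff, budget), 1 output (test score), 6 schools.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(6, 2))
Y = rng.uniform(1, 10, size=(6, 1))
multi_site = rng.integers(0, 2, size=6)          # 1 = spread over several sites

theta = dea_ccr_input(X, Y)

# Stage 2: explain efficiency scores with the environmental variable (OLS).
A = np.column_stack([np.ones(6), multi_site])
coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
print("efficiency scores:", theta.round(3))
print("multi-site effect:", coef[1])             # a negative value would indicate a penalty
```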
Abstract:
The distribution of single-particle strength in nuclear matter is calculated for a realistic nucleon-nucleon interaction. The influence of the short-range repulsion and the tensor component of the nuclear force on the spectral functions is to move approximately 13% of the total strength for all single-particle states beyond 100 MeV into the particle domain. This result is related to the abundantly observed quenching phenomena in nuclei, which include the reduction of spectroscopic factors observed in (e,e'p) reactions and the missing strength in low-energy response functions.
Abstract:
Vibration-based damage identification (VBDI) techniques have been developed in part to address the problems associated with an aging civil infrastructure. To assess the potential of VBDI as it applies to highway bridges in Iowa, three applications of VBDI techniques were considered in this study: numerical simulation, laboratory structures, and field structures. VBDI techniques were found to be highly capable of locating and quantifying damage in numerical simulations. These same techniques were found to be accurate in locating various types of damage in a laboratory setting with actual structures. Although there is the potential for these techniques to quantify damage in a laboratory setting, the ability of the methods to quantify low-level damage in the laboratory is not robust. When applying these techniques to an actual bridge, it was found that some traditional applications of VBDI methods are capable of describing the global behavior of the structure but are most likely not suited for the identification of typical damage scenarios found in civil infrastructure. Measurement noise, boundary conditions, complications due to substructures and multiple material types, and transducer sensitivity make it very difficult for present VBDI techniques to identify, much less quantify, highly localized damage (such as small cracks and minor changes in thickness). However, while investigating VBDI techniques in the field, it was found that if the frequency-domain response of the structure can be generated from operating traffic load, the structural response can be animated and used to develop a holistic view of the bridge’s response to various automobile loadings. By animating the response of a field bridge, concrete cracking (in the abutment and deck) was correlated with structural motion and problem frequencies (i.e., those that cause significant torsion or tension-compression at beam ends) were identified. Furthermore, a frequency-domain study of operational traffic was used to identify both common and extreme frequencies for a given structure and loading. Common traffic frequencies can be compared to problem frequencies so that cost-effective, preventative solutions (either structural or usage-based) can be developed for a wide range of IDOT bridges. Further work should (1) perfect the process of collecting high-quality operational frequency response data; (2) expand and simplify the process of correlating frequency response animations with damage; and (3) develop efficient, economical, preemptive solutions to common damage types.
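As a hedged illustration of the frequency-domain step described above, the sketch below estimates each sensor channel's spectrum from ambient (operating-traffic) acceleration records and picks out dominant frequencies; the file name, sampling rate, and channel layout are assumptions, not the study's actual instrumentation:

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 200.0                                         # sampling rate [Hz], assumed
records = np.load("bridge_ambient_records.npy")    # placeholder; shape (channels, samples)

for ch, x in enumerate(records):
    f, pxx = welch(x, fs=fs, nperseg=4096)         # Welch spectrum of one channel
    peaks, _ = find_peaks(pxx, prominence=pxx.max() * 0.05)
    print(f"channel {ch}: dominant frequencies {f[peaks][:5]} Hz")
```

Comparing the dominant operational frequencies found this way against structurally problematic frequencies is the kind of check the conclusions above refer to.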
Abstract:
We analyze the constraints on the mass and mixing of a superstring-inspired E6 Z' neutral gauge boson that follow from the recent precise Z mass measurements and show that they depend very sensitively on the assumed value of the W mass and also, to a lesser extent, on the top-quark mass.
Abstract:
PURPOSE: Awareness of being monitored can influence participants' habitual physical activity (PA) behavior. This reactivity effect may threaten the validity of PA assessment. Reports on reactivity when measuring the PA of children and adolescents have been inconsistent. The aim of this study was to investigate whether PA outcomes measured by accelerometer devices differ from measurement day to measurement day and whether the day of the week and the day on which measurement started influence these differences. METHODS: Accelerometer data (counts per minute [cpm]) of children and adolescents (n = 2081) pooled from eight studies in Switzerland with at least 10 h of daily valid recording were investigated for effects of measurement day, day of the week, and start day using mixed linear regression. RESULTS: The first measurement day was the most active day. Counts per minute on the first day were significantly higher than on the second to sixth days, but not than on the seventh day. Differences in the age-adjusted means between the first and consecutive days ranged from 23 to 45 cpm (3.6%-7.1%). In preschool children, the differences almost reached 10%. The start day significantly influenced PA outcome measures. CONCLUSIONS: Reactivity to accelerometer measurement of PA is likely to be present to an extent of approximately 5% on the first day and may introduce a relevant bias to accelerometer-based studies. In preschool children, the effects are larger than those in elementary and secondary schoolchildren. As the day of the week and the start day significantly influence PA estimates, researchers should plan for at least one familiarization day in school-age children and randomly assign start days.
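A minimal sketch of a mixed linear regression of the kind described above, with a random intercept per child; the column names and file are illustrative placeholders, not the pooled data set's actual fields:

```python
import pandas as pd
import statsmodels.formula.api as smf

pa = pd.read_csv("pooled_accelerometer_days.csv")   # placeholder file

# cpm regressed on measurement day, weekday, start day and age,
# with a random intercept for each child.
model = smf.mixedlm("cpm ~ C(measurement_day) + C(weekday) + C(start_day) + age",
                    data=pa, groups=pa["child_id"]).fit()
print(model.summary())   # a positive day-1 contrast would indicate reactivity
```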
Abstract:
Measurement of blood pressure by the physician remains an essential step in the evaluation of cardiovascular risk. Ambulatory measurement and self-measurement of blood pressure are ways of counteracting the "white coat" effect, which is the rise in blood pressure many patients experience in the presence of doctors. Thus, it is possible to define the cardiovascular risk of hypertension and identify the patients with the greatest chance of benefiting from antihypertensive therapy. However, it must be realised that subjects who are normotensive during their everyday activities but hypertensive in the doctor's surgery may become hypertensive over time, irrespective of the means used to measure blood pressure. These patients should be followed up regularly even if the decision to treat has been postponed.
Abstract:
The interaction of the low-lying pseudoscalar mesons with the ground-state baryons in the charm sector is studied within a coupled-channel approach using a t-channel vector-exchange driving force. The amplitudes describing the scattering of the pseudoscalar mesons off the ground-state baryons are obtained by solving the Lippmann-Schwinger equation. We analyze in detail the effects of going beyond the t=0 approximation. Our model predicts the dynamical generation of several open-charm baryon resonances in different isospin and strangeness channels, some of which can be clearly identified with recently observed states.
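For reference, the coupled-channel Lippmann-Schwinger equation being solved can be written schematically as below, with channel indices running over the pseudoscalar meson-baryon channels and G_k the intermediate loop propagator; this is a schematic on-shell form, not the paper's exact regularization:

```latex
T_{ij} = V_{ij} + \sum_{k} V_{ik}\, G_{k}\, T_{kj},
```

where V_ij is the t-channel vector-exchange driving force between channels i and j, and poles of T signal the dynamically generated resonances mentioned above.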
Abstract:
We present the dynamic velocity profiles of a Newtonian fluid (glycerol) and a viscoelastic Maxwell fluid (CPyCl-NaSal in water) driven by an oscillating pressure gradient in a vertical cylindrical pipe. The frequency range explored has been chosen to include the first three resonance peaks of the dynamic permeability of the viscoelastic fluid–pipe system. Three different optical measurement techniques have been employed. Laser Doppler anemometry has been used to measure the magnitude of the velocity at the center of the liquid column. Particle image velocimetry and optical deflectometry have been used to determine the velocity profiles in the bulk of the liquid column and at the liquid-air interface, respectively. The velocity measurements in the bulk are in good agreement with the theoretical predictions of a linear theory. The results, however, show dramatic differences in the dynamic behavior of Newtonian and viscoelastic fluids, and demonstrate the importance of resonance phenomena in viscoelastic fluid flows, biofluids in particular, in confined geometries.
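For orientation, the linear-theory profile that the bulk measurements are compared with can be sketched as oscillatory pipe flow with an effective complex viscosity; sign and time conventions vary, so this is a schematic form rather than the paper's exact expressions:

```latex
% Oscillatory pipe flow of radius R driven by a pressure-gradient amplitude P_0 e^{-i\omega t}.
v(r,t) = \mathrm{Re}\!\left\{ \frac{i\,P_{0}}{\rho\,\omega}
  \left[ 1 - \frac{J_{0}(\beta r)}{J_{0}(\beta R)} \right] e^{-i\omega t} \right\},
\qquad
\beta^{2} = \frac{i\,\rho\,\omega}{\eta(\omega)},
```

with η(ω) = η₀ for the Newtonian fluid and η(ω) = η₀/(1 - iωτ) for the Maxwell fluid, the latter giving rise to the resonance peaks mentioned above.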
Abstract:
Gradients of variation, or clines, have always intrigued biologists. Classically, they have been interpreted as the outcomes of antagonistic interactions between selection and gene flow. Alternatively, clines may also establish neutrally with isolation by distance (IBD) or secondary contact between previously isolated populations. The relative importance of natural selection and these two neutral processes in the establishment of clinal variation can be tested by comparing genetic differentiation at neutral genetic markers and at the studied trait. A third neutral process, the surfing of a newly arisen mutation during the colonization of a new habitat, is more difficult to test. Here, we designed a spatially explicit approximate Bayesian computation (ABC) simulation framework to evaluate whether the strong cline in the genetically based reddish coloration observed in the European barn owl (Tyto alba) arose as a by-product of a range expansion or whether selection has to be invoked to explain this colour cline, for which we have previously ruled out the actions of IBD or secondary contact. Using ABC simulations and genetic data on 390 individuals from 20 locations genotyped at 22 microsatellite loci, we first determined how barn owls colonized Europe after the last glaciation. Using these results in new simulations on the evolution of the colour phenotype, and assuming various genetic architectures for the colour trait, we demonstrate that the observed colour cline cannot be due to the surfing of a neutral mutation. Taking advantage of spatially explicit ABC, which proved to be a powerful method to disentangle the respective roles of selection and drift in range expansions, we conclude that the formation of the colour cline observed in the barn owl must be due to natural selection.
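A generic rejection-ABC loop of the kind referred to above, stripped to its essentials; the simulator, priors, tolerance, and summary statistics are placeholders, whereas the study used spatially explicit range-expansion simulations with 22 microsatellite loci and a colour phenotype:

```python
import numpy as np

rng = np.random.default_rng(1)
observed_summary = np.array([0.35, 0.12])        # illustrative summary statistics

def simulate_summary(theta):
    """Placeholder simulator: returns summary statistics for parameters theta."""
    return np.asarray(theta) + rng.normal(0, 0.02, size=2)

n_sims, tol = 100_000, 0.05
accepted = []
for _ in range(n_sims):
    theta = rng.uniform([0.0, 0.0], [1.0, 0.5])  # uniform priors (assumed)
    if np.linalg.norm(simulate_summary(theta) - observed_summary) < tol:
        accepted.append(theta)                   # keep parameters that reproduce the data

posterior = np.array(accepted)
print("acceptance rate:", len(posterior) / n_sims)
print("posterior mean:", posterior.mean(axis=0))
```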
Abstract:
Executive Summary
The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-off, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation conducted to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization with respect to only a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates the ones obtained from virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
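A bare-bones sketch of the two distributional comparisons described above (Kolmogorov-Smirnov test plus a pointwise absolute-Lorenz-curve check of second-order stochastic dominance), using simulated placeholder return series rather than the thesis's actual portfolio returns:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
r_aggregate = rng.normal(0.006, 0.04, 1000)   # aggregated-measure portfolio (placeholder)
r_single = rng.normal(0.004, 0.05, 1000)      # single-measure portfolio (placeholder)

print(ks_2samp(r_aggregate, r_single))        # are the two distributions different?

def absolute_lorenz(returns):
    """Cumulative sums of sorted returns: one point per quantile."""
    return np.cumsum(np.sort(returns)) / len(returns)

# Second-order stochastic dominance check: the aggregate curve lies weakly
# above the single-measure curve at every quantile.
ssd = np.all(absolute_lorenz(r_aggregate) >= absolute_lorenz(r_single))
print("aggregate SSD single-measure portfolio:", bool(ssd))
```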
Abstract:
A numerical study of Brownian motion of noninteracting particles in random potentials is presented. The dynamics are modeled by Langevin equations in the high friction limit. The random potentials are Gaussian distributed and short ranged. The simulations are performed in one and two dimensions. Different dynamical regimes are found and explained. Effective subdiffusive exponents are obtained and commented on.
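A minimal sketch of this type of simulation, assuming illustrative parameter values: overdamped (high-friction) Langevin dynamics of noninteracting particles in a one-dimensional, short-ranged Gaussian random potential built by smoothing white noise:

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian-distributed, short-ranged random potential on a grid:
# white noise smoothed with a Gaussian kernel of correlation length xi.
L, dx, xi = 1000.0, 0.1, 1.0
x_grid = np.arange(0, L, dx)
kernel = np.exp(-0.5 * (np.arange(-5 * xi, 5 * xi, dx) / xi) ** 2)
potential = np.convolve(rng.normal(size=x_grid.size), kernel, mode="same")
potential /= potential.std()                          # unit variance
force = -np.gradient(potential, dx)                   # F(x) = -dU/dx

# Overdamped Langevin step: dx = (F/gamma) dt + sqrt(2 D dt) * N(0,1)
gamma, D, dt, n_steps, n_particles = 1.0, 0.5, 1e-3, 20000, 500
x = rng.uniform(0, L, n_particles)
x0 = x.copy()
for _ in range(n_steps):
    idx = np.floor(x / dx).astype(int) % x_grid.size  # potential repeats periodically
    x += force[idx] / gamma * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_particles)

# Mean-squared displacement; fitting its growth over a range of times would
# give the effective (sub)diffusive exponent discussed above.
print("MSD after t =", n_steps * dt, ":", np.mean((x - x0) ** 2))
```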