998 results for Pulse - Speed - Thesis
Abstract:
Iowa’s speed regulations are based on the same basic speed law that is used in all 50 states: “Any person driving a motor vehicle on a highway shall drive the same at a careful and prudent speed not greater than nor less than is reasonable and proper, having due regard to the traffic, surface, and width of the highway and of any other conditions then existing, and no person shall drive any vehicle upon a highway at a speed greater than will permit the person to bring it to a stop within the assured clear distance ahead, such driver having the right to assume, however, that all persons using said highway will observe the law.” Statutory limits are based on the concept that uniform categories of highways can be traveled safely at certain preset maximum speeds under ideal conditions. Whether the speed limit is posted or unposted, drivers should reduce their speed below these values in poor weather, heavy traffic, and under other potentially hazardous conditions.
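The "assured clear distance ahead" clause is, in effect, a stopping-distance requirement. As a rough worked example (the reaction time and deceleration below are typical design assumptions, not figures from the source), total stopping distance is reaction distance plus braking distance, \(d = v t_r + v^2/(2a)\). At \(v = 55\ \text{mph} \approx 24.6\ \text{m/s}\), with \(t_r = 2.5\ \text{s}\) and \(a = 3.4\ \text{m/s}^2\), \(d \approx 61.5 + 89 \approx 150\ \text{m}\) (about 490 ft); when visibility or surface conditions do not allow stopping within that distance, the law requires a lower speed.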
Abstract:
This study documents the speed reduction impacts of two dynamic, electronic school zone speed limit signs at United Community Schools between Ames and Boone, Iowa. The school facility is situated along US Highway 30, a rural four-lane divided expressway. Due to concerns about high speeds in the area, the Iowa Department of Transportation (DOT) decided to replace the original static school zone speed limit signs, which had flashing beacons during school start and dismissal times (Figure 1), with electronic speed signs that display the reduced school speed limit of 55 mph only during school arrival and dismissal times (Figure 2). The Center for Transportation Research and Education (CTRE) at Iowa State University (ISU) conducted a speed evaluation study one week before and 1 month, 7 months, and 14 or 15 months after the new signs were installed. Overall, the new dynamic school zone speed limit signs were more effective in reducing speeds than the original static signs with flashing beacons in the 1-month after period. During the 7- and 14-month after periods, speeds increased slightly for the eastbound direction of traffic; however, the increases were consistent with overall speed increases that occurred independently of the signs. The dynamic, electronic signs were effective for the westbound direction of traffic for all time periods and for both start and dismissal times. Although the changes in mean and 85th percentile speeds were modest, the number of vehicles exceeding the school speed limit decreased significantly along with those speed decreases, indicating that the signs had a significant impact on high-end speeders.
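The before-and-after comparison reported here comes down to summarizing spot speeds by their mean, 85th percentile, and the share of vehicles above the school limit. A minimal sketch of that kind of summary, using made-up speed samples rather than the study's data:

    import numpy as np

    SCHOOL_LIMIT = 55.0   # mph; the reduced school-zone limit cited above

    def summarize(speeds_mph):
        """Mean, 85th percentile, and percentage of vehicles over the school limit."""
        speeds = np.asarray(speeds_mph, dtype=float)
        return {
            "mean": speeds.mean(),
            "p85": np.percentile(speeds, 85),
            "pct_over_limit": 100.0 * (speeds > SCHOOL_LIMIT).mean(),
        }

    # Made-up spot-speed samples (mph) for one approach, before and after installation.
    rng = np.random.default_rng(0)
    before = rng.normal(63.0, 6.0, size=500)
    after = rng.normal(60.0, 6.0, size=500)

    for label, sample in (("before", before), ("after", after)):
        stats = summarize(sample)
        print(label, {k: round(v, 1) for k, v in stats.items()})

A drop in the percentage over the limit can be large even when the shift in the mean and 85th percentile is small, which is the pattern the study describes for high-end speeders.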
Abstract:
Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of self and others as members of social categories, and through the attribution of the prototypical characteristics of those categories to individuals. It is therefore a theory of the individual that is meant to explain collective phenomena. Situations in which a large number of individuals interact in non-trivial ways typically generate complex collective behaviours that are difficult to anticipate from individual behaviour alone. Computer simulation of such systems is a reliable way of systematically exploring how the dynamics of the collective behaviour depend on the individual specifications.
In this thesis, we present a formal model of the part of self-categorization theory known as the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it makes it possible to describe systematically the predictions of the theory from which it is derived, especially in situations not previously studied. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which increases as the mean distances on the network decrease, the observed types of convergence depend little on the chosen network. We further note that individuals located at the border of the groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate the groups' typical characteristics and are not necessarily central. Finally, if the pixels of an image are treated as individuals in a three-dimensional color space, the model provides a filter that reduces noise, helps detect objects, and simulates perception biases such as chromatic induction.
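A minimal sketch of how the metacontrast principle can be operationalized on a single comparison dimension, assuming the usual definition of the metacontrast ratio (mean inter-category distance over mean intra-category distance); the thesis's formal model is more general, and the positions below are made-up values:

    import numpy as np

    def mcr(i, members, others, pos):
        """Metacontrast ratio of item i: mean distance to the other category
        divided by mean distance to the rest of its own category."""
        inter = np.mean([abs(pos[i] - pos[j]) for j in others])
        intra = [abs(pos[i] - pos[j]) for j in members if j != i]
        return inter / (np.mean(intra) if intra else 1e-9)

    def best_split(pos):
        """Try every contiguous two-category split along the dimension and keep
        the one with the highest mean metacontrast ratio over all items."""
        order = np.argsort(pos)
        best = None
        for k in range(2, len(pos) - 1):          # both categories get >= 2 members
            g1, g2 = list(order[:k]), list(order[k:])
            score = np.mean([mcr(i, g1, g2, pos) for i in g1] +
                            [mcr(i, g2, g1, pos) for i in g2])
            if best is None or score > best[0]:
                best = (score, g1, g2)
        return best

    def prototype(group, other, pos):
        """The prototype is the member with the highest metacontrast ratio."""
        return max(group, key=lambda i: mcr(i, group, other, pos))

    positions = np.array([0.10, 0.20, 0.25, 0.80, 0.90, 1.00])   # made-up attitudes
    _, g1, g2 = best_split(positions)
    print([positions[i] for i in g1], [positions[i] for i in g2])
    print(positions[prototype(g1, g2, positions)], positions[prototype(g2, g1, positions)])

The prototype is simply the member with the highest metacontrast ratio within its group, which is why it need not coincide with the group's centre.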
Abstract:
Body accelerations during human walking were recorded by a portable measuring device. A new method for parameterizing body accelerations and finding the pattern of walking is outlined. Two neural networks were designed to recognize each pattern and estimate the speed and incline of walking. Six subjects performed treadmill walking followed by self-paced walking on an outdoor test circuit involving roads of various inclines. The neural networks were first "trained" on known patterns of treadmill walking. Then the inclines, the speeds, and the distance covered during overground walking (outdoor circuit) were estimated. The results show good agreement between actual and predicted variables. The standard deviation of the estimated incline was less than 2.6%, and the maximum coefficient of variation of the speed estimates was 6%. To the best of our knowledge, these results constitute the first assessment of speed, incline, and distance covered during level and slope walking, and they offer investigators a new tool for assessing levels of outdoor physical activity.
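A rough sketch of the pipeline described (features extracted from acceleration windows, a small neural network regressor for speed and incline, and distance obtained by integrating predicted speed); the features, network, sampling rate, and data below are placeholders, not the parameterization used in the study:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    FS, WIN_S = 50, 5.0                       # assumed sampling rate (Hz) and window length (s)

    def window_features(acc):
        """Cut an acceleration trace into windows and compute simple descriptors
        (mean, std, dominant frequency) for each window."""
        n = int(WIN_S * FS)
        feats = []
        for start in range(0, len(acc) - n + 1, n):
            w = acc[start:start + n]
            spec = np.abs(np.fft.rfft(w - w.mean()))
            f_dom = np.fft.rfftfreq(n, d=1.0 / FS)[np.argmax(spec)]
            feats.append([w.mean(), w.std(), f_dom])
        return np.array(feats)

    rng = np.random.default_rng(0)
    # Placeholder "treadmill" recording with made-up speed (m/s) and incline (%) labels;
    # in the study these would come from the portable device and the treadmill settings.
    train_acc = rng.normal(size=200 * int(WIN_S * FS))
    X_train = window_features(train_acc)
    y_train = np.column_stack([rng.uniform(0.8, 2.0, len(X_train)),      # speed
                               rng.uniform(-10.0, 10.0, len(X_train))])  # incline

    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    net.fit(X_train, y_train)

    # "Outdoor" walk: predict speed/incline per window, integrate speed for distance.
    X_out = window_features(rng.normal(size=20 * int(WIN_S * FS)))
    pred = net.predict(X_out)
    distance_m = pred[:, 0].sum() * WIN_S
    print(pred[:3], round(distance_m, 1))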
Abstract:
We study the collision of a gravitational wave pulse and a soliton wave on a spatially homogeneous background. This collision is described by an exact solution of Einstein's equations in vacuum, which is generated from a nondiagonal seed by means of a soliton transformation. The effect produced by the soliton on the amplitude and polarization of the wave is considered.
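For context, soliton (inverse-scattering) constructions of this kind are usually carried out for vacuum metrics admitting two commuting Killing vectors; the specific seed and solution are those of the paper, but the general setting typically assumed is
\[ R_{\mu\nu} = 0, \qquad ds^2 = f(t,z)\,(dz^2 - dt^2) + g_{ab}(t,z)\,dx^a\,dx^b, \quad a,b = 1,2, \]
and, with the usual normalization \(\det g_{ab} = t^2\), the field equations for \(g\) reduce to the integrable chiral-field form
\[ \partial_t\!\left(t\,\partial_t g\,g^{-1}\right) - \partial_z\!\left(t\,\partial_z g\,g^{-1}\right) = 0, \]
to which the soliton (dressing) transformation is applied; a nondiagonal seed \(g_{ab}\) means the background already carries both polarizations.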
Abstract:
One-dimensional arrays of nonlinear electronic circuits are shown to support propagation of pulses when operating in a locally bistable regime, provided the circuits are under the influence of a global noise. These external random fluctuations are applied to the parameter that controls the transition between bistable and monostable dynamics in the individual circuits. As a result, propagating fronts become destabilized in the presence of noise, and the system self-organizes to allow the transmission of pulses. The phenomenon is also observed in weakly coupled arrays, when propagation failure arises in the absence of noise.
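A generic sketch of the kind of system described: an overdamped chain of bistable units with diffusive nearest-neighbour coupling and a single, globally shared fluctuation of the control parameter (the actual circuit equations and parameter values in the paper may differ):

    import numpy as np

    rng = np.random.default_rng(1)
    N, dt, steps = 200, 0.01, 20000
    D = 0.4                      # nearest-neighbour coupling strength
    a0, sigma = 0.2, 0.8         # mean and noise strength of the global control parameter

    x = -np.ones(N)              # every unit starts in the lower stable state
    x[95:105] = 1.0              # local perturbation that may propagate as a pulse

    for _ in range(steps):
        # One shared random value per step: the "global noise" acting on the
        # parameter that tunes the units between monostable and bistable behaviour.
        xi = rng.normal()
        lap = np.roll(x, 1) - 2.0 * x + np.roll(x, -1)   # periodic boundaries
        drift = x - x**3 + a0 + D * lap                  # local bistable dynamics + coupling
        x += dt * drift + np.sqrt(dt) * sigma * xi       # Euler-Maruyama step

    print(x.min(), x.max())      # inspect whether the perturbation spread or died out

Because the fluctuation is common to all units, it periodically tilts every local potential in the same direction, which is the ingredient the abstract points to for destabilizing fronts and enabling pulse transmission.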
Abstract:
An instrument designed to measure the thermal conductivity of consolidated rocks, dry or saturated, using a transient method is presented. The instrument measures relative values of the thermal conductivity and needs calibration to obtain absolute values. The device can be used as a heat pulse line source and as a continuous heat line source. Two parameters to determine thermal conductivity are proposed: TMAX, in heat pulse line-source mode, and SLOPE, in continuous heat line-source mode. Its performance is better, and the operation simpler, in heat pulse line-source mode, with a measuring time of 170 s and a reproducibility better than 2.5%. The sample preparation is very simple in both modes. The performance has been tested with a set of ten rocks with thermal conductivity values between 1.4 and 5.2 W m⁻¹ K⁻¹, which covers the usual range for consolidated rocks.
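For reference, and only as the standard ideal relations rather than the instrument's calibration formulas: for an infinite line source of strength \(q\) (power per unit length, continuous mode) or \(Q\) (energy per unit length, pulse mode) in a medium of conductivity \(\lambda\) and diffusivity \(\alpha\), the temperature rise at radius \(r\) is
\[ \Delta T(r,t) \approx \frac{q}{4\pi\lambda}\left[\ln\frac{4\alpha t}{r^2} - \gamma\right] \ \text{(continuous, late times)}, \qquad \Delta T(r,t) = \frac{Q}{4\pi\lambda\,t}\,e^{-r^2/4\alpha t} \ \text{(instantaneous pulse)}, \]
with \(\gamma\) Euler's constant. The slope of \(\Delta T\) versus \(\ln t\) in continuous mode is \(q/(4\pi\lambda)\), and in pulse mode the temperature peaks at \(t_{\max} = r^2/(4\alpha)\); this is why the SLOPE and TMAX parameters carry the thermal information, even though the actual device reports relative values and is calibrated against known samples.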
Abstract:
AIM: To study the development of motor speed and associated movements in participants aged 5 to 18 years, with respect to age, sex, and laterality. METHOD: Ten motor tasks of the Zurich Neuromotor Assessment (repetitive and alternating movements of hands and feet, repetitive and sequential finger movements, the pegboard, static and dynamic balance, diadochokinesis) were administered to 593 right-handed participants (286 males, 307 females). RESULTS: A strong improvement with age was observed in motor speed from age 5 to 10, followed by a levelling-off between 12 and 18 years. Simple tasks and the pegboard matured early and complex tasks later. Simple tasks showed no associated movements beyond early childhood; in complex tasks associated movements persisted until early adulthood. The two sexes differed only marginally in speed, but markedly in associated movements. A significant laterality (p<0.001) in speed was found for all tasks except for static balance; the pegboard was most lateralized, and sequential finger movements least. Associated movements were lateralized only for a few complex tasks. We also noted substantial interindividual variability. INTERPRETATION: Motor speed and associated movements improve strongly in childhood, weakly in adolescence, and are both of developmental relevance. Because they correlate weakly, they provide complementary information.
Abstract:
The action of individual type II DNA topoisomerases has been followed in real time by observing the elastic response of single DNA molecules to sequential strand passage events. Micromanipulation methods provide a complementary approach to biochemical studies for investigating the mechanism of DNA topoisomerases.
Abstract:
BACKGROUND: Deep burn assessment made by clinical evaluation has an accuracy varying between 60% and 80% and will determine if a burn injury will need tangential excision and skin grafting or if it will be able to heal spontaneously. Laser Doppler Imaging (LDI) techniques allow an improved burn depth assessment, but their use is limited by the time-consuming image acquisition, which may take up to 6 min per image. METHODS: To evaluate the effectiveness and reliability of a newly developed full-field LDI technology, 15 consecutive patients presenting with intermediate-depth burns were assessed both clinically and with the FluxEXPLORER LDI technology. The two methods of assessment were then compared. RESULTS: Image acquisition was done within 6 s. The FluxEXPLORER LDI technology achieved a significantly improved accuracy of burn depth assessment compared to the clinical judgement performed by board-certified plastic and reconstructive surgeons (P < 0.05; 93% of burn injuries correctly assessed vs. 80% for clinical assessment). CONCLUSION: Technological improvements of LDI technology leading to a decreased image acquisition time and reliable burn depth assessment allow the routine use of such devices in the acute setting of burn care without interfering with the patient's treatment. Rapid and reliable LDI technology may assist clinicians in burn depth assessment and may limit the morbidity of burn patients through a minimization of the area of surgical debridement. Future technological improvements allowing the miniaturization of the device will further ease its clinical application.
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled and the variability in claim severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model expresses the variability of the reporting speed and of the development of claim severity as a function of two of these distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on chosen simulated data and then on real data originating from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and therefore high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few further claims are expected. The extreme case arises when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
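As a schematic illustration of the conjugate structure described (Poisson-Gamma for the ultimates, Dirichlet-Multinomial for the development pattern), the sketch below simulates accident years and measures the correlation between early and remaining payments; the parameter values are arbitrary, and the thesis's exact parameterization and estimators are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_years(n_years, gamma_shape, gamma_rate, dirichlet_conc):
        """Schematic generator: for each accident year, draw an ultimate claim
        count from a Poisson-Gamma mixture and spread it over development
        periods with a Dirichlet-Multinomial pattern."""
        rows = []
        for _ in range(n_years):
            mean_ultimate = rng.gamma(gamma_shape, 1.0 / gamma_rate)
            ultimate = rng.poisson(mean_ultimate)
            pattern = rng.dirichlet(dirichlet_conc)
            rows.append(rng.multinomial(ultimate, pattern))
        return np.array(rows)

    # Arbitrary parameters: 5 development periods, front-loaded expected pattern.
    base_pattern = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
    rows = simulate_years(n_years=5000,
                          gamma_shape=20.0, gamma_rate=0.1,      # E[ultimate] = 200
                          dirichlet_conc=50.0 * base_pattern)    # total concentration 50 > 20

    # Correlation between early cumulative payments and the remaining (future) payments.
    early = rows[:, :2].sum(axis=1)
    late = rows[:, 2:].sum(axis=1)
    print(round(float(np.corrcoef(early, late)[0, 1]), 3))

With a Dirichlet concentration large relative to the Gamma shape, most of the year-to-year variability comes from the ultimates, so high early payments signal high remaining payments (positive correlation, the Chain-Ladder situation); shrinking the concentration below the shape makes the payout pattern the dominant noise source and weakens or reverses that relation.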
Abstract:
We analyze the collective behavior of a lattice model of pulse-coupled oscillators. By means of computer simulations we find the relation between the intrinsic dynamics of each member of the population and their mutual interactions that ensures, in a general context, the existence of a fully synchronized regime. This condition turns out to be the same as that obtained for the globally coupled population. When the condition is not completely satisfied we find different spatial structures. This also gives some hints about self-organized criticality.
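A generic sketch of pulse-coupled oscillators on a lattice with nearest-neighbour pulse coupling (the intrinsic dynamics and coupling rule used in the paper may differ); each firing resets the unit and kicks its neighbours, so firings can cascade:

    import numpy as np

    rng = np.random.default_rng(3)
    L, eps, threshold = 20, 0.05, 1.0
    phase = rng.uniform(0.0, threshold, size=(L, L))        # random initial phases

    def neighbours(i, j):
        """Nearest neighbours on the lattice, with periodic boundaries."""
        return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

    def step(phase):
        """Advance all phases uniformly to the next firing, then propagate the
        pulse: each firing unit resets to zero and adds eps to its neighbours,
        which can trigger an avalanche of further firings."""
        phase += threshold - phase.max()
        to_fire = list(zip(*np.where(phase >= threshold - 1e-9)))
        fired = set()
        while to_fire:
            i, j = to_fire.pop()
            if (i, j) in fired:
                continue
            fired.add((i, j))
            phase[i, j] = 0.0
            for ni, nj in neighbours(i, j):
                if (ni, nj) not in fired:
                    phase[ni, nj] += eps
                    if phase[ni, nj] >= threshold:
                        to_fire.append((ni, nj))
        return len(fired)                                    # avalanche size

    sizes = [step(phase) for _ in range(2000)]
    print(np.mean(sizes), max(sizes))                        # L*L would mean full synchrony

Tracking the avalanche sizes is one way to look both for full synchronization, where a single firing entrains the whole lattice, and for the broad size distributions that hint at self-organized criticality.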