948 results for Static Nonlinearity
Abstract:
Biological reference points are important tools for fisheries management. Reference points are not static, but may change when a population's environment or the population itself changes. Fisheries-induced evolution is one mechanism that can alter population characteristics, leading to "shifting" reference points by modifying the underlying biological processes or by changing the perception of a fishery system. The former causes changes in "true" reference points, whereas the latter is caused by changes in the yardsticks used to quantify a system's status. Unaccounted shifts of either kind imply that reference points gradually lose their intended meaning. This can lead to increased precaution, which is safe, but potentially costly. Shifts can also occur in more perilous directions, such that actual risks are greater than anticipated. Our qualitative analysis suggests that all commonly used reference points are susceptible to shifting through fisheries-induced evolution, including the limit and "precautionary" reference points for spawning-stock biomass, Blim and Bpa, and the target reference point for fishing mortality, F0.1. Our findings call for increased awareness of fisheries-induced changes and highlight the value of always basing reference points on adequately updated information, to capture all changes in the biological processes that drive fish population dynamics.
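The target reference point F0.1 mentioned above has a precise operational definition: the fishing mortality at which the slope of the yield-per-recruit curve falls to 10% of its slope at the origin. A minimal numerical sketch under an illustrative age-structured yield-per-recruit model (all parameter values and function names here are hypothetical, chosen only to make the definition concrete, not taken from the abstract):

```python
import numpy as np

def ypr(F, M=0.2, ages=np.arange(1, 16), a_sel=3, Winf=1.0, K=0.3):
    """Yield per recruit at fishing mortality F (illustrative parameters:
    natural mortality M, knife-edge selectivity from age a_sel, and a
    von Bertalanffy weight-at-age curve)."""
    sel = (ages >= a_sel).astype(float)
    Z = M + F * sel                                          # total mortality at age
    N = np.concatenate(([1.0], np.exp(-np.cumsum(Z[:-1]))))  # survivors per recruit
    w = Winf * (1.0 - np.exp(-K * ages)) ** 3                # weight-at-age
    return float(np.sum(F * sel / Z * N * (1.0 - np.exp(-Z)) * w))  # Baranov catch

def f01(dF=1e-3, Fmax=2.0):
    """F0.1: the F at which the marginal yield-per-recruit gain drops
    to 10% of its value at the origin."""
    slope0 = (ypr(dF) - ypr(0.0)) / dF
    F = dF
    while F < Fmax:
        if (ypr(F + dF) - ypr(F)) / dF <= 0.1 * slope0:
            return F
        F += dF
    return Fmax
```

Because F0.1 is computed from weight-at-age, natural mortality, and selectivity, fisheries-induced evolution of any of these inputs shifts the reference point, which is exactly the susceptibility the abstract describes.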
Abstract:
The treatment of a recently ruptured Achilles tendon can be conservative or surgical. Conservative treatment may be carried out either with static cast immobilisation or with a dynamic brace and early functional rehabilitation. The surgical technique can be either open or mini-invasive. Neglected and old ruptures may need to be treated surgically by tendinoplasty. How best to manage the recently ruptured Achilles tendon remains under discussion, especially since the recent description of conservative-functional treatment procedures and mini-invasive surgical techniques. We present the different treatment options and the clinical reasoning used to identify the best-adapted treatment for the individual patient. The ideal option depends on the functional demand and the medical condition of the patient.
Abstract:
The most evident symptoms of schizophrenia are severe impairments of cognitive functions such as attention, abstract reasoning, and working memory. The latter has been defined as the ability to maintain and manipulate on-line a limited amount of information. Whereas several studies show that working memory processes are impaired in schizophrenia, the specificity of this deficit is still unclear. Results obtained with a new paradigm involving visuospatial, dynamic, and static working memory processing suggest that schizophrenic patients rely on a specific compensatory strategy. An animal model of schizophrenia with a transient glutathione deficit during development reveals similar substitutive processing, masking the impairment of working memory functions in specific test conditions only. Taken together, these results show coherence between working memory deficits in schizophrenic patients and in animal models. More generally, the pathological state may be interpreted as a reduced homeostatic reserve, which in specific situations can be balanced by efficient allostatic strategies. The pathological condition would thus remain latent in many situations thanks to such allostatic regulations. However, maintaining performance through highly specific strategies in turn requires specific conditions, limiting adaptive resources in humans and in animals. In summary, we suggest that the psychological and physical load of maintaining this rigid allostatic state is very high in both patients and animal subjects.
Abstract:
This engineering thesis was carried out for the product development department of the Product AC unit of ABB Oy, Drives, in Helsinki. The project developed an automated test environment for measuring the performance of frequency converters. ABB's frequency-converter performance tests had not previously been automated: the tests were run by hand, and executing them and processing the results took a great deal of time. Automated testing aimed at a substantial reduction in the time spent on test execution and result processing. The goal of the project was not to carry out performance tests as such, but to develop an automated test environment, a performance test bench, in which performance tests can be executed. The work focused on the performance of the frequency converter's speed and torque controllers, and was carried out as a design and programming project. The test-environment hardware is based on existing test stations in ABB's product development laboratories. Besides the frequency converters, the environment mainly uses three-phase squirrel-cage induction motors; the hardware also includes an ACS800-series frequency converter acting as the load drive, a torque transducer, and a tachometer. Programming was done in version 8.0 of the National Instruments LabVIEW environment, and version 3.5 of the same company's TestStand test software serves as the user interface of the test environment. Virtual instruments were programmed for controlling the frequency converters under test and for reading the torque transducer; these virtual instruments are called from TestStand test sequences, which are created in the TestStand sequence editor and executed either there or in the operator interface. The result of the project is an automated performance test environment for frequency converters. It can be used for testing both the current and the next generation of frequency converters, and it covers the most common frequency-converter performance tests, such as the static and dynamic accuracy of speed and torque control, very comprehensively. Tests can be run automatically over the whole speed and load range permitted by the test setup. The sampling rate can be up to 1 kHz when the rotational speed is read through the ACS800-series frequency converter and the torque transducer is read simultaneously. The test sequences built from virtual instruments can be freely modified to refine existing tests or to create entirely new ones. Because the test environment is based on software widely used in industry, it offers good possibilities for further development.
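The static-accuracy tests described above reduce to sweeping setpoints across the permitted operating range and comparing commanded against measured values. A hypothetical sketch of such a test step (in Python rather than LabVIEW/TestStand; `set_speed` and `read_tachometer` are stand-ins for the virtual instruments, and all names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Sample:
    setpoint_rpm: float
    measured_rpm: float

def static_speed_accuracy(set_speed, read_tachometer, speeds_rpm, settle_reads=5):
    """Sweep speed setpoints and record the averaged steady-state reading
    at each point.  `set_speed` and `read_tachometer` are callables standing
    in for the virtual instruments that command the drive under test and
    read the tachometer."""
    results = []
    for sp in speeds_rpm:
        set_speed(sp)
        readings = [read_tachometer() for _ in range(settle_reads)]
        results.append(Sample(sp, sum(readings) / len(readings)))
    return results

def max_error_pct(samples, nominal_rpm):
    """Worst-case static speed error as a percentage of nominal speed."""
    return max(abs(s.measured_rpm - s.setpoint_rpm) for s in samples) / nominal_rpm * 100.0
```

In the actual environment this role is played by TestStand sequences calling LabVIEW virtual instruments; the sketch only mirrors the structure of such a static-accuracy step.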
Abstract:
Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and carries substantial financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through better knowledge of the internal structure of the landslide and determination of its volume and of its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material change the physical parameters, in particular decreasing the seismic velocities and the material density. Seismic methods are therefore well adapted to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement: shear-wave velocity variations with depth can be estimated without resorting to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion with long arrays, determining the dispersion curves, and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations.
In practice, this assumption is seldom correct, in particular for landslides, in which reworked layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that lateral variations under the geophone array affect the shape of the dispersion curves, sometimes to the point of preventing their determination. Our approach to applying surface-wave dispersion analysis on sites with lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve, and interpolating the resulting velocity models. The choice of the location and length of each geophone array is important: it takes into account the location of heterogeneities revealed by seismic-refraction interpretation of the data, as well as the location of amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be an efficient way to determine dispersion curves using short arrays. It consists in building, from an array of geophones, a time-offset gather covering a wide offset range by assembling seismograms acquired with different source-to-receiver offsets. When assembling the different data, a phase correction is applied to reduce the static phase error induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. At the Arnex site, 22 dispersion curves were determined with 10 m long geophone arrays.
We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity-depth model derived from the vertical-seismic-profile interpretation is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a three-layer structure in good agreement with the lithologies observed in the borehole: a clay layer with a shear-wave velocity of about 175 m/s overlies, at 9 m depth, a clayey-sandy till layer characterized by a 300 m/s S-wave velocity down to 14 m; these deposits have an S-wave velocity of 400 m/s or more between depths of 14 and 20 m. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments. In the upper part of the landslide, the sliding surface is located at a depth of about 20 m, coinciding with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we determined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The curves are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to identify 2 to 4 modes of propagation for each dispersion curve. Taking these higher modes into account in the dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles acquired with a source built as part of this work complete and corroborate the velocity model revealed by surface-wave analysis.
In particular, a reflector at a depth of 5 to 10 m, associated with a 180 m/s stacking velocity, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
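The phase correction mentioned above can be estimated from the phase of the cross power spectral density of common-offset traces recorded for two successive shots. A minimal sketch under simplifying assumptions (single segment, no spectral averaging; function and variable names are illustrative, not the thesis code):

```python
import numpy as np

def interstation_phase(trace_a, trace_b, fs):
    """Phase of the cross power spectral density conj(A)*B between two
    common-offset traces recorded for successive shots.  Evaluated at the
    frequencies of interest, this phase estimates the static correction
    applied when merging shot gathers."""
    A = np.fft.rfft(trace_a)
    B = np.fft.rfft(trace_b)
    freqs = np.fft.rfftfreq(len(trace_a), d=1.0 / fs)
    return freqs, np.angle(np.conj(A) * B)
```

For a pure time delay tau between the traces, the phase is -2*pi*f*tau, so a linear fit over the usable frequency band recovers the inter-shot static.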
Abstract:
Childhood obesity and physical inactivity are increasing dramatically worldwide. Children of low socioeconomic status and/or children of migrant background are especially at risk. In general, the overall effectiveness of school-based programs on health-related outcomes has been disappointing. A special gap exists for younger children and in high-risk groups. This paper describes the rationale, design, curriculum, and evaluation of a multicenter preschool randomized intervention study conducted in areas with a high migrant population in two out of 26 Swiss cantons. Twenty preschool classes in the German (canton St. Gallen) and another 20 in the French (canton Vaud) part of Switzerland were separately selected and randomized to an intervention and a control arm by the use of opaque envelopes. The multidisciplinary lifestyle intervention aimed to increase physical activity and sleep duration, to reinforce healthy nutrition and eating behaviour, and to reduce media use. According to the ecological model, it included children, their parents, and the teachers. The regular teachers performed the majority of the intervention and were supported by a local health promoter. The intervention included physical activity lessons; adaptation of the built infrastructure; promotion of regional extracurricular physical activity; playful lessons about nutrition, media use, and sleep; funny homework cards; and information materials for teachers and parents. It lasted one school year. Baseline and post-intervention evaluations were performed in both arms. Primary outcome measures included BMI and aerobic fitness (20 m shuttle run test).
Secondary outcomes included total (skinfolds, bioelectrical impedance) and central (waist circumference) body fat, motor abilities (obstacle course, static and dynamic balance), physical activity and sleep duration (accelerometry and questionnaires), nutritional behaviour and food intake, media use, quality of life and signs of hyperactivity (questionnaires), attention and spatial working memory ability (two validated tests). Researchers were blinded to group allocation. The purpose of this paper is to outline the design of a school-based multicenter cluster randomized, controlled trial aiming to reduce body mass index and to increase aerobic fitness in preschool children in culturally different parts of Switzerland with a high migrant population. Trial Registration: (clinicaltrials.gov) NCT00674544.
Abstract:
BACKGROUND: Obesity is becoming more frequent in children; understanding the extent to which this condition affects not only carbohydrate and lipid metabolism but also protein metabolism is of paramount importance. OBJECTIVE: We evaluated the kinetics of protein metabolism in obese, prepubertal children in the static phase of obesity. DESIGN: In this cross-sectional study, 9 obese children (x +/- SE: 44+/-4 kg, 30.9+/-1.5% body fat) were compared with 8 lean (28+/-2 kg, 16.8+/-1.2% body fat), age-matched (8.5+/-0.2 y) control children. Whole-body nitrogen flux, protein synthesis, and protein breakdown were calculated postprandially over 9 h from 15N abundance in urinary ammonia by using a single oral dose of [15N]glycine; resting energy expenditure (REE) was assessed by indirect calorimetry (canopy) and body composition by multiple skinfold-thickness measurements. RESULTS: Absolute rates of protein synthesis and breakdown were significantly greater in obese children than in control children (x +/- SE: 208+/-24 compared with 137+/-14 g/d, P < 0.05, and 149+/-20 compared with 89+/-13 g/d, P < 0.05, respectively). When these variables were adjusted for fat-free mass by analysis of covariance, however, the differences between groups disappeared. There was a significant relation between protein synthesis and fat-free mass (r = 0.83, P < 0.001) as well as between protein synthesis and REE (r = 0.79, P < 0.005). CONCLUSIONS: Obesity in prepubertal children is associated with an absolute increase in whole-body protein turnover that is consistent with an absolute increase in fat-free mass, both of which contribute to explaining the greater absolute REE in obese children than in control children.
Abstract:
The methodology for generating a homology model of the T1 TCR-PbCS-K(d) class I major histocompatibility complex (MHC) complex is presented. The resulting model provides a qualitative explanation of the effect of over 50 different mutations in the region of the complementarity determining region (CDR) loops of the T cell receptor (TCR), the peptide, and the MHC's alpha(1)/alpha(2) helices. The peptide is modified by an azido benzoic acid photoreactive group, which is part of the epitope recognized by the TCR. The construction of the model makes use of closely related homologs (the A6 TCR-Tax-HLA A2 complex, the 2C TCR, the 14.3.d TCR Vbeta chain, the 1934.4 TCR Valpha chain, and the H-2 K(b)-ovalbumin peptide complex), ab initio sampling of CDR loop conformations, and experimental data to select from the set of possibilities. The model shows a complex arrangement of the CDR3alpha, CDR1beta, CDR2beta, and CDR3beta loops that leads to the highly specific recognition of the photoreactive group. The protocol can be applied systematically to a series of related sequences, permitting analysis at the structural level of the large TCR repertoire specific for a given peptide-MHC complex.
Abstract:
The large spatial inhomogeneity in the transmit B(1) field (B(1)(+)) observable in human MR images at high static magnetic fields (B(0)) severely impairs image quality. To overcome this effect in brain T(1)-weighted images, the MPRAGE sequence was modified to generate two different images at different inversion times, MP2RAGE. By combining the two images in a novel fashion, it was possible to create T(1)-weighted images free of proton density contrast, T(2) contrast, reception bias field, and, to first order, transmit field inhomogeneity. MP2RAGE sequence parameters were optimized using Bloch equations to maximize contrast-to-noise ratio per unit of time between brain tissues and to minimize the effect of B(1)(+) variations through space. Images of high anatomical quality and excellent brain tissue differentiation, suitable for applications such as segmentation and voxel-based morphometry, were obtained at 3 and 7 T. From such T(1)-weighted images, acquired within 12 min, high-resolution 3D T(1) maps were routinely calculated at 7 T with sub-millimeter voxel resolution (0.65-0.85 mm isotropic). The T(1) maps were validated in phantom experiments. In humans, the T(1) values obtained at 7 T were 1.15+/-0.06 s for white matter (WM) and 1.92+/-0.16 s for grey matter (GM), in good agreement with literature values obtained at lower spatial resolution. At 3 T, where whole-brain acquisitions with 1 mm isotropic voxels were acquired in 8 min, the T(1) values obtained (0.81+/-0.03 s for WM and 1.35+/-0.05 s for GM) were once again in very good agreement with values in the literature.
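The MP2RAGE combination of the two inversion-time images is commonly written as the normalized real part of the complex product of the two GRE readouts; the normalization is what cancels proton density, T2* weighting, and reception bias. A minimal sketch of that combination (array names are illustrative, and this is the commonly cited form rather than a transcription of the authors' code):

```python
import numpy as np

def mp2rage_uniform(gre_ti1, gre_ti2):
    """Combine the two complex GRE readouts of an MP2RAGE acquisition into
    the 'uniform' T1-weighted image.  The ratio cancels proton density,
    T2* weighting and reception bias, and is bounded in [-0.5, 0.5]."""
    num = np.real(np.conj(gre_ti1) * gre_ti2)
    den = np.abs(gre_ti1) ** 2 + np.abs(gre_ti2) ** 2
    return num / np.maximum(den, np.finfo(float).eps)  # guard against empty voxels
```

The bounded ratio is then mapped to T(1) through a Bloch-equation lookup table built from the sequence parameters.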
Abstract:
Secondary accident statistics can be useful for studying the impact of traffic incident management strategies. An easy-to-implement methodology is presented for classifying secondary accidents using data fusion of a police accident database with intranet incident reports. A current method for classifying secondary accidents uses a static threshold that represents the spatial and temporal region of influence of the primary accident, such as two miles and one hour. An accident is considered secondary if it occurs upstream from the primary accident and is within the duration and queue of the primary accident. However, using the static threshold may result in both false positives and negatives because accident queues are constantly varying. The methodology presented in this report seeks to improve upon this existing method by making the threshold dynamic. An incident progression curve is used to mark the end of the queue throughout the entire incident. Four steps in the development of incident progression curves are described. Step one is the processing of intranet incident reports. Step two is the filling in of incomplete incident reports. Step three is the nonlinear regression of incident progression curves. Step four is the merging of individual incident progression curves into one master curve. To illustrate this methodology, 5,514 accidents from Missouri freeways were analyzed. The results show that secondary accidents identified by dynamic versus static thresholds can differ by more than 30%.
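The classification logic described above, with the static threshold as the special case of a constant progression curve, can be sketched as follows (names and units are illustrative; the report's actual implementation may differ):

```python
def is_secondary(candidate, primary, queue_length):
    """Classify `candidate` as secondary to `primary` using a dynamic
    threshold: queue_length(elapsed_hours) is the incident progression
    curve giving the queue extent (miles) behind the primary accident.

    Accidents are (time_hours, milepost) tuples; traffic is assumed to
    flow toward increasing mileposts, so 'upstream' means a smaller
    milepost.  Names and units here are illustrative.
    """
    dt = candidate[0] - primary[0]        # hours after the primary accident
    if dt < 0:
        return False                      # occurred before the primary
    q = queue_length(dt)                  # queue extent at that moment
    if q <= 0:
        return False                      # queue already cleared
    upstream_dist = primary[1] - candidate[1]
    return 0 <= upstream_dist <= q

def static_two_miles(dt):
    """The static threshold (two miles, one hour) as a constant curve."""
    return 2.0 if dt <= 1.0 else 0.0
```

Replacing `static_two_miles` with a fitted incident progression curve is what turns the static criterion into the dynamic one evaluated in the report.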
Abstract:
Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism generating the bit sequence. The goal of the predictor is to minimize its relative loss, i.e., to make (almost) as few mistakes as the best "expert" in a fixed, possibly infinite, set of experts. We point out a surprising connection between this prediction problem and empirical process theory. First, in the special case of static (memoryless) experts, we completely characterize the minimax relative loss in terms of the maximum of an associated Rademacher process. Then we show general upper and lower bounds on the minimax relative loss in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax relative loss for the class of autoregressive linear predictors and for the class of Markov experts.
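For a finite class of static experts, a standard randomized forecaster of the kind analyzed above is the exponentially weighted average predictor; a minimal sketch (the learning-rate choice is the usual textbook one, not a construction taken from the paper):

```python
import math
import random

def exp_weights_predict(expert_preds, outcomes, eta=None, rng=None):
    """Randomized prediction of a binary sequence with expert advice.

    expert_preds[t][i] is expert i's forecast (0 or 1) for bit t; returns
    the forecaster's mistake count.  With eta ~ sqrt(8 ln N / T), the
    expected number of extra mistakes over the best of N experts grows
    only like sqrt(T ln N)."""
    rng = rng or random.Random(0)
    T, N = len(outcomes), len(expert_preds[0])
    if eta is None:
        eta = math.sqrt(8.0 * math.log(N) / max(T, 1))
    losses = [0.0] * N
    mistakes = 0
    for t in range(T):
        weights = [math.exp(-eta * l) for l in losses]
        p1 = sum(w for w, x in zip(weights, expert_preds[t]) if x == 1) / sum(weights)
        guess = 1 if rng.random() < p1 else 0   # randomized prediction
        mistakes += int(guess != outcomes[t])
        for i, x in enumerate(expert_preds[t]):
            losses[i] += int(x != outcomes[t])  # cumulative expert losses
    return mistakes
```

The paper's contribution is to characterize how small this relative loss can be made in the minimax sense, in terms of a Rademacher process over the expert class.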
Abstract:
I show that intellectual property rights yield static efficiency gains, irrespective of their dynamic role in fostering innovation. I develop a property-rights model of firm organization with two dimensions of non-contractible investment. In equilibrium, the first best is attained if and only if ownership of tangible and intangible assets is equally protected. If IP rights are weaker, firm structure is distorted and efficiency declines: the entrepreneur must either integrate her suppliers, which prompts a decline in their investment, or else risk their defection, which entails a waste of her human capital. My model predicts greater prevalence of vertical integration where IP rights are weaker, and a switch from integration to outsourcing over the product cycle. Both empirical predictions are consistent with evidence on multinational companies. As a normative implication, I find that IP rights should be strong but narrowly defined, to protect a business without holding up its potential spin-offs.
Abstract:
With the quickening pace of crash reporting, the statistical editing of data on a weekly basis, and the ability to provide working databases to users at CTRE/Iowa Traffic Safety Data Service, the University of Iowa, and the Iowa DOT, databases that would be considered incomplete by past standards of static data files are in “public use”, even as the dynamic nature of the central DOT database allows changes to be made both to the aggregate data and to individual crashes already reported. Moreover, “definitive” analyses of serious crashes will, by their nature, lag well behind the preliminary data files. Even after these analyses, the dynamic nature of the mainframe data file means that crash numbers can continue to change long after the incident year. The Iowa DOT, its Office of Driver Services (the “data owner”), and institutional data users/distributors must therefore establish protocols for data use, distribution, and labeling that address the new, dynamic nature of the data. To set these protocols, data must be collected on the magnitude of the difference between database records and crash narratives and diagrams. This study determines that difference for the Iowa Department of Transportation’s Office of Traffic and Safety crash database and assesses its impacts.
Abstract:
THESIS ABSTRACT Nucleation and growth of metamorphic minerals are the consequence of changing P-T-X conditions. The thesis presented here focuses on the processes governing nucleation and growth of minerals in contact-metamorphic environments, using a combination of geochemical analytics (chemical, isotope, and trace-element composition), statistical treatment of spatial data, and numerical models. It is shown that a combination of textural modeling and stable-isotope analysis allows a distinction between several possible reaction paths for olivine growth in a siliceous dolomite contact aureole. It is suggested that olivine forms directly from dolomite and quartz. The formation of olivine by this metastable reaction implies metamorphic crystallization far from equilibrium. As a major consequence, the spatial distribution of metamorphic mineral assemblages in a contact aureole cannot be interpreted as a proxy for the temporal evolution of a single rock specimen, because each rock undergoes a different reaction path depending on temperature, heating rate, and fluid-infiltration rate. A detailed calcite-dolomite thermometry study was carried out on multiple scales, ranging from the aureole scale to that of individual crystals. Quantitative forward models were developed to evaluate the effect of growth zoning, volume diffusion, and the formation of submicroscopic exsolution lamellae (<1 µm) on the measured Mg distribution in individual calcite crystals, and to compare the modeling results to field data. This study concludes that the Mg distributions in calcite grains of the Ubehebe Peak contact aureole are the consequence of rapid crystal growth in combination with diffusion and exsolution. The crystallization history of a rock is recorded in the chemical composition, size, and distribution of its minerals.
Near the Cima Uzza summit, located in the southern Adamello massif (Italy), contact metamorphic brucite bearing dolomite marbles are exposed as xenoliths surrounded by mafic intrusive rocks. Brucite is formed retrograde pseudomorphing spherical periclase crystals. Crystal size distributions (CSD's) of brucite pseudomorphs are presented for two profiles and combined with geochemistry data and petrological information. Textural analyses are combined with geochemistry data in a qualitative model that describes the formation periclase. As a major outcome, this expands the potential use of CSD's to systems of mineral formation driven by fluid-infiltration. RESUME DE LA THESE La nucléation et la croissance des minéraux métamorphiques sont la conséquence de changements des conditions de pression, température et composition chimique du système (PT-X). Cette thèse s'intéresse aux processus gouvernant la nucléation et la croissance des minéraux au cours d'un épisode de métamorphisme de contact, en utilisant la géochimie analytique (composition chimique, isotopique et en éléments traces), le traitement statistique des données spatiales et la modélisation numérique. Il est montré que la combinaison d'un modèle textural avec des analyses en isotopes stables permet de distinguer plusieurs chemins de réactions possibles conduisant à la croissance de l'olivine dans une auréole de contact riche en Silice et dolomite. Il est suggéré que l'olivine se forme directement à partir de la dolomie et du quartz. Cette réaction métastable de formation de l'olivine implique une cristallisation métamorphique loin de l'équilibre. 
The main consequence is that the spatial distribution of metamorphic mineral assemblages in a contact aureole cannot be regarded as a record of the temporal evolution of a given rock type, since each rock type follows a different reaction path depending on temperature, heating rate, and fluid-infiltration rate. A detailed calcite-dolomite thermometry study was carried out at various scales, from the scale of the contact aureole down to that of the individual crystal. Quantitative numerical models were developed to evaluate the effect of growth zoning, volume diffusion, and the formation of submicroscopic exsolution lamellae (<1 µm) on the magnesium distribution measured in individual calcite crystals. The model results were compared to natural samples. This study shows that the Mg distribution in calcite grains of the Ubehebe Peak contact aureole (USA) results from rapid crystal growth combined with diffusion and exsolution processes. The crystallization history of a rock is recorded in the chemical composition, the size, and the distribution of its minerals. Near the Cima Uzza summit, in the southern Adamello massif (Italy), contact-metamorphic brucite-bearing dolomite marbles form xenoliths within a mafic intrusion. The brucite occurs as retrograde pseudomorphs after periclase. Crystal size distributions (CSDs) of the brucite pseudomorphs are presented for two profiles and are combined with geochemical and petrological data. The textural analyses are combined with the geochemical data in a qualitative model that describes the formation of periclase. This extends the potential use of CSDs to systems in which mineral formation is driven by fluid infiltration.
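The calcite-dolomite geothermometer discussed in this thesis exploits the fact that the equilibrium Mg content of calcite coexisting with dolomite increases with temperature. A minimal sketch of how such a thermometer is inverted is shown below; the constants A and B are illustrative placeholders for a thermometer of the generic exchange form ln(X_Mg) = A - B/T, not the actual calibration used in the thesis.

```python
import math

# Illustrative Mg-in-calcite thermometer of the generic form
#   ln(X_Mg) = A - B / T        (T in kelvin)
# A and B are placeholder values for demonstration only; real
# work would use a published experimental calibration.
A = 1.7
B = 3000.0  # kelvin

def temperature_K(x_mg):
    """Invert ln(X_Mg) = A - B/T for T, given the mole fraction
    of MgCO3 in calcite coexisting with dolomite."""
    if not 0.0 < x_mg < 1.0:
        raise ValueError("X_Mg must lie strictly between 0 and 1")
    return B / (A - math.log(x_mg))

# More Mg dissolved in calcite records a higher equilibration
# temperature, which is the basis of the thermometer.
assert temperature_K(0.10) > temperature_K(0.05)
```

The same monotonic relationship is why processes that redistribute Mg after peak conditions (growth zoning, volume diffusion, exsolution of dolomite lamellae) scatter the apparent temperatures, as studied in the thesis.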
THESIS ABSTRACT (GENERAL PUBLIC) Rock textures are essentially the result of a complex interaction of nucleation, growth, and deformation as a function of changing physical conditions such as pressure and temperature. Igneous and metamorphic textures are especially attractive for studying the different mechanisms of texture formation, since most of the relevant parameters, such as pressure-temperature paths, are quite well known for a variety of geological settings. Because textures are supposed to record the crystallization history of a rock, they have traditionally been used for geothermobarometry and dating. During the last decades, the focus of metamorphic petrology has shifted from a static point of view, i.e. the representation of a texture as a single point in the petrogenetic grid, towards a more dynamic view in which multiple metamorphic processes, including non-equilibrium processes, govern texture formation. This thesis tries to advance our understanding of the processes governing nucleation and growth of minerals in contact metamorphic environments, and their dynamic interplay, by using a combination of geochemical analyses (chemical, isotope, and trace-element composition), statistical treatment of spatial data, and numerical models. The first part of the thesis describes the formation of metamorphic olivine porphyroblasts in the Ubehebe Peak contact aureole (USA). It is shown that the textures present in the rocks today were formed not by the commonly assumed succession of equilibrium reactions along a T-t path, but rather by a metastable reaction responsible for forming the olivine porphyroblasts. Consequently, the spatial distribution of metamorphic minerals within a contact aureole can no longer be regarded as a proxy for the temporal evolution of a single rock sample. Metamorphic peak temperatures for samples of the Ubehebe Peak contact aureole were determined using calcite-dolomite thermometry.
This geothermometer is based on the temperature-dependent exchange of Mg between calcite and dolomite. The purpose of the second part of this thesis was to explain the systematic scatter of the measured Mg content at different scales, and thus to clarify the interpretation of metamorphic temperatures recorded in carbonates. Quantitative numerical forward models are used to evaluate the effect of several processes on the distribution of magnesium in individual calcite crystals, and the modeling results were then compared to field measurements. Information about the crystallization history is recorded not only in the chemical composition of grains, such as isotope composition or mineral zoning. Crystal size distributions (CSDs) also provide essential information about the complex interaction of nucleation and growth of minerals. CSDs of brucite pseudomorphs formed retrogradely after periclase in the southern Adamello massif (Italy) are presented. A combination of the textural 3D information with geochemical data is then used to evaluate reaction kinetics and to constrain the actual reaction mechanism for the formation of periclase. The reaction is shown to be the consequence of the infiltration of a limited amount of a fluid phase at high temperatures. The composition of this fluid phase is in strong disequilibrium with the rest of the rock, resulting in very fast reaction rates. THESIS SUMMARY FOR THE GENERAL PUBLIC: The texture of a rock results from the complex interaction between nucleation, growth, and deformation processes as a function of variations in physical conditions such as pressure and temperature. Igneous and metamorphic textures are of particular interest for studying the different mechanisms at the origin of these textures, since most parameters, such as pressure-temperature paths, are relatively well constrained in most geological settings.
The fact that textures are assumed to record the crystallization history of rocks allows them to be used for dating and geothermobarometry. Over the last decades, research in metamorphic petrology has evolved from a static view, in which a given texture corresponded to a single point in the petrogenetic grid, toward a more dynamic view, in which the multiple metamorphic processes governing the formation of a texture include non-equilibrium processes. This thesis aims to improve current knowledge of the processes governing the nucleation and growth of minerals during episodes of contact metamorphism, and of the dynamic interplay between nucleation and growth. To this end, geochemical analyses (major- and trace-element and isotopic compositions), statistical treatment of spatial data, and numerical modeling were combined. In its first part, this thesis describes the formation of metamorphic olivine porphyroblasts in the Ubehebe Peak contact aureole (USA). It is shown that the commonly accepted succession of equilibrium reactions along a T-t path cannot explain the textures present in the rocks today; rather, a metastable reaction appears to be responsible for the formation of the olivine porphyroblasts. Consequently, the spatial distribution of metamorphic minerals in the contact aureole can no longer be interpreted as a record of the temporal evolution of a single rock sample. Peak temperatures for samples of the Ubehebe Peak contact aureole were determined using the calcite-dolomite geothermometer, which is based on the temperature-dependent exchange of magnesium between calcite and dolomite.
The aim of the second part of this thesis is to explain the systematic scatter of the magnesium content at different scales, and thereby to improve the interpretation of metamorphic temperatures recorded in carbonates. Quantitative numerical models were used to evaluate the role of different processes in the distribution of magnesium within individual calcite crystals. The model results were compared to natural samples. The chemical composition of grains, such as isotopic composition or mineral zoning, is not the only record of the crystallization history. Crystal size distributions (CSDs) provide essential information on the interactions between nucleation and growth of minerals. The CSD of retrograde brucite pseudomorphs after periclase in the southern Adamello massif (Italy) is presented in the third part. Combining three-dimensional textural data with geochemical data made it possible to evaluate reaction kinetics and to constrain the mechanisms leading to the formation of periclase. This reaction is shown to be the consequence of the infiltration of a limited amount of a fluid phase at high temperature. The composition of this fluid phase is in strong disequilibrium with the rest of the rock, which allows very fast reaction kinetics.
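A crystal size distribution of the kind used for the brucite pseudomorphs is conventionally reported as a population density n(L) = (number of crystals in a size bin per unit sample volume) / (bin width), examined as ln n versus size L. A minimal sketch of that bookkeeping is given below; the crystal sizes and sample volume are invented for illustration and are not data from the thesis.

```python
import math

def csd(sizes_mm, bin_edges_mm, sample_volume_mm3):
    """Compute a crystal size distribution: for each size bin,
    population density n = N / (sample volume * bin width).
    Returns (bin midpoint, ln n) pairs, skipping empty bins."""
    out = []
    for lo, hi in zip(bin_edges_mm, bin_edges_mm[1:]):
        n_crystals = sum(1 for s in sizes_mm if lo <= s < hi)
        if n_crystals == 0:
            continue  # ln(0) is undefined
        density = n_crystals / (sample_volume_mm3 * (hi - lo))
        out.append(((lo + hi) / 2, math.log(density)))
    return out

# Invented example: crystal long axes (mm) measured in 1 mm^3 of rock.
sizes = [0.12, 0.15, 0.21, 0.24, 0.26, 0.33, 0.41]
edges = [0.1, 0.2, 0.3, 0.4, 0.5]
for midpoint, ln_n in csd(sizes, edges, sample_volume_mm3=1.0):
    print(f"L = {midpoint:.2f} mm, ln(n) = {ln_n:.2f}")
```

The shape of the resulting ln n versus L curve is what carries the kinetic information: its slope and intercept are commonly interpreted in terms of growth rate, growth time, and nucleation density, which is how CSDs constrain the interplay of nucleation and growth described above.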
Resumo:
This paper studies the relationship between the amount of public information that stock market prices incorporate and the equilibrium behavior of market participants. The analysis is framed in a static, NREE setup where traders exchange vectors of assets, accessing multidimensional information under two alternative market structures. In the first (the unrestricted system), both informed and uninformed speculators can condition their demands for each traded asset on all equilibrium prices; in the second (the restricted system), they are restricted to conditioning their demand on the price of the asset they want to trade. I show that informed traders' incentives to exploit multidimensional private information depend on the number of prices they can condition upon when submitting their demand schedules, and on the specific price-formation process one considers. Building on this insight, I then give conditions under which the restricted system is more efficient than the unrestricted system.