Abstract:
Atmospheric aerosol particles directly impact air quality and participate in controlling the climate system. Organic aerosol (OA) in general accounts for a large fraction (10–90%) of the global submicron (PM1) particulate mass. Chemometric methods for source identification are used in many disciplines, but methods relying on the analysis of NMR datasets are rarely used in atmospheric sciences. This thesis provides an original application of NMR-based chemometric methods to atmospheric OA source apportionment. The method was tested on chemical composition databases obtained from samples collected in different environments in Europe, hence exploring the impact of a great diversity of natural and anthropogenic sources. We focused on sources of water-soluble OA (WSOA), for which NMR analysis provides substantial advantages over alternative methods. Different factor analysis techniques were applied independently to NMR datasets from nine field campaigns of the EUCAARI project and allowed the identification of recurrent source contributions to WSOA in the European background troposphere: 1) marine SOA; 2) aliphatic amines from ground sources (agricultural activities, etc.); 3) biomass burning POA; 4) biogenic SOA from terpene oxidation; 5) “aged” SOA, including humic-like substances (HULIS); 6) other factors possibly including contributions from primary biological aerosol particles and products of cooking activities. Biomass burning POA accounted for more than 50% of WSOC in winter months. Aged SOA associated with HULIS was predominant (>75%) during spring and summer, suggesting that secondary sources and transboundary transport become more important in those seasons.
Complex aerosol measurements carried out with several foreign research groups provided the opportunity to compare source apportionment results obtained by NMR analysis with those provided by the more widespread Aerodyne aerosol mass spectrometer (AMS) techniques, whose OA categorization schemes are becoming a standard for atmospheric chemists. Results emerging from this thesis partly confirm the AMS classification and partly challenge it.
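The factor-analysis step behind such source apportionment can be illustrated with a plain non-negative matrix factorization, a relative of the positive matrix factorization commonly applied to aerosol datasets. This is a generic sketch on synthetic data, not the thesis's actual algorithm or the EUCAARI datasets; the factor count, dimensions, and profile names are arbitrary assumptions.

```python
import numpy as np

def nmf(X, k, n_iter=1000, seed=0):
    """Factor X (samples x spectral bins) into non-negative sample
    contributions G (samples x k) and factor profiles F (k x bins),
    X ~= G @ F, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[0], k)) + 1e-3
    F = rng.random((k, X.shape[1])) + 1e-3
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Synthetic stand-in for an NMR dataset: 3 hypothetical source profiles
# (think HULIS, amines, marine SOA) mixed across 40 samples.
rng = np.random.default_rng(1)
X = rng.random((40, 3)) @ rng.random((3, 50))
G, F = nmf(X, k=3)
residual = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(f"relative residual: {residual:.4f}")  # near zero: 3 factors suffice
```

In practice the retained factor number is chosen by inspecting residuals and the interpretability of the recovered profiles, much as described in the abstract.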
Abstract:
This dissertation investigates the nuclear reactions 25Mg(alpha,n)28Si, 26Mg(alpha,n)29Si and 18O(alpha,n)21Ne in the astrophysically interesting energy range from E_alpha = 1000 keV to E_alpha = 2450 keV. The experiments were carried out at the Nuclear Structure Laboratory of the University of Notre Dame (USA) with the on-site KN Van de Graaff accelerator. Solid targets with evaporated magnesium or anodized oxygen were bombarded with alpha particles, and the released neutrons were studied. For their detection, a neutron detector based on 3He counter tubes was designed with the aid of computer simulations. In addition, several data-analysis methods were applied to deal with the frequent occurrence of background reactions. Finally, network calculations are used to investigate the influence of the reactions 25Mg(alpha,n)28Si, 26Mg(alpha,n)29Si and 18O(alpha,n)21Ne on stellar nucleosynthesis.
Abstract:
Natural hydraulic fracturing is an important and widespread process in all parts of the Earth's crust. By creating hydraulic connectivity, it influences effective permeability and fluid transport across several orders of magnitude. The fracturing process is both highly dynamic and highly complex. The dynamics stem from the strong interaction of tectonic and hydraulic processes, while the complexity arises from the potential dependence of the poroelastic properties on fluid pressure and fracturing. The formation of hydraulic fractures comprises three phases: 1) nucleation, 2) time-dependent quasi-static growth for as long as the fluid pressure exceeds the tensile strength of the rock, and 3) in heterogeneous rocks, the influence of layers with different mechanical or sedimentary properties on fracture propagation. Mechanical heterogeneity created by pre-existing fractures and rock deformation also strongly affects the growth path. The direction of fracture propagation is determined either by the linkage of low-tensile-strength discontinuities in the region ahead of the fracture front, or propagation may stop when the fracture meets high-strength discontinuities. These interactions produce a fracture network of complex geometry that reflects the local deformation history and the dynamics of the underlying physical processes.
Natural hydraulic fracturing has important implications for academic and commercial questions in various fields of the geosciences. Since the 1950s, hydraulic fracturing has been used to increase the permeability of gas and oil reservoirs.
Field observations, isotope studies, laboratory experiments and numerical analyses confirm the decisive role of the fluid pressure gradient, in combination with poroelastic effects, for the local stress state and for the conditions under which hydraulic fractures form and propagate. To keep the problem computationally tractable, most numerical hydromechanical models assume predefined fracture geometries with constant fluid pressure for the coupling between fluid and propagating fractures. Since natural rocks are rarely structured so simply, such models are generally not very effective at analyzing this complex process. In particular, they underestimate the feedback of poroelastic effects and coupled fluid-rock processes, i.e. the evolution of pore pressure as a function of rock failure and vice versa.
In this work, a two-dimensional coupled poro-elasto-plastic computer model is developed for the qualitative, and in part quantitative, analysis of the role of localized or homogeneously distributed fluid pressures in the dynamic propagation of hydraulic fractures and the simultaneous evolution of effective permeability. The program is computationally efficient, describing the fluid dynamics with a Darcy-based pressure diffusion equation without redundant components. It also accounts for the Biot compressibility of porous rocks, implemented in order to identify the controlling parameters in the mechanics of hydraulic fracturing in different geological scenarios with homogeneous and heterogeneous sedimentary sequences. The results show that in closed systems the fluid pressure gradient locally perturbs the homogeneous stress field. Depending on the boundary conditions, these perturbations can reorient fracture propagation.
Through their effect on the local stress state, high pressure gradients can also produce bedding-parallel fracturing or slip in undrained heterogeneous media. An example of particular importance is the evolution of accretionary wedges, where highly dynamic tectonic activity combined with extreme pore pressures generates locally strong stress-field perturbations, leading to a highly complex structural evolution that includes vertical and horizontal hydraulic fracture networks. The transport properties of the rocks are strongly controlled by the dynamic development of local permeability through tensile fractures and faults. There may be a close relationship between the formation of graben structures and large-scale fluid migration.
The consistency between the simulation results and previous experimental studies indicates that the numerical scheme described is well suited to the qualitative analysis of hydraulic fractures. The scheme also has shortcomings when it comes to the quantitative analysis of fluid flow through induced fracture surfaces in deformed rocks. It is further recommended to extend the presented numerical scheme with coupling to thermo-chemical processes, in order to investigate dynamic problems related to the growth of vein fillings in hydraulic fractures.
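The pressure-diffusion core of such a hydromechanical model can be sketched as an explicit finite-difference update of a Darcy-type diffusion equation. This minimal example assumes a homogeneous, periodic grid and an invented diffusivity; the coupled poro-elasto-plastic model of the thesis (failure, Biot compressibility, heterogeneity) is far richer.

```python
import numpy as np

# Explicit finite-difference step for a Darcy-type pressure diffusion
# equation, dp/dt = D * laplacian(p), on a periodic grid. D lumps
# permeability, porosity, viscosity and compressibility; the value here
# is illustrative, not a calibrated rock property.
nx = ny = 50
dx = 1.0                      # grid spacing [m]
D = 0.25                      # hydraulic diffusivity [m^2/s] (assumed)
dt = 0.8                      # below the explicit stability limit dx**2/(4*D)

p = np.zeros((ny, nx))
p[ny // 2, nx // 2] = 1.0     # localized fluid overpressure

for _ in range(200):
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p += dt * D * lap

# Pressure mass is conserved on the periodic grid while the peak spreads,
# perturbing the local effective stress field around the source.
print(f"sum = {p.sum():.6f}, peak = {p.max():.4f}")
```

In the full model this diffusion step would be coupled at every iteration to the mechanical solver, so that pore pressure modifies effective stress and rock failure in turn modifies permeability.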
Abstract:
A dynamic deterministic simulation model was developed to assess the impact of different putative control strategies on the seroprevalence of Neospora caninum in female Swiss dairy cattle. The model structure comprised compartments of "susceptible" and "infected" animals (SI-model) and the cattle population was divided into 12 age classes. A reference model (Model 1) was developed to simulate the current (status quo) situation (present seroprevalence in Switzerland 12%), taking into account available demographic and seroprevalence data of Switzerland. Model 1 was modified to represent four putative control strategies: testing and culling of seropositive animals (Model 2), discontinued breeding with offspring from seropositive cows (Model 3), chemotherapeutic treatment of calves from seropositive cows (Model 4), and vaccination of susceptible and infected animals (Model 5). Models 2-4 considered different sub-scenarios with regard to the frequency of diagnostic testing. Multivariable Monte Carlo sensitivity analysis was used to assess the impact of uncertainty in input parameters. A policy of annual testing and culling of all seropositive cattle in the population reduced the seroprevalence effectively and rapidly from 12% to <1% in the first year of simulation. The control strategies with discontinued breeding with offspring from all seropositive cows, chemotherapy of calves and vaccination of all cattle reduced the prevalence more slowly than culling but were still very effective (reduction of prevalence below 2% within 11, 23 and 3 years of simulation, respectively). However, sensitivity analyses revealed that the effectiveness of these strategies depended strongly on the quality of the input parameters used, such as the horizontal and vertical transmission factors, the sensitivity of the diagnostic test and the efficacy of medication and vaccination. Finally, all models confirmed that it was not possible to completely eradicate N. caninum as long as the horizontal transmission process was not interrupted.
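The structure of such a compartmental simulation can be sketched with a deliberately simplified discrete-time SI model: vertical transmission, weak horizontal transmission, herd replacement, and an optional test-and-cull policy. All rates below are illustrative assumptions, not the Swiss parameter estimates of the study, and the 12 age classes are collapsed into a single herd fraction.

```python
# Minimal discrete-time SI herd model (fractions of the herd, no age classes).
# p_vert: probability a calf of an infected dam is infected (assumed)
# beta:   weak horizontal transmission coefficient (assumed)
# repl:   annual replacement rate of the herd (assumed)
def simulate(years, p_vert=0.95, beta=0.001, repl=0.2,
             cull=False, test_se=0.99, prev0=0.12):
    I = prev0                       # infected fraction, starting at 12%
    for _ in range(years):
        if cull:
            I *= (1 - test_se)      # detected seropositives culled and replaced
        S = 1.0 - I
        # a fraction `repl` leaves each year and is replaced by calves;
        # calves of infected dams are infected with probability p_vert
        I = I * (1 - repl) + repl * I * p_vert + beta * S * I
    return I

print(round(simulate(1, cull=True), 4))  # annual test-and-cull: < 1% in year one
print(round(simulate(10), 4))            # status quo: prevalence nearly static
```

Even in this toy version, setting p_vert below 1 while keeping beta > 0 shows why eradication stalls as long as horizontal transmission persists, echoing the abstract's conclusion.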
Abstract:
This Ph.D. research comprises three major components: (i) a characterization study to analyze the composition of defatted corn syrup (DCS) from a dry corn mill facility; (ii) hydrolysis experiments to optimize the production of fermentable sugars and an amino acid platform using DCS; and (iii) sustainability analyses. Analyses of DCS included total solids, ash content, total protein, amino acids, inorganic elements, starch, total carbohydrates, lignin, organic acids, glycerol, and presence of functional groups. Total solids content was 37.4% (± 0.4%) by weight, and the mass balance closure was 101%. Total carbohydrates [27% (± 5%) wt.] comprised starch (5.6%), soluble monomer carbohydrates (12%) and non-starch carbohydrates (10%). Hemicellulose components (structural and non-structural) were: xylan (6%), xylose (1%), mannan (1%), mannose (0.4%), arabinan (1%), arabinose (0.4%), galactan (3%) and galactose (0.4%). Based on the measured physical and chemical components, the biochemical conversion route and subsequent fermentation to value-added products was identified as promising. DCS has potential to serve as an important fermentation feedstock for bio-based chemicals production. In the sugar hydrolysis experiments, reaction parameters such as acid concentration and retention time were analyzed to determine the optimal conditions that maximize monomer sugar yields while keeping inhibitors at a minimum. Total fermentable sugars produced can reach approximately 86% of theoretical yield when subjected to dilute acid pretreatment (DAP). DAP followed by subsequent enzymatic hydrolysis was most effective for 0 wt% acid hydrolysate samples and least efficient for 1 and 2 wt% acid hydrolysate samples. The best hydrolysis scheme for DCS from an industry's point of view is a standalone 60-minute dilute acid hydrolysis at 2 wt% acid concentration.
The combined effects of hydrolysis reaction time, temperature, and enzyme-to-substrate ratio were studied to develop a hydrolysis process that optimizes the production of amino acids from DCS. Four key hydrolysis pathways were investigated for the production of amino acids using DCS. The first hydrolysis pathway is the amino acid analysis using DAP. The second pathway is DAP of DCS followed by protein hydrolysis using proteases [Trypsin, Pronase E (Streptomyces griseus) and Protex 6L]. The third hydrolysis pathway investigated a standalone experiment using proteases (Trypsin, Pronase E, Protex 6L, and Alcalase) on the DCS without any pretreatment. The final pathway investigated the use of Accellerase 1500® and Protex 6L to simultaneously produce fermentable sugars and amino acids over a 24-hour hydrolysis reaction time. The three key objectives of the techno-economic analysis component of this Ph.D. research included: (i) development of a process design for the production of both the sugar and amino acid platforms with DAP using DCS; (ii) a preliminary cost analysis to estimate the initial capital cost and operating cost of this facility; and (iii) a greenhouse gas analysis to understand the environmental impact of this facility. Using Aspen Plus®, a conceptual process design has been constructed. Finally, the Aspen Plus Economic Analyzer® and Simapro® software were employed to conduct the cost analysis and the carbon footprint assessment of this process facility, respectively. Another section of my Ph.D. research work focused on the life cycle assessment (LCA) of commonly used dairy feeds in the U.S. Greenhouse gas (GHG) emissions analysis was conducted for cultivation, harvesting, and production of common dairy feeds used for the production of dairy milk in the U.S. The goal was to determine the carbon footprint [grams CO2 equivalents (gCO2e)/kg of dry feed] in the U.S. on a regional basis, identify key inputs, and make recommendations for emissions reduction.
The final section of my Ph.D. research work was an LCA of a single dairy feed mill located in Michigan, USA. The primary goal was to conduct a preliminary assessment of dairy feed mill operations and ultimately determine the GHG emissions for 1 kilogram of milled dairy feed.
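The "percent of theoretical yield" figures in the hydrolysis results above rest on simple stoichiometric bookkeeping: polysaccharides gain water on hydrolysis, so polymer masses are scaled to monomer equivalents before comparing against measured sugar release. The sketch below uses the DCS starch and pentosan contents quoted above (counting only those fractions, for brevity); the "measured" release is an invented number, not a thesis result.

```python
# Anhydro-correction bookkeeping: hexosans (e.g. starch) gain water as
# 162 -> 180 g/mol per monomer unit, pentosans (xylan, arabinan) as 132 -> 150.
def theoretical_monomers(hexosan_g, pentosan_g):
    return hexosan_g * 180.0 / 162.0 + pentosan_g * 150.0 / 132.0

theoretical = theoretical_monomers(hexosan_g=5.6,         # starch (from abstract)
                                   pentosan_g=6.0 + 1.0)  # xylan + arabinan
measured = 10.8                    # hypothetical g monomer sugars released
pct_of_theoretical = 100.0 * measured / theoretical
print(f"{pct_of_theoretical:.1f}% of theoretical yield")
```

Reported yields such as the 86% figure are computed the same way, with all hexosan and pentosan fractions included in the theoretical maximum.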
Abstract:
OBJECTIVE: In this article, we review the impact of vision on older people's night driving abilities. Driving is the preferred and primary mode of transport for older people. It is a complex activity in which intact vision is essential for road safety. Night driving requires mesopic rather than scotopic vision, because there is always some light available when driving at night. Scotopic refers to night vision, photopic refers to vision under well-lit conditions, and mesopic vision is a combination of photopic and scotopic vision in low but not quite dark lighting situations. With increasing age, mesopic vision decreases and glare sensitivity increases, even in the absence of ocular diseases. Because of the increasing number of elderly drivers, more drivers are affected by night vision difficulties. Vision tests that accurately predict night driving ability are therefore of great interest. METHODS: We reviewed the existing literature on age-related influences on vision and on vision tests that correlate with or predict night driving ability. RESULTS: We identified several studies that investigated the relationship between vision tests and night driving. These studies found correlations between impaired mesopic vision or increased glare sensitivity and impaired night driving, but no correlation was found for other tests, for example useful field of view or visual field. The correlation between photopic visual acuity, the most commonly used test when assessing elderly drivers, and night driving ability has not yet been fully clarified. CONCLUSIONS: Photopic visual acuity alone is not a good predictor of night driving ability. Mesopic visual acuity and glare sensitivity seem relevant for night driving. Given the small number of studies evaluating predictors of night driving ability, further research is needed.
Abstract:
An often-cited reason for studying the process of invasion by alien species is that the understanding sought can be used to mitigate the impacts of the invaders. Here, we present an analysis of the correlates of local impacts of established alien bird and mammal species in Europe, using a recently described metric to quantify impact. Large-bodied, habitat-generalist bird and mammal species that are widespread in their native range have the greatest impacts in their alien European ranges, supporting our hypothesis that surrogates for the breadth and the amount of resources a species uses are good indicators of its impact. However, not all surrogates are equally suitable. Impacts are generally greater for mammal species giving birth to larger litters, but in contrast are greater for bird species laying smaller clutches. There is no effect of diet breadth on impacts in birds or mammals. On average, mammals have higher impacts than birds. However, the relationships between impact and several traits show common slopes for birds and mammals, and the relationships between impact and body mass and latitude do not differ between birds and mammals. These results may help to anticipate which species would have large impacts if introduced, and so direct efforts to prevent such introductions.
Abstract:
To say that regionalism is gaining momentum has become an understatement. To mourn the lack of progress in multilateral trade rule-making is a commonplace in the discourse of politicians regretting the WTO negotiation standstill, and of “know-what-to-do” academics. The real problem is the uneven playing field resulting from increasing differences in rules and obligations. The Transatlantic Trade and Investment Partnership Agreement (TTIP) is a very ambitious project. WTI studies in 2014 have shown that the implications for Switzerland could be enormous. But even the combined market power of the two TTIP participants – the EU and the USA – will not level the playing field shaped by differing regulatory frameworks and by the market access barriers for trade in agriculture. Such differences will remain in three areas which, incidentally, are also vital for a global response to the food security challenge of feeding 9 billion people by the year 2050: market access, non-tariff barriers, and trade-distorting domestic support programmes. This means that without multilateral progress the TTIP and other so-called mega-regionals, if successfully concluded, will exacerbate rather than lessen trade distortions. While this makes farmers in rich countries safer from competition, competitive production in all countries will be hampered. Consequently, and notwithstanding the many affirmations to the contrary, farm policies worldwide will continue to address only farmer security without increasing global food security. What are the implications of the TTIP for Swiss agriculture? This article, commissioned by Waseda University in Tokyo, finds that the failure to achieve further reforms – including a number of areas where earlier reforms have been reversed – presents Switzerland and Swiss agriculture with a terrible dilemma in the eventuality of a successful conclusion of the TTIP.
If Swiss farm production is to survive for more than another generation, continuous reform efforts are required, and over-reliance on the traditional instruments of border protection and product support is to be avoided. Without a substantial TTIP obliging Switzerland to follow suit, autonomous reforms will remain extremely fragile.
Abstract:
Vertical integration is grounded in economic theory as a corporate strategy for reducing cost and enhancing efficiency. There were three purposes for this dissertation. The first was to describe and understand vertical integration theory. The review of the economic theory established vertical integration as a corporate cost reduction strategy in response to environmental, structural and performance dimensions of the market. The second purpose was to examine vertical integration in the context of the health care industry, which has greater complexity, higher instability, and more unstable demand than other industries, although many of the same dimensions of the market supported a vertical integration strategy. Evidence on the performance of health systems after integration revealed mixed results. Because the market continues to be turbulent, hybrid non-owned integration in the form of alliances have increased to over 40% of urban hospitals. The third purpose of the study was to examine the application of vertical integration in health care and evaluate the effects. The case studied was an alliance formed between a community hospital and a tertiary medical center to facilitate vertical integration of oncology services while maintaining effectiveness and preserving access. The economic benefits for 1934 patients were evaluated in the delivery system before and after integration with a more detailed economic analysis of breast, lung, colon/rectal, and non-malignant cases. A regression analysis confirmed the relationship between the independent variables of age, sex, location of services, race, stage of disease, and diagnosis, and the dependent variable, cost. The results of the basic regression model, as well as the regression with first-order interaction terms, were statistically significant. The study shows that vertical integration at an intermediate health care system level has economic benefits. 
If the pre-integration oncology group had been treated in the post-integration model, the expected cost savings from integration would be 31.5%. Quality indicators used were access to health care services and research treatment protocols, and access was preserved in the integrated model. Using survival as a direct quality outcome measure, the survival of lung cancer patients was statistically the same before and after integration.
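A cost model of the kind described, main effects plus a first-order interaction term fitted by least squares, can be sketched on synthetic data. Only the model structure mirrors the study; the variables are reduced to two, and the data and coefficients are invented.

```python
import numpy as np

# Synthetic cost regression with an age x stage interaction term.
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(30, 90, n)
stage = rng.integers(1, 5, n).astype(float)     # disease stage 1-4
cost = (2000 + 50 * age + 3000 * stage
        + 10 * age * stage + rng.normal(0, 500, n))

# Design matrix: intercept, main effects, first-order interaction.
X = np.column_stack([np.ones(n), age, stage, age * stage])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
r = np.corrcoef(X @ beta, cost)[0, 1]
print(np.round(beta, 1))   # generating coefficients were [2000, 50, 3000, 10]
print(f"fit correlation: {r:.3f}")
```

Statistical significance of the full model and of the interaction terms would then be assessed with the usual F- and t-tests, as reported in the abstract.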
Abstract:
Objective. To assess differences in body weight, body composition, total cholesterol, blood pressure, and blood glucose between OC users and non-users age 18-30 y before and after a 15-week cardiovascular exercise program in Houston, TX from 2003 to 2007. Study Design. Secondary analysis of prospective data. Study Subjects. 453 Non-Hispanic white (NHW), Hispanic, and African American (AA) women age 18-30 y with no previous live birth, a history of menstruating, no use of other hormonal contraceptives or medications, no menopause or hysterectomy, and no current pregnancies. Measurements. Demographic data, medication use, and menstrual history were assessed via self-administered questionnaires at baseline. Anthropometric and laboratory measures were taken at baseline and 15 weeks. Data Analysis. Linear regression assessed the association between OC use and study variables at baseline, and the change in study variables from baseline to 15 weeks. Logistic regression assessed the association between OC use and CVD risk. Each analysis was also stratified by race/ethnicity. Results. At baseline, OC users had higher total cholesterol (p<.0005) and were above cholesterol risk cut points for CVD (OR=4.3, 95% CI=2.4-7.7) compared to non-users. At baseline, OC use was also associated with higher diastolic blood pressure (p=.018) compared to non-users, primarily in non-Hispanic whites (p=.007). OC use was associated with lower blood glucose compared to non-users in Hispanics only (p=.008). OC use was associated with absolute change in diastolic blood pressure (p=.044) and total cholesterol (p=.003). There was evidence that OC use may affect individuals differently based on race/ethnicity for certain obesity and CVD risk factors. Conclusions. OC users and non-users responded similarly to a 15-week cardiovascular exercise program. Exceptions included a greater change in diastolic blood pressure and total cholesterol among NHW and Hispanic OC users compared to non-users after exercise intervention. At baseline, OC use was associated with diastolic blood pressure and was most strongly associated with increased levels of total cholesterol. OC users were at greater risk of having total cholesterol above CVD risk cut points than non-users.
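An odds ratio with a 95% confidence interval, like the reported OR=4.3 (95% CI 2.4-7.7), can come from a 2x2 table or, equivalently, a univariable logistic regression. The sketch below uses invented counts purely to illustrate the calculation; they are not the study's data.

```python
import math

# Hypothetical 2x2 table: OC use vs. cholesterol above the CVD risk cut point.
a, b = 60, 40     # OC users: above / below cut point (invented counts)
c, d = 50, 150    # non-users: above / below cut point (invented counts)

or_ = (a * d) / (b * c)                       # cross-product odds ratio
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR), Wald method
lo = math.exp(math.log(or_) - 1.96 * se_log)
hi = math.exp(math.log(or_) + 1.96 * se_log)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A CI that excludes 1, as here and in the study, indicates a statistically significant association at the 5% level.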
Abstract:
Low-molecular-weight (LMW) alcohols are produced during the microbial degradation of organic matter from precursors such as lignin, pectin, and carbohydrates. The biogeochemical behavior of these alcohols in marine sediment is poorly constrained but potentially central to carbon cycling. Little is known about LMW alcohols in sediment pore waters because of their low concentrations and high water miscibility, both of which pose substantial analytical challenges. In this study, three alternative methods were adapted for the analysis of trace amounts of methanol and ethanol in small volumes of saline pore waters: direct aqueous injection (DAI), solid-phase microextraction (SPME), and purge and trap (P&T) in combination with gas chromatography (GC) coupled to either a flame ionization detector (FID) or a mass spectrometer (MS). Key modifications included the desalination of samples prior to DAI, the use of a threaded midget bubbler to purge small-volume samples under heated conditions and the addition of salt during P&T. All three methods were validated for LMW alcohol analysis, and the lowest detection limit (60 nM and 40 nM for methanol and ethanol, respectively) was achieved with the P&T technique. With these methods, ambient concentrations of volatile alcohols were determined for the first time in marine sediment pore waters of the Black Sea and the Gulf of Mexico. A strong correlation between the two compounds was observed and tentatively interpreted as being controlled by similar sources and sinks at the examined stations.
Abstract:
Fifty samples of Roman-time soil preserved under the thick ash layer of the A.D. 79 eruption of Mt Vesuvius were studied by pollen analysis: 33 samples from a former vineyard surrounding a Villa Rustica at Boscoreale (excavation site 40 x 50 m), 13 samples taken along the 60 m long swimming pool in the sculpture garden of the Villa of Poppaea at Oplontis, and four samples from the formal garden (12.4 x 17.5 m) of the House of the Gold Bracelet in Pompeii. To avoid contamination with modern pollen, all samples were taken immediately after uncovering a new portion of the A.D. 79 soil. For comparison, samples of modern Italian soils were also studied. Using standard methods for pollen preparation, the pollen content of 15 of the archaeological samples proved too low to reach a pollen sum of more than 100 grains. The pollen spectra of these samples are not shown in the pollen tables. (Flotation with a sodium tungstate solution, Na2WO4, D = 2.05, following treatment with HCl and NaOH, would probably have given a somewhat better result; this method was, however, considered too expensive at that time.) Although the archaeological samples were taken only a few meters apart, their pollen values differ greatly from one sample to the other. For example, at Boscoreale (SW quarter) the pollen values of Pinus range from 1.5 to 54.5%, corresponding to 1 to 244 pine pollen grains per gram of soil, with the extremes found even under pine trees. Vitis pollen was present in only 7 of the 11 vineyard samples from Boscoreale (NE quarter). Although a maximum of 21.7% is reached, the values of Vitis are mostly below 1.5%. Even the values of common weeds differ greatly, not only at Boscoreale but also at the other two sites. The pollen concentration values show similar variations: 3 to 3053 grains and spores were found in 1 g of soil. The mean value (290) is much less than the number of pollen grains that would fall on 1 cm2 of soil surface during one year.
In contrast, the pollen and spore concentrations of the recent soil samples, treated in exactly the same manner, range from 9313 to almost 80000 grains per 1 g of soil. Evidently most of the Roman time pollen has disappeared since its deposition, the reasons not being clear. Not even species which are known to have been cultivated in the garden of Oplontis, like Citrus and Nerium, plant species with easily distinguishable pollen grains, could be traced by pollen analysis. The loss of most of the pollen grains originally contained in the soil prohibits any detailed interpretation of the Pompeian pollen data. The pollen counts merely name plant species which grew in the region, but not necessarily on the excavated plots.
Abstract:
We offer micro-econometric evidence for a positive impact of rural electrification on the nutritional status of children under five as measured by height-for-age Z-score (HAZ) in rural Bangladesh. In most estimates, access to electricity is found to improve HAZ by more than 0.15 points and this positive impact comes from increased wealth and reduced fertility, even though the evidence for the latter is weak. We also analyze the causal channels through the local health facility and exposure to television. We find no evidence for the presence of the former channel and mixed evidence for the latter.
Abstract:
The traditional way of estimating the level of road safety is the recording of traffic accidents; however, accident counts are highly variable and random and require a recording period of at least three years. Preventive methodologies exist in which an accident does not need to occur in order to determine the safety level of an intersection, such as the traffic conflict technique, which introduces surrogate safety measures as quantifiers of accident risk. The general objective of the thesis is to establish a methodology for classifying risk at interurban intersections based on the analysis of conflicts between vehicles, carried out by means of surrogate (indirect) road safety variables. The methodology for the analysis and early evaluation of safety at an intersection is based on two surrogate safety measures: the time to collision and the post-encroachment time. The experimental development was carried out through field studies: for the exploratory part of the research, three interurban T-intersections were selected, where the traffic conflict technique yielded the variables that characterize conflicts between vehicles; then, using multivariate analysis techniques, the qualitative and quantitative risk classification models were obtained. For the homologation and the final study of agreement between the proposed index and the classification model, new field studies were carried out at six interurban T-intersections.
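The two surrogate safety measures can be sketched in a few lines (a minimal illustration under a simple car-following geometry; the function names and example values are assumptions, not the thesis's field procedure):

```python
def time_to_collision(gap_m, v_follower, v_leader):
    """Time to collision (s) for a follower closing on a leader
    along the same path; infinite if the gap is not closing."""
    closing_speed = v_follower - v_leader  # m/s
    if closing_speed <= 0:
        return float("inf")  # no collision course
    return gap_m / closing_speed

def post_encroachment_time(t_first_leaves, t_second_arrives):
    """PET (s): time between the first vehicle leaving the
    conflict area and the second vehicle reaching it."""
    return t_second_arrives - t_first_leaves

# Follower 20 m behind, 20 m/s vs. a 15 m/s leader -> TTC = 4 s
print(time_to_collision(20.0, 20.0, 15.0))   # 4.0
# Second vehicle enters the conflict point ~1.2 s after the first clears it
print(post_encroachment_time(10.3, 11.5))
```

Lower TTC and PET values indicate a more severe conflict, which is why they serve as the basis of the risk classification described above.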
The risk index obtained is a very useful tool for rapid evaluations of the hazard level of a T-intersection, because the field data records are simple and inexpensive to obtain after a short training of the operators; the report of results should then be prepared by a specialist. The risk indices obtained show that the most influential original variables are the time measurements. The highest values of the risk index were found to be related to a greater risk of a conflict ending in an accident. Within this index, the only variable whose contribution is directly proportional is the approach speed, which is consistent with what happens in a conflict: excessive speed is a clear risk factor because it amplifies all human driving errors. One of the main contributions of this doctoral thesis to road engineering is that the methodology can be applied by local road administrations, which often have limited investment resources for this kind of preventive study, particularly in developing countries. Evaluating the risk at an intersection after an improvement in infrastructure and/or traffic control devices, like a before/after analysis but based on the traffic conflict technique rather than on accident occurrence, can thus become a direct and economical application. It was also shown that the principal component analysis used to build the intersection risk index is a useful tool for summarizing the whole set of measurements obtainable with the traffic conflict technique, allowing the accident risk at an intersection to be diagnosed.
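A principal-component risk index of the kind described can be sketched with NumPy (the conflict measurements below are made up for illustration; the thesis's actual variables and loadings are not reproduced):

```python
import numpy as np

# Hypothetical measurements, one row per observed conflict:
# columns = [TTC (s), PET (s), approach speed (km/h)]
X = np.array([
    [4.1, 2.0, 48.0],
    [1.2, 0.6, 82.0],
    [3.5, 1.8, 55.0],
    [0.9, 0.4, 90.0],
    [2.7, 1.5, 60.0],
])

# Standardize, then use the first principal component as the index
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
pc1 = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
# Orient the component so that higher approach speed raises the index
if pc1[2] < 0:
    pc1 = -pc1
risk_index = Z @ pc1          # one score per conflict
print(risk_index.round(2))
```

With this orientation the short-TTC, short-PET, high-speed conflict (the fourth row) receives the largest index, matching the finding that approach speed is the only directly proportional variable.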
Regarding the methodology for the homologation of the models, the validity and reliability of the set of responses delivered by the observers recording the field data could be established: the validation results show that the agreement between the model outputs and the observations is statistically significant and indicates a high degree of coincidence between them.
Resumo:
Many countries around the world are implementing Public-Private Partnership (PPP) contracts to manage road infrastructure. In some of these contracts the public sector offers economic incentives to the private operator to foster the accomplishment of social goals. One incentive that has been introduced in some PPP contracts concerns safety: the better the safety outcome, the greater the economic reward to the contractor. The aim of this paper is to identify whether the incentives to improve road safety in highway PPPs are ultimately effective in improving safety ratios. To this end, Poisson and negative binomial regression models were applied using information from highway sections in Spain. The findings indicate that, even though road safety is strongly influenced by variables largely outside the contractor's control, such as the annual average daily traffic (AADT) and the percentage of heavy vehicles, the implementation of safety incentives in PPPs has a positive influence on the reduction of fatalities, injuries and accidents.
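The count-data models mentioned can be illustrated with a minimal Poisson regression fitted by Newton's method (pure NumPy; the synthetic AADT covariate and coefficients are invented for the sketch, not the paper's dataset, and the negative binomial's overdispersion parameter is omitted):

```python
import numpy as np

def fit_poisson(X, y, iters=50):
    """Poisson regression (log link) via Newton-Raphson.
    X: (n, p) design matrix incl. intercept; y: (n,) counts."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                 # E[y] under the current fit
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * mu[:, None])        # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 500
aadt = rng.uniform(0.5, 3.0, n)               # scaled traffic volume
X = np.column_stack([np.ones(n), aadt])
true_beta = np.array([-1.0, 0.8])             # accident counts rise with traffic
y = rng.poisson(np.exp(X @ true_beta))        # simulated accident counts
beta_hat = fit_poisson(X, y)
print(beta_hat)  # close to [-1.0, 0.8]
```

The positive slope on the traffic covariate mirrors the paper's point that exposure variables such as AADT largely drive accident counts, which is why the incentive effect must be estimated jointly with them.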