972 results for Elasticity and anelasticity


Relevance: 30.00%

Abstract:

The application of advanced materials in infrastructure has grown rapidly in recent years, mainly because of their potential to ease construction, extend service life, and improve the performance of structures. Ultra-high performance concrete (UHPC) is one such material, considered a novel alternative to conventional concrete. The microstructure of UHPC is optimized to significantly improve its material properties, including compressive and tensile strength, modulus of elasticity, durability, and damage tolerance. Fiber-reinforced polymer (FRP) composite is another novel construction material with excellent properties such as high strength-to-weight and stiffness-to-weight ratios and good corrosion resistance. Given the exceptional properties of UHPC and FRP, many advantages can result from the combined application of these two advanced materials, which is the subject of this research. The confinement behavior of UHPC was studied for the first time in this research. A series of UHPC-filled FRP tubes with different fiber types and tube thicknesses was tested under uniaxial compression to characterize the stress-strain behavior. FRP confinement was shown to significantly enhance both the ultimate strength and the ultimate strain of UHPC. It was also shown that existing confinement models are incapable of predicting the behavior of FRP-confined UHPC; therefore, new stress-strain models for FRP-confined UHPC were developed through an analytical study. In the second part of this research, a novel steel-free UHPC-filled FRP tube (UHPCFFT) column system was developed and its cyclic behavior was studied. The proposed steel-free UHPCFFT column showed much higher strength and stiffness, with reasonable ductility, than its conventional reinforced concrete (RC) counterpart. Using the results of the first phase of column tests, a second series of UHPCFFT columns was made and tested under pseudo-static loading to examine the effect of column parameters on cyclic behavior. Strong correlations were noted between the initial stiffness and the stiffness index, and between the moment capacity and the reinforcement index. Finally, a thorough analytical study was carried out to investigate the seismic response of the proposed steel-free UHPCFFT columns, which showed their superior earthquake resistance compared to their RC counterparts.
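For illustration of how a confinement model maps jacket properties to confined strength, the sketch below implements a generic Lam-and-Teng-type relation calibrated for conventional concrete; it is not the UHPC-specific model developed in this research, and all numerical inputs are hypothetical.

# Illustrative only: a Lam-and-Teng-type confinement relation calibrated for
# conventional concrete, not the UHPC-specific model developed in this study.
def frp_confined_strength(fco, D, t, E_frp, eps_rup, k1=3.3):
    """Confined strength f'cc (MPa) of an FRP-wrapped circular section.

    fco     : unconfined concrete strength (MPa)
    D       : section diameter (mm)
    t       : FRP jacket thickness (mm)
    E_frp   : FRP modulus in the hoop direction (MPa)
    eps_rup : FRP hoop rupture strain (-)
    k1      : confinement effectiveness coefficient (3.3 in Lam & Teng, 2003)
    """
    f_l = 2.0 * E_frp * eps_rup * t / D          # lateral confining pressure (MPa)
    return fco + k1 * f_l

# Hypothetical 150 mm UHPC cylinder with a 2 mm CFRP jacket
print(frp_confined_strength(fco=150.0, D=150.0, t=2.0, E_frp=230_000.0, eps_rup=0.01))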

Relevance: 30.00%

Abstract:

Hypertension, a major risk factor for cardiovascular disease, is characterized by an increase in arterial blood pressure. High dietary sodium is linked to multiple cardiovascular disorders, including hypertension. Salt sensitivity, a measure of how blood pressure responds to salt intake, is observed in more than 50% of hypertension cases. Nitric oxide (NO), an endogenous vasodilator, serves many important roles in cardiovascular physiology, including blood pressure regulation. The physiological concentrations for NO bioactivity are reported to be in the 0-500 nM range. Notably, the vascular response to NO is highly regulated within this small concentration spectrum. Hence, much uncertainty surrounds how NO modulates diverse signaling mechanisms to initiate vascular relaxation and alleviate hypertension. Regulating the availability of NO in the vasculature has demonstrated vasoprotective effects, and modulating NO release by different means has been shown to restore endothelial function. In this study we addressed the parameters that regulate NO release in the vasculature, in physiology and in pathophysiology such as salt-sensitive hypertension. We showed that, in rat mesenteric arterioles, Ca2+ induced rapid relaxation (time constant 20.8 ± 2.2 s) followed by a much slower constriction after subsequent removal of the stimulus (time constant 104.8 ± 10.0 s). Interestingly, a fourfold increase in the Ca2+ stimulation frequency improved the efficacy of arteriolar relaxation by 61.1%. Our results suggest that Ca2+ frequency-dependent transient release of NO from the endothelium carries encoded information, which can be translated into different steady-state vascular tone. Further, agmatine, a metabolite of L-arginine, was observed to relax the mesenteric arterioles when applied as a ligand. These relaxations were NO-dependent and occurred via α-2 receptor activity. The observed potency of agmatine (EC50, 138.7 ± 12.1 μM; n = 22) was 40-fold higher than that of L-arginine itself (EC50, 18.3 ± 1.3 mM; n = 5). This led us to propose an alternative, parallel mechanism for L-arginine-mediated vascular relaxation via arginine decarboxylase activity. In addition, the biomechanics of the rat mesentery is important in the regulation of vascular tone. We developed 2D finite element models that describe the vascular mechanics of the rat mesentery, and with an inverse estimation approach we identified the elasticity parameters characterizing alterations between normotensive and hypertensive Dahl rats. These efforts are aimed at guiding current studies that optimize cardiovascular intervention and assisting in the development of new therapeutic strategies. These observations may have significant implications for alternatives to present methods of NO delivery as a therapeutic target. Our work should prove beneficial in assisting the delivery of NO in the vasculature, thus minimizing cardiovascular risk in handling abnormalities such as hypertension.
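A minimal sketch of how an EC50 such as the 138.7 μM reported for agmatine is typically extracted from concentration-relaxation data, using a Hill-equation fit; the data points and starting values below are made up for illustration.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, emax, ec50, n):
    """Fractional relaxation as a function of agonist concentration."""
    return emax * conc**n / (ec50**n + conc**n)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)   # uM (hypothetical)
relax = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.80, 0.92])    # fraction of maximum

popt, pcov = curve_fit(hill, conc, relax, p0=[1.0, 100.0, 1.0])
emax, ec50, n = popt
print(f"EC50 = {ec50:.1f} uM, Hill slope = {n:.2f}")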

Relevance: 30.00%

Abstract:

This research first evaluated the effects of the urban-wildland interface on the reproductive biology of the Big Pine partridge pea, Chamaecrista keyensis, an understory herb endemic to Big Pine Key, Florida. I found that C. keyensis was self-compatible but depended on bees for seed set. Furthermore, individuals of C. keyensis in urban habitats suffered higher seed predation and therefore set fewer seeds than forest interior plants. I then focused on the effects of fire at different times of the year, summer (wet season) and winter (dry season), on the population dynamics and population viability of C. keyensis. I found that C. keyensis populations recovered faster after winter burns and early summer burns (May–June) than after late summer burns (July–September), due to better survival and seedling recruitment following the former fires. Fire intensity had positive effects on reproduction of C. keyensis. In contrast, no significant fire intensity effects were found on survival, growth, or seedling recruitment. This indicated that the better survival and seedling recruitment following winter and early summer burns (compared with late summer burns) were due to the reproductive phenology of the plant in relation to fires rather than to differences in fire intensity. Deterministic population modeling showed that time since fire significantly affected the finite population growth rate (λ); in particular, recently burned plots had the largest λ. In addition, effects of timing of fires on λ were most pronounced in the year of the burn but not in subsequent years. The elasticity analyses suggested that maximizing survival is an effective way to minimize the reduction in the finite population growth rate in the year of the burn; early summer fires or dry-season fires may achieve this objective. Finally, stochastic simulations indicated that the C. keyensis population had lower extinction risk and probability of population decline if burned in the winter than in the late summer. A fire frequency of approximately 7 years would produce the lowest extinction probability for C. keyensis. A fire management regime including a wide range of burning seasons may be essential for the continued existence of C. keyensis and other endemic species of pine rockland on Big Pine Key.
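The sketch below shows the standard matrix-model calculations behind the reported λ and elasticity analyses: λ as the dominant eigenvalue of a stage-structured projection matrix, and elasticities from the left and right eigenvectors. The projection matrix is hypothetical, not the fitted C. keyensis matrix.

import numpy as np

A = np.array([[0.10, 0.00, 2.50],    # hypothetical projection matrix
              [0.30, 0.40, 0.00],    # (seedling, juvenile, adult)
              [0.00, 0.35, 0.85]])

vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals.real[k]                     # finite population growth rate, lambda
w = np.abs(W[:, k].real)               # stable stage distribution (right eigenvector)

vals_t, V = np.linalg.eig(A.T)
kt = np.argmin(np.abs(vals_t - vals[k]))
v = np.abs(V[:, kt].real)              # reproductive values (left eigenvector)

S = np.outer(v, w) / (v @ w)           # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                      # elasticities (proportional sensitivities, sum to 1)
print(f"lambda = {lam:.3f}")
print(np.round(E, 3))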

Relevance: 30.00%

Abstract:

This study examined the relationships among ethnicity/race, lifestyle factors, phylloquinone (vitamin K₁) intake, and arterial pulse pressure in a nationally representative sample of older adults from four ethnic/racial groups: non-Hispanic Whites, non-Hispanic Blacks, Mexican Americans, and other Hispanics. This was a cross-sectional study of a U.S. representative sample of adults aged 50 years and older (N = 5,296), using data from the National Health and Nutrition Examination Surveys (NHANES) 2007-2008 and 2009-2010. Vitamin K₁ intake was determined by 24-hour recall. Pulse pressure was calculated as the difference between the averages of the systolic and diastolic blood pressure readings. Compared to non-Hispanic Whites, the other ethnic/racial groups were more likely to have inadequate vitamin K₁ intake. Inadequate vitamin K₁ intake was an independent predictor of high arterial pulse pressure. This was the first study to compare vitamin K₁ inadequacy with arterial pulse pressure across ethnicities/races in U.S. older adults. These findings suggest that screening for vitamin K₁ intake may serve as a useful marker of health in older adults.
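A minimal sketch of the pulse-pressure calculation described above; the column names mimic NHANES blood-pressure fields (BPXSY1-3, BPXDI1-3), but the data frame here is hypothetical.

import pandas as pd

df = pd.DataFrame({
    "BPXSY1": [128, 142], "BPXSY2": [126, 140], "BPXSY3": [130, 138],  # systolic readings
    "BPXDI1": [78, 82],   "BPXDI2": [76, 80],   "BPXDI3": [80, 84],    # diastolic readings
})

sbp = df[["BPXSY1", "BPXSY2", "BPXSY3"]].mean(axis=1)   # average systolic pressure
dbp = df[["BPXDI1", "BPXDI2", "BPXDI3"]].mean(axis=1)   # average diastolic pressure
df["pulse_pressure"] = sbp - dbp
print(df["pulse_pressure"])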

Relevance: 30.00%

Abstract:

This dissertation consists of three separate essays on job search and labor market dynamics. In the first essay, “The Impact of Labor Market Conditions on Job Creation: Evidence from Firm Level Data”, I study how much changes in labor market conditions reduce employment fluctuations over the business cycle. Changes in labor market conditions make hiring more expensive during expansions and cheaper during recessions, creating counter-cyclical incentives for job creation. I estimate firm-level elasticities of labor demand with respect to changes in labor market conditions, considering two margins: changes in labor market tightness and changes in wages. Using employer-employee matched data from Brazil, I find that all firms are more sensitive to changes in wages than to changes in labor market tightness, and that there is substantial heterogeneity in labor demand elasticities across regions. Based on these results, I show that changes in labor market conditions reduce the variance of employment growth over the business cycle by 20% in the median region, and that this effect is driven equally by changes along each margin. Moreover, I show that the magnitude of the effect of labor market conditions on employment growth can be significantly affected by economic policy. In particular, I document that the rapid growth of the national minimum wage in Brazil in 1997-2010 amplified the impact of changes in labor market conditions during local expansions and diminished it during local recessions.
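A stylized sketch of how such firm-level elasticities can be read off a log-log employment regression on wages and labor market tightness; the variable names, the tiny example panel, and the specification are illustrative assumptions, not the essay's estimating equation.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per firm-year with employment, the regional wage, and regional
# tightness (vacancies / unemployed); values are made up.
df = pd.DataFrame({
    "employment": [120, 130, 90, 85, 60, 66],
    "wage":       [10.0, 10.8, 9.5, 9.9, 8.7, 8.9],
    "tightness":  [0.60, 0.80, 0.40, 0.50, 0.30, 0.35],
    "year":       [2008, 2009, 2008, 2009, 2008, 2009],
})

# Constant-elasticity specification: coefficients on log(wage) and
# log(tightness) are the labor-demand elasticities along each margin.
model = smf.ols("np.log(employment) ~ np.log(wage) + np.log(tightness) + C(year)",
                data=df).fit()
print(model.params)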

In the second essay, “A Framework for Estimating Persistence of Local Labor Demand Shocks”, I propose a decomposition that allows me to study the persistence of local labor demand shocks. The persistence of labor demand shocks varies across industries, and the incidence of shocks in a region depends on the regional industrial composition. As a result, less diverse regions are more likely to experience deeper shocks, but not necessarily longer-lasting ones. Building on this idea, I decompose local labor demand shocks into idiosyncratic location shocks and nationwide industry shocks and estimate the variance and persistence of these shocks using the Quarterly Census of Employment and Wages (QCEW) for 1990-2013.

In the third essay, “Conditional Choice Probability Estimation of Continuous-Time Job Search Models”, co-authored with Peter Arcidiacono and Arnaud Maurel, we propose a novel, computationally feasible method for estimating non-stationary job search models. Non-stationary job search models arise in many applications in which a policy change can be anticipated by workers; the most prominent example of such a policy is the expiration of unemployment benefits. Estimating these models still poses a considerable computational challenge, however, because a differential equation must be solved numerically at each step of the optimization routine. We overcome this challenge by adapting conditional choice probability methods, widely used in the dynamic discrete choice literature, to job search models, and we show how the hazard rate out of unemployment and the distribution of accepted wages, both of which can be estimated in many datasets, can be used to infer the value of unemployment. We demonstrate how to apply our method by analyzing the effect of unemployment benefit expiration on the duration of unemployment using data from the Survey of Income and Program Participation (SIPP) for 1996-2007.
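A minimal sketch of the two empirical objects the method builds on, the hazard rate out of unemployment and the distribution of accepted wages, computed here from a handful of made-up completed spells.

import numpy as np

spells = np.array([3, 5, 5, 8, 10, 12, 12, 12, 20, 26])        # weeks unemployed (synthetic)
accepted_wages = np.array([410, 520, 480, 600, 350, 450,
                           500, 430, 390, 550])                 # weekly wage at exit (synthetic)

weeks = np.arange(1, spells.max() + 1)
at_risk = np.array([(spells >= t).sum() for t in weeks])        # still unemployed at start of week t
exits = np.array([(spells == t).sum() for t in weeks])          # exits during week t
hazard = np.where(at_risk > 0, exits / at_risk, 0.0)            # P(exit at t | still unemployed)

print(dict(zip(weeks[hazard > 0], np.round(hazard[hazard > 0], 3))))
print("mean accepted wage:", accepted_wages.mean())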

Relevance: 30.00%

Abstract:

Fibronectin (FN) is a large extracellular matrix (ECM) protein that is made up of type I (FNI), type II (FNII), and type III (FNIII) domains. It assembles into an insoluble supra-molecular structure: the fibrillar FN matrix. FN fibrillogenesis is a cell-mediated process, initiated when FN binds to integrins on the cell surface. The FN matrix plays an important role in cell migration, proliferation, signaling, and adhesion. Despite decades of research, the FN matrix remains one of the least understood supra-molecular protein assemblies. Several attempts to elucidate the exact mechanism of matrix assembly have produced significant progress, but it is still unclear which FN-FN interactions occur, what the nature of these interactions is, and which domains of FN are in contact with each other.

FN matrix fibrils are elastic in nature, and two models have been proposed to explain this elasticity. The first, the 'domain unfolding' model, postulates that unraveling of FNIII domains under tension accounts for fibril elasticity. The second model relies on a conformational change of FN from a compact to an extended state. FN contains 15 FNIII domains, each a 7-strand beta sandwich. Earlier work from our lab used the technique of labeling a buried Cys to test the 'domain unfolding' model: using mutant FNs containing a buried Cys in a single FNIII domain, it was found that 6 of the 15 FNIII domains label in matrix fibrils. Domain unfolding due to tension, matrix-associated conformational changes, or spontaneous folding and unfolding are all possible explanations for labeling of the buried Cys. The present study also uses buried-Cys labeling to address whether spontaneous folding and unfolding is what labels FNIII domains in cell culture. We used the thiol-reactive reagent DTNB to measure the kinetics of labeling of the buried Cys in eleven FNIII domains over a wide range of urea concentrations (0-9 M). The kinetic data were globally fit using Mathematica. The results are equivalent to those of H-D exchange and provide a comprehensive analysis of the stability and unfolding/folding kinetics of each domain. For two of the six domains, spontaneous folding and unfolding is a plausible explanation for labeling in cell culture; for the remaining four domains, labeling is probably due to matrix-associated conformational changes or tension-induced unfolding.

A long-standing debate in the protein-folding field is whether unfolding rate constants or folding rate constants correlate with the stability of a protein. FNIII domains all share the same β-sandwich structure but have very different stabilities and amino acid sequences. Our analysis of the unfolding and folding kinetics and stabilities of eleven FNIII domains shows that the folding rate constants are relatively similar, whereas the unfolding rate constants vary widely and correlate with stability.

FN forms a fibrillar matrix, and the FN-FN interactions during matrix fibril formation are not known. FNI 1-9, the N-terminal region, is indispensable for matrix formation, and its major binding partner has been shown to be FNIII 2. Earlier FRET work from our lab showed that the interaction of FNI 1-9 with a destabilized FNIII 2 (missing the G strand, FNIII 2ΔG) reduces the FRET efficiency. This efficiency is restored in the presence of FUD (a bacterial adhesin from S. pyogenes) that is known to interact with FNI 1-9 via a tandem β-zipper. In the present study we use FRET analysis and a series of deletion mutants of FNIII 2ΔG to determine the shortest fragment of FNIII 2ΔG that is required to bind FNI 1-9. Our results are qualitative and show that FNIII 2ΔC'EFG is the shortest fragment required to bind FNI 1-9; deletion of one more strand abolishes the interaction with FNI 1-9.
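As a rough stand-in for the global fit described above, the sketch below fits a generic two-state "chevron" (folding plus unfolding arms) to synthetic rate-versus-urea data and converts the extrapolated rates into a stability; it is not the lab's Mathematica global fit of Cys-labeling kinetics, and all rates and m-values are invented.

import numpy as np
from scipy.optimize import curve_fit

RT = 0.593  # kcal/mol at 25 C

def log10_k_obs(urea, log_kf0, mf, log_ku0, mu):
    """log10 of the observed two-state relaxation rate vs. [urea]."""
    kf = 10.0**log_kf0 * np.exp(-mf * urea / RT)   # folding arm
    ku = 10.0**log_ku0 * np.exp(mu * urea / RT)    # unfolding arm
    return np.log10(kf + ku)

urea = np.linspace(0.0, 9.0, 19)
true = log10_k_obs(urea, 1.7, 1.0, -4.0, 0.8)      # synthetic "measurements"
data = true + np.random.default_rng(0).normal(0.0, 0.05, urea.size)

(log_kf0, mf, log_ku0, mu), _ = curve_fit(log10_k_obs, urea, data,
                                          p0=[1.0, 1.0, -3.0, 0.5])
dG = RT * np.log(10.0) * (log_kf0 - log_ku0)       # two-state stability in water (kcal/mol)
print(f"kf(H2O) = {10**log_kf0:.1f} s^-1, ku(H2O) = {10**log_ku0:.1e} s^-1, "
      f"dG_unfolding = {dG:.1f} kcal/mol")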

Relevance: 30.00%

Abstract:

The main goal of the research presented in this paper was the material and radiological characterization of high volume fly ash concrete (HVFAC), in terms of determination of natural radionuclide content and of radon emanation and exhalation coefficients. All concrete samples were made with a fly ash content between 50% and 70% of the total amount of cementitious materials, using fly ash from one coal-burning power plant in Serbia. Physical properties (fresh and hardened concrete density) and mechanical properties (compressive strength, splitting tensile strength, and modulus of elasticity) of the concrete were tested. The radionuclide content (226Ra, 232Th, and 40K) of the HVFAC samples was determined using gamma spectrometry, and the massic radon exhalation rates of HVFAC and its components were determined using radon accumulation chamber techniques combined with a radon monitor. The results show a beneficial effect of pozzolanic activity, since the increase in fly ash content resulted in an increase in compressive strength of HVFAC by approximately 20% for the same mass of cement used in the mixtures. On the basis of the measured radionuclide content of the concrete components, the I-indices of the different HVFAC samples were calculated and compared with measured values (0.27–0.32), which were significantly below the recommended index value of 1.0. The prediction was relatively close to the measured values, as the ratio between the calculated and measured I-index ranged between 0.89 and 1.14. The collected results on mechanical and radiological properties, together with the performed calculations, clearly show that all 10 designed concretes with the studied fly ash are suitable for structural and non-structural applications from both a material and a radiological point of view.
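The I-index reported above follows the standard activity concentration index for building materials (EU Basic Safety Standards), computed as sketched below; the sample activities are hypothetical.

def i_index(c_ra226, c_th232, c_k40):
    """Activity concentration index for a building material (activities in Bq/kg)."""
    return c_ra226 / 300.0 + c_th232 / 200.0 + c_k40 / 3000.0

# Hypothetical HVFAC sample; values below the screening value of 1.0
print(round(i_index(c_ra226=45.0, c_th232=30.0, c_k40=280.0), 2))  # ~0.39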

Relevance: 30.00%

Abstract:

This report studied the effect of crumb rubber in asphalt mixtures; the mixtures also contained limestone filler as a modifier. Mastic and mortar (mastic-fine aggregate system) mixtures with different quantities of crumb rubber and limestone filler were tested in order to find the combination with the best rutting resistance and an acceptable stiffness. The rheological tests on bituminous mastics and mortars were carried out in the laboratories of the Nottingham Transportation Engineering Centre (NTEC) and the University of Bologna (DICAM). In the second chapter, an extensive literature review covering binders, additives, asphalt mixtures, and various modelling and testing methods is presented. In the third chapter, the physical and rheological properties of the binders are investigated using both traditional devices and dynamic shear rheometers (DSRs). The fourth chapter is dedicated to characterizing the behaviour of the modified mastics (binder-modifier system): five different combinations of crumb rubber and limestone filler mastic were tested with various methods using dynamic shear rheometers. In the fifth chapter, in order to determine the effect of the modifiers on the rheological properties of the complete asphalt mixture, fine aggregates were added to the same mastic combinations; the behaviour of this system, the so-called mortar (binder, rubber, filler, and fine aggregates), was studied using the DSR device and traditional tests. The results show that using fine crumb rubber reduces the thermal sensitivity of the mastic (binder-bitumen system) and improves its elasticity, whereas limestone filler increases the mixture stiffness at high frequencies. Another important outcome of this research is that the rheological properties of the mortars followed the same trend as those of the mastics; therefore, studying the rheological properties of the mastic gives a sound estimate of the behaviour of the mortar.
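A minimal sketch of the quantities a DSR oscillatory sweep yields for a mastic or mortar: the complex modulus |G*|, the phase angle δ (lower δ means a more elastic response), and the Superpave rutting parameter |G*|/sin δ. The amplitudes, lag, and frequency below are made up.

import numpy as np

tau_amp = 2.4e4        # peak shear stress (Pa), hypothetical
gamma_amp = 0.012      # peak shear strain (-), hypothetical
time_lag = 0.008       # s, delay of strain behind stress
freq = 10.0            # Hz

G_star = tau_amp / gamma_amp                 # complex modulus magnitude (Pa)
delta = 2 * np.pi * freq * time_lag          # phase angle (rad)
G_prime = G_star * np.cos(delta)             # storage (elastic) modulus
G_dprime = G_star * np.sin(delta)            # loss (viscous) modulus
rutting_param = G_star / np.sin(delta)       # Superpave rutting indicator
print(f"|G*| = {G_star:.2e} Pa, delta = {np.degrees(delta):.1f} deg, "
      f"G' = {G_prime:.2e} Pa, G'' = {G_dprime:.2e} Pa, "
      f"|G*|/sin(delta) = {rutting_param:.2e} Pa")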

Relevance: 30.00%

Abstract:

My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, compared with 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers facing rising wait times for health care, it is important to understand the structure of physicians' labor supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to steer the short-run supply of health services. Second, I examine how physician productivity is affected by experience, through learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires, in order to keep the supply of health services constant.

My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring how physician productivity varies over the course of a career), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labor supply, the different types of services provided, and their prices, and they cover a period during which the Quebec government changed the relative prices of health services. I develop and estimate a structural model of labor supply in which physicians are multitasking: they choose the number of hours worked and the allocation of those hours across the different services they provide, while service prices are set by the government. The model yields an earnings equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services provided and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on the remuneration of Quebec physicians merged with data on the same physicians' time use. I use the model to examine two dimensions of the supply of health services. First, I analyze the use of monetary incentives to induce physicians to change their production of different services. Although previous studies have often compared physician behavior across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services. Current debates in Canadian health policy circles have focused on the importance of income effects in determining how physicians respond to increases in the prices of health services; my work contributes to this debate by identifying and estimating the substitution and income effects arising from changes in the relative prices of health services. Second, I analyze how experience affects physician productivity. This has important implications for the recruitment of physicians to meet the growing demand driven by an aging population, particularly as the most experienced (and most productive) physicians retire.

In the first essay, I estimate the earnings function conditional on hours worked, using instrumental variables to control for the possible endogeneity of hours worked. As instruments I use indicator variables for physician age, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives: a change in relative prices leads physicians to allocate more hours of work to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analyzing variation in physicians' hours, the volume of services provided, and physicians' income, using the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting physicians' tendency to increase the volume of the service whose price has risen the most; the cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. Using the estimated parameters of the structural model, I simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (mean elasticity of -0.02) and their clinical hours (mean elasticity of -0.07), and would also reduce the volume of services provided (mean elasticity of -0.05).

Third, I exploit the natural link between the income of a fee-for-service physician and his or her productivity to establish the physician productivity profile. To do so, I modify the model specification to account for the relationship between a physician's productivity and experience, and I estimate the earnings equation using an unbalanced panel while correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions. In addition, one more year of experience increases a physician's production of services by 1,003 Canadian dollars. I use the estimated model parameters to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
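As a rough illustration of the replacement-ratio idea, the sketch below evaluates a hypothetical increasing, concave productivity profile at the start and end of a career and takes the ratio; the quadratic form, its coefficients, and the experience levels are illustrative assumptions, not the estimated profile or the 1.2 ratio reported above.

def productivity(experience, a=200_000.0, b=12_000.0, c=200.0):
    """Annual service production (CAD) as a concave function of experience (hypothetical)."""
    return a + b * experience - c * experience**2

retiring = productivity(30)      # an experienced physician near retirement
entering = productivity(1)       # a newly recruited physician
print(f"replacement ratio = {retiring / entering:.2f}")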

Relevance: 30.00%

Abstract:

We examined how international food price shocks have affected local inflation processes in Brazil, Chile, Colombia, Mexico, and Peru over the past decade. Using impulse-response analysis from cointegrated VARs, we find that international food inflation shocks take from one to six quarters to pass through to domestic headline inflation, depending on the country. In addition, by calculating the elasticity of local prices to an international food price shock, we find that this pass-through is not complete. We also take a closer look at how this type of shock affects local food and core prices separately, and assess the possibility of second-round effects on core inflation stemming from the shock. We find that a transmission to headline prices does occur, and that part of the transmission is associated with rising core prices, both directly and through possible second-round effects, which implies a role for monetary policy when such a shock takes place. This is especially relevant given that international food prices have recently been on an upward trend after falling considerably during the Great Recession.
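A simplified sketch of the pass-through calculation: the cumulative response of the log domestic headline CPI to a shock in the log international food price, here taken from a VAR in log levels on synthetic data rather than the paper's cointegrated VAR.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 80                                               # quarters of synthetic data
intl_food = np.cumsum(rng.normal(0.01, 0.03, n))     # log international food price index
headline = 0.3 * intl_food + np.cumsum(rng.normal(0.008, 0.01, n))  # log headline CPI

df = pd.DataFrame({"intl_food": intl_food, "headline_cpi": headline})
res = VAR(df).fit(maxlags=4, ic="aic")
irf = res.irf(12)                                    # impulse responses over 12 quarters

cum = irf.irfs.cumsum(axis=0)[-1]                    # cumulative responses, [response, impulse]
i_h = df.columns.get_loc("headline_cpi")
i_f = df.columns.get_loc("intl_food")
pass_through = cum[i_h, i_f] / cum[i_f, i_f]         # elasticity of local to international prices
print(f"estimated pass-through elasticity ≈ {pass_through:.2f}")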

Relevance: 30.00%

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for application domains that are predominant in cloud computing environments.

In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that uses workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime.

In the second part of my dissertation, I focus on sampling-based progressive analytics as a means of reducing the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides the data scientist with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics.

Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
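A toy sketch of the progressive-sampling idea behind NOW!: answer an aggregate query on progressively larger samples, scale the estimate, and attach a rough confidence interval so that early approximate results arrive cheaply; the data and sample fractions are synthetic.

import numpy as np

rng = np.random.default_rng(42)
table = rng.exponential(scale=100.0, size=1_000_000)   # e.g. order amounts (synthetic)

def progressive_sum(data, fractions=(0.01, 0.05, 0.25, 1.0)):
    """Yield scaled SUM estimates with rough 95% error bounds for growing samples."""
    n = data.size
    for f in fractions:
        k = int(n * f)
        sample = rng.choice(data, size=k, replace=False)
        est = sample.mean() * n                         # scaled SUM estimate
        se = sample.std(ddof=1) / np.sqrt(k) * n        # rough standard error of the estimate
        yield f, est, 1.96 * se

for frac, est, ci in progressive_sum(table):
    print(f"sample {frac:5.0%}: SUM ≈ {est:,.0f} ± {ci:,.0f}")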

Relevance: 30.00%

Abstract:

Ten growth or wood-quality traits were assessed in three nearby Corymbia citriodora subsp. variegata (CCV) open-pollinated family-within-provenance trials (18 provenances represented by a total of 374 families) to provide information for the development of a breeding program targeting both pulp and solid-wood products. Growth traits (diameter at breast height over bark [DBH], height, and conical volume) were assessed at 3 and 7 years of age. Wood-quality traits (density [DEN], Kraft pulp yield [KPY], modulus of elasticity [MoE], and microfibril angle [MfA]) were predicted using near-infrared spectroscopy on wood samples collected from these trials when the trees were between 10 and 12 years old. The high average KPY, DEN, and MoE, and the low average MfA, indicate that CCV is well suited to both pulp and timber products. All traits were under moderate to strong genetic control. In across-trial analyses, high (>0.4) heritability estimates were observed for height, DEN, MoE, and MfA, while moderate heritability estimates (0.24 to 0.34) were observed for DBH, volume, and KPY. Most traits showed very low levels of genotype × site interaction. Estimated age–age genetic correlations for growth traits were strong at both the family (0.97) and provenance (0.99) levels. Relationships among traits (additive genetic correlation estimates) were favourable, with strong positive estimates between growth traits (0.84 to 0.98), moderate positive values between growth and wood-quality traits (0.32 to 0.68), a moderate positive correlation between KPY and MoE (0.64), and a high positive correlation between DEN and MoE (0.82). However, negative (but favourable) correlations were detected between MfA and all other evaluated traits (−0.31 to −0.96). The genetic correlation between the same trait expressed on two different sites, at the family level, ranged from 0.24 to 0.42 for growth traits and from 0.29 to 0.53 for wood traits. Simultaneous genetic improvement of growth and wood property traits in CCV for the target environment in south-east Queensland should therefore be possible, given the moderate to high estimates of heritability and the favourable correlations amongst all traits studied, unless genotype × site interactions are greater than was evident here. © 2016 NISC (Pty) Ltd
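Heritabilities like those quoted above are typically derived from the variance components of an open-pollinated family model, roughly as sketched below; the variance components and the coefficient of 2.5 (a common assumption for open-pollinated families that allows for partial selfing) are illustrative, not the values estimated in this study.

def op_heritability(var_family, var_residual, coef=2.5):
    """Narrow-sense heritability from family and residual variance components.

    coef approximates the ratio of additive to family variance for
    open-pollinated families (an assumption, often taken as 2.5).
    """
    var_additive = coef * var_family
    var_phenotypic = var_family + var_residual
    return var_additive / var_phenotypic

# Hypothetical variance components for a single trait
print(round(op_heritability(var_family=4.2, var_residual=28.0), 2))  # ~0.33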

Relevance: 30.00%

Abstract:

The main aim of this study was to analyze evidence of an environmental Kuznets curve (EKC) for water pollution in developing and developed countries. The study was based on a panel data set of 54 countries for the years 1995 to 2006, categorized into six groups, including "developed countries", "developing countries", "developed countries with low income", "developed countries with high income", and "coastal countries". The results do not confirm the inverted U-shape of the EKC for the developed countries with low income. Based on the estimated turning points and average GDP per capita, the study reveals at which point of the EKC each country group lies. Furthermore, the impacts of the capital-to-labor ratio as well as trade openness are examined by estimating different models of the EKC. The magnitude of each explanatory variable's effect on BOD (biochemical oxygen demand) was calculated by estimating the associated elasticity.
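A minimal sketch of the EKC specification and its turning point: regress the pollution indicator on income and income squared, and with a negative quadratic term the turning point is -b1/(2*b2). The data are synthetic and the variable names are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
gdp = rng.uniform(1_000, 40_000, 200)                          # GDP per capita (synthetic)
bod = 5 + 0.004 * gdp - 1e-7 * gdp**2 + rng.normal(0, 5, 200)  # inverted-U plus noise
df = pd.DataFrame({"bod": bod, "gdp": gdp})

res = smf.ols("bod ~ gdp + I(gdp**2)", data=df).fit()
b1, b2 = res.params.iloc[1], res.params.iloc[2]   # coefficients on gdp and gdp squared
turning_point = -b1 / (2 * b2)
print(f"estimated EKC turning point ≈ {turning_point:,.0f} GDP per capita")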

Relevance: 30.00%

Abstract:

This dissertation describes two studies on macroeconomic trends and cycles. The first chapter studies the impact of Information Technology (IT) on the U.S. labor market. Over the past 30 years, the employment and income shares of routine-intensive occupations have declined significantly relative to nonroutine occupations, and the overall U.S. labor income share has declined relative to capital. Furthermore, the decline of routine employment has been largely concentrated in recessions and the ensuing recoveries. I build a model of unbalanced growth to assess the role of computerization and IT in driving these labor market trends and cycles. I augment a neoclassical growth model with exogenous IT progress as a form of Routine-Biased Technological Change (RBTC). I show analytically that RBTC causes the overall labor income share to follow a U-shaped time path, as the monotonic decline of the routine labor share is increasingly offset by the monotonic rise of the nonroutine labor share and the elasticity of substitution between overall labor and capital declines under IT progress. Quantitatively, the model explains nearly all of the divergence between routine and nonroutine labor in the period 1986-2014, as well as the mild decline of the overall labor share between 1986 and the early 2000s. However, the model with IT progress alone cannot explain the accelerated decline of the labor income share after the early 2000s, suggesting that other factors, such as globalization, may have played a larger role in this period. Lastly, when nonconvex labor adjustment costs are present, the model generates a stepwise decline in routine labor hours, qualitatively consistent with the data. The timing of these trend adjustments can be significantly affected by aggregate productivity shocks and concentrated in recessions.

The second chapter studies the implications of loss aversion for the business cycle dynamics of aggregate consumption and labor hours. Loss aversion refers to the fact that people are distinctly more sensitive to losses than to gains. Loss-averse agents are very risk averse around the reference point and exhibit asymmetric responses to positive and negative income shocks. In an otherwise standard Real Business Cycle (RBC) model, I study loss aversion both in consumption alone and in consumption and leisure together. My results indicate that how loss aversion affects business cycle dynamics depends critically on the nature of the reference point. If, for example, the reference point is the status quo, loss aversion dramatically lowers the effective intertemporal rate of substitution and induces excessive consumption smoothing. In contrast, if the reference point is fixed at a constant level, loss aversion generates a flat region in the decision rules and asymmetric impulse responses to technology shocks. Under a reasonable parametrization, loss aversion has the potential to generate asymmetric business cycles with deeper and more prolonged recessions.
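As a sketch of the mechanism in the first chapter, the snippet below computes the labor income share implied by a CES aggregate of capital and labor under competitive factor markets; the parameter values are illustrative, not calibrated, and whether capital deepening lowers or raises the share depends on whether the elasticity of substitution σ = 1/(1-ρ) is above or below one.

def labor_share(K, L, alpha=0.35, rho=0.3):
    """Labor share under Y = (alpha*K**rho + (1-alpha)*L**rho)**(1/rho), competitive markets."""
    return (1 - alpha) * L**rho / (alpha * K**rho + (1 - alpha) * L**rho)

# With rho > 0 (sigma > 1), capital deepening lowers the labor share;
# with rho < 0 (sigma < 1), it raises it.
for K in (1.0, 2.0, 4.0):
    print(f"K/L = {K:.0f}: labor share = {labor_share(K, L=1.0):.3f}")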

Relevance: 30.00%

Abstract:

This dissertation is composed of three essays covering two areas of interest. The first is personal transportation demand, with a focus on the price and fuel-efficiency elasticities of mileage demand, challenging assumptions common in the rebound-effect literature. The second is consumer finance, with a focus on small loans. The first chapter creates separate variables for fuel prices during periods of increasing and decreasing prices, as well as an observed fuel economy measure, to empirically test the equivalence of these elasticities. Using a panel from Germany from 1997 to 2009, I find a fuel economy elasticity of mileage of 53.3%, which is significantly different from the gas price elasticity of mileage during periods of decreasing gas prices, 4.8%. I reject the null hypothesis of price symmetry, with the elasticity of mileage during periods of increasing gas prices ranging from 26.2% to 28.9%. The second chapter explores the potential for the rebound effect to vary with income. Panel data on U.S. households from 1997 to 2003 are used to estimate the rebound effect in a median regression. The estimated rebound effect independent of income ranges from 17.8% to 23.6%. An interaction of income and fuel economy is negative and significant, indicating that the rebound effect may be much higher for low-income individuals and decreases with income; the rebound effect for low-income households ranged from 80.3% to 105.0%, indicating that such households may increase gasoline consumption given an improvement in fuel economy. The final chapter documents the costs of credit instruments found in major mail-order catalogs throughout the 20th century. This study constructs a new dataset and finds that the cost of credit increased and became stickier as mail-order retailers switched from an installment-style closed-end loan to a revolving-style credit card. This study argues that revolving credit's ability to decrease the salience of credit costs in the price of goods is the best explanation for rate stickiness in the mail-order industry as well as for the preference for revolving credit among retailers.
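A sketch of the median-regression setup used for the rebound effect: the coefficient on log fuel economy in a log-mileage equation estimated at the median via quantile regression; the data, variable names, and built-in elasticity are synthetic.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "mpg": rng.uniform(15, 45, n),               # fuel economy (synthetic)
    "income": rng.uniform(20_000, 120_000, n),   # household income (synthetic)
})
# Synthetic mileage with a built-in "rebound" elasticity of about 0.2
df["miles"] = np.exp(8 + 0.2 * np.log(df["mpg"]) + 0.1 * np.log(df["income"])
                     + rng.normal(0, 0.3, n))

# Median (q = 0.5) regression; the coefficient on np.log(mpg) is the rebound effect
res = smf.quantreg("np.log(miles) ~ np.log(mpg) + np.log(income)", df).fit(q=0.5)
print(res.params)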