Abstract:
Our paper deals with the paradoxical phenomenon that, in the equilibrium solutions of the Neumann model with explicitly modelled consumption, the prices of the subsistence goods that determine the wage may in some cases be zero, so that the equilibrium value of the real wage is zero as well. This phenomenon always occurs in decomposable economies in which alternative equilibrium solutions with different growth and profit rates exist. The phenomenon can be discussed far more transparently in the simpler variant of the model built on Leontief technology, and we take advantage of this. We show that solutions whose growth factor is below the maximal one are economically meaningless and therefore of no interest. In doing so we demonstrate, on the one hand, that Neumann's excellent intuition served him well when he insisted on a unique solution to his model and, on the other hand, that no indecomposability assumption on the economy is needed for this. The topic examined is closely related to Ricardo's analysis of the determination of the general rate of profit, cast in modern form by Sraffa, and to the celebrated wage-profit and accumulation-consumption trade-off frontiers of neoclassical growth theory, which also indicates the theoretical and historical interest of the subject. / === / In the Marx-Neumann version of the Neumann model introduced by Morishima, the use of commodities is split between production and consumption, and wages are determined as the cost of necessary consumption. In such a version it may occur that the equilibrium prices of all goods necessary for consumption are zero, so that the equilibrium wage rate becomes zero too. In fact, such a paradoxical case will always arise when the economy is decomposable and the equilibrium is not unique in terms of growth and interest rate. It can be shown that a zero equilibrium wage rate will appear in all equilibrium solutions whose growth and interest rate are less than maximal. This is another proof of Neumann's genius and intuition, for he arrived at the uniqueness of equilibrium via an assumption that implied that the economy was indecomposable, a condition relaxed later by Kemeny, Morgenstern and Thompson. The same situation occurs in similar models based on Leontief technology, and such versions of the Marx-Neumann model make the roots of the problem more apparent. Their analysis also yields an interesting corollary to Ricardo's corn rate of profit: the real cause of the awkwardness is bad specification of the model; luxury commodities are introduced without there being any final demand for them, and their production becomes a waste of resources. Bad model specification shows up as a consumption coefficient incompatible with the given technology in the more general model with joint production and technological choice, for the paradoxical situation implies that the level of consumption could be raised and/or the intensity of labour diminished without lowering the equilibrium rate of growth and interest. This entails wasteful use of resources and indicates again that the equilibrium conditions are improperly specified. It is shown that the conditions for equilibrium can and should be redefined for the Marx-Neumann model, without assuming an indecomposable economy, in a way that ensures the existence of an equilibrium that is unique in terms of the growth and interest rate and coupled with a positive wage rate, thus confirming Neumann's intuition.
The proposed solution relates closely to the findings of Bromek in a paper correcting Morishima's generalization of the wage/profit and consumption/investment frontiers.
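None of the code below is from the paper itself, but its central object, the maximal (von Neumann) growth factor, is easy to compute numerically. The sketch assumes a toy technology (A and B are hypothetical input and output matrices, not data from the paper) and finds the largest alpha for which some non-negative intensity vector x satisfies xB >= alpha*xA, by bisection over linear-programming feasibility checks.

```python
# Hypothetical toy technology; an illustrative sketch, not the paper's method.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],     # A[i, j]: input of good j used by process i
              [0.0, 1.0]])
B = np.array([[2.0, 0.5],     # B[i, j]: output of good j from process i
              [0.5, 1.5]])

def feasible(alpha):
    """Is there an intensity vector x >= 0, sum(x) = 1, with xB >= alpha*xA?"""
    G = (B - alpha * A).T                     # goods x processes
    res = linprog(np.zeros(A.shape[0]), A_ub=-G, b_ub=np.zeros(G.shape[0]),
                  A_eq=np.ones((1, A.shape[0])), b_eq=[1.0],
                  bounds=(0, None))
    return res.status == 0                    # 0 = a feasible optimum exists

lo, hi = 0.0, 10.0                            # bisection on the growth factor
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(f"maximal growth factor alpha* ~ {lo:.4f}")  # ~2.309 for this toy A, B
```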
Abstract:
The aim of this paper is to build the stated preference method into the social discount rate methodology. The first part of the paper presents the results of a survey of stated time preferences, elicited through pair-choice decision situations for various topics and time horizons. It is assumed that stated time preferences differ from calculated time preferences and that the magnitude of the stated rates depends on the time period and on how much respondents are financially and emotionally involved in the transactions. A significant question remains: how can the gap between the calculations and the survey results be resolved, and how can the real time preferences of individuals be interpreted using a social time preference rate? The second part of the paper estimates the social time preference rate for Hungary using the results of the survey, paying special attention to the pure time preference component. The results suggest that the current method of calculating the pure time preference rate does not reflect the real attitudes of individuals towards future generations.
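For context, social-time-preference estimates of this kind are usually organized around a Ramsey-type decomposition. The snippet below is a generic illustration with placeholder numbers, not the paper's Hungarian estimates; rho is the pure time preference component the abstract focuses on.

```python
# Generic Ramsey-rule decomposition of the social time preference rate:
#   STPR = rho + mu * g
# rho: pure time preference, mu: elasticity of the marginal utility of
# consumption, g: expected per-capita consumption growth.
# All numbers are illustrative placeholders, not the paper's estimates.
rho, mu, g = 0.015, 1.0, 0.02
stpr = rho + mu * g
print(f"STPR = {stpr:.1%}")   # 3.5% with these inputs
```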
Abstract:
This dissertation examines the behavior of the exchange rate under two different scenarios. The first is characterized by relatively low inflation, a situation where prices adjust sluggishly. The second is a high-inflation economy where prices respond very rapidly even to unanticipated shocks. In the first, following a monetary expansion, the exchange rate overshoots, i.e. the nominal exchange rate depreciates at a faster pace than the price level. Under high levels of inflation, prices change faster than the exchange rate, so the exchange rate undershoots its long-run equilibrium value. The standard work in this area, Dornbusch (1976), explains the overshooting process in the context of perfect capital mobility and sluggish adjustment in the goods market: a monetary expansion will make the exchange rate increase beyond its long-run equilibrium value. This dissertation expands on Dornbusch's model and provides an analysis of the exchange rate under conditions of currency substitution and price flexibility, characteristics of the Peruvian economy during the hyperinflation that took place at the end of the 1980s. The results of the modified Dornbusch model reveal that, given a monetary expansion, the change in the price level will be larger than the change in the exchange rate if prices react more than proportionally to the monetary shock. We expect this over-reaction in circumstances of high inflation, when the velocity of money is increasing very rapidly. Increasing velocity of money gives rise to higher relative price variability, which in turn contributes to the appearance of new financial (and also non-financial) instruments that offer a higher return than the exchange rate, causing people to switch their demand for foreign exchange to these new assets. In the context of currency substitution, economic agents hoard and use foreign exchange as a store of value. The big decline in output caused by hyperinflation induces people to sell this hoarded money to finance current expenses, increasing the supply of foreign exchange in the market. Both the decrease in demand and the increase in supply reduce the price of foreign exchange, i.e. the real exchange rate. The findings mentioned above are tested using Peruvian data for the period January 1985-July 1990; the results of the econometric estimation confirm the findings of the theoretical model.
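The overshooting logic can be reproduced with a stylized sticky-price sketch. The following is a textbook-style Dornbusch variant with regressive expectations, not the dissertation's modified model; the parameters lam, nu and theta are illustrative. With sluggish prices (small theta) the exchange rate jumps past its long-run value on impact and then converges back; the dissertation's undershooting case corresponds to prices adjusting faster than the exchange rate.

```python
# Stylized sticky-price monetary model with regressive expectations
# (a textbook Dornbusch-style sketch, not the dissertation's model).
# Money market: m - p = -lam * i ; UIP with regressive expectations
# i = nu * (e_bar - e)  =>  e = e_bar + (m - p) / (lam * nu).
lam, nu, theta = 1.0, 0.5, 0.2    # illustrative parameters
m, e_bar = 1.0, 1.0               # money supply steps from 0 to 1 at t = 0;
p = 0.0                           # long-run e and p both equal m here
path_e = []
for t in range(30):
    e = e_bar + (m - p) / (lam * nu)   # e jumps above e_bar on impact
    path_e.append(e)
    p += theta * (e - p)               # sluggish goods-price adjustment
print(f"impact e = {path_e[0]:.2f}, long-run e = {e_bar:.2f}")  # 3.00 vs 1.00
```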
Abstract:
In response to the increases in pCO2 projected in the 21st century, adult coral growth and calcification are expected to decrease significantly. However, no published studies have investigated the effect of elevated pCO2 on earlier life history stages of corals. Porites astreoides larvae were collected from reefs in Key Largo, Florida, USA, settled and reared in controlled saturation state seawater. Three saturation states were obtained, using 1 M HCl additions, corresponding to present (380 ppm) and projected pCO2 scenarios for the years 2065 (560 ppm) and 2100 (720 ppm). The effect of saturation state on settlement and post-settlement growth was evaluated. Saturation state had no significant effect on percent settlement; however, skeletal extension rate was positively correlated with saturation state, with ~50% and 78% reductions in growth at the mid and high pCO2 treatments compared to controls, respectively.
Abstract:
A new variant of the Element-Free Galerkin (EFG) method, which combines the diffraction method to characterize the crack-tip solution with the Heaviside enrichment function to represent the discontinuity due to a crack, has been used to model crack propagation through non-homogeneous materials. In the case of interface crack propagation, the kink angle is predicted by applying the maximum tangential principal stress (MTPS) criterion in conjunction with consideration of the energy release rate (ERR). The MTPS criterion is applied to the crack-tip stress field described by both the stress intensity factor (SIF) and the T-stress, which are extracted using the interaction integral method. The proposed EFG method has been developed and applied to 2D case studies involving a crack in an orthotropic material, a crack along an interface, and a crack terminating at a bi-material interface, under mechanical or thermal loading; this is done to demonstrate the advantages and efficiency of the proposed methodology. The computed SIFs, T-stress and predicted interface crack kink angles are compared with existing results in the literature and are found to be in good agreement. An example of crack growth through a particle-reinforced composite material, which may involve crack meandering around the particle, is also reported.
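For reference, the SIF-only special case of the kink-angle prediction is compact enough to state exactly. The sketch below implements the classical Erdogan-Sih maximum tangential stress criterion; the paper's MTPS criterion additionally incorporates the T-stress, which is omitted here.

```python
# Classical Erdogan-Sih maximum tangential stress (MTS) kink angle from
# mode-I/II stress intensity factors: the SIF-only special case of the
# MTPS criterion (the paper also includes the T-stress, omitted here).
import numpy as np

def mts_kink_angle(KI, KII):
    """Kink angle (rad) solving K_I sin(t) + K_II (3 cos(t) - 1) = 0."""
    if KII == 0.0:
        return 0.0                      # pure mode I: straight-ahead growth
    r = KI / KII
    return 2.0 * np.arctan(0.25 * (r - np.sign(KII) * np.sqrt(r * r + 8.0)))

print(np.degrees(mts_kink_angle(1.0, 0.0)))   #  0.0 (pure mode I)
print(np.degrees(mts_kink_angle(0.0, 1.0)))   # about -70.5 (pure mode II)
```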
Abstract:
This paper investigates the achievable sum-rate of uplink massive multiple-input multiple-output (MIMO) systems under a practical channel impairment, namely aged channel state information (CSI). Taking into account both maximum ratio combining (MRC) and zero-forcing (ZF) receivers at the base station, we present tight closed-form lower bounds on the sum-rate for both receivers, which provide an efficient means to evaluate the sum-rate of the system. More importantly, we characterize the impact of channel aging on the power scaling law. Specifically, we show that the transmit power of each user can be scaled down by 1/√M, which indicates that aged CSI does not affect the power scaling law; instead, it only reduces the sum-rate by lowering the effective signal-to-interference-and-noise ratio (SINR).
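The 1/√M claim can be illustrated numerically. Below is a hedged Monte Carlo sketch, not the paper's derivation: it assumes a Gauss-Markov aging model with correlation coefficient alpha and pilot-based MMSE channel estimates (tau = K orthogonal pilots), with all parameter values illustrative. The mean MRC SINR flattens toward a constant (roughly alpha^2 * tau * Eu^2 under these assumptions) as M grows, consistent with aging rescaling the SINR without breaking the scaling law.

```python
# Monte Carlo sketch of MRC SINR with per-user power p = Eu / sqrt(M),
# pilot-based MMSE estimates, and Gauss-Markov channel aging (alpha).
# Illustrative assumptions, not the paper's exact system model.
import numpy as np

rng = np.random.default_rng(0)
K, Eu, trials = 4, 1.0, 200
tau = K                                  # orthogonal pilot length

def crandn(*shape):
    """Circularly symmetric complex Gaussian, unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

for alpha in (1.0, 0.9):
    print(f"alpha = {alpha}: rough limit alpha^2*tau*Eu^2 = {alpha**2 * tau * Eu**2:.2f}")
    for M in (64, 256, 1024, 4096):
        p = Eu / np.sqrt(M)
        sinrs = []
        for _ in range(trials):
            h = crandn(M, K)                                       # at pilot time
            hd = alpha * h + np.sqrt(1 - alpha**2) * crandn(M, K)  # aged channel
            # MMSE estimate from pilots: y_p = sqrt(tau*p)*h + noise
            hhat = (np.sqrt(tau * p) / (1 + tau * p)) * (np.sqrt(tau * p) * h + crandn(M, K))
            g = hhat[:, 0]                                         # MRC for user 1
            powers = p * np.abs(g.conj() @ hd) ** 2                # |g^H h_k|^2
            sinrs.append(powers[0] / (powers[1:].sum() + np.linalg.norm(g) ** 2))
        print(f"  M = {M:5d}: mean SINR = {np.mean(sinrs):.3f}")
```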
Abstract:
The St. Lawrence River valley in eastern Canada is one of the most seismically active regions in eastern North America and is characterized by numerous intraplate earthquakes. After the rigid rotation of the tectonic plate, glacial isostatic adjustment is by far the largest source of geophysical signal in eastern Canada. The crustal deformations and deformation rates of this region were studied using more than 14 years of observations (9 years on average) from 112 continuously operating GPS stations. The velocity field was obtained from cleaned daily GPS coordinate time series by applying a combined model using weighted least squares. Velocities were estimated with noise models that include the temporal correlations of the three-dimensional coordinate time series. The horizontal velocity field shows the counterclockwise rotation of the North American plate, with a mean velocity of 16.8±0.7 mm/yr in a no-net-rotation model relative to ITRF2008. The vertical velocity field confirms uplift due to glacial isostatic adjustment throughout eastern Canada, with a maximum rate of 13.7±1.2 mm/yr, and subsidence to the south, mainly in the northern United States, with a typical rate of −1 to −2 mm/yr and a minimum rate of −2.7±1.4 mm/yr. The noise behavior of the three-dimensional GPS coordinate time series was analyzed using spectral analysis and maximum likelihood estimation to test five noise models: power law; white noise; white plus flicker noise; white plus random-walk noise; and white plus flicker plus random-walk noise. The results show that the combination of white and flicker noise best describes the stochastic part of the time series. The amplitudes of all noise models are smallest in the north direction and largest in the vertical direction. White-noise amplitudes are roughly equal across the study area and are therefore exceeded, in all directions, by the flicker and random-walk noise. The flicker-noise model increases the uncertainty of the estimated velocities by a factor of 5 to 38 compared with the white-noise model. The velocities estimated with all noise models are statistically consistent. The estimated Euler pole parameters for this region are slightly, but significantly, different from the overall rotation of the North American plate. This difference potentially reflects local stresses in this seismic region and stresses caused by the difference in intraplate velocities between the two shores of the St. Lawrence River. The crustal deformation of the region was studied using least-squares collocation. The interpolated horizontal velocities show spatially coherent motion: radially outward around the centers of maximum uplift in the north and radially inward toward the centers of maximum subsidence in the south, with a typical velocity of 1 to 1.6±0.4 mm/yr. However, this pattern becomes more complex near the margins of the formerly glaciated zones.
Based on their directions, the intraplate horizontal velocities can be divided into three distinct zones. This confirms the conclusions of other researchers on the existence of three ice domes in the study region before the Last Glacial Maximum. A spatial correlation is observed between the zones of higher-magnitude intraplate horizontal velocities and the seismic zones along the St. Lawrence River. The vertical velocities were then interpolated to model the vertical deformation. The model shows a maximum uplift rate of 15.6 mm/yr southeast of Hudson Bay and a typical subsidence rate of 1 to 2 mm/yr in the south, mainly in the northern United States. Along the St. Lawrence River, the horizontal and vertical motions are spatially coherent: there is a southeastward displacement of about 1.3 mm/yr and a mean uplift of 3.1 mm/yr relative to the North American plate. The vertical deformation rate is about 2.4 times larger than the horizontal intraplate deformation rate. The results of the deformation analysis show the current state of deformation in eastern Canada as extension in the northern part (where the area is uplifting) and compression in the southern part (where the area is subsiding). Rotation rates average 0.011°/Myr. We observed NNW-SSE compression at a rate of 3.6 to 8.1 nstrain/yr in the Lower St. Lawrence seismic zone. In the Charlevoix seismic zone, extension at a rate of 3.0 to 7.1 nstrain/yr is oriented ENE-WSW. In the Western Quebec seismic zone, the deformation has a shear mechanism, with a compression rate of 1.0 to 5.1 nstrain/yr and an extension rate of 1.6 to 4.1 nstrain/yr. These measurements agree, to first order, with glacial isostatic adjustment models and with the maximum horizontal compressive stress of the World Stress Map project, obtained from focal mechanisms.
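To see why the choice of noise model matters for the velocity uncertainties (the factor of 5 to 38 reported above), here is a hedged simulation sketch, not the thesis's analysis: synthetic daily position series with a linear trend plus white and approximate flicker noise, comparing the naive white-noise formal error against the empirical scatter of least-squares velocity estimates. All amplitudes are placeholders.

```python
# Why flicker noise inflates GPS velocity uncertainties relative to a pure
# white-noise assumption (illustrative sketch with placeholder amplitudes).
import numpy as np

rng = np.random.default_rng(1)
n_days, n_series = 3650, 300                 # ~10 years of daily solutions
t = np.arange(n_days) / 365.25               # time in years
v_true, sw, sf = 2.0, 1.0, 1.0               # trend (mm/yr); noise amps (mm)

def flicker(n):
    """Approximate flicker noise: shape white noise to a 1/f power spectrum."""
    f = np.fft.rfftfreq(n); f[0] = f[1]      # avoid division by zero at DC
    x = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) / np.sqrt(f), n)
    return x / x.std()

A = np.column_stack([np.ones(n_days), t])    # design: intercept + trend
cov = np.linalg.inv(A.T @ A)
v_hat, formal = [], []
for _ in range(n_series):
    y = v_true * t + sw * rng.standard_normal(n_days) + sf * flicker(n_days)
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    v_hat.append(coef[1])
    formal.append(np.sqrt(cov[1, 1]) * (y - A @ coef).std())

print(f"formal sigma (white-noise assumption): {np.mean(formal):.4f} mm/yr")
print(f"empirical sigma of fitted velocities:  {np.std(v_hat):.4f} mm/yr")
```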
Abstract:
In the context of active control of rotating machines, standard optimal controller methods enable a trade-off to be made between (weighted) mean-square vibrations and (weighted) mean-square currents injected into magnetic bearings. One shortcoming of such controllers is that no attention is paid to the voltages required. In practice, the available voltage imposes a strict limit on the maximum possible rate of change of control force (the force slew rate). This paper removes this shortcoming of traditional optimal control.
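The voltage limitation can be made concrete with generic electromagnet physics (not the paper's model): with a coil force proportional to the current squared and first-order coil dynamics, the available voltage bounds the force slew rate. All numbers below are hypothetical.

```python
# Back-of-envelope sketch: with coil force F = k*i^2 and coil dynamics
# V = L*di/dt + R*i, the maximum force slew rate at operating current i is
#   dF/dt = 2*k*i*(Vmax - R*i)/L.
# Generic electromagnet physics with hypothetical values, not the paper's model.
k, L, R, Vmax = 20.0, 0.05, 1.0, 48.0   # N/A^2, H, ohm, V (all hypothetical)
i = 2.0                                  # operating (bias) current, A
dF_dt_max = 2 * k * i * (Vmax - R * i) / L
print(f"max force slew rate ~ {dF_dt_max:.0f} N/s at i = {i} A")
```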
Abstract:
This research arose from the evident need to promote oral production among the adult learners of the English Extension courses at Universidad del Valle in 2014. This qualitative research was carried out in a 60-hour course divided into 15 Saturday sessions, with an adult population between 22 and 65 years of age. Its main objective was to describe the impact of games aimed at promoting oral production in English with a group of adult learners. Data were collected from a demographic survey, video recordings of classroom events during the implementation of the games, student surveys after each game, and a teacher's journal. The analysis of the data showed that games did have an impact on students' performance, which was related to a positive atmosphere in the classroom. Students showed progress in terms of fluency, interaction and even pronunciation; however, they still showed difficulties with accuracy in their spontaneous utterances. These learners' achievements seemed to be related to the class atmosphere during games, where students showed high levels of involvement, confidence, mutual support and enjoyment.
Abstract:
The purpose of the research was to investigate cow characteristics, farm facilities, and herd management strategies during the dry period to examine their joint influence on the rate of clinical mastitis after calving. Data were collected over a 2-yr period from 52 commercial dairy farms throughout England and Wales. Cows were separated for analysis into those housed for the dry period (8,710 cow-dry periods) and those at pasture (9,964 cow-dry periods). Multilevel models were used within a Bayesian framework with 2 response variables, the occurrence of a first case of clinical mastitis within the first 30 d of lactation and time to the first case of clinical mastitis during lactation. A variety of cow and herd management factors were identified as being associated with an increased rate of clinical mastitis and these were found to occur throughout the dry period. Significant cow factors were increased parity and at least one somatic cell count ≥200,000 cells/mL in the 90 d before drying off. A number of management factors related to hygiene were significantly associated with an increased rate of clinical mastitis. These included measures linked to the administration of dry-cow treatments and management of the early and late dry-period accommodation and calving areas. Other farm factors associated with a reduced rate of clinical mastitis were vaccination with a leptospirosis vaccine, selection of dry-cow treatments for individual cows within a herd rather than for the herd as a whole, routine body condition scoring of cows at drying off, and a pasture rotation policy of grazing dry cows for a maximum of 2 wk before allowing the pasture to remain nongrazed for a period of 4 wk. Models demonstrated a good ability to predict the farm incidence rate of clinical mastitis in a given year, with model predictions explaining over 85% of the variability in the observed data. The research indicates that specific dry-period management strategies have an important influence on the rate of clinical mastitis during the next lactation.
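As an illustration of the modelling approach, here is a hedged sketch of a Bayesian multilevel (random farm intercept) logistic model for a first clinical mastitis case in early lactation, written with PyMC. The variables, priors and simulated data are placeholders, not the study's actual dataset or model specification.

```python
# Hedged sketch of a multilevel Bayesian logistic model of the kind
# described: farm-level random intercepts plus cow-level covariates.
# Simulated placeholder data, not the study's dataset.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(2)
n_farms, n_cows = 20, 400
farm = rng.integers(0, n_farms, n_cows)           # farm index per cow
parity = rng.integers(1, 6, n_cows)               # lactation number
high_scc = rng.integers(0, 2, n_cows)             # any SCC >= 200,000 cells/mL
logit = -2.0 + 0.2 * parity + 0.8 * high_scc + rng.normal(0, 0.5, n_farms)[farm]
mastitis = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # first case in 30 d?

with pm.Model():
    b0 = pm.Normal("b0", 0, 2)
    b = pm.Normal("b", 0, 1, shape=2)
    sd_farm = pm.HalfNormal("sd_farm", 1)
    u = pm.Normal("u", 0, sd_farm, shape=n_farms)  # farm random intercepts
    eta = b0 + b[0] * parity + b[1] * high_scc + u[farm]
    pm.Bernoulli("y", logit_p=eta, observed=mastitis)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

print(az.summary(idata, var_names=["b0", "b", "sd_farm"]))
```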
Abstract:
Cardiovascular disease is one of the leading causes of death around the world. Resting heart rate has been shown to be a strong and independent risk marker for adverse cardiovascular events and mortality, and yet its role as a predictor of risk is somewhat overlooked in clinical practice. With the aim of highlighting its prognostic value, the role of resting heart rate as a risk marker for death and other adverse outcomes was further examined in a number of different patient populations. A systematic review of studies that previously assessed the prognostic value of resting heart rate for mortality and other adverse cardiovascular outcomes was presented. New analyses of nine clinical trials were carried out. Both the original and the extended Cox model that allows for analysis of time-dependent covariates were used to evaluate and compare the predictive value of baseline and time-updated heart rate measurements for adverse outcomes in the CAPRICORN, EUROPA, PROSPER, PERFORM, BEAUTIFUL and SHIFT populations. Pooled individual patient meta-analyses of the CAPRICORN, EPHESUS, OPTIMAAL and VALIANT trials, and of the BEAUTIFUL and SHIFT trials, were also performed. The discrimination and calibration of the models applied were evaluated using Harrell's C-statistic and likelihood ratio tests, respectively. Finally, following on from the systematic review, meta-analyses of the relation between baseline and time-updated heart rate and the risk of death from any cause and from cardiovascular causes were conducted. Both elevated baseline and time-updated resting heart rates were found to be associated with an increase in the risk of mortality and other adverse cardiovascular events in all of the populations analysed. In some cases, elevated time-updated heart rate was associated with risk of events where baseline heart rate was not. Time-updated heart rate also contributed additional information about the risk of certain events despite knowledge of baseline heart rate or previous heart rate measurements. The addition of resting heart rate to the models where resting heart rate was found to be associated with risk of outcome improved both discrimination and calibration, and in general, the models including time-updated heart rate along with baseline or the previous heart rate measurement had the highest (and similar) C-statistics, and thus the greatest discriminative ability. The meta-analyses demonstrated that a 5 bpm higher baseline heart rate was associated with a 7.9% and an 8.0% increase in the risk of all-cause and cardiovascular death, respectively (both p < 0.001). Additionally, a 5 bpm higher time-updated heart rate (adjusted for baseline heart rate in eight of the ten studies included in the analyses) was associated with a 12.8% (p < 0.001) and a 10.9% (p < 0.001) increase in the risk of all-cause and cardiovascular death, respectively. These findings may motivate health care professionals to routinely assess resting heart rate in order to identify individuals at a higher risk of adverse events. The fact that the addition of time-updated resting heart rate improved the discrimination and calibration of models for certain outcomes, even if only modestly, strengthens the case that it be added to traditional risk models. The findings, however, are of particular importance, and have greater implications, for the clinical management of patients with pre-existing disease.
An elevated or increasing heart rate over time could be used as a tool, potentially alongside other established risk scores, to help doctors identify patient deterioration or those at higher risk, who might benefit from more intensive monitoring or treatment re-evaluation. Further exploration of the role of continuous recording of resting heart rate, say, when patients are at home, would be informative. In addition, investigation into the cost-effectiveness and optimal frequency of resting heart rate measurement is required. One of the most vital areas for future research is the establishment of an objective cut-off value for defining a high resting heart rate.
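The reported per-5 bpm increases translate directly into per-bpm hazard ratios if one assumes the log-hazard is linear in heart rate; the small computation below does that arithmetic on the abstract's own figures.

```python
# Convert the reported per-5-bpm risk increases into per-1-bpm hazard
# ratios, assuming the log-hazard is linear in heart rate.
hr5 = {"baseline, all-cause": 1.079, "baseline, CV": 1.080,      # +7.9%, +8.0%
       "time-updated, all-cause": 1.128, "time-updated, CV": 1.109}
for name, hr in hr5.items():
    print(f"{name}: per 5 bpm = {hr:.3f}, per 1 bpm = {hr ** 0.2:.4f}")
# e.g. baseline all-cause: 1.079 per 5 bpm is about 1.0153 per bpm.
```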
Abstract:
We propose an alternative crack propagation algorithm which effectively circumvents the variable-transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which avoids explicit crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge-split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
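The localization-limiter idea is easiest to see in one dimension. Below is a hedged finite-difference sketch of the generic screened-Poisson (implicit-gradient) regularization d_bar - l^2 * d_bar'' = d with zero-flux boundaries; it is not the paper's discretization, and all parameters are illustrative. A spiky damage field is smeared over the length scale l, which is what prevents mesh-dependent localization.

```python
# 1D finite-difference sketch of a screened-Poisson localization limiter:
# solve (I - l^2 * D2) d_bar = d with zero-flux (Neumann) boundaries.
# Generic implicit-gradient form with illustrative parameters.
import numpy as np

n, h, ell = 201, 0.005, 0.02              # nodes, grid spacing, length scale
d = np.zeros(n); d[n // 2] = 1.0          # spiky local "damage" field

c = (ell / h) ** 2
A = np.eye(n)
for i in range(1, n - 1):                 # interior second-difference stencil
    A[i, i - 1] -= c; A[i, i] += 2 * c; A[i, i + 1] -= c
A[0, 0] += 2 * c; A[0, 1] -= 2 * c        # ghost-node zero-flux ends
A[-1, -1] += 2 * c; A[-1, -2] -= 2 * c
d_bar = np.linalg.solve(A, d)
print(f"peak damage smoothed from {d.max():.2f} to {d_bar.max():.3f}")
```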
Abstract:
For more than two decades we have witnessed in Latin America, and in Argentina particularly, the development of policies to extend the school day. We understand that the implementation of such policies is an opportunity to observe the behavior of the school when faced with an attempt to modify one of its hardest components, school time; it also becomes a natural laboratory for analyzing how much the traditional organization of school time can resist, how it changes, and how these changes (if implemented) affect the rest of the school's components (spaces, groups, etc.). This paper presents the state of the art of the most significant studies in two research fields concerning this matter in primary education: on the one hand, studies related to the organization and extension of school time; on the other, research on the structural and structuring components of schooling. The literature review indicates that studies on school time, and on the corresponding extension policies and programs, do not report the difficulties encountered when trying to modify the hard components of the school system. Studies that take the school system as their object have not examined the numerous school-time extension experiences, even though time is one of the structural elements of that system.
Abstract:
In Australia, between 1994 and 2000, 50 construction workers were killed each year as a result of their work. The industry fatality rate, at 10.4 per 100,000 persons, is similar to the national road toll fatality rate, and the rate of serious injury is 50% higher than the all-industries average. This poor performance represents a significant threat to the industry's social sustainability. Despite the best efforts of regulators and policy makers at both State and Federal levels, the incidence of death, injury and illness in the Australian construction industry has remained intransigently high, prompting an industry-led initiative to improve the occupational health and safety (OHS) performance of the Australian construction industry. The 'Safer Construction' project involves the development of an evidence-based Voluntary Code of Practice for OHS in the industry.
Abstract:
BIM (Building Information Modelling) is an approach that involves applying and maintaining an integral digital representation of all building information across the different phases of the project lifecycle. This paper presents an analysis of the current state of BIM in the industry and a re-assessment of its role and potential contribution in the near future, given the apparently slow rate of adoption by the industry. The paper analyses the readiness of the building industry with respect to product, processes and people in order to present an argument on where the expectations of BIM and its adoption may have been misplaced. The paper reports on the findings from: (1) a critical review of the latest BIM literature and commercial applications, and (2) workshops with focus groups on changing work practices, the role of technology, and current perceptions and expectations of BIM.