855 results for privacy tort, reasonable expectation of privacy, invasion of privacy, workplace privacy legislation
Abstract:
Introduction: Indications for arthrodesis of the first metatarsophalangeal joint (MTP1) commonly include arthrosis (hallux rigidus), rheumatoid arthritis, failed hallux valgus surgery, severe hallux valgus, infectious arthritis, fractures and neuroarthropathies. Many reports focus on technical and radiological issues, but few studies emphasize the functional outcome in terms of daily activities, sports and patient expectations. Method: We retrospectively reviewed the patients who underwent MTP1 arthrodesis from 2002 to 2005 in our institution. Clinical and radiological results were assessed, but we focused particularly on the functional outcome. The scoring systems used were the SF-12, EQ-5D, PASI, FFI and AOFAS (10 points given to MTP1 mobility) scales. Results: 61 of 64 consecutive patients were evaluated. The female to male ratio was 49:15, mean age at surgery was 67 years, and the average follow-up was 29 months. Although radiological consolidation was incomplete in 18 patients, all patients had a clinically stable and rigid arthrodesis. The mean AOFAS score was 87 (24-100) points at follow-up. The FFI was 5.91% (0-66%). Patient satisfaction was excellent in 37 patients (60%), good in 18 (30%), fair in 5 (8%) and poor in 1 (2%). EQ-5D was 0.7 (0.4-1). 40 patients (66%) rated their cosmetic result as excellent, 15 (25%) as good, 4 (6%) as fair and 2 (3%) as poor. 10 patients (16%) had no shoe-wear limitation, 48 (79%) had to wear comfortable shoes and 3 (5%) needed orthopaedic footwear. Professionally, 34 patients (56%) performed better, 18 (26%) reported no change and 9 (18%) reported reduced capacity, attributable to other health problems. In sports, 16 patients (26%) performed better, 35 patients (57%) reported no change and 10 (17%) were worse, in 7 cases as a consequence of other health problems. Finally, 56 patients (92%) would recommend the operation and 5 (8%) would not. Conclusion: Clinical experience suggests that the idea of fusing the first MTP joint is often initially rejected by patients because they fear being limited by a rigid forefoot. Our results show that this procedure can in fact be proposed for numerous pathological situations with the prospect of a good to excellent outcome in terms of function and quality of life in the majority of cases.
Abstract:
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
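To make the flux-versus-static-count distinction concrete, here is a hedged sketch built around one commonly used readout (LC3-II turnover measured with and without a lysosomal inhibitor); the numbers and function name are hypothetical, and this is only one of many possible assays:

```python
# Hedged sketch: why a static autophagosome count is not a flux measurement.
# Values and condition names are hypothetical, for illustration only.

def lc3_flux(lc3ii_with_inhibitor: float, lc3ii_without_inhibitor: float) -> float:
    """Approximate autophagic flux as the amount of LC3-II that would have been
    degraded in the lysosome: the difference between LC3-II measured with and
    without a lysosomal inhibitor (e.g. bafilomycin A1)."""
    return lc3ii_with_inhibitor - lc3ii_without_inhibitor

# Hypothetical densitometry values (arbitrary units):
# Condition A: fusion with lysosomes is blocked -> autophagosomes pile up.
flux_a = lc3_flux(lc3ii_with_inhibitor=3.1, lc3ii_without_inhibitor=3.0)  # ~0.1, low flux
# Condition B: autophagy is genuinely induced -> more synthesis AND more degradation.
flux_b = lc3_flux(lc3ii_with_inhibitor=4.0, lc3ii_without_inhibitor=1.5)  # ~2.5, high flux

# Both conditions can show "more autophagosomes" than control in a static count,
# yet only condition B reflects increased autophagic activity.
print(flux_a, flux_b)
```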
Abstract:
Many metropolitan areas have experienced extreme boom-bust cycles over the past century. Some places, like Detroit, grew enormously as industrial powerhouses and then declined, while other older cities, like Boston, seem quite resilient. Education does a reasonable job of explaining urban resilience. In this paper, we present a simple model where education increases the level of entrepreneurship. In this model, human capital spillovers occur at the city level because skilled workers produce more product varieties and thereby increase labor demand. We decompose empirically the causes of the connection between skills and urban success and find that skills are associated with growth in productivity or entrepreneurship, not with growth in quality of life, at least outside of the West. We also find that skills seem to have depressed housing supply growth in the West, but not in other regions, which supports the view that educated residents in that region have fought for tougher land-use controls. We also present evidence that skills have had a disproportionately large impact on unemployment during the current recession.
Abstract:
In this paper we present a simple theory-based measure of the variations in aggregate economic efficiency: the gap between the marginal product of labor and the household's consumption/leisure tradeoff. We show that this indicator corresponds to the inverse of the markup of price over social marginal cost, and give some evidence in support of this interpretation. We then show that, with some auxiliary assumptions, our gap variable may be used to measure the efficiency costs of business fluctuations. We find that the latter costs are modest on average. However, to the extent that the flexible price equilibrium is distorted, the gross efficiency losses from recessions and gains from booms may be large. Indeed, we find that the major recessions involved large efficiency losses. These results hold for reasonable parameterizations of the Frisch elasticity of labor supply, the coefficient of relative risk aversion, and steady state distortions.
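One way to make the gap described above concrete is the following sketch in log terms; the split into price and wage markups and the symbols mpn_t, mrs_t, w_t - p_t, mu^p_t, mu^w_t are introduced here for illustration under standard assumptions and are not necessarily the paper's exact notation:

```latex
% Sketch: (log) gap between the marginal product of labor, mpn_t, and the
% household's consumption/leisure tradeoff (marginal rate of substitution), mrs_t,
% decomposed via the real wage w_t - p_t.
\begin{align*}
  gap_t \;\equiv\; mpn_t - mrs_t
        \;=\; \underbrace{\bigl(mpn_t - (w_t - p_t)\bigr)}_{\log \mu^{p}_{t}\ \text{(price markup)}}
        \;+\; \underbrace{\bigl((w_t - p_t) - mrs_t\bigr)}_{\log \mu^{w}_{t}\ \text{(wage markup)}}.
\end{align*}
% Aggregate efficiency then moves inversely with this combined markup of price
% over the social marginal cost of output, which is the interpretation the
% abstract gives to the gap variable.
```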
Abstract:
The present study is about the relationship between teacher expectations and student achievement. Do teachers have the power to influence student achievement? This is the question at hand. Are students under the influence of their teachers with regard to how they perceive themselves as achievers and, ultimately, how well they perform? What other factors come into play when assessing students' academic achievement? In light of the literature written on this topic, the two most prevalent theories are (1) Pygmalion in the Classroom and (2) the Sustaining Effect. These theories show a direct and determinant relationship between teacher expectations and student achievement. The main objective of this study was to investigate whether teachers in Cape Verde follow the same trend. Responses to teacher and student surveys carried out at Domingos Ramos High School gave revealing insights into how Cape Verdean teachers view their students and the role the teachers themselves play in supporting students' academic performance. Is the teacher's expectation of their students the last word? In general, teachers do have a powerful influence on their students, for good or for bad, but the key questions are: (1) are they aware of this power and (2) how well do they manage it? This paper includes an in-depth discussion of the different factors that influence student achievement and research carried out at an urban secondary school that characterizes how teachers and students view their roles in the student's academic success. Recommendations are also provided to assist teachers in managing their expectations so as to maximize their role as a positive contributor to the success of their students.
Abstract:
The process of free reserves in a non-life insurance portfolio, as defined in the classical model of risk theory, is modified by the introduction of dividend policies that set maximum levels for the accumulation of reserves. The first part of the work formulates the quantification of the dividend payments via the expectation of their present value under different hypotheses. The second part presents a solution based on a system of linear equations for discrete dividend payments in the case of a constant dividend barrier, illustrated by solving a specific case.
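As a hedged illustration of how a constant dividend barrier leads to a system of linear equations, the sketch below sets up one particular discrete-time specification (unit premium per period, integer claims, dividends equal to the excess of reserves over the barrier); the timing convention and the example claim distribution are assumptions made for illustration, not the model of the paper:

```python
# Hedged sketch of "a system of linear equations for discrete dividend payments
# with a constant dividend barrier". Timing convention, unit premium and the
# example claim distribution are assumptions made here for illustration.
import numpy as np

def expected_discounted_dividends(b: int, claim_pmf: dict, v: float) -> np.ndarray:
    """V[u] = expected present value of all dividends when the initial reserve is u,
    in a discrete-time model: each period a premium of 1 is collected, any excess
    of the reserve over the barrier b is paid out as a dividend, then an integer
    claim occurs; the process stops at ruin (reserve < 0)."""
    P = np.zeros((b + 1, b + 1))          # one-period transition matrix over reserves 0..b
    d = np.zeros(b + 1)                   # dividend paid out of each state
    for u in range(b + 1):
        w = min(u + 1, b)                 # reserve after premium, capped at the barrier
        d[u] = max(u + 1 - b, 0)          # excess over the barrier is the dividend
        for x, p in claim_pmf.items():
            if w - x >= 0:                # survival; ruin is absorbing with no further dividends
                P[u, w - x] += p
    # V = v * (d + P V)   =>   (I - v P) V = v d
    return np.linalg.solve(np.eye(b + 1) - v * P, v * d)

# Example: barrier b = 5, discount factor v = 0.95, claims of 0 or 2 with equal probability.
V = expected_discounted_dividends(b=5, claim_pmf={0: 0.5, 2: 0.5}, v=0.95)
print(V)  # V[u] increases with the initial reserve u
```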
Abstract:
OBJECTIVES: Capillary rarefaction is a hallmark of untreated hypertension. Recent data indicate that rarefaction may be reversed by antihypertensive treatment in nondiabetic hypertensive patients. Despite the frequent association of diabetes with hypertension, nothing is known about the capillary density of treated diabetic patients with hypertension. METHODS: We enrolled 21 normotensive healthy, 25 hypertensive-only, and 21 diabetic (type 2) hypertensive subjects. All hypertensive patients were treated with a blocker of the renin-angiotensin system, and a majority had a home blood pressure ≤135/85 mmHg. Capillary density was assessed with videomicroscopy on dorsal finger skin and with laser Doppler imaging on forearm skin (maximal vasodilation elicited by local heating). RESULTS: There was no difference between any of the study groups in either dorsal finger skin capillary density (controls 101 ± 11 capillaries/mm², nondiabetic hypertensive 99 ± 16, diabetic hypertensive 96 ± 18, p > 0.5) or maximal blood flow in forearm skin (controls 666 ± 114 perfusion units, nondiabetic hypertensive 612 ± 126, diabetic hypertensive 620 ± 103, p > 0.5). CONCLUSIONS: Irrespective of the presence or absence of type 2 diabetes, capillary density is normal in hypertensive patients with reasonable control of blood pressure achieved with a blocker of the renin-angiotensin system.
Abstract:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment, while the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity enhancing. The last chapter is not about how to better understand the world but about how to measure it, and it was a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize the results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, for which we have no a priori expectations about the signs of these effects. Therefore popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is trade bad for the Environment? Decomposing worldwide SO2 emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects).
We find that the positive scale (+9.5%) and the negative technique (-12.5%) effects are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e., each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (leaving aside price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity (a minimal sketch of this computation is given below).
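As an illustration of the center-of-mass idea used in the fifth chapter, here is a hedged sketch assuming city-level latitude/longitude coordinates weighted by output; the function name, sample cities and weights are hypothetical and do not reproduce the chapter's actual data or implementation:

```python
# Hedged sketch: center of mass of economic activity on a sphere, assuming
# city-level (latitude, longitude, weight) data. Sample values are hypothetical.
import numpy as np

def center_of_gravity(lat_deg, lon_deg, weights):
    """Weighted 3-D center of mass of points on the unit sphere, projected back
    to the surface and returned as (latitude, longitude) in degrees."""
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    lon = np.radians(np.asarray(lon_deg, dtype=float))
    w = np.asarray(weights, dtype=float)
    # Cartesian coordinates on the unit sphere
    x, y, z = np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)
    # Weighted mean in 3-D (this point generally lies inside the sphere)
    cx, cy, cz = np.average(x, weights=w), np.average(y, weights=w), np.average(z, weights=w)
    # Project back to the surface to read off a location
    clat = np.degrees(np.arctan2(cz, np.hypot(cx, cy)))
    clon = np.degrees(np.arctan2(cy, cx))
    return clat, clon

# Hypothetical example: three "cities" weighted by output
print(center_of_gravity([48.9, 40.7, 31.2], [2.3, -74.0, 121.5], [4.0, 6.0, 5.0]))
```

Computing this point year by year, with weights given by city-level production, and plotting the resulting path (for instance in Google Earth) is one way to quantify how far and how fast the economic center of gravity drifts toward Asia.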
Abstract:
Networks famously epitomize the shift from 'government' to 'governance' as governing structures for exercising control and coordination alongside hierarchies and markets. Their distinctive features are their horizontality, the interdependence among member actors and an interactive decision-making style. Networks are expected to increase the problem-solving capacity of political systems in a context of growing social complexity, where political authority is increasingly fragmented across territorial and functional levels. However, very little attention has been given so far to another crucial implication of network governance, namely the effects of networks on their members. To explore this important question, this article examines the effects of membership in European regulatory networks on two crucial attributes of member agencies in charge of regulating finance, energy, telecommunications and competition: organisational growth and regulatory powers. Panel analysis applied to data on 118 agencies over a ten-year period, together with semi-structured interviews, provides mixed support for the expectation of organisational growth while strongly confirming the positive effect of networks on the increase in the regulatory powers attributed to member agencies.
Abstract:
Research in autophagy continues to accelerate,(1) and as a result many new scientists are entering the field. Accordingly, it is important to establish a standard set of criteria for monitoring macroautophagy in different organisms. Recent reviews have described the range of assays that have been used for this purpose.(2,3) There are many useful and convenient methods that can be used to monitor macroautophagy in yeast, but relatively few in other model systems, and there is much confusion regarding acceptable methods to measure macroautophagy in higher eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers of autophagosomes versus those that measure flux through the autophagy pathway; thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from fully functional autophagy that includes delivery to, and degradation within, lysosomes (in most higher eukaryotes) or the vacuole (in plants and fungi). Here, we present a set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes. This set of guidelines is not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to verify an autophagic response.
Abstract:
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and whether the desired accuracy or correspondence exists between predicted and monitored performance for Iowa conditions. A comprehensive literature review was conducted to identify the MEPDG input parameters and the MEPDG verification/calibration process. Sensitivities of MEPDG input parameters to predictions were studied using different versions of the MEPDG software. Based on the literature review and sensitivity analysis, a detailed verification procedure was developed. A total of sixteen different types of pavement sections across Iowa, not used for national calibration in NCHRP 1-47A, were selected. A database of MEPDG inputs and the actual pavement performance measures for the selected pavement sites was prepared for verification. The accuracy of the MEPDG performance models for Iowa conditions was statistically evaluated. The verification testing showed promising results in terms of the MEPDG's performance prediction accuracy for Iowa conditions. Recalibrating the MEPDG performance models for Iowa conditions is recommended to improve the accuracy of predictions.
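A hedged sketch of the kind of predicted-versus-monitored comparison such a statistical evaluation involves; the distress values, metrics and test below are illustrative assumptions, not the study's actual verification procedure:

```python
# Hedged sketch: comparing model-predicted distress with field-monitored distress.
# The arrays below are hypothetical; the real verification used Iowa pavement sites.
import numpy as np
from scipy import stats

predicted = np.array([0.12, 0.20, 0.35, 0.18, 0.27])   # e.g. predicted rut depth, in
measured  = np.array([0.10, 0.25, 0.30, 0.22, 0.33])   # e.g. monitored rut depth, in

bias = np.mean(predicted - measured)                    # average over/under-prediction
see  = np.sqrt(np.mean((predicted - measured) ** 2))    # standard error of estimate
r2   = np.corrcoef(predicted, measured)[0, 1] ** 2      # goodness of fit
t, p = stats.ttest_rel(predicted, measured)             # paired t-test: is the bias significant?

print(f"bias={bias:.3f}  SEE={see:.3f}  R^2={r2:.2f}  p={p:.3f}")
# A large bias or a significant paired difference would motivate local (Iowa) recalibration.
```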
Abstract:
Due to the hazardous nature of chemical asphalt extraction agents, nuclear gauges have become an increasingly popular method of determining the asphalt content of a bituminous mix. This report details the results of comparisons made between intended, tank stick, extracted, and nuclear asphalt content determinations. A total of 315 sets of comparisons were made on samples that represented 110 individual mix designs and 99 paving projects. All samples were taken from 1987 construction projects. In addition to the comparisons made, seventeen asphalt cement samples were recovered for determination of penetration and viscosity. Results were compared to similar tests performed on the asphalt assurance samples in an attempt to determine the amount of asphalt hardening that can be expected due to the hot mix process. Conclusions of the report are: 1. Compared to the reflux extraction procedure, nuclear asphalt content gauges determine the asphalt content of bituminous mixes with much greater accuracy and comparable precision. 2. As a means for determining asphalt content, the nuclear procedure should be used as an alternative to chemical extractions whenever possible. 3. Based on penetration and viscosity results, softer grade asphalts undergo a greater degree of hardening due to hot mix processing than do harder grades, and asphalt viscosity changes caused by the mixing process are subject to much more variability than are changes in penetration. 4. Based on changes in penetration and viscosity, the Thin Film Oven Test provides a reasonable means of estimating how much asphalt hardening can be anticipated due to exposure to the hot mix processing environment.
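To illustrate the distinction drawn in conclusions 1 and 2 between accuracy and precision, the sketch below compares two measurement series against the intended asphalt content; the values and helper function are hypothetical and do not reproduce the 315 comparison sets of the report:

```python
# Hedged sketch: how "accuracy" (closeness to the intended asphalt content) and
# "precision" (spread of the deviations) can be compared for two test methods.
# The numbers below are hypothetical, not data from the report.
import statistics

intended  = [5.5, 6.0, 5.8, 6.2, 5.7]   # design (intended) asphalt content, %
nuclear   = [5.6, 5.9, 5.8, 6.3, 5.7]   # nuclear gauge determinations, %
extracted = [5.9, 5.6, 6.1, 5.8, 6.0]   # reflux extraction determinations, %

def accuracy_and_precision(measured, target):
    errors = [m - t for m, t in zip(measured, target)]
    accuracy = sum(abs(e) for e in errors) / len(errors)   # mean absolute deviation from intended
    precision = statistics.stdev(errors)                   # spread of the deviations
    return accuracy, precision

print("nuclear  :", accuracy_and_precision(nuclear, intended))
print("extracted:", accuracy_and_precision(extracted, intended))
# A smaller mean deviation indicates better accuracy; similar standard deviations
# indicate comparable precision, mirroring the sense of conclusions 1 and 2.
```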
Abstract:
Sufficient evidence was not discovered in this brief search to alter the general opinion that the Serviceability (Present Serviceability Index, PSI) - Performance concepts developed by the AASHO Road Test provide the optimum engineering basis for pavement management. Use of these concepts in Iowa has the additional advantage that we have a reasonable quantity of historical data, collected over a period of time, on the change in pavement condition as measured by PSI. Additional benefits would include the ability to better assess our needs with respect to those being recommended to Congress by AASHTO committees. These concepts have been the basis used for developing policies on the dimensions and weights of vehicles and on highway needs, which the AASHTO transport committees have recommended to the United States House Committee on Ways and Means. The first recommendation based on these concepts was made in the mid-1960s. Iowa's participation in the evaluation for that recommendation was under the direction of our present Director of Transportation, Mr. Raymond Kassel. PSI indexes had to be derived from subjective surface ratings at that time. The most recent recommendation to Congress was made in November of 1977. Based on the rationale expressed above, a pilot study of the major part of the rural interstate system was conducted. The objective of the study was to measure pavement performance through the use of the Present Serviceability Index (PSI) - Pavement Performance concepts as developed by the AASHO Road Test and to explore the usefulness of this type of data as a pavement management tool. Projects in the vicinity of the major urban centers were not included in this study due to the extra time that would be required to isolate accurate traffic data in these areas. Projects consisting of asphalt surface courses on crushed stone base sections were also not included.
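For reference, the Present Serviceability Index for flexible pavements is commonly written in the form below, attributed to the AASHO Road Test; the exact coefficients and the meaning of the measured quantities should be checked against the original AASHO equations before use:

```latex
% Present Serviceability Index (flexible pavements), in the form commonly
% attributed to the AASHO Road Test: SV = mean slope variance, RD = mean rut
% depth (in), C + P = cracked plus patched area (ft^2 per 1000 ft^2).
PSI = 5.03 - 1.91\,\log_{10}\!\bigl(1 + \overline{SV}\bigr)
          - 1.38\,\overline{RD}^{\,2}
          - 0.01\sqrt{C + P}
```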