836 results for Random time change
Abstract:
Introduction: Comprehensive undergraduate education in the clinical sciences is grounded in the activities carried out during clerkships. To implement the credit system we must know how these experiences take place. Objectives: to describe how students spend their time in clerkships, and how they assess the educational value and enjoyment of the activities. Method: We distributed a form to a random cluster sample of 100 students taking clinical sciences, designed to record the time spent and to assess the educational value and degree of enjoyment of clerkship activities over one week. Data were recorded and analyzed in Excel® 98 and SPSS. Results: the mean time students spent on clerkship activities per day was 10.8 hours, of which 7.3 hours (69%) were spent on formal educational activities. Patient care activities with teachers occupied the largest share of time (15.4%). Of the teaching and learning activities in a week, 28 hours (56%) were spent on patient care and 22.4 hours (44.5%) on independent academic work. The time spent on teaching and learning activities corresponds to 19 credits in an 18-week semester. The activities rated as having the greatest educational value were homework (4.6) and formal educational activities (4.5). Those rated most enjoyable were extracurricular activities, formal educational activities and independent academic work. Conclusion: our students spend more time on activities with patients than reported in the literature. The attendance workload of our students is greater than that reported in similar studies.
Abstract:
In June 2000 the National Statistics Department of Colombia adopted a new definition for measuring unemployment, following the standards suggested by the International Labour Organization (ILO). The change of definition implied a reduction in the unemployment rate of about two percentage points. In this paper we contrast the Colombian experience with other international experiences, and we analyze the empirical and theoretical implications of this change of definition using two kinds of quantitative estimates: the first contrasts the main characteristics of the different categories classified under the new and old definitions of unemployment (employed, unemployed and out of the labour force) using the EM algorithm; the second tests the implication of structural unemployment and its relation to the educational profile of unemployed persons, as well as the theoretical issues the ILO standards face in the definition of employment.
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
Abstract:
The objective of this paper is to introduce a different approach, called the ecological-longitudinal, to carrying out pooled analysis in time-series ecological studies. Because it gives a larger number of data points and hence increases the statistical power of the analysis, this approach, unlike conventional ones, allows the accommodation of random-effect models, of lags, and of interactions between pollutants and between pollutants and meteorological variables, which are hard to implement in conventional approaches. Design—The approach is illustrated by providing quantitative estimates of the short-term effects of air pollution on mortality in three Spanish cities, Barcelona, Valencia and Vigo, for the period 1992–1994. Because the dependent variable was a count, a Poisson generalised linear model was first specified. Several modelling issues are worth mentioning. Firstly, because the relations between mortality and explanatory variables were nonlinear, cubic splines were used for covariate control, leading to a generalised additive model (GAM). Secondly, the effects of the predictors on the response were allowed to occur with some lag. Thirdly, the residual autocorrelation, due to imperfect control, was controlled for by means of an autoregressive Poisson GAM. Finally, the longitudinal design demanded consideration of individual heterogeneity, requiring mixed models. Main results—The estimates of the relative risks obtained from the individual analyses varied across cities, particularly those associated with sulphur dioxide. The highest relative risks corresponded to black smoke in Valencia. These estimates were higher than those obtained from the ecological-longitudinal analysis.
Relative risks estimated from this latter analysis were practically identical across cities: 1.00638 (95% confidence interval 1.0002, 1.0011) for a black smoke increase of 10 μg/m3 and 1.00415 (95% CI 1.0001, 1.0007) for an increase of 10 μg/m3 of sulphur dioxide. Because the statistical power is higher than in the individual analyses, more interactions were statistically significant, especially those between air pollutants and meteorological variables. Conclusions—Air pollutant levels were related to mortality in the three cities of the study, Barcelona, Valencia and Vigo. These results were consistent with similar studies in other cities and with other multicentric studies, and coherent with both the previous individual analyses for each city and the multicentric studies for all three cities.
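The relative risks reported above come from a Poisson log-linear model, in which a pollutant's regression coefficient converts to a relative risk for a given increment by exponentiation. A minimal sketch of that conversion (the coefficient below is hypothetical, back-calculated from the reported black-smoke relative risk):

```python
import math

def relative_risk(beta, increment):
    # In a Poisson log-linear model, log E[deaths] = ... + beta * pollutant,
    # so a rise of `increment` in the pollutant multiplies expected
    # mortality by exp(beta * increment).
    return math.exp(beta * increment)

# Hypothetical per-unit (1 ug/m3) coefficient, back-calculated from the
# reported relative risk of 1.00638 per 10 ug/m3 of black smoke.
beta_black_smoke = 6.36e-4
rr = relative_risk(beta_black_smoke, 10.0)  # ~1.00638
```

The same conversion applies to any log-linear exposure coefficient, which is why such studies quote risks "per 10 μg/m3" rather than per unit.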
Abstract:
This paper explores the extent to which the elusive phenomenon of workplace innovation has pervaded workplaces in Europe and whether it could be one of the answers to Europe's long-term social and economic challenges that stem from an ageing workforce and the need for more flexibility to stay competitive. Basic data drawn from the European Working Conditions Survey, conducted every five years by the Dublin-based European Foundation for the Improvement of Living and Working Conditions, are supplemented by a series of case studies to look at the problems encountered in introducing workplace innovation and possible solutions. One set of case studies examines the following organisations: SGI/GI (Slovak Governance Institute, Slovakia), as representative of small- and medium-sized enterprises; Oticon (Denmark), as representative of manufacturing companies; the Open University (UK), as representative of educational organisations; and FPS Social Security (Belgium), representing the public sector. Two final case studies focus on the country level, one looking at how a specific innovation can become fully mainstreamed (the Netherlands and the 'part-time economy') and the other (Finland and TEKES) looking at how a government programme can help disseminate workplace innovation. These six case studies, together with the statistical analysis, constitute the main empirical value added of the report.
Abstract:
This paper focuses on the role of the European Union (EU) in the formation of India's climate change policy, an increasingly high-profile issue area. It is based on an extensive study of the relevant literature, EU-India policy documents and thirteen semi-structured interviews with experts, many of whom have experienced EU-India cooperation on climate change first-hand. A three-point typology is used to assess the extent of the EU's leadership role, supporting role or equal-partnership role in India, with several sub-roles within these categories. Further, for clarity and chronology, three time periods are distinguished to assess how India's climate policy has evolved over time, alongside the EU's role within it. The findings of the paper confirm that the EU has demonstrated signs of all three roles to some degree, although the EU-India relationship in climate policy is increasingly an equal partnership. The paper offers explanations for previous shortcomings in EU-India climate policy as well as policy recommendations to help ensure more effective cooperation and implementation of policies.
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the subintervals in which it sings. The method can be viewed as generating data equivalent to closed capture-recapture information. It differs from the distance and multiple-observer methods in that it does not require all birds to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated.
Underestimation was caused by both the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
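The simulation design described above, Markovian singing bouts generating per-subinterval detection histories, can be sketched as follows. All rates below are hypothetical placeholders, not the values used in the study:

```python
import random

def detection_history(n_intervals, p_start_bout, p_stay_bout, p_detect, rng):
    """Simulate one bird's detection history over point-count subintervals.

    Singing follows a two-state Markov chain (bout / silence); the bird can
    only be detected in a subinterval while it is in a singing bout.
    """
    singing = rng.random() < p_start_bout  # initial state
    history = []
    for _ in range(n_intervals):
        detected = singing and rng.random() < p_detect
        history.append(1 if detected else 0)
        # Markov transition: remain in the bout, or start a new one
        singing = rng.random() < (p_stay_bout if singing else p_start_bout)
    return history

rng = random.Random(42)
# 50 birds, four 2-min subintervals each (hypothetical rates)
pop = [detection_history(4, 0.6, 0.8, 0.9, rng) for _ in range(50)]
n_detected = sum(any(h) for h in pop)  # birds detected at least once
```

Histories like `[0, 1, 1, 0]` are then treated as closed-population capture-recapture data; birds never detected contribute nothing, which is the source of the underestimation discussed above.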
Abstract:
Climate model simulations consistently show that, in response to greenhouse gas forcing, surface temperatures over land increase more rapidly than over sea. The enhanced warming over land is not simply a transient effect, since it is also present in equilibrium conditions. We examine 20 models from the IPCC AR4 database. The global land/sea warming ratio varies in the range 1.36–1.84, independent of global mean temperature change. In the presence of increasing radiative forcing, the warming ratio for a single model is fairly constant in time, implying that the land/sea temperature difference increases with time. The warming ratio varies with latitude, with a minimum in equatorial latitudes and maxima in the subtropics. A simple explanation for these findings is provided, and comparisons are made with observations. For the low-latitude (40°S–40°N) mean, the models suggest a warming ratio of 1.51 ± 0.13, while recent observations suggest a ratio of 1.54 ± 0.09.
Abstract:
Under anthropogenic climate change it is possible that the increased radiative forcing and associated changes in mean climate may affect the "dynamical equilibrium" of the climate system, leading to a change in the relative dominance of different modes of natural variability, the characteristics of their patterns or their behavior in the time domain. Here we use multi-century integrations of version three of the Hadley Centre atmosphere model coupled to a mixed-layer ocean to examine potential changes in atmosphere-surface ocean modes of variability. After first evaluating the simulated modes of Northern Hemisphere winter surface temperature and geopotential height against observations, we examine their behavior under an idealized equilibrium doubling of atmospheric CO2. We find no significant changes in the order of dominance, the spatial patterns or the associated time series of the modes. Having established that the dynamical equilibrium is preserved in the model on doubling of CO2, we go on to examine the temperature pattern of mean climate change in terms of the modes of variability, the motivation being that the pattern of change might be explicable in terms of changes in the amount of time the system resides in a particular mode. In addition, if the two are closely related, we might be able to assess the relative credibility of different spatial patterns of climate change from different models (or model versions) by assessing their representation of variability. Significant shifts do appear to occur in the mean position of residence when examining a truncated set of the leading-order modes. However, on examining the complete spectrum of modes, it is found that the mean climate change pattern is close to orthogonal to all of the modes, and the large shifts are a manifestation of this orthogonality. The results suggest that care should be exercised in using a truncated set of variability EOFs to evaluate climate change signals.
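The orthogonality caveat can be illustrated with a toy projection: a change pattern lying almost entirely outside the span of a truncated mode set still yields nonzero coefficients on those modes, which can masquerade as real shifts of residence. A minimal sketch with made-up vectors (not the model's EOFs):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Three orthonormal "EOF modes" in R^4 (hypothetical, for illustration)
modes = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]

# A climate-change pattern lying almost entirely outside the span of the modes
change = (0.1, 0.1, 0.0, 1.0)

coeffs = [dot(change, m) for m in modes]   # apparent "shifts" on each mode
explained = sum(c * c for c in coeffs) / dot(change, change)
# explained ~ 0.02: the truncated modes capture almost none of the change
# pattern, so the nonzero coeffs are an artifact of truncation.
```

Checking the fraction of the change pattern explained by the retained modes, as in `explained` above, is one simple guard against over-interpreting such shifts.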
Abstract:
Previous assessments of the impacts of climate change on heat-related mortality use the "delta method" to create temperature projection time series that are applied to temperature-mortality models to estimate future mortality impacts. The delta method ensures that climate model bias in the modelled present does not influence the temperature projection time series and impacts. However, it assumes that climate change will result only in a change in the mean temperature, whereas there is evidence that the variability of temperature will also change. The aim of this paper is to demonstrate the importance of considering changes in temperature variability with climate change in impact assessments of future heat-related mortality. We investigate future heat-related mortality impacts in six cities (Boston, Budapest, Dallas, Lisbon, London and Sydney) by applying temperature projections from the UK Meteorological Office HadCM3 climate model to the temperature-mortality models constructed and validated in Part 1. We investigate the impacts for four cases based on various combinations of mean and variability changes in temperature with climate change. The results demonstrate that higher mortality is attributed to increases in the mean and variability of temperature with climate change than to the change in mean temperature alone. This has implications for interpreting existing impact estimates that have used the delta method. We present a novel method for the creation of temperature projection time series that includes changes in both the mean and the variability of temperature with climate change and is not influenced by climate model bias in the modelled present. The method should be useful for future impact assessments. Few studies consider the implications that the limitations of the climate model may have on the heat-related mortality impacts.
Here, we demonstrate the importance of considering this by conducting an evaluation of the daily and extreme temperatures from HadCM3, which suggests that the estimates of future heat-related mortality for Dallas and Lisbon may be overestimated due to positive climate model bias. Likewise, estimates for Boston and London may be underestimated due to negative climate model bias. Finally, we briefly consider uncertainties in the impacts associated with greenhouse gas emissions and acclimatisation. The uncertainties in the mortality impacts due to different future greenhouse gas emissions scenarios varied considerably by location. Allowing for acclimatisation to an extra 2°C in mean temperatures reduced future heat-related mortality by approximately half relative to no acclimatisation in each city.
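The contrast between the delta method and a mean-plus-variability adjustment can be sketched numerically. The temperature series and scaling factor below are made up for illustration, not taken from the study:

```python
import statistics as st

def delta_method(obs, mean_change):
    """Classic delta method: shift the observed series by the modelled
    mean change; the shape (variability) of the series is unchanged."""
    return [t + mean_change for t in obs]

def mean_and_variability(obs, mean_change, sd_ratio):
    """Shift the mean AND rescale anomalies by the modelled change in
    spread (sd_ratio = future SD / present SD)."""
    mu = st.mean(obs)
    return [mu + mean_change + (t - mu) * sd_ratio for t in obs]

obs = [18.0, 20.0, 22.0, 25.0, 30.0]   # hypothetical daily temperatures
a = delta_method(obs, 2.0)
b = mean_and_variability(obs, 2.0, 1.3)
# Both series warm by 2 degrees on average, but b has hotter extremes:
# max(a) = 32.0, max(b) = 34.1 -- and heat mortality is driven by extremes.
```

Because both constructions work from the observed series, neither inherits the climate model's bias in the simulated present, which is the property the delta method is valued for.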
Abstract:
A modelling study has been undertaken to assess the likely impacts of climate change on water quality across the UK. A range of climate change scenarios have been used to generate future precipitation, evaporation and temperature time series at a range of catchments across the UK. These time series have then been used to drive the Integrated Catchment (INCA) suite of flow, water quality and ecological models to simulate flow, nitrate, ammonia, total and soluble reactive phosphorus, sediments, macrophytes and epiphytes in the Rivers Tamar, Lugg, Tame, Kennet, Tweed and Lambourn. A wide range of responses have been obtained with impacts varying depending on river character, catchment location, flow regime, type of scenario and the time into the future. Essentially upland reaches of river will respond differently to lowland reaches of river, and the responses will vary depending on the water quality parameter of interest.
Abstract:
The purpose of Research Theme 4 (RT4) was to advance understanding of the basic science issues at the heart of the ENSEMBLES project, focusing on the key processes that govern climate variability and change, and that determine the predictability of climate. Particular attention was given to understanding linear and non-linear feedbacks that may lead to climate surprises, and to understanding the factors that govern the probability of extreme events. Improved understanding of these issues will contribute significantly to the quantification and reduction of uncertainty in seasonal-to-decadal predictions and projections of climate change. RT4 exploited the ENSEMBLES integrations (stream 1) performed in RT2A as well as undertaking its own experimentation to explore key processes within the climate system. It was working at the cutting edge of problems related to climate feedbacks, the interaction between climate variability and climate change, especially how climate change pertains to extreme events, and the predictability of the climate system on a range of time-scales. The statistical methodologies developed for extreme-event analysis are new and state-of-the-art. The RT4-coordinated experiments, which have been conducted with six different atmospheric GCMs forced by common time-invariant sea surface temperature (SST) and sea-ice fields (removing some sources of inter-model variability), are designed to help understand model uncertainty (rather than scenario or initial-condition uncertainty) in predictions of the response to greenhouse-gas-induced warming. RT4 links strongly with RT5 on the evaluation of the ENSEMBLES prediction system and feeds back its results to RT1 to guide improvements in the Earth system models and, through its research on predictability, to steer the development of methods for initialising the ensembles.
Abstract:
Our understanding of the climate system has been revolutionized recently, by the development of sophisticated computer models. The predictions of such models are used to formulate international protocols, intended to mitigate the severity of global warming and its impacts. Yet, these models are not perfect representations of reality, because they remove from explicit consideration many physical processes which are known to be key aspects of the climate system, but which are too small or fast to be modelled. The purpose of this paper is to give a personal perspective of the current state of knowledge regarding the problem of unresolved scales in climate models. A recent novel solution to the problem is discussed, in which it is proposed, somewhat counter-intuitively, that the performance of models may be improved by adding random noise to represent the unresolved processes.
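The proposal above, representing unresolved fast processes by random noise in a truncated model, can be caricatured with a toy damped variable. The model and amplitudes below are illustrative only, not drawn from the paper:

```python
import random

def slow_variable(steps, dt=0.01, noise_amp=0.0, seed=0):
    """Toy truncated climate model: a single damped slow variable x.

    The unresolved fast processes are represented, if at all, by additive
    random noise of amplitude `noise_amp` (a stochastic parameterization).
    """
    rng = random.Random(seed)
    x = 1.0
    for _ in range(steps):
        x += dt * (-x) + noise_amp * rng.gauss(0.0, dt ** 0.5)
    return x

deterministic = slow_variable(100)  # plain truncation: pure exponential decay
ensemble = [slow_variable(100, noise_amp=0.3, seed=s) for s in range(200)]
# The noisy ensemble scatters around the deterministic trajectory, giving
# the truncated model variability it would otherwise lack.
```

The counter-intuitive point is visible even here: the stochastic runs are individually "worse" than the smooth one, yet as an ensemble they can better represent the spread the missing fast processes would produce.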
Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
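A minimal sketch of the idea, assuming a decay form p(i, j) = α·λ^(|i−j|−1) for the edge probability and ad hoc birth/death rates; the paper's actual calibration is spectral and is not reproduced here:

```python
import random

def range_dependent_snapshot(n, alpha, lam, rng):
    """One snapshot of a range-dependent random graph on nodes 0..n-1.

    Nodes i and j are linked with probability alpha * lam**(|i-j| - 1),
    so short-range edges are common and long-range edges rare.
    """
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < alpha * lam ** (j - i - 1):
                edges.add((i, j))
    return edges

def evolve(edges, n, birth_alpha, lam, p_death, rng):
    """Crude evolving version: existing edges die with probability p_death;
    absent edges are born with a range-dependent probability."""
    new = set()
    for i in range(n):
        for j in range(i + 1, n):
            e = (i, j)
            if e in edges:
                if rng.random() > p_death:
                    new.add(e)
            elif rng.random() < birth_alpha * lam ** (j - i - 1):
                new.add(e)
    return new

rng = random.Random(1)
snapshots = [range_dependent_snapshot(30, 0.9, 0.5, rng)]
for _ in range(5):
    snapshots.append(evolve(snapshots[-1], 30, 0.1, 0.5, 0.2, rng))
```

A sequence of snapshots like this is the kind of data the paper's spectral algorithm calibrates edge ranges from, and a process such as an epidemic can then be run over the evolving edge sets.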
Abstract:
Many lowland rivers across northwest Europe exhibit broadly similar behavioural responses to glacial-interglacial transitions and landscape development. Difficulties exist in assessing these, largely because the evidence from many rivers remains limited and fragmentary. Here we address this issue in the context of the river Kennet, a tributary of the Thames, since c. 13,000 cal BP. Some similarities with other rivers are present, suggesting that regional climatic shifts are important controls. The Kennet differs from the regional pattern in a number of ways. The rate of response to sudden climatic change, particularly at the start of the Holocene and also mid-Holocene forest clearance, appears very high. This may reflect abrupt shifts between two catchment scale hydrological states arising from contemporary climates, land use change and geology. Stadial hydrology is dominated by nival regimes, with limited winter infiltration and high spring and summer runoff. Under an interglacial climate, infiltration is more significant. The probable absence of permafrost in the catchment means that a lag between the two states due to its gradual decay is unlikely. Palaeoecology, supported by radiocarbon dates, suggests that, at the very start of the Holocene, a dramatic episode of fine sediment deposition across most of the valley floor occurred, lasting 500-1000 years. A phase of peat accumulation followed as mineral sediment supply declined. A further shift led to tufa deposition, initially in small pools, then across the whole floodplain area, with the river flowing through channels cut in tufa and experiencing repeated avulsion. Major floods, leaving large gravel bars that still form positive relief features on the floodplain, followed mid-Holocene floodplain stability. Prehistoric deforestation is likely to be the cause of this flooding, inducing a major environmental shift with significantly increased surface runoff. 
Since the Bronze Age, predominantly fine sediments were deposited along the valley with apparently stable channels and vertical floodplain accretion associated with soil erosion and less catastrophic flooding. The Kennet demonstrates that, while a general pattern of river behaviour over time, within a region, may be identifiable, individual rivers are likely to diverge from this. Consequently, it is essential to understand catchment controls, particularly the relative significance of surface and subsurface hydrology.