837 results for Population Model
Abstract:
[EN] Background: Spain has gone from a surplus to a shortage of medical doctors in very few years. Medium- and long-term planning for health professionals has become a high priority for health authorities. Methods: We created a supply and demand/need simulation model for 43 medical specialties using system dynamics. The model includes demographic, education and labour market variables. Several scenarios were defined. Variables controllable by health planners can be set as parameters to simulate different scenarios. The model calculates the supply and the deficit or surplus. Experts set the ratio of specialists needed per 1000 inhabitants using a Delphi method. Results: In the baseline scenario with moderate population growth, the deficit of medical specialists will grow from 2% at present (2800 specialists) to 14.3% in 2025 (almost 21 000). The specialties with the greatest medium-term shortages are Anesthesiology, Orthopedic and Trauma Surgery, Pediatric Surgery, Plastic, Aesthetic and Reconstructive Surgery, Family and Community Medicine, Pediatrics, Radiology, and Urology. Conclusions: The model suggests the need to increase the number of students admitted to medical school. Training itineraries should be redesigned to facilitate mobility among specialties. In the meantime, the need for a more flexible short-term supply is being met by the immigration of physicians from the new member states of the European Union and from Latin America.
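To make the supply/need logic concrete, here is a minimal, single-specialty sketch of the stock-and-flow reasoning described above: supply is driven by graduates and retirements, need by a Delphi-style ratio per 1000 inhabitants. This is not the thesis model, which covers 43 specialties and many more variables; every parameter value below is an illustrative assumption.

```python
# Minimal sketch (not the thesis model): a single-specialty stock-and-flow
# projection of specialist supply versus population-based need.
# All numbers below are illustrative assumptions, not data from the study.

def project_deficit(years, supply0, graduates_per_year, retirement_rate,
                    population0, pop_growth, need_per_1000):
    """Yearly supply vs. need; returns a list of (year, supply, need, deficit %)."""
    supply, population, out = supply0, population0, []
    for year in years:
        need = population / 1000.0 * need_per_1000   # Delphi-style ratio
        deficit_pct = 100.0 * (need - supply) / need
        out.append((year, round(supply), round(need), round(deficit_pct, 1)))
        supply += graduates_per_year - retirement_rate * supply  # inflow - outflow
        population *= (1.0 + pop_growth)                         # moderate growth
    return out

for row in project_deficit(range(2008, 2026), supply0=140_000,
                           graduates_per_year=6_000, retirement_rate=0.03,
                           population0=45_000_000, pop_growth=0.01,
                           need_per_1000=3.2):
    print(row)
```

A scenario analysis of the kind described in the abstract would simply re-run such a projection with different controllable parameters (admissions, ratios, population growth).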
Abstract:
[EN] In this paper, we have used Geographical Information Systems (GIS) to solve the planar Huff problem, considering different demand distributions and forbidden regions. Most papers on competitive location problems assume that demand is aggregated into a finite set of points. In a few other cases, the models assume that demand is distributed over the feasible region according to a functional form, usually a uniform distribution. Here, in addition to the discrete and uniform demand distributions, we consider demand represented by a population surface model, that is, a raster map in which each pixel has an associated value corresponding to the population living in the area it covers...
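As an illustration of the idea, the following sketch computes Huff-type capture probabilities over a raster population surface, with a zeroed block standing in for a forbidden or unpopulated region. The attractiveness values and the distance-decay exponent are illustrative assumptions, not the paper's data or GIS workflow.

```python
# Minimal sketch of Huff-type market capture on a raster population surface.
# Pixel values are the population living in each cell; facility attractiveness
# and the distance-decay exponent are illustrative assumptions.
import numpy as np

def huff_capture(pop, facilities, attract, beta=2.0, cell_size=1.0):
    """pop: 2D array of population per pixel; facilities: list of (row, col);
    attract: attractiveness of each facility. Returns demand captured by each."""
    rows, cols = np.indices(pop.shape)
    utilities = []
    for (fr, fc), a in zip(facilities, attract):
        d = np.hypot(rows - fr, cols - fc) * cell_size + 1e-9  # avoid div by zero
        utilities.append(a / d**beta)                          # Huff utility
    u = np.stack(utilities)                                    # (n_fac, H, W)
    probs = u / u.sum(axis=0)                                  # Huff probabilities
    return (probs * pop).sum(axis=(1, 2))                      # expected demand

pop = np.random.default_rng(0).integers(0, 50, size=(100, 100)).astype(float)
pop[40:60, 40:60] = 0.0            # e.g. a forbidden / unpopulated region
print(huff_capture(pop, facilities=[(20, 20), (80, 70)], attract=[1.0, 1.5]))
```

Locating a new facility would then amount to searching over candidate pixels (excluding the forbidden region) for the position that maximizes the captured demand.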
Abstract:
Galaxy clusters occupy a special position in the cosmic hierarchy as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster center) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present models. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we shall address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, within the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM that are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that, during a merger, a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves, and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters, are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass.
The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequencies and allows the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present a work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ~ 0.05-0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray luminous galaxy clusters (at z ~ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ~ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed within the PS formalism used to describe the cluster formation process, while a more detailed analysis of the physics of cluster mergers and of the injection of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last "geometrical" M_H-R_H correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (P_R-R_H, P_R-M_H, P_R-T, P_R-L_X, ...) are now well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster, and this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass and thus that the non-thermal component in clusters is not self-similar.
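To illustrate the cut-off argument at the heart of the re-acceleration scenario, the sketch below balances a systematic acceleration rate γ/τ_acc against synchrotron and inverse-Compton losses to obtain a break Lorentz factor and the corresponding synchrotron break frequency. The acceleration time, field strength and redshift used here are illustrative assumptions, not values derived in the thesis.

```python
# Minimal sketch of the cut-off described above: balance of systematic turbulent
# acceleration (rate gamma / tau_acc) against synchrotron + inverse-Compton
# losses. tau_acc, B and z below are illustrative assumptions, not thesis values.
import math

SIGMA_T = 6.652e-25             # Thomson cross section [cm^2]
M_E, C  = 9.109e-28, 2.998e10   # electron mass [g], speed of light [cm/s]
E_CHG   = 4.803e-10             # electron charge [esu]
A_RAD   = 7.566e-15             # radiation constant [erg cm^-3 K^-4]

def break_frequency(B_muG, tau_acc_Myr, z=0.0):
    """Return the break Lorentz factor and synchrotron break frequency [Hz]."""
    B = B_muG * 1e-6
    U_B   = B**2 / (8.0 * math.pi)                  # magnetic energy density
    U_CMB = A_RAD * (2.725 * (1.0 + z))**4          # IC losses off the CMB
    tau   = tau_acc_Myr * 1e6 * 3.156e7             # acceleration time [s]
    gamma_b = 3.0 * M_E * C / (4.0 * SIGMA_T * tau * (U_B + U_CMB))
    nu_b = 3.0 * E_CHG * B * gamma_b**2 / (4.0 * math.pi * M_E * C)
    return gamma_b, nu_b

g, nu = break_frequency(B_muG=1.0, tau_acc_Myr=100.0)
print(f"gamma_b ~ {g:.2e}, nu_b ~ {nu / 1e6:.0f} MHz")
```

With these (assumed) numbers the break falls in the GHz range, which conveys why surveys at lower frequencies should detect the less efficiently accelerating, and hence more common, systems.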
Abstract:
The silent demographic revolution characterizing the main industrialized countries is an unavoidable factor with major economic, social, cultural and psychological implications. This thesis studies the main consequences of population ageing and its connections with the phenomenon of migration. The theoretical analysis is developed using Overlapping Generations (OLG) models. The thesis is divided into the following four chapters: 1) "A Model for Determining Consumption and Social Assistance Demand in Uncertainty Conditions" focuses on the relation between demographic impact and social insurance and proposes the establishment of a non-self-sufficiency fund for the elderly. 2) "Population Ageing, Longevity and Health" analyzes the effects of health investment on intertemporal individual behaviour and capital accumulation. 3) "Population Ageing and the Nursing Flow" studies the consequences of migration in the nursing sector. 4) "Quality of Multiculturalism and Minorities' Assimilation" focuses on the problem of assimilation and integration of minorities.
Abstract:
Two Amerindian populations, from the Peruvian Amazon (Yanesha) and from the rural lowlands of the Argentinean Gran Chaco (Wichi), were analyzed. They represent two case studies of South American genetic variability. The Yanesha represent a model of a population isolated for a long time in the Amazon rainforest, characterized by environmental and altitudinal stratification. The Wichi represent a model of a population living in an area recently colonized by European populations (the Criollos being the admixed descendants); here the aim is to depict the native ancestral gene pool and the degree of admixture, in relation to the very high prevalence of Chagas disease. The genotyping methods are shared between the two studies and concern Y-chromosome markers (paternal lineage) and mitochondrial markers (maternal lineage). The phylogeographic diagnostic polymorphisms were determined by the classical techniques of PCR, restriction enzymes, sequencing and specific mini-sequencing. A new method for the detection of the protozoan Trypanosoma cruzi was developed by means of nested PCR. The main results show patterns of genetic stratification in the Yanesha forest communities, attributable to different migrations at different times, as estimated by Bayesian analyses. In particular, the Yanesha were considered a transition population between the Amazon basin and the Andean Cordillera, and the potential migration routes and the separation of community clusters were evaluated in relation to different genetic ancestries. As for the Wichi, the analyzed gene pool appears clearly differentiated from that of the admixed sympatric Criollos, owing to strict social practices (analyzed in depth with the support of cultural-anthropological tools) that have preserved the native identity over time. No pattern of seropositivity emerges in relation to the different phylogenetic lineages (i.e. adaptation in evolutionary terms), whether Amerindian or European, but rather in relation to the environmental and living conditions of the two distinct subpopulations.
Abstract:
The present thesis investigates the issue of work-family conflict and facilitation in a healthcare context, using the DISC Model (De Jonge and Dormann, 2003, 2006). The general aim is articulated in the two empirical studies reported in the chapters of this dissertation. Chapter 1 reports the psychometric properties of the Demand-Induced Strain Compensation Questionnaire (DISQ). Although the DISC Model has received a fair amount of attention in the literature, both for its theoretical principles and for the instrument developed to operationalize them (the DISQ; De Jonge, Dormann, Van Vegchel, Von Nordheim, Dollard, Cotton and Van den Tooren, 2007), there are no studies devoted solely to a psychometric investigation of the instrument. In addition, no previous studies have used the DISC as a model or measurement instrument in an Italian context. The first chapter of the present dissertation is therefore devoted to a psychometric investigation of the DISQ. Chapter 2 reports a longitudinal study. Its purpose was to examine, using the DISC Model, the relationship between emotional job characteristics, the work-family interface and emotional exhaustion in a health care population. We first tested the Triple Match Principle of the DISC Model using solely the emotional dimension of the stress-strain process (i.e. emotional demands, emotional resources and emotional exhaustion). We then investigated the mediating role played by work-family conflict and work-family facilitation in the relation between emotional job characteristics and emotional exhaustion. Finally, we compared the mediation model between workers facing chronic-illness home demands and workers not facing such demands. A general conclusion then integrates and discusses the main findings of the studies reported in this dissertation.
Abstract:
In this work, the role of the Epstein-Barr virus induced gene 3 (EBI-3) was investigated in a mouse model of metastatic melanoma induced by B16-F10 cells. EBI-3, which is expressed by activated antigen-presenting cells, belongs to the family of soluble type 1 cytokine receptors, shows high homology to the p40 subunit of IL-12 and, together with p28, forms IL-27. Intravenous injection of the B16-F10 cell line led to a significant reduction of tumor metastases in the EBI-3-deficient lungs, as well as to a higher life expectancy of these mice compared with B6 wild types. In addition, I found reduced VCAM-1 expression on the lung endothelial cells of EBI-3-deficient mice, while no changes in VEGF expression were detected. The immunological background underlying this therapeutic effect could be explained by T-cell activation driven by the recently described DC population called interferon-producing killer dendritic cells (IK-DC), additionally supported by activated and matured classical DCs. IK-DCs from EBI-3-deficient mice produced higher amounts of IFN-γ, while the classical DCs expressed MHC and co-stimulatory molecules, which initiated the secretion of IL-12. The interplay of these factors induced an enhanced CD4 and CD8 T-cell response in the lungs of these mice. This in turn resulted in TNF- and TRAIL-dependent programmed cell death of the B16-F10 melanoma cells in the lungs of the EBI-3-deficient mice, whereas neither additional anti-apoptotic mechanisms nor T regulatory cells appeared to influence the tumor defense observed in the EBI-3-deficient mice. Finally, EBI-3-deficient CD8+ T cells that had been primed with tumor antigen could be adoptively transferred into B6 wild-type mice, showing that these cells are able to significantly reduce the tumor burden in the recipient mice. Taken together, these data demonstrate that blocking EBI-3 in metastatic melanoma represents a promising target for tumor therapy.
Abstract:
Water is the driving force in nature. We use water for washing cars, doing laundry, cooking and taking a shower, but also to generate energy and electricity. Water is therefore a necessary part of our daily lives (USGS, Howard Perlman, 2013). The model that we created is based on the urban water demand computer model from the Pacific Institute (California). With this model we will forecast the future urban water use of Emilia-Romagna up to the year 2030. We will analyze the urban water demand in Emilia-Romagna, which comprises nine provinces: Bologna, Ferrara, Forli-Cesena, Modena, Parma, Piacenza, Ravenna, Reggio Emilia and Rimini. The term urban water refers to the water used in cities and suburbs and in homes in rural areas; it includes residential, commercial, institutional and industrial use. In this research, we will cover the water-saving technologies that can help to save water in daily use. We will project the influence of these technologies on urban water demand, and what they could mean for future demand. The ongoing climate change can reduce the snowpack and cause extreme floods or droughts in Italy. The changing climate and development patterns are expected to have a significant impact on water demand in the future. We will address this by conducting scenario analyses that combine different population projections, climate influences and water-saving technologies. In addition, we will conduct a sensitivity analysis. These analyses will show how future urban water demand is likely to respond to changes in water-conservation technologies, population, climate, water price and consumption. I hope this research can contribute to the reader's insight and inform their opinion.
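As a rough illustration of the scenario logic (not the Pacific Institute model itself), the sketch below combines a population projection with per-capita use, a conservation-technology saving and a climate factor. All coefficients and the population figures are illustrative assumptions.

```python
# Minimal sketch of the scenario logic described above: urban demand as
# population x per-capita use, adjusted by conservation-technology uptake and a
# climate factor. All coefficients are illustrative assumptions, not model data.

def project_demand(pop_by_year, per_capita_lpd, tech_saving, climate_factor):
    """Return annual urban demand in million m^3 for each projected year."""
    demand = {}
    for year, pop in pop_by_year.items():
        litres_per_day = pop * per_capita_lpd * (1 - tech_saving) * climate_factor
        demand[year] = litres_per_day * 365 / 1e9      # litres -> million m^3
    return demand

scenarios = {
    "baseline":     dict(per_capita_lpd=150, tech_saving=0.00, climate_factor=1.00),
    "conservation": dict(per_capita_lpd=150, tech_saving=0.20, climate_factor=1.00),
    "hot_dry":      dict(per_capita_lpd=150, tech_saving=0.20, climate_factor=1.08),
}
pop_projection = {2020: 4.46e6, 2025: 4.50e6, 2030: 4.55e6}   # hypothetical figures
for name, params in scenarios.items():
    result = project_demand(pop_projection, **params)
    print(name, {year: round(v, 1) for year, v in result.items()})
```

A sensitivity analysis of the kind mentioned above would vary one input at a time (price, per-capita use, climate factor) and compare the resulting demand curves.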
Abstract:
This doctoral thesis is devoted to the study of the causal effects of maternal smoking on the delivery cost. The economic consequences of smoking in pregnancy have been studied fairly extensively in the USA, while very little is known in the European context. To identify the causal relation between different maternal smoking statuses and the delivery cost in the Emilia-Romagna region, two distinct methods were used. The first - geometric multidimensional - is mainly based on a multivariate approach and involves computing and testing the global imbalance, classifying cases in order to generate well-matched comparison groups, and then computing treatment effects. The second - structural modelling - refers to a general methodological account of model building and model testing. The main idea of this approach is to decompose the global mechanism into sub-mechanisms through a recursive decomposition of a multivariate distribution.
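For readers unfamiliar with the matching step of the first approach, here is a minimal sketch of a nearest-neighbour covariate-matching estimate of the cost difference attributable to treatment. The covariates, sample sizes and cost figures are hypothetical; the thesis uses a more elaborate geometric multidimensional procedure with global imbalance testing.

```python
# Minimal sketch (not the thesis procedure): pair each treated case with the
# most similar control case on observed covariates, then average the
# within-pair cost differences. All data below are simulated for illustration.
import numpy as np

def matched_effect(X_treat, y_treat, X_ctrl, y_ctrl):
    """1-NN covariate matching; returns the average effect on the treated."""
    diffs = []
    for x, y in zip(X_treat, y_treat):
        j = np.argmin(np.linalg.norm(X_ctrl - x, axis=1))  # closest control case
        diffs.append(y - y_ctrl[j])
    return float(np.mean(diffs))

rng = np.random.default_rng(1)
X_ctrl = rng.normal(size=(500, 3))                         # hypothetical covariates
y_ctrl = 1500 + 80 * X_ctrl[:, 0] + rng.normal(0, 50, 500) # hypothetical costs
X_treat = rng.normal(0.2, 1.0, (100, 3))
y_treat = 1650 + 80 * X_treat[:, 0] + rng.normal(0, 50, 100)
print("estimated cost difference:", round(matched_effect(X_treat, y_treat, X_ctrl, y_ctrl), 1))
```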
Abstract:
Seventeen bones (sixteen cadaveric bones and one plastic bone) were used to validate a method for reconstructing a surface model of the proximal femur from 2D X-ray radiographs and a statistical shape model constructed from thirty training surface models. Unlike previously introduced validation studies, where surface-based distance errors were used to evaluate the reconstruction accuracy, here we propose to use errors measured on clinically relevant morphometric parameters. For this purpose, a program was developed to robustly extract those morphometric parameters from the thirty training surface models (the training population), from the seventeen surface models reconstructed from X-ray radiographs, and from the seventeen ground-truth surface models obtained either by a CT-scan reconstruction method or by a laser-scan reconstruction method. A statistical analysis was then performed to classify the seventeen test bones into two categories: normal cases and outliers. This classification depends on the measured parameters of the particular test bone: if all parameters of a test bone were covered by the training population's parameter ranges, the bone was classified as normal; otherwise, it was classified as an outlier. Our experimental results showed that, statistically, there was no significant difference between the morphometric parameters extracted from the reconstructed surface models of the normal cases and those extracted from the reconstructed surface models of the outliers. Therefore, our statistical shape model based reconstruction technique can be used to reconstruct not only the surface model of a normal bone but also that of an outlier bone.
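The normal/outlier classification step can be summarized in a few lines: a test bone is labelled normal only if every morphometric parameter lies within the range spanned by the training population. The sketch below uses hypothetical parameter names and values purely for illustration.

```python
# Minimal sketch of the range-based classification described above.
# Parameter columns and values are hypothetical stand-ins for the clinically
# relevant morphometric measures used in the study.
import numpy as np

def classify(train_params, test_params):
    """train_params: (n_train, n_params); test_params: (n_test, n_params)."""
    lo, hi = train_params.min(axis=0), train_params.max(axis=0)
    inside = (test_params >= lo) & (test_params <= hi)     # per-parameter check
    return np.where(inside.all(axis=1), "normal", "outlier")

rng = np.random.default_rng(0)
# e.g. columns: head radius [mm], neck-shaft angle [deg], neck length [mm]
train = rng.normal([46.0, 130.0, 12.5], [2.0, 4.0, 1.0], size=(30, 3))
test  = rng.normal([46.0, 131.0, 12.5], [3.0, 6.0, 1.5], size=(17, 3))
print(classify(train, test))
```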
Abstract:
We investigate a recently proposed model for decision learning in a population of spiking neurons in which synaptic plasticity is modulated by a population signal in addition to reward feedback. For the basic model, binary population decision making based on spike/no-spike coding, a detailed computational analysis is given of how learning performance depends on population size and task complexity. Next, we extend the basic model to n-ary decision making and show that it can also be used in conjunction with other population codes, such as rate or even latency coding.
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with non-related stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields a learning behavior which is consistent with behavioral data from humans and monkeys, which themselves reveal properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
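A toy sketch of the general scheme, under strong simplifying assumptions (binary inputs, sigmoidal spiking probabilities, a majority read-out and a synthetic task rule), is given below: each synapse keeps an eligibility trace of pre/post correlations, and the weight update is modulated jointly by the reward and by a population feedback signal. This illustrates the mechanism only; it is not the published spike-time-dependent derivation.

```python
# Toy illustration (not the published rule): reward- and population-modulated
# plasticity with per-synapse eligibility traces in a population of stochastic
# binary units. Task rule, parameters and read-out are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta, tau_e = 20, 10, 0.05, 5.0
w = rng.normal(0, 0.1, (n_out, n_in))
elig = np.zeros_like(w)
target = rng.integers(0, 2, n_in)                        # hypothetical task rule
rewards = []

for trial in range(2000):
    x = rng.integers(0, 2, n_in).astype(float)           # presynaptic spike pattern
    p = 1.0 / (1.0 + np.exp(-(w @ x)))                   # per-neuron spike probability
    spikes = (rng.random(n_out) < p).astype(float)       # stochastic postsynaptic spikes
    pop_signal = spikes.mean()                           # population feedback signal
    decision = int(pop_signal > 0.5)                     # binary population read-out
    correct = int(x @ target > target.sum() / 2.0)       # hypothetical correct choice
    reward = 1.0 if decision == correct else -1.0

    # eligibility trace: low-pass filtered pre/post correlation (spike minus expectation)
    elig += (-elig + np.outer(spikes - p, x)) / tau_e
    # plasticity modulated by reward and by each neuron's agreement with the population
    agreement = np.where(spikes == decision, 1.0, -1.0)
    w += eta * reward * agreement[:, None] * elig
    rewards.append(reward)

print("mean reward, first vs last 200 trials:",
      np.mean(rewards[:200]), np.mean(rewards[-200:]))
```

In the actual model the traces additionally solve the temporal credit assignment problem by bridging the delay between a synaptic release and a later reward; the toy above only shows how the spatial (population) component enters the update.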
Abstract:
Learning by reinforcement is important in shaping animal behavior. But behavioral decision making is likely to involve the integration of many synaptic events in space and time. So, in using a single reinforcement signal to modulate synaptic plasticity, a twofold problem arises. Different synapses will have contributed differently to the behavioral decision and, even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but by a population feedback signal as well. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal-difference-based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task the reward is delayed beyond the last action, with non-related stimuli and actions appearing in between. The second one involves an action sequence which is itself extended in time and reward is only delivered at the last action, as is the case in any type of board game. The third is the inspection game that has been studied in neuroeconomics. It only has a mixed Nash equilibrium and exemplifies that the model also copes with stochastic reward delivery and the learning of mixed strategies.
Abstract:
We present a model for plasticity induction in reinforcement learning which is based on a cascade of synaptic memory traces. In the cascade of these so-called eligibility traces, presynaptic input is first correlated with postsynaptic events, next with the behavioral decisions and finally with the external reinforcement. A population of leaky integrate-and-fire neurons endowed with this plasticity scheme is studied by simulation on different tasks. For operant conditioning with delayed reinforcement, learning succeeds even when the delay is so large that the delivered reward reflects the appropriateness, not of the immediately preceding response, but of a decision made earlier on in the stimulus-decision sequence. So the proposed model does not rely on temporal contiguity between decision and pertinent reward and thus provides a viable means of addressing the temporal credit assignment problem. In the same task, learning speeds up with increasing population size, showing that the plasticity cascade simultaneously addresses the spatial problem of assigning credit to the different population neurons. Simulations on other tasks, such as sequential decision making, serve to highlight the robustness of the proposed scheme and, further, contrast its performance to that of temporal-difference-based approaches to reinforcement learning.
Abstract:
In this paper we present a new population-based method for the design of bone fixation plates. Standard pre-contoured plates are designed based on the mean shape of a certain population. We propose a computational process to design implants while reducing the amount of required intra-operative shaping, thus reducing the mechanical stresses applied to the plate. A bending and torsion model was used to measure and minimize the necessary intra-operative deformation. The method was applied and validated on a population of 200 femurs, further augmented with a statistical shape model. The results showed a substantial reduction in the bending and torsion needed to shape the new design onto any bone in the population, compared with standard mean-based plates.
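As an illustration of the design principle (not the authors' bending and torsion model), the sketch below reduces each plate to a single bend and twist angle and compares a mean-based design with one chosen to minimize the worst-case intra-operative deformation over a hypothetical population.

```python
# Minimal sketch of population-based plate design: instead of fitting the mean
# shape, choose the plate geometry that minimizes the worst-case deformation
# still required across the population. Angles, weights and the deformation
# cost are illustrative assumptions, not the paper's bending/torsion model.
import numpy as np

rng = np.random.default_rng(0)
# hypothetical per-bone bend/twist angles (degrees) required to fit a flat plate
required = rng.normal(loc=[12.0, 7.0], scale=[3.0, 2.5], size=(200, 2))

def deformation_cost(design, population, w_bend=1.0, w_torsion=1.5):
    """Weighted deformation still needed to shape `design` onto each bone."""
    residual = np.abs(population - design)          # remaining bend / twist
    return (residual * [w_bend, w_torsion]).sum(axis=1)

# mean-based plate vs. a plate chosen to minimize the worst-case deformation
mean_design = required.mean(axis=0)
grid = np.stack(np.meshgrid(np.linspace(5, 20, 61), np.linspace(2, 14, 49)), -1).reshape(-1, 2)
minimax_design = grid[np.argmin([deformation_cost(d, required).max() for d in grid])]

print("mean-based plate, worst case :", deformation_cost(mean_design, required).max().round(2))
print("minimax plate,    worst case :", deformation_cost(minimax_design, required).max().round(2))
```

In the paper the population itself is enlarged with a statistical shape model, so the same optimization can be carried out over plausible synthetic anatomies rather than only the 200 measured femurs.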