977 results for policy simulation
Abstract:
This workshop is jointly organized by EFMI Working Groups Security, Safety and Ethics and Personal Portable Devices in cooperation with IMIA Working Group "Security in Health Information Systems". In contemporary healthcare and personal health management the collection and use of personal health information takes place in different contexts and jurisdictions. Global use of health data is also expanding. The approach taken by different experts, health service providers, data subjects and secondary users in understanding privacy and the privacy expectations others may have is strongly context dependent. To make eHealth, global healthcare, mHealth and personal health management successful and to enable fair secondary use of personal health data, it is necessary to find a practical and functional balance between privacy expectations of stakeholder groups. The workshop will highlight these privacy concerns by presenting different cases and approaches. Workshop participants will analyse stakeholder privacy expectations that take place in different real-life contexts such as portable health devices and personal health records, and develop a mechanism to balance them in such a way that global protection of health data and its meaningful use is realized simultaneously. Based on the results of the workshop, initial requirements for a global healthcare information certification framework will be developed.
Abstract:
In this paper, we study the behaviour of the slotted Aloha multiple access scheme with a finite number of users under different traffic loads and optimize the retransmission probability q_r for various settings, cost objectives and policies. First, we formulate the problem as a parameter optimization problem and use certain efficient smoothed functional algorithms for finding the optimal retransmission probability parameter. Next, we propose two classes of multi-level closed-loop feedback policies (for finding in each case the retransmission probability q_r that now depends on the current system state) and apply the above algorithms for finding an optimal policy within each class of policies. While one of the policy classes depends on the number of backlogged nodes in the system, the other depends on the number of time slots since the last successful transmission. The latter policies are more realistic, as it is difficult to keep track of the number of backlogged nodes at each instant. We investigate the effect of increasing the number of levels in the feedback policies. We also investigate the effects of using different cost functions (with and without penalization) in our algorithms and the corresponding changes in throughput and delay. Both of our algorithms use two-timescale stochastic approximation. One of the algorithms uses one simulation while the other uses two simulations of the system. The two-simulation algorithm is seen to perform better than the other algorithm. Optimal multi-level closed-loop policies are seen to perform better than optimal open-loop policies. The performance further improves when more levels are used in the feedback policies.
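As a rough, self-contained illustration of the system being optimized (not the authors' smoothed functional algorithm), the sketch below simulates finite-user slotted Aloha under a fixed open-loop retransmission probability q_r; the arrival probability and all parameter values are assumptions chosen for the example.

```python
import random

def simulate_slotted_aloha(n_users=10, q_r=0.1, p_arrival=0.05,
                           n_slots=100_000, seed=0):
    """Finite-user slotted Aloha: an unbacklogged node transmits a fresh
    packet with probability p_arrival; a backlogged node retransmits with
    the open-loop probability q_r.  Returns the empirical throughput."""
    rng = random.Random(seed)
    backlogged = [False] * n_users
    successes = 0
    for _ in range(n_slots):
        transmitters = []
        for i in range(n_users):
            p = q_r if backlogged[i] else p_arrival
            if rng.random() < p:
                transmitters.append(i)
        if len(transmitters) == 1:          # exactly one sender: success
            successes += 1
            backlogged[transmitters[0]] = False
        elif transmitters:                  # collision: all become backlogged
            for i in transmitters:
                backlogged[i] = True
    return successes / n_slots
```

Sweeping q_r over a grid and keeping the value with the best simulated throughput crudely mimics the open-loop parameter optimization that the paper performs by stochastic search.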
Abstract:
Owing to the discrete disclosure practices of the Reserve Bank of Australia, this paper provides new evidence on the channels of monetary policy triggered by central bank actions (monetary policy announcements) and statements (explanatory minutes releases) in the Australian equity market. Both monetary policy announcements and explanatory minutes releases are shown to have a significant and comparable impact on the returns and volatility of the Australian equity market. Further, distinct from US and European studies that find strong evidence of the interest rate, bank loan and balance sheet channels and no evidence of the exchange rate channel following central bank actions, this paper finds that monetary policy impacts the Australian equity market via the exchange rate, interest rate and bank loan channels of monetary policy, with only weak evidence of the balance sheet channel of monetary policy. These channels are found to be operating irrespective of the trigger (monetary policy announcements or explanatory minutes releases), though results are somewhat weaker when examining the explanatory minutes releases. These results have important implications for central bank officials and financial market participants alike, both by confirming a comparable avenue through which monetary policy can be transmitted and by providing an explication of its impact on the Australian equity market.
Abstract:
Fusion power is an appealing source of clean and abundant energy. The radiation resistance of reactor materials is one of the greatest obstacles on the path towards commercial fusion power. These materials are subject to a harsh radiation environment, and cannot fail mechanically or contaminate the fusion plasma. Moreover, for a power plant to be economically viable, the reactor materials must withstand long operation times with little maintenance. The fusion reactor materials will contain hydrogen and helium, due to deposition from the plasma and to nuclear reactions caused by energetic neutron irradiation. The first-wall and divertor materials, carbon and tungsten in existing and planned test reactors, will be subject to intense bombardment by low-energy deuterium and helium, which erodes and modifies the surface. All reactor materials, including the structural steel, will suffer irradiation by high-energy neutrons, causing displacement cascade damage. Molecular dynamics simulation is a valuable tool for studying irradiation phenomena, such as surface bombardment and the onset of primary damage due to displacement cascades. The governing mechanisms are on the atomic level, and hence not easily studied experimentally. In order to model materials, interatomic potentials are needed to describe the interaction between the atoms. In this thesis, new interatomic potentials were developed for the tungsten-carbon-hydrogen system and for iron-helium and chromium-helium. Thus, the study of previously inaccessible systems was made possible, in particular the effect of H and He on radiation damage. The potentials were based on experimental and ab initio data from the literature, as well as density-functional theory calculations performed in this work. As a model for ferritic steel, iron-chromium with 10% Cr was studied. The difference between Fe and FeCr was shown to be negligible for threshold displacement energies.
The properties of small He and He-vacancy clusters in Fe and FeCr were also investigated. The clusters were found to be more mobile and dissociate more rapidly than previously assumed, and the effect of Cr was small. The primary damage formed by displacement cascades was found to be heavily influenced by the presence of He, both in FeCr and W. Many important issues with fusion reactor materials remain poorly understood, and will require a huge effort by the international community. The development of potential models for new materials and the simulations performed in this thesis reveal many interesting features, but also serve as a platform for further studies.
Abstract:
The study examines the origin and development of the Finnish activation policy since the mid-1990s by using the 2001 activation reform as a benchmark. The notion behind activation is to link work obligations to welfare benefits for the unemployed. The focus of the thesis is policy learning and the impact of ideas on the reform of the welfare state. The broader research interests of the thesis are summarized by two groups of questions. First, how was the Finnish activation policy developed and what specific form did it receive in the 2001 activation reform? Second, how does the Finnish activation policy compare to the welfare reforms in the EU and in the US? What kinds of ideas and instruments informed the Finnish policy? To what extent can we talk about a restructuring or transformation of the Nordic welfare policy? Theoretically, the thesis is embedded in the comparative welfare state research and the concepts used in the contemporary welfare state discourse. Activation policy is analysed against the backdrop of the theories about the welfare state, welfare state governance and citizenship. Activation policies are also analysed in the context of the overall modernization and individualization of lifestyles and its implications for the individual citizen. Further, the different perspectives of the policy analysis are applied to determine the role of implementation and street-level practice within the whole. Empirically, the policy design, its implementation and the experiences of the welfare staff and recipients in Finland are examined. The policy development, goals and instruments of the activation policies have followed astonishingly similar paths in the different welfare states and regimes over the last two decades. In Finland, the policy change has been manifested through several successive reforms that have been introduced since the mid-1990s. 
The 2001 activation reform, the Act on Rehabilitative Work Experience, illustrates the broader trend towards stricter work requirements and draws its inspiration from the ideas of new paternalism. The ideas, goals and instruments of the international activation trend are clearly visible in the reform. Similarly, the reform has implications for the traditional Nordic social policies, which incorporate institutionalised social rights and the provision of services.
Abstract:
The purpose of this research is to identify the optimal poverty policy for a welfare state. Poverty is defined by income. Policies for reducing poverty are considered primary, and those for reducing inequality secondary. Poverty is seen as a function of the income transfer system within a welfare state. This research presents a method for optimising this function for the purposes of reducing poverty, and applies it to the representative population sample in the Income Distribution Data using the SOMA simulation model. The iterative simulation process is continued until a level of poverty is reached at which improvements can no longer be made. Expenditures and taxes are kept in balance during the process. The result consists of two programmes. The first programme (the social assistance programme) was formulated using five social assistance parameters, all of which dealt with the norms of social assistance for adults (€/month). In the second programme (the basic benefits programme), in which social assistance was frozen at the legislative level of 2003, the parameter with the strongest poverty reduction effect turned out to be one of the basic unemployment allowances. This was followed by the norm of the national pension for a single person, two parameters related to housing allowance, and the norm for financial aid for students of higher education institutions. The most effective financing parameter, as measured by the Gini coefficient, in all programmes was the capital taxation percentage. Furthermore, these programmes can also be examined in relation to their costs. The social assistance programme is significantly cheaper than the basic benefits programme, and therefore, with regard to poverty, the social assistance programme is more cost effective than the basic benefits programme. Thus, public demand for raising the level of basic benefits does not seem to correspond to the most cost effective poverty policy.
Raising basic benefits has the greatest effect on reducing poverty within the group of people whose basic benefits are raised. Raising social assistance, on the other hand, seems to have a strong influence on the poverty of all population groups. The most significant outcome of this research is the development of a method through which a welfare state’s income transfer-based safety net, which has severely deteriorated in recent decades, might be mended. The only ways of doing so involve either social assistance alone or some forms of basic benefits supplemented by modifications to social assistance.
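The iterative simulation process described above can be caricatured as a greedy coordinate search over transfer parameters. The sketch below is a hypothetical stand-in, not the SOMA microsimulation itself: the `optimise` routine, the `evaluate` callback and the parameter names are all invented for illustration.

```python
def optimise(params, evaluate, step=10.0, max_iter=100):
    """Greedy coordinate search: at each iteration, raise the one transfer
    parameter whose increase most reduces the evaluated poverty measure;
    stop when no single raise helps any more."""
    best = evaluate(params)
    for _ in range(max_iter):
        candidate, cand_score = None, best
        for name in params:
            trial = dict(params)
            trial[name] += step   # raise one benefit norm (e.g. €/month)
            score = evaluate(trial)
            if score < cand_score:
                candidate, cand_score = trial, score
        if candidate is None:     # no parameter raise reduces poverty further
            break
        params, best = candidate, cand_score
    return params, best
```

In the actual study, each `evaluate` call would correspond to a full microsimulation run on the Income Distribution Data, with expenditures and taxes rebalanced before the poverty level is read off.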
Abstract:
As an emerging research method that has shown promising potential in several research disciplines, simulation has received relatively little attention in information systems research. This paper illustrates a framework for employing simulation to study IT value cocreation. Although previous studies have identified factors driving IT value cocreation, its underlying process remains unclear. Simulation can address this limitation by exploring this underlying process with computational experiments. The simulation framework in this paper is based on an extended NK model. Agent-based modeling is employed as the theoretical basis for the NK model extensions.
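To make the approach concrete, here is a minimal NK fitness landscape paired with a hill-climbing agent, a common agent-based-modelling pattern; it is a generic sketch, not the paper's extended model, and the values of N, K and the search rule are illustrative.

```python
import itertools
import random

def nk_landscape(N, K, seed=0):
    """Build an NK fitness function: each of the N loci contributes a value
    that depends on its own state and the states of its K neighbours."""
    rng = random.Random(seed)
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
        for _ in range(N)
    ]
    def fitness(genome):
        return sum(
            tables[i][tuple(genome[(i + j) % N] for j in range(K + 1))]
            for i in range(N)
        ) / N
    return fitness

def hill_climb(fitness, genome, n_steps=200, seed=1):
    """Agent repeatedly flips one random bit, keeping the flip only if it
    improves fitness (local adaptive search on the landscape)."""
    rng = random.Random(seed)
    genome = list(genome)
    best = fitness(tuple(genome))
    for _ in range(n_steps):
        i = rng.randrange(len(genome))
        genome[i] ^= 1
        f = fitness(tuple(genome))
        if f > best:
            best = f
        else:
            genome[i] ^= 1   # revert the unhelpful flip
    return tuple(genome), best
```

Raising K increases epistatic interaction and makes the landscape more rugged, which is the usual lever in NK-based studies of interdependent choices such as value cocreation.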
Abstract:
Given Australia’s population ageing and predicted impacts related to health, productivity, equity and enhancing quality of life outcomes for senior Australians, lifelong learning has been identified as a pathway for addressing the risks associated with an ageing population. To date Australian governments have paid little attention to addressing these needs and thus, there is an urgent need for policy development for lifelong learning as a national priority. The purpose of this article is to explore the current lifelong learning context in Australia and to propose a set of factors that are most likely to impact learning in later years.
Abstract:
This paper describes a concept for a collision avoidance system for ships, which is based on model predictive control. A finite set of alternative control behaviors is generated by varying two parameters: offsets to the guidance course angle commanded to the autopilot and changes to the propulsion command ranging from nominal speed to full reverse. Using simulated predictions of the trajectories of the obstacles and ship, compliance with the Convention on the International Regulations for Preventing Collisions at Sea and collision hazards associated with each of the alternative control behaviors are evaluated on a finite prediction horizon, and the optimal control behavior is selected. Robustness to sensing error, predicted obstacle behavior, and environmental conditions can be ensured by evaluating multiple scenarios for each control behavior. The method is conceptually and computationally simple and yet quite versatile, as it can account for the dynamics of the ship, the dynamics of the steering and propulsion system, forces due to wind and ocean current, and any number of obstacles. Simulations show that the method is effective and can manage complex scenarios with multiple dynamic obstacles and uncertainty associated with sensors and predictions.
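A stripped-down version of the scheme, with hypothetical offset/speed-factor sets, straight-line kinematics, and a simple hazard-plus-deviation cost (omitting the COLREGS terms and ship dynamics of the actual system), might look like:

```python
import math

# hypothetical candidate sets: course offsets and propulsion scalings
COURSE_OFFSETS = [math.radians(d) for d in (-90, -60, -30, -15, 0, 15, 30, 60, 90)]
SPEED_FACTORS = [1.0, 0.5, 0.0, -0.25]   # nominal speed ... full reverse

def predict(pos, heading, speed, horizon, dt=1.0):
    """Straight-line constant-speed prediction of a trajectory."""
    x, y = pos
    traj = []
    for _ in range(int(horizon / dt)):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        traj.append((x, y))
    return traj

def best_behaviour(own_pos, own_heading, own_speed, obstacle_traj,
                   horizon=60.0, d_safe=50.0):
    """Evaluate every (offset, speed-factor) pair over the prediction horizon
    and return the cheapest behaviour: hazard penalty plus deviation cost."""
    best, best_cost = None, float("inf")
    for off in COURSE_OFFSETS:
        for f in SPEED_FACTORS:
            traj = predict(own_pos, own_heading + off, own_speed * f, horizon)
            hazard = any(math.dist(p, q) < d_safe
                         for p, q in zip(traj, obstacle_traj))
            cost = (1e6 if hazard else 0.0) + abs(off) + (1.0 - f)
            if cost < best_cost:
                best, best_cost = (off, f), cost
    return best, best_cost
```

Each candidate behaviour is simulated forward and scored; the robustness described in the abstract could be added by averaging each behaviour's cost over several obstacle-trajectory scenarios instead of a single one.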
Abstract:
Microwave sources used in present day applications are either multiplied sources derived from basic quartz crystals, or frequency synthesizers. The frequency multiplication method increases FM noise power considerably, and has very low efficiency in addition to being very complex and expensive. The complexity and cost involved demand a simple, compact and tunable microwave source. A tunable dielectric resonator oscillator (DRO) is an ideal choice for such applications. In this paper, the simulation, design and realization of a tunable DRO with a centre frequency of 6250 MHz is presented. Simulation has been carried out using HP EEsof CAD software. Mechanical and electronic tuning features are provided. The DRO operates over a frequency range of 6235 MHz to 6375 MHz. The output power is +5.33 dBm at the centre frequency. The performance of the DRO is as per design with respect to phase noise, harmonic levels and tunability, and hence it can conveniently be used for the intended applications.
Abstract:
A passive wavelength/time fiber-optic code division multiple access (W/T FO-CDMA) network is a viable option for high-speed access networks. Constructions of 2-D codes, suitable for incoherent W/T FO-CDMA, have been proposed to reduce the time spread of the 1-D sequences. The 2-D constructions can be broadly classified as 1) hybrid codes and 2) matrix codes. In our earlier work [14], we had proposed a new family of wavelength/time multiple-pulses-per-row (W/T MPR) matrix codes which have good cardinality and spectral efficiency and at the same time have the lowest off-peak autocorrelation and cross-correlation values, equal to unity. In this paper we propose an architecture for a W/T MPR FO-CDMA network designed using presently available devices and technology. A complete FO-CDMA network of ten users is simulated for various numbers of simultaneous users, and it is shown that 0 → 1 errors can occur only when the number of interfering users is at least equal to the threshold value.
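As a simplified, hypothetical illustration of the correlation properties mentioned (one pulse per wavelength row, rather than the actual multiple-pulses-per-row construction), the periodic correlations of two 2-D codes can be checked as:

```python
def correlation(code_a, code_b, n_cols):
    """Periodic 2-D correlation: for every cyclic time shift of code_b,
    count the (wavelength, time-chip) pulses that coincide with code_a."""
    return [
        len(code_a & {(w, (t + tau) % n_cols) for (w, t) in code_b})
        for tau in range(n_cols)
    ]

# hypothetical example codes: one pulse per wavelength row, 7 time chips
CODE_A = {(0, 0), (1, 1), (2, 3)}
CODE_B = {(0, 0), (1, 2), (2, 6)}
```

For these toy codes the off-peak autocorrelation is zero and every cross-correlation value is at most one, mirroring the unity bound claimed for the W/T MPR family; interference at a receiver grows as such cross-correlation hits from simultaneous users accumulate.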
Abstract:
The thesis examines urban issues arising from the transformation from state socialism to a market economy. The main topics are residential differentiation, i.e., uneven spatial distribution of social groups across urban residential areas, and the effects of housing policy and town planning on urban development. The case study is development in Tallinn, the capital city of Estonia, in the context of development of Central and Eastern European cities under and after socialism. The main body of the thesis consists of four separately published refereed articles. The research question that brings the articles together is how the residential (socio-spatial) pattern of cities developed during the state socialist period and how and why that pattern has changed since the transformation to a market economy began. The first article reviews the literature on residential differentiation in Budapest, Prague, Tallinn and Warsaw under state socialism from the viewpoint of the role of housing policy in the processes of residential differentiation at various stages of the socialist era. The paper shows how the socialist housing provision system produced socio-occupational residential differentiation directly and indirectly and it describes how the residential patterns of these cities developed. The second article is critical of oversimplified accounts of rapid reorganisation of the overall socio-spatial pattern of post-socialist cities and of claims that residential mobility has had a straightforward role in it. The Tallinn case study, consisting of an analysis of the distribution of socio-economic groups across eight city districts and over four housing types in 1999 as well as examining the role of residential mobility in differentiation during the 1990s, provides contrasting evidence. The third article analyses the role and effects of housing policies in Tallinn's residential differentiation.
The focus is on contemporary post-privatisation housing-policy measures and their effects. The article shows that the Estonian housing policies do not even aim to reduce, prevent or slow down the harmful effects of the considerable income disparities that are manifest in housing inequality and residential differentiation. The fourth article examines the development of Tallinn's urban planning system in 1991-2004 from the viewpoint of the means it has provided the city to intervene in urban development and how the city has used these tools. The paper finds that despite some recent progress in planning, its role in guiding where and how the city actually developed has so far been limited. Tallinn's urban development is rather initiated and driven by private agents seeking profit from their investment in land. The thesis includes original empirical research in the three articles that analyse development since socialism. The second article employs quantitative data and methods, primarily index calculation, whereas the third and the fourth ones draw from a survey of policy documents combined with interviews with key informants. Keywords: residential differentiation, housing policy, urban planning, post-socialist transformation, Estonia, Tallinn
Abstract:
The increase in drug use and related harms in the late 1990s in Finland has come to be referred to as the second drug wave. In addition to using criminal justice as a basis of drug policy, new kinds of drug regulation were introduced. Some of the new regulation strategies were referred to as "harm reduction". The most widely known practices of harm reduction include needle and syringe exchange programmes for intravenous drug users and medicinal substitution and maintenance treatment programmes for opiate users. The purpose of the study is to examine the change of drug policy in Finland and particularly the political struggle surrounding harm reduction in the context of this change. The aim is, first, to analyse the content of harm reduction policy and the dynamics of its emergence and, second, to assess to what extent harm reduction undermines or threatens traditional drug policy. The concept of harm reduction is typically associated with a drug policy strategy that employs the public health approach and where the principal focus of regulation is on drug-related health harms and risks. On the other hand, harm reduction policy has also been given other interpretations, relating, in particular, to human rights and social equality. In Finland, harm reduction can also be seen to have its roots in criminal policy. The general conclusion of the study is that rather than posing a threat to a prohibitionist drug policy, harm reduction has come to form part of it. The implementation of harm reduction by setting up health counselling centres for drug users with the main focus on needle exchange and by extending substitution treatment has implied the creation of specialised services based on medical expertise and an increasing involvement of the medical profession in addressing drug problems. At the same time the criminal justice control of drug use has been intensified. 
Accordingly, harm reduction has not entailed a shift to a more liberal drug policy, nor has it undermined the traditional policy with its emphasis on total drug prohibition. Instead, harm reduction in combination with a prohibitionist penal policy constitutes a new dual-track drug policy paradigm. The study draws on the constructionist tradition of research on social problems and movements, where the analysis centres on claims made about social problems, claim-makers, ways of making claims and related social mobilisation. The research material mainly consists of administrative documents and interviews with key stakeholders. The doctoral study consists of five original articles and a summary article. The first article gives an overview of the strained process of change of drug policy and policy trends around the turn of the millennium. The second article focuses on the concept of harm reduction and the international organisations and groupings involved in defining it. The third article describes the process that in 1996–97 led to the creation of the first Finnish national drug policy strategy by reconciling mutually contradictory views of addressing the drug problem, at the same time as the way was paved for harm reduction measures. The fourth article seeks to explain the relatively rapid diffusion of needle exchange programmes after 1996. The fifth article assesses substitution treatment as a harm reduction measure from the viewpoint of the associations of opioid users and their family members.
Abstract:
This study examines different ways in which the concept of media pluralism has been theorized and used in contemporary media policy debates. Access to a broad range of different political views and cultural expressions is often regarded as a self-evident value in both theoretical and political debates on media and democracy. Opinions on the meaning and nature of media pluralism as a theoretical, political or empirical concept, however, are many, and it can easily be adjusted to different political purposes. The study aims to analyse the ambiguities surrounding the concept of media pluralism in two ways: by deconstructing its normative roots from the perspective of democratic theory, and by examining its different uses, definitions and underlying rationalities in current European media policy debates. The first part of the study examines the values and assumptions behind the notion of media pluralism in the context of different theories of democracy and the public sphere. The second part then analyses and assesses the deployment of the concept in contemporary European policy debates on media ownership and public service media. Finally, the study critically evaluates various attempts to create empirical indicators for measuring media pluralism and discusses their normative implications and underlying rationalities. The analysis of contemporary policy debates indicates that the notion of media pluralism has been too readily reduced to an empty catchphrase or conflated with consumer choice and market competition. In this narrow technocratic logic, pluralism is often unreflectively associated with quantitative data in a way that leaves unexamined key questions about social and political values, democracy, and citizenship. The basic argument advanced in the study is that media pluralism needs to be rescued from its depoliticized uses and re-imagined more broadly as a normative value that refers to the distribution of communicative power in the public sphere. 
Instead of something that could simply be measured through the number of media outlets available, the study argues that media pluralism should be understood in terms of its ability to challenge inequalities in communicative power and create a more democratic public sphere.
Abstract:
This article proposes a three-timescale simulation-based algorithm for the solution of infinite horizon Markov Decision Processes (MDPs). We assume a finite state space and discounted cost criterion and adopt the value iteration approach. An approximation of the Dynamic Programming operator T is applied to the value function iterates. This 'approximate' operator is implemented using three timescales, the slowest of which updates the value function iterates. On the middle timescale we perform a gradient search over the feasible action set of each state using Simultaneous Perturbation Stochastic Approximation (SPSA) gradient estimates, thus finding the minimizing action in T. On the fastest timescale, the 'critic' estimates, over which the gradient search is performed, are obtained. A sketch of convergence explaining the dynamics of the algorithm using associated ODEs is also presented. Numerical experiments on rate-based flow control on a bottleneck node using a continuous-time queueing model are performed using the proposed algorithm. The results obtained are verified against classical value iteration where the feasible set is suitably discretized. Over such a discretized setting, a variant of the algorithm of [12] is also compared, and the proposed algorithm is found to converge faster.
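The middle-timescale gradient search described above relies on SPSA, whose appeal is that two function evaluations per iteration estimate the whole gradient regardless of dimension. A generic single-timescale SPSA sketch (the gain constants and decay exponents are illustrative, not those of the article):

```python
import random

def spsa_minimise(f, theta, n_iter=2000, a=0.1, c=0.1, seed=0):
    """Simultaneous Perturbation Stochastic Approximation: perturb all
    coordinates at once with random signs, so two evaluations of f per
    iteration yield an estimate of the entire gradient."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, n_iter + 1):
        a_k = a / k ** 0.602        # step-size and perturbation-size
        c_k = c / k ** 0.101        # schedules with standard exponents
        delta = [rng.choice((-1, 1)) for _ in theta]
        plus = [t + c_k * d for t, d in zip(theta, delta)]
        minus = [t - c_k * d for t, d in zip(theta, delta)]
        g = (f(plus) - f(minus)) / (2.0 * c_k)
        theta = [t - a_k * g / d for t, d in zip(theta, delta)]
    return theta
```

In the three-timescale scheme, an inner loop of this kind searches each state's action set while faster 'critic' estimates supply the (noisy) function values f and the slowest loop updates the value function iterates.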