829 results for Two Approaches
Abstract:
The Ideal of Volunteerism. An institutional approach to social welfare work in the parishes of the Diocese of Porvoo, especially in the deaneries of Iitti and Tampere, Finland, in the years 1897-1923. Social welfare work (also known as diakonia) has achieved a high status in the Evangelical Lutheran Church of Finland. Since 1944, provisions of the Finnish Church Act have obliged each parish to employ at least one deacon or deaconess. This study sets out to examine the background and development of social welfare work in the Evangelical Lutheran Church of Finland from the 1890s to the 1920s, by which time social welfare work had become an established practice in the Church. The study investigates the development of social welfare work on the level of parishes. The main source material was collected from sixteen parishes in the Diocese of Porvoo, especially in the deaneries of Iitti and Tampere. In the 1890s, two approaches were used in church social work in Finland. The dioceses of Kuopio, Savonlinna and Turku pursued a congregational approach to social work, while the Diocese of Porvoo employed an institutional approach, mainly because of the influence of Bishop Herman Råbergh. This study charts the formation of church social work in Finnish parishes, which took place during a period of tension between the two approaches. The institutional approach to church social work adopted by the Diocese of Porvoo was based on the German system of "sisters' houses", in which deaconess institutes sent parish sisters to serve congregations. The parish or, in many cases, a separate association dedicated to church social work paid an annual fee to the deaconess institute, which took care of the parish sisters in old age. In the institutional approach, volunteers were recruited to carry out church social work.
It was considered inappropriate to use tax revenue or other public funding for church social work, which was supposed to be based on Christian love for one's fellow humans and the needy, and for which only voluntary financial contributions were supposed to be used. In the congregational approach, church social work was directly based on the efforts of the parish. The approach relied on the administrative bodies of parishes and the Church, and tax revenue collected by the parishes, as well as other forms of public funding, could be used to carry out the social welfare work. The parishes employed deacons and deaconesses and paid their salaries. The approaches described above were not pursued in their ideal forms; instead, many variations existed. However, in principle, the social welfare work undertaken by the parishes of the Diocese of Porvoo was based on the institutional approach, while the congregational approach was largely employed elsewhere in Finland. Both of the approaches were viable. Parishes began to employ deacons and deaconesses as of the 1890s. The number of parishes which had hired a deacon or deaconess increased particularly in the 1910s, by which time 60% of parishes had employed one. This level was maintained until 1944, when each parish in the Evangelical Lutheran Church of Finland was obliged to employ a deacon or deaconess. Deaconesses usually worked as travelling nurses. The autonomous status of Finland as part of the Russian Empire did not give Finns the right to develop legislation on social affairs and health care. Consequently, the legislation process did not begin until Finland gained its independence in 1917. The social welfare work carried out by parishes and a number of voluntary organisations satisfied the emerging need for medical treatment in Finnish society. Neither the government nor the municipalities had sufficient resources to provide this treatment.
Based on the ideal of volunteerism, the institutional social work practiced in the Diocese of Porvoo ran into serious difficulties at the end of the First World War. Because of severe inflation, prices began to rise as of 1915 and tripled in 1917-1918. During the same period, Finnish society went through a deep crisis which escalated into Civil War in spring 1918. This period of economic and social turmoil marked a turning-point which led to a weakening of the status of institutional social work in parishes. Voluntary efforts were no longer sufficient to maintain the practice. In contrast, congregational social work, which was based on public funding, was able to cope with the changes and survived the crisis. The approach to social work adopted by the Diocese of Porvoo turned out to be no more than a brief detour in the history of social work in the Evangelical Lutheran Church of Finland. At the start of the 1920s, the two approaches were integrated into a common vision for establishing church social work as a statutory practice in parishes.
Abstract:
- Objective To compare health service cost and length of stay between a traditional and an accelerated diagnostic approach to assess acute coronary syndromes (ACS) among patients who presented to the emergency department (ED) of a large tertiary hospital in Australia. - Design, setting and participants This historically controlled study analysed data collected from two independent patient cohorts presenting to the ED with potential ACS. The first cohort of 938 patients was recruited in 2008–2010, and these patients were assessed using the traditional diagnostic approach detailed in the national guideline. The second cohort of 921 patients was recruited in 2011–2013 and was assessed with the accelerated diagnostic approach named the Brisbane protocol. The Brisbane protocol applied early serial troponin testing for patients at 0 and 2 h after presentation to the ED, in comparison with 0 and 6 h testing in the traditional assessment process. The Brisbane protocol also defined a low-risk group of patients in whom no objective testing was performed. A decision tree model was used to compare the expected cost and length of stay in hospital between the two approaches. Probabilistic sensitivity analysis was used to account for model uncertainty. - Results Compared with the traditional diagnostic approach, the Brisbane protocol was associated with a reduced expected cost of $1229 (95% CI −$1266 to $5122) and a reduced expected length of stay of 26 h (95% CI −14 to 136 h). The Brisbane protocol allowed physicians to discharge a higher proportion of low-risk and intermediate-risk patients from the ED within 4 h (72% vs 51%). Results from the sensitivity analysis suggested the Brisbane protocol had a high chance of being cost-saving and time-saving. - Conclusions This study provides some evidence of cost savings from a decision to adopt the Brisbane protocol. Benefits would arise for the hospital and for patients and their families.
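A decision tree model of the kind described computes an expected cost for each pathway by weighting the cost of each branch by its probability. The sketch below illustrates that calculation only; every probability and cost here is an invented placeholder, not a figure from the study.

```python
# Sketch of a decision-tree expected-cost comparison. All probabilities and
# costs are hypothetical placeholders, not the study's actual inputs.

def expected_value(branches):
    """Expected value at a chance node: sum of probability * payoff."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * v for p, v in branches)

# Traditional pathway: every patient undergoes 0 h and 6 h troponin testing.
traditional_cost = expected_value([
    (0.15, 9000.0),   # ACS confirmed: admission and treatment
    (0.85, 2500.0),   # ACS ruled out after prolonged observation
])

# Accelerated (Brisbane-style) pathway: a low-risk group is discharged early
# with no objective testing; the rest follow 0 h / 2 h serial testing.
accelerated_cost = expected_value([
    (0.20, 800.0),    # low risk: early discharge
    (0.15, 9000.0),   # ACS confirmed
    (0.65, 1800.0),   # ruled out after accelerated work-up
])

saving = traditional_cost - accelerated_cost
print(f"expected saving per patient: ${saving:.0f}")
```

Probabilistic sensitivity analysis would repeat this calculation many times with the branch probabilities and costs drawn from distributions rather than fixed at point estimates.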
Abstract:
Environmental variation is a fact of life for all the species on earth: for any population of any particular species, the local environmental conditions are liable to vary in both time and space. In today's world, anthropogenic activity is causing habitat loss and fragmentation for many species, which may profoundly alter the characteristics of environmental variation in remaining habitat. Previous research indicates that, as habitat is lost, the spatial configuration of remaining habitat will increasingly affect the dynamics by which populations are governed. Through the use of mathematical models, this thesis asks how environmental variation interacts with species properties to influence population dynamics, local adaptation, and dispersal evolution. More specifically, we couple continuous-time continuous-space stochastic population dynamic models to landscape models. We manipulate environmental variation via parameters such as mean patch size, patch density, and patch longevity. Among other findings, we show that a mixture of high and low quality habitat is commonly better for a population than uniformly mediocre habitat. This conclusion is justified by purely ecological arguments, yet the positive effects of landscape heterogeneity may be enhanced further by local adaptation, and by the evolution of short-ranged dispersal. The predicted evolutionary responses to environmental variation are complex, however, since they involve numerous conflicting factors. We discuss why the species that have high levels of local adaptation within their ranges may not be the same species that benefit from local adaptation during range expansion. We show how habitat loss can lead to either increased or decreased selection for dispersal depending on the type of habitat and the manner in which it is lost. To study the models, we extend a recent analytical method, perturbation expansion, to enable the incorporation of environmental variation.
Within this context, we use two methods to address evolutionary dynamics: adaptive dynamics, which assumes that mutations occur infrequently so that the ecological and evolutionary timescales can be separated, and genotype distributions, which assume that mutations are more frequent. The two approaches generally lead to similar predictions; exceptionally, however, we show how the evolutionary response of dispersal behaviour to habitat turnover may qualitatively depend on the mutation rate.
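The adaptive-dynamics assumption of rare mutations can be sketched as a trait-substitution loop: the resident sits at its ecological equilibrium, a rare mutant arises, and it replaces the resident only if its invasion fitness is positive. The fitness function below is a toy stand-in with an evolutionary attractor at trait value 1.0, not the dispersal model of the thesis.

```python
# Minimal adaptive-dynamics iteration for a scalar trait, assuming rare
# mutations of small effect (ecological/evolutionary timescale separation).
# The invasion-fitness function is a hypothetical toy landscape, not the
# thesis model; its singular strategy sits at trait = 1.0.

import random

def invasion_fitness(resident, mutant):
    # Positive exactly when the mutant lies between the resident and its
    # mirror image 2 - resident, so the trait climbs towards 1.0.
    return (mutant - resident) * (2.0 - resident - mutant)

def evolve(trait, steps=10_000, mut_sd=0.01, rng=None):
    rng = rng or random.Random(0)
    for _ in range(steps):
        mutant = trait + rng.gauss(0.0, mut_sd)  # rare mutation of small effect
        if invasion_fitness(trait, mutant) > 0:  # mutant invades and replaces
            trait = mutant
    return trait

print(evolve(0.2))  # converges near the singular strategy at 1.0
```

A genotype-distribution treatment would instead track the whole distribution of trait values under frequent mutation; as the abstract notes, the two views usually, but not always, agree.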
Abstract:
Proximity of molecules is a crucial factor in many solid-state photochemical processes. The bimolecular photodimerization reactions in the solid state depend on the relative geometry of reactant molecules in the crystal lattice, with a center-to-center distance of nearest-neighbor double bonds of the order of ca. 4 Å. This fact emanates from the incisive studies of Schmidt and Cohen. One of the two approaches to achieve this distance requirement is the so-called "crystal engineering" of structures, which essentially involves the introduction of certain functional groups that display in-plane interstacking interactions (Cl···Cl, C–H···O, etc.) in the crystal. The chloro group is by far the most successful in promoting the β-packing mode, though recent studies have shown its limitations. Another approach involves the use of constrained media in which the reactants could hopefully be aligned.
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. In the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.
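The second approach described above, one bit per sensor followed by a fusion rule, can be sketched in a few lines. The threshold value and the k-out-of-n counting rule below are illustrative choices, not the ones derived in the paper.

```python
# Sketch of the one-bit-per-sensor scheme: each node applies a threshold
# test (two-level quantization) to its own noisy reading and broadcasts a
# single bit; a local fusion center then applies a counting rule.
# The threshold and the k-out-of-n rule are hypothetical placeholders.

def quantize(reading, threshold=0.5):
    """Two-level quantization: emit 1 iff the reading exceeds the threshold."""
    return 1 if reading > threshold else 0

def fuse(bits, k):
    """k-out-of-n counting rule applied at the fusion center."""
    return "intruder" if sum(bits) >= k else "clutter"

# Sensing is local: nodes near the intruder read high, distant nodes low.
readings = [0.9, 0.7, 0.2, 0.8, 0.1]
bits = [quantize(r) for r in readings]
print(bits, fuse(bits, k=3))
```

The paper's result is that, under simplifying assumptions, a threshold test of this form is in fact the optimal single-bit quantization rule for the chosen fusion rule.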
Abstract:
Agriculture is an economic activity that heavily relies on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond the agroecosystems. The questions related to farming for food production are, thus, manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production, "sustainabilizing" food production, calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits methods typical of industrial ecology. The research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture. In this thesis, the MFA approach was used in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture by using two different approaches. In the first approach, the MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios.
The two approaches were used in order to capture some of the impacts of dietary changes and of changes in production mode on the environment. The methods were applied at levels ranging from national to sector and local levels. Through the supply-demand approach, the viewpoint changed from that of food production to that of food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated into an input-output model that was used to analyse the food flux in Finland and to determine its relationship to the economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at the regional and local levels for assessing the feasibility and environmental impacts of relocalising food production. The approach was also used for quantification and source allocation of greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained by using the two different approaches. MFA data as such, or expressed as eco-efficiency indicators, are useful in describing the overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability. Eco-efficiency indicators should not be bluntly used in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economy analyses, and it shows the distribution of monetary and material flows among the various sectors.
Environmental impact can be captured only at a very general level in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method accounts for the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted so as to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances, and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone nor the two together provide sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area that comprises both the rural hinterlands of food production and the population centres of food consumption is suggested to represent a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-off. The balance cannot be universally determined, but the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned. These have to be agreed upon among the actors of the area.
Abstract:
Anne Holma: Adaptation in Triadic Business Relationship Settings – A Study in Corporate Travel Management. Business-to-business relationships form complicated networks that function in an increasingly dynamic business environment. This study addresses the complexity of business relationships, both when it comes to the core phenomenon under investigation, adaptation, and the structural context of the research, a triadic relationship setting. In business research, adaptation is generally regarded as a dyadic phenomenon, even though it is well recognised that dyads do not exist isolated from the wider network. The triadic approach to business relationships is especially relevant in cases where an intermediary is involved, and where all three actors are directly connected with each other. However, only a few business studies apply the triadic approach. In this study, the three dyadic relationships in triadic relationship settings are investigated in the context of the other two dyads to which each is connected. The focus is on the triads as such, and on the connections between its actors. Theoretically, the study takes its stand in relationship marketing. The study integrates theories and concepts from two approaches, the industrial network approach by the Industrial Marketing and Purchasing group, and the service marketing and management approach by the Nordic School. Sociological theories are used to understand the triadic relationship setting. The empirical context of the study is corporate travel management. The study is a retrospective case study, where the data is collected by in-depth interviews with key informants from an industrial enterprise and its travel agency and service supplier partners. The main theoretical contribution of the study concerns opening a new research area in relationship marketing by investigating adaptation in business relationships with a new perspective, and in a new context.
This study provides a comprehensive framework to analyse adaptation in triadic business relationship settings. The analysis framework was created with the help of a systematic combining approach, which is based on abductive logic and continuous iteration between the theory and the case study results. The framework describes how adaptations initiate, and how they progress. The framework also takes into account how adaptations spread in triadic relationship settings, i.e. how adaptations attain all three actors of the triad. Furthermore, the framework helps to investigate the outcomes of the adaptations for individual firms, for dyadic relationships, and for the triads. The study also provides concepts and classification that can be used when evaluating adaptation and relationship development in both dyadic and triadic relationships.
Abstract:
Despite thirty years of research in interorganizational networks and project business within the industrial networks approach and relationship marketing, collective capability of networks of business and other interorganizational actors has not been explicitly conceptualized and studied within the above-named approaches. This is despite the fact that the two approaches maintain that networking is one of the core strategies for the long-term survival of market actors. Recently, many scholars within the above-named approaches have emphasized that the survival of market actors is based on the strength of their networks and that inter-firm competition is being replaced by inter-network competition. Furthermore, project business is characterized by the building of goal-oriented, temporary networks whose aims, structures, and procedures are clarified and that are governed by processes of interaction as well as recurrent contracts. This study develops frameworks for studying and analysing collective network capability, i.e. collective capability created for the network of firms. The concept is first justified and positioned within the industrial networks, project business, and relationship marketing schools. An eclectic source of conceptual input is based on four major approaches to interorganizational business relationships. The study uses qualitative research and analysis, and the case report analyses the empirical phenomenon using a large number of qualitative techniques: tables, diagrams, network models, matrices etc. The study shows the high level of uniqueness and complexity of international project business. While perceived psychic distance between the parties may be small due to previous project experiences and the benefit of existing relationships, a varied number of critical events develop due to the economic and local context of the recipient country as well as the coordination demands of the large number of involved actors. 
The study shows that the successful creation of collective network capability led to the success of the network for the studied project. The processes and structures for creating collective network capability are encapsulated in a model of governance factors for interorganizational networks. The theoretical and management implications are summarized in seven propositions. The core implication is that project business success in unique and complex environments is achieved by accessing the capabilities of a network of actors, and project management in such environments should be built on both contractual and cooperative procedures with local recipient country parties.
Abstract:
The saturated liquid density data, ρ_l,r, along the liquid-vapour coexistence curve published in the literature for several cryogenic liquids, hydrocarbons and halocarbon refrigerants are fitted to a generalized equation of the following form: ρ_l,r = 1 + A(1 − T_r) + B(1 − T_r)^β. The values of β, the index in the phase-density-difference power law, have been obtained by means of two approaches, namely a statistical treatment of saturated fluid phase-density-difference data and the existence of a maximum in T(ρ_l − ρ_v) along the saturation curve. Values of the constants A and B are determined utilizing the fact that T(ρ_l − ρ_v) has a maximum at a characteristic temperature. Values of A, B and β are tabulated for Ne, Ar, Kr, Xe, N2, O2, methane, ethane, propane, iso-butane, n-butane, propylene, ethylene, CO2, water, ammonia, refrigerants 11, 12, 12B1, 13, 13B1, 14, 21, 22, 23, 32, 40, 113, 114, 115, 142b, 152a, 216, 245 and azeotropes R-500, 502, 503, 504. The average error of prediction is less than 2%.
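The correlation ρ_l,r = 1 + A(1 − T_r) + B(1 − T_r)^β is straightforward to evaluate once A, B and β are taken from the tables. The constants in the sketch below are illustrative placeholders, not the tabulated values for any of the listed fluids.

```python
# Evaluating the generalized saturated-liquid-density correlation
#   rho_l / rho_c = 1 + A*(1 - Tr) + B*(1 - Tr)**beta
# for a reduced temperature Tr = T / Tc. The constants A, B, beta below
# are hypothetical placeholders, not values from the paper's tables.

def reduced_liquid_density(Tr, A, B, beta):
    """Reduced saturated liquid density rho_l / rho_c at reduced temperature Tr."""
    assert 0.0 < Tr <= 1.0, "reduced temperature must lie in (0, 1]"
    x = 1.0 - Tr
    return 1.0 + A * x + B * x**beta

# At the critical point (Tr = 1) both correction terms vanish: rho_l = rho_c.
print(reduced_liquid_density(1.0, A=0.9, B=1.8, beta=0.35))  # -> 1.0
print(reduced_liquid_density(0.7, A=0.9, B=1.8, beta=0.35))
```

Note that β < 1 makes the B-term dominate near the critical point, which is the power-law behaviour of the phase-density difference that the two fitting approaches are used to pin down.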
Abstract:
The aim of this study has been to challenge or expand the present views on special education. In a series of six articles this thesis will directly or indirectly debate questions relating to inclusive and exclusive mechanisms in society. It is claimed that the tension between traditionalism and inclusionism within special education may harm the legitimation of special education as a profession of the welfare state. The articles address the relationship between these two approaches. The traditionalism-inclusionism controversy is partly rooted in different ways of understanding the role of special education with respect to democracy. It seems, however, that the traditionalism-inclusionism controversy tends to lead researchers to debate paradigmatic positions with each other rather than to develop alternative strategies for dealing with the delicate challenge of the differences within education. ---- There are three major areas of this discussion. The first part presents the theory of research programmes as a way of describing the content, the possibilities, and the problems of the different approaches. The main argument is that the concept of research programmes more clearly emphasizes the ethical responsibilities involved in research within the field of special education than does the paradigmatic approach. The second part considers the social aspects of the debate between traditionalism and inclusionism from different perspectives. A central claim made is that the work done within special education must be understood as a reaction to the social and political world that the profession is part of, and that this is also part of a specific historical development.
Even though it is possible to claim that the main aim of special education is to help people who are regarded as disabled or who feel disabled, it is also necessary to understand that the profession is highly constrained by the grand narrative of the welfare state and the historical discourse that this profession is part of. The third part focuses on a central aspect of special education: the humanistic solutions towards people who are left behind by ordinary education. The humanistic obligation of special education is part of the general aim of the welfare state to provide an education for a democratic and inclusive society. This humanistic aim and the goal to offer an education for democracy seem, therefore, to dominate the understanding of how special education works.
Abstract:
Hyper-redundant robots are characterized by the presence of a large number of actuated joints, many more than the number required to perform a given task. These robots have been proposed and used for many applications involving avoiding obstacles or, in general, to provide enhanced dexterity in performing tasks. Making effective use of the extra degrees of freedom, or resolution of redundancy, has been an extensive topic of research, and several methods have been proposed in the literature. In this paper, we compare three known methods and show that an algorithm based on a classical curve called the tractrix leads to a more 'natural' motion of the hyper-redundant robot, with the displacements diminishing from the end-effector to the fixed base. In addition, since the actuators nearer the base 'see' a greater inertia due to the links farther away, smaller motion of the actuators nearer the base results in better motion of the end-effector as compared to the other two approaches. We present simulation and experimental results performed on a prototype eight-link planar hyper-redundant manipulator.
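The qualitative behaviour claimed above, displacements diminishing from the end-effector to the base, can be seen in a much simpler discrete "pursuit" scheme, where each joint follows its outboard neighbour while the link length is held fixed. This is only a sketch of the tractrix idea, not the paper's closed-form tractrix solution; for small steps the dragged-point motion it produces is tractrix-like.

```python
# Simplified "pursuit" sketch of tractrix-like redundancy resolution for a
# planar chain: drag the end-effector to a target, then let each inboard
# joint move along the line towards its (already moved) outboard neighbour,
# keeping the link length fixed. This is an illustration, not the paper's
# closed-form tractrix algorithm. Assumes the target differs from the
# neighbour's old position (no zero-length division).

import math

def follow(joints, target, link_len):
    """joints[0] is the end-effector; return the dragged configuration."""
    new = [target]
    for (x, y) in joints[1:]:
        hx, hy = new[-1]                       # point this joint follows
        d = math.hypot(x - hx, y - hy)
        # Place the joint on the segment from the new head to its old
        # position, exactly one link length away from the head.
        new.append((hx + (x - hx) * link_len / d,
                    hy + (y - hy) * link_len / d))
    return new

# Straight 4-link chain along the x-axis, end-effector at the origin.
chain = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
moved = follow(chain, target=(0.0, 0.5), link_len=1.0)

# Joint displacements shrink from the end-effector towards the base.
disp = [math.hypot(nx - ox, ny - oy) for (nx, ny), (ox, oy) in zip(moved, chain)]
print([round(d, 3) for d in disp])
```

Because the joints nearest the base barely move, the actuators that "see" the largest inertia are asked for the smallest motions, which is the property the paper exploits.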
Abstract:
The bipolar point spread function (PSF) corresponding to the Wiener filter for correcting linear-motion-blurred pictures is implemented in a noncoherent optical processor. The following two approaches are taken for this implementation: (1) the PSF is modulated and biased so that the resulting function is non-negative, and (2) the PSF is split into its positive and sign-reversed negative parts, and these two parts are dealt with separately. The phase problem associated with arriving at the pupil function from these modified PSFs is solved using both analytical and combined analytical-iterative techniques available in the literature. The designed pupil functions are experimentally implemented, and deblurring in a noncoherent processor is demonstrated. The postprocessing required (i.e., demodulation in the first approach and intensity subtraction in the second) is carried out either in a coherent processor or with the help of a PC-based vision system. The deblurred outputs are presented.
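The core of both approaches is that a noncoherent processor can only realize non-negative PSFs, so a bipolar kernel must be encoded in non-negative pieces and recovered by postprocessing. The 1-D numerical sketch below checks that a constant bias (approach 1) and a positive/negative split (approach 2) both reproduce the bipolar convolution exactly; the signal and kernel are arbitrary toy values.

```python
# Two ways to realize a bipolar PSF with intensity-only (non-negative)
# convolutions, sketched in 1-D with toy data:
#  (1) bias the PSF so it is non-negative, then subtract the bias term;
#  (2) split the PSF into positive and negative parts, convolve each,
#      and subtract the two intensity distributions.
# Only the non-negative convolutions would run optically; the subtraction
# is the postprocessing step mentioned in the abstract.

import numpy as np

signal = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
psf = np.array([-0.5, 1.0, -0.5])      # bipolar deblurring kernel (toy)

direct = np.convolve(signal, psf)      # reference, not physically realizable

# (1) bias: psf + b >= 0; by linearity the bias adds b * (moving sum).
b = 0.5
biased = (np.convolve(signal, psf + b)
          - b * np.convolve(signal, np.ones_like(psf)))

# (2) split: psf = h_plus - h_minus with both parts non-negative.
h_plus, h_minus = np.clip(psf, 0, None), np.clip(-psf, 0, None)
split = np.convolve(signal, h_plus) - np.convolve(signal, h_minus)

assert np.allclose(direct, biased) and np.allclose(direct, split)
print(direct)
```

The remaining (and harder) part of the paper, designing a pupil function whose incoherent PSF matches each non-negative part, is the phase problem and is not modelled here.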
Abstract:
Precipitation in small droplets involving emulsions, microemulsions or vesicles is important for producing multicomponent ceramics and nanoparticles. Because of the random nature of nucleation and the small number of particles in a droplet, the use of a deterministic population balance equation for predicting the number density of particles may lead to erroneous results even for evaluating the mean behavior of such systems. A comparison between the predictions made through stochastic simulation and deterministic population balance involving small droplets has been made for two simple systems, one involving crystallization and the other a single-component precipitation. The two approaches have been found to yield quite different results under a variety of conditions. Contrary to expectation, the smallness of the population alone does not cause these deviations. Thus, if fluctuation in supersaturation is negligible, the population balance and simulation predictions concur. However, for large fluctuations in supersaturation, the predictions differ significantly, indicating the need to take the stochastic nature of the phenomenon into account. This paper describes the stochastic treatment of populations, which involves a sequence of so-called product density equations and forms an appropriate framework for handling small systems.
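The contrast drawn above can be illustrated with a toy single-droplet model in which each nucleation event consumes part of the supersaturation, so the nucleation rate k·S itself fluctuates with the random particle count. The deterministic route integrates the mean-field rate equations; the stochastic route runs Gillespie-type trajectories. All rates and parameters below are invented for illustration, not taken from the paper.

```python
# Toy comparison of a deterministic population balance with a stochastic
# (Gillespie-type) simulation of nucleation in one droplet. Each event
# consumes DS of supersaturation, so the rate K*S falls as particles form.
# Parameter values are hypothetical placeholders.

import math, random

K, DS, S0, T_END = 1.0, 0.2, 1.0, 5.0

def deterministic(dt=1e-4):
    """Euler integration of dN/dt = K*S, dS/dt = -DS*K*S (mean-field)."""
    n, s, t = 0.0, S0, 0.0
    while t < T_END:
        rate = K * s
        n += rate * dt
        s -= DS * rate * dt
        t += dt
    return n

def gillespie(rng):
    """One stochastic trajectory: exponential waiting times, unit jumps in N."""
    n, s, t = 0, S0, 0.0
    while True:
        rate = K * s
        if rate <= 0:
            return n                      # supersaturation exhausted
        t += rng.expovariate(rate)
        if t > T_END:
            return n
        n += 1
        s = max(0.0, s - DS)              # each nucleation consumes DS

rng = random.Random(1)
mean_n = sum(gillespie(rng) for _ in range(2000)) / 2000
print(f"deterministic N(T): {deterministic():.3f}, stochastic mean: {mean_n:.3f}")
```

Because the supersaturation here jumps by a sizeable fraction (DS/S0 = 0.2) at each event, the stochastic mean need not coincide with the deterministic prediction, which is the regime the abstract flags; shrinking DS makes the two estimates converge.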
Abstract:
There are p heterogeneous objects to be assigned to n competing agents (n > p), each with unit demand. It is required to design a Groves mechanism for this assignment problem satisfying weak budget balance and individual rationality while minimizing the budget imbalance. This calls for designing an appropriate rebate function. When the objects are identical, this problem has already been solved; we refer to the solution as the WCO mechanism. We measure the performance of such mechanisms by the redistribution index. We first prove an impossibility theorem which rules out linear rebate functions with a non-zero redistribution index in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with a non-zero redistribution index are possible when the valuations for the objects have a certain type of relationship, and we design a mechanism with a linear rebate function that is worst-case optimal. In the second approach, we show that rebate functions with non-zero efficiency are possible if linearity is relaxed. We extend the rebate functions of the WCO mechanism to heterogeneous object assignment and conjecture them to be worst-case optimal.
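The identical-objects baseline that the paper generalizes from can be sketched concretely: with p identical objects and unit demand, the Groves (VCG) outcome awards the objects to the top p bidders at the (p+1)-th highest bid, and a rebate that depends only on the other agents' bids preserves incentive compatibility. The Cavallo-style rebate p·v/n used below is one simple such rule, shown only as an illustration of weak budget balance; it is not the WCO rebate function itself, and bids are assumed truthful.

```python
# Sketch of a Groves (VCG) mechanism with a simple Cavallo-style rebate for
# p IDENTICAL objects and n unit-demand agents (the solved baseline; linear
# rebates provably fail in the heterogeneous case). Requires n >= p + 2 so
# that the (p+1)-th highest bid among any n-1 agents exists. Bids are
# assumed truthful, as Groves mechanisms are dominant-strategy IC.

def vcg_with_rebate(bids, p):
    n = len(bids)
    order = sorted(range(n), key=lambda i: -bids[i])
    winners = set(order[:p])                      # top p bidders win
    price = sorted(bids, reverse=True)[p]         # (p+1)-th highest bid
    payments, rebates = {}, {}
    for i in range(n):
        payments[i] = price if i in winners else 0.0
        # Rebate depends only on the OTHER agents' bids, so truth-telling
        # incentives are unaffected.
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        rebates[i] = p * others[p] / n
    surplus = sum(payments.values()) - sum(rebates.values())
    return winners, payments, rebates, surplus

bids = [10.0, 8.0, 6.0, 4.0, 2.0]
winners, pay, reb, surplus = vcg_with_rebate(bids, p=2)
print(winners, surplus)   # surplus >= 0: weak budget balance holds
```

The budget imbalance is the surplus left after rebates; a worst-case optimal rebate function like WCO minimizes, over all bid profiles, the worst-case fraction of VCG revenue that fails to be redistributed.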
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to one of determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits needed to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extensions to the multi-party case are straightforward and are briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized within two bits. Under the second approach, each sensor node broadcasts a single bit arising from appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented that include intruder tracking using a naive polynomial-regression algorithm.