Abstract:
Inverse dynamics is the most comprehensive method for accessing the net joint forces and moments during walking. However, it rests on assumptions (i.e., rigid segments linked by ideal joints) and is known to be sensitive to the input data (e.g., kinematic derivatives, positions of joint centres and centre of pressure, inertial parameters). Alternatively, transducers can be used to measure directly the load applied on the residuum of transfemoral amputees. The purpose of this study was therefore to compare the forces and moments applied on a prosthetic knee, measured directly, with those calculated by three inverse dynamics computations (corresponding to 3 and 2 segments, and the 'ground reaction vector technique') during the gait of one patient. The maximum RMSEs between the estimated and directly measured forces (i.e., 56 N) and moments (i.e., 5 N.m) were relatively small. However, the dynamic outcomes of the prosthetic components (i.e., absorption of the foot, friction and limit stop of the knee) were only partially assessed by the inverse dynamics methods.
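The 'ground reaction vector technique' mentioned above estimates the joint load quasi-statically, from the ground reaction force and the lever arm between the centre of pressure and the joint centre, neglecting segment weights and accelerations. A minimal sagittal-plane sketch of that idea follows; the function name and the numbers are illustrative, not taken from the study:

```python
import numpy as np

def grv_knee_load(grf, cop, knee_centre):
    """Quasi-static 'ground reaction vector' estimate of the 2D knee load:
    the net force equals the ground reaction force, and the moment is the
    planar cross product of the lever arm (joint centre to centre of
    pressure) with that force. Segment inertia is neglected."""
    grf = np.asarray(grf, dtype=float)                     # [Fx, Fy] in N
    r = np.asarray(cop, dtype=float) - np.asarray(knee_centre, dtype=float)
    force = grf                                            # force at the joint, N
    moment = r[0] * grf[1] - r[1] * grf[0]                 # moment about the joint, N.m
    return force, moment

# Hypothetical mid-stance instant: 800 N vertical GRF, centre of pressure
# 5 cm anterior to a knee centre located 0.5 m above the ground.
F, M = grv_knee_load(grf=[0.0, 800.0], cop=[0.05, 0.0], knee_centre=[0.0, 0.5])
```

Because this technique ignores inertial terms, it is cheapest but least complete, which is consistent with the study's finding that the dynamic behaviour of the prosthetic components is only partially captured.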
Abstract:
Objects have consequences, seemingly. They move, atomic, formlessly – when static they are seen. That they vibrate constantly, that they are NOW present, is something we will have to trust the physicists on. They only seem here. Now is their moment of form, but later, who knows? Things SEEM when we recognise our own transience and temporary-ness. We call upon a bevy of senses that forever frustrate us with their limitation, despite our little understanding of what we actually have – is this here? So some forms seem to be telling us to trust our senses – that this world IS as it seems. Their form constantly refines and is refined and refined until in its essentialness it cannot be doubted – it absolutely IS. Is this our eyes? Can we only see it? But light is also a particle, if I remember correctly, so there is some weight to seeing. So to SEEM is also to FEEL, as this light imposes its visual weight upon our skins – we see with every pore of our body.
The relationship between clinical outcomes and quality of life for residents of aged care facilities
Abstract:
Objectives It is widely assumed that improving care in residential facilities will improve quality of life (QoL), but little research has explored this relationship. The Clinical Care Indicators (CCI) Tool was developed to fill an existing gap in quality assessment within Australian residential aged care facilities, and it was used to explore potential links between clinical outcomes and QoL. Design and Setting Clinical outcome and QoL data were collected within four residential facilities from the same aged care provider. Subjects Subjects were 82 residents of the four facilities. Outcome Measures Clinical outcomes were measured using the CCI Tool and QoL data were obtained using the Australian WHOQOL‑100. Results Independent t‑test analyses were calculated to compare individual CCIs with each domain of the WHOQOL‑100, while Pearson’s product moment coefficients (r) were calculated between the total number of problem indicators and QoL scores. Significant results suggested poorer clinical outcomes adversely affected QoL. Social and spiritual QoL were particularly affected by clinical outcomes, and poorer status in hydration, falls and depression was most strongly associated with lower QoL scores. Poorer clinical status as a whole was also significantly correlated with poorer QoL. Conclusions Hydration, falls and depression were most often associated with poorer resident QoL and as such appear to be key areas for clinical management in residential aged care. However, poor clinical outcomes overall also adversely affected QoL, which suggests that maintaining optimum clinical status through high quality nursing care would be important not only for resident health but also for enhancing general life quality.
Abstract:
The drought Australia now faces is leading to shifts in how the continent is perceived, by Australians and by the world. The ideals of lush green landscapes are making way for landscape designs in which dryness is a quality of the design. On a map of the world, Australia is enormous, and seems empty because development is concentrated around its edges. Its heart must be red, in the cultural projections of the world from images of Uluru, 'the rock', set in a flat desert with no relief. Of course the country is not really all desert - surely? - with low shrubs pretty much throughout. Inhabitation seems to cling to the edges, where the continent feels microclimatic effects from the adjacent oceans and edging mountain ranges, which screen the population from the real state of the environment - dry, harsh, amazing and unique. Australia is rightly proud of this harsh difference from its edges, but prefers the harshness to be 'out there'. At the moment, however, the country is pretty much universally in drought, and the contrast between green and brown that it has celebrated, even built its identity around, is disappearing to become brown throughout. With the browning of Australia, some areas, such as tropical Queensland, are having their designed public landscapes and gardens revealed as an elaborate mythology, a landscape fraud.
Abstract:
This article explores articulations of queer identity in recent Australian queer student media. Print media is of particular importance to queer communities because, as Cover argues, it provides a crucial grounding for community development and a model of queer to guide the positioning of identity and activism. This article uses discourse analysis of queer student activists’ media representations of diversity and inclusiveness to investigate the articulations of queer identity in one specific context: metropolitan Australian universities. This reveals real-life appropriations of this contentious term and contributes to a genealogy of sexuality, documenting one visible moment in history.
Abstract:
This is a review of "Capitalism, socialism, and democracy", by Joseph A. Schumpeter, New York, Harper Perennial, 1942 (first Harper Colophon edition published 1975). "The public mind has by now so thoroughly grown out of humor with it as to make condemnation of capitalism and all its works a foregone conclusion – almost a requirement of the etiquette of discussion. Whatever his political preference, every writer or speaker hastens to conform to this code and to emphasize his critical attitude, his freedom from ‘complacency’, his belief in the inadequacies of capitalist achievement, his aversion to capitalist and his sympathy with anti-capitalist interests. Any other attitude is voted not only foolish but anti-social and is looked upon as an indication of immoral servitude." We might easily mistake this for a voice weary of contemplating the implications for neo-liberal nostrums of our current global financial crisis, were it not for the rather formal, slightly arch, style and the gender exclusive language. It was in fact penned in the depths of World War II by Harvard economist Joseph Schumpeter, who fell off the map only to re-emerge from the 1970s as oil shocks and stagflation in the west presaged the decline of the Keynesian settlement, as east Asian newly industrialising economies modelled themselves on his insistence that entrepreneurialism, access to credit and trade were the pillars of economic growth, and as innovation became more of a watchword for post-industrial economies in general. The second coming was perhaps affirmed when his work was dubbed by Forbes in 1983 – on the occasion of the 100th anniversary of the birth of both men – as of greater explanatory import than Keynes’. (And what of our present resurgent Keynesian moment?)...
Abstract:
President’s Message Hello fellow AITPM members, We’ve been offered a lot of press lately about the Federal Government’s plan for the multibillion dollar rollout of its high speed broadband network, which at the moment is being rated at a speed of 100Mb/s. This seems fantastic in comparison to the not atypical 250 to 500kb/s that I receive on my metropolitan cable broadband, which incidentally my service provider rates at theoretical speeds of up to 8 Mb/s. I have no doubt that such a scheme will generate significant advantages for business and consumers. However, I also have some reservations. Only a few years ago I marvelled at my first 256Mb USB stick, which cost my employer about $90. Last month I purchased a 16Gb stick with a free computer carry bag for $80, which on the back of my envelope has given me about 72 times the value of my first USB stick, not including the carry bag! I am pretty sure the technology industry will find a way to eventually push a lot more than 100Mb/s down the optic fibre network, just as they have done with pushing several Mb/s ADSL2 down antique copper wire. This makes me wonder about the general problem of inbuilt obsolescence of all things high-tech due to rapid advances in the tech industry. As a transport professional I then think to myself that our industry has been moving forward at somewhat of a slower pace. We certainly have had major milestones having significant impacts, such as the move from horse and cart to the self-propelled motor vehicle, sealing and formal geometric design of roads, development of motorways, signalisation of intersections, coordination of networks, to simulation modelling for real time adaptive control (perhaps major change has been at a frequency of 30 years or so?). 
But now with ITS truly penetrating the transport market, largely thanks to the in-car GPS navigator, smart phone, e-toll and e-ticket, I believe that to avoid our own obsolescence we’re going to need to “plan for ITS” rather than just do what we seem to have been doing up until now, that is, to get it out there. And we’ll likely need to do it at a faster pace. It will involve understanding how to data mine enormous data sets, better understanding the human/machine interface, keeping pace with automotive technology more closely, resolving the ethical and privacy chestnuts, and in the main actually planning for ITS to make peoples’ lives easier rather than harder. And in amongst this we’ll need to keep pace with the types of technology advances similar to my USB stick example above. All the while we’ll be making a brand new set of friends in the disciplines that will morph into ITS along with us. Hopefully these will all be “good” problems for our profession to have. I should close by reminding everyone again that AITPM’s flagship event, the 2009 AITPM National Conference, Traffic Beyond Tomorrow, is being held in Adelaide from 5 to 7 August. www.aitpm.com has all of the details about how to register, sponsor a booth, session, etc. Best regards all, Jon Bunker
Abstract:
This report focuses on risk-assessment practices in the private rental market, with particular consideration of their impact on low-income renters. It is based on the fieldwork undertaken in the second stage of the research process that followed completion of the Positioning Paper. The key research questions this study addressed were: What are the various factors included in ‘risk-assessments’ by real estate agents in allocating ‘affordable’ tenancies? How are these risks quantified and managed? What are the key outcomes of their decision-making? The study builds on previous research demonstrating that a relatively large proportion of low-cost private rental accommodation is occupied by moderate- to high-income households (Wulff and Yates 2001; Seelig 2001; Yates et al. 2004). This is occurring in an environment where the private rental sector is now the de facto main provider of rental housing for lower-income households across Australia (Seelig et al. 2005) and where a number of factors are implicated in patterns of ‘income–rent mismatching’. These include ongoing shifts in public housing assistance; issues concerning eligibility for rent assistance; ‘supply’ factors, such as loss of low-cost rental stock through upgrading and/or transfer to owner-occupied housing; patterns of supply and demand driven largely by middle- to high-income owner-investors and renters; and patterns of housing need among low-income households for whom affordable housing is not appropriate. In formulating a way of approaching the analysis of ‘risk-assessment’ in rental housing management, this study has applied three sociological perspectives on risk: Beck’s (1992) formulation of risk society as entailing processes of ‘individualisation’; a socio-cultural perspective which emphasises the situated nature of perceptions of risk; and a perspective which has drawn attention to different modes of institutional governance of subjects, as ‘carriers of specific indicators of risk’. 
The private rental market was viewed as a social institution, and the research strategy was informed by ‘institutional ethnography’ as a method of enquiry. The study was based on interviews with property managers, real estate industry representatives, tenant advocates and community housing providers. The primary focus of inquiry was on ‘the moment of allocation’. Six local areas across metropolitan and regional Queensland, New South Wales, and South Australia were selected as case study localities. In terms of the main findings, it is evident that access to private rental housing is not just a matter of ‘supply and demand’. It is also about assessment of risk among applicants. Risk – perceived or actual – is thus a critical factor in deciding who gets housed, and how. Risk and its assessment matter in the context of housing provision and in the development of policy responses. The outcomes from this study also highlight a number of salient points: 1.There are two principal forms of risk associated with property management: financial risk and risk of litigation. 2. Certain tenant characteristics and/or circumstances – ability to pay and ability to care for the rented property – are the main factors focused on in assessing risk among applicants for rental housing. Signals of either ‘(in)ability to pay’ and/or ‘(in)ability to care for the property’ are almost always interpreted as markers of high levels of risk. 3. The processing of tenancy applications entails a complex and variable mix of formal and informal strategies of risk-assessment and allocation where sorting (out), ranking, discriminating and handing over characterise the process. 4. In the eyes of property managers, ‘suitable’ tenants can be conceptualised as those who are resourceful, reputable, competent, strategic and presentable. 5. Property managers clearly articulated concern about risks entailed in a number of characteristics or situations. 
Being on a low income was the principal and overarching factor which agents considered. Others included: - unemployment - ‘big’ families; sole parent families - domestic violence - marital breakdown - shift from home ownership to private rental - Aboriginality and specific ethnicities - physical incapacity - aspects of ‘presentation’. The financial vulnerability of applicants in these groups can be invoked, alongside expressed concerns about compromised capacities to manage income and/or ‘care for’ the property, as legitimate grounds for rejection or a lower ranking. 6. At the level of face-to-face interaction between the property manager and applicants, more intuitive assessments of risk based upon past experience or ‘gut feelings’ come into play. These judgements are interwoven with more systematic procedures of tenant selection. The findings suggest that considerable ‘risk’ is associated with low-income status, either directly or insofar as it is associated with other forms of perceived risk, and that such risks are likely to impede access to the professionally managed private rental market. Detailed analysis suggests that opportunities for access to housing by low-income householders also arise where, for example: - the ‘local experience’ of an agency and/or property manager works in favour of particular applicants - applicants can demonstrate available social support and financial guarantors - an applicant’s preference or need for longer-term rental is seen to provide a level of financial security for the landlord - applicants are prepared to agree to specific, more stringent conditions for inspection of properties and review of contracts - the particular circumstances and motivations of landlords lead them to consider a wider range of applicants - In particular circumstances, property managers are prepared to give special consideration to applicants who appear worthy, albeit ‘risky’. 
The strategic actions of demonstrating and documenting on the part of vulnerable (low-income) tenant applicants can improve their chances of being perceived as resourceful, capable and ‘savvy’. Such actions are significant because they help to persuade property managers not only that the applicant may have sufficient resources (personal and material) but that they accept that the onus is on themselves to show they are reputable, and that they have valued ‘competencies’ and understand ‘how the system works’. The parameters of the market do shape the processes of risk-assessment and, ultimately, the strategic relation of power between property manager and the tenant applicant. Low vacancy rates and limited supply of lower-cost rental stock, in all areas, mean that there are many more tenant applicants than available properties, creating a highly competitive environment for applicants. The fundamental problem of supply is an aspect of the market that severely limits the chances of access to appropriate and affordable housing for low-income rental housing applicants. There is recognition of the impact of this problem of supply. The study indicates three main directions for future focus in policy and program development: providing appropriate supports to tenants to access and sustain private rental housing, addressing issues of discrimination and privacy arising in the processes of selecting suitable tenants, and addressing problems of supply.
Abstract:
As the paper’s subtitle suggests, broadband has had a remarkably checkered trajectory in Australia. It was synonymous with the early 1990s information superhighway and seemed to presage a moment in which “content is [to be] king”. It disappeared almost entirely as a public priority in the mid to late 1990s as infrastructure and content were disconnected in services frameworks focused on information and communication technologies. And it came back in the 2000s as a critical infrastructure for innovation and the knowledge economy. But this time content was not king but rather an intermediate input at the service of innovating industries and processes. Broadband was a critical infrastructure for the digitally-based creative industries. Today the quality of the broadband infrastructure in Australia—itself an outcome of these different policy frameworks—is identified as “fraudband”, holding back business, creativity and consumer uptake. In this paper I use the checkered trajectory of broadband on Australian political and policy horizons as a stepping-off point to reflect on the ideas governing these changing governmental and public settings. This history enables me to explore how content and infrastructure are simultaneously connected and disconnected in our thinking. And, finally, I want to make some remarks about the way communication, particularly media communication, has been marginally positioned after being, initially, so apparently central.
Abstract:
This paper, first presented at a symposium on the 'past, present and future of cultural studies', traces disciplinary changes in the study of culture from the perspective of 'cultural science', a term that was used by some of the earliest practitioners of cultural studies, including Raymond Williams. The paper goes on to describe some features of the present moment, including work on the creative industries, and to show that a new version of cultural science is needed, based on evolutionary principles, in dialogue with the evolutionary approach in economics that was called for a century ago by Thorstein Veblen. This evolutionary turn, or 'cultural science 2.0', it is argued, offers a radical and challenging future for cultural studies.
Abstract:
Until recently, hot-rolled steel members were recognised as the most popular and widely used steel group, but in recent times the use of cold-formed high strength steel members has rapidly increased. However, the structural behaviour of light gauge high strength cold-formed steel members, characterised by various buckling modes, is not yet fully understood. The current cold-formed steel sections such as C- and Z-sections are commonly used because of their simple forming procedures and easy connections, but they suffer from certain buckling modes. It is therefore important that these buckling modes are either delayed or eliminated to increase the ultimate capacity of these members. This research is therefore aimed at developing a new cold-formed steel beam with two torsionally rigid rectangular hollow flanges and a slender web, formed using intermittent screw fastening, to enhance the flexural capacity while maintaining a minimum fabrication cost. This thesis describes a detailed investigation into the structural behaviour of this new Rectangular Hollow Flange Beam (RHFB) subjected to flexural action. The first phase of this research included experimental investigations using thirty full-scale lateral buckling tests and twenty-two section moment capacity tests using specially designed test rigs to simulate the required loading and support conditions. A detailed description of the experimental methods, RHFB failure modes including local, lateral distortional and lateral torsional buckling modes, and moment capacity results is presented. A comparison of experimental results with the predictions from the current design rules and other design methods is also given. The second phase of this research involved a methodical and comprehensive investigation aimed at widening the scope of finite element analysis to investigate the buckling and ultimate failure behaviours of RHFBs subjected to flexural actions. 
Accurate finite element models simulating the physical conditions of both lateral buckling and section moment capacity tests were developed. Comparison of experimental and finite element analysis results showed that the buckling and ultimate failure behaviour of RHFBs can be simulated well using appropriate finite element models. Finite element models simulating ideal simply supported boundary conditions and a uniform moment loading were also developed for use in a detailed parametric study. The parametric study results were used to review the current design rules and to develop new design formulae for RHFBs subjected to local, lateral distortional and lateral torsional buckling effects. Finite element analysis results indicate that the discontinuity due to screw fastening has a noticeable influence only for members in the intermediate slenderness region. Investigations into different combinations of thicknesses in the flange and web indicate that increasing the flange thickness is more effective than increasing the web thickness in enhancing the flexural capacity of RHFBs. The current steel design standards, AS 4100 (1998) and AS/NZS 4600 (1996), are found sufficient to predict the section moment capacity of RHFBs. However, the results indicate that AS/NZS 4600 is more accurate for slender sections whereas AS 4100 is more accurate for compact sections. The finite element analysis results further indicate that the current design rules given in AS/NZS 4600 are adequate in predicting the member moment capacity of RHFBs subject to lateral torsional buckling effects. However, they were inadequate in predicting the capacities of RHFBs subject to lateral distortional buckling effects. This thesis has therefore developed a new design formula to predict the lateral distortional buckling strength of RHFBs. Overall, this thesis has demonstrated that the innovative RHFB sections can perform well as economically and structurally efficient flexural members. 
Structural engineers and designers should make use of the new design rules and the validated existing design rules to design optimum RHFB sections for the type of application. The intermittent screw fastening method has also been shown to be structurally adequate while minimising the fabrication cost. Product manufacturers and builders should be able to make use of this in their applications.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges with the problems of memory detection and modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment; that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with those of the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. 
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short memory case. Parameter estimation of this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. 
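The Fisher discriminant algorithm mentioned above finds the direction that best separates two classes of feature vectors relative to their within-class scatter. A minimal two-class sketch follows; the feature values and market labels are hypothetical stand-ins for the MF-DFA and AR-model parameters used in the thesis:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher linear discriminant: the direction
    w ∝ Sw^{-1}(m1 - m2) maximises between-class separation relative
    to the pooled within-class scatter Sw."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)
    S2 = (X2 - m2).T @ (X2 - m2)
    Sw = S1 + S2                            # pooled within-class scatter matrix
    w = np.linalg.solve(Sw, m1 - m2)        # discriminant direction
    threshold = w @ (m1 + m2) / 2.0         # midpoint decision threshold
    return w, threshold

# Hypothetical 2D feature vectors, e.g. (scaling exponent, AR parameter),
# for two markets with well-separated class means.
rng = np.random.default_rng(0)
X1 = rng.normal([0.4, 1.0], 0.05, size=(50, 2))   # market A samples
X2 = rng.normal([0.6, 2.0], 0.05, size=(50, 2))   # market B samples
w, c = fisher_direction(X1, X2)
# A new feature vector x is assigned to market A if w @ x > c, else to B.
```

Cross-validation, as used in the thesis, would repeat this fit on held-out splits to estimate the discriminant accuracy.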
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparison with the results obtained from the R/S analysis, periodogram method and MF-DFA is provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
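The MF-DFA procedure described in Part I can be sketched in a few lines: integrate the mean-removed series into a profile, detrend it segment by segment, and read the generalised Hurst exponent h(q) off the log-log slope of the q-th-order fluctuation function. This is a minimal illustration with first-order (linear) detrending and a small fixed set of scales, not the thesis's implementation; the function name and scale choices are hypothetical:

```python
import numpy as np

def mfdfa_hq(x, q=2.0, scales=(16, 32, 64, 128), order=1):
    """Generalised Hurst exponent h(q) via multifractal detrended
    fluctuation analysis; q = 2 reduces to standard DFA."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated, mean-removed series
    log_s, log_F = [], []
    for s in scales:
        n_seg = len(profile) // s
        F2 = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            F2.append(np.mean((seg - trend) ** 2))       # detrended variance
        F2 = np.asarray(F2)
        Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)       # q-th-order fluctuation
        log_s.append(np.log(s))
        log_F.append(np.log(Fq))
    return np.polyfit(log_s, log_F, 1)[0]                # slope of log F vs log s

# Sanity check on synthetic data: white noise has no memory, so its
# DFA exponent h(2) should sit near 0.5.
h2 = mfdfa_hq(np.random.default_rng(1).standard_normal(8192), q=2.0)
```

For memory detection, h(2) > 0.5 suggests long-range persistence and h(2) < 0.5 anti-persistence; the higher-order detrending polynomials (order > 1) are what allow MF-DFA to eliminate the stochastic trends that would otherwise produce false detections.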
Abstract:
The buckling strength of a new cold-formed hollow flange channel section known as the LiteSteel beam (LSB) is governed by lateral distortional buckling, characterised by simultaneous lateral deflection, twist and web distortion, for its intermediate spans. Recent research has developed a modified elastic lateral buckling moment equation to allow for lateral distortional buckling effects. However, it is limited to a uniform moment distribution condition that rarely exists in practice. Transverse loading introduces a non-uniform bending moment distribution, which is also often applied above or below the shear centre (load height). These loading conditions are known to have significant effects on the lateral buckling strength of beams. Many steel design codes have adopted equivalent uniform moment distribution and load height factors to allow for these effects. But they were derived mostly from data for conventional hot-rolled, doubly symmetric I-beams subject to lateral torsional buckling. The moment distribution and load height effects of transverse loading for LSBs, and the suitability of the current design modification factors to accommodate these effects, are not known. This paper presents the details of a research study based on finite element analyses of the elastic lateral buckling strength of simply supported LSBs subject to transverse loading. It discusses the suitability of the current steel design code modification factors, and provides suitable recommendations for simply supported LSBs subject to transverse loading.
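For context, the uniform-moment reference case that the code modification factors adjust is the classical elastic lateral torsional buckling moment of a simply supported, doubly symmetric beam (a standard textbook result, not the modified LSB equation developed in the research cited above):

```latex
M_o = \frac{\pi}{L}\sqrt{E I_y \, G J \left(1 + \frac{\pi^2 E I_w}{G J L^2}\right)}
```

where $E$ and $G$ are the elastic and shear moduli, $I_y$ the minor-axis second moment of area, $J$ the torsion constant, $I_w$ the warping constant and $L$ the span. Design codes then scale $M_o$ with a moment distribution factor (e.g. $\alpha_m$ in AS 4100) and load height factors for non-uniform moment and loads applied away from the shear centre; it is the applicability of those factors to LSBs, whose governing mode also involves web distortion, that the paper examines.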
Abstract:
The new cold-formed LiteSteel beam (LSB) sections have found increasing popularity in residential, industrial and commercial buildings due to their light weight and cost-effectiveness. They have the beneficial characteristics of torsionally rigid rectangular flanges combined with economical fabrication processes. Currently there is significant interest in using LSB sections as flexural members in floor joist systems. When used as floor joists, the LSB sections require holes in the web to provide access for inspection and various services. But there are no design methods that provide accurate predictions of the moment capacities of LSBs with web holes. In this study, the buckling and ultimate strength behaviour of LSB flexural members with web holes was investigated in detail using a detailed parametric study based on finite element analyses, with the aim of developing appropriate design rules and recommendations for the safe design of LSB floor joists. Moment capacity curves were obtained using finite element analyses including all the significant behavioural effects affecting the ultimate member capacity. The parametric study produced the required moment capacity curves of LSB sections with a range of web hole combinations and spans. A suitable design method for predicting the ultimate moment capacity of LSBs with web holes was finally developed. This paper presents the details of this investigation and the results.
Abstract:
Unresolved painful emotional experiences, such as bereavement, trauma and disturbances in core relationships, are common presenting problems for clients of psychodrama or psychotherapy more generally. Emotional pain is experienced as a shattering of the sense of self and disconnection from others and, when unresolved, produces avoidant responses which inhibit the healing process. There is agreement across therapeutic modalities that exposure to emotional experience can increase the efficacy of therapeutic interventions. Moreno proposes that the activation of spontaneity is the primary curative factor in psychodrama and that healing occurs when the protagonist (client) engages with his or her wider social system and develops greater flexibility in response to that system. An extensive case-report literature describes the application of the psychodrama method in healing unresolved painful emotional experiences, but there is limited empirical research to verify the efficacy of the method or to identify the processes that are linked to therapeutic change. The purpose of the current research was to construct a model of protagonist change processes that could extend psychodrama theory, inform practitioners’ therapeutic decisions and contribute to understanding the common factors in therapeutic change. Four studies investigated protagonist processes linked to in-session resolution of painful emotional experiences. Significant therapeutic events were analysed using recordings and transcripts of psychodrama enactments, protagonist and director recall interviews and a range of process and outcome measures. A preliminary study (3 cases) identified four themes that were associated with helpful therapeutic events: enactment, the working alliance with the director and with group members, emotional release or relief, and social atom repair.
The second study (7 cases) used Comprehensive Process Analysis (CPA) to construct a model of protagonists’ processes linked to in-session resolution. This model was then validated across four more cases in Study 3. Five meta-processes were identified: (i) a readiness to engage in the psychodrama process; (ii) re-experiencing and insight; (iii) activating resourcefulness; (iv) social atom repair with emotional release; and (v) integration. Social atom repair with emotional release involved deeply experiencing a wished-for interpersonal experience accompanied by a free-flowing release of previously restricted emotion, and was most clearly linked to protagonists’ reports of reaching resolution and to post-session improvements in interpersonal relationships and sense of self. Acceptance of self in the moment increased protagonists’ capacity to generate new responses within each meta-process and, in resolved cases, there was evidence of spontaneity developing over time. The fourth study tested Greenberg’s allowing and accepting painful emotional experience model as an alternative explanation of protagonist change. The findings of this study suggested that while the process of allowing emotional pain was present in resolved cases, Greenberg’s model was not sufficient to explain the processes that lead to in-session resolution. The protagonist’s readiness to engage and activation of resourcefulness appear to facilitate the transition from problem identification to emotional release. Furthermore, experiencing a reparative relationship was found to be central to the healing process. This research verifies that there can be in-session resolution of painful emotional experience during psychodrama, and protagonists’ reports suggest that in-session resolution can heal the damage to the sense of self and the interpersonal disconnection that are associated with unresolved emotional pain.
A model of protagonist change processes has been constructed that challenges the view of psychodrama as a primarily cathartic therapy, by locating the therapeutic experience of emotional release within the development of new role relationships. The five meta-processes which are described within the model suggest broad change principles which can assist practitioners to make sense of events as they unfold and guide their clinical decision making in the moment. Each meta-process was linked to specific post-session changes, so that the model can inform the development of therapeutic plans for individual clients and can aid communication for practitioners when a psychodrama intervention is used for a specific therapeutic purpose within a comprehensive program of therapy.