3 results for Constraints-led approach

in Glasgow Theses Service


Relevance:

30.00%

Publisher:

Abstract:

Background: Community participation has become an integral part of many areas of public policy over the last two decades. For a variety of reasons, ranging from concerns about social cohesion and unrest to perceived failings in public services, governments in the UK and elsewhere have turned to communities as both a site of intervention and a potential solution. In contemporary policy, the shift to community is exemplified by the UK Government’s Big Society/Localism agenda and the Scottish Government’s emphasis on Community Empowerment. Through such policies, communities have been increasingly encouraged to help themselves in various ways, to work with public agencies in reshaping services, and to become more engaged in the democratic process. These developments have led some theorists to argue that responsibilities are being shifted from the state onto communities, representing a new form of 'government through community' (Rose, 1996; Imrie and Raco, 2003). Despite this policy development, there is surprisingly little evidence demonstrating the outcomes of the different forms of community participation. This study attempts to address this gap in two ways. Firstly, it explores the ways in which community participation policies in Scotland and England are playing out in practice. Secondly, it assesses the outcomes of the different forms of community participation taking place within these broad policy contexts.

Methodology: The study employs an innovative combination of the two main theory-based evaluation methodologies, Theories of Change (ToC) and Realist Evaluation (RE), building on ideas generated by earlier applications of each approach (Blamey and Mackenzie, 2007). ToC methodology is used to analyse the national policy frameworks and the general approach of community organisations in six case studies, three in Scotland and three in England. The local evidence from the community organisations’ theories of change is then used to analyse and critique the assumptions which underlie the Localism and Community Empowerment policies. Alongside this, across the six case studies, an RE approach is used to examine the specific mechanisms which operate to deliver outcomes from community participation processes, and to explore the contextual factors which influence their operation. Given the innovative methodological approach, the study also engages in some focused reflection on the practicality and usefulness of combining ToC and RE approaches.

Findings: The case studies provide significant evidence of the outcomes that community organisations can deliver by directly providing services or facilities, and by influencing public services. Important contextual factors in both countries include particular strengths within communities and positive relationships with at least part of the local state, although these often exist in parallel with elements of conflict. Notably, this evidence suggests that the idea of responsibilisation needs to be examined in a more nuanced fashion, incorporating issues of risk and power, as well as the active agency of communities and the local state. Thus communities may sometimes willingly take on responsibility in return for power, although this may also engender significant risk, with the balance between these three elements significantly mediated by local government.

The evidence also highlights the impacts of austerity on community participation, with cuts to local government budgets in particular increasing the degree of risk and responsibility for communities and reducing opportunities for power. Furthermore, the case studies demonstrate the importance of inequalities within and between communities, operating through a socio-economic gradient in community capacity. This has the potential to make community participation policy regressive, as more affluent communities are better able to take advantage of additional powers while local authorities have fewer resources to support the capacity of more disadvantaged communities. For Localism in particular, the findings suggest that some of the ‘new community rights’ may provide opportunities for communities to gain power and generate positive social outcomes. However, the English case studies also highlight the substantial risks involved and the extent to which such opportunities are being undermined by austerity. The case studies suggest that cuts to local government budgets have the potential to undermine some aspects of Localism almost entirely, and that the very limited interest in inequalities means that Localism may be both ‘empowering the powerful’ (Hastings and Matthews, 2014) and further disempowering the powerless. For Community Empowerment, the study demonstrates the ways in which community organisations can gain power and deliver positive social outcomes within the broad policy framework. However, whilst Community Empowerment is ostensibly less regressive, there are still significant challenges to be addressed. In particular, the case studies highlight significant constraints on the notion that communities can ‘choose their own level of empowerment’, and the assumption of partnership working between communities and the local state needs to take account of the evidence of very mixed relationships in practice. Most importantly, whilst austerity has so far had more limited impacts on local government in Scotland, the projected cuts in this area may leave Community Empowerment vulnerable to the dangers of regressive impact highlighted for Localism.

Methodologically, the study shows that ToC and RE can be applied together in practice and that the combination may bring significant benefits. ToC offers a productive framework for policy analysis, and combining this with data derived from local ToCs provides a powerful lens through which to examine and critique the aims and assumptions of national policy. ToC models also provide a useful framework within which to identify specific causal mechanisms using RE methodology and, again, the data from local ToC work can enable significant learning about ‘what works for whom in what circumstances’ (Pawson and Tilley, 1997).

Relevance:

30.00%

Publisher:

Abstract:

Re-creating and understanding the origin of life represents one of the major challenges facing the scientific community. We will never know exactly how life started on planet Earth; however, we can reconstruct the most likely chemical pathways that could have contributed to the formation of the first living systems. Traditionally, prebiotic chemistry has investigated the formation of modern life’s precursors and their self-organisation under very specific conditions thought to be ‘plausible’. So far, this approach has failed to produce a living system from the bottom up. In the work presented herein, two different approaches are employed to explore the transition from inanimate to living matter.

The development of microfluidic technology over the last decades has changed the way traditional chemical and biological experiments are performed. Microfluidics allows the handling of low volumes of reagents with very precise control. The use of micro-droplets generated within microfluidic devices is of particular interest to the fields of Origins of Life and Artificial Life. Whilst many efforts have been made to construct cell-like compartments from modern biological constituents, these are usually very difficult to handle. Microdroplets, by contrast, can be easily generated and manipulated at kHz rates, making them suitable for high-throughput experimentation and analysis of compartmentalised chemical reactions. Therefore, we decided to develop a microfluidic device capable of manipulating microdroplets in such a way that they could be efficiently mixed, split and sorted within iterative cycles. Since no microfluidic technology had been developed before in the Cronin Group, the first chapter of this thesis describes the soft lithographic methods and techniques developed to fabricate microfluidic devices. Special attention is also placed on the generation of water-in-oil microdroplets and on the subsequent modules required for the manipulation of the droplets, such as droplet fusers, splitters, sorters and single/multi-layer micromechanical valves.

Whilst the first part of this thesis describes the development of a microfluidic platform to assist chemical evolution, finding a compatible set of chemical building blocks capable of reacting to form complex molecules endowed with replicating or catalytic activity was challenging. Hence, the second part of this thesis focuses on potential chemistry that could ultimately possess these properties. A special focus is placed on the formation of peptide bonds from unactivated amino acids, which remains one of the greatest challenges in prebiotic chemistry. As opposed to classic prebiotic experiments, in which a specific set of conditions is studied to fit a particular hypothesis, we took a different approach: we explored the effects of several parameters at once on a model polymerisation reaction, without constraining ourselves to hypotheses about the nature of optimum conditions or their plausibility. This was facilitated by the development of a new high-throughput automated platform, allowing the exploration of a much larger number of parameters, and it led us to discover that peptide bond formation is less challenging than previously imagined. Having established the set of conditions under which peptide bond formation was enhanced, we then explored the co-oligomerisation of different amino acids, aiming for the formation of heteropeptides with different structures or functions. Finally, we studied the effect of various environmental conditions (rate of evaporation, presence of salts or minerals) on the final product distribution of our oligomeric products.
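The multi-parameter screening strategy can be pictured with a short sketch. The Python snippet below is a minimal, hypothetical illustration of a full-factorial exploration of the kind described in the abstract; the parameter names, value grids and the run_reaction placeholder are assumptions for illustration, not the platform's actual interface or the thesis conditions.

```python
import itertools

# Hypothetical parameter grid for a model amino-acid polymerisation
# reaction; names and values are illustrative only.
parameters = {
    "temperature_C":      [60, 90, 120, 150],
    "pH":                 [2, 5, 7, 9],
    "salt":               [None, "NaCl", "MgCl2"],
    "evaporation_cycles": [1, 5, 10],
}

def run_reaction(temperature_C, pH, salt, evaporation_cycles):
    """Placeholder for the automated platform's reaction and analysis
    step; it would return, e.g., an oligomer yield from mass spectrometry."""
    ...

# Full-factorial exploration: every combination is tried, rather than a
# single 'plausible' condition chosen to fit a particular hypothesis.
for combo in itertools.product(*parameters.values()):
    settings = dict(zip(parameters.keys(), combo))
    result = run_reaction(**settings)
```

Exploring the full grid rather than a handful of hand-picked conditions is what lets such a platform find enhanced peptide-bond formation in regions of parameter space a hypothesis-driven search would skip.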

Relevance:

30.00%

Publisher:

Abstract:

The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting for the antenna's motion. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach presented by Romano and Woan is another way of handling these noises, and it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise.

The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10x10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
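The eigenvalue separation can be reproduced in miniature. The numpy sketch below is a toy illustration of the eigendecomposition idea described above, not the thesis code: a random 0/1 design matrix stands in for LISA's true time-shift structure, and the sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_read, n_laser = 10, 3            # illustrative sizes, not real LISA dimensions
sigma_laser, sigma_pd = 1e3, 1.0   # laser noise dominates photodetector noise

# Design matrix encoding how each laser noise enters several raw readings;
# a random 0/1 pattern stands in for the true time-shift structure.
A = rng.integers(0, 2, size=(n_read, n_laser)).astype(float)

# Noise covariance of the raw data: shared laser-noise terms plus
# independent photodetector noise on each reading.
C = sigma_laser**2 * (A @ A.T) + sigma_pd**2 * np.eye(n_read)

# The eigendecomposition yields two distinct sets of eigenvalues: a small
# set near sigma_pd**2 (free of laser frequency noise) and a large set
# dominated by sigma_laser**2.
vals, vecs = np.linalg.eigh(C)
laser_free = vals < 10 * sigma_pd**2
print("laser-free eigenvalues:     ", vals[laser_free])
print("laser-dominated eigenvalues:", vals[~laser_free])

# Projecting the raw data onto the laser-free eigenvectors cancels the
# laser noise, mimicking what the TDI combinations achieve.
y = A @ (sigma_laser * rng.standard_normal(n_laser)) \
    + sigma_pd * rng.standard_normal(n_read)
y_clean = vecs[:, laser_free].T @ y
```

Because the laser-free eigenvectors are orthogonal to every column of the design matrix, their projections contain only photodetector noise, which is the toy-model analogue of the two eigenvalue sets reported above.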
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix; from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. Separating the two sets of data for the analysis turned out to be unnecessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
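As a small illustration of the block-diagonal point, the following numpy sketch decomposes a stack of hypothetical Hermitian cross power spectral density matrices one frequency bin at a time; the shapes and the random matrices are assumptions for illustration only, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n = 4, 6   # illustrative numbers of frequency bins and readings

# Hypothetical cross power spectral density matrices: one Hermitian,
# positive semi-definite block per frequency bin.
X = rng.standard_normal((n_bins, n, n)) + 1j * rng.standard_normal((n_bins, n, n))
psd = X @ X.conj().transpose(0, 2, 1)

# Because the full matrix is block diagonal over frequency, the
# eigendecomposition can be done bin by bin; numpy broadcasts the
# Hermitian eigensolver over the leading axis.
vals = np.linalg.eigvalsh(psd)   # shape (n_bins, n): one spectrum per bin
```

Working per bin keeps each eigenproblem small (n x n rather than the full data length), which is why the block-diagonal structure simplifies the frequency-domain computation.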