Abstract:
The purpose of this study is to describe the development of applications of mass spectrometry for the structural analysis of non-coding ribonucleic acids during the past decade. Mass spectrometric methods are compared with traditional gel electrophoretic methods, the performance characteristics of mass spectrometric analyses are studied, and the future trends of mass spectrometry of ribonucleic acids are discussed. Non-coding ribonucleic acids are short polymeric biomolecules which are not translated to proteins but which may affect gene expression in all organisms. Regulatory ribonucleic acids act through transient interactions with key molecules in signal transduction pathways. Interactions are mediated through specific secondary and tertiary structures. Posttranscriptional modifications in the structures of the molecules may introduce new properties to the organism, such as adaptation to environmental changes or development of resistance to antibiotics. In the scope of this study, the structural studies include i) determination of the sequence of nucleobases in the polymer chain, ii) characterisation and localisation of posttranscriptional modifications in nucleobases and in the backbone structure, iii) identification of ribonucleic acid-binding molecules and iv) probing of higher order structures in the ribonucleic acid molecule. Bacteria, archaea, viruses and HeLa cancer cells have been used as target organisms. Synthesised ribonucleic acids consisting of structural regions of interest have been frequently used. Electrospray ionisation (ESI) and matrix-assisted laser desorption ionisation (MALDI) have been used for ionisation of ribonucleic acid analytes. Ammonium acetate and 2-propanol are common solvents for ESI. Trihydroxyacetophenone is the optimal MALDI matrix for ionisation of ribonucleic acids and peptides. Ammonium salts are used in ESI buffers and MALDI matrices as additives to remove cation adducts. Reverse phase high performance liquid chromatography has been used for desalting and fractionation of analytes either off-line or on-line, coupled with the ESI source. Triethylamine and triethylammonium bicarbonate are used as ion pair reagents almost exclusively. The Fourier transform ion cyclotron resonance analyser using ESI coupled with liquid chromatography is the platform of choice for all forms of structural analyses. The time-of-flight (TOF) analyser using MALDI may offer a sensitive, easy-to-use and economical solution for simple sequencing of longer oligonucleotides and for analyses of analyte mixtures without prior fractionation. Special analysis software is used for computer-aided interpretation of mass spectra. With mass spectrometry, sequences of 20-30 nucleotides in length may be determined unambiguously. Sequencing may be applied to quality control of short synthetic oligomers for analytical purposes. Sequencing in conjunction with other structural studies enables accurate localisation and characterisation of posttranscriptional modifications and identification of nucleobases and amino acids at the sites of interaction. High-throughput screening methods for RNA-binding ligands have been developed. Probing of higher order structures has provided supportive data for computer-generated three-dimensional models of viral pseudoknots. In conclusion, mass spectrometric methods are well suited for structural analyses of small species of ribonucleic acids, such as short non-coding ribonucleic acids in the molecular size region of 20-30 nucleotides.
Structural information not attainable with other methods of analysis, such as nuclear magnetic resonance and X-ray crystallography, may be obtained with the use of mass spectrometry. Ligand screening may be used in the search for possible new therapeutic agents. Demanding assay design and challenging interpretation of data require multidisciplinary knowledge. The implementation of mass spectrometry in structural studies of ribonucleic acids is probably most efficiently conducted in specialist groups consisting of researchers from various fields of science.
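To make the mass-ladder idea behind this kind of sequencing concrete, the following minimal Python sketch computes neutral monoisotopic masses for successive 5'-fragments of a short RNA. The residue masses are standard values; the assumed 5'-phosphate/3'-OH fragment chemistry and the example sequence are illustrative simplifications, not the specific protocols reviewed in the study.

```python
# Minimal sketch: monoisotopic mass ladder for a short RNA oligomer.
# Assumes 5'-phosphate/3'-OH fragments; real ladders depend on the
# fragmentation chemistry (hydrolysis, exonuclease digestion, CID, ...).

RESIDUE = {  # monoisotopic residue masses (nucleoside 5'-monophosphate - H2O)
    "A": 329.05252, "C": 305.04129, "G": 345.04744, "U": 306.02530,
}
H2O = 18.01056

def ladder(seq: str):
    """Yield (fragment, neutral mass) for each successive 5'-fragment."""
    mass = H2O  # terminal H and OH
    for i, base in enumerate(seq, start=1):
        mass += RESIDUE[base]
        yield seq[:i], mass

for frag, m in ladder("GCAU"):
    print(f"{frag:<6} {m:10.4f}")
# Consecutive mass differences equal the residue masses, so the base
# order can be read directly from the peak spacing in the spectrum.
```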
Abstract:
The notions of identity and teacher education have attracted considerable research over the years, revealing a strong correlation between teacher beliefs and practices and the resultant impact on pedagogical practices in the classroom. In an era where the use of digital technologies should be synonymous with teacher pedagogical practices and transforming education, there is a growing need for pre-service teachers to develop an identity that resonates with pedagogical practices that engage and connect with students in a positive and productive way. With many educational institutions also mandating that educators use digital technologies as a tool to support and enhance teaching, pre-service teacher education needs to ensure that students understand and develop a positive identity within this digital world. Current literature acknowledges that many educators adopt digital technologies in the classroom, sometimes without fully understanding their scope or impact. It is within this context that this paper reports on a three-year study of first-year pre-service education students and their understanding of identity in a digital world. More specifically, the study identifies how students currently use social and digital media in their personal and professional lives to identify themselves online in order to promote a positive image. The study also seeks to identify how these technologies and an understanding of identity can be utilised to promote a positive first-year experience.
Abstract:
Background: The size and flexibility of the nursing workforce has positioned nursing as central to the goals of health service improvement. Nursing's response to meeting these goals has resulted in a proliferation of advanced practice nursing with a confusing array of practice profiles, titles and roles. Whilst numerous models and definitions of advanced practice nursing have been developed, there is scant published research of significant scope that supports these models. Consequently there is an ongoing call in the literature for clarity and stability in nomenclature, and confusion in the health industry on how to optimise the utility of advanced practice nursing. Objectives: To identify and delineate advanced practice from other levels of nursing practice through examination of a national nursing workforce. Design: A cross-sectional electronic survey of nurses using the validated Advanced Practice Role Delineation tool based on the Strong Model of Advanced Practice. Participants: Study participants were registered nurses employed in a clinical service environment across all states and territories of Australia. Methods: A sample of 5662 registered nurses participated in the study. Domain means for each participant were calculated, then means for nursing position titles were calculated. Position titles were grouped by delineation and were compared with one-way analysis of variance on domain means. The alpha for all tests was set at 0.05. Significant effects were examined with Scheffe post hoc comparisons to control for Type I error. Results: The survey tool was able to identify position titles where nurses were practicing at an advanced level and to delineate this cohort from other levels of nursing practice, including nurse practitioner. The results show that nurses who practice at an advanced level are characterised by high mean scores across all Domains of the Strong Model of Advanced Practice. The mean scores of advanced practice nurses were significantly different from those of nurse practitioners in the Direct Care Domain, and significantly different from those of other levels of nurses across all domains. Conclusions: The study results show that the nurse practitioner, advanced practice nurse and foundation-level registered nurse have different patterns of practice, and that the Advanced Practice Role Delineation tool has the capacity to clearly delineate and define advanced practice nursing. These findings make a significant contribution to the international debate and show that the profession can now identify what is and what is not advanced practice in nursing.
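As an illustration of the reported statistical design (one-way ANOVA on domain means followed by Scheffe post hoc comparisons), here is a minimal Python sketch. The group labels and data are invented placeholders, and the Scheffe criterion is implemented directly from its textbook form rather than taken from the study's software.

```python
# Sketch of the reported analysis pattern: one-way ANOVA on domain mean
# scores per position title, followed by Scheffe post hoc comparisons.
# The data below are invented placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {  # domain mean scores by (hypothetical) position title
    "registered_nurse": rng.normal(2.0, 0.5, 200),
    "advanced_practice": rng.normal(3.2, 0.5, 120),
    "nurse_practitioner": rng.normal(3.6, 0.5, 80),
}

F, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={F:.2f}, p={p:.3g}")

# Scheffe criterion: a pairwise difference is significant at alpha when
# (mean_i - mean_j)^2 > (k-1) * F_crit * MSE * (1/n_i + 1/n_j)
k = len(groups)
n_total = sum(len(g) for g in groups.values())
mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (n_total - k)
f_crit = stats.f.ppf(0.95, k - 1, n_total - k)  # alpha = 0.05

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        gi, gj = groups[names[i]], groups[names[j]]
        diff2 = (gi.mean() - gj.mean()) ** 2
        bound = (k - 1) * f_crit * mse * (1 / len(gi) + 1 / len(gj))
        print(names[i], "vs", names[j], "significant:", diff2 > bound)
```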
Abstract:
Over the last decade, many countries have used performance-based contracting (PBC) to manage and maintain roads. The implementation of PBC provides additional benefits for the government and the public, such as cost savings and improved conditions of contracted road assets. In Australia, PBC is already being implemented on all categories of roads: national, state, urban and rural. The Australian PBC arrangement is designed to turn over control of and responsibility for roadway system maintenance, rehabilitation, and capital improvement projects to private contractors. Contractors' responsibilities include determination of treatment types, and the design, programming and undertaking of works needed to maintain road networks at predetermined performance levels. Indonesia initiated two PBC pilot projects in 2011, the Pantura Section Demak-Trengguli (7.68 kilometers) in Central Java Province and Section Ciasem-Pamanukan (18.5 kilometers) in West Java Province. Both sections are categorized as national roads, and the contract duration for both projects is four years. To facilitate a possible way forward, it is proposed to conduct a study to understand Australia's experiences of advancing from pilot projects to nation-wide programs using PBC. The study focuses on the scope of contracts, bidding processes, risk allocation, and key drivers, using relevant PBC case studies from Australia. Recommendations for future nation-wide PBC deployment should be based on further research on risk allocation, including investigation of standard conditions of contract and the implications of contract clauses for the risk management strategies adopted by contractors. Based on the nature of the risks, some are best managed by the project owner. It is very important that all parties involved be open to the new rules of contract and convince themselves of the potential increased benefits of the use of PBC. The most recent state of these challenging issues was explored and described.
Abstract:
Sustainable management of native pastures requires an understanding of the bounds of pasture composition, cover and soil surface condition within which healthy pastoral landscapes persist. A survey of 107 Aristida/Bothriochloa pasture sites in inland central Queensland was conducted. The sites were chosen for their current diversity of tree cover, apparent pasture condition and soil type to assist in setting more objective bounds on condition ‘states’ in such pastures. Assessors’ estimates of pasture condition were strongly correlated with herbage mass (r = 0.57) and projected ground cover (r = 0.58), and moderately correlated with pasture crown cover (r = 0.35) and tree basal area (r = 0.32). Pasture condition was not correlated with pasture plant density or the frequency of simple guilds of pasture species. The soil type of Aristida/Bothriochloa pasture communities was generally hard-setting, low in cryptogam cover but moderately covered with litter and projected ground cover (30–50%). There was no correlation between projected ground cover of pasture and estimated ground-level cover of plant crowns. Tree basal area was correlated with broad categories of soil type, probably because greater tree clearing has occurred on the more fertile, heavy-textured clay soils. Of the main perennial grasses, some showed strong soil preferences, for example Tripogon loliiformis for hard-setting soils and Dichanthium sericeum for clays. Common species, such as Chrysopogon fallax and Heteropogon contortus, had no strong soil preference. Wiregrasses (Aristida spp.) tended to be uncommon at both ends of the estimated pasture condition scale, whereas H. contortus was far more common in pastures in good condition. Sedges (Cyperaceae) were common on all soil types and for all pasture condition ratings. Plants identified as increaser species were Tragus australianus, daisies (Asteraceae) and potentially toxic herbaceous legumes such as Indigofera spp. and Crotalaria spp. Pasture condition could not be reliably predicted based on the abundance of a single species or taxon, but there may be scope for using integrated data for four to five ecologically contrasting plants such as Themeda triandra with daisies, T. loliiformis and flannel weeds (Malvaceae).
Abstract:
The financial health of beef cattle enterprises in northern Australia has declined markedly over the last decade due to an escalation in production and marketing costs and a real decline in beef prices. Historically, gains in animal productivity have offset the effect of declining terms of trade on farm incomes. This raises the question of whether future productivity improvements can remain a key path for lifting enterprise profitability sufficiently to ensure that the industry remains economically viable over the longer term. The key objective of this study was to assess the production and financial implications for north Australian beef enterprises of a range of technology interventions (development scenarios), including genetic gain in cattle, nutrient supplementation, and alteration of the feed base through introduced pastures and forage crops, across a variety of natural environments. To achieve this objective, a beef systems model was developed that is capable of simulating livestock production at the enterprise level, including reproduction, growth and mortality, based on energy and protein supply from natural C4 pastures that are subject to high inter-annual climate variability. Comparisons between simulation outputs and enterprise performance data in three case study regions suggested that the simulation model (the Northern Australia Beef Systems Analyser) can adequately represent the performance of beef cattle enterprises in northern Australia. Testing of a range of development scenarios suggested that the application of individual technologies can substantially lift productivity and profitability, especially where the entire feed base was altered through legume augmentation. The simultaneous implementation of multiple technologies that provide benefits to different aspects of animal productivity resulted in the greatest increases in cattle productivity and enterprise profitability, with projected weaning rates increasing by 25%, liveweight gain by 40% and net profit by 150% above current baseline levels, although gains of this magnitude might not necessarily be realised in practice. While there were slight increases in total methane output from these development scenarios, methane emissions per kg of beef produced were reduced by 20% in the scenarios with higher productivity gains. Combinations of technologies or innovative practices applied in a systematic and integrated fashion thus offer scope for providing the productivity and profitability gains necessary to maintain viable beef enterprises in northern Australia into the future.
Abstract:
Divergent genetic selection for wool growth as a single trait has led to major changes in sheep physiology and metabolism, including variations in rumen microbial protein production and uptake of α-amino nitrogen in portal blood. This study was conducted to determine if sheep with different genetic merit for wool growth exhibit distinct rumen bacterial diversity. Eighteen Merino wethers were separated into groups of contrasting genetic merit for clean fleece weight (CFW; low: WG− and high: WG+) and fed a blended oaten and lucerne chaff diet at two levels of intake (LOI; 1 or 1.5 times maintenance energy requirements) for two seven-week periods in a crossover design. Bacterial diversity in rumen fluid collected by esophageal intubation was characterized using 454 amplicon pyrosequencing of the V3/V4 regions of the 16S rRNA gene. Bacterial diversity estimated by phylogenetic distance, Chao1 and observed species did not differ significantly with CFW or LOI; however, the Shannon diversity index differed (P=0.04) between WG+ (7.67) and WG− sheep (8.02). WG+ animals had a higher (P=0.03) proportion of Bacteroidetes (71.9% vs 66.5%) and a lower (P=0.04) proportion of Firmicutes (26.6% vs 31.6%) than WG− animals. Twenty-four specific operational taxonomic units (OTUs), belonging to the Firmicutes and Bacteroidetes phyla, were shared among all the samples, whereas specific OTUs varied significantly in presence/abundance (P<0.05) between wool genotypes and 50 varied (P<0.05) with LOI. It appears that genetic selection for fleece weight is associated with differences in rumen bacterial diversity that persist across different feeding levels. Moderate correlations between seven continuous traits, such as methane production or microbial protein production, and the presence and abundance of 17 OTUs were found, indicating scope for targeted modification of the microbiome to improve the energetic efficiency of rumen microbial synthesis and reduce the greenhouse gas footprint of ruminants.
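The diversity measures named above follow standard formulas. As a minimal sketch with invented OTU counts, the following shows how the Shannon index and the Chao1 richness estimate are computed from a per-sample OTU count vector:

```python
# Standard diversity indices from a vector of per-OTU read counts.
# The counts below are invented placeholders, not study data.
import math

def shannon(counts):
    """Shannon diversity H' = -sum p_i * ln(p_i) over observed OTUs."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2), with F1 singletons, F2 doubletons;
    falls back to the bias-corrected form when no doubletons are present."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + f1 * f1 / (2 * f2) if f2 else s_obs + f1 * (f1 - 1) / 2

counts = [120, 30, 5, 1, 1, 2, 0, 8]
print(f"Shannon H' = {shannon(counts):.2f}, Chao1 = {chao1(counts):.1f}")
```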
Abstract:
Metabolism is the cellular subsystem responsible for generation of energy from nutrients and production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to closely correspond to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models, and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it with real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in a web-based software tool, ReMatch, intended for reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
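The "gapless" requirement can be illustrated with the classic network-expansion idea: a reaction is usable only once all of its substrates are producible from a set of seed metabolites. The sketch below is an illustrative approximation under that assumption, not the thesis's actual algorithms, and the toy reactions are invented:

```python
# Minimal network-expansion sketch of the "gapless" idea: keep only
# reactions whose substrates are all producible from seed metabolites.
# Illustrative approximation only; not the thesis's algorithms.

def gapless_scope(reactions, seeds):
    """reactions: {name: (substrates, products)}; seeds: iterable of metabolites.
    Returns the set of reachable reactions and producible metabolites."""
    producible = set(seeds)
    active = set()
    changed = True
    while changed:
        changed = False
        for name, (subs, prods) in reactions.items():
            if name not in active and set(subs) <= producible:
                active.add(name)
                producible |= set(prods)
                changed = True
    return active, producible

reactions = {
    "r1": ({"glc"}, {"g6p"}),
    "r2": ({"g6p"}, {"f6p"}),
    "r3": ({"f6p", "atp"}, {"fbp"}),  # gapped: atp not producible from seeds
}
active, scope = gapless_scope(reactions, seeds={"glc"})
print(active)  # {'r1', 'r2'} -- r3 is excluded until its gap is filled
print(scope)   # {'glc', 'g6p', 'f6p'}
```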
Abstract:
The ever-expanding growth of wireless access to the Internet in recent years has led to the proliferation of wireless and mobile devices that connect to the Internet. This has created the possibility of mobile devices equipped with multiple radio interfaces connecting to the Internet using any of several wireless access network technologies, such as GPRS, WLAN and WiMAX, in order to get the connectivity best suited for the application. These access networks are highly heterogeneous and they vary widely in their characteristics such as bandwidth, propagation delay and geographical coverage. The mechanism by which a mobile device switches between these access networks during an ongoing connection is referred to as vertical handoff, and it often results in an abrupt and significant change in the access link characteristics. The most common Internet applications such as Web browsing and e-mail make use of the Transmission Control Protocol (TCP) as their transport protocol, and the behaviour of TCP depends on the end-to-end path characteristics such as bandwidth and round-trip time (RTT). As the wireless access link is most likely the bottleneck of a TCP end-to-end path, the abrupt changes in the link characteristics due to a vertical handoff may affect TCP behaviour adversely, degrading the performance of the application. The focus of this thesis is to study the effect of a vertical handoff on TCP behaviour and to propose algorithms that improve the handoff behaviour of TCP using cross-layer information about the changes in the access link characteristics. We begin this study by identifying the various problems of TCP due to a vertical handoff based on extensive simulation experiments. We use this study as a basis to develop cross-layer assisted TCP algorithms in handoff scenarios involving GPRS and WLAN access networks. We then extend the scope of the study by developing cross-layer assisted TCP algorithms in a broader context applicable to a wide range of bandwidth and delay changes during a handoff. Finally, the algorithms developed here are shown to be easily extendable to the multiple-TCP flow scenario. We evaluate the proposed algorithms by comparison with standard TCP (TCP SACK) and show that the proposed algorithms are effective in improving TCP behaviour in vertical handoffs involving a wide range of bandwidths and delays of the access networks. Our algorithms are easy to implement in real systems and they involve modifications to the TCP sender algorithm only. The proposed algorithms are conservative in nature and they do not adversely affect the performance of TCP in the absence of cross-layer information.
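A minimal sketch of the cross-layer idea follows: on a handoff notification, the TCP sender re-seeds its congestion-control state from the advertised characteristics of the new access link instead of waiting for losses or timeouts to drive convergence. The class, the bandwidth-delay heuristic and the numbers below are illustrative assumptions, not the algorithms proposed in the thesis.

```python
# Illustrative sketch of cross-layer assistance at a TCP sender: on a
# vertical-handoff notification, re-seed congestion state from the new
# link's advertised bandwidth and delay. Heuristic is illustrative only.

MSS = 1460  # bytes

class CrossLayerTcpSender:
    def __init__(self):
        self.cwnd = 10 * MSS       # congestion window (bytes)
        self.ssthresh = 64 * 1024  # slow-start threshold (bytes)
        self.srtt = 0.2            # smoothed RTT estimate (seconds)

    def on_handoff(self, link_bw_bps, link_rtt_s):
        """Cross-layer notification: characteristics of the new access link."""
        bdp = link_bw_bps / 8 * link_rtt_s  # bandwidth-delay product, bytes
        # Conservative re-seed: never jump above the new pipe's capacity.
        self.cwnd = max(MSS, min(self.cwnd, bdp))
        self.ssthresh = max(2 * MSS, int(bdp))
        self.srtt = link_rtt_s  # reset RTT estimate so RTO tracks the new path

sender = CrossLayerTcpSender()
# WLAN (fast) -> GPRS (slow): shrink cwnd to avoid flooding the new link.
sender.on_handoff(link_bw_bps=40_000, link_rtt_s=0.6)
print(sender.cwnd, sender.ssthresh, sender.srtt)
```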
Abstract:
In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope. Namely, mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to try to reconcile these two developments. XML as a highly redundant text-based format is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in a form usable on real mobile devices. The experimentation is performed on actual devices and real networks using the messaging system implemented as a part of this work. The experimentation is extensive and, due to using several different devices, also provides a glimpse of what the performance of these systems may look like in the future.
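To illustrate why a redundant text format is costly on wireless links, the toy encoder below replaces repeated tag names with one-byte dictionary tokens. It is a deliberately naive sketch of the general idea (attributes are ignored), not the serialization format developed in this work.

```python
# Toy illustration of XML redundancy: replace repeated tag names with
# one-byte dictionary indexes. Not this work's serialization format.
import xml.etree.ElementTree as ET

def toy_encode(elem, tag_dict, out):
    if elem.tag not in tag_dict:
        tag_dict[elem.tag] = len(tag_dict)
    out.append(bytes([tag_dict[elem.tag]]))          # 1-byte tag token
    text = (elem.text or "").strip().encode()
    out.append(len(text).to_bytes(2, "big") + text)  # length-prefixed text
    for child in elem:
        toy_encode(child, tag_dict, out)
    out.append(b"\xff")                              # end-of-element marker

doc = "<msg><to>alice</to><to>bob</to><body>hi</body></msg>"
root = ET.fromstring(doc)
parts, tags = [], {}
toy_encode(root, tags, parts)
binary = b"".join(parts)
print(len(doc.encode()), "bytes as text XML ->", len(binary), "bytes encoded")
```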
Abstract:
The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, where we attempt to learn the suitable computations from the natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we will briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research, and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e. the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
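Learning simple-cell-like receptive fields by maximisation of independence is commonly demonstrated with ICA on whitened natural-image patches. The sketch below follows that general recipe using scikit-learn's FastICA (assumed available, version 1.1 or later for the whiten argument); the random "images" are placeholders for real natural images, and none of this is the dissertation's own code.

```python
# Sketch: learning receptive-field-like filters by independence
# maximisation (ICA) from image patches. Random data stands in for
# real natural images here; replace it to see Gabor-like filters emerge.
import numpy as np
from sklearn.decomposition import FastICA

def sample_patches(images, n_patches=20_000, size=12, seed=0):
    """Sample square patches from a stack of grayscale images (H, W)."""
    rng = np.random.default_rng(seed)
    patches = np.empty((n_patches, size * size))
    for k in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - size)
        x = rng.integers(img.shape[1] - size)
        patches[k] = img[y:y + size, x:x + size].ravel()
    return patches - patches.mean(axis=1, keepdims=True)  # remove DC

images = np.random.default_rng(1).normal(size=(10, 256, 256))  # placeholder
X = sample_patches(images)
ica = FastICA(n_components=64, whiten="unit-variance", max_iter=300)
ica.fit(X)
filters = ica.components_.reshape(64, 12, 12)  # receptive-field estimates
print(filters.shape)
```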
Abstract:
This thesis presents a highly sensitive genome-wide search method for recessive mutations. The method is suitable for distantly related samples that are divided into phenotype positives and negatives. High-throughput genotype arrays are used to identify and compare homozygous regions between the cohorts. The method is demonstrated by comparing colorectal cancer patients against unaffected references. The objective is to find homozygous regions and alleles that are more common in cancer patients. We have designed and implemented software tools to automate the data analysis from genotypes to lists of candidate genes and to their properties. The programs have been designed with respect to a pipeline architecture that allows their integration with other programs such as biological databases and copy number analysis tools. The integration of the tools is crucial, as the genome-wide analysis of the cohort differences produces many candidate regions not related to the studied phenotype. CohortComparator is a genotype comparison tool that detects homozygous regions and compares their loci and allele constitutions between two sets of samples. The data is visualised in chromosome-specific graphs illustrating the homozygous regions and alleles of each sample. The genomic regions that may harbour recessive mutations are emphasised with different colours and a scoring scheme is given for these regions. The detection of homozygous regions, cohort comparisons and result annotations are all subject to assumptions, many of which have been parameterized in our programs. The effect of these parameters and the suitable scope of the methods have been evaluated. Samples with different resolutions can be balanced with the genotype estimates of their haplotypes and they can be used within the same study.
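A minimal sketch of the core comparison step, under the simplifying assumption that genotypes are coded 0/2 for homozygous and 1 for heterozygous calls: find per-sample runs of homozygous calls, then compare per-locus run coverage between patients and references. The names, thresholds and random data are illustrative, not those of CohortComparator.

```python
# Illustrative core of a homozygosity comparison: find runs of
# homozygous genotype calls per sample, then compare per-locus run
# coverage between cases and references. Not the actual CohortComparator.
import numpy as np

def homozygous_runs(calls, min_len=25):
    """calls: per-SNP genotypes, 0/2 homozygous, 1 heterozygous.
    Returns (start, end) index pairs of homozygous runs >= min_len SNPs."""
    runs, start = [], None
    for i, g in enumerate(list(calls) + [1]):  # sentinel closes a trailing run
        if g != 1 and start is None:
            start = i
        elif g == 1 and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    return runs

def run_coverage(samples, n_snps, min_len=25):
    """Fraction of samples whose homozygous runs cover each SNP."""
    cover = np.zeros(n_snps)
    for calls in samples:
        for s, e in homozygous_runs(calls, min_len):
            cover[s:e] += 1
    return cover / len(samples)

rng = np.random.default_rng(0)
cases = [rng.choice([0, 1, 2], size=1000) for _ in range(20)]
refs = [rng.choice([0, 1, 2], size=1000) for _ in range(20)]
excess = run_coverage(cases, 1000, min_len=10) - run_coverage(refs, 1000, min_len=10)
print("top candidate SNP index:", int(np.argmax(excess)))
```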
Abstract:
Marketing of goods under geographical names has always been common. Aims to prevent abuse have given rise to separate forms of legal protection for geographical indications (GIs) both nationally and internationally. The European Community (EC) has also gradually enacted its own legal regime to protect geographical indications. The legal protection of GIs has traditionally been based on the idea that geographical origin endows a product with exclusive qualities and characteristics. In today's world we are able to replicate almost any product anywhere, including its qualities and characteristics. One would think that this would preclude protection from most geographical names, yet the number of geographical indications seems to be rising. GIs are no longer what they used to be. In the EC it is no longer required that a product is endowed with exclusive characteristics by its geographical origin, as long as consumers associate the product with a certain geographical origin. This departure from the traditional protection of GIs is based on the premise that a geographical name extends beyond and exists apart from the product and therefore deserves protection itself. The thesis tries to clearly articulate the underlying reasons, justifications, principles and policies behind the protection of GIs in the EC and then scrutinise the scope and shape of the GI system in the light of its own justifications. The essential questions it attempts to answer are (1) What is the basis and criteria for granting GI rights? (2) What is the scope of protection afforded to GIs? and (3) Are these both justified in the light of the functions and policies underlying the granting and protecting of GIs? Despite the differences, the actual functions of GIs are in many ways identical to those of trade marks. Geographical indications have a limited role as source and quality indicators in allowing consumers to make informed and efficient choices in the market place. In the EC this role is undermined by allowing ample room and discretion for uses that are arbitrary. Nevertheless, generic GIs are unable to play this role. The traditional basis for justifying legal protection seems implausible in most cases. Qualities and characteristics are more likely to be related to transportable skill and manufacturing methods than the actual geographical location of production. Geographical indications are also incapable of protecting culture from market-induced changes. Protection against genericness, against any misuse, imitation and evocation, as well as against exploiting the reputation of a GI, seem to be there to protect the GI itself. Expanding or strengthening the already existing GI protection or using it to protect generic GIs cannot be justified with arguments on terroir or culture. The conclusion of the writer is that GIs themselves merit protection only in extremely rare cases and usually only the source and origin function of GIs should be protected. The approach should not be any different from the one taken in trade mark law. GI protection should not be used as a means to monopolise names. At the end of the day, the scope of GI protection is nevertheless a policy issue.
Abstract:
In terms of critical discourse, Liberty contributes to the ongoing aesthetic debate on ‘the sublime.’ Philosopher Immanuel Kant (1724–1804) defined the sublime as a failure of rationality in response to sensory overload: a state where the imagination is suspended, without definitive reference points—a state beyond unequivocal ‘knowing.’ I believe the events of September 11, 2001 eluded our understanding in much the same way, leaving us in a moment of suspension between awe and horror. It was an event that couldn’t be understood in terms of scope or scale. It was a moment of overload, which is so difficult to capture in art. With my work I attempt to rekindle that moment of suspension. Like the events of 9/11, Liberty defies definition. Its form is constantly changing; it is always presenting us with new layers of meaning. Nobody quite had a handle on the events that followed 9/11, because the implications were constantly shifting. In the same way, Liberty cannot be contained or defined at any moment in time. Like the events of 9/11, the full story cannot be told in a snapshot. One of the dictionary definitions for the word ‘sublime’ is the conversion of ‘a solid substance directly into a gas, without there being an intermediate liquid phase’. With this in mind, I would like to present Liberty as a work that is literally ‘sublime.’ But what’s really interesting to me about Liberty is that it presents the sublime on all levels: in its medium, in its subject matter (that moment of suspension), and in its formal (formless) presentation. On every level Liberty is sublime—subverting all tangible reference points and eluding capture entirely. Liberty is based on the Statue of Liberty in New York. However, unlike that statue which has stood in New York since 1886 and can be reasonably expected to stand for millennia, this work takes on diminishing proportions, carved as it is in carbon dioxide, a mysterious, previously unexplored medium—one which smokes, snows and dramatically vanishes into a harmless gas. Like the material this work is carved from, the civil liberties of the free world are diminishing fast, since 9/11 and before. This was my thought when I first conceived this work. Now it’s become evident that Liberty expresses a lot more than just this: it demonstrates the erosion of civil liberties, yes. However, it also presents the intangible, indefinable moments in the days and months that followed 9/11. The sculptural work will last for only a short time, and thereafter will exist only in documentation. During this time, the form is continually changing and self-refining, until it disappears entirely, to be inhaled, metabolised and literally taken to heart by viewers.
Abstract:
The aim of this research is to present, interpret and analyze the phenomenon of pilgrimage in a contemporary, suburban Greek nunnery, and to elucidate the different functions that the present-day convent has for its pilgrims. The scope of the study is limited to a case nunnery, the convent of the Dormition of the Virgin, which is situated in Northern Greece. The main corpus of data utilized for this work consists of 25 interviews and field diary material, which was collected in the convent mainly during the academic year 2002-2003 and summer 2005 by means of participant observation and unstructured thematic interviewing. It must be noted that most Greek nunneries are not really communities of hermits but institutions that operate in complex interaction with the surrounding society. Thus, the main interest in this study is in the interaction between pilgrims and nuns. Pilgrimage is seen here as a significant and concrete form of interaction, which in fact makes the contemporary nunneries dynamic scenes of religious, social and sometimes even political life. The focus of the analysis is on the pilgrims’ experiences, reflected upon on the levels of the individual, the Church institution, and society in general. This study shows that pilgrimage in a suburban nunnery, such as the convent of the Dormition, can be seen as part of everyday religiosity. Many pilgrims visit the convent regularly and the visitation is a lifestyle the pilgrims have chosen and wish to maintain. Pilgrimage to a contemporary Greek nunnery should not be ennobled, but seen as part of a popular religious sentiment. The visits offer pilgrims various tools for reflecting on their personal life situations and on questions of identity. For them the full round of liturgical worship is a very good reason for going to the convent, and many see it as a way of maintaining their faith and of feeling close to God. Despite cultural developments such as secularization and globalization, pilgrims are quite loyal to the convent they visit. It represents the positive values of ‘Greekness’ and therefore they also trust the nuns’ approach to various matters, both personal and political. The coalition of Orthodoxy and nationalism is also visible in their attitudes towards the convent, which they see as a guardian of Hellenism and as nurturing Greek values both now and in the future.