983 results for HYDROTHERMAL PROCESS
Abstract:
Phase diagrams for the Tm2O3-H2O-CO2, Yb2O3-H2O-CO2 and Lu2O3-H2O-CO2 systems at 650 and 1300 bars have been investigated in the temperature range 100–800°C. The phase diagrams are far more complex than those for the lighter lanthanides. The stable phases are Ln(OH)3, Ln2(CO3)3·3H2O (tengerite phase), orthorhombic LnOHCO3, hexagonal Ln2O2CO3, LnOOH and cubic Ln2O3. Ln(OH)3 is stable only at very low partial pressures of CO2. Additional phases stabilised are Ln2O(OH)2CO3 and Ln6(OH)4(CO3)7, which are absent in the lighter lanthanide systems. Other phases, isolated in the presence of minor alkali impurities, are Ln6O2(OH)8(CO3)3, Ln4(OH)6(CO3)3 and Ln12O7(OH)10(CO3)6. The chemical equilibria prevailing in these hydrothermal systems may be best explained on the basis of the four-fold classification of the lanthanides.
Abstract:
A year before Kate Nesbitt's Theorizing a New Agenda for Architecture (1996), the author penned a chapter on the significance of the sublime and its contribution to post-modern architecture via the uncanny or disturbing through the theories of Vidler and Eisenman (Nesbitt, 1995). Twenty years on, we see its ongoing presence within the contemporary works of the artists Kapoor, Ellison and Viola. Eisenman and Libeskind aside, explicit reference to the Sublime, whether through architectural praxis or theory, appears to have been trumped by ecological derivatives and associated transactions as catalysts for new architecture and architectural thinking. For Edmund Burke (1757), the Sublime was an overpowering of the self toward a state of intense self-presence, often leading to a state of otherness. To experience the sublime is to experience affect, physiologically overwhelming the mental faculties through intensities of astonishment, terror, obscurity, magnificence, and reverence. Key here is Burke's articulation of the stages of the sublime encounter, particularly its implications for the process of production, which architectural theorists appear to have overstepped in their valorisation of the sublime object. This paper seeks to resituate the sublime within the context of architectural production. Through concepts such as material thinking, bodies and making strange, the paper explores a shift in focus toward affective processes traced from Burke's inquiry. Rather than proposing strategies solely for affect within the work itself, the focus lies upon the designing experience, where blockage and desirous forces are critical partners in the process of production, as revealed through recent studio programs entitled Strange Space.
Abstract:
The perturbation treatment given previously is extended to explain the process of hydrogen abstraction from various hydrogen-donor molecules by the triplet nπ* state of ketones or by the ground state of an alkyl or alkoxy radical. The results suggest that, as the ionization energy of the donor bond is decreased, the reaction is accelerated, and that the reaction is not influenced by the bond strength of the donor bonds. The activation barrier in such reactions arises from a weakening of the charge-resonance term as the ionization energy of the donor bond increases.
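The charge-resonance argument admits a schematic Mulliken-type second-order form (an illustrative sketch, not necessarily the paper's exact expression):

```latex
\Delta E_{\mathrm{CT}} \;\approx\; -\,\frac{\lvert H_{\mathrm{DA}}\rvert^{2}}{I_{\mathrm{D}} - E_{\mathrm{A}}}
```

Here I_D is the ionization energy of the donor bond, E_A the electron affinity of the abstracting species and H_DA their coupling; as I_D increases, the charge-transfer stabilization shrinks and the activation barrier rises, while the donor bond strength does not appear in the expression.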
Abstract:
The research reported in this thesis dealt with single crystals of thallium bromide grown for gamma-ray detector applications. The crystals were used to fabricate room-temperature gamma-ray detectors. Routinely produced TlBr detectors are often of poor quality. This study therefore concentrated on developing the manufacturing processes for TlBr detectors, together with characterisation methods that can be used to optimise TlBr purity and crystal quality. The processes of concern were TlBr raw material purification, crystal growth, annealing and detector fabrication. The study focused on single crystals of TlBr grown from material purified by a hydrothermal recrystallisation method. In addition, hydrothermal conditions for synthesis, recrystallisation, crystal growth and annealing of TlBr crystals were examined. The final manufacturing process presented in this thesis starts with TlBr material purified by the Bridgman method. The material is then hydrothermally recrystallised in pure water. A travelling molten zone (TMZ) method is used for additional purification of the recrystallised product and then for the final crystal growth. Subsequent processing is similar to that described in the literature. In this thesis, the literature on improving the quality of TlBr material/crystals and detector performance is reviewed. Aging aspects, as well as the influence of different factors (temperature, time, electrode material and so on) on detector stability, are considered and examined. The results of the process development are summarised and discussed. This thesis shows a considerable improvement in the charge-carrier properties of a detector due to additional purification by hydrothermal recrystallisation. As an example, a thick (4 mm) TlBr detector produced by the process was fabricated and found to operate successfully in gamma-ray detection, confirming the validity of the proposed purification and technological steps. However, for a complete improvement of detector performance, further developments in crystal growth are required. The detector manufacturing process was optimised by characterisation of material and crystals using methods such as X-ray diffraction (XRD), polarisation microscopy, high-resolution inductively coupled plasma mass spectrometry (HR-ICPMS), Fourier transform infrared (FTIR) and ultraviolet-visible (UV-Vis) spectroscopy, field emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDS), current-voltage (I-V) and capacitance-voltage (C-V) characterisation, and photoconductivity, as well as direct detector examination.
Abstract:
The present study examined whether a specific property of cell microstructures may be useful as a biomarker of aging. Specifically, the association between age and changes in cellular structures reflected in electrophoretic mobility of cell nuclei index (EMN index) values across the adult lifespan was examined. This report considers findings from cross-sectional samples of females (n = 1273) aged 18–98 years and males (n = 506) aged 19–93 years. A Biotest apparatus was used to perform intracellular microelectrophoresis on buccal epithelial cells collected from each individual. The EMN index was calculated from the number of epithelial cells with mobile nuclei relative to the cells with immobile nuclei per 100 cells. Regression analyses indicated a significant negative association between EMN index value and age for men (r = −0.71, p < 0.001) and women (r = −0.60, p < 0.001), demonstrating a key requirement that must be met by a biomarker of aging. The strength of the association observed between EMN index and age for both men and women was encouraging and supports the potential use of the EMN index for determining the biological age of an individual (or a group). In this study, a new attempt at a comprehensive explanation of the cellular mechanisms contributing to age-related changes of the EMN index was made. The EMN index has demonstrated potential to meet criteria proposed for biomarkers of aging, and further investigations are necessary.
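Read literally, the index is the share of mobile-nucleus cells among the 100 scored cells, expressed as a percentage. A minimal Python sketch of that reading (the function name and the percentage interpretation are assumptions, not the authors' code):

```python
def emn_index(mobile: int, immobile: int) -> float:
    """EMN index as a percentage: cells with electrophoretically
    mobile nuclei out of all scored cells (typically 100)."""
    total = mobile + immobile
    if total == 0:
        raise ValueError("no cells scored")
    return 100.0 * mobile / total

# Example: 38 mobile and 62 immobile nuclei among 100 scored cells
print(emn_index(38, 62))  # -> 38.0
```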
Abstract:
- Objectives: Falls are the most frequent adverse event reported in hospitals. Patient and staff education delivered by trained educators significantly reduced falls and injurious falls in an older rehabilitation population. The purpose of this study was to explore the educators' perspectives on delivering the education and to conceptualise how the programme worked to prevent falls among older patients who received the education.
- Design: A qualitative exploratory study.
- Methods: Data were gathered from three sources: a focus group and an interview (n=10 educators), written educator notes, and reflective researcher field notes based on interactions with the educators during the primary study. The educators delivered the programme on eight rehabilitation wards for periods of between 10 and 40 weeks. They provided older patients with individualised education to engage in falls prevention and provided staff with education to support patient actions. Data were thematically analysed and presented using a conceptual framework.
- Results: Falls prevention education led to mutual understanding between staff and patients, which assisted patients to engage in falls prevention behaviours. Mutual understanding was derived from the following observations: the educators perceived that they could facilitate an effective three-way interaction between staff actions, patient actions and the ward environment, which led to behaviour change on the wards. This included engaging with staff and patients, and assisting them to reconcile differing perspectives about falls prevention behaviours.
- Conclusions: Individualised falls prevention education effectively provides patients who receive it with the capability and motivation to develop and undertake behavioural strategies that reduce their falls, if supported by staff and the ward environment.
Abstract:
Business Process Management (BPM) as a research field integrates different perspectives from the disciplines of computer science, management science and information systems research. Its evolution has been shaped by the corresponding conference series, the International Conference on Business Process Management (BPM conference). As in other academic disciplines, there is an ongoing debate about the identity, quality and maturity of the BPM field. In this paper, we review and summarize the major findings of a larger study that will be published in the Business & Information Systems Engineering journal in 2016. In that study, we investigate the identity and progress of the BPM conference research community through an analysis of the BPM conference proceedings. Based on our findings from this analysis, we formulate recommendations to further develop the conference community in terms of methodological advance, quality, impact and progression.
Abstract:
A direct method of preparing cast aluminium alloy-graphite particle composites using uncoated graphite particles is reported. The method consists of introducing and dispersing uncoated but suitably pretreated graphite particles in aluminium alloy melts, and casting the resulting composite melts in suitable permanent moulds. The optimal pretreatment required for the dispersion of the uncoated graphite particles in aluminium alloy melts consists of heating the graphite particles to 400 °C in air for 1 h just prior to their dispersion in the melts. The effects of alloying elements such as Si, Cu and Mg on the dispersability of pretreated graphite in molten aluminium are also reported. It was found that additions of about 0.5% Mg or 5% Si significantly improve the dispersability of graphite particles in aluminium alloy melts, as indicated by the high recoveries of graphite in the castings of these composites. It was also possible to disperse up to 3% graphite in LM 13 alloy melts and to retain the graphite particles in a well-distributed fashion in the castings using the pre-heat-treated graphite particles. The observations in this study have been related to the information presently available on wetting between graphite and molten aluminium in the presence of different elements, and to our own thermogravimetric analysis studies on graphite particles. The physical and mechanical properties of the LM 13-3% graphite composite made using pre-heat-treated graphite powder were found to be adequate for many applications, including pistons, which have been successfully used in internal combustion engines.
Abstract:
Considers the magnetic response of a charged Brownian particle undergoing a stochastic birth-death process, which simulates electron-hole pair production and recombination in semiconductors. The authors obtain a non-zero orbital diamagnetism which can be large without violating the Van Leeuwen theorem (1921).
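As a toy illustration of the setup (not the authors' calculation; every parameter here is invented), one can integrate a two-dimensional Langevin equation for a charged particle in a magnetic field, superpose Poissonian "death-rebirth" events that redraw the velocity from a thermal distribution, and average the orbital magnetic moment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters, arbitrary units: charge, mass, friction, temperature, field
q, m, gamma, kT, B = 1.0, 1.0, 1.0, 1.0, 2.0
lam = 0.5                      # birth-death (recombination) rate
dt, n_steps = 1e-3, 100_000

x = np.zeros(2)                              # position in the plane normal to B
v = rng.normal(0.0, np.sqrt(kT / m), 2)      # thermal initial velocity
noise = np.sqrt(2.0 * gamma * kT / m)        # Langevin noise amplitude

moment = 0.0
for _ in range(n_steps):
    # Lorentz force for B along z: F = q v x B = q*B*(vy, -vx)
    f = q * B * np.array([v[1], -v[0]])
    v += (f / m - gamma * v) * dt + noise * np.sqrt(dt) * rng.normal(size=2)
    x += v * dt
    # With rate lam the carrier "dies" and a fresh thermal one is "born"
    if rng.random() < lam * dt:
        v = rng.normal(0.0, np.sqrt(kT / m), 2)
    moment += 0.5 * q * (x[0] * v[1] - x[1] * v[0])  # z-component of (q/2) r x v

print("time-averaged orbital moment:", moment / n_steps)
```

In this toy model the resetting events are the ingredient that can leave a non-zero average moment, mirroring the abstract's point that equilibrium dynamics alone would give none.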
Abstract:
The aim of this study was to examine the actions of geographically dispersed process stakeholders (doctors, community pharmacists and residential aged care facilities (RACFs)) in order to cope with the information silos that exist within and across different settings. The study setting involved three metropolitan RACFs in Sydney, Australia and employed a qualitative approach using semi-structured interviews, non-participant observations and artefact analysis. Findings showed that medication information was stored in silos, which required specific actions in each setting to translate this information to fit local requirements. A salient example of this was the way in which community pharmacists used the RACF medication charts to prepare residents' pharmaceutical records. This translation of medication information across settings was often accompanied by telephone or face-to-face conversations to cross-check, validate or obtain new information. Findings highlighted that technological interventions that work in silos can negatively impact the quality of medication management processes in RACF settings. The implementation of commercial software applications like electronic medication charts needs to be appropriately integrated to satisfy the collaborative information requirements of the RACF medication process.
Abstract:
This thesis studies the human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurement of the expression of tens of thousands of genes simultaneously. In a single study, such data are traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, such data have been largely unavailable, and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. Data integration for the global map required the creation of a new, large ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text-mining and decision-tree based method for automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability, and the minimisation of the systematic measurement errors characteristic of each laboratory in this large cross-laboratory integrated dataset, were ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology. A preface and motivation for the construction and analysis of a global map of human gene expression is given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
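The final exploration step (PCA plus hierarchical clustering of a samples-by-genes matrix) can be sketched in a few lines; the data below are synthetic stand-ins for the integrated GEO/ArrayExpress matrix, and the library choices are illustrative assumptions rather than the thesis's pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Toy stand-in for an integrated expression matrix: 60 samples x 500 genes,
# drawn from three artificial "biological groups" with shifted means.
groups = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, 500)) + groups[:, None] * 1.5

# Project samples onto the leading principal components
pcs = PCA(n_components=2).fit_transform(X)

# Agglomerative (Ward) clustering on the PC coordinates
labels = fcluster(linkage(pcs, method="ward"), t=3, criterion="maxclust")
print(labels)
```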
Abstract:
In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images. Classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability. Earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with. This allows the separation of the essential from the conventional. To determine if the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask if accuracy versus effort trade-offs can be controlled after training. For another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask if problem-specific organization is necessary.
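As one concrete way to get such control after training, consider a two-stage sketch: a cheap classifier answers wherever its confidence clears a threshold and delegates the rest to a costlier model, so a single threshold tunes the accuracy/effort trade-off (the model choices and the 0.9 threshold are illustrative assumptions, not the thesis's framework):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

cheap = LogisticRegression(max_iter=1000).fit(Xtr, ytr)     # fast, less accurate
costly = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

def cascade_predict(x_batch, threshold=0.9):
    """Answer with the cheap model where it is confident; delegate the rest.
    Raising `threshold` buys accuracy with more calls to the costly model."""
    proba = cheap.predict_proba(x_batch)
    confident = proba.max(axis=1) >= threshold
    out = proba.argmax(axis=1)
    if (~confident).any():
        out[~confident] = costly.predict(x_batch[~confident])
    return out, (~confident).mean()

preds, delegated = cascade_predict(Xte, threshold=0.9)
print(f"accuracy={np.mean(preds == yte):.3f}, delegated={delegated:.1%}")
```

Sweeping the threshold from 0 to 1 traces the whole trade-off curve, from "cheap model only" to "costly model always", without retraining either classifier.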
Abstract:
An important question which has to be answered in evaluating the suitability of a microcomputer for a control application is the time it would take to execute the specified control algorithm. In this paper, we present a method of obtaining closed-form formulas to estimate this time. These formulas are applicable to control algorithms in which arithmetic operations and matrix manipulations dominate. The method does not require writing detailed programs for implementing the control algorithm. Using this method, the execution times of a variety of control algorithms on a range of 16-bit minicomputers and recently announced microcomputers are calculated. The formulas have been verified independently by an analysis program, which computes the execution time bounds of control algorithms coded in Pascal when they are run on a specified micro- or minicomputer.
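To illustrate the flavour of such formulas (not the paper's own): an n x n matrix multiply needs n^3 multiplications and n^2(n-1) additions, so a closed-form estimate is T ≈ n^3*t_mul + n^2(n-1)*t_add plus per-iteration loop overhead. A minimal sketch with invented instruction timings:

```python
def matmul_time_estimate(n: int, t_mul: float, t_add: float,
                         t_loop: float = 0.0) -> float:
    """Closed-form execution-time estimate for an n x n matrix multiply:
    n^3 multiplications, n^2*(n-1) additions, and optional per-iteration
    overhead for the n^3 innermost loop iterations."""
    return n**3 * t_mul + n**2 * (n - 1) * t_add + n**3 * t_loop

# Example: hypothetical 16-bit machine with a 20 us multiply, 4 us add,
# and 6 us of loop/indexing overhead per inner iteration (figures invented)
print(matmul_time_estimate(8, 20e-6, 4e-6, 6e-6), "seconds")
```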