964 results for machine-tools


Relevance: 20.00%

Abstract:

The study of urban morphology has become an expanding field of research within the architectural discipline, providing theories to be used as tools in understanding and designing urban landscapes of the past, the present and the future. Drawing upon contemporary architectural design theory, this investigation reveals what a sectional analysis of an urban landscape can add to the existing research methods within this field. This paper conducts an enquiry into the use of the section as a tool for urban morphological analysis. Following the methodology of the British school of urban morphology, sections through the urban fabric of the case study city of Brisbane are compared. The results are categorised to depict changes in scale, components and utilisation across various timeframes. The key findings illustrate how the section, when read in conjunction with the plan, can be used to interpret changes to urban form and the relationship these changes have to the quality of the urban environment in the contemporary city.

Relevance: 20.00%

Abstract:

Objective: Although several validated nutritional screening tools have been developed to "triage" inpatients for malnutrition diagnosis and intervention, there continues to be debate in the literature as to which tool or tools clinicians should use in practice. This study compared the accuracy of seven validated screening tools in older medical inpatients against two validated nutritional assessment methods. Methods: This was a prospective cohort study of medical inpatients at least 65 y old. Malnutrition screening was conducted using seven tools recommended in evidence-based guidelines. Nutritional status was assessed by an accredited practicing dietitian using the Subjective Global Assessment (SGA) and the Mini Nutritional Assessment (MNA). Energy intake was observed on a single day during the first week of hospitalization. Results: In this sample of 134 participants (80 ± 8 y old, 50% women), there was fair agreement between the SGA and MNA (κ = 0.53), with the MNA identifying more "at-risk" patients and the SGA better identifying existing malnutrition. Most tools were accurate in identifying patients with malnutrition as determined by the SGA, in particular the Malnutrition Screening Tool and the Nutritional Risk Screening 2002. The MNA Short Form was most accurate at identifying nutritional risk according to the MNA. No tool accurately predicted patients with inadequate energy intake in the hospital. Conclusion: Because all tools generally performed well, clinicians should consider choosing a screening tool that best aligns with their chosen nutritional assessment and is easiest to implement in practice. This study confirmed the importance of rescreening and monitoring food intake to allow the early identification and prevention of nutritional decline in patients with a poor intake during hospitalization.
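The "fair agreement" reported between the SGA and MNA is Cohen's kappa, which corrects raw agreement for chance. As an illustration of how such a κ is computed, here is a minimal sketch on invented binary malnourished (1) / well-nourished (0) calls; the data are hypothetical, not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if the two raters labelled independently
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented screening calls from two assessments of the same ten patients
sga = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
mna = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(sga, mna)
```

A κ around 0.4-0.6 is conventionally read as fair-to-moderate agreement, which matches the study's 0.53.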

Relevance: 20.00%

Abstract:

Material for this paper comes from a report commissioned by the Department of Family Services, Aboriginal and Islander Affairs. The report is the result of a multi-strategy research project designed to assess the impact of gaming machines on the fundraising capacity of charitable and community organisations in Queensland. The study was conducted during the 1993 calendar year. The first Queensland gaming machine was commissioned at 11.30 am on 11 February 1992 at the Kedron Wavell Services Club in Brisbane. Eighteen more clubs followed that week. Six months later there were gaming machines in 335 clubs and 250 hotels and taverns, representing a state-wide total of 7,974 machines in operation. The 10,000th gaming machine was commissioned on 18 March 1993, and the 1,000th operational gaming machine site was opened on 18 February 1994.

Relevance: 20.00%

Abstract:

Sustainability issues in the built environment have attracted an increasing level of attention from both the general public and the industry. As a result, a number of green building assessment tools have been developed, such as the Leadership in Energy and Environmental Design (LEED) and the BRE Environmental Assessment Method (BREEAM). This paper critically reviews the assessment tools developed in the Australian context, i.e. the Green Star rating tools developed by the Green Building Council of Australia, with a particular focus on their recent developments. The results show that office buildings take the biggest share of Green Star rated buildings, and that sustainable building assessment is becoming more performance oriented, focusing on the operation stage of buildings. In addition, stakeholder engagement during the decision-making process is encouraged. These findings provide useful references for the development of the next generation of sustainable building assessment tools.

Relevance: 20.00%

Abstract:

Improving energy efficiency has become increasingly important in data centers in recent years, in order to rein in their rapidly growing electricity consumption. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling systems. Many efforts have been made to make data centers more energy efficient. One of them is to minimize the total power consumption of these servers through virtual machine consolidation, which is implemented by virtual machine placement. The placement problem is often modeled as a bin packing problem. Due to the NP-hard nature of the problem, heuristics such as the First Fit and Best Fit algorithms have often been used and generally give good results; however, their performance leaves room for further improvement. In this paper we propose a Simulated Annealing (SA) based algorithm, which aims at further improving any feasible placement. This is the first published attempt to use SA to solve the VM placement problem so as to optimize power consumption. Experimental results show that this SA algorithm can generate better results, saving up to 25% more energy than First Fit Decreasing, in an acceptable time frame.
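As a rough sketch of the approach described (not the authors' implementation), the following starts from a First Fit Decreasing placement and applies simulated annealing over single-VM moves, using the number of powered-on servers as a simple proxy for power. All parameter values (initial temperature, cooling rate, step count) and the workload are illustrative assumptions:

```python
import math
import random

def first_fit_decreasing(demands, capacity):
    """Initial placement: assign VM demands to servers with First Fit Decreasing."""
    servers = []                     # each server is a list of hosted VM demands
    for d in sorted(demands, reverse=True):
        for s in servers:
            if sum(s) + d <= capacity:
                s.append(d)
                break
        else:
            servers.append([d])
    return servers

def active_servers(placement):
    return sum(1 for s in placement if s)

def anneal(placement, capacity, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Simulated annealing: try moving one VM at a time; worse moves are
    accepted with Boltzmann probability exp(-delta/t) so the search can
    escape the local optima that trap greedy heuristics."""
    rng = random.Random(seed)
    cur = [list(s) for s in placement]
    best = [list(s) for s in cur]
    t = t0
    for _ in range(steps):
        src, dst = rng.randrange(len(cur)), rng.randrange(len(cur))
        if src == dst or not cur[src]:
            continue
        vm = rng.choice(cur[src])
        if sum(cur[dst]) + vm > capacity:
            continue                 # capacity-violating moves are never taken
        new = [list(s) for s in cur]
        new[src].remove(vm)
        new[dst].append(vm)
        delta = active_servers(new) - active_servers(cur)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = new
            if active_servers(cur) < active_servers(best):
                best = [list(s) for s in cur]
        t *= cooling
    return best

# An invented workload on which First Fit Decreasing is known to be suboptimal
demands = [5, 5, 4, 4, 3, 3, 3, 3]
initial = first_fit_decreasing(demands, 10)   # FFD uses 4 servers here
improved = anneal(initial, 10)
```

Because zero-cost moves are also accepted, the walk can shuffle VMs across plateaus until a server empties, which is how it can beat FFD on workloads like the one above.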

Relevance: 20.00%

Abstract:

Server consolidation using virtualization technology has become an important way to improve the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center, not the energy consumed by the data center's communication network. The network's energy consumption is not trivial, and should therefore be considered in virtual machine placement in order to make the data center more energy efficient. In this paper, we propose a genetic algorithm for a new virtual machine placement problem that considers the energy consumption in both the servers and the communication network of the data center. Experimental results show that the genetic algorithm performs well on test problems of different kinds, and scales up well as the problem size increases.
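A minimal sketch of such a placement GA, assuming a toy energy model in which cost is per-server power plus a penalty for traffic between communicating VMs placed on different servers. The operators, parameters, power figures, and workload are all illustrative, not the paper's:

```python
import random

def fitness(assign, demands, capacity, traffic, p_server=200.0, p_link=1.0):
    """Energy proxy: power for each active server plus network power for
    traffic between VMs on different servers; overloads cost infinity."""
    load = {}
    for vm, s in enumerate(assign):
        load[s] = load.get(s, 0) + demands[vm]
    if any(l > capacity for l in load.values()):
        return float("inf")
    cost = p_server * len(load)
    for (a, b), t in traffic.items():
        if assign[a] != assign[b]:
            cost += p_link * t
    return cost

def genetic_placement(demands, capacity, traffic, n_servers,
                      pop=40, gens=200, mutation=0.2, seed=1):
    """Elitist GA: keep the best half, refill with one-point crossover
    children, and occasionally mutate one VM's server assignment."""
    rng = random.Random(seed)
    population = [[rng.randrange(n_servers) for _ in demands]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: fitness(ind, demands, capacity, traffic))
        survivors = population[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(demands))
            child = a[:cut] + b[cut:]
            if rng.random() < mutation:
                child[rng.randrange(len(demands))] = rng.randrange(n_servers)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda ind: fitness(ind, demands, capacity, traffic))

# Invented workload: five VMs, two chatty pairs, capacity-8 servers
demands = [4, 4, 3, 3, 2]
traffic = {(0, 1): 10, (2, 3): 8}
placement = genetic_placement(demands, 8, traffic, n_servers=4)
```

Encoding a placement as a server index per VM lets crossover and mutation operate directly, and the traffic term pushes chatty VM pairs onto the same server, which is what distinguishes this formulation from server-only models.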

Relevance: 20.00%

Abstract:

During an intensive design-led workshop, multidisciplinary design teams examined options for a sustainable multi-residential tower on an inner urban site in Brisbane (Australia). The main aim was to demonstrate the key principles of daylight to every habitable room and cross-ventilation to every apartment in the subtropical climate, while responding to acceptable yield and price points. The four conceptual design proposals demonstrated a wide range of outcomes, with buildings ranging from 15 to 30 storeys. Daylight Factor (DF), view to the outside, and the avoidance of direct sunlight were the only quantitative and qualitative performance metrics used to implement daylighting in the proposed buildings during the charrette. This paper further assesses the daylighting performance of the four conceptual designs using climate-based daylight modelling (CBDM), specifically Daylight Autonomy (DA) and Useful Daylight Illuminance (UDI). Results show that UDI 100-2000 lux calculations provide more useful information on the daylighting design than DF. The percentage of the space achieving a UDI 100-2000 lux for more than 50% of the time ranged from 77% to 86% for active occupant behaviour (occupancy from 6 am to 6 pm). The paper also highlights the architectural features that most affect daylighting design in subtropical climates.
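The two climate-based metrics have simple definitions: DA is the fraction of occupied hours at or above a target illuminance (300 lux is a common choice, assumed here), and UDI is the fraction of occupied hours falling within the "useful" 100-2000 lux band. A minimal sketch on an invented 6 am-6 pm hourly illuminance series at one sensor point:

```python
def daylight_autonomy(lux, threshold=300.0):
    """DA: fraction of occupied hours at or above the target illuminance."""
    return sum(v >= threshold for v in lux) / len(lux)

def useful_daylight_illuminance(lux, low=100.0, high=2000.0):
    """UDI: fraction of occupied hours inside the useful illuminance band."""
    return sum(low <= v <= high for v in lux) / len(lux)

# Invented hourly illuminance (lux), 6 am to 6 pm, 12 occupied hours
hours = [40, 120, 350, 800, 1500, 2400, 2600, 1800, 900, 400, 150, 60]
da = daylight_autonomy(hours)
udi = useful_daylight_illuminance(hours)
```

Note how the two metrics disagree around midday: the over-lit 2400 and 2600 lux hours count towards DA but fall outside the UDI band, which is why UDI can flag glare-prone designs that DA misses.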

Relevance: 20.00%

Abstract:

This paper discusses first-year students' responses and outcomes following the integration of digital technologies in their second-semester foundational visualisation class, 'Visualisation II'. As the second class in the Visualisation series, it compounds the analogue knowledge taught in 'Visualisation I' with new digital technologies, introducing a myriad of hybrid visualisation tools and techniques for design exploration and design artefacts. This research examines the differentiation between analogue and digital design and common precedents of the two, and reflects upon the environment and class structure in light of the learning experiences and confidence of surveyed participants.

Relevance: 20.00%

Abstract:

Exponential growth of genomic data over the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics, and when used in conjunction with comparative genomics they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false-positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and the corresponding promoter strength.
Our study of E. coli σ70 promoters found support, at the 0.1 significance level, for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigation when more experimentally confirmed data become available. Some of the features discovered in this preliminary exploration proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggests support for our (alternative) hypothesis, albeit this trend may only be present for promoters where the corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigation when more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel.
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as to the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in the full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept.
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the predicted regulatory interactions. In the present study, we distinguish between relationships found across the full set of genomes, the 'core regulatory set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study.
We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques, which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
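The spectrum kernel central to the thesis compares two sequences through their shared k-mer content. A minimal sketch of the kernel computation (k = 3 and the sequences are illustrative; in a real study the resulting Gram matrix would be passed to an SVM configured with a precomputed kernel):

```python
from collections import Counter

def spectrum(seq, k=3):
    """Occurrence counts of every length-k substring (k-mer) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(x, y, k=3):
    """Spectrum kernel: inner product of the two k-mer count vectors."""
    sx, sy = spectrum(x, k), spectrum(y, k)
    return sum(sx[kmer] * sy[kmer] for kmer in sx if kmer in sy)

# The 3-mers ACG and CGT each occur twice in the first sequence and once
# in the second, so the kernel value is 2*1 + 2*1 = 4
k_xy = spectrum_kernel("ACGTACGT", "ACGTTT")
```

Because the feature map is just a sparse count vector, the kernel can be evaluated without materialising all 4^k dimensions, which is what makes it practical for binding-site-scale sequence data.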

Relevance: 20.00%

Abstract:

In recent years there has been a large emphasis placed on the need to use Learning Management Systems (LMS) in the field of higher education, with many universities mandating their use. An important aspect of these systems is their ability to offer collaboration tools to build a community of learners. This paper reports on a study of the effectiveness of an LMS (Blackboard©) in a higher education setting and whether both lecturers and students voluntarily use collaborative tools for teaching and learning. Interviews were conducted with participants (N=67) from the faculties of Science and Technology, Business, Health and Law. Results from this study indicated that participants often use Blackboard© as an online repository of learning materials and that the collaboration tools of Blackboard© are often not utilised. The study also found that several factors have inhibited the use and uptake of the collaboration tools within Blackboard©. These have included structure and user experience, pedagogical practice, response time and a preference for other tools.

Relevance: 20.00%

Abstract:

The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation, and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and an assessment of machine health. Effective diagnostics and prognostics are important aspects of CBM, allowing maintenance engineers to schedule a repair and to acquire replacement components before the components actually fail. All machine components are subject to degradation processes in real environments, and they have certain failure characteristics which can be related to the operating conditions. This paper describes a technique for accurate assessment of the remnant life of machines, based on health state probability estimation and on historical knowledge embedded in closed-loop diagnostics and prognostics systems. The technique uses a Support Vector Machine (SVM) classifier as a tool for estimating the health state probability of machine degradation, which can affect the accuracy of prediction. To validate the feasibility of the proposed model, real-life historical data from bearings of High Pressure Liquefied Natural Gas (HP-LNG) pumps were analysed and used to obtain the optimal prediction of remaining useful life. The results obtained were very encouraging and showed that the proposed prognostic system, based on health state probability estimation, has the potential to be used as an estimation tool for remnant life prediction in industrial machinery.
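The final step of such a prognostic pipeline, turning health-state probabilities into a remnant-life estimate, can be sketched as a probability-weighted average. The state definitions, probabilities, and nominal lives below are hypothetical, and the trained SVM that would produce the probabilities is omitted:

```python
def expected_remaining_life(state_probs, state_rul):
    """Probability-weighted remnant-life estimate.  state_probs[i] is the
    classifier's probability that the machine is in degradation state i;
    state_rul[i] is the nominal remaining life (hours) for that state."""
    assert abs(sum(state_probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return sum(p * r for p, r in zip(state_probs, state_rul))

# Hypothetical classifier output over four degradation states of a pump bearing
probs = [0.05, 0.15, 0.60, 0.20]                # healthy -> near failure
rul_by_state = [8000.0, 4000.0, 1500.0, 200.0]  # nominal hours per state
rul = expected_remaining_life(probs, rul_by_state)
```

Weighting by the full probability vector, rather than taking only the most likely state, lets uncertainty in the classification flow through to the life estimate.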

Relevance: 20.00%

Abstract:

This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
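The tutorial's models are written in Java and driven by ModelJUnit; as a language-neutral illustration of the same idea, here is a minimal EFSM-style model and a random-walk test generator sketched in Python. The counter model, its guards, and the invariant are invented for illustration:

```python
import random

class CounterModel:
    """Tiny EFSM-style test model: a bounded counter.  Each action pairs a
    guard (when it may fire) with a transition (what it does)."""
    def __init__(self):
        self.value = 0

    def inc_guard(self):
        return self.value < 3

    def inc(self):
        self.value += 1

    def reset_guard(self):
        return self.value > 0

    def reset(self):
        self.value = 0

def random_walk(model, steps, seed=0):
    """Stochastic test generation: at each step, fire one randomly chosen
    enabled action and check the model invariant."""
    rng = random.Random(seed)
    actions = [("inc", model.inc_guard, model.inc),
               ("reset", model.reset_guard, model.reset)]
    trace = []
    for _ in range(steps):
        enabled = [(name, fire) for name, guard, fire in actions if guard()]
        name, fire = rng.choice(enabled)
        fire()
        assert 0 <= model.value <= 3, "invariant violated"
        trace.append(name)
    return trace

model = CounterModel()
trace = random_walk(model, 50)
```

In an online-testing setting the fired action would also be sent to the system under test, with its observed behaviour compared against the model's after every step.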

Relevance: 20.00%

Abstract:

Control of biospecimen quality that is linked to processing is one of the goals of biospecimen science. Consensus is lacking, however, regarding optimal sample quality-control (QC) tools (i.e., markers and assays). The aim of this review was to identify QC tools, for both fluid and solid-tissue samples, based on a comprehensive and critical literature review. The most readily applicable tools are those with a known threshold for the preanalytical variation and a known reference range for the QC analyte. Only a few meaningful markers were identified that meet these criteria, such as CD40L for assessing serum exposure to high temperatures and VEGF for assessing serum freeze-thawing. To fully assess biospecimen quality, multiple QC markers are needed. Here we present the most promising biospecimen QC tools that were identified.

Relevance: 20.00%

Abstract:

Numerous environmental rating tools have been developed around the world over the past decade or so, in an attempt to increase awareness of the impact buildings have on the environment. Whilst many of these tools can be applied across a variety of building types, the majority focus mainly on the commercial building sector. Only recently have some of the better-known environmental rating tools become adaptable to the land development sector, where arguably the most visible environmental impacts are made. EnviroDevelopment is one such tool that enables the rating of residential land developments in Australia. This paper seeks to quantify the environmental benefits achieved by the EnviroDevelopment rating tool, using data from its certified residential projects across Australia. This research identifies the environmental gains in the residential land development sector that can be attributed to developers aspiring to gain certification under this rating tool.