Abstract:
Introduction: QC, EQA and method evaluation are integral to the delivery of quality patient results. To ensure QUT graduates have a solid grounding in these key areas of practice, a theory-to-practice approach is used to progressively develop and consolidate these skills. Methods: Using a BCG assay for serum albumin, each student undertakes an eight-week project analysing two levels of QC alongside ‘patient’ samples. Results are assessed using both single rules and multirules. Concomitantly with the QC analyses, an EQA project is undertaken; students analyse two EQA samples, twice in the semester. Results are submitted using cloud software, and data for the full ‘peer group’ are returned to students in spreadsheets and incomplete Youden plots. Youden plots are completed with target values and calculated albumin values, and analysed for ‘lab’ and method performance. The method has a low-level positive bias, which leads to the need to investigate an alternative method. Building directly on the EQA of the first project, and using the scenario of a lab that services renal patients, students undertake a method validation comparing BCP and BCG assays in another eight-week project. Precision and patient-comparison studies allow students to assess whether the BCP method addresses the proportional bias of the BCG method and is, overall, a ‘better’ alternative for analysing serum albumin, accounting for pragmatic factors, such as cost, as well as performance characteristics. Results: Students develop an understanding of the purpose and importance of QC and EQA in delivering quality results, the need to optimise testing to deliver quality results and, importantly, a working knowledge of the analyses that go into ensuring this quality. In parallel with developing these key workplace competencies, students become confident, competent practitioners, able to pipette accurately and precisely and to organise themselves in a busy, time-pressured work environment.
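The single-rule and multirule assessment described is typically done with Westgard-style rules on control results expressed as z-scores (deviations from the target mean in SD units). The sketch below is illustrative only, covering a few common rules; it is not the course's actual software, and the rule subset and thresholds are assumptions.

```python
def qc_flags(z_scores):
    """Return the QC rules violated by a chronological list of z-scores."""
    flags = set()
    for i, z in enumerate(z_scores):
        if abs(z) > 3:                       # 1-3s: one result beyond +/-3 SD
            flags.add("1-3s")
        if abs(z) > 2:                       # 1-2s: warning rule
            flags.add("1-2s")
        if i >= 1 and z > 2 and z_scores[i - 1] > 2:
            flags.add("2-2s")                # two consecutive results beyond +2 SD
        if i >= 1 and z < -2 and z_scores[i - 1] < -2:
            flags.add("2-2s")                # ...or both beyond -2 SD
        if i >= 1 and abs(z - z_scores[i - 1]) > 4:
            flags.add("R-4s")                # range between consecutive results > 4 SD
    return flags

print(qc_flags([0.5, -1.0, 2.3, 2.6]))       # flags the 2-2s violation (and 1-2s warning)
```

The point of the multirule approach, which the exercise is designed to teach, is that the 1-2s rule alone would reject far too many valid runs; the combined rules keep false rejection low while still detecting systematic and random error.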
Abstract:
Cooked prawn colour is known to be a driver of market price and a visual indicator of product quality for the consumer. Although there is a general understanding that colour variation exists in farmed prawns, there has been no attempt to quantify this variation or to identify where it is most prevalent. The objectives of this study were threefold: firstly, to compare three quantitative methods of measuring prawn colour or pigmentation (two different colorimeters and colour quantification from digital images); secondly, to quantify the amount of pigmentation variation that exists in farmed prawns within ponds, across ponds and across farms; and lastly, to assess the effects of ice storage or freeze-thawing of raw product prior to cooking. Each method was able to detect quantitative differences in prawn colour, although conversion of image-based quantification of prawn colour from RGB to Lab was unreliable. Considerable colour variation was observed between prawns from different ponds and different farms, and this variation potentially affects product value. Different post-harvest treatments prior to cooking were also shown to have a profound detrimental effect on prawn colour: both long periods of ice storage and freeze-thawing of raw product were detrimental, whereas ice storage immediately after cooking was beneficial. Results demonstrated that darker prawn colour was preserved by holding harvested prawns alive in chilled seawater, limiting the time between harvesting and cooking, and avoiding long periods of ice storage or freeze-thawing of uncooked product.
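For context, the RGB-to-Lab conversion mentioned is usually the standard sRGB (D65) to CIELAB formula, sketched below. The formula itself assumes calibrated sRGB reference conditions; in practice, as the study found, uncontrolled camera response and lighting are what make image-based conversion unreliable.

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB values to CIELAB (L*, a*, b*), assuming D65 white."""
    def linearize(c):                        # undo the sRGB gamma encoding
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # Linear RGB -> CIE XYZ (standard sRGB matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):                                # CIELAB companding function
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b_ = srgb_to_lab(255, 255, 255)        # reference white: L* near 100, a*, b* near 0
```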
Abstract:
A divide-and-correct algorithm is described for multiple-precision division in the negative base number system. In this algorithm an initial quotient estimate is obtained from suitable segmented operands; this is then corrected by simple rules to arrive at the true quotient.
Abstract:
Grain protein composition determines quality traits, such as value for food, feedstock, and biomaterials uses. The major storage proteins in sorghum are the prolamins, known as kafirins. Located primarily on the periphery of the protein bodies surrounding starch, cysteine-rich beta- and gamma-kafirins may limit enzymatic access to internally positioned alpha-kafirins and starch. An integrated approach was used to characterize sorghum with allelic variation at the kafirin loci to determine the effects of this genetic diversity on protein expression. Reversed-phase high performance liquid chromatography and lab-on-a-chip analysis showed reductions in alcohol-soluble protein in beta-kafirin null lines. Gel-based separation and liquid chromatography-tandem mass spectrometry identified a range of redox-active proteins affecting storage protein biochemistry. Thioredoxin, which is involved in the processing of proteins at germination and has reported effects on grain digestibility, was differentially expressed across genotypes. Thus, the redox states of endosperm proteins, of which kafirins are a subset, could affect quality traits in addition to the expression of proteins.
Abstract:
Algorithms are described for the basic arithmetic operations and square rooting in a negative base. A new operation called polarization, which reverses the sign of a number, facilitates subtraction using addition. Some special features of negative-base arithmetic are also mentioned.
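The distinctive property exploited by negative-base arithmetic is that every integer, positive or negative, has an unsigned digit string. The sketch below shows representation in base -2 ("negabinary") and emulates polarization (sign reversal) by reconversion; the paper's actual polarization works at the digit level, which this sketch does not reproduce.

```python
def to_negabinary(n):
    """Digit string of integer n in base -2 (no sign symbol needed)."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:                  # force the remainder into {0, 1}
            n, r = n + 1, r + 2
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s):
    """Integer value of a base -2 digit string."""
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

def polarize(s):
    """Sign reversal (emulated by reconversion): maps the digits of n to those of -n."""
    return to_negabinary(-from_negabinary(s))

print(to_negabinary(6), to_negabinary(-6))   # 11010 1110
print(polarize("11010"))                     # 1110
```

Note that 6 and -6 both come out as plain digit strings; this is why subtraction reduces to polarization followed by addition.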
Abstract:
A deterministic division algorithm in a negative-base number system is described; the divisor is mapped into a suitable range by premultiplication, so that the choice of each quotient digit is deterministic.
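The following sketch is not the paper's deterministic algorithm; it computes the quotient and remainder with ordinary integer arithmetic and then encodes them in base -2, to show the values such a division must produce.

```python
def to_negabinary(n):
    """Digit string of integer n in base -2."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:                  # force the remainder into {0, 1}
            n, r = n + 1, r + 2
        digits.append(str(r))
    return "".join(reversed(digits))

def eucl_divmod(a, d):
    """Euclidean division a = q*d + r with 0 <= r < |d|."""
    q, r = a // d, a % d
    if r < 0:                      # Python's remainder follows the divisor's sign
        q, r = q + 1, r - d
    return q, r

q, r = eucl_divmod(100, 7)                   # 14 and 2
print(to_negabinary(q), to_negabinary(r))    # 10010 110
```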
Abstract:
A numerical procedure, based on parametric differentiation and an implicit finite-difference scheme, has been developed for a class of problems in boundary-layer theory for saddle-point regions. Here, results are presented for the case of a three-dimensional stagnation-point flow with massive blowing. The method compares very well with other methods for particular cases (zero or small mass blowing). The results emphasize that the present numerical procedure is well suited to the solution of saddle-point flows with massive blowing, which could not be solved by other methods.
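The boundary-layer equations themselves are beyond a short sketch, but the implicit finite-difference machinery is generic. Below, as an illustration only (a model problem, not the flow equations of the paper), is one implicit (backward Euler) step for the diffusion equation u_t = u_xx with fixed end values, solved with the tridiagonal (Thomas) algorithm that such schemes rely on.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a sub-, b main, c super-diagonal, d rhs."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(u, dt, dx):
    """Advance u_t = u_xx by one backward-Euler step; u[0], u[-1] held fixed."""
    r = dt / dx ** 2
    n = len(u) - 2                       # interior unknowns
    a, b, c = [-r] * n, [1 + 2 * r] * n, [-r] * n
    d = list(u[1:-1])
    d[0] += r * u[0]                     # fold the boundary values into the rhs
    d[-1] += r * u[-1]
    return [u[0]] + thomas(a, b, c, d) + [u[-1]]

u = [0.0, 0.0, 1.0, 0.0, 0.0]            # initial spike
u1 = implicit_step(u, dt=0.1, dx=1.0)    # the spike diffuses symmetrically
```

The implicit formulation is what permits stable time steps in stiff regimes; an analogous property is what makes such schemes workable for the massive-blowing cases described.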
Abstract:
A new technology – 3D printing – has the potential to make radical changes to aspects of the way in which we live. Put simply, it allows people to download designs and turn them into physical objects by laying down successive layers of material. Replacements or parts for household objects such as toys, utensils and gadgets could become available at the press of a button. With this innovation, however, comes the need to consider impacts on a wide range of forms of intellectual property, as Dr Matthew Rimmer explains. 3D printing is the latest in a long line of disruptive technologies – including photocopiers, cassette recorders, MP3 players, personal computers, peer-to-peer networks, and wikis – which have challenged intellectual property laws, policies, practices, and norms. As The Economist has observed, ‘Tinkerers with machines that turn binary digits into molecules are pioneering a whole new way of making things—one that could well rewrite the rules of manufacturing in much the same way as the PC trashed the traditional world of computing.’
Abstract:
A single-pulse shock tube facility has been developed in the High Temperature Chemical Kinetics Lab, Aerospace Engineering Department, to carry out ignition delay studies and spectroscopic investigations of hydrocarbon fuels. Our main emphasis is on measuring ignition delay through pressure rise and by monitoring CH emission for various jet fuels, and on finding suitable additives for reducing the delay. Initially, the shock tube was tested and calibrated by measuring the ignition delay of a C2H6-O2 mixture; the results are in good agreement with earlier published work. Ignition times of exo-tetrahydrodicyclopentadiene (C10H16), a leading candidate fuel for scramjet propulsion, have been studied in the reflected shock region in the temperature range 1250-1750 K, with and without the addition of triethylamine (TEA). Addition of TEA results in a substantial reduction of the ignition delay of C10H16.
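Ignition delay data of this kind are conventionally correlated with an Arrhenius form, tau = A * exp(Ea / (R*T)), by fitting ln(tau) against 1/T with linear least squares. A minimal sketch follows; the data points are synthetic (generated from arbitrarily chosen A and Ea), not measured values from the study.

```python
import math

R = 8.314                       # gas constant, J/(mol K)

def fit_arrhenius(pairs):
    """Least-squares fit of (T, tau) pairs to tau = A*exp(Ea/(R*T)); returns (A, Ea)."""
    xs = [1.0 / T for T, tau in pairs]          # ln(tau) is linear in 1/T
    ys = [math.log(tau) for T, tau in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), slope * R  # intercept gives A, slope gives Ea/R

# Synthetic data over the 1250-1750 K range (A_true and Ea_true chosen arbitrarily)
A_true, Ea_true = 1e-4, 150e3
data = [(T, A_true * math.exp(Ea_true / (R * T))) for T in range(1250, 1751, 100)]
A_fit, Ea_fit = fit_arrhenius(data)              # recovers A_true and Ea_true
```

On noise-free synthetic data the fit is exact; with measured delays, the scatter of ln(tau) about the fitted line indicates how well a single activation energy describes the fuel over the range.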
Abstract:
Sensor networks represent an attractive tool to observe the physical world. Networks of tiny sensors can be used to detect a fire in a forest, to monitor the level of pollution in a river, or to check on the structural integrity of a bridge. Application-specific deployments of static-sensor networks have been widely investigated. Commonly, these networks involve a centralized data-collection point and no sharing of data outside the organization that owns it. Although this approach can accommodate many application scenarios, it significantly deviates from the pervasive computing vision of ubiquitous sensing where user applications seamlessly access anytime, anywhere data produced by sensors embedded in the surroundings. With the ubiquity and ever-increasing capabilities of mobile devices, urban environments can help give substance to the ubiquitous sensing vision through Urbanets, spontaneously created urban networks. Urbanets consist of mobile multi-sensor devices, such as smart phones and vehicular systems, public sensor networks deployed by municipalities, and individual sensors incorporated in buildings, roads, or daily artifacts. My thesis is that "multi-sensor mobile devices can be successfully programmed to become the underpinning elements of an open, infrastructure-less, distributed sensing platform that can bring sensor data out of their traditional closed-loop networks into everyday urban applications". Urbanets can support a variety of services ranging from emergency and surveillance to tourist guidance and entertainment. For instance, cars can be used to provide traffic information services to alert drivers to upcoming traffic jams, and phones to provide shopping recommender services to inform users of special offers at the mall. Urbanets cannot be programmed using traditional distributed computing models, which assume underlying networks with functionally homogeneous nodes, stable configurations, and known delays.
In contrast, Urbanets have functionally heterogeneous nodes, volatile configurations, and unknown delays. Instead, solutions developed for sensor networks and mobile ad hoc networks can be leveraged to provide novel architectures that address Urbanet-specific requirements, while providing useful abstractions that hide the network complexity from the programmer. This dissertation presents two middleware architectures that can support mobile sensing applications in Urbanets. Contory offers a declarative programming model that views Urbanets as a distributed sensor database and exposes an SQL-like interface to developers. Context-aware Migratory Services provides a client-server paradigm, where services are capable of migrating to different nodes in the network in order to maintain a continuous and semantically correct interaction with clients. Compared to previous approaches to supporting mobile sensing urban applications, our architectures are entirely distributed and do not assume constant availability of Internet connectivity. In addition, they allow on-demand collection of sensor data with the accuracy and at the frequency required by every application. These architectures have been implemented in Java and tested on smart phones. They have proved successful in supporting several prototype applications, and experimental results obtained in ad hoc networks of phones have demonstrated their feasibility, with reasonable performance in terms of latency, memory, and energy consumption.
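The declarative, database-style model described can be pictured as follows: the application states what sensor data it wants, and the middleware decides which nodes supply it, tolerating nodes that lack the requested sensor. This hypothetical miniature is illustrative only; none of the names below are Contory's actual API, and Contory's real interface is SQL-like rather than call-based.

```python
class Node:
    """One Urbanet participant offering some subset of sensor types."""
    def __init__(self, name, sensors):
        self.name = name
        self.sensors = sensors            # sensor type -> reading function

def run_query(nodes, sensor, min_readings):
    """Collect readings of one sensor type from whichever nodes offer it."""
    readings = []
    for node in nodes:
        read = node.sensors.get(sensor)
        if read is not None:              # heterogeneous nodes: some lack the sensor
            readings.append((node.name, read()))
        if len(readings) >= min_readings:
            break                         # stop once the application's need is met
    return readings

nodes = [
    Node("phone-1", {"noise": lambda: 62.0}),
    Node("car-7", {"speed": lambda: 48.5, "noise": lambda: 71.3}),
    Node("building-3", {"temperature": lambda: 21.5}),
]
print(run_query(nodes, "noise", 2))       # [('phone-1', 62.0), ('car-7', 71.3)]
```

The abstraction mirrors the dissertation's point: the caller never names nodes or routes, so node heterogeneity and volatility stay hidden behind the query.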
Abstract:
In recent years, XML has been widely adopted as a universal format for structured data. A variety of XML-based systems have emerged, most prominently SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This popularity is helped by the excellent support for XML processing in many programming languages and by the variety of XML-based technologies for more complex needs of applications. Concurrently with this rise of XML, there has also been a qualitative expansion of the Internet's scope. Namely, mobile devices are becoming capable enough to be full-fledged members of various distributed systems. Such devices are battery-powered, their network connections are based on wireless technologies, and their processing capabilities are typically much lower than those of stationary computers. This dissertation presents work performed to try to reconcile these two developments. XML as a highly redundant text-based format is not obviously suitable for mobile devices that need to avoid extraneous processing and communication. Furthermore, the protocols and systems commonly used in XML messaging are often designed for fixed networks and may make assumptions that do not hold in wireless environments. This work identifies four areas of improvement in XML messaging systems: the programming interfaces to the system itself and to XML processing, the serialization format used for the messages, and the protocol used to transmit the messages. We show a complete system that improves the overall performance of XML messaging through consideration of these areas. The work is centered on actually implementing the proposals in a form usable on real mobile devices. The experimentation is performed on actual devices and real networks using the messaging system implemented as a part of this work. 
The experimentation is extensive and, because several different devices are used, it also provides a glimpse of what the performance of these systems may look like in the future.
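The redundancy the dissertation refers to is easy to make concrete: the same reading serialized as XML carries its tag names in every message, while a fixed binary record sends only the values once the schema is agreed out of band. The field names and message layout below are invented for illustration.

```python
import struct
import xml.etree.ElementTree as ET

reading = {"sensor": 17, "temp": 21.5, "seq": 4021}

# XML form: self-describing, but the tag names repeat in every message
msg = ET.Element("reading")
for key, value in reading.items():
    ET.SubElement(msg, key).text = str(value)
xml_bytes = ET.tostring(msg)

# Binary form: unsigned short, float32, unsigned int (network byte order)
bin_bytes = struct.pack("!HfI", reading["sensor"], reading["temp"], reading["seq"])

print(len(xml_bytes), len(bin_bytes))     # the XML message is several times larger
```

This size gap, multiplied by parsing cost, is what motivates the alternative serialization formats the dissertation considers for battery-powered devices on wireless links.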
Abstract:
This thesis studies human gene expression space using high throughput gene expression data from DNA microarrays. In molecular biology, high throughput techniques allow numerical measurements of expression of tens of thousands of genes simultaneously. In a single study, this data is traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, this data has been largely unavailable and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a new large ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text-mining and decision-tree-based method for automatic conversion of human-readable free-text microarray data annotations into a categorised format. Comparability of the data, and minimisation of the systematic measurement errors characteristic of each laboratory in this large cross-laboratory integrated dataset, were ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of a global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology.
A preface and motivation for the construction and analysis of a global map of human gene expression are given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression at a global level.
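The core of the exploratory analysis described, projecting samples' expression profiles onto principal components so that biologically similar samples cluster, can be sketched in pure Python with power iteration for the first component. The four "samples" and three "genes" below are toy values for illustration, not data from the thesis.

```python
def first_pc(matrix, iters=200):
    """Leading eigenvector of the covariance matrix, via power iteration."""
    n, m = len(matrix), len(matrix[0])
    means = [sum(row[j] for row in matrix) / n for j in range(m)]
    centered = [[row[j] - means[j] for j in range(m)] for row in matrix]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(m)]
           for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):                 # repeated multiplication converges
        w = [sum(cov[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, centered

expr = [                    # rows: samples, columns: genes (toy data)
    [9.0, 1.0, 5.0],        # brain, replicate 1
    [8.5, 1.2, 5.1],        # brain, replicate 2
    [1.0, 9.0, 5.0],        # liver, replicate 1
    [1.3, 8.7, 4.9],        # liver, replicate 2
]
pc, centered = first_pc(expr)
scores = [sum(r[j] * pc[j] for j in range(3)) for r in centered]
# Replicates of the same tissue land close together on PC1, while the two
# tissues separate: the basic behaviour the global map relies on.
```

At the scale of the actual map, thousands of samples and tens of thousands of genes, the same idea is applied with optimised linear-algebra libraries rather than loops like these.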
Abstract:
Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. The computationalization and informationalization of everyday activities increase not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems with regard to data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications called ContextPhone is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. The usage of the system is analyzed in the light of how users make inferences about others based on real-time contextual cues mediated by the system, drawing on several long-term field studies.
The analysis of privacy implications draws together the social-psychological theory of self-presentation and research on privacy for ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but more generally to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for users, but self-presentation requires several thoughtful design decisions that allow manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to allow them to use the data themselves, rather than remaining passive subjects of data gathering.
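The privacy property of the locationing approach, that position is resolved on the terminal and never leaves it, can be pictured with a toy sketch: the device holds a local map from observed cell IDs to coordinates and computes its own position. Everything below (the database entries, the centroid averaging) is invented for illustration; the thesis's actual algorithm differs in detail.

```python
# Local database, e.g. shipped with the application or accumulated by the
# device itself; values are illustrative (cell id -> latitude, longitude)
CELL_DB = {
    "244-91-1021": (60.170, 24.941),
    "244-91-1022": (60.172, 24.946),
    "244-91-1350": (60.205, 24.655),
}

def locate(observed_cells):
    """Estimate position as the centroid of known observed cells, entirely on-device."""
    known = [CELL_DB[c] for c in observed_cells if c in CELL_DB]
    if not known:
        return None                     # nothing resolvable, and nothing revealed
    lat = sum(p[0] for p in known) / len(known)
    lon = sum(p[1] for p in known) / len(known)
    return lat, lon

print(locate(["244-91-1021", "244-91-1022", "999-00-0000"]))
```

The contrast is with network-side positioning, where the operator or a service computes and therefore learns the user's location; here the lookup never crosses the device boundary.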
Abstract:
Printed Circuit Board (PCB) layout design is one of the most important and time-consuming phases of the equipment design process in all electronics industries. This paper is concerned with the development and implementation of a computer-aided PCB design package. A set of programs has been developed which operate on a description of the circuit, supplied by the user in the form of a data file, and subsequently design the layout of a double-sided PCB. The algorithms used for the design of the PCB optimise the board area and the length of the copper tracks used for the interconnections. The output of the package is the layout drawing of the PCB, drawn on a CALCOMP hard-copy plotter and a Tektronix 4012 storage graphics display terminal. The routing density (the board area required for one component) achieved by this package is typically 0.8 sq. inch per IC. The package is implemented on a DEC 1090 system in Pascal and FORTRAN, and the SIGN(1) graphics package is used for display generation.