965 results for scale free
Abstract:
Pilot and industrial scale dilute acid pretreatment data can be difficult to obtain due to the significant infrastructure investment required. Consequently, models of dilute acid pretreatment by necessity use laboratory scale data to determine kinetic parameters and make predictions about optimal pretreatment conditions at larger scales. In order for these recommendations to be meaningful, the ability of laboratory scale models to predict pilot and industrial scale yields must be investigated. A mathematical model of the dilute acid pretreatment of sugarcane bagasse has previously been developed by the authors. This model was able to successfully reproduce the experimental yields of xylose and short chain xylooligomers obtained at the laboratory scale. In this paper, the ability of the model to reproduce pilot scale yield and composition data is examined. It was found that, in general, the model over-predicted the pilot scale reactor yields by a significant margin. Models that appear very promising at the laboratory scale may have limitations when predicting yields on a pilot or industrial scale. It is difficult to comment on whether there are any consistent trends in optimal operating conditions between reactor scale and laboratory scale hydrolysis due to the limited reactor datasets available. Further investigation is needed to determine whether the model has some efficacy when the kinetic parameters are re-evaluated by parameter fitting to reactor scale data; however, this requires the compilation of larger datasets. Alternatively, laboratory scale mathematical models may have enhanced utility for predicting larger scale reactor performance if bulk mass transport and fluid flow considerations are incorporated into the fibre scale equations. This work reinforces the need for appropriate attention to be paid to pilot scale experimental development when moving from laboratory to pilot and industrial scales for new technologies.
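The abstract does not reproduce the model equations, but laboratory-scale dilute acid pretreatment models are commonly built on Saeman-type kinetics: two consecutive first-order reactions, xylan → xylose → degradation products. A minimal sketch with hypothetical rate constants (not the authors' fitted parameters):

```python
# Saeman-type kinetics sketch: xylan -> xylose -> degradation products,
# two first-order steps integrated with a simple Euler scheme.
# Rate constants k1, k2 are hypothetical placeholders, not fitted values.
def simulate(k1=0.05, k2=0.01, dt=0.1, t_end=120.0):
    xylan, xylose, degraded = 1.0, 0.0, 0.0  # mass fractions
    t = 0.0
    history = []
    while t <= t_end:
        history.append((t, xylan, xylose))
        r1 = k1 * xylan    # hydrolysis of xylan to xylose
        r2 = k2 * xylose   # degradation of xylose (e.g. to furfural)
        xylan -= r1 * dt
        xylose += (r1 - r2) * dt
        degraded += r2 * dt
        t += dt
    return history

history = simulate()
peak_t, _, peak_yield = max(history, key=lambda h: h[2])
print(f"peak xylose yield {peak_yield:.2f} at t = {peak_t:.0f} min")
```

The sketch captures the trade-off such models optimise over: harsher or longer pretreatment increases xylan conversion but also degrades the released xylose, so yield peaks at an intermediate severity.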
Abstract:
This paper discusses the data collection technique used to determine the skills and knowledge required of academic librarians working in a digital library environment in Australia. The research was undertaken as part of the researcher’s master’s thesis conducted at Tallinn University. The data collection instrument used was a freely available online survey tool, and its advantages and disadvantages are discussed in terms of the desired outcomes and circumstances surrounding the thesis project. Decisions regarding the design of the questionnaire are also discussed.
Abstract:
We report a rapid and ultra-sensitive detection system for 2,4,6-trinitrotoluene (TNT) using unmodified gold nanoparticles and surface-enhanced Raman spectroscopy (SERS). First, a Meisenheimer complex is formed in aqueous solution between TNT and cysteamine within 15 min of mixing. The complex formation is confirmed by the development of a pink colour and a new UV–vis absorption band around 520 nm. Second, the Meisenheimer complex spontaneously self-assembles onto unmodified gold nanoparticles through a stable Au–S bond between the cysteamine moiety and the gold surface. The resulting monolayer of cysteamine–TNT is then screened by SERS to detect and quantify TNT. Our experimental results demonstrate that the SERS-based assay provides an ultra-sensitive approach for the detection of TNT down to 22.7 ng/L. The unambiguous fingerprint identification of TNT by SERS represents a key advantage of our proposed method. The new method provides high selectivity towards TNT over 2,4-DNT and picric acid. Therefore, it satisfies the practical requirements for the rapid screening of TNT in real-life samples, where the interim 24-h average allowable concentration of TNT in waste water is 0.04 mg/L.
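The abstract's sensitivity claim can be sanity-checked with a unit conversion: the reported detection limit (22.7 ng/L) against the quoted regulatory limit (0.04 mg/L).

```python
# Pure unit-conversion check of how far the reported SERS detection
# limit sits below the regulatory limit quoted in the abstract.
detection_limit_ng_per_L = 22.7       # reported LOD for TNT
regulatory_limit_mg_per_L = 0.04      # interim 24-h average for waste water

regulatory_limit_ng_per_L = regulatory_limit_mg_per_L * 1e6  # mg -> ng
margin = regulatory_limit_ng_per_L / detection_limit_ng_per_L
print(f"LOD is ~{margin:.0f}x below the regulatory limit")  # ~1762x
```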
Abstract:
The proliferation of the web presents an unsolved problem of automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes. ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been previously demonstrated. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering in massive collections where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing the quality of clusters where categorical labeling is unavailable or infeasible.
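The EM-tree algorithm itself is specified in the paper; as a point of reference, the core step it applies recursively over a tree of cluster means is the familiar k-means E-step/M-step pair. A minimal flat sketch on toy 2-D points (not the authors' compressed-representation implementation):

```python
# Minimal flat k-means sketch: the E-step / M-step pair that a
# tree-structured clusterer applies recursively at each node.
# Toy 2-D points stand in for document feature vectors.
def kmeans(points, k, iters=10):
    means = list(points[:k])  # simple deterministic initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # E-step: assign each point to its nearest mean
            i = min(range(k),
                    key=lambda j: (p[0] - means[j][0]) ** 2 + (p[1] - means[j][1]) ** 2)
            clusters[i].append(p)
        for j, c in enumerate(clusters):  # M-step: recompute cluster means
            if c:
                means[j] = (sum(p[0] for p in c) / len(c),
                            sum(p[1] for p in c) / len(c))
    return means, clusters

points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
means, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

Recursing this step over the means at each level is what lets a tree of small k-means problems scale to collections far larger than a single flat clustering could.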
Abstract:
Background The use of mobile apps for health and well-being promotion has grown exponentially in recent years. Yet, there is currently no app-quality assessment tool beyond “star” ratings. Objective The objective of this study was to develop a reliable, multidimensional measure for trialling, classifying, and rating the quality of mobile health apps. Methods A literature search was conducted to identify articles containing explicit Web or app quality rating criteria published between January 2000 and January 2013. Existing criteria for the assessment of app quality were categorized by an expert panel to develop the new Mobile App Rating Scale (MARS) subscales, items, descriptors, and anchors. Sixty well-being apps were randomly selected using an iTunes search for MARS rating. Ten were used to pilot the rating procedure, and the remaining 50 provided data on interrater reliability. Results A total of 372 explicit criteria for assessing Web or app quality were extracted from 25 published papers, conference proceedings, and Internet resources. Five broad categories of criteria were identified, including four objective quality scales (engagement, functionality, aesthetics, and information quality) and one subjective quality scale; these were refined into the 23-item MARS. The MARS demonstrated excellent internal consistency (alpha = .90) and interrater reliability (intraclass correlation coefficient, ICC = .79). Conclusions The MARS is a simple, objective, and reliable tool for classifying and assessing the quality of mobile health apps. It can also be used to provide a checklist for the design and development of new high-quality health apps.
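The internal consistency figure reported (alpha = .90) is Cronbach's alpha. A minimal sketch of how it is computed from item scores, using toy numbers rather than the MARS dataset:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Toy scores for 3 scale items across 4 apps (hypothetical, not MARS data).
def cronbach_alpha(items):
    """items: one list of scores per scale item, same respondent order."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

items = [
    [4, 3, 5, 2],
    [4, 4, 5, 2],
    [3, 3, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # → 0.96
```

When items rise and fall together across respondents, the variance of the totals dwarfs the summed item variances and alpha approaches 1, which is what "excellent internal consistency" reflects.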
Abstract:
Background Depression is a common psychiatric disorder in older people. The study aimed to examine the screening accuracy of the Geriatric Depression Scale (GDS) and the Collateral Source version of the Geriatric Depression Scale (CS-GDS) in the nursing home setting. Methods Eighty-eight residents from 14 nursing homes were assessed for depression using the GDS and the CS-GDS, and validated against clinician-diagnosed depression using the Semi-structured Clinical Diagnostic Interview for DSM-IV-TR Axis I Disorders (SCID) for residents without dementia and the Provisional Diagnostic Criteria for Depression in Alzheimer Disease (PDCdAD) for those with dementia. The screening performances of five versions of the GDS (30-, 15-, 10-, 8-, and 4-item) and two versions of the CS-GDS (30- and 15-item) were analyzed using receiver operating characteristic (ROC) curves. Results Among residents without dementia, both the self-rated (AUC = 0.75–0.79) and proxy-rated (AUC = 0.67) GDS variations performed significantly better than chance in screening for depression. However, neither instrument adequately identified depression among residents with dementia (AUC between 0.57 and 0.70). Among the GDS variations, the 4- and 8-item scales had the highest AUC and the optimal cut-offs were >0 and >3, respectively. Conclusions The validity of the GDS in detecting depression requires a certain level of cognitive functioning. While the CS-GDS is designed to remedy this issue by using an informant, it did not have adequate validity in detecting depression among residents with dementia. Further research is needed on informant selection and other factors that can potentially influence the validity of proxy-based measures in the nursing home setting.
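The AUC values reported come from ROC analysis of screening scores against clinical diagnosis. A minimal sketch of AUC via its rank-sum (Mann-Whitney) interpretation, on hypothetical GDS scores rather than the study data:

```python
# AUC = P(score of a randomly chosen depressed case > score of a
# randomly chosen non-case), with ties counted as half.
def roc_auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical GDS scores (higher = more depressive symptoms endorsed)
depressed = [9, 7, 8, 6]
not_depressed = [3, 5, 6, 2, 4]
print(roc_auc(depressed, not_depressed))  # → 0.975
```

An AUC of 0.5 is chance-level discrimination, which is why values of 0.57-0.70 among residents with dementia were judged inadequate.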
Abstract:
Existing field data for Rangal coals (Late Permian) of the Bowen Basin, Queensland, Australia, are inconsistent with the depositional model generally accepted in the current geological literature to explain coal deposition. Given the apparent unsuitability of the current depositional model to the Bowen Basin coal data, a new depositional model, here named the Cyclic Salinity Model, is proposed and tested in this study.
Abstract:
Place recognition has long been an incompletely solved problem, in that all approaches involve significant compromises. Current methods address many, but never all, of the critical challenges of place recognition – viewpoint-invariance, condition-invariance and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training; all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions, and demonstrate superior performance to current state-of-the-art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation.
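At its core, the landmark-matching step compares feature vectors of landmark proposals between two images. A minimal sketch with cosine similarity over toy vectors standing in for ConvNet features (an illustration of the matching idea, not the authors' pipeline):

```python
import math

# Score every landmark in image A against every landmark in image B by
# cosine similarity of their feature vectors; keep confident best matches.
# Toy 3-D vectors stand in for high-dimensional ConvNet features.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_landmarks(feats_a, feats_b, threshold=0.9):
    matches = []
    for i, fa in enumerate(feats_a):
        j, score = max(((j, cosine(fa, fb)) for j, fb in enumerate(feats_b)),
                       key=lambda t: t[1])
        if score >= threshold:  # keep only confident matches
            matches.append((i, j, score))
    return matches

image_a = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.2]]
image_b = [[0.9, 0.15, 0.0], [0.1, 0.0, 1.0]]
matches = match_landmarks(image_a, image_b)
print(matches)  # one confident match: landmark 0 in A with landmark 0 in B
```

Aggregating many such landmark-level matches into an image-level score is what makes the approach robust: individual landmarks survive viewpoint and appearance changes even when whole-image descriptors do not.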
Abstract:
Kimberlite terminology remains problematic because both descriptive and genetic terms are mixed together in most existing terminology schemes. In addition, many terms used in existing kimberlite terminology schemes are not used in mainstream volcanology, even though kimberlite bodies are commonly the remains of kimberlite volcanic vents and edifices. We build on our own recently published approach to kimberlite facies terminology, involving a systematic progression from descriptive to genetic. The scheme can be used for both coherent kimberlite (i.e. kimberlite that was emplaced without undergoing any fragmentation processes and therefore preserving coherent igneous textures) and fragmental kimberlites. The approach involves documentation of components, textures and assessing the degree and effects of alteration on both components and original emplacement textures. This allows a purely descriptive composite component, textural and compositional petrological rock or deposit name to be constructed first, free of any biases about emplacement setting and processes. Then important facies features such as depositional structures, contact relationships and setting are assessed, leading to a composite descriptive and genetic name for the facies or rock unit that summarises key descriptive characteristics, emplacement processes and setting. Flow charts summarising the key steps in developing a progressive descriptive to genetic terminology are provided for both coherent and fragmental facies/deposits/rock units. These can be copied and used in the field, or in conjunction with field (e.g. drill core observations) and petrographic data. Because the approach depends heavily on field scale observations, characteristics and process interpretations, only the first descriptive part is appropriate where only petrographic observations are being made. 
Where field scale observations are available, the progression from descriptive to interpretative terminology can be used, especially where some petrographic data also become available.
Abstract:
Erythropoietin (EPO), a glycoprotein hormone of ∼34 kDa, is an important hematopoietic growth factor; it is produced mainly in the kidney and controls the number of red blood cells circulating in the blood stream. Sensitive and rapid recombinant human EPO (rHuEPO) detection tools that improve on the current laborious EPO detection techniques are in high demand in both clinical and sports settings. A sensitive aptamer-functionalized biosensor (aptasensor) has been developed by controlled growth of gold nanostructures (AuNS) over a gold substrate (pAu/AuNS). The aptasensor selectively binds rHuEPO and was therefore used to extract and detect the drug from horse plasma by surface-enhanced Raman spectroscopy (SERS). Due to the nanogap separation between the nanostructures and the high population and distribution of hot spots on the pAu/AuNS substrate surface, strong signal enhancement was obtained. By using a wide area illumination (WAI) setting for the Raman detection, a low RSD of 4.92% over 150 SERS measurements was achieved. The significant reproducibility of the new biosensor addresses the serious problem of SERS signal inconsistency that hampers the use of the technique in the field. The WAI setting is compatible with handheld Raman devices. Therefore, the new aptasensor can be used for the selective extraction of rHuEPO from biological fluids, subsequently screened with a handheld Raman spectrometer for SERS-based in-field protein detection.
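The reproducibility figure quoted (RSD of 4.92% over 150 measurements) is a relative standard deviation. A minimal sketch of the computation on toy intensity readings, not the paper's data:

```python
import math

# RSD (%) = sample standard deviation / mean * 100, computed over
# repeated SERS peak-intensity readings. Toy values, not the paper's data.
def rsd_percent(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return math.sqrt(var) / mean * 100

intensities = [1020, 980, 1005, 995, 1000]  # hypothetical peak intensities
print(round(rsd_percent(intensities), 2))  # → 1.46
```

A low RSD across many spots on the substrate is what the abstract means by addressing SERS signal inconsistency: the enhancement is uniform enough to quantify against, not just to detect with.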
Abstract:
On the 18th of July 2013, three hundred local community members of Gladstone, Queensland erupted into song and dance, performing the fraught history of their harbourside community through tug boat ballets, taiko drumming, German bell ringing and BMX bike riding. Over 17,500 people attended the four performances of Boomtown, a Queensland Music Festival event. This was the largest regional, outdoor community-engaged musical performance staged in Australia. The narrative moved beyond the dominant, pejorative view of Gladstone as an industrial town to include the community members’ sense of purpose and aspirations. It was a celebratory, contentious and ambitious project that sought to disrupt the traditional conventions of performance-making through working in artistically democratic ways. This article explores the potential for Australian Community Engaged Arts (CEA) projects such as Boomtown to democratically engage community members and co-create culturally meaningful work within a community. Research into CEA projects rarely considers how the often delicate conversations between practitioners and the community work. The complex processes of finding and co-writing the narrative, casting, and rehearsing Boomtown are discussed with reference to artistic director/dramaturge Sean Mee’s innovative approaches. Boomtown began and concluded with community conversations. Skilful negotiation ensured congruence between the townspeople’s stories and the “community story” presented on stage, abrogating potential problems of narrative ownership. To supplement the research, twenty-one personal interviews were undertaken with Gladstone community members invested in the production before, during and after the project: performers, audience members and local professionals.
The stories shared and emphasised in the theatricalised narrative were based on propitious, meaningful, local stories from lived experiences rather than preconceived, trivial or tokenistic matters, and were underpinned by a consensus on what was in the best interests of the majority of community members. Boomtown exposed hidden issues in the community and gave voice to thoughts, feelings and concerns, triggering not just engagement but honest conversation within the community.
Abstract:
Public buildings and large infrastructure are typically monitored by tens or hundreds of cameras, all capturing different physical spaces and observing different types of interactions and behaviours. However, to date, in large part due to limited data availability, crowd monitoring and operational surveillance research has focused on single camera scenarios, which are not representative of real-world applications. In this paper we present a new, publicly available database for large scale crowd surveillance. Footage from 12 cameras for a full work day covering the main floor of a busy university campus building, including an internal and external foyer, elevator foyers, and the main external approach, is provided, alongside annotations for crowd counting (single- or multi-camera) and pedestrian flow analysis for 10 and 6 sites, respectively. We describe how this large dataset can be used to perform distributed monitoring of building utilisation, and demonstrate the potential of this dataset to understand and learn the relationship between different areas of a building.
Abstract:
Outlines some of the potential risks or actual harms that result from large-scale land leases or acquisitions and the relevant human rights and environmental law principles.
Abstract:
Sherlock Holmes faces his greatest challenge – since his fight to the death with Professor James Moriarty at Reichenbach Falls. Who owns Sherlock Holmes, the world’s greatest detective? Is it the estate of Sir Arthur Conan Doyle? Or the mysterious socialite Andrea Plunket? Or does Sherlock Holmes belong to the public? This is the question currently being debated in copyright litigation in the United States courts, raising larger questions about copyright law and the public domain, the ownership of literary characters, and the role of sequels, adaptations, and mash-ups.