69 results for Alkaline extraction and molybdate blue spectrophotometry


Relevance:

100.00%

Publisher:

Abstract:

The adsorption of Rhodamine B (RhB) and Basic Blue 9 (BB9, also known as methylene blue) onto sugarcane bagasse of different surface areas was compared in this study. There was only a small gain in the amount of dye removed when the bagasse surface area was increased from 0.57 m²/g to 1.81 m²/g, and BB9 adsorption was less sensitive to the change in surface area than RhB adsorption. The adsorption capacity for 250 mg/L RhB on 1 g/L bagasse was 65.5 mg/g, compared with 30.7 mg/g obtained with BB9 under the same conditions. Increasing the adsorption temperature from 30 °C to 50 °C had no effect on RhB adsorption but slightly decreased BB9 adsorption, by ~4%. The differences in adsorption performance between the two dyes are related to their molecular structures and to the surface chemistry of the bagasse.
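For reference, the capacity figures quoted above follow the standard definition of batch adsorption capacity (a general convention, not a formula taken from the abstract):

\[
q_{e} = \frac{(C_{0} - C_{e})\,V}{m},
\]

where \(q_{e}\) is the amount of dye adsorbed per gram of adsorbent (mg/g), \(C_{0}\) and \(C_{e}\) are the initial and equilibrium dye concentrations (mg/L), \(V\) is the solution volume (L) and \(m\) is the adsorbent mass (g). As a hypothetical back-calculation, with \(C_{0}\) = 250 mg/L and a dosage of \(m/V\) = 1 g/L, the reported \(q_{e}\) = 65.5 mg/g would correspond to \(C_{e} \approx 250 - 65.5 \times 1 = 184.5\) mg/L.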

Relevance:

100.00%

Publisher:

Abstract:

Purpose: This study investigates the clinical utility of the melanopsin-expressing, intrinsically photosensitive retinal ganglion cell (ipRGC) controlled post-illumination pupil response (PIPR) as a novel technique for documenting inner retinal function in patients with Type II diabetes without diabetic retinopathy. Methods: The PIPR was measured in seven patients with Type II diabetes, normal retinal nerve fiber thickness and no diabetic retinopathy. 488 nm and 610 nm stimuli of 7.15° diameter were presented in Maxwellian view to the right eye, and the consensual pupil light reflex of the left eye was recorded. Results: The group data for the blue (488 nm) PIPR identified a trend of reduced ipRGC function in the patients with diabetes and no retinopathy. The transient pupil constriction was, on average, lower in the diabetic group. The relationship between duration of diabetes and blue PIPR amplitude was linear, suggesting that ipRGC function decreases with increasing diabetes duration. Conclusion: This is the first report to show that the ipRGC-controlled post-illumination pupil response may have clinical application as a non-invasive technique for determining the progression of inner neuroretinal changes in patients with diabetes before they are ophthalmoscopically or anatomically evident. The lower transient pupil constriction amplitude indicates that outer retinal photoreceptor inputs to the pupil light reflex may also be affected in diabetes.
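As a rough illustration of how a sustained post-illumination response of this kind is commonly quantified (a minimal sketch; the window lengths, sampling rate and data below are hypothetical and not the protocol used in this study), the post-stimulus pupil size can be expressed relative to the pre-stimulus baseline:

# Minimal sketch: quantifying a post-illumination pupil response (PIPR) from a
# pupil-diameter trace. Window lengths, sampling rate and data are hypothetical,
# not taken from the study above.
def pipr_percent(trace_mm, fs_hz, baseline_s, stim_end_s, window=(10.0, 30.0)):
    """Return sustained post-stimulus pupil size as % of baseline diameter."""
    baseline = trace_mm[: int(baseline_s * fs_hz)]
    start = int((stim_end_s + window[0]) * fs_hz)
    stop = int((stim_end_s + window[1]) * fs_hz)
    post = trace_mm[start:stop]
    base_mean = sum(baseline) / len(baseline)
    post_mean = sum(post) / len(post)
    return 100.0 * post_mean / base_mean   # lower % = stronger sustained constriction

# Hypothetical usage with a synthetic trace sampled at 30 Hz:
fs = 30.0
trace = [6.0] * int(10 * fs) + [4.0] * int(1 * fs) + [4.8] * int(40 * fs)   # mm
print(round(pipr_percent(trace, fs, baseline_s=10.0, stim_end_s=11.0), 1))  # 80.0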

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we propose an approach that attempts to solve the problem of surveillance event detection, assuming that the definition of the events is known. To facilitate the discussion, we first define two concepts: the event of interest refers to the event that the user requests the system to detect, and background activities are any other events in the video corpus. This remains an unsolved problem for several reasons, listed below.

1) Occlusions and clustering: Surveillance scenes of significant interest, at locations such as airports, railway stations and shopping centers, are often crowded, so occlusions and clustering of people are frequently encountered. This significantly affects the feature extraction step; for instance, trajectories generated by object tracking algorithms are usually not robust in such situations.

2) The requirement for real-time detection: The system should process the video fast enough, in both the feature extraction and detection steps, to support real-time operation.

3) Massive size of the training data set: Suppose an event lasts for 1 minute in a video with a frame rate of 25 fps; the number of frames for this event is 60 × 25 = 1500. A training data set with many positive instances of the event is therefore likely to be very large (hundreds of thousands of frames or more), and handling such a large data set is a problem frequently encountered in this application.

4) Difficulty in separating the event of interest from background activities: The events of interest often co-exist with a set of background activities. Temporal ground truth is typically very ambiguous, as it does not distinguish the event of interest from the wide range of co-existing background activities, yet it is not practical to annotate the locations of events in large amounts of video data. This problem becomes more serious in the detection of multi-agent interactions, since the location of such events often cannot be constrained to a bounding box.

5) Challenges in determining the temporal boundaries of events: An event can occur at any time with an arbitrary duration. The temporal segmentation of events is difficult and ambiguous, and is also affected by other factors such as occlusions.

Relevance:

100.00%

Publisher:

Abstract:

The design and construction community has shown increasing interest in adopting building information models (BIMs). The richness of information provided by BIMs has the potential to streamline the design and construction processes by enabling enhanced communication, coordination, automation and analysis. However, there are many challenges in extracting construction-specific information out of BIMs. In most cases, construction practitioners have to manually identify the required information, which is inefficient and prone to error, particularly for complex, large-scale projects. This paper describes the process and methods we have formalized to partially automate the extraction and querying of construction-specific information from a BIM. We describe methods for analyzing a BIM to query for spatial information that is relevant for construction practitioners, and that is typically represented implicitly in a BIM. Our approach integrates ifcXML data and other spatial data to develop a richer model for construction users. We employ custom 2D topological XQuery predicates to answer a variety of spatial queries. The validation results demonstrate that this approach provides a richer representation of construction-specific information compared to existing BIM tools.
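As a self-contained illustration of the kind of 2D topological test such spatial queries rely on (a sketch only: the paper's implementation uses custom XQuery predicates over ifcXML, whereas the XML layout and attribute names below are hypothetical):

# Sketch of a 2D topological predicate of the kind used for construction-oriented
# spatial queries. The XML layout and attribute names are hypothetical, not ifcXML.
import xml.etree.ElementTree as ET

SAMPLE = """<model>
  <element id="slab-1"   xmin="0"  ymin="0" xmax="20" ymax="10"/>
  <element id="column-3" xmin="4"  ymin="4" xmax="5"  ymax="5"/>
  <element id="ramp-7"   xmin="25" ymin="0" xmax="30" ymax="4"/>
</model>"""

def bbox(e):
    return tuple(float(e.get(k)) for k in ("xmin", "ymin", "xmax", "ymax"))

def contains(outer, inner):
    """2D 'contains' predicate on axis-aligned footprints."""
    ox1, oy1, ox2, oy2 = outer
    ix1, iy1, ix2, iy2 = inner
    return ox1 <= ix1 and oy1 <= iy1 and ix2 <= ox2 and iy2 <= oy2

root = ET.fromstring(SAMPLE)
elems = {e.get("id"): bbox(e) for e in root.iter("element")}
# Query: which elements lie within the footprint of slab-1?
inside = [i for i, b in elems.items() if i != "slab-1" and contains(elems["slab-1"], b)]
print(inside)   # ['column-3']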

Relevance:

100.00%

Publisher:

Abstract:

Building information modeling (BIM) is an emerging technology and process that provides rich and intelligent design information models of a facility, enabling enhanced communication, coordination, analysis, and quality control throughout all phases of a building project. Although there are many documented benefits of BIM for construction, identifying essential construction-specific information out of a BIM in an efficient and meaningful way is still a challenging task. This paper presents a framework that combines feature-based modeling and query processing to leverage BIM for construction. The feature-based modeling representation implemented enriches a BIM by representing construction-specific design features relevant to different construction management (CM) functions. The query processing implemented allows for increased flexibility to specify queries and rapidly generate the desired view from a given BIM according to the varied requirements of a specific practitioner or domain. Central to the framework is the formalization of construction domain knowledge in the form of a feature ontology and query specifications. The implementation of our framework enables the automatic extraction and querying of a wide range of design conditions that are relevant to construction practitioners. The validation studies conducted demonstrate that our approach is significantly more effective than existing solutions. The research described in this paper has the potential to improve the efficiency and effectiveness of decision-making processes in different CM functions.
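A toy sketch of the query idea follows (not the authors' framework; the feature names and CM-function tags are hypothetical): design features are tagged with the CM functions they matter to and filtered on demand.

# Toy sketch of querying construction-specific design features by CM function.
# Feature names and CM-function tags are hypothetical, not from the paper.
features = [
    {"name": "slab penetration", "element": "slab-2", "cm_functions": {"formwork", "MEP coordination"}},
    {"name": "change in wall thickness", "element": "wall-9", "cm_functions": {"quantity takeoff"}},
    {"name": "congested rebar zone", "element": "beam-4", "cm_functions": {"constructability review"}},
]

def query(feature_list, cm_function):
    """Return the features relevant to a given construction-management function."""
    return [f for f in feature_list if cm_function in f["cm_functions"]]

for f in query(features, "formwork"):
    print(f["element"], "-", f["name"])   # slab-2 - slab penetration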

Relevance:

100.00%

Publisher:

Abstract:

Background: Optimal adherence to antiretroviral therapy (ART) is necessary for people living with HIV/AIDS (PLHIV). There have been relatively few systematic analyses of the factors that promote or inhibit adherence to antiretroviral therapy among PLHIV in Asia. This study assessed ART adherence and examined factors associated with suboptimal adherence in northern Viet Nam. Methods: Data from 615 PLHIV on ART in two urban and three rural outpatient clinics were collected by medical record extraction and from patient interviews using audio computer-assisted self-interview (ACASI). Results: The prevalence of suboptimal adherence was estimated to be 24.9% via a visual analogue scale (VAS) of past-month dose-missing and 29.1% using a modified Adult AIDS Clinical Trials Group scale for on-time dose-taking in the past 4 days. Factors significantly associated with the more conservative VAS score were: depression (p < 0.001), side-effect experiences (p < 0.001), heavy alcohol use (p = 0.001), chance health locus of control (p = 0.003), low perceived quality of information from care providers (p = 0.04) and low social connectedness (p = 0.03). Illicit drug use alone was not significantly associated with suboptimal adherence, but interacted with heavy alcohol use to reduce adherence (p < 0.001). Conclusions: This is the largest survey of ART adherence yet reported from Asia and the first in a developing country to use the ACASI method in this context. The evidence strongly indicates that ART services in Viet Nam should include screening and treatment for depression, linkage with alcohol and/or drug dependence treatment, and counselling to address the belief that chance or luck determines health outcomes.

Relevance:

100.00%

Publisher:

Abstract:

This study investigated potential markers within chromosomal DNA, mitochondrial DNA (mtDNA) and ribosomal RNA (rRNA) with the aim of developing a DNA-based method to allow differentiation between animal species. Such discrimination tests may have important applications in the forensic science, agriculture, quarantine and customs fields. DNA samples from five individuals within each of 10 animal species (including human) were analysed. DNA extraction and quantitation, followed by PCR amplification and GeneScan visualisation, formed the basis of the experimental analysis. Five gene markers from three different types of genes were investigated. These included genomic markers for the β-actin and TP53 tumor suppressor genes. Mitochondrial DNA markers, designed by Bataille et al. [Forensic Sci. Int. 99 (1999) 165], targeted the Cytochrome b gene and the hypervariable displacement loop (D-Loop) region. Finally, a ribosomal RNA marker for the 28S rRNA gene, optimised by Naito et al. [J. Forensic Sci. 37 (1992) 396], was used as a possible marker for speciation. Results showed differences of only a few base pairs between species for the β-actin and 28S markers, with the exception of the Sus scrofa (pig) β-actin fragment, which was significantly smaller. Multiplexing of the Cytochrome b and D-Loop markers gave limited species information, although positive discrimination of human DNA was evident. The most specific and discriminatory results were obtained with the TP53 gene, since this marker produced the greatest fragment-size differences between the animal species studied. Sample differentiation for all species was possible following TP53 amplification, suggesting that this gene could be used as a potential animal species identifier.

Relevance:

100.00%

Publisher:

Abstract:

An accurate PV module electrical model based on the Shockley diode equation is presented. The simple model has a photo-current source, a single diode junction and a series resistance, and includes temperature dependences. The method of parameter extraction and model evaluation in Matlab is demonstrated for a typical 60 W solar panel. The model is used to investigate the variation of the maximum power point with temperature and insolation levels. A comparison of buck versus boost maximum power point tracker (MPPT) topologies is made, and both are compared with a direct connection to a constant-voltage (battery) load. The boost converter is shown to have a slight advantage over the buck, since it can always track the maximum power point.
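In the standard form of this single-diode model (notation supplied here for clarity; this is the conventional formulation, not quoted from the paper), the module current I at terminal voltage V satisfies

\[
I = I_{ph} - I_{0}\left[\exp\!\left(\frac{V + I R_{s}}{n\,N_{s}\,V_{T}}\right) - 1\right], \qquad V_{T} = \frac{kT}{q},
\]

where \(I_{ph}\) is the photo-generated current, \(I_{0}\) the diode saturation current, \(n\) the ideality factor, \(N_{s}\) the number of series-connected cells, \(R_{s}\) the series resistance and \(V_{T}\) the thermal voltage; temperature enters through \(V_{T}\) and through the temperature dependence of \(I_{ph}\) and \(I_{0}\).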

Relevance:

100.00%

Publisher:

Abstract:

An accurate PV module electrical model based on the Shockley diode equation is presented. The simple model has a photo-current source, a single diode junction and a series resistance, and includes temperature dependences. The method of parameter extraction and model evaluation in Matlab is demonstrated for a typical 60 W solar panel. The model is used to investigate the variation of the maximum power point with temperature and insolation levels. A comparison of buck versus boost maximum power point tracker (MPPT) topologies is made, and both are compared with a direct connection to a constant-voltage (battery) load. The boost converter is shown to have a slight advantage over the buck, since it can always track the maximum power point.
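A minimal numerical sketch of a model of this kind is given below (illustrative only; it is not the paper's Matlab implementation, and the parameter values are hypothetical, loosely representative of a small 36-cell module).

# Minimal single-diode PV module model and maximum-power-point sweep.
# Illustrative sketch only; parameters are hypothetical, not from the paper.
import math

K = 1.381e-23   # Boltzmann constant (J/K)
Q = 1.602e-19   # electron charge (C)

def module_current(v, i_ph=3.8, i_0=2.0e-7, n=1.3, r_s=0.01, t=298.15, n_cells=36):
    """Solve I = Iph - I0*(exp((V/Ns + I*Rs)/(n*Vt)) - 1) for I by bisection."""
    vt = K * t / Q                      # thermal voltage per cell
    v_cell = v / n_cells
    f = lambda i: i_ph - i_0 * (math.exp((v_cell + i * r_s) / (n * vt)) - 1.0) - i
    lo, hi = -v_cell / r_s, i_ph + i_0  # f(lo) >= 0, f(hi) < 0; f is strictly decreasing
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return max(0.5 * (lo + hi), 0.0)    # clamp: no reverse current beyond open circuit

def maximum_power_point(v_max=24.0, steps=2400):
    """Sweep the I-V curve and return (V, I, P) at the maximum power point."""
    best = (0.0, 0.0, 0.0)
    for k in range(steps + 1):
        v = v_max * k / steps
        i = module_current(v)
        if v * i > best[2]:
            best = (v, i, v * i)
    return best

v_mp, i_mp, p_mp = maximum_power_point()
print(f"MPP at {v_mp:.2f} V, {i_mp:.2f} A -> {p_mp:.1f} W")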

Relevance:

100.00%

Publisher:

Abstract:

The study of the electrodeposition of polycrystalline gold in aqueous solution is important because ill-defined micro- and nanostructured surfaces are often employed in electrocatalysis applications. In this work, the morphology of gold was controlled by the electrodeposition potential and by the introduction of Pb(CH₃COO)₂·3H₂O into the plating solution, to give either smooth or nanostructured gold crystallites or large dendritic structures, which were characterized by scanning electron microscopy (SEM). The latter structures were achieved through a novel in situ galvanic replacement of lead with AuCl₄⁻(aq) during the course of gold electrodeposition. The electrochemical behavior of electrodeposited gold in the double-layer region was studied in acidic and alkaline media and related to electrocatalytic performance for the oxidation of hydrogen peroxide and methanol. Electrodeposited gold was found to be a significantly better electrocatalyst than a polished gold electrode; however, performance is highly dependent on the chosen deposition parameters. Fabricating a deposit with highly active surface states, comparable to those achieved at severely disrupted metal surfaces through thermal and electrochemical methods, does not yield the most effective electrocatalyst, because significant premonolayer oxidation occurs in the double-layer region of the electrodeposited gold. In particular, in alkaline solution, where gold usually shows the greatest electrocatalytic activity, these active surface states may be over-oxidized and may inhibit the electrocatalytic reaction. However, the activity and morphology of an electrodeposited film can be tailored: gold deposits exhibiting nanostructure within the surface crystallites showed enhanced electrocatalytic activity, in potential regions well within the double-layer region, compared with smaller smooth gold crystallites and larger dendritic structures.
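The in situ galvanic replacement mentioned above is driven by the large difference in standard reduction potentials between the AuCl₄⁻/Au and Pb²⁺/Pb couples; the overall reaction is presumably of the form (a standard redox balance written here for illustration, not quoted from the paper):

\[
2\,\mathrm{AuCl_{4}^{-}(aq)} + 3\,\mathrm{Pb(s)} \longrightarrow 2\,\mathrm{Au(s)} + 3\,\mathrm{Pb^{2+}(aq)} + 8\,\mathrm{Cl^{-}(aq)}.
\]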

Relevance:

100.00%

Publisher:

Abstract:

Textual document sets have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for information organisation and management. Text classification has become increasingly important and has attracted wide attention from researchers in different fields. This paper first reviews feature selection methods, implementation algorithms and applications of text classification. However, the knowledge extracted by current data-mining techniques for text classification contains considerable noise, which introduces uncertainty into the classification process arising from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve classification performance. Further improving the knowledge extraction process and the effective utilisation of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed, using Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate the Rough Set concepts and the Rough Set-based decision-making approach for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and related fields.
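As a minimal illustration of the Rough Set machinery such an approach draws on, the sketch below computes the standard lower and upper approximations of a decision class over an indiscernibility relation; the toy documents, attributes and labels are hypothetical and are not the paper's data or framework.

# Toy sketch of Rough Set lower/upper approximations for a decision class.
# Documents, attributes and labels are hypothetical.
from collections import defaultdict

# Each document is described by condition attributes; decision = relevant or not.
docs = {
    "d1": ({"term_finance": 1, "term_sport": 0}, "relevant"),
    "d2": ({"term_finance": 1, "term_sport": 0}, "relevant"),
    "d3": ({"term_finance": 1, "term_sport": 1}, "not_relevant"),
    "d4": ({"term_finance": 1, "term_sport": 1}, "relevant"),
    "d5": ({"term_finance": 0, "term_sport": 1}, "not_relevant"),
}

# Indiscernibility classes: documents with identical condition-attribute values.
blocks = defaultdict(set)
for name, (attrs, _) in docs.items():
    blocks[tuple(sorted(attrs.items()))].add(name)

target = {n for n, (_, label) in docs.items() if label == "relevant"}

lower = set().union(*(b for b in blocks.values() if b <= target))   # certainly relevant
upper = set().union(*(b for b in blocks.values() if b & target))    # possibly relevant
boundary = upper - lower                                            # uncertain region

print("lower:", sorted(lower))        # ['d1', 'd2']
print("boundary:", sorted(boundary))  # ['d3', 'd4']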

Relevance:

100.00%

Publisher:

Abstract:

1974 was the year when the Swedish pop group ABBA won the Eurovision Song Contest in Brighton and when Blue Swede reached number one on the Billboard Hot 100 in the US. Although Swedish pop music had gained some international success even before 1974, this year is often considered the beginning of an era in which Swedish pop music enjoyed great success around the world. Brands such as ABBA, Europe, Roxette, The Cardigans, Ace of Base, In Flames, Robyn, Avicii and Swedish House Mafia, together with music producers such as Stig Andersson, Ola Håkansson, Dag Volle, Max Martin, Andreas Carlsson, Jorgen Elofsson and several others, have kept the myth of the Swedish music miracle alive for some four decades. Swedish music continues to reap success around the world, but since the millennium, Sweden's relationship with music has been focused more on relatively controversial Internet-based services for music distribution, developed by Swedish entrepreneurs and engineers, than on successful musicians and composers. This chapter focuses on the music industry in Sweden. It discusses the development of the Internet services mentioned above and their impact on the production, distribution and consumption of recorded music. Ample space is given in particular to Spotify, the music service that has quickly and fundamentally changed the music industry in Sweden. The chapter also presents how the music industry's three sectors - recorded music, music licensing and live music - interact and evolve in Sweden.

Relevance:

100.00%

Publisher:

Abstract:

Aims: Pathology notification to a Cancer Registry is regarded as the most valid information for confirming a diagnosis of cancer. In view of the importance of pathology data, an automatic medical text analysis system (Medtex) is being developed to perform electronic Cancer Registry data extraction and coding of important clinical information embedded within pathology reports. Methods: The system automatically scans HL7 messages received from a Queensland pathology information system and analyses the reports for terms and concepts relevant to a cancer notification. A multitude of data items for cancer notification, such as primary site, histological type, stage and other synoptic data, are classified by the system. The underlying extraction and classification technology is based on SNOMED CT [1,2]. The Queensland Cancer Registry business rules [3] and the International Classification of Diseases for Oncology, Version 3 [4] have been incorporated. Results: The cancer notification services show that the classification of notifiable reports can be achieved with sensitivities of 98% and specificities of 96% [5], while cancer notification items such as basis of diagnosis, histological type and grade, primary site and laterality can be extracted with an overall accuracy of 80% [6]. In the case of lung cancer staging, the automated stages produced were accurate enough for the purposes of population-level research and for indicative staging prior to multidisciplinary team meetings [2,7]. Medtex also allows for detailed tumour-stream synoptic reporting [8]. Conclusions: Medtex demonstrates how medical free-text processing could enable the automation of some Cancer Registry processes. Over 70% of Cancer Registry coding resources are devoted to information acquisition. The development of a clinical decision support system to unlock information from medical free text could significantly reduce the costs arising from duplicated processes and enable improved decision support, enhancing the efficiency and timeliness of cancer information for Cancer Registries.
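As a toy illustration of the kind of rule-based extraction such a system performs, the sketch below pulls two notification items out of a free-text report using keyword rules; the keywords, sample text and item set are hypothetical, and this is not Medtex or its SNOMED CT-based logic.

# Toy sketch of rule-based extraction of cancer-notification items from
# free-text pathology reports. Keywords, codes and sample text are hypothetical.
import re

HISTOLOGY_RULES = {
    r"\badenocarcinoma\b": "adenocarcinoma",
    r"\bsquamous cell carcinoma\b": "squamous cell carcinoma",
}
LATERALITY_RULES = {r"\bleft\b": "left", r"\bright\b": "right"}

def extract_items(report_text):
    """Return a dict of notification items found by simple keyword rules."""
    text = report_text.lower()
    items = {}
    for pattern, label in HISTOLOGY_RULES.items():
        if re.search(pattern, text):
            items["histological_type"] = label
    for pattern, label in LATERALITY_RULES.items():
        if re.search(pattern, text):
            items["laterality"] = label
    return items

sample = "Right upper lobe biopsy: moderately differentiated adenocarcinoma."
print(extract_items(sample))   # {'histological_type': 'adenocarcinoma', 'laterality': 'right'}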

Relevance:

100.00%

Publisher:

Abstract:

Robust facial expression recognition (FER) under occluded-face conditions is challenging. It requires robust feature extraction algorithms and investigation of the effects of different types of occlusion on recognition performance. Previous FER studies in this area have been limited: they have covered recovery strategies for the loss of local texture information, tested only a few types of occlusion, and predominantly used a matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template-match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes with clear and solid glasses. Both matched and mismatched train-test strategies are adopted to analyse the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. The parameter-sensitivity results demonstrate a certain level of robustness to changes in the orientation and scale of the Gabor filters, the size of the templates, and the occlusion ratios. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with smaller reductions in accuracy under occlusion of the eyes or mouth.
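A compact sketch of the template-match-distance idea follows. It is illustrative only: it uses raw grayscale patches rather than Gabor magnitudes, compares each template against the same-location patch of the test image rather than performing a full template search, and all sizes, counts and images are hypothetical.

# Sketch of converting randomly sampled part-face templates into
# template-match distance features. Uses raw patches instead of Gabor
# magnitudes purely for brevity; sizes, counts and images are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sample_templates(gallery_img, n_templates=50, size=16):
    """Monte Carlo sampling of square templates (location + patch) from a gallery image."""
    h, w = gallery_img.shape
    templates = []
    for _ in range(n_templates):
        y = rng.integers(0, h - size)
        x = rng.integers(0, w - size)
        templates.append((y, x, gallery_img[y:y + size, x:x + size].astype(float)))
    return templates

def match_distance_features(test_img, templates):
    """Feature vector: distance between each template and the same-location test patch."""
    feats = []
    for y, x, patch in templates:
        size = patch.shape[0]
        test_patch = test_img[y:y + size, x:x + size].astype(float)
        feats.append(np.linalg.norm(test_patch - patch))
    return np.array(feats)

# Hypothetical 64x64 images standing in for aligned gallery/test faces.
gallery = rng.integers(0, 256, (64, 64))
test = rng.integers(0, 256, (64, 64))
templates = sample_templates(gallery)
print(match_distance_features(test, templates).shape)   # (50,)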

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Novel imaging techniques for prostate cancer (PCa) are required to improve staging and real-time assessment of therapeutic response. We performed preclinical evaluation of newly developed, biocompatible magnetic nanoparticles (MNPs) conjugated with J591, an antibody specific for prostate-specific membrane antigen (PSMA), to enhance magnetic resonance imaging (MRI) of PCa. PSMA is expressed on ∼90% of PCa, including those that are castrate-resistant, rendering it a rational target for PCa imaging. Materials and Methods: The specificity of J591 for PSMA was confirmed by flow cytometric analysis of several PCa cell lines of known PSMA status. MNPs were prepared, engineered to the appropriate size and labeled with the DiR fluorophore, and their toxicity to a panel of PCa cells was assessed by an in vitro Alamar Blue assay. Immunohistochemistry, fluorescence microscopy and Prussian Blue staining (iron uptake) were used to evaluate the PSMA specificity of J591-MNP conjugates. In vivo MRI studies (16.4 T MRI system) were performed using live immunodeficient mice bearing orthotopic LNCaP xenografts and injected intravenously with J591-MNPs or MNPs alone. Results: The MNPs were non-toxic to PCa cells. J591-MNP conjugates showed no compromise in specificity of binding to PSMA+ cells and showed enhanced iron uptake compared with MNPs alone. In vivo, tumour targeting (significant MR image contrast) was evident in mice injected with J591-MNPs, but not with MNPs alone. Resected tumours from targeted mice showed an accumulation of MNPs that was not seen in normal control prostate. Conclusions: Applying PSMA-targeting MNPs in conventional MRI has the potential to enhance PCa detection and localization in real time, improving patient management.