208 results for Solvent-free


Relevance: 20.00%

Publisher:

Abstract:

Major imperfections in crosslinked polymers include loose or dangling chain ends that lower the crosslink density, thereby reducing elastic recovery and increasing solvent swelling. These imperfections are hard to detect, quantify and control when the network is initiated by free-radical reactions. As an alternative approach, the sol-gel synthesis of a model poly(ethylene glycol) (PEG-2000) network is described using controlled amounts of bis- and mono-triethoxysilyl propyl urethane PEG precursors to give silsesquioxane (SSQ, R-SiO1.5) structures as crosslink junctions with a controlled number of dangling chains. The effect of the number of dangling chains on the structure and connectivity of the dried SSQ networks has been determined by step-crystallization differential scanning calorimetry. The role that micelle formation plays in controlling the sol-gel PEG network connectivity has been studied by dynamic light scattering of the bis- and mono-triethoxysilyl precursors, and the networks have been characterized by 29Si solid-state NMR, sol fraction and swelling measurements. These show that the dangling chains increase the mesh size and water uptake. Compared to other end-linked PEG hydrogels, the SSQ-crosslinked networks show a low sol fraction and high connectivity, which reduces solvent swelling, the degree of crystallinity and the crystal transition temperature. The increased freedom of segment movement on addition of dangling chains in the SSQ-crosslinked network facilitates chain packing during crystallization of the dry network and, in the hydrogel, helps to accommodate more water molecules before equilibrium is reached.

Relevance: 20.00%

Publisher:

Abstract:

Cell-to-cell adhesion is an important aspect of malignant spreading that is often observed in images from the experimental cell biology literature. Since cell-to-cell adhesion plays an important role in controlling the movement of individual malignant cells, it is likely that cell-to-cell adhesion also influences the spatial spreading of populations of such cells. Therefore, it is important for us to develop biologically realistic simulation tools that can mimic the key features of such collective spreading processes to improve our understanding of how cell-to-cell adhesion influences the spreading of cell populations. Previous models of collective cell spreading with adhesion have used lattice-based random walk frameworks which may lead to unrealistic results, since the agents in the random walk simulations always move across an artificial underlying lattice structure. This is particularly problematic in high-density regions where it is clear that agents in the random walk align along the underlying lattice, whereas no such regular alignment is ever observed experimentally. To address these limitations, we present a lattice-free model of collective cell migration that explicitly incorporates crowding and adhesion. We derive a partial differential equation description of the discrete process and show that averaged simulation results compare very well with numerical solutions of the partial differential equation.
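
The abstract does not reproduce the discrete algorithm itself. As a rough illustration of what a lattice-free exclusion process with adhesion can look like, the sketch below moves agents in continuous space, rejects moves that would overlap a neighbour (crowding), and probabilistically blocks moves when contacting neighbours adhere; all names and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not taken from the paper.
N = 200            # number of agents
radius = 0.5       # agent radius used for crowding (hard-core exclusion)
step = 0.2         # attempted movement distance per event
p_adhesion = 0.3   # probability that a contacting neighbour blocks a move
box = 20.0         # side length of the periodic square domain

positions = rng.uniform(0, box, size=(N, 2))

def contacts(i, pos, r):
    """Indices of agents within distance r of agent i (periodic distances)."""
    d = pos - pos[i]
    d -= box * np.round(d / box)
    dist = np.hypot(d[:, 0], d[:, 1])
    dist[i] = np.inf
    return np.where(dist < r)[0]

def attempt_move(i, pos):
    """Lattice-free move in a random direction, subject to adhesion and crowding."""
    # Adhesion: each neighbour currently in contact may hold the agent in place.
    if any(rng.random() < p_adhesion for _ in contacts(i, pos, 2 * radius)):
        return
    theta = rng.uniform(0, 2 * np.pi)
    trial = (pos[i] + step * np.array([np.cos(theta), np.sin(theta)])) % box
    # Crowding: reject the move if the trial position overlaps another agent.
    old = pos[i].copy()
    pos[i] = trial
    if len(contacts(i, pos, 2 * radius)) > 0:
        pos[i] = old

for t in range(100):
    for i in rng.permutation(N):
        attempt_move(i, positions)
```

Averaging the occupancy of many such realisations is the kind of quantity that would then be compared against numerical solutions of the derived partial differential equation.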

Relevance: 20.00%

Publisher:

Abstract:

Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environment-specific tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems, without the need for prior training or system tuning.
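
The exact statistical model is not given in the abstract, so the following is only a sketch of the general idea: accumulate whole-image difference scores online and convert a new score into a match likelihood. The Gaussian and logistic choices, and all names, are assumptions for illustration only.

```python
import numpy as np

def descriptor_difference(d1, d2):
    """Sum of absolute differences between two whole-image descriptors (e.g. GIST vectors)."""
    return float(np.sum(np.abs(np.asarray(d1) - np.asarray(d2))))

class OnlineMatchLikelihood:
    """Builds a model of non-matching difference scores as images arrive,
    with no pre-training and no hand-tuned, environment-specific threshold."""

    def __init__(self):
        self.scores = []

    def add_nonmatch_score(self, score):
        # Scores against clearly distinct places, gathered during normal operation.
        self.scores.append(score)

    def likelihood(self, score):
        """Pseudo-probability that `score` corresponds to a revisited place."""
        if len(self.scores) < 10:
            return 0.5                      # uninformative until enough data exists
        mu = np.mean(self.scores)
        sigma = np.std(self.scores) + 1e-9
        z = (mu - score) / sigma            # how unusually small the difference is
        return 1.0 / (1.0 + np.exp(-z))     # squash the z-score into (0, 1)
```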

Relevance: 20.00%

Publisher:

Abstract:

Background: Pretreatment of lignocellulosic biomass is a prerequisite for effective saccharification to produce fermentable sugars. We have previously reported an effective low-temperature (90 °C) process at atmospheric pressure for pretreatment of sugarcane bagasse with acidified mixtures of ethylene carbonate (EC) and ethylene glycol (EG). In this study, "greener" solvent systems based on acidified mixtures of glycerol carbonate (GC) and glycerol were used to treat sugarcane bagasse, and the roles of each solvent in deconstructing biomass were determined.

Results: Pretreatment of sugarcane bagasse at 90 °C for only 30 min with acidified GC produced a solid residue with a glucan digestibility of 90% and a glucose yield of 80%, significantly higher than the glucan digestibility of 16% and glucose yield of 15% obtained for bagasse pretreated with acidified EC. Biomass compositional analyses showed that GC pretreatment removed more lignin than EC pretreatment (84% vs 54%). Scanning electron microscopy (SEM) showed that fluffy, size-reduced fibres were produced by GC pretreatment, whereas EC pretreatment produced compact particles of reduced size. The maximum glucan digestibility and glucose yield of GC/glycerol systems were about 7% lower than those of EC/EG systems. Replacing up to 50 wt% of GC with glycerol did not negatively affect glucan digestibility or glucose yield. Pretreatment of microcrystalline cellulose (MCC) showed that (1) pretreatment with acidified alkylene glycol (AG) alone increased enzymatic digestibility compared to pretreatment with acidified alkylene carbonate (AC) alone or with acidified mixtures of AC and AG, (2) pretreatment with acidified GC alone slightly increased, but pretreatment with acidified EC alone significantly decreased, enzymatic digestibility compared to untreated MCC, and (3) the enzymatic digestibility of treated and untreated MCC samples correlated positively and linearly with Congo red (CR) adsorption capacity.

Conclusions: Acidified GC alone was a more effective solvent for pretreatment of sugarcane bagasse than acidified EC alone. The higher glucose yield obtained with GC-pretreated bagasse is possibly due to the single hydroxyl group in the GC molecular structure, which results in more extensive biomass delignification and defibrillation, even though both solvent pretreatments reduced bagasse particles to a similar extent. The maximum glucan digestibility of GC/glycerol systems was lower than that of EC/EG systems, likely because glycerol is less effective than EG in biomass delignification and defibrillation. Acidified AC/AG solvent systems were more effective for pretreatment of lignin-containing biomass than for MCC.

Relevance: 20.00%

Publisher:

Abstract:

In October 2012, Simone presented her book Architecture for a Free Subjectivity at the University of Michigan's Taubman College of Architecture and Urban Planning. The book explores the architectural significance of Deleuze's philosophy of subjectivization and of Guattari's overlooked dialogue on architecture and subjectivity. In doing so, it proposes that subjectivity is no longer the exclusive provenance of human beings but extends to the architectural, the cinematic, the erotic, and the political. It defines a new position within the literature on Deleuze and architecture, while highlighting the neglected issue of subjectivity in contemporary discussion.

Relevance: 20.00%

Publisher:

Abstract:

Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements, and the lack of a meaningful interpretation of these arbitrary thresholds, limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph's functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
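
As a rough illustration of how a probability model over whole-image difference scores could feed a probabilistic back-end, the sketch below applies Bayes' rule to a single difference score. The two Gaussian observation models and their parameters are assumptions standing in for distributions that would be estimated online, and the function names are hypothetical.

```python
from math import exp, pi, sqrt

def gaussian(x, mu, sigma):
    """Normal density; stands in for match/non-match score models estimated online."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def posterior_match_probability(difference, prior,
                                match_mu=0.2, match_sigma=0.1,
                                nonmatch_mu=1.0, nonmatch_sigma=0.3):
    """Bayes' rule over one whole-image difference score (all parameters illustrative)."""
    like_match = gaussian(difference, match_mu, match_sigma)
    like_nonmatch = gaussian(difference, nonmatch_mu, nonmatch_sigma)
    evidence = prior * like_match + (1 - prior) * like_nonmatch
    return prior * like_match / evidence

# Example: a small difference score observed with a modest prior of revisiting this place.
print(posterior_match_probability(0.35, prior=0.1))
```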

Relevance: 20.00%

Publisher:

Abstract:

Purpose: To establish a simple and rapid analytical method, based on direct insertion/electron ionization-mass spectrometry (DI/EI-MS), for measuring free cholesterol in tears from humans and rabbits.

Methods: A stable-isotope dilution protocol employing DI/EI-MS in selected ion monitoring (SIM) mode was developed and validated, and used to quantify the free cholesterol content of human and rabbit tear extracts. Tears were collected from adult humans (n = 15) and rabbits (n = 10) and lipids extracted.

Results: Screening full-scan (m/z 40-600) DI/EI-MS analysis of crude tear extracts showed that the diagnostic ions in the mass range m/z 350 to 400 derived from free cholesterol, with no contribution from cholesterol esters. DI/EI-MS data acquired in SIM mode were analyzed for the abundance ratios of the diagnostic ions to their stable isotope-labeled analogues arising from the D6-cholesterol internal standard. Standard curves of good linearity were produced, with an on-probe limit of detection of 3 ng (3:1 signal-to-noise) and a limit of quantification of 8 ng (10:1 signal-to-noise). The concentration of free cholesterol in human tears was 15 ± 6 μg/g, higher than in rabbit tears (10 ± 5 μg/g).

Conclusions: A stable-isotope dilution DI/EI-SIM method for free cholesterol quantification without prior chromatographic separation was established. Applying this method showed that humans have higher free cholesterol levels in their tears than rabbits, in agreement with previous reports. The paper provides a rapid and reliable method to measure free cholesterol in small-volume clinical samples. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
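
As an illustration of the stable-isotope dilution arithmetic, the sketch below fits a linear standard curve of analyte-to-internal-standard abundance ratio against known cholesterol amount, then converts a measured ratio into a tear concentration. All calibration values, the measured ratio, and the tear mass are invented for the example and are not the paper's data.

```python
import numpy as np

# Hypothetical calibration: known free-cholesterol amounts (ng) spiked with a fixed
# amount of D6-cholesterol, and the measured cholesterol / D6-cholesterol ion
# abundance ratios from DI/EI-SIM.
calib_ng = np.array([10, 25, 50, 100, 200])
calib_ratio = np.array([0.11, 0.26, 0.52, 1.01, 2.05])

# Standard curve: ratio = slope * amount + intercept.
slope, intercept = np.polyfit(calib_ng, calib_ratio, 1)

def cholesterol_ng(measured_ratio):
    """Convert a measured analyte/internal-standard abundance ratio to ng of free cholesterol."""
    return (measured_ratio - intercept) / slope

# Example: a tear extract from 2.0 mg of collected tears gives a ratio of 0.31.
amount_ng = cholesterol_ng(0.31)
concentration_ug_per_g = (amount_ng / 1000) / (2.0 / 1000)   # ng to µg, mg to g
print(f"{amount_ng:.1f} ng free cholesterol, about {concentration_ug_per_g:.1f} µg/g of tears")
```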

Relevance: 20.00%

Publisher:

Abstract:

The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-dimensional ship wave patterns, such as the shape of steep waves close to their limiting configuration, in a manner that has been possible in the two-dimensional analogue for some time.
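
The boundary-integral residual is too involved to reproduce from the abstract, but the solver structure described, a Jacobian-free Newton-Krylov iteration preconditioned by a banded approximation to the linearised operator, can be sketched with SciPy. The residual below is a stand-in nonlinear stencil and the tridiagonal preconditioner is only an illustrative choice; neither is the paper's actual formulation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import newton_krylov

n = 400

def residual(phi):
    # Stand-in nonlinear residual on a 1D stencil; the real problem enforces a
    # singular integro-differential equation at each midpoint of a 2D mesh.
    r = np.empty_like(phi)
    r[1:-1] = phi[:-2] - 2 * phi[1:-1] + phi[2:] + 0.1 * np.sin(phi[1:-1])
    r[0] = phi[0]
    r[-1] = phi[-1] - 1.0
    return r

# Banded preconditioner assembled from an approximation of the linearised problem
# (here a tridiagonal operator), factorised once and reused inside the Krylov solver.
main = -2.0 * np.ones(n); main[0] = main[-1] = 1.0
lower = np.ones(n - 1); lower[-1] = 0.0   # last row keeps only its diagonal entry
upper = np.ones(n - 1); upper[0] = 0.0    # first row keeps only its diagonal entry
band = sp.diags([lower, main, upper], [-1, 0, 1], format="csc")
apply_inverse = spla.factorized(band)
M = spla.LinearOperator((n, n), matvec=apply_inverse)

phi0 = np.linspace(0.0, 1.0, n)
solution = newton_krylov(residual, phi0, method="gmres", inner_M=M)
```

Passing the factorised banded operator as inner_M reflects the idea in the abstract: the Krylov iterations need only residual evaluations plus a cheap approximate inverse, never the full Jacobian.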

Relevance: 20.00%

Publisher:

Abstract:

In this paper we introduce and discuss the nature of free-play in the context of three open-ended interactive art installations. We observe participants' situated free-play in these environments and, building on prior work, devise a set of sensitising terms derived both from the literature and from what we observe of participants interacting there. These sensitising terms act as guides and are designed to be used by those who experience, evaluate or report on open-ended interactive art. That is, we propose these terms as a common-ground language: for participants describing their experience while in the art work, for researchers in the various stages of the research process (observation, coding, analysis, reporting, and publication), and for interdisciplinary researchers working across the fields of HCI and art. This work builds a foundation for understanding the relationship between free-play, open-ended environments, and interactive installations, and contributes sensitising terms useful to the HCI community for the discussion and analysis of open-ended interactive art works.

Relevance: 20.00%

Publisher:

Abstract:

The aim of this research is to report initial experimental results and evaluation of a clinician-driven automated method that can address the issue of misdiagnosis from unstructured radiology reports. Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and vast amounts of manual processing of unstructured information, an accurate point-of-care diagnosis is often difficult. A rule-based method that considers the occurrence of clinician-specified keywords related to radiological findings was developed to identify limb abnormalities, such as fractures. A dataset containing 99 narrative reports of radiological findings was sourced from a tertiary hospital. The rule-based method achieved an F-measure of 0.80 and an accuracy of 0.80. While the method achieves promising performance, a number of avenues for improvement involving advanced natural language processing (NLP) techniques were identified.
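
A minimal sketch of the kind of keyword matching described, assuming an invented gazetteer (the clinician-specified keyword list itself is not given in the abstract):

```python
import re

# Hypothetical gazetteer; the study's clinician-specified keyword list is not reproduced here.
GAZETTEER = ["fracture", "fractured", "dislocation", "avulsion", "displaced"]

def is_abnormal(report_text):
    """Flag a limb X-ray report as abnormal if any gazetteer keyword occurs."""
    text = report_text.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", text) for term in GAZETTEER)

print(is_abnormal("Transverse fracture of the distal radius, minimally displaced."))  # True
print(is_abnormal("Normal alignment. No bony injury identified."))                    # False
```

A pure keyword match like this cannot distinguish "fracture" from "no fracture seen", which is one reason the abstract points to natural language processing techniques as an avenue for improvement.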

Relevance: 20.00%

Publisher:

Abstract:

Objective: To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports.

Materials and Methods: 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from the reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports.

Results: The Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, and when using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous work attempting to classify free-text radiology reports.

Discussion: The automated classification methods proved effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders.

Conclusion: This investigation shows early promising results; future work will further validate and strengthen the proposed approaches.
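
One of the feature and classifier combinations described (stemmed token n-grams fed to a Naive Bayes classifier) can be sketched with scikit-learn as below. Negation handling and the SNOMED CT concept features are omitted, and the four toy reports and their labels are invented; unigrams are included alongside bigrams only to keep the toy example workable.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(text):
    """Lowercase and stem every token before n-gram extraction."""
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

reports = [
    "comminuted fracture of the distal radius",
    "no fracture or dislocation identified",
    "dislocated elbow with associated avulsion fragment",
    "normal alignment, no bony abnormality seen",
]
labels = [1, 0, 1, 0]   # 1 = fracture/abnormality present

pipeline = make_pipeline(
    CountVectorizer(preprocessor=stem_tokens, ngram_range=(1, 2)),
    MultinomialNB(),
)
scores = cross_val_score(pipeline, reports, labels, cv=2, scoring="f1")
print(scores.mean())
```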

Relevance: 20.00%

Publisher:

Abstract:

The article introduces a novel platform for conducting controlled and risk-free driving and traveling behavior studies, called the Cyber-Physical System Simulator (CPSS). The key features of CPSS are: (1) simulation of multiuser immersive driving in a three-dimensional (3D) virtual environment; (2) integration of traffic and communication simulators with human driving based on dedicated middleware; and (3) accessibility of the multiuser driving simulator on popular software and hardware platforms. This combination of features allows us to easily collect large-scale data on interesting phenomena involving the interaction between multiple user drivers, which is not possible with current single-user driving simulators. The core original contribution of this article is threefold: (1) we introduce a multiuser driving simulator based on DiVE, our original massively multiuser networked 3D virtual environment; (2) we introduce OpenV2X, a middleware for simulating vehicle-to-vehicle and vehicle-to-infrastructure communication; and (3) we present two experiments based on our CPSS platform. The first experiment investigates the "rubbernecking" phenomenon, where a platoon of four user drivers experiences an accident in the oncoming direction of traffic. The second is a pilot study of the effectiveness of a Cooperative Intelligent Transport Systems advisory system.

Relevance: 20.00%

Publisher:

Abstract:

Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult.

Aims: The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method addressing initial misdiagnoses associated with delayed receipt of unstructured radiology reports.

Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. It consists of a gazetteer of keywords related to radiological findings; an X-ray report is classified as abnormal if it contains evidence listed in the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians, and discrepancies were resolved by a third expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach.

Results: The automated method, which identifies limb abnormalities by searching for keywords specified by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80.

Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement were identified involving advanced natural language processing (NLP) and machine learning techniques.
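
For reference, the two reported figures are standard quantities computable from the expert ground-truth labels and the method's output, as in the toy sketch below; the label vectors here are invented and do not reproduce the 99-report evaluation.

```python
def evaluate(ground_truth, predicted):
    """Accuracy and F-measure for a binary abnormal/normal classification."""
    tp = sum(g and p for g, p in zip(ground_truth, predicted))
    tn = sum((not g) and (not p) for g, p in zip(ground_truth, predicted))
    fp = sum((not g) and p for g, p in zip(ground_truth, predicted))
    fn = sum(g and (not p) for g, p in zip(ground_truth, predicted))
    accuracy = (tp + tn) / len(ground_truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f_measure

# Toy example with 5 reports: ground truth vs. the gazetteer method's output.
print(evaluate([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # (0.6, 0.666...)
```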

Relevance: 20.00%

Publisher:

Abstract:

Modern copyright law is based on the inescapable assumption that users, given the choice, will free-ride rather than pay for access. In fact, many consumers of cultural works – music, books, films, games, and other works – fundamentally want to support their production. It turns out that humans are motivated to support cultural production not only by extrinsic incentives, but also by social norms of fairness and reciprocity. This article explains how producers across the creative industries have used this insight to develop increasingly sophisticated business models that rely on voluntary payments (including pay-what-you-want schemes) to fund their costs of production. The recognition that users are not always free-riders suggests that current policy approaches to copyright are fundamentally flawed. Because social norms are so important in consumer motivations, the perceived unfairness of the current copyright system undermines the willingness of people to pay for access to cultural goods. While recent copyright reform debate has focused on creating stronger deterrence through enforcement, increasing the perceived fairness and legitimacy of copyright law is likely to be much more effective. The fact that users will sometimes willingly support cultural production also challenges the economic raison d'être of copyright law. This article demonstrates how 'peaceful revolutions' are flipping conventional copyright models and encouraging free-riding through combining incentives and prosocial norms. Because they provide a means to support production without limiting the dissemination of knowledge and culture, there is good reason to believe that these commons-based systems of cultural production can be more efficient, more fair, and more conducive to human flourishing than conventional copyright systems. This article explains what we know about free-riding so far and what work remains to be done to understand the viability and importance of cooperative systems in funding cultural production.