101 results for flame kernel


Relevance:

10.00%

Publisher:

Abstract:

Scaffold design is a central issue in tissue engineering, and scaffolds must meet certain essential criteria: biocompatibility, high porosity, and extensive pore interconnectivity to facilitate cell migration and fluid diffusion. In this work, a modified solvent casting/particulate leaching method is presented to produce scaffolds with spherical and interconnected pores. Sugar particles (200–300 µm and 300–500 µm) were poured through a horizontal Meker burner flame and collected below the flame. While crossing the high-temperature zone, the particles melted and adopted a spherical shape. The spherical particles were compressed in a plastic mold, and a poly-L-lactic acid solution was then cast into the sugar assembly. After solvent evaporation, the sugar was removed by immersing the structure in distilled water for 3 days. The resulting scaffolds presented highly spherical, interconnected pores, with interconnection pathways from 10 to 100 µm; pore interconnection was obtained without any additional processing step. Compression tests were carried out to evaluate the mechanical performance of the scaffolds. Moreover, rabbit bone marrow mesenchymal stem cells were found to adhere and proliferate in vitro in the scaffold over 21 days. This technique thus produced scaffolds with highly spherical and interconnected pores without the use of additional organic solvents to leach out the porogen.

Relevance:

10.00%

Publisher:

Abstract:

The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data complicates many applications, such as information retrieval and data integration. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML documents. This research presents four clustering methods: two that utilize the structure of XML documents and two that utilize both structure and content. The two structural clustering methods use different data models: one is based on a path model, the other on a tree model. These methods employ rigid similarity measures that aim to identify corresponding elements between documents with different or similar underlying structures. The two clustering methods that utilize both structural and content information differ in how structure and content similarity are combined. One calculates document similarity using a linear weighting combination of structure and content similarities, with the content similarity based on a semantic kernel. The other calculates the distance between documents by a non-linear combination of the structure and content of XML documents, again using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the one based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections. To extend the research, the structural clustering method based on the tree model is employed in XML transformation. The results show that the proposed transformation process is faster than a traditional transformation system that translates and converts the source XML documents sequentially, and that the schema matching step of XML transformation produces a better matching result in a shorter time.
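
A minimal sketch of the linear weighting strategy described above, in Python. The function names, the toy Jaccard structure measure, and the weight value are illustrative assumptions; the thesis's own structure measures and semantic-kernel content similarity are more elaborate.

```python
# Sketch of the linear combination of structure and content similarity.
def combined_similarity(struct_sim, content_sim, alpha=0.5):
    """alpha = 1.0 reduces to structure-only clustering,
    alpha = 0.0 to content-only clustering."""
    return alpha * struct_sim + (1 - alpha) * content_sim

def path_jaccard(paths_a, paths_b):
    """Toy structure similarity: Jaccard overlap of root-to-leaf path sets."""
    a, b = set(paths_a), set(paths_b)
    return len(a & b) / len(a | b) if a | b else 1.0

doc_a = ["book/title", "book/author/name"]
doc_b = ["book/title", "book/publisher"]
print(combined_similarity(path_jaccard(doc_a, doc_b), content_sim=0.8))
```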

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications on multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications may legally access, alter, or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines its legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion; there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs unmodified standard software on commodity hardware; the only trusted code is our modified operating system kernel. Finally, we present a scenario of intrusion in a web service running on multiple hosts and show how our distributed IDS is able to report security violations at each host level.
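
A hedged sketch of the set-based taint propagation at the heart of this kind of label tracking. All names here are hypothetical Python stand-ins for exposition; the actual system is implemented as a Linux Security Module inside the kernel, not in user-space Python.

```python
# Toy model: objects carry taint-label sets; a flow unions the sets.
class TaintedObject:
    """A system object (file, socket, IPC channel...) carrying taint labels."""
    def __init__(self, name, labels=()):
        self.name = name
        self.labels = set(labels)

def propagate(source, sink):
    """An information flow from source to sink unions the label sets."""
    sink.labels |= source.labels

def complies(obj, allowed_labels):
    """Policy check: the object may only carry labels the policy allows."""
    return obj.labels <= allowed_labels

secret = TaintedObject("/etc/shadow", {"secret"})
packet = TaintedObject("tcp:10.0.0.2:80")
propagate(secret, packet)                 # the label travels with the flow
print(complies(packet, {"public"}))       # False -> report a violation
```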

Relevance:

10.00%

Publisher:

Abstract:

The occurrence of extreme movements in the spot price of electricity represents a significant source of risk to retailers. A range of approaches has been considered for modelling electricity prices; these models, however, have relied on time-series approaches that typically use restrictive decay schemes, placing greater weight on more recent observations. This study develops an alternative, semi-parametric method for forecasting that uses state-dependent weights derived from a kernel function. The forecasts obtained using this method are accurate and therefore potentially useful to electricity retailers for risk management.
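
A minimal Nadaraya-Watson-style sketch of state-dependent kernel weighting: the one-step forecast averages historical next-period prices, weighted by how close each historical state is to the current state. The Gaussian kernel, the bandwidth, and the toy prices are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def kernel_forecast(history, current_state, bandwidth=1.0):
    """Kernel-weighted one-step forecast from a 1D price history."""
    states = history[:-1]        # price at t acts as the state
    next_prices = history[1:]    # price at t+1 is the target
    # Gaussian kernel weights: similar past states get more weight,
    # regardless of how long ago they occurred.
    w = np.exp(-0.5 * ((states - current_state) / bandwidth) ** 2)
    return np.sum(w * next_prices) / np.sum(w)

prices = np.array([42.0, 45.0, 300.0, 48.0, 44.0, 46.0])  # toy spot prices
print(kernel_forecast(prices, current_state=prices[-1]))
```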

Relevance:

10.00%

Publisher:

Abstract:

Motivation: Gene silencing, also called RNA interference, requires reliable assessment of silencer impacts. A critical task is to find matches between silencer oligomers and sites in the genome, in accordance with one-to-many matching rules (G-U matching, with provision for mismatches). Fast search algorithms are required to support silencer impact assessments in procedures for designing effective silencer sequences.

Results: The article presents a matching algorithm and data structures specialized for matching searches, including a kernel procedure that addresses a Boolean version of the database task known as the skyline search. Besides exact matches, the algorithm is extended to allow for the location-specific mismatches applicable in plants. Computational tests show that the algorithm is significantly faster than suffix-tree alternatives.
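
A hedged illustration of the one-to-many matching rule (Watson-Crick pairs plus the G-U wobble) with a mismatch allowance. This toy checker is an assumption-laden stand-in for exposition only; it is not the paper's optimized algorithm or its skyline-search data structures.

```python
# For each silencer base, the set of target bases it can pair with.
# G and U each gain an extra partner through the G-U wobble pair.
PAIRS = {
    "A": {"U"},
    "C": {"G"},
    "G": {"C", "U"},   # G also wobble-pairs with U
    "U": {"A", "G"},   # U also wobble-pairs with G
}

def matches(silencer, target, max_mismatches=0):
    """Boolean match of equal-length sequences under wobble rules."""
    if len(silencer) != len(target):
        return False
    mismatches = sum(t not in PAIRS[s] for s, t in zip(silencer, target))
    return mismatches <= max_mismatches

print(matches("GACU", "CUGG"))   # True: G-C, A-U, C-G, U-G (wobble)
```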

Relevance:

10.00%

Publisher:

Abstract:

The period of developmental vulnerability to toxicants begins at conception and extends through gestation, parturition, infancy and childhood to adolescence. The concern is that children (1) may experience quantitatively and qualitatively different exposures and (2) may have a different sensitivity to chemical pollutants. Traditional toxicological studies are inappropriate for assessing the results of chronic exposure at very low levels during critical periods of development. This paper discusses (1) the health effects associated with exposure to selected emerging organic pollutants, including brominated flame retardants, perfluorinated compounds, organophosphate pesticides and bisphenol A; (2) the difficulties in monitoring these substances in children; and (3) techniques and strategies for overcoming these difficulties. Such biomonitoring data can be used to identify where policies should be directed in order to reduce exposure, and to document policies that have successfully reduced exposure.

Relevance:

10.00%

Publisher:

Abstract:

INTRODUCTION: An important treatment goal for burn wounds is to promote early wound closure. This study identifies factors associated with delayed re-epithelialization following pediatric burns.

METHODS: Data were collected from August 2011 to August 2012 at a pediatric tertiary burn center. A total of 106 burn wounds from 77 participants aged 4-12 years were analyzed. The percentage of wound re-epithelialization at each dressing change was calculated using Visitrak. Mixed-effect regression analysis was performed to identify the demographic factors and wound and clinical characteristics associated with delayed re-epithelialization.

RESULTS: Burn depth determined by laser Doppler imaging, ethnicity, pain scores, total body surface area (TBSA), mechanism of injury, and days taken to present to the burn center were significant predictors of delayed re-epithelialization, together accounting for 69% of the variance. Flame burns delayed re-epithelialization by 39% compared to all other mechanisms (p=0.003). Burns first presented to the burn center on day 5 took an average of 42% longer to re-epithelialize than those presented on day 2 post-burn (p<0.001). Re-epithelialization was delayed by 14% when pain scores at the first dressing change were reported as 10 (on the FPS-R) rather than 4 (p=0.015) for children who did not receive a specialized preparation/distraction intervention. A larger TBSA was also a predictor of delayed re-epithelialization (p=0.030). Burns on darker skin complexions re-epithelialized 25% faster than those on lighter complexions (p=0.001).

CONCLUSIONS: Burn depth, mechanism of injury and TBSA are always considered when developing the treatment and surgical management plan for patients with burns. This study identifies other factors influencing re-epithelialization that can be controlled by the treating team, such as effective pain management and rapid referral to a specialized burn center, to achieve optimal outcomes.
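
A hedged sketch of a mixed-effects regression of the kind described above, with a random intercept per participant since one child can contribute several wounds. The column names and toy data are invented for illustration; they are not the study's data or model specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented toy data: repeated wounds per participant.
df = pd.DataFrame({
    "participant_id": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "days_to_heal":   [10, 12, 21, 19, 9, 11, 18, 16, 14, 15, 13, 12],
    "tbsa":           [2.0, 2.0, 8.5, 8.5, 1.5, 1.5, 6.0, 6.0, 4.0, 4.0, 3.0, 3.0],
    "pain_score":     [4, 6, 10, 9, 2, 4, 8, 7, 6, 6, 5, 4],
})

# Random intercept per participant accounts for within-child correlation.
model = smf.mixedlm("days_to_heal ~ tbsa + pain_score", df,
                    groups=df["participant_id"])
print(model.fit().summary())
```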

Relevance:

10.00%

Publisher:

Abstract:

Genomic sequences are fundamentally text documents, admitting various representations according to need and tokenization. Gene expression depends crucially on the binding of enzymes to the DNA sequence at small, poorly conserved binding sites, limiting the utility of standard pattern search. However, one may exploit the regular syntactic structure of the enzyme's component proteins and the corresponding binding sites, framing the problem as one of detecting grammatically correct genomic phrases. In this paper we propose new kernels based on weighted tree structures, traversing the paths within them to capture the features that underpin the task. Experimentally, we find that these kernels provide performance comparable with state-of-the-art approaches for this problem, while offering significant computational advantages over earlier methods. The methods proposed may be applied to a broad range of sequence- or tree-structured data in molecular biology and other domains.
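
A minimal sketch of a path-counting kernel over trees, the general flavour of the weighted tree kernels described above. The nested-tuple tree encoding and the decay weighting are illustrative assumptions, not the paper's construction.

```python
from collections import Counter

def root_paths(tree, prefix=()):
    """Enumerate root-to-node label paths of a (label, [children]) tree."""
    label, children = tree
    path = prefix + (label,)
    yield path
    for child in children:
        yield from root_paths(child, path)

def path_kernel(t1, t2, decay=0.5):
    """Weighted count of shared paths; longer shared paths weigh less."""
    c1, c2 = Counter(root_paths(t1)), Counter(root_paths(t2))
    return sum(c1[p] * c2[p] * decay ** len(p)
               for p in c1.keys() & c2.keys())

a = ("S", [("NP", []), ("VP", [("V", [])])])
b = ("S", [("NP", []), ("VP", [])])
print(path_kernel(a, b))   # similarity of the two toy trees
```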

Relevance:

10.00%

Publisher:

Abstract:

An important aspect of decision support systems involves applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging as they are ‘doubly stochastic’, i.e., obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches either use simple models that obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation, but do use sophisticated 3D visualization for interpretation; or they employ sophisticated non-parametric Bayesian Cox process models but do not use visualization to render complex spatio-temporal forecasts interpretable. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model on a large discretized grid and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.
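
A hedged sketch of the log-Gaussian Cox process idea on a discretized grid: counts in each cell are Poisson with intensity exp(latent Gaussian field), which is what makes the model doubly stochastic. The squared-exponential covariance, the 1D grid, and the parameters are illustrative assumptions; the paper's inference and visualization are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 50)                  # 1D grid for brevity
# Squared-exponential covariance with length-scale 0.1.
cov = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / 0.1) ** 2)
latent = rng.multivariate_normal(np.zeros(50), cov + 1e-8 * np.eye(50))
intensity = np.exp(latent)          # first layer of randomness: the rate
counts = rng.poisson(intensity)     # second layer: Poisson counts given the rate
print(counts.sum(), "incidents simulated over the grid")
```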

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this review is to showcase the present capabilities of ambient sampling and ionisation technologies for the analysis of polymers and polymer additives by mass spectrometry (MS), while critically highlighting their advantages and limitations. To qualify as an ambient ionisation technique, a method must be able to probe the surface of solid or liquid samples while operating in an open environment, allowing a variety of sample sizes, shapes, and substrate materials to be analysed. The main sections of this review are organised by the underlying principle governing the desorption/extraction step of the analysis (liquid extraction, laser ablation, or thermal desorption) and by the major component investigated, either the polymer itself or exogenous compounds (additives and contaminants) present within or on the polymer substrate. The review concludes by summarising some of the challenges these technologies still face and possible directions that would further enhance the utility of ambient ionisation mass spectrometry as a tool for polymer analysis.

Relevance:

10.00%

Publisher:

Abstract:

Polybrominated diphenyl ethers (PBDEs) are compounds used as flame retardants. Human exposure is suggested to occur via food, dust and air. An assessment of PBDE exposure via indoor environments was conducted using samples of air, dust and surface wipes from eight sites in South East Queensland, Australia. For indoor air, ΣPBDEs ranged from 0.5 to 179 pg/m3 in homes and from 15 to 487 pg/m3 in offices. In dust, ΣPBDEs ranged from 87 to 733 ng/g dust in homes and from 583 to 3070 ng/g dust in offices. PBDEs were detected on 9 out of 10 surfaces sampled, ranging from non-detectable to 5985 pg/cm2. Overall, the congener profiles for air and dust were dominated by BDE-209. This study demonstrates that PBDEs are ubiquitous in the indoor environments of the selected buildings in South East Queensland and suggests the need for a detailed assessment of PBDE concentrations at more sites to further investigate the factors influencing PBDE exposure in Australia.

Relevance:

10.00%

Publisher:

Abstract:

This paper reports on the experimental testing of oxygen-compatible ceramic matrix composite porous injectors in a nominally two-dimensional, hydrogen-fuelled, oxygen-enriched radical-farming scramjet in the T4 shock tunnel facility. All experiments were performed at a dynamic pressure of 146 kPa, an equivalent flight Mach number of 9.7, a stagnation pressure and enthalpy of 40 MPa and 4.3 MJ/kg, respectively, and at a fuelling condition that resulted in an average equivalence ratio of 0.472. Oxygen was pre-mixed with the fuel prior to injection to achieve enrichment percentages of approximately 13%, 15% and 17%. These levels ensured that the hydrogen-oxidiser mix injected into the engine always remained too fuel-rich to sustain a flame without additional mixing with the captured air. The addition of pre-mixed oxygen to the fuel was found to significantly alter the performance of the engine, enhancing both combustion and ignition and converting a previously observed limited-combustion condition into one with a sustained and noticeable combustion-induced pressure rise. Increases in the enrichment percentage led to further increases in combustion levels and acted to reduce ignition lengths within the engine. Suppressed-combustion runs, in which a nitrogen test gas was used, confirmed that the pressure rise observed in these experiments was attributable to the oxygen enrichment and not to the increased mass injected.
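
A back-of-envelope sketch of the equivalence-ratio bookkeeping behind fuelling conditions like the one quoted above, using the standard definition phi = (fuel/oxidiser)_actual / (fuel/oxidiser)_stoichiometric. The mass flows below are invented placeholders, not the experiment's values, and the sketch considers the H2/O2 ratio only; in the experiment the oxidiser is the captured air plus the pre-mixed enrichment oxygen.

```python
# Stoichiometric H2/O2 mass ratio from 2 H2 + O2 -> 2 H2O:
FO_STOICH_H2_O2 = (2 * 2.016) / 32.00   # ~0.126

def equivalence_ratio(m_dot_h2, m_dot_o2):
    """phi = (fuel/oxidiser)_actual / (fuel/oxidiser)_stoichiometric."""
    return (m_dot_h2 / m_dot_o2) / FO_STOICH_H2_O2

# Placeholder flows in kg/s: phi < 1 is fuel-lean, phi > 1 fuel-rich.
print(equivalence_ratio(m_dot_h2=0.010, m_dot_o2=0.160))
```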

Relevance:

10.00%

Publisher:

Abstract:

Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking manifold geometry into account is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, the manifold shape is only approximated, which can cause a loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well preserved. Specifically, we propose to project SPD matrices, using a set of random projection hyperplanes over RKHS, into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
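
An illustrative stand-in for the vectorisation idea above: map each SPD matrix to a flat vector and take random projections, so standard Euclidean learners apply to the resulting coefficient vectors. Note this uses the log-Euclidean (matrix logarithm) map as a simpler substitute; it is not the paper's RKHS random-hyperplane projection.

```python
import numpy as np
from scipy.linalg import logm

def spd_to_coeffs(spd, proj):
    """Vector of random-projection coefficients for one SPD matrix."""
    v = logm(spd).real.ravel()    # matrix log flattens curvature (tangent map)
    return proj @ v

rng = np.random.default_rng(1)
d, k = 4, 8                       # matrix size, number of hyperplanes
proj = rng.standard_normal((k, d * d))

A = rng.standard_normal((d, d))
spd = A @ A.T + d * np.eye(d)     # a random SPD matrix
print(spd_to_coeffs(spd, proj))   # length-k feature vector
```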

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a novel system for the automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method for identifying the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time- and labour-intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems by automatically classifying a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which combines regional histograms of visual words with the Multiple Kernel Learning framework. We present a study of several variations of histogram generation and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset.
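
A minimal sketch of the kernel-combination idea behind Multiple Kernel Learning: a weighted sum of one kernel per image region. The histogram-intersection kernel, the fixed weights, and the toy histograms are illustrative assumptions; MKL proper learns the weights jointly with the classifier.

```python
import numpy as np

def hist_intersection(h1, h2):
    """Histogram-intersection kernel between two visual-word histograms."""
    return np.minimum(h1, h2).sum()

def combined_kernel(regions_a, regions_b, weights):
    """Weighted sum of one kernel per pyramid region."""
    return sum(w * hist_intersection(a, b)
               for w, a, b in zip(weights, regions_a, regions_b))

# Two cells, two regions each, four visual words (toy numbers).
cell_a = [np.array([0.5, 0.2, 0.2, 0.1]), np.array([0.4, 0.4, 0.1, 0.1])]
cell_b = [np.array([0.3, 0.3, 0.3, 0.1]), np.array([0.5, 0.3, 0.1, 0.1])]
print(combined_kernel(cell_a, cell_b, weights=[0.5, 0.5]))
```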

Relevance:

10.00%

Publisher:

Abstract:

Time series classification has been extensively explored in many fields of study. Most methods are based on historical or current information extracted from the data. However, if interest lies in a specific future time period, methods that relate directly to forecasts of the time series are much more appropriate. An approach to time series classification is therefore proposed based on a polarization measure of the forecast densities of time series. By fitting autoregressive models, forecast replicates of each time series are obtained via the bias-corrected bootstrap, with a stationarity correction applied when necessary. Kernel estimators are then employed to approximate the forecast densities, and the discrepancy between the forecast densities of a pair of time series is estimated by a polarization measure, which evaluates the extent to which the two densities overlap. Based on the distributional properties of the polarization measure, a discriminant rule and a clustering method are proposed to conduct supervised and unsupervised classification, respectively. The proposed methodology is applied to both simulated and real data sets, and the results show desirable properties.
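
A hedged sketch of an overlap-style measure between two forecast densities: estimate each density from bootstrap forecast replicates with a Gaussian KDE, then integrate the pointwise minimum on a grid (1 for identical densities, 0 for disjoint ones). The Gaussian KDE and the min-overlap form are illustrative assumptions, not the paper's exact polarization measure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_overlap(forecasts_a, forecasts_b, grid_size=512):
    """Overlap of two forecast densities estimated by Gaussian KDE."""
    kde_a, kde_b = gaussian_kde(forecasts_a), gaussian_kde(forecasts_b)
    lo = min(forecasts_a.min(), forecasts_b.min())
    hi = max(forecasts_a.max(), forecasts_b.max())
    grid = np.linspace(lo, hi, grid_size)
    # Riemann-sum integral of the pointwise minimum of the densities.
    return np.minimum(kde_a(grid), kde_b(grid)).sum() * (grid[1] - grid[0])

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 500)   # toy forecast replicates, series A
b = rng.normal(0.5, 1.0, 500)   # toy forecast replicates, series B
print(density_overlap(a, b))
```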