81 results for Wrapping
Abstract:
A new concept of semipermeable reservoirs containing co-cultures of cells and supporting microparticles is presented, inspired by the multi-phenotypic cellular environment of bone. Based on the deconstruction of the 'stem cell niche', the developed capsules are designed to drive a self-regulated osteogenesis. PLLA microparticles functionalized with collagen I, and a co-culture of adipose stem cells (ASCs) and endothelial cells (ECs), are immobilized in spherical liquified capsules. The capsules are coated with multilayers of poly(L-lysine), alginate, and chitosan nano-assembled through layer-by-layer deposition. Capsules encapsulating ASCs alone or in co-culture with ECs are cultured in endothelial medium with or without osteogenic differentiation factors. Results show that osteogenesis is enhanced by the co-encapsulation, and occurs even in the absence of differentiation factors. These findings are supported by increased ALP activity and matrix mineralization, osteopontin detection, and the upregulation of BMP-2, RUNX2, and BSP. The liquified co-capsules also act as a release system for the cytokines VEGF and BMP-2. The proposed liquified capsules may be a valuable injectable self-regulated system for bone regeneration employing highly translational cell sources.
Abstract:
We report on the photophysical properties of single-walled carbon nanotube (SWNT) suspensions in toluene solutions of poly[9,9-dioctylfluorenyl-2,7-diyl] (PFO). Steady-state and time-resolved photoluminescence spectroscopy in the near-infrared and visible spectral regions are used to study the interaction of the dispersed SWNTs with the wrapped polymer. Molecular dynamics simulations of the PFO-SWNT hybrids in toluene were carried out to evaluate the energetics of different wrapping geometries. The simulated fluorescence spectra in the visible region were obtained by the quantum chemical ZINDO-CI method, using a sampling of structures obtained from the dynamics trajectories. The tested schemes consider polymer chains aligned along the nanotube axis, where chirality has a minimal effect, or forming helical structures, where a preference for high chiral angles is evidenced. Moreover, toluene affects the polymer structure, favoring the helical conformation. Simulations show that the most stable hybrid system is the PFO-wrapped (8,6) nanotube, in agreement with the experimentally observed selectivity.
Abstract:
OBJECTIVE: The aim of this study was to determine the influence of polyvinyl chloride (PVC) wrapping on the performance of two laser fluorescence devices (LF and LFpen) in assessing tooth occlusal surfaces. BACKGROUND DATA: Protective wrapping of the device tips may influence LF measurements. To date there are no studies evaluating the influence of this protection on the performance of the LFpen on permanent teeth, or comparing it with the original LF device. MATERIALS AND METHODS: One hundred nineteen permanent molars were assessed by two experienced dentists using the LF and LFpen devices, both with and without PVC wrapping. The teeth were histologically prepared and assessed for caries extension. RESULTS: The LF values with and without PVC wrapping were significantly different. For both LF devices, sensitivity and accuracy were lower when the PVC wrapping was used. Specificity was statistically significantly higher for the LFpen with PVC. No difference was found between the areas under the ROC curves with and without PVC wrapping. The ICC showed excellent interexaminer agreement. The Bland and Altman method showed a range between the upper and lower limits of agreement of 63.4 and 57.8 units for the LF device, and 49.4 and 74.2 units for the LFpen device, with and without PVC wrapping, respectively. CONCLUSIONS: We found an influence of the PVC wrapping on the performance of the LF and LFpen devices. However, provided that this influence on the detection of occlusal caries lesions is taken into account, the use of one PVC layer is suggested to avoid cross-contamination in clinical practice.
Abstract:
All single-stranded 'positive-sense' RNA viruses that infect mammalian, insect or plant cells rearrange internal cellular membranes to provide an environment facilitating virus replication. A striking feature of these unique membrane structures is the induction of 70-100 nm vesicles (either free within the cytoplasm, associated with other induced vesicles or bound within a surrounding membrane) harbouring the viral replication complex (RC). Although similar in appearance, the cellular composition of these vesicles appears to vary for different viruses, implying different organelle origins for the intracellular sites of viral RNA replication. Genetic analysis has revealed that induction of these membrane structures can be attributed to a particular viral gene product, usually a non-structural protein. This review will highlight our current knowledge of the formation and composition of virus RCs and describe some of the similarities and differences in RNA-membrane interactions observed between the virus families Flaviviridae and Picornaviridae.
Abstract:
Query processing is a commonly performed procedure and a vital and integral part of information processing. It is therefore important and necessary for information processing applications to continuously improve the accessibility of data sources as well as the ability to perform queries on those data sources.

It is well known that the relational database model and the Structured Query Language (SQL) are currently the most popular tools to implement and query databases. However, a certain level of expertise is needed to use SQL and to access relational databases. This study presents a semantic modeling approach that enables the average user to access and query existing relational databases without concern for the database's structure or technicalities. The method includes an algorithm to represent relational database schemas in a more semantically rich way; the result is a semantic view of the relational database. The user performs queries using an adapted version of SQL, namely Semantic SQL. This method substantially reduces the size and complexity of queries. Additionally, it shortens the database application development cycle and improves maintenance and reliability by reducing the size of application programs. Furthermore, a Semantic Wrapper tool illustrating the semantic wrapping method is presented.

I further extend the use of this semantic wrapping method to heterogeneous database management. Relational and object-oriented databases and Internet data sources are considered part of the heterogeneous database environment. Semantic schemas resulting from the algorithm presented in the method were employed to describe the structure of these data sources in a uniform way, and Semantic SQL was utilized to query the various data sources. As a result, this method provides users with the ability to access and perform queries on heterogeneous database systems in a more natural way.
Abstract:
Poly(vinylidene fluoride) electrospun membranes have been prepared with different NaY zeolite contents up to 32 wt%. Inclusion of zeolites induces an increase of the average fiber size from ~200 nm in the pure polymer up to ~500 nm in the composite with 16 wt% zeolite content. For higher filler contents, a broader fiber size distribution spanning the previous values occurs. The hydrophobicity of the membranes increases from a water contact angle of ~115° to ~128° with the addition of the filler and is independent of filler content, indicating a wrapping of the zeolite by the polymer. The water contact angle further increases with fiber alignment, up to ~137°. The electrospun membranes are formed with ~80% of the polymer crystalline phase in the electroactive phase, independently of the electrospinning processing conditions or filler content. The viability of MC3T3-E1 cells on the composite membranes after 72 h of cell culture indicates the suitability of the membranes for tissue engineering applications.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions.
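The linear mixing model described above can be sketched in a few lines. The data here are synthetic and purely illustrative; the signature matrix, sizes, and abundance values are assumptions, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bands, n_endmembers = 50, 3
M = rng.random((n_bands, n_endmembers))   # endmember signatures as columns
a = np.array([0.5, 0.3, 0.2])             # abundance fractions: nonnegative, sum to 1

pixel = M @ a                             # noiseless linear mixture of the signatures

# With M known, unmixing is a linear problem: plain least squares recovers the
# abundances here (the constrained variants of [26] would additionally enforce
# nonnegativity and the sum-to-one constraint).
a_hat = np.linalg.lstsq(M, pixel, rcond=None)[0]
```

In the noiseless, known-signatures case, `a_hat` matches `a` up to rounding; the blind setting discussed next is what makes the problem hard.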
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
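The PPI skewer-projection step described above can be sketched as follows on synthetic data. The sizes and the Dirichlet abundance model are illustrative assumptions, and the MNF preprocessing step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

n_endmembers, n_bands, n_mixed = 3, 4, 500
E = rng.random((n_endmembers, n_bands))            # pure endmember spectra (rows)
A = rng.dirichlet(np.ones(n_endmembers), n_mixed)  # random abundance fractions
X = np.vstack([A @ E, E])                          # mixed pixels, then the pure ones

n_skewers = 1000
skewers = rng.normal(size=(n_skewers, n_bands))    # random projection directions

proj = X @ skewers.T                               # project every pixel onto every skewer
scores = np.zeros(len(X), dtype=int)
np.add.at(scores, proj.argmax(axis=0), 1)          # count extreme hits per pixel
np.add.at(scores, proj.argmin(axis=0), 1)

purest = np.flatnonzero(scores)                    # pixels ever found to be extreme
```

Because the mixed pixels lie strictly inside the simplex, only the appended pure pixels (indices 500–502) can be extremes of a linear projection, so they collect all the counts.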
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
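The iterative orthogonal-projection idea can be illustrated on synthetic data containing pure pixels. This is a simplified sketch, not the full VCA algorithm (which also handles noise, affine data scaling, and dimensionality reduction); the signatures and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

p = 3
E = np.eye(p) + 0.1 * rng.random((p, p))   # three well-separated signatures (rows)
A = rng.dirichlet(np.ones(p), size=300)    # abundance fractions of the mixed pixels
X = np.vstack([A @ E, E])                  # mixed pixels, then the pure ones

found = []
U = np.zeros((p, 0))                       # orthonormal basis of found endmembers
for _ in range(p):
    f = rng.normal(size=p)
    w = f - U @ (U.T @ f)                  # direction orthogonal to the found subspace
    idx = int(np.abs(X @ w).argmax())      # extreme of the projection = new endmember
    found.append(idx)
    r = X[idx] - U @ (U.T @ X[idx])        # extend the basis (Gram-Schmidt step)
    U = np.hstack([U, (r / np.linalg.norm(r))[:, None]])
```

Each projection zeroes out the endmembers already found, so the extreme of the next projection is a new vertex of the simplex; here the three pure pixels sit at indices 300–302.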
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, which was assumed spherical. The dynamical equations were solved in a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
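The two-step scheme (a pseudo-inverse solution followed by a null-space correction) can be illustrated on a hypothetical one-degree-of-freedom toy problem. The moment arms and the naive fixed-point correction below are assumptions for illustration, not the authors' shoulder model, which uses a proper constrained optimization:

```python
import numpy as np

# Hypothetical 1-DoF toy: three muscles with moment arms A (m) must produce a
# joint moment b (N.m); the system is redundant (more muscles than DoFs).
A = np.array([[0.03, -0.02, 0.025]])
b = np.array([1.5])

# Step 1 (pseudo-inverse): minimum-norm force vector satisfying A f = b,
# the analogue of minimizing the squared muscle stress.
f = np.linalg.pinv(A) @ b

# Step 2 (null space): vectors in null(A) change no joint moment, so they can
# be used to push forces back into physiological limits (here f >= 0).
_, _, Vt = np.linalg.svd(A)
N = Vt[1:].T                                 # orthonormal basis of the null space
for _ in range(200):                         # naive fixed point; a QP in practice
    viol = np.minimum(f, 0.0)                # negative, non-physiological components
    if np.allclose(viol, 0.0, atol=1e-9):
        break
    f = f - N @ (N.T @ viol)                 # remove the violation's null-space part
```

Each correction stays in the null space of A, so the joint moment balance A f = b is preserved exactly while the negative muscle force is driven toward zero.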
Morphological and physiological species-dependent characteristics of the rodent Grueneberg ganglion.
Abstract:
In the mouse, the Grueneberg ganglion (GG) is an olfactory subsystem implicated in both chemo- and thermo-sensing. It is specifically involved in the recognition of volatile danger cues such as alarm pheromones and structurally related predator scents. No evidence for these GG sensory functions has yet been reported in other rodent species. In this study, we used a combination of histological and physiological techniques to verify the presence of a GG and investigate its function in the rat, hamster, and gerbil, in comparison with the mouse. By scanning electron microscopy (SEM) and transmission electron microscopy (TEM), we found isolated or grouped large GG cells of different shapes that, in spite of their gross anatomical similarities, display important structural differences between species. We performed a comparative morphological study focusing on the conserved olfactory features of these cells. We found fine ciliary processes, mostly wrapped in ensheathing glial cells, in a variable number of clusters deeply invaginated in the neuronal soma. Interestingly, the glial wrapping, the amount of microtubules, and their distribution in the ciliary processes differed between rodents. Using immunohistochemistry, we were able to detect the expression of known GG proteins, such as the membrane guanylyl cyclase G and the cyclic nucleotide-gated channel A3. Both the expression and the subcellular localization of these signaling proteins were found to be species-dependent. Calcium imaging experiments on acute tissue slice preparations from rodent GG demonstrated that the chemo- and thermo-evoked neuronal responses differed between species. Thus, GG neurons from mice and rats displayed both chemo- and thermo-sensing, while hamsters and gerbils showed profound differences in their sensitivities.
We suggest that the integrative comparison between the structural morphologies, the sensory properties, and the ethological contexts supports species-dependent GG features shaped by environmental pressure.
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm wrapping a Support Vector Regression model for optimizing the MKL weights, named SimpleMKL, is used for the analysis. In this sense, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested with simulated linear and nonlinear time series (AutoRegressive, Henon, and Lorenz series).
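The core MKL construction, a weighted linear combination of basis kernels in which a zero weight discards an input, can be sketched as follows. The data, the kernel choice, and the fixed weights are illustrative assumptions; SimpleMKL itself learns the weights by descending the SVR objective:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((20, 5))                       # 20 samples, 5 candidate inputs

def rbf(Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix of the rows of Z."""
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# One basis kernel per candidate input, so a zero weight discards that input.
kernels = [rbf(X[:, [j]]) for j in range(X.shape[1])]

# Weights d_m, nonnegative and summing to one (fixed here for illustration).
d = np.array([0.6, 0.4, 0.0, 0.0, 0.0])
K = sum(w * Km for w, Km in zip(d, kernels))  # combined kernel matrix

selected = np.flatnonzero(d)                  # inputs kept by the sparse weights
```

Because each basis kernel is positive semidefinite and the weights are nonnegative, the combined K is a valid kernel, and inputs 2–4 have effectively been removed from the model.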