137 results for Input Modalities
Abstract:
Objective: To evaluate the adhesion of the endodontic sealers Epiphany, Apexit Plus, and AH Plus to root canal dentin submitted to different surface treatments, by using the push-out test. Methods: One hundred twenty-eight root cylinders obtained from maxillary canines were embedded in acrylic resin, had the canals prepared, and were randomly assigned to four groups (n = 32), according to root dentin treatment: (I) distilled water (control), (II) 17% EDTAC, (III) 1% NaOCl and (IV) Er:YAG laser with 16-Hz, 400-mJ input (240-mJ output) and 0.32-J/cm² energy density. Each group was divided into four subgroups (n = 8) filled with Epiphany (either dispensed from the automix syringe supplied by the manufacturer or prepared by hand mixing), Apexit Plus, or AH Plus. Data (MPa) were analyzed by ANOVA and Tukey's test. Results: A statistically significant difference (p < 0.01) was found among the root-canal sealers, except for the Epiphany subgroups, which had statistically similar results to each other (p > 0.01): AH Plus (4.77 ± 0.85), Epiphany/hand mixed (3.06 ± 1.34), Epiphany/automix syringe (2.68 ± 1.35), and Apexit Plus (1.22 ± 0.33). A significant difference (p < 0.01) was found among the dentin surface treatments. The highest adhesion values were obtained with AH Plus when root dentin was treated with Er:YAG laser and 17% EDTAC. The Epiphany sealer presented the lowest adhesion values to root dentin treated with 17% EDTAC. Conclusions: The resin-based sealers had different adhesive behaviors, depending on the treatment of root canal walls. The mode of preparation of Epiphany (automix syringe or hand mixing) did not influence sealer adhesion to root dentin.
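The statistical analysis named above (one-way ANOVA followed by Tukey's test on push-out bond strengths) can be sketched as follows. This is a hedged illustration only: the group names, means, and standard deviations are taken from the abstract, but the individual values are simulated and are not the study's data.

# Minimal sketch: simulated bond-strength values (MPa) analysed with ANOVA + Tukey HSD.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Simulated push-out bond strengths for the four sealer subgroups, n = 8 each.
groups = {
    "AH Plus":          rng.normal(4.77, 0.85, 8),
    "Epiphany/hand":    rng.normal(3.06, 1.34, 8),
    "Epiphany/automix": rng.normal(2.68, 1.35, 8),
    "Apexit Plus":      rng.normal(1.22, 0.33, 8),
}

# One-way ANOVA across the four sealers.
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for pairwise comparisons at alpha = 0.01, as in the abstract.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.01))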
Abstract:
The canine model provides a large animal system to evaluate many treatment modalities using stem cells (SCs). However, only bone marrow (BM) protocols have been widely used in dogs for preclinical approaches. BM donation is an invasive procedure, and the number and differentiation potential of its mesenchymal stem cells (MSCs) decline with age. More recently, the umbilical cord was introduced as an alternative source to BM since it is obtained from a sample that is routinely discarded. Here, we describe the isolation of MSCs from canine umbilical cord vein (cUCV). These cells can be obtained from every cord received and grow successfully in culture. Their multipotent plasticity was demonstrated by their capacity to differentiate into adipocytic, chondrocytic, and osteocytic lineages. Furthermore, our results open possibilities to use cUCV cells in preclinical trials for many well-characterized canine model conditions homologous to human diseases.
Abstract:
Aims. We investigate the time-varying patterns in line profiles, V/R, and radial velocity of the Be star HD 173948 (lambda Pavonis). Methods. Time series analyses of radial velocity, V/R, and line profiles of He I, Fe II, and Si II were performed with the Cleanest algorithm. An estimate of the stellar rotation frequency was derived from the stellar mass and radius in the Roche limit by adopting an aspect angle i derived from the fitting of non-LTE model spectra affected by rotation. The projected rotation velocity, needed as input for the spectral synthesis procedure, was evaluated from the Fourier transform of the rotation profiles of all neutral helium lines in the optical range. Results. Emission episodes in Balmer and He I lines, as well as cyclic V/R variations, are reported for spectra observed in 1999, followed by a relatively quiescent phase (2000) and then a new active epoch (2001). From time series analyses of line profiles, radial velocities, and V/R ratios, four signals with high confidence levels are detected: nu(1) = 0.17 ± 0.02, nu(2) = 0.49 ± 0.05, nu(3) = 0.82 ± 0.03, and nu(4) = 1.63 ± 0.04 c/d. We interpret nu(4) as a non-radial pulsation g-mode and nu(3) as a signal related to the orbital timescale of ejected material, which is near the theoretical rotation frequency of 0.81 c/d inferred from model fits that take gravity darkening into account. The signals nu(1) and nu(2) are viewed as aliases of nu(3) and nu(4).
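The frequency detection step described above can be illustrated with an analogous, widely available technique. The sketch below substitutes a Lomb-Scargle periodogram (astropy) for the Cleanest algorithm used by the authors; the sampling epochs and injected signal are invented for illustration.

# Minimal sketch, assuming irregularly sampled radial velocities; the Cleanest
# algorithm used in the paper is replaced here by a Lomb-Scargle periodogram.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 60, 120))           # observation epochs in days (hypothetical)
true_freq = 1.63                               # c/d, e.g. the nu(4) signal reported above
rv = 5.0 * np.sin(2 * np.pi * true_freq * t) + rng.normal(0, 1.0, t.size)

# Search frequencies from 0.05 to 3 cycles per day.
frequency, power = LombScargle(t, rv).autopower(minimum_frequency=0.05,
                                                maximum_frequency=3.0)
print(f"Strongest signal near {frequency[np.argmax(power)]:.2f} c/d")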
Abstract:
Context. About 2/3 of Be stars present the so-called V/R variations, a phenomenon characterized by the quasi-cyclic variation in the ratio between the violet and red emission peaks of the H I emission lines. These variations are generally explained by global oscillations in the circumstellar disk forming a one-armed spiral density pattern that precesses around the star with a period of a few years. Aims. This paper presents self-consistent models of polarimetric, photometric, spectrophotometric, and interferometric observations of the classical Be star zeta Tauri. The primary goal is to conduct a critical quantitative test of the global oscillation scenario. Methods. Detailed three-dimensional, NLTE radiative transfer calculations were carried out using the radiative transfer code HDUST. The most up-to-date research on Be stars was used as input for the code in order to include a physically realistic description of the central star and the circumstellar disk. The model adopts a rotationally deformed, gravity-darkened central star, surrounded by a disk whose unperturbed state is given by a steady-state viscous decretion disk model. It is further assumed that this disk is in vertical hydrostatic equilibrium. Results. By adopting a viscous decretion disk model for zeta Tauri and a rigorous solution of the radiative transfer, a very good fit of the time-averaged properties of the disk was obtained. This provides strong theoretical evidence that the viscous decretion disk model is the mechanism responsible for disk formation. The global oscillation model successfully fitted spatially resolved VLTI/AMBER observations and the temporal V/R variations in the H alpha and Br gamma lines. This result convincingly demonstrates that the oscillation pattern in the disk is a one-armed spiral. Possible model shortcomings, as well as suggestions for future improvements, are also discussed.
Abstract:
Context. The cosmic time around the z ~ 1 redshift range appears crucial for cluster and galaxy evolution, since it is probably the epoch of the first mature galaxy clusters. Our knowledge of the properties of the galaxy populations in these clusters is limited because only a handful of z ~ 1 clusters are presently known. Aims. In this framework, we report the discovery of a z ~ 0.87 cluster and study its properties at various wavelengths. Methods. We gathered X-ray and optical data (imaging and spectroscopy), and near- and far-infrared data (imaging) in order to confirm the cluster nature of our candidate, to determine its dynamical state, and to give insight into the evolution of its galaxy population. Results. Our candidate structure appears to be a massive, dynamically young cluster at z ~ 0.87 with an atypically high X-ray temperature as compared to its X-ray luminosity. It exhibits a significant percentage (~90%) of galaxies that are also detected in the 24 μm band. Conclusions. The cluster RXJ1257.2+4738 appears to be still in the process of collapsing. Its relatively high temperature is probably the consequence of significant energy input into the intracluster medium besides the regular gravitational infall contribution. A significant part of its galaxies are red objects that are probably dusty with on-going star formation.
Abstract:
The VISTA near-infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMDs) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis due to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg², the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t). If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase in the random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Δ(m-M)₀ ~ 0.02 mag and ΔE(B-V) ~ 0.01 mag, for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints in the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
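The chi-squared-like CMD minimisation mentioned above can be sketched generically: the observed Hess diagram (a binned CMD) is fit as a non-negative combination of partial model CMDs, one per age bin. In the sketch below all numbers are invented and the partial CMDs are random placeholders rather than isochrone-based models.

# Minimal sketch of SFH recovery as weighted non-negative least squares on a binned CMD.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_bins, n_ages = 500, 10                  # flattened CMD cells, number of age bins

# Hypothetical partial model CMDs: expected counts per cell for unit SFR in each age bin.
partial_cmds = rng.gamma(2.0, 1.0, size=(n_bins, n_ages))

true_sfr = rng.uniform(0.0, 2.0, n_ages)          # "true" star formation rates
observed = rng.poisson(partial_cmds @ true_sfr)   # Poisson-sampled observed Hess diagram

# Weighted least squares ~ chi-squared: weight each cell by the approximate Poisson error.
weights = 1.0 / np.sqrt(observed + 1.0)
recovered_sfr, _ = nnls(partial_cmds * weights[:, None], observed * weights)
print("recovered:", np.round(recovered_sfr, 2))
print("true:     ", np.round(true_sfr, 2))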
Abstract:
In Natural Language Processing (NLP) symbolic systems, several linguistic phenomena, for instance, the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by employing a rule-based grammar. Another approach to NLP concerns the use of the connectionist model, which has the benefits of learning, generalization, and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, we propose a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor), designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture comprises, as input, a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation), and, as output, the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture, and adopting a psycholinguistic view of thematic theory.
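As a purely illustrative companion to the input/output mapping described above, the sketch below trains a plain softmax classifier from word feature vectors to thematic-role labels. It is not the BIO theta PRED architecture or its biologically inspired training rule; the features, labels, and data are invented.

# Toy illustration only: a feedforward softmax classifier for thematic-role prediction.
import numpy as np

rng = np.random.default_rng(3)
roles = ["AGENT", "PATIENT", "LOCATION"]
n_features, n_roles = 16, len(roles)           # e.g. 16 semantic microfeatures per word

# Hypothetical training data: microfeature vectors and their thematic-role indices.
X = rng.normal(size=(300, n_features))
y = rng.integers(0, n_roles, 300)

W = np.zeros((n_features, n_roles))
for _ in range(200):                           # plain gradient descent on cross-entropy
    logits = X @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(n_roles)[y]
    W -= 0.1 * X.T @ (probs - onehot) / len(X)

word_vector = rng.normal(size=n_features)      # featural representation of one word
print("Predicted role:", roles[int(np.argmax(word_vector @ W))])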
Abstract:
Measurements of double-helicity asymmetries in inclusive hadron production in polarized p + p collisions are sensitive to helicity-dependent parton distribution functions, in particular to the gluon helicity distribution, Δg. This study focuses on the extraction of the double-helicity asymmetry in η production (p⃗ + p⃗ → η + X), the η cross section, and the η/π⁰ cross section ratio. The cross section and ratio measurements provide essential input for the extraction of fragmentation functions that are needed to access the helicity-dependent parton distribution functions.
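For context, the double-helicity asymmetry referred to above has a standard definition in terms of helicity-sorted cross sections; it is added here as background and is not quoted from the abstract:

A_{LL} = \frac{\sigma_{++} - \sigma_{+-}}{\sigma_{++} + \sigma_{+-}} ,

where \sigma_{++} and \sigma_{+-} denote the cross sections for same- and opposite-helicity configurations of the two polarized proton beams.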
Abstract:
We propose a physically transparent analytic model of astrophysical S factors as a function of the center-of-mass energy E of colliding nuclei (below and above the Coulomb barrier) for nonresonant fusion reactions. For any given reaction, the S(E) model contains four parameters [two of which approximate the barrier potential, U(r)]. They are easily interpolated along many reactions involving isotopes of the same elements; they give accurate practical expressions for S(E) with only several input parameters for many reactions. The model reproduces the suppression of S(E) at low energies (of astrophysical importance) due to the shape of the low-r wing of U(r). The model can be used to reconstruct U(r) from computed or measured S(E). For illustration, we parametrize our recent calculations of S(E) (using the Sao Paulo potential and the barrier penetration formalism) for 946 reactions involving stable and unstable isotopes of C, O, Ne, and Mg (with nine parameters for all reactions involving many isotopes of the same elements, e.g., C+O). In addition, we analyze the astrophysically important ¹²C + ¹²C reaction, compare theoretical models with experimental data, and discuss the problem of interpolating reliably known S(E) values to low energies (E ≲ 2-3 MeV).
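As general background (a textbook relation, not taken from this abstract), the astrophysical S factor removes the steep Coulomb-barrier penetrability from the fusion cross section \sigma(E):

S(E) = \sigma(E)\, E\, \exp\!\big(2\pi\eta(E)\big), \qquad \eta(E) = \frac{Z_1 Z_2 e^2}{\hbar v},

where Z_1 and Z_2 are the charge numbers of the colliding nuclei and v is their relative velocity, so that the nonresonant S(E) varies slowly with E compared with \sigma(E).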
Abstract:
We investigate the performance of a variant of Axelrod's model for dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean binary perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N proportional to F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
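The lattice dynamics described above can be sketched as follows. This is a simplified reading of the ACH update (copy one trait from a lower-cost nearest neighbor), and the cost function is a toy Hamming distance to a hidden target string rather than the Boolean-binary-perceptron classification error studied in the paper.

# Hedged sketch of agents on a square lattice relaxing toward low-cost strings.
import numpy as np

rng = np.random.default_rng(4)
L, F = 6, 20                                    # lattice side (N = L*L agents), string length
target = rng.integers(0, 2, F)                  # hidden optimum (toy cost only)
agents = rng.integers(0, 2, (L, L, F))

def cost(s):
    return np.sum(s != target)                  # toy cost: Hamming distance to the optimum

for _ in range(20000):
    i, j = rng.integers(0, L, 2)
    neighbors = [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]
    ni, nj = min(neighbors, key=lambda p: cost(agents[p]))
    if cost(agents[ni, nj]) < cost(agents[i, j]):
        # Copy one differing trait from the lower-cost neighbor (become more similar).
        diff = np.flatnonzero(agents[i, j] != agents[ni, nj])
        if diff.size:
            k = rng.choice(diff)
            agents[i, j, k] = agents[ni, nj, k]

best = min(cost(agents[i, j]) for i in range(L) for j in range(L))
print("Best cost on the lattice after relaxation:", best)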
Abstract:
Background: Identifying local similarity between two or more sequences, or identifying repeats occurring at least twice in a sequence, is an essential part of the analysis of biological sequences and of their phylogenetic relationships. Finding such fragments while allowing for a certain number of insertions, deletions, and substitutions is, however, known to be a computationally expensive task, and consequently exact methods can usually not be applied in practice. Results: The filter TUIUIU that we introduce in this paper provides a possible solution to this problem. It can be used as a preprocessing step to any multiple alignment or repeat inference method, eliminating a possibly large fraction of the input that is guaranteed not to contain any approximate repeat. It consists in the verification of several strong necessary conditions that can be checked in a fast way. We implemented three versions of the filter. The first is simply a straightforward extension to the case of multiple sequences of conditions already existing in the literature. The second uses a stronger condition which, as our results show, enables filtering appreciably more with negligible (if any) additional time. The third version uses an additional condition and pushes the sensitivity of the filter even further, at a non-negligible additional time cost in many circumstances; our experiments show that it is particularly useful with large error rates. The latter version was applied as a preprocessing step of a multiple alignment tool, obtaining an overall time (filter plus alignment) on average 63 and at best 530 times smaller than before (direct alignment), with, in most cases, a better-quality alignment. Conclusion: To the best of our knowledge, TUIUIU is the first filter designed for multiple repeats and for dealing with error rates greater than 10% of the repeat length.
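The idea of a fast necessary-condition filter can be illustrated with a generic q-gram counting test (the classical q-gram lemma), which is not TUIUIU's actual set of conditions: two length-w segments within edit distance e must share at least w - q + 1 - q*e q-grams, so window pairs sharing fewer can be discarded without risk of losing an approximate repeat.

# Hedged illustration of a necessary-condition filter based on shared q-gram counts.
from collections import Counter

def qgrams(s, q):
    return Counter(s[i:i + q] for i in range(len(s) - q + 1))

def may_contain_repeat(window_a, window_b, q, max_errors):
    """Necessary condition only: True means the pair *may* hold an approximate repeat."""
    shared = sum((qgrams(window_a, q) & qgrams(window_b, q)).values())
    threshold = len(window_a) - q + 1 - q * max_errors
    return shared >= threshold

# Toy usage with invented sequences and parameters.
a = "ACGTACGTTACGTAGGCT"
b = "ACGTACCTTACGTAGGCT"   # one substitution relative to a
c = "TTTTGGGGCCCCAAAATT"
print(may_contain_repeat(a, b, q=4, max_errors=2))   # True: passes the filter
print(may_contain_repeat(a, c, q=4, max_errors=2))   # False: safely discarded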
Abstract:
Seagrass beds have higher biomass, abundance, diversity, and productivity of benthic organisms than unvegetated sediments. However, to date most studies have analysed only the macrofaunal component and ignored the abundant meiofauna present in seagrass meadows. This study was designed to test whether meiobenthic communities, especially the free-living nematodes, differed between seagrass beds and unvegetated sediments. Sediment samples from beds of the eelgrass Zostera capricorni and nearby unvegetated sediments were collected in three estuaries along the coast of New South Wales, Australia. Results showed that sediments below the seagrass were finer, had a higher content of organic material, and were less oxygenated than sediments without seagrass. Univariate measures of the fauna (i.e. abundance, diversity, and taxon richness of total meiofauna and nematode assemblages) did not differ between vegetated and unvegetated sediments. However, multivariate analysis of meiofaunal higher taxa showed significant differences between the two habitats, largely due to the presence and absence of certain taxa. Amphipods, tanaidaceans, ostracods, hydrozoans, and isopods occurred mainly in unvegetated sediments, while kinorhynchs, polychaetes, gastrotrichs, and turbellarians were more abundant in vegetated sediments. Regarding the nematode assemblages, 32.4% of the species were restricted to Z. capricorni and 25% occurred only in unvegetated sediments, suggesting that each habitat is characterized by a particular suite of species. Epistrate-feeding nematodes were more abundant in seagrass beds, and it is suggested that they graze on the microphytobenthos which accumulates underneath the seagrass. Most of the genera that characterized these estuarine unvegetated sediments are also commonly found on exposed sandy beaches. This may be explained by the fact that Australian estuaries have very little freshwater input and experience marine conditions for most of the year. This study demonstrates that the seagrass and unvegetated sediments have discrete meiofaunal communities, with little overlap in species composition.
Abstract:
Lignin phenols were measured in the sediments of Sepetiba Bay, Rio de Janeiro, Brazil, and in bedload and suspended sediments of the four major fluvial inputs to the bay: the Sao Francisco and Guandu Channels and the Guarda and Cacao Rivers. Fluvial suspended lignin yields (Σ8 = 3.5-14.6 mg C (10 g dw)⁻¹) vary little between the wet and dry seasons and are poorly correlated with fluvial chlorophyll concentrations (0.8-50.2 μg C L⁻¹). Despite current land use practices that favor grassland agriculture or industrial uses, fluvial lignin compositions are dominated by degraded leaf-sourced material. The exception is the Guarda River, which has a slight influence from grasses. The Lignin Phenol Vegetation Index (LPVI), coupled with acid/aldehyde and 3,5-Bd/V ratios, indicates that degraded leaf-derived phenols are also the primary preserved lignin component in the bay. The presence of fringing Typha sp. and Spartina sp. grass beds surrounding portions of the bay is not reflected in the lignin signature. Instead, lignin entering the bay appears to reflect the erosion of soils containing a degraded signature from the former Atlantic rain forest that once dominated the watershed, rather than a significant signature derived from current agricultural uses. A three-component mixing model using the LPVI, atomic N:C ratios, and stable carbon isotopes (which range between -26.8 and -21.8 parts per thousand) supports the hypothesis that fluvial inputs to the bay are dominated by planktonic matter (78% of the input), with lignin dominated by leaf (14% of the input) over grass (6%). Sediments are composed of a roughly 50-50 mixture of autochthonous and terrigenous material, with lignin being primarily sourced from leaf material.
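The three-component mixing model mentioned above amounts to a small linear system: the source fractions must sum to one and reproduce the measured tracers. The sketch below solves such a system with invented end-member and sample values, purely to show the mechanics; it is not the study's calibration.

# Hedged sketch of a generic three-end-member mixing model (closure + two tracers).
import numpy as np

# Columns: plankton, leaf (vascular, non-woody), grass. Rows: closure, d13C, N:C.
end_members = np.array([
    [  1.0,   1.0,   1.0],   # fractions sum to 1
    [-21.5, -27.5, -26.0],   # hypothetical d13C of each end member (per mil)
    [ 0.15,  0.04,  0.03],   # hypothetical atomic N:C of each end member
])
sample = np.array([1.0, -23.0, 0.12])   # measured mixture (hypothetical)

fractions = np.linalg.solve(end_members, sample)
for name, f in zip(["plankton", "leaf", "grass"], fractions):
    print(f"{name}: {100 * f:.0f}%")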
Abstract:
Biofuels are both a promising solution to global warming mitigation and a potential contributor to the problem. Several life cycle assessments of bioethanol have been conducted to address these questions. We performed a synthesis of the available data on Brazilian ethanol production focusing on greenhouse gas (GHG) emissions and carbon (C) sinks in the agricultural and industrial phases. Emissions of carbon dioxide (CO₂) from fossil fuels, methane (CH₄) and nitrous oxide (N₂O) from sources commonly included in C footprints, such as fossil fuel usage, biomass burning, nitrogen fertilizer application, liming, and litter decomposition, were accounted for. In addition, black carbon (BC) emissions from burning biomass and soil C sequestration were included in the balance. Most of the annual emissions per hectare occur in the agricultural phase, both in the burned system (2209 out of a total of 2398 kg C-eq) and in the unburned system (559 out of 748 kg C-eq). Although nitrogen fertilizer emissions are large, 111 kg C-eq ha⁻¹ yr⁻¹, the largest single source of emissions is biomass burning in the manual harvest system, with a large amount of both GHG (196 kg C-eq ha⁻¹ yr⁻¹) and BC (1536 kg C-eq ha⁻¹ yr⁻¹). Besides avoiding emissions from biomass burning, harvesting sugarcane mechanically without burning tends to increase soil C stocks, providing a C sink of 1500 kg C ha⁻¹ yr⁻¹ in the 30 cm layer. The data show a C output:input ratio of 1.4 for ethanol produced under the conventionally burned, manual harvest, compared with 6.5 for the mechanized harvest without burning, signifying the importance of conservation agricultural systems in bioethanol feedstock production.
Abstract:
Due to the worldwide increase in demand for biofuels, the area cultivated with sugarcane is expected to increase. For environmental and economic reasons, an increasing proportion of the areas is being harvested without burning, leaving the residues on the soil surface. This periodical input of residues affects soil physical, chemical, and biological properties, as well as plant growth and nutrition. Modeling can be a useful tool in the study of the complex interactions between the climate, residue quality, and the biological factors controlling plant growth and residue decomposition. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of aboveground phytomass and litter decomposition, and to validate the model with field experiment data. When studying aboveground growth, burned and unburned harvest systems were compared, as well as the effect of mineral fertilizer and organic residue applications. The simulations were performed with data from experiments of different durations, from 12 months to 60 years, in Goiana, Timbauba and Pradopolis, Brazil; Harwood, Mackay and Tully, Australia; and Mount Edgecombe, South Africa. The differentiation of two pools in the litter, with different decomposition rates, was found to be a relevant factor in the simulations. Originally, the model had a basically unlimited layer of mulch directly available for decomposition, 5,000 g m⁻². Through a parameter optimization process, the thickness of the mulch layer closer to the soil, and thus more vulnerable to decomposition, was set to 110 g m⁻². By changing the layer of mulch available for decomposition at any given time, the sugarcane residue decomposition simulations were close to measured values (R² = 0.93), contributing to making the CENTURY model a tool for the study of sugarcane litter decomposition patterns. The CENTURY model also accurately simulated aboveground stalk carbon values (R² = 0.76), considering burned and unburned harvest systems, plots with and without nitrogen fertilizer and organic amendment applications, under different climates and soil conditions.
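The mulch-layer parameterization described above can be illustrated with a minimal two-pool decay sketch that is not the CENTURY model: only a limited contact layer of residue (here 110 g/m², the value quoted above) is exposed to decomposition at each step, and the decay rates and initial masses are invented.

# Hedged sketch: two litter pools with first-order decay limited to a surface contact layer.
import numpy as np

k_fast, k_slow = 0.020, 0.004              # hypothetical daily decay rates of the two pools
contact_layer = 110.0                      # g/m^2 of mulch in contact with the soil
mulch = {"fast": 4000.0, "slow": 1000.0}   # initial residue mass per pool (g/m^2)

history = []
for day in range(365):
    total = mulch["fast"] + mulch["slow"]
    # Only the contact layer decomposes; split it between pools proportionally to mass.
    exposed_fraction = min(contact_layer, total) / total if total > 0 else 0.0
    for pool, k in (("fast", k_fast), ("slow", k_slow)):
        loss = k * mulch[pool] * exposed_fraction
        mulch[pool] = max(mulch[pool] - loss, 0.0)
    history.append(mulch["fast"] + mulch["slow"])

print(f"Residue remaining after one year: {history[-1]:.0f} g/m^2")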