944 results for Hazard-Based Models
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
Abstract:
Gene clustering is a useful exploratory technique for grouping together genes with similar expression levels under distinct cell cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests whether m - 1 components would in fact suffice. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the full Bayesian significance test, an intuitive Bayesian approach that requires neither a penalty for model complexity nor positive prior probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
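The stopping rule described above (fit m components, test whether m - 1 suffice, stop at the first acceptance) can be sketched in plain numpy. Note this is an illustrative stand-in: it uses a BIC comparison in place of the full Bayesian significance test the paper actually employs, works in one dimension, and all function names are invented here.

```python
import numpy as np

def fit_gmm_1d(x, m, iters=200):
    """EM fit of a 1-D Gaussian mixture with m components; returns the log-likelihood."""
    mu = np.quantile(x, (np.arange(m) + 0.5) / m)  # deterministic quantile init
    var = np.full(m, x.var())
    w = np.full(m, 1.0 / m)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        # small variance floor guards against degenerate single-point components
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / n, 1e-3)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

def n_clusters(x, max_m=6):
    """Increase m until the (m - 1)-component model is no longer rejected."""
    for m in range(2, max_m + 1):
        bic_small = -2 * fit_gmm_1d(x, m - 1) + (3 * (m - 1) - 1) * np.log(len(x))
        bic_large = -2 * fit_gmm_1d(x, m) + (3 * m - 1) * np.log(len(x))
        if bic_large >= bic_small:  # the smaller model is accepted: stop
            return m - 1
    return max_m
```

On data drawn from two well-separated Gaussians, the loop rejects the one-component model, accepts the two-component one against three, and returns 2.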
Abstract:
Digital computer systems and networks are now primary engineering tools, used in the planning, design, operation, and control of buildings, transportation, machinery, businesses, and life-sustaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, reducing the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on prior knowledge of the virus code. Despite good adaptation capability, these programs act only as vaccines against known diseases and cannot prevent new infections based on the network state. Here, the propagation dynamics of computer viruses is modeled and related to other notable events occurring in the network, permitting preventive policies to be established for network management. Data for three different viruses were collected on the Internet, and two identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation using data collected from other viruses that formerly infected the network.
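The autoregressive step mentioned above can be sketched in a few lines of numpy: fit AR(p) coefficients to an observed propagation series by least squares, then iterate one-step-ahead forecasts. This is a generic illustration, not the authors' exact identification procedure, and the function names are invented here.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of AR(p): x[t] ~ sum_k a[k] * x[t-1-k]."""
    n = len(series)
    # column k holds the series lagged by k+1 steps
    X = np.column_stack([series[p - 1 - k:n - 1 - k] for k in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(history, coeffs, steps):
    """Iterated one-step-ahead forecasts from the last p observations."""
    p = len(coeffs)
    buf = list(history[-p:])
    out = []
    for _ in range(steps):
        nxt = sum(c * buf[-1 - k] for k, c in enumerate(coeffs))
        out.append(nxt)
        buf.append(nxt)
    return np.array(out)
```

Coefficients fitted on one epidemic curve could then be applied, as the abstract suggests, to forecast the early dynamics of a new virus.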
Abstract:
Strawberries represent the main source of ellagic acid derivatives in the Brazilian diet, corresponding to more than 50% of all phenolic compounds found in the fruit. There is a particular interest in the determination of the ellagic acid content in fruits because of possible chemopreventive benefits. In the present study, the potential health benefits of purified ellagitannins from strawberries were evaluated in relation to the antiproliferative activity and in vitro inhibition of alpha-amylase, alpha-glucosidase, and angiotensin I-converting enzyme (ACE) relevant for potential management of hyperglycemia and hypertension. Therefore, a comparison among ellagic acid, purified ellagitannins, and a strawberry extract was done to evaluate the possible synergistic effects of phenolics. In relation to the antiproliferative activity, it was observed that ellagic acid had the highest percentage inhibition of cell proliferation. The strawberry extract had lower efficacy in inhibiting the cell proliferation, indicating that in the case of this fruit there is no synergism. Purified ellagitannins had high alpha-amylase and ACE inhibitory activities. However, these compounds had low alpha-glucosidase inhibitory activity. These results suggested that the ellagitannins and ellagic acid have good potential for the management of hyperglycemia and hypertension linked to type 2 diabetes. However, further studies with animal and human models are needed to advance the in vitro assay-based biochemical rationale from this study.
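The inhibition percentages reported in assays like these are conventionally computed from control and sample absorbances. The paper does not spell out its calculation, so treat the following one-liner as an assumption about the standard readout rather than the authors' exact method:

```python
def percent_inhibition(control_abs, sample_abs):
    """Conventional in vitro enzyme-assay readout:
    % inhibition = (A_control - A_sample) / A_control * 100,
    where A is the absorbance of the reaction product."""
    return (control_abs - sample_abs) / control_abs * 100.0

# e.g. control absorbance 1.00 and sample absorbance 0.25 -> 75% inhibition
```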
Abstract:
Local food diversity and traditional crops are essential for cost-effective management of the global epidemic of type 2 diabetes and associated complications of hypertension. Water and 12% ethanol extracts of native Peruvian fruits such as Lucuma (Pouteria lucuma), Pacae (Inga feuillei), Papayita arequipeña (Carica pubescens), Capuli (Prunus capuli), Aguaymanto (Physalis peruviana), and Algarrobo (Prosopis pallida) were evaluated for total phenolics, antioxidant activity based on the 2,2-diphenyl-1-picrylhydrazyl radical scavenging assay, and functionality such as in vitro inhibition of alpha-amylase, alpha-glucosidase, and angiotensin I-converting enzyme (ACE) relevant for potential management of hyperglycemia and hypertension linked to type 2 diabetes. The total phenolic content ranged from 3.2 (Aguaymanto) to 11.4 (Lucuma fruit) mg/g of sample dry weight. A significant positive correlation was found between total phenolic content and antioxidant activity for the ethanolic extracts. No phenolic compound was detected in Lucuma (fruit and powder) and Pacae. Aqueous extracts from Lucuma and Algarrobo had the highest alpha-glucosidase inhibitory activities. Papayita arequipeña and Algarrobo had significant ACE inhibitory activities, reflecting antihypertensive potential. These in vitro results point to the excellent potential of Peruvian fruits for food-based strategies complementing effective antidiabetes and antihypertension solutions, to be confirmed by further animal and clinical studies.
Abstract:
Commonly consumed carbohydrate sweeteners derived from sugar cane, palm, and corn (syrups) were investigated to determine their potential to inhibit key enzymes relevant to Type 2 diabetes and hypertension, based on total phenolic content and antioxidant activity using in vitro models. Among sugar cane derivatives, brown sugars showed higher antidiabetes potential than white sugars; nevertheless, no angiotensin I-converting enzyme (ACE) inhibition was detected in either sugar class. Brown sugar from Peru and Mauritius (dark muscovado) had the highest total phenolic content and 1,1-diphenyl-2-picrylhydrazyl radical scavenging activity, which correlated with a moderate inhibition of yeast alpha-glucosidase without a significant effect on porcine pancreatic alpha-amylase activity. In addition, chlorogenic acid quantified by high-performance liquid chromatography was detected in these sugars (128 +/- 6 and 144 +/- 2 μg/g of sample weight, respectively). Date sugar exhibited high alpha-glucosidase, alpha-amylase, and ACE inhibitory activities that correlated with high total phenolic content and antioxidant activity. Neither phenolic compounds nor antioxidant activity was detected in corn syrups, indicating that nonphenolic factors may be involved in their significant ability to inhibit alpha-glucosidase, alpha-amylase, and ACE. This study provides a strong biochemical rationale for further in vivo studies and useful information for making better dietary sweetener choices for Type 2 diabetes and hypertension management.
Abstract:
The mass function of cluster-size halos and their redshift distribution are computed for 12 distinct accelerating cosmological scenarios and confronted with the predictions of the conventional flat Lambda CDM model. The comparison with Lambda CDM is performed in two steps. First, we determine the free parameters of all models through a joint analysis involving the latest cosmological data, using type Ia supernovae, the cosmic microwave background shift parameter, and baryon acoustic oscillations. Apart from a braneworld-inspired cosmology, it is found that the derived Hubble relation of the remaining models reproduces the Lambda CDM results with approximately the same degree of statistical confidence. Second, in order to distinguish the different dark energy models from the expectations of Lambda CDM, we analyze the predicted cluster-size halo redshift distribution on the basis of two future cluster surveys: (i) an X-ray survey based on the eROSITA satellite, and (ii) a Sunyaev-Zel'dovich survey based on the South Pole Telescope. As a result, we find that the predictions of 8 out of the 12 dark energy models can be clearly distinguished from the Lambda CDM cosmology, while the predictions of 4 models are statistically equivalent to those of the Lambda CDM model, as far as the expected cluster mass function and redshift distribution are concerned. The present analysis suggests that this technique is competitive with independent tests probing the late-time evolution of the Universe and the associated dark energy effects.
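For the flat Lambda CDM reference model, the Hubble relation mentioned above reduces to an integral over the dimensionless expansion rate E(z). A minimal numpy sketch follows; the parameter values (Omega_m = 0.3, H0 = 70 km/s/Mpc) are generic illustrative choices, not the paper's fitted values:

```python
import numpy as np

def e_of_z(z, omega_m=0.3):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat Lambda-CDM."""
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def comoving_distance_mpc(z, omega_m=0.3, h0=70.0, n=10000):
    """Line-of-sight comoving distance D_C = (c/H0) * int_0^z dz'/E(z'),
    evaluated with a simple trapezoidal rule."""
    c = 299792.458  # speed of light, km/s
    zs = np.linspace(0.0, z, n)
    f = 1.0 / e_of_z(zs, omega_m)
    dz = zs[1] - zs[0]
    integral = dz * (f.sum() - 0.5 * (f[0] + f[-1]))
    return (c / h0) * integral
```

At low redshift this recovers the linear Hubble law D ~ cz/H0, which is a convenient sanity check.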
Abstract:
The HR Del nova remnant was observed with the IFU-GMOS at Gemini North. The spatially resolved spectral data cube was used in the kinematic, morphological, and abundance analysis of the ejecta. The line maps show a very clumpy shell with two main symmetric structures. The first one is the outer part of the shell seen in H alpha, which forms two rings projected in the sky plane. These ring structures correspond to a closed hourglass shape, first proposed by Harman & O'Brien. The equatorial emission enhancement is caused by the superimposed hourglass structures in the line of sight. The second structure seen only in the [O III] and [N II] maps is located along the polar directions inside the hourglass structure. Abundance gradients between the polar caps and equatorial region were not found. However, the outer part of the shell seems to be less abundant in oxygen and nitrogen than the inner regions. Detailed 2.5-dimensional photoionization modeling of the three-dimensional shell was performed using the mass distribution inferred from the observations and the presence of mass clumps. The resulting model grids are used to constrain the physical properties of the shell as well as the central ionizing source. A sequence of three-dimensional clumpy models including a disk-shaped ionization source is able to reproduce the ionization gradients between polar and equatorial regions of the shell. Differences between shell axial ratios in different lines can also be explained by aspherical illumination. A total shell mass of 9 × 10⁻⁴ M⊙ is derived from these models. We estimate that 50%-70% of the shell mass is contained in neutral clumps with density contrast up to a factor of 30.
Abstract:
Context. Tight binaries discovered in young, nearby associations are ideal targets for providing dynamical mass measurements to test the physics of evolutionary models at young ages and very low masses. Aims. We report the binarity of TWA22 for the first time. We aim at monitoring the orbit of this young and tight system to determine its total dynamical mass using an accurate distance determination. We also intend to characterize the physical properties (luminosity, effective temperature, and surface gravity) of each component based on near-infrared photometric and spectroscopic observations. Methods. We used the adaptive-optics assisted imager NACO to resolve the components, to monitor the complete orbit, and to obtain the relative near-infrared photometry of TWA22 AB. The adaptive-optics assisted integral field spectrometer SINFONI was also used to obtain medium-resolution (R_λ = 1500-2000) spectra in the JHK bands. Comparison with empirical and synthetic libraries was necessary for deriving the spectral type, the effective temperature, and the surface gravity of each component of the system. Results. Based on an accurate trigonometric distance (17.5 ± 0.2 pc) determination, we infer a total dynamical mass of 220 ± 21 M_Jup for the system. From the complete set of spectra, we find effective temperatures T_eff = 2900 (+200/-200) K for TWA22 A and T_eff = 2900 (+200/-100) K for TWA22 B, and surface gravities between 4.0 and 5.5 dex. From our photometry and an M6 ± 1 spectral type for both components, we find luminosities of log(L/L⊙) = -2.11 ± 0.13 dex and log(L/L⊙) = -2.30 ± 0.16 dex for TWA22 A and B, respectively. By comparing these parameters with evolutionary models, we question the age and the multiplicity of this system. We also discuss a possible underestimation of the mass predicted by evolutionary models for young stars close to the substellar boundary.
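The total dynamical mass quoted above follows from Kepler's third law once the relative orbit and the trigonometric distance are known. A sketch in convenient units follows; the abstract does not list the fitted semi-major axis or period, so any specific orbital values plugged in are hypothetical placeholders:

```python
MJUP_PER_MSUN = 1047.6  # approximate number of Jupiter masses in one solar mass

def dynamical_mass_msun(a_arcsec, period_yr, distance_pc):
    """Kepler's third law in solar units: M_tot [M_sun] = a_AU**3 / P_yr**2,
    where the small-angle relation a_AU = a_arcsec * distance_pc converts
    the angular semi-major axis at a known trigonometric distance to AU."""
    a_au = a_arcsec * distance_pc
    return a_au ** 3 / period_yr ** 2

# Sanity check: 1 AU subtends 1 arcsec at 1 pc, and P = 1 yr gives 1 M_sun
m_sun = dynamical_mass_msun(1.0, 1.0, 1.0)
```

Note how the derived mass scales with the cube of the assumed distance, which is why the accurate 17.5 pc determination matters so much for the 220 M_Jup result.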
Abstract:
Umbilical cord mesenchymal stromal cells (MSC) have been widely investigated for cell-based therapy studies as an alternative source to bone marrow transplantation. Umbilical cord tissue is a rich source of MSCs with the potential to differentiate into at least muscle, cartilage, fat, and bone cells in vitro. The possibility of replacing defective muscle cells using cell therapy is a promising approach for the treatment of progressive muscular dystrophies (PMDs), independently of the specific gene mutation. Therefore, preclinical studies in different models of muscular dystrophies are of utmost importance. The main objective of the present study is to evaluate whether umbilical cord MSCs have the potential to reach and differentiate into muscle cells in vivo in two animal models of PMDs. In order to address this question we injected (1) human umbilical cord tissue (hUCT) MSCs into the caudal vein of SJL mice; (2) hUCT and canine umbilical cord vein (cUCV) MSCs intra-arterially in GRMD dogs. Our results reported here support the safety of the procedure and indicate that the injected cells could engraft in the host muscle in both animal models but could not differentiate into muscle cells. These observations may provide important information for future therapies for muscular dystrophies.
Abstract:
This paper presents SMarty, a variability management approach for UML-based software product lines (PL). SMarty is supported by a UML profile, the SMartyProfile, and a process for managing variabilities, the SMartyProcess. SMartyProfile aims at representing variabilities, variation points, and variants in UML models by applying a set of stereotypes. SMartyProcess consists of a set of activities that are systematically executed to trace, identify, and control variabilities in a PL based on SMarty. It also identifies variability implementation mechanisms and analyzes specific product configurations. In addition, a more comprehensive application of SMarty is presented using SEI's Arcade Game Maker PL. An evaluation of SMarty and a discussion of related work are also presented.
Abstract:
Southeastern Brazil has seen dramatic landscape modifications in recent decades, due to expansion of agriculture and urban areas; these changes have influenced the distribution and abundance of vertebrates. We developed predictive models of ecological and spatial distributions of capybaras (Hydrochoerus hydrochaeris) using ecological niche modeling. Most occurrences of capybaras were in flat areas with water bodies surrounded by sugarcane and pasture. More than 75% of the Piracicaba River basin was estimated as potentially habitable by capybara. The models had low omission error (2.3-3.4%), but higher commission error (91.0-98.5%); these "model failures" seem to be more related to local habitat characteristics than to spatial ones. The potential distribution of capybaras in the basin is associated with anthropogenic habitats, particularly with intensive land use for agriculture.
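Omission and commission errors like those quoted above can be computed from a predicted-suitability grid and a presence-record grid. A small sketch, with illustrative array names (not the authors' modeling pipeline):

```python
import numpy as np

def omission_commission(predicted, observed):
    """predicted, observed: boolean arrays over grid cells.
    Omission error: fraction of observed-presence cells predicted unsuitable.
    Commission error: fraction of predicted-suitable cells with no observed presence."""
    predicted = np.asarray(predicted, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    omission = np.mean(~predicted[observed])
    commission = np.mean(~observed[predicted])
    return omission, commission
```

A high commission error, as in this study, can simply mean that much of the predicted-suitable area lacks occurrence records, not that the prediction is wrong, which is why the authors attribute these "failures" to local habitat characteristics.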
Abstract:
Today several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied for clustering pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), built by introducing modifications to the Independent Component Analysis Mixture Model (ICAMM). These modifications address some of the model's limitations in order to make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein.
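The Sobel stage of the pre-processing above can be sketched in plain numpy; the Sparse Code Shrinkage denoising stage is omitted here, and this generic implementation is an illustration rather than the paper's code:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution (kernel flipped), no padding."""
    kh, kw = kernel.shape
    k = kernel[::-1, ::-1]
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def sobel_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = convolve2d_valid(img, SOBEL_X)
    gy = convolve2d_valid(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

On a vertical step edge the response is zero in flat regions and peaks along the edge, which is the cue the segmentation stage exploits.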
Abstract:
The crossflow filtration process differs from conventional filtration in that the flow circulates tangentially to the filtration surface. The conventional mathematical models used to represent the process have some limitations with respect to identifying and generalizing the system behaviour. In this paper, a system based on artificial neural networks is developed to overcome the problems usually found in the conventional mathematical models. More specifically, the developed system uses an artificial neural network that simulates the behaviour of the crossflow filtration process in a robust way. Imprecisions and uncertainties associated with the measurements made on the system are automatically incorporated in the neural approach. Simulation results are presented to demonstrate the validity of the proposed approach.
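A toy version of such a neural surrogate: a one-hidden-layer network fitted by gradient descent to a synthetic flux-decline curve. Both the curve and the architecture are illustrative stand-ins, not the paper's data or network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a permeate-flux decline curve (not real filtration data)
t = np.linspace(0.0, 1.0, 64)[:, None]   # normalized time
flux = np.exp(-3.0 * t)                  # monotone flux decline, arbitrary scale

# One hidden layer of tanh units, trained by full-batch gradient descent
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(15000):
    h = np.tanh(t @ W1 + b1)             # forward pass
    pred = h @ W2 + b2
    err = pred - flux
    dW2 = h.T @ err / len(t); db2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    dW1 = t.T @ dh / len(t); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean((np.tanh(t @ W1 + b1) @ W2 + b2 - flux) ** 2))
```

The point of the neural approach in the abstract is precisely this: the network learns the process behaviour from (possibly noisy) measurements without an explicit mechanistic model.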
Abstract:
The productivity of commonly available disassembly methods today seldom makes disassembly the preferred end-of-life solution for massive take-back product streams. Systematic reuse of parts or components, or recycling of pure material fractions, is often not achievable in an economically sustainable way. In this paper a case-based review of current disassembly practices is used to analyse the factors influencing disassembly feasibility. Data mining techniques were used to identify the major factors influencing the profitability of disassembly operations. Case characteristics such as involvement of the product manufacturer in the end-of-life treatment and continuous ownership are some of the important dimensions. Economic models demonstrate that the efficiency of disassembly operations must increase by an order of magnitude to assure the competitiveness of ecologically preferred, disassembly-oriented end-of-life scenarios for large waste electrical and electronic equipment (WEEE) streams. Technological means available to increase the productivity of disassembly operations are summarized. Automated disassembly techniques can contribute to the robustness of the process, but cannot close the efficiency gap unless combined with appropriate product design measures. Innovative, reversible joints, collectively activated by external trigger signals, form a promising approach to low-cost mass disassembly in this context. A short overview of the state of the art in the development of such self-disassembling joints is included.