958 results for Classical orthogonal polynomials of a discrete variable
Abstract:
Steatosis is diagnosed on the basis of the macroscopic aspect of the liver, evaluated by the surgeon at the time of organ extraction or by means of a frozen biopsy. In the present study, the applicability of laser-induced fluorescence (LIF) spectroscopy was investigated as a method for diagnosing different degrees of steatosis experimentally induced in rats. Rats received a high-lipid diet for different periods of time and were divided into groups according to the degree of induced steatosis diagnosed by histology. The concentration of fat in the liver was correlated with LIF by means of the steatosis fluorescence factor (SFF). According to liver fat concentration, the histological classification was Severe Steatosis, Moderate Steatosis, Mild Steatosis, or Control (no liver steatosis). Fluorescence intensity could be directly correlated with fat content, and it was possible to estimate the mean fluorescence intensity, with a separate 95% confidence interval, for each steatosis group. SFF was significantly higher in the Severe Steatosis group (P < 0.001) than in the Moderate Steatosis, Mild Steatosis and Control groups, and the various degrees of steatosis could be directly correlated with SFF. LIF spectroscopy proved capable of identifying the degree of hepatic steatosis in this animal model and has potential for clinical application in the non-invasive evaluation of the degree of steatosis.
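The per-group interval estimate mentioned above can be sketched with a generic normal-approximation confidence interval for a group mean. This is an illustrative stand-in, not the paper's method (which is not specified in the abstract); for small groups a Student-t quantile would be more appropriate.

```python
import math
from statistics import NormalDist, mean, stdev

def mean_ci(values, level=0.95):
    """Normal-approximation confidence interval for the mean of `values`.
    Uses the standard error of the mean and a two-sided normal quantile."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)   # e.g. 1.96 for 95%
    m = mean(values)
    half = z * stdev(values) / math.sqrt(len(values))
    return m - half, m + half
```

Non-overlapping intervals across groups would support the abstract's claim that mean fluorescence intensity separates the steatosis classes.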
Abstract:
Scenarios for the emergence or bootstrap of a lexicon involve the repeated interaction between at least two agents who must reach a consensus on how to name N objects using H words. Here we consider minimal models of two types of learning algorithms: cross-situational learning, in which the individuals determine the meaning of a word by looking for something in common across all observed uses of that word, and supervised operant conditioning learning, in which there is strong feedback between individuals about the intended meaning of the words. Despite the stark differences between these learning schemes, we show that they yield the same communication accuracy in the limits of large N and H, which coincides with the result of the classical occupancy problem of randomly assigning N objects to H words.
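The occupancy-problem benchmark that both learning schemes converge to can be illustrated with a small Monte Carlo sketch, under the assumption (not stated explicitly in the abstract) that communication accuracy is the fraction of objects assigned a word that no other object shares:

```python
import random

def random_lexicon_accuracy(n_objects, n_words, trials=2000, seed=0):
    """Monte Carlo estimate of communication accuracy when each of
    n_objects is independently assigned a uniformly random word.
    An object is named unambiguously iff no other object shares its word."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        words = [rng.randrange(n_words) for _ in range(n_objects)]
        counts = {}
        for w in words:
            counts[w] = counts.get(w, 0) + 1
        total += sum(1 for w in words if counts[w] == 1) / n_objects
    return total / trials

def expected_accuracy(n, h):
    """Closed-form expectation from the classical occupancy problem:
    an object's word is unshared with probability (1 - 1/H)^(N-1)."""
    return (1 - 1 / h) ** (n - 1)
```

The simulated and closed-form values agree closely even for moderate N and H.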
Abstract:
In the context of either Bayesian or classical sensitivity analyses of over-parametrized models for incomplete categorical data, it is well known that posterior inferences on nonidentifiable parameters may be prior-dependent and that overly parsimonious over-parametrized models may lead to erroneous conclusions. Nevertheless, some authors either pay no attention to which parameters are nonidentifiable or do not appropriately account for possible prior-dependence. We review the literature on this topic and consider simple examples to emphasize that in both inferential frameworks the subjective components can influence results in nontrivial ways, irrespective of the sample size. Specifically, we show that prior distributions commonly regarded as slightly informative or noninformative may actually be too informative for nonidentifiable parameters, and that the choice of over-parametrized model may drastically impact the results, suggesting that a careful examination of these effects should be carried out before drawing conclusions.
Abstract:
Quadratic alternative superalgebras are introduced and their super-identities and central functions on one odd generator are described. As a corollary, all multilinear skew-symmetric identities and central polynomials of octonions are classified.
Abstract:
Urban particulate matter (UPM) contributes to lung cancer incidence. Here, we have studied the mutagenic activity and DNA adduct-forming ability of fractionated UPM extractable organic matter (EOM). UPM was collected with a high-volume sampler in June 2004 at two sites, one at street level adjacent to a roadway and the other inside a park within the urban area of the city of Sao Paulo, Brazil. UPM was extracted using dichloromethane, and the resulting EOM was separated by HPLC to obtain PAH, nitro-PAH, and oxy-PAH fractions, which were tested for mutagenicity with the Salmonella strains TA98 and YG1041 with and without S9 metabolic activation. The PAH fraction from both sites showed negligible mutagenic activity in both strains. The highest mutagenic activity was found for the nitro-PAH fraction using YG1041 without metabolic activation; however, results were comparable for both sites. The nitro-PAH and oxy-PAH fractions were incubated with calf thymus DNA under reductive conditions appropriate for the activation of nitroaromatic compounds; DNA adduct patterns and levels were then determined with the thin-layer chromatography (TLC) ³²P-postlabeling method using two enrichment procedures: nuclease P1 digestion and butanol extraction. Reductively activated fractions from both sites produced diagonal radioactive zones (DRZ) of putative aromatic DNA adducts on thin-layer plates with both enrichment procedures. No such DRZ were observed in control experiments using fractions from unexposed filters or from incubations without the activating system. Total adduct levels produced by the nitro-PAH fractions were similar for both sites, ranging from 30 to 45 adducts per 10⁸ normal nucleotides. In contrast, the DNA binding of reductively activated oxy-PAH fractions was three times higher, and the adduct pattern consisted of multiple discrete spots along the diagonal line on the thin-layer plates. However, DNA adduct levels were not significantly different between the sampling sites.
Both samples presented the same levels of mutagenic activity, and the response in the Salmonella assay was typical of nitroaromatics. Although the nitro-PAH fraction showed more mutagenic activity in the Salmonella assay, the oxy-PAH fractions showed the highest DNA adduct levels. More studies are needed to elucidate the nature of the genotoxicants occurring in Sao Paulo atmospheric samples.
Abstract:
Distributed energy and water balance models require time-series surfaces of the meteorological variables involved in hydrological processes. Most hydrological GIS-based models apply simple interpolation techniques to extrapolate the point-scale values registered at weather stations to the watershed scale. In mountainous areas, where the monitoring network ineffectively covers the complex terrain heterogeneity, simple geostatistical methods for spatial interpolation are not always representative enough, and algorithms that explicitly or implicitly account for the features creating strong local gradients in the meteorological variables must be applied. Originally developed as a meteorological pre-processing tool for a complete hydrological model (WiMMed), MeteoMap has become an independent software. The individual interpolation algorithms used to approximate the spatial distribution of each meteorological variable were carefully selected taking into account both the specific variable being mapped and the common lack of input data in Mediterranean mountainous areas. They include corrections with height for both rainfall and temperature (Herrero et al., 2007), and topographic corrections for solar radiation (Aguilar et al., 2010). MeteoMap is GIS-based freeware upon registration. Input data include weather station records and topographic data, and the output consists of tables and maps of the meteorological variables at hourly, daily, predefined rainfall-event duration or annual scales. It offers its own pre- and post-processing tools, including video outlook, map printing and the possibility of exporting the maps to images or ASCII ArcGIS formats. This study presents the user-friendly interface of the software and shows some case studies with applications to hydrological modeling.
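As a hedged illustration of the height corrections mentioned above (not MeteoMap's actual algorithms, which follow Herrero et al., 2007 and Aguilar et al., 2010), a minimal inverse-distance interpolation with a constant lapse-rate correction for temperature might look like:

```python
def idw_temperature(stations, target_xy, target_z, lapse=-0.0065, power=2.0):
    """Inverse-distance-weighted temperature with a height correction:
    station temperatures are reduced to sea level with a constant lapse
    rate (K per m), interpolated horizontally, then lifted back to the
    target elevation.  `stations` is a list of (x, y, z, temp_C) tuples."""
    num = den = 0.0
    for x, y, z, t in stations:
        t0 = t - lapse * z                       # reduce to sea level
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        if d2 == 0:                              # exact station hit
            return t0 + lapse * target_z
        w = 1.0 / d2 ** (power / 2)              # inverse-distance weight
        num += w * t0
        den += w
    return num / den + lapse * target_z          # lift back to target height
```

The same two-step pattern (detrend by elevation, interpolate, re-trend) also underlies many rainfall-with-height corrections.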
Abstract:
This paper presents semiparametric estimators of changes in inequality measures of a dependent variable distribution, taking into account possible changes in the distributions of covariates. When we do not impose parametric assumptions on the conditional distribution of the dependent variable given covariates, this problem becomes equivalent to estimating distributional impacts of interventions (treatment) when selection into the program is based on observable characteristics. The distributional impacts of a treatment are calculated as differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called here Inequality Treatment Effects (ITE). The estimation procedure involves a first nonparametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. Using the inverse probability weighting method to estimate parameters of the marginal distribution of potential outcomes, weighted sample versions of inequality measures are computed in the second step. Root-N consistency, asymptotic normality and semiparametric efficiency are shown for the proposed semiparametric estimators. A Monte Carlo exercise is performed to investigate the finite-sample behavior of the estimator derived in the paper. We also apply our method to the evaluation of a job training program.
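The two-step estimator can be sketched as follows. This is a simplified illustration, not the paper's estimator: the nonparametric first step is replaced by a crude parametric logistic fit, and the Gini coefficient stands in for a generic inequality measure.

```python
import numpy as np

def weighted_gini(y, w):
    """Gini coefficient of y under sample weights w (Lorenz-curve form)."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    p = np.cumsum(w) / np.sum(w)                  # cumulative population share
    lorenz = np.cumsum(w * y) / np.sum(w * y)     # cumulative outcome share
    widths = np.diff(np.concatenate(([0.0], p)))
    heights = (np.concatenate(([0.0], lorenz[:-1])) + lorenz) / 2
    return 1.0 - 2.0 * np.sum(widths * heights)   # 1 - 2 * area under Lorenz

def logistic_propensity(X, d, steps=500, lr=0.1):
    """Crude logistic-regression propensity score fit by gradient ascent
    (a parametric stand-in for the paper's nonparametric first step)."""
    X1 = np.column_stack([np.ones(len(d)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        beta += lr * X1.T @ (d - p) / len(d)
    return 1.0 / (1.0 + np.exp(-X1 @ beta))

def inequality_treatment_effect(y, d, X):
    """IPW difference in Gini between treated and control potential outcomes."""
    e = np.clip(logistic_propensity(X, d), 1e-3, 1 - 1e-3)
    g1 = weighted_gini(y[d == 1], 1.0 / e[d == 1])
    g0 = weighted_gini(y[d == 0], 1.0 / (1.0 - e[d == 0]))
    return g1 - g0
```

Weighting each observed outcome by the inverse of its (estimated) selection probability recovers the marginal distributions of the two potential outcomes under selection on observables.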
Abstract:
We develop an affine jump diffusion (AJD) model with the jump-risk premium being determined by both idiosyncratic and systematic sources of risk. While we maintain the classical affine setting of the model, we add a finite set of new state variables that affect the paths of the primitive, under both the actual and the risk-neutral measure, by being related to the primitive's jump process. Those new variables are assumed to be common to all the primitives. We present simulations to ensure that the model generates the volatility smile and compute the "discounted conditional characteristic function" transform that permits the pricing of a wide range of derivatives.
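A baseline jump-diffusion simulation (a plain Merton-style primitive, without the paper's additional state variables driving the jump-risk premium) illustrates the kind of path dynamics involved:

```python
import math
import random

def simulate_jump_diffusion(s0, mu, sigma, lam, jump_mu, jump_sigma,
                            t=1.0, steps=252, seed=0):
    """Euler simulation of a Merton-style jump diffusion:
    dS/S = mu dt + sigma dW + (e^J - 1) dN, with N ~ Poisson(lam) and
    J ~ Normal(jump_mu, jump_sigma).  Returns the simulated path of S."""
    rng = random.Random(seed)
    dt = t / steps
    s, path = s0, [s0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        jump = 0.0
        # Poisson arrival approximated by one Bernoulli draw per step
        if rng.random() < lam * dt:
            jump = rng.gauss(jump_mu, jump_sigma)
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dw + jump)
        path.append(s)
    return path
```

In the paper's setting, the jump intensity and jump-size distribution would themselves depend on the extra common state variables rather than being constant.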
Abstract:
This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
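Of the robust tests named above, the Anderson-Rubin test is the simplest to sketch. The following is the textbook homoskedastic version for a single endogenous regressor, not the paper's WAP construction:

```python
import numpy as np

def anderson_rubin_stat(y, Y, Z, beta0):
    """Homoskedastic Anderson-Rubin statistic for H0: beta = beta0 in the
    IV model y = Y*beta + u, with instrument matrix Z of shape (n, k).
    Under the null with normal errors it is F(k, n-k)-distributed, so the
    test remains valid however weak the instruments are."""
    e = y - Y * beta0                             # structural residual under H0
    n, k = Z.shape
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection onto instruments
    explained = e @ Pz @ e                        # variation the instruments explain
    resid = e @ e - explained
    return (explained / k) / (resid / (n - k))
```

If the null is true, the residual should be uncorrelated with the instruments, so the statistic is small; a large value rejects H0.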
Abstract:
A constraint satisfaction problem is a classical artificial intelligence paradigm characterized by a set of variables (each variable with an associated domain of possible values), and a set of constraints that specify relations among subsets of these variables. Solutions are assignments of values to all variables that satisfy all the constraints. Many real-world problems may be modelled by means of constraints. The range of problems that can use this representation is very diverse and embraces areas like resource allocation, scheduling, timetabling or vehicle routing. Constraint programming is a form of declarative programming in the sense that, instead of specifying a sequence of steps to execute, it relies on properties of the solutions to be found, which are explicitly defined by constraints. The idea of constraint programming is to solve problems by stating constraints which must be satisfied by the solutions. Constraint programming is based on specialized constraint solvers that take advantage of constraints to search for solutions. The success and popularity of complex problem-solving tools can be greatly enhanced by the availability of friendly user interfaces. User interfaces cover two fundamental areas: receiving information from the user and communicating it to the system; and getting information from the system and delivering it to the user. Despite its potential impact, adequate user interfaces are uncommon in constraint programming in general. The main goal of this project is to develop a graphical user interface that allows constraint satisfaction problems to be represented intuitively. The idea is to visually represent the variables of the problem, their domains and the problem constraints, and enable the user to interact with an adequate constraint solver to process the constraints and compute the solutions. Moreover, the graphical interface should be capable of configuring the solver's parameters and presenting solutions in an appealing, interactive way.
As a proof of concept, the developed application – GraphicalConstraints – focuses on continuous constraint programming, which deals with real-valued variables and numerical constraints (equations and inequalities). RealPaver, a state-of-the-art solver for continuous domains, was used in the application. The graphical interface supports all stages of constraint processing, from the design of the constraint network to the presentation of the resulting feasible-space solutions as 2D or 3D boxes.
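For finite domains, the backtracking search that constraint solvers build on can be sketched in a few lines. This is a didactic illustration only; RealPaver's continuous-domain interval techniques are considerably more involved.

```python
def solve_csp(variables, domains, constraints):
    """Backtracking search for a constraint satisfaction problem.
    `constraints` maps a tuple of variable names to a predicate over their
    values; a constraint is checked as soon as all its variables are bound."""
    def consistent(assign):
        for vars_, pred in constraints.items():
            if all(v in assign for v in vars_):
                if not pred(*(assign[v] for v in vars_)):
                    return False
        return True

    def backtrack(assign):
        if len(assign) == len(variables):
            return dict(assign)                      # complete assignment found
        var = next(v for v in variables if v not in assign)
        for value in domains[var]:
            assign[var] = value
            if consistent(assign):
                result = backtrack(assign)
                if result is not None:
                    return result
            del assign[var]                          # undo and try next value
        return None

    return backtrack({})

# Example: X, Y, Z in 1..3 with X < Y and Y != Z
sol = solve_csp(
    ["X", "Y", "Z"],
    {"X": [1, 2, 3], "Y": [1, 2, 3], "Z": [1, 2, 3]},
    {("X", "Y"): lambda x, y: x < y, ("Y", "Z"): lambda y, z: y != z},
)
```

Real solvers add propagation (pruning domains before branching), which is what makes constraint programming practical on large problems.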
Abstract:
The venom of Crotalus durissus terrificus snakes contains various substances, including a serine protease with thrombin-like activity, called gyroxin, that clots plasma fibrinogen and promotes fibrin formation. The aim of this study was to purify and structurally characterize the gyroxin enzyme from Crotalus durissus terrificus venom. For isolation and purification, the following methods were employed: gel filtration on a Sephadex G75 column and affinity chromatography on benzamidine Sepharose 6B; 12% SDS-PAGE under reducing conditions; N-terminal sequence analysis; cDNA cloning and expression through RT-PCR; and crystallization tests. Theoretical molecular modeling was performed using bioinformatics tools based on comparative analysis of other serine proteases deposited in the NCBI (National Center for Biotechnology Information) database. Protein N-terminal sequencing revealed a single chain with a molecular mass of ~30 kDa, while its full-length cDNA had 714 bp encoding a mature protein of 238 amino acids. Crystals were obtained from solutions 2 and 5 of the Crystal Screen Kit® (two and one crystals, respectively), confirming the protein constitution of the sample. Multiple sequence alignment of gyroxin-like B2.1 with six other snake venom serine proteases (SVSPs) indicated the preservation of cysteine residues and of the main structural elements (alpha-helices, beta-barrels and loops). The catalytic triad was located at His57, Asp102 and Ser198, and the S1 and S2 specificity sites at Thr193 and Gly215. The fibrinogen recognition and cleavage region of SVSPs, mapped onto the modeled gyroxin B2.1 sequence, was located at residues Arg60, Arg72, Gln75, Arg81, Arg82, Lys85, Glu86 and Lys87.
Theoretical modeling of the gyroxin fraction generated a classical structure consisting of two alpha-helices, two beta-barrel structures, five disulfide bridges and loops at positions 37, 60, 70, 99, 148, 174 and 218. These results provide information about the functional structure of gyroxin, allowing its application in the design of new drugs.
Abstract:
The objective of this study was to evaluate the use of probit and logit link functions for the genetic evaluation of early pregnancy using simulated data. The following simulation/analysis structures were constructed: logit/logit, logit/probit, probit/logit, and probit/probit. The percentages of precocious females were 5, 10, 15, 20, 25 and 30% and were adjusted based on a change in the mean of the latent variable. The parametric heritability (h²) was 0.40. Simulation and genetic evaluation were implemented in the R software. Heritability estimates (ĥ²) were compared with h² using the mean squared error. Pearson correlations between predicted and true breeding values, and the percentage of coincidence between true and predicted ranking considering the 10% of bulls with the highest breeding values (TOP10), were calculated. The mean ĥ² values were under- and overestimated for all percentages of precocious females when the logit/probit and probit/logit models were used. In addition, the mean squared errors of these models were high compared with those obtained with the probit/probit and logit/logit models. Considering ĥ², probit/probit and logit/logit were also superior to logit/probit and probit/logit, providing values close to the parametric heritability. Logit/probit and probit/logit presented low Pearson correlations, whereas the correlations obtained with probit/probit and logit/logit ranged from moderate to high. With respect to the TOP10 bulls, logit/probit and probit/logit presented much lower percentages than probit/probit and logit/logit. The genetic parameter estimates and predictions of breeding values obtained with the logit/logit and probit/probit models were similar. In contrast, the results obtained with probit/logit and logit/probit were not satisfactory. There is a need to compare the estimation and prediction ability of logit and probit link functions.
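The latent-variable adjustment described above (choosing the mean of the liability so that a target percentage of precocious females is obtained) can be sketched for both links. Function names here are illustrative, not from the paper's R code:

```python
import math
from statistics import NormalDist

def incidence_probit(mu):
    """P(latent > 0) when the latent liability is Normal(mu, 1): probit link."""
    return NormalDist().cdf(mu)

def incidence_logit(mu):
    """P(success) when the latent liability is logistic with mean mu: logit link."""
    return 1.0 / (1.0 + math.exp(-mu))

def latent_mean_for_incidence(p, link="probit"):
    """Mean shift of the latent variable that yields incidence p,
    i.e. the quantile function of the chosen link evaluated at p."""
    if link == "probit":
        return NormalDist().inv_cdf(p)
    return math.log(p / (1.0 - p))               # logistic quantile
```

Simulating with one link and analyzing with the other (the logit/probit and probit/logit cases) misstates this incidence-to-mean mapping, which is consistent with the biased heritability estimates the study reports.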
Abstract:
A total of 19,770 body weights of Guzerá cattle, from birth to 365 days of age, from the database of the Brazilian Association of Zebu Breeders (ABCZ), were analyzed with the objectives of comparing different residual variance structures, considering 1, 18, 28 and 53 residual classes and variance functions of quadratic to quintic order, and of estimating covariance functions of different orders for the direct additive genetic, maternal genetic, animal permanent environmental and maternal permanent environmental effects, as well as genetic parameters for body weights, using random regression models. Random effects were modeled by Legendre polynomial regressions with orders ranging from linear to quartic. Models were compared by the likelihood ratio test and by the Akaike and Schwarz Bayesian information criteria. According to the statistical tests, the model with 18 heterogeneous classes gave the best fit to the residual variances, although the model with a fifth-order variance function also proved appropriate. The estimated direct heritabilities, ranging from 0.04 to 0.53, were higher than those found in the literature, but followed the same trend as the estimates from single-trait analyses. Selection for weight at any age would improve weight at all ages within the interval studied.
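The Legendre-polynomial covariates used in such random regression models can be built as follows, a minimal sketch assuming ages standardized from [0, 365] to [-1, 1]. Note that applications often use the normalized Legendre polynomials; the plain P_j are shown here for simplicity.

```python
import numpy as np

def legendre_covariates(age, age_min=0.0, age_max=365.0, order=4):
    """Row of Legendre-polynomial covariates for a random regression model:
    age is standardized to [-1, 1] and P_0 ... P_order are evaluated there.
    order=4 gives the quartic case used in the abstract; order=1 the linear."""
    x = 2.0 * (age - age_min) / (age_max - age_min) - 1.0
    # legval with an identity coefficient matrix returns [P_0(x), ..., P_order(x)]
    return np.polynomial.legendre.legval(x, np.eye(order + 1))
```

Stacking one such row per weight record gives the design matrix whose random coefficients (one set per animal, per effect) define the covariance functions being estimated.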