945 results for Data modeling


Relevance: 60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Publisher:

Abstract:

Pós-graduação em Matematica Aplicada e Computacional - FCT

Relevance: 60.00%

Publisher:

Abstract:

Ketamine, an injectable anesthetic and analgesic consisting of a racemic mixture of S- and R-ketamine, is routinely used in veterinary and human medicine. Nevertheless, the metabolism and pharmacokinetics of ketamine have not been characterized sufficiently in most animal species. An enantioselective CE assay for ketamine and its metabolites in microsomal preparations is described. Racemic ketamine was incubated with pooled microsomes from humans, horses and dogs over a 3 h time interval with frequent sample collection. CE data revealed that ketamine is metabolized enantioselectively to norketamine (NK), dehydronorketamine and three hydroxylated NK metabolites in all three species. The metabolic patterns formed differ in production rates of the metabolites and in stereoselectivity of the hydroxylated NK metabolites. In vitro pharmacokinetics of ketamine N-demethylation were established by incubating ten different concentrations of racemic ketamine and the single enantiomers of ketamine for 8 min, and data modeling was based on Michaelis-Menten kinetics. These data revealed a reduced intrinsic clearance of the S-enantiomer in the racemic mixture compared with the single S-enantiomer in human microsomes, no difference in equine microsomes, and the opposite effect in canine microsomes. The findings indicate species differences with possible relevance for the use of single S-ketamine versus racemic ketamine in the clinic.
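
The Michaelis-Menten modeling step can be pictured with a minimal sketch. Everything below is hypothetical: the substrate concentrations, rates, and parameter values are invented, and the fit uses a simple Lineweaver-Burk linearization rather than the nonlinear regression a real kinetics study would perform.

```python
# Hypothetical sketch of fitting Michaelis-Menten kinetics, v = Vmax*S/(Km+S),
# to in-vitro N-demethylation rates. All numbers are illustrative only.

def fit_michaelis_menten(S, v):
    """Estimate Vmax and Km via the Lineweaver-Burk linearization
    1/v = (Km/Vmax)*(1/S) + 1/Vmax, using ordinary least squares."""
    x = [1.0 / s for s in S]
    y = [1.0 / r for r in v]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    km = slope * vmax
    return vmax, km

# Synthetic, noise-free data generated from Vmax = 10, Km = 25
S = [5, 10, 25, 50, 100, 200]
v = [10 * s / (25 + s) for s in S]
vmax, km = fit_michaelis_menten(S, v)
clint = vmax / km   # intrinsic clearance CLint = Vmax/Km
```

On exact data the linearization recovers the generating parameters; with real, noisy rates a direct nonlinear fit is preferred because the reciprocal transform inflates error at low substrate concentrations.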

Relevance: 60.00%

Publisher:

Abstract:

Surface-wave methods, with emphasis on Rayleigh waves, form the core of this doctoral work. Initially, Rayleigh waves were modeled to allow a sensitivity study of their dispersion curves under different configurations of physical parameters representing various layered models, in which parameters of greater and lesser sensitivity could be identified, along with some effects caused by low Poisson's ratios. In addition, in the data-inversion stage, Rayleigh-wave modeling was used to build the objective function, which, combined with the least-squares method via the Levenberg-Marquardt algorithm, allowed the implementation of a local-search algorithm responsible for inverting the surface-wave data. Because it is a local-search procedure, the inversion algorithm was complemented by a pre-inversion stage that generates an initial model, making the inversion procedure faster and more efficient. Aiming at even greater efficiency of the inversion procedure, especially for layered models with velocity inversions, a post-inversion algorithm was implemented based on a trial-and-error procedure that minimizes the relative root-mean-square error (rRMSE) of the data inversion. More than 50 layered models were used to test the modeling, pre-inversion, inversion, and post-inversion of the data, allowing precise adjustment of the mathematical and physical parameters present in the various scripts implemented in Matlab. Before the field data could be inverted, they had to be treated in the data-processing stage, whose main goal is the extraction of the dispersion curve produced by the surface waves. To this end, three processing methodologies with distinct mathematical approaches were implemented, also in Matlab.
These methodologies were tested and evaluated with synthetic and real data, making it possible to identify the strengths and weaknesses of each methodology studied, as well as the limitations caused by the discretization of the field data. Finally, the processing, pre-inversion, inversion, and post-inversion stages were unified into a single program for treating surface-wave (Rayleigh) data. It was applied to real data from the study of a geological problem in the Taubaté Basin, where it was possible to map the geological contacts along the seismic acquisition points and compare them with an existing initial model based on geomorphological observations of the study area, the geological map of the region, and global and local geological information on the tectonic movements in the region. The geophysical information, combined with the geological information, allowed the generation of an analytical profile of the study region with two geological interpretations, confirming the suspected neotectonics in the region, where the geological contacts between the Tertiary and Quaternary deposits were identified and fit the initial model of a half-graben dipping to the southeast.
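
The Levenberg-Marquardt local search at the heart of the inversion stage can be sketched in miniature. This is a hypothetical Python illustration (the thesis scripts were written in Matlab): the toy forward model c(f) = a + b/f, the frequencies, and the parameter values are all invented stand-ins for a real dispersion-curve inversion.

```python
# Minimal Levenberg-Marquardt local search with a numerical Jacobian,
# fitting a toy two-parameter phase-velocity curve c(f) = a + b/f.

def forward(params, freqs):
    a, b = params
    return [a + b / f for f in freqs]

def levenberg_marquardt(obs, freqs, p0, lam=1e-3, iters=60, h=1e-6):
    p = list(p0)
    n = len(obs)
    for _ in range(iters):
        r = [o - m for o, m in zip(obs, forward(p, freqs))]      # residuals
        J = []                                                    # dr/dp, one row per parameter
        for j in range(len(p)):
            q = list(p)
            q[j] += h
            rq = [o - m for o, m in zip(obs, forward(q, freqs))]
            J.append([(rq_k - r_k) / h for rq_k, r_k in zip(rq, r)])
        # Damped normal equations: (J^T J + lam*I) dp = J^T r
        A = [[sum(J[i][k] * J[j][k] for k in range(n)) + (lam if i == j else 0.0)
              for j in range(len(p))] for i in range(len(p))]
        g = [sum(J[i][k] * r[k] for k in range(n)) for i in range(len(p))]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]               # 2x2 Cramer solve
        dp0 = (g[0] * A[1][1] - g[1] * A[0][1]) / det
        dp1 = (g[1] * A[0][0] - g[0] * A[1][0]) / det
        p = [p[0] - dp0, p[1] - dp1]
    return p

freqs = [5.0, 10.0, 20.0, 40.0]
obs = forward([2.0, 5.0], freqs)            # synthetic "observed" dispersion curve
p_hat = levenberg_marquardt(obs, freqs, p0=[1.0, 1.0])
```

Because this is a local search, the starting model p0 matters; the thesis addresses exactly that with its pre-inversion stage that generates an initial model before the LM iterations begin.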

Relevance: 60.00%

Publisher:

Abstract:

Context: Global Software Development (GSD) allows companies to take advantage of talent spread across the world. Most research has focused on the development aspect; however, little if any attention has been paid to the management of GSD projects. Studies report a lack of adequate support for management decisions made during software development, a problem further accentuated in GSD since information is scattered across multiple factories and stored in different formats and standards. Objective: This paper aims to improve GSD management by proposing a systematic method for adapting Business Intelligence techniques to software development environments. This would enhance the visibility of the development process and enable software managers to make informed decisions regarding how to proceed with GSD projects. Method: A combination of formal goal-modeling frameworks and data modeling techniques is used to elicit the most relevant aspects to be measured by managers in GSD. The process is described in detail and applied to a real case study throughout the paper. A discussion regarding the generalisability of the method is presented afterwards. Results: The application of the approach generates an adapted BI framework tailored to software development according to the requirements posed by GSD managers. The resulting framework is capable of presenting previously inaccessible data through common and specific views and enabling data navigation according to the organization of software factories and projects in GSD. Conclusions: We can conclude that the proposed systematic approach allows us to successfully adapt Business Intelligence techniques to enhance GSD management beyond the information provided by traditional tools. The resulting framework is able to integrate and present the information in a single place, thereby enabling easy comparisons across multiple projects and factories and providing support for informed decisions in GSD management.
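
The elicitation of measures from managers' goals can be pictured as a GQM-style (goal-question-metric) mapping. This is a hypothetical sketch only: the paper uses formal goal-modeling frameworks, and the goal, questions, and metric names below are invented.

```python
# Hypothetical GQM-style structure: each goal maps to questions,
# and each question to the measures needed to answer it.
goals = {
    "Improve visibility of GSD progress": {
        "How does each factory's defect backlog evolve?":
            ["open_defects_per_factory", "mean_time_to_fix"],
        "Are milestones slipping across sites?":
            ["milestone_delay_days", "completed_tasks_ratio"],
    },
}

def measures_for(goal):
    """Flatten the distinct measures needed to answer a goal's questions."""
    return sorted({m for ms in goals[goal].values() for m in ms})

needed = measures_for("Improve visibility of GSD progress")
```

The flattened measure list is what a BI layer would then collect from each software factory and aggregate into the common and factory-specific views the paper describes.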

Relevance: 60.00%

Publisher:

Abstract:

The authors propose a new approach to discourse analysis based on metadata from the social networking behavior of learners immersed in a socially constructivist e-learning environment. It is shown that traditional data modeling techniques can be combined with social network analysis, an approach that promises to yield new insights into the largely uncharted domain of network-based discourse analysis. The chapter is written as a non-technical introduction and is illustrated with real examples, visual representations, and empirical findings. Within the setting of a constructivist statistics course, the chapter illustrates what network-based discourse analysis is about (mainly from a methodological point of view), how it is implemented in practice, and why it is relevant for researchers and educators.
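
The core move, turning interaction metadata into a network, can be sketched briefly. The learner names and reply events below are invented for illustration; real analyses would use richer centrality and community measures.

```python
# Hypothetical sketch: reply metadata from a course forum becomes a
# directed graph, and in-degree centrality identifies central learners.

replies = [  # (author, replied_to) pairs mined from forum metadata
    ("ann", "bob"), ("carl", "bob"), ("dana", "ann"),
    ("bob", "ann"), ("carl", "ann"),
]

def in_degree_centrality(edges):
    nodes = {n for e in edges for n in e}
    indeg = {n: 0 for n in nodes}
    for _, target in edges:
        indeg[target] += 1
    # Normalize by the maximum possible in-degree, n - 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in indeg.items()}

centrality = in_degree_centrality(replies)
central_learner = max(centrality, key=centrality.get)
```

Here the learner most often replied to emerges as the discourse hub, which is the kind of structural insight the chapter argues traditional content-only analysis misses.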

Relevance: 60.00%

Publisher:

Abstract:

Two jamming cancellation algorithms are developed based on a stable solution of the least squares problem (LSP) provided by regularization. They are based on filtered singular value decomposition (SVD) and modifications of the Greville formula. Both algorithms allow an efficient hardware implementation. Test results on artificial data that model difficult real-world situations are also provided.
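
The stabilizing effect the filtered SVD achieves can be shown with a simpler, equivalent stand-in: Tikhonov regularization of an ill-conditioned least-squares problem, which replaces each singular value s by s/(s² + λ) in the pseudoinverse. The matrix and data below are invented; this is not the paper's hardware algorithm.

```python
# Hypothetical sketch: Tikhonov-regularized least squares for Ax ~= b
# with a 2-column A, solved via the damped normal equations.

def tikhonov_lsq(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 through
    (A^T A + lam*I) x = A^T b (2x2 Cramer's rule)."""
    m = len(A)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    return [(Atb[0] * AtA[1][1] - Atb[1] * AtA[0][1]) / det,
            (Atb[1] * AtA[0][0] - Atb[0] * AtA[1][0]) / det]

# Nearly collinear columns plus slightly noisy data make the
# unregularized solution blow up; a small lam stabilizes it.
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.01, 1.98]
x_bad = tikhonov_lsq(A, b, 0.0)     # unstable: huge, oscillating coefficients
x_reg = tikhonov_lsq(A, b, 1e-4)    # stable: both coefficients near 1
```

The same damping is what filtering small singular values accomplishes spectrally, which is why regularized solvers tolerate the near-singular correlation matrices that arise in jamming scenarios.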

Relevance: 60.00%

Publisher:

Abstract:

This paper discusses some aspects of hunter-gatherer spatial organization in southern South Patagonia in times later than 10,000 cal yr BP. Various methods of spatial analysis, carried out with a Geographic Information System (GIS), were applied to the distributional pattern of archaeological sites with radiocarbon dates. The shift in the distributional pattern of chronological information was assessed in conjunction with other lines of evidence within a biogeographic framework. Accordingly, the varying degrees of occupation and integration of coastal and interior spaces in human spatial organization are explained in association with the adaptive strategies hunter-gatherers have used over time. Both are part of the same human response to changing risk and uncertainty in the region in terms of resource availability and environmental dynamics.

Relevance: 60.00%

Publisher:

Abstract:

In recent decades, two prominent trends have influenced the data modeling field, namely network analysis and machine learning. This thesis explores the practical applications of these techniques within the domain of drug research, unveiling their multifaceted potential for advancing our comprehension of complex biological systems. The research undertaken during this PhD program is situated at the intersection of network theory, computational methods, and drug research. Across six projects presented herein, there is a gradual increase in model complexity. These projects traverse a diverse range of topics, with a specific emphasis on drug repurposing and safety in the context of neurological diseases. The aim of these projects is to leverage existing biomedical knowledge to develop innovative approaches that bolster drug research. The investigations have produced practical solutions, not only providing insights into the intricacies of biological systems, but also allowing the creation of valuable tools for their analysis. In short, the achievements are:

• A novel computational algorithm to identify adverse events specific to fixed-dose drug combinations.
• A web application that tracks the clinical drug research response to SARS-CoV-2.
• A Python package for differential gene expression analysis and the identification of key regulatory "switch genes".
• The identification of pivotal events causing drug-induced impulse control disorders linked to specific medications.
• An automated pipeline for discovering potential drug repurposing opportunities.
• The creation of a comprehensive knowledge graph and development of a graph machine learning model for predictions.

Collectively, these projects illustrate diverse applications of data science and network-based methodologies, highlighting the profound impact they can have in supporting drug research activities.

Relevance: 40.00%

Publisher:

Abstract:

The caffeine solubility in supercritical CO2 was studied by assessing the effects of pressure and temperature on the extraction of green coffee oil (GCO). The Peng-Robinson¹ equation of state was used to correlate the solubility of caffeine with a thermodynamic model, and two mixing rules were evaluated: the classical van der Waals mixing rule with two adjustable parameters (PR-VDW) and a density-dependent one proposed by Mohamed and Holder², with two (PR-MH, two parameters adjusted to the attractive term) and three (PR-MH3, two parameters adjusted to the attractive term and one to the repulsive term) adjustable parameters. The best results were obtained with the three-parameter mixing rule of Mohamed and Holder².
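
The two-parameter van der Waals mixing rule (PR-VDW) mentioned above can be sketched as follows. The Peng-Robinson pure-component expressions are standard; the solute critical constants and the k12/l12 values are illustrative assumptions, not the fitted values from the study.

```python
# Sketch of Peng-Robinson pure-component parameters plus the classical
# van der Waals mixing rule with two adjustable parameters (k12, l12):
#   a_mix = sum_ij y_i y_j sqrt(a_i a_j) (1 - k_ij)
#   b_mix = sum_ij y_i y_j (b_i + b_j)/2 (1 - l_ij)
import math

R = 8.314  # J mol^-1 K^-1

def pr_pure_ab(Tc, Pc, omega, T):
    """Peng-Robinson pure-component a(T) and b from critical constants."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1 + kappa * (1 - math.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_mix(y, a, b, k12=0.0, l12=0.0):
    """Two-parameter van der Waals mixing rule for a binary mixture."""
    k = [[0.0, k12], [k12, 0.0]]
    l = [[0.0, l12], [l12, 0.0]]
    a_mix = sum(y[i] * y[j] * math.sqrt(a[i] * a[j]) * (1 - k[i][j])
                for i in range(2) for j in range(2))
    b_mix = sum(y[i] * y[j] * 0.5 * (b[i] + b[j]) * (1 - l[i][j])
                for i in range(2) for j in range(2))
    return a_mix, b_mix

# CO2 critical constants (Tc in K, Pc in Pa) and hypothetical solute values
a1, b1 = pr_pure_ab(304.1, 7.38e6, 0.225, T=323.15)
a2, b2 = pr_pure_ab(872.0, 1.46e6, 0.98, T=323.15)   # illustrative caffeine-like values
a_mix, b_mix = vdw_mix([0.999, 0.001], [a1, a2], [b1, b2], k12=0.08, l12=-0.02)
```

In the pure-CO2 limit (y = [1, 0]) the mixing rule collapses to the pure-component parameters, a basic consistency check for any mixing-rule implementation.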

Relevance: 40.00%

Publisher:

Abstract:

study-specific results, their findings should be interpreted with caution

Relevance: 40.00%

Publisher:

Abstract:

Thermodynamic properties of bread dough (fusion enthalpy, apparent specific heat, initial freezing point and unfreezable water) were measured at temperatures from -40 °C to 35 °C using differential scanning calorimetry. The initial freezing point was also calculated based on the water activity of the dough. The apparent specific heat varied as a function of temperature: in the freezing region it ranged from 1.7 to 23.1 J g⁻¹ °C⁻¹, and it was constant at temperatures above freezing (2.7 J g⁻¹ °C⁻¹). Unfreezable water content varied from 0.174 to 0.182 g/g of total product. Values of heat capacity as a function of temperature were correlated using thermodynamic models. A modification for low-moisture foodstuffs (such as bread dough) was successfully applied to the experimental data.
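
The calculation of an initial freezing point from water activity uses the classical equilibrium relation ln(a_w) = (ΔH_f/R)(1/T_0 − 1/T_f). The sketch below applies it with the latent heat of fusion of ice; the dough water activity value is invented for illustration and is not taken from the study.

```python
# Estimate the initial freezing point from water activity via
# ln(a_w) = (dHf / R) * (1/T0 - 1/Tf), solved for Tf.
import math

def initial_freezing_point(aw, dHf=6010.0, R=8.314, T0=273.15):
    """Return the initial freezing temperature (K) for water activity aw.
    dHf is the molar latent heat of fusion of ice (J/mol)."""
    # Rearranged: 1/Tf = 1/T0 - (R/dHf) * ln(aw)
    return 1.0 / (1.0 / T0 - (R / dHf) * math.log(aw))

Tf = initial_freezing_point(0.97)   # hypothetical dough water activity
Tf_c = Tf - 273.15                  # in degrees C
```

Because a_w < 1 lowers the chemical potential of water, the predicted freezing point always falls below 0 °C, consistent with the depressed initial freezing points measured for dough.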

Relevance: 40.00%

Publisher:

Abstract:

Interval-censored survival data, in which the event of interest is not observed exactly but is only known to occur within some time interval, occur very frequently. In some situations, event times might be censored into different, possibly overlapping intervals of variable widths; however, in other situations, information is available for all units at the same observed visit time. In the latter cases, interval-censored data are termed grouped survival data. Here we present alternative approaches for analyzing interval-censored data. We illustrate these techniques using a survival data set involving mango tree lifetimes. This study is an example of grouped survival data.
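
One standard parametric approach to interval-censored data can be sketched briefly: each unit known to fail in (L, R] contributes S(L) − S(R) to the likelihood. The intervals below are invented, the lifetime model is exponential for simplicity, and none of the mango-tree data are reproduced.

```python
# Maximum likelihood for interval-censored data under an exponential
# lifetime model, S(t) = exp(-rate * t); each observation contributes
# S(L) - S(R) to the likelihood.
import math

# (L, R) intervals in which each failure is known to have occurred
intervals = [(0, 2), (1, 3), (2, 4), (3, 6), (4, 8), (5, 10)]

def neg_log_lik(rate):
    return -sum(math.log(math.exp(-rate * L) - math.exp(-rate * R))
                for L, R in intervals)

def minimize(f, lo, hi, tol=1e-8):
    """Golden-section search; valid here since the exponential
    interval-censored negative log-likelihood is convex in rate."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

rate_hat = minimize(neg_log_lik, 1e-3, 5.0)
```

Grouped survival data are the special case where all units share the same inspection times, so the (L, R) endpoints coincide across units and the same likelihood construction applies.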

Relevance: 40.00%

Publisher:

Abstract:

This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Parana (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited.
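
The actuarial step the article's yield model feeds into can be sketched as an expected shortfall against a coverage guarantee. The normal predictive distribution, coverage level, and yield numbers below are invented; a real application would use draws from the hierarchical Bayesian posterior predictive distribution instead.

```python
# Hypothetical sketch: fair premium rate from predictive yield draws,
# rate = E[max(guarantee - yield, 0)] / guarantee,
# with guarantee = coverage * expected yield.
import random
import statistics

random.seed(42)

def premium_rate(yield_draws, coverage=0.75):
    guarantee = coverage * statistics.fmean(yield_draws)
    shortfall = statistics.fmean(
        [max(guarantee - y, 0.0) for y in yield_draws])
    return shortfall / guarantee

# Stand-in for posterior predictive draws of county-average yield (kg/ha)
draws = [random.gauss(2500, 600) for _ in range(20000)]
rate = premium_rate(draws)
```

Raising the coverage level raises the guarantee and hence the rate, which is why premium calculations are so sensitive to how well the predictive distribution captures yield uncertainty, the point the article's joint spatio-temporal model addresses.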