973 results for probability models
Abstract:
In computational neuroscience, it has been hypothesized that the visual system, from the retina up to at least the primary visual cortex, continuously fits a probabilistic model with latent variables to its stream of perceptions. Neither the exact model nor the exact fitting method is known, but the existing algorithms for fitting such models require conditional estimates of the latent variables. This can help us understand why the visual system might fit such a model: if the model is appropriate, these conditional estimates can also form an excellent representation, one that makes it possible to analyze the semantic content of the perceived images. The work presented here uses image classification performance (discrimination between common object categories) as a basis for comparing models of the visual system, and algorithms for fitting those models (viewed as probability densities) to images. This thesis (a) shows that models based on the complex cells of visual area V1 generalize better from labeled training examples than conventional neural networks, whose hidden units are more similar to V1 simple cells; (b) presents a new interpretation of complex-cell-based models of the visual system as probability distributions, together with new algorithms for fitting them to data; and (c) shows that these models form representations that are better for image classification after they have been trained as probability models. Two additional technical innovations that made this work possible are also described: a random search algorithm for selecting hyperparameters, and a compiler for matrix mathematical expressions that can optimize these expressions for both central (CPU) and graphics (GPU) processors.
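The random search over hyperparameters mentioned in this abstract can be illustrated with a minimal sketch (assumed Python, not drawn from the thesis itself); the search space, the train_and_score stand-in, and the trial budget are hypothetical placeholders.

```python
import random

# Hypothetical search space: each hyperparameter maps to an independent sampler.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-5, -1),  # log-uniform
    "n_hidden":      lambda: random.randint(50, 2000),      # uniform integer
    "weight_decay":  lambda: 10 ** random.uniform(-6, -2),  # log-uniform
}

def train_and_score(params):
    """Stand-in for training a model with `params` and returning a validation score."""
    # Synthetic score so the sketch runs end to end; replace with real training code.
    return -abs(params["learning_rate"] - 1e-3) - abs(params["n_hidden"] - 500) / 1e4

def random_search(n_trials=64):
    """Draw each trial independently from the search space and keep the best one."""
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: sample() for name, sample in search_space.items()}
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search())
```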
Abstract:
In many situations probability models are more realistic than deterministic models. Several phenomena occurring in physics are studied as random phenomena changing with time and space. Stochastic processes originated from the needs of physicists. Let X(t) be a random variable, where t is a parameter assuming values from the set T. Then the collection of random variables {X(t), t ∈ T} is called a stochastic process. We denote the state of the process at time t by X(t), and the collection of all possible values that X(t) can assume is called the state space.
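As a concrete illustration of this definition (not part of the abstract), a simple symmetric random walk is a stochastic process with T = {0, 1, 2, ...} and the integers as its state space; a minimal sketch in Python:

```python
import random

def random_walk(n_steps=20):
    """One realization of a simple symmetric random walk X(t), t in T = {0, 1, ..., n_steps}.

    The state space is the set of integers, since X(t) can take any integer
    value reachable from X(0) = 0 by +1/-1 steps.
    """
    x = 0
    path = [x]
    for _ in range(n_steps):
        x += random.choice([-1, 1])  # one +1 or -1 increment per time step
        path.append(x)
    return path

print(random_walk())  # e.g. [0, 1, 0, -1, ...]
```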
Abstract:
The literature on skew-normal distributions has grown rapidly in recent years, but so far few applications concern the description of natural phenomena with this type of probability model, or the interpretation of its parameters. The skew-normal family extends the normal family by adding a parameter (λ) that regulates skewness. The development of this theoretical field has followed the general tendency in Statistics towards more flexible methods that represent features of the data as adequately as possible and reduce unrealistic assumptions, such as the normality that underlies most methods of univariate and multivariate analysis. In this paper, the shape of the frequency distribution of the logratio ln(Cl−/Na+), whose components are related to the composition of waters from 26 wells, is investigated. Samples have been collected around the active center of Vulcano island (Aeolian archipelago, southern Italy) from 1977 up to now, at time intervals of about six months. The logratio data have been tentatively modeled by evaluating the performance of the skew-normal model for each well. Values of the λ parameter have been compared by considering the temperature and spatial position of the sampling points. Preliminary results indicate that changes in λ values can be related to the nature of the environmental processes affecting the data.
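For reference, the skew-normal density in the usual (Azzalini) location-scale parameterization, with λ the skewness parameter discussed above, is:

```latex
f(x;\xi,\omega,\lambda)
  = \frac{2}{\omega}\,
    \phi\!\left(\frac{x-\xi}{\omega}\right)
    \Phi\!\left(\lambda\,\frac{x-\xi}{\omega}\right),
```

where φ and Φ are the standard normal density and cumulative distribution function; setting λ = 0 recovers the normal distribution, and the sign of λ determines the direction of the skew.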
Abstract:
Introduction: Delirium is a disorder of consciousness of acute onset associated with confusion or cognitive dysfunction; it may occur in up to 42% of patients, and up to 80% of cases occur in the ICU. Delirium increases hospital stay, time on mechanical ventilation, and morbidity and mortality. The aim was to evaluate the period prevalence of delirium in adults admitted to the ICU of a fourth-level hospital during 2012 and the factors associated with its development. Methods: An analytical cross-sectional study was carried out, including patients hospitalized in the medical ICU and the surgical ICU. The CAM-ICU scale and the Mini-Mental State Examination were applied to assess mental status. Significant associations were adjusted with multivariate analysis. Results: 110 patients were included; the average stay was 5 days; the period prevalence of delirium was 19.9%, and the median age was 64.5 years. A statistically significant association was found between delirium and baseline cognitive impairment, depression, administration of anticholinergics, and sepsis (p < 0.05). Discussion: To date this is the first study at the institution. The associations between delirium in the ICU and sepsis, anticholinergic use, and baseline cognitive impairment are consistent and comparable with risk factors described in the international literature.
Abstract:
Resilience of rice cropping systems to potential global climate change will partly depend on the temperature tolerance of pollen germination (PG) and pollen tube growth (PTG). Germination of pollen of the high-temperature-susceptible Oryza glaberrima Steud. (cv. CG14) and O. sativa L. ssp. indica (cv. IR64) and the high-temperature-tolerant O. sativa ssp. aus (cv. N22) was assessed on a 5.6-45.4°C temperature gradient system. Mean maximum PG was 85% at 27°C, with 1488 μm PTG at 25°C. The hypothesis that, in each pollen grain, minimum temperature requirements (Tn) and maximum temperature limits (Tx) for germination operate independently was accepted by comparing multiplicative and subtractive probability models. The maximum temperature limit for PG in 50% of grains (Tx(50)) was lowest (29.8°C) in IR64, compared with CG14 (34.3°C) and N22 (35.6°C). The standard deviation (sx) of Tx was also low in IR64 (2.3°C), suggesting that the mechanism of IR64's susceptibility to high temperatures may relate to PG. Optimum germination temperatures and thermal times for 1 mm PTG were not linked to tolerance of high temperatures at anthesis. However, the parameters Tx(50) and sx in the germination model define new pragmatic criteria for successful and resilient PG, preferable to the more traditional cardinal (maximum and minimum) temperatures.
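One plausible formalization of the multiplicative (independence) model referred to above, assuming the minimum requirement Tn and the maximum limit Tx vary across grains with cumulative distribution functions F_n and F_x, is:

```latex
P(\text{germination at temperature } T)
  = P(T_n \le T)\,P(T \le T_x)
  = F_n(T)\,\bigl[1 - F_x(T)\bigr],
```

with Tx(50) the temperature at which 1 - F_x(T) = 0.5. This is a sketch of the independence hypothesis only, not the paper's exact model specification.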
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Biosciences - FCLAS
Abstract:
Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" (CM) of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters gamma in probability models f(y; gamma) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters gamma = (theta, eta) into a subset of interest theta and other "nuisance parameters" eta necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with the computationally intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
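The notation used above can be stated compactly; the proportional hazards model (cited in the abstract) is the standard illustration of separating the parameter of interest from the nuisance parameter, though the specific formulas below are supplied here for reference rather than taken from the paper:

```latex
y \sim f(y;\gamma), \qquad \gamma = (\theta,\eta),
\qquad\text{e.g.}\qquad
\lambda(t \mid x) = \lambda_0(t)\, e^{x^\top \beta},
```

where θ (here the regression coefficients β) is the quantity of interest and η (here the baseline hazard λ0(·)) is the nuisance parameter; Cox's partial likelihood yields inference about β that does not involve λ0 at all.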
Abstract:
Over the last 2 decades, survival rates in critically ill cancer patients have improved. Despite the increase in survival, the intensive care unit (ICU) continues to be a location where end-of-life care takes place. More than 20% of deaths in the United States occur after admission to an ICU, and as baby boomers reach the seventh and eighth decades of their lives, the volume of patients in the ICU is predicted to rise. The aim of this study was to evaluate intensive care unit utilization among patients with cancer who were at the end of life. End of life was defined using decedent and high-risk cohort study designs. The decedent study evaluated characteristics and ICU utilization during the terminal hospital stay among patients who died at The University of Texas MD Anderson Cancer Center during 2003-2007. The high-risk cohort study evaluated characteristics and ICU utilization during the index hospital stay among patients admitted to MD Anderson during 2003-2007 with a high risk of in-hospital mortality. Factors associated with higher ICU utilization in the decedent study included non-local residence, hematologic and non-metastatic solid tumor malignancies, malignancy diagnosed within 2 months, and elective admission to surgical or pediatric services. Having a palliative care consultation on admission was associated with dying in the hospital without ICU services. In the cohort of patients with high risk of in-hospital mortality, patients who went to the ICU were more likely to be younger, male, with newly diagnosed non-metastatic solid tumor or hematologic malignancy, and admitted from the emergency center to one of the surgical services. A palliative care consultation on admission was associated with a decreased likelihood of having an ICU stay. There were no differences in ethnicity, marital status, comorbidities, or insurance status between patients who did and did not utilize ICU services. Inpatient mortality probability models developed for the general population are inadequate in predicting in-hospital mortality for patients with cancer. The following characteristics that differed between the decedent study and high-risk cohort study can be considered in future research to predict risk of in-hospital mortality for patients with cancer: ethnicity, type and stage of malignancy, time since diagnosis, and having advance directives. Identifying those at risk can precipitate discussions in advance to ensure care remains appropriate and in accordance with the wishes of the patient and family.
Abstract:
Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has focused on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function over a small, dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the noncolinearity, positron range, and crystal penetration effects, which produced tubes of response with varying elliptical sections whose axes were a function of the crystal thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: The new technique provides superior image quality, in terms of signal-to-noise ratio, compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control of the trade-off between speed and quality during reconstruction.
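The region-of-response search can be pictured with a minimal, brute-force sketch (assumed Python); the isotropic Gaussian kernel, the exhaustive voxel scan, and all names below are illustrative assumptions rather than the paper's optimized, kernel-specific implementation:

```python
import numpy as np

def region_of_response(volume_shape, lor_p1, lor_p2, sigma, threshold):
    """Illustrative ROR search: return voxel indices whose (Gaussian) response
    probability around the line of response (LOR) exceeds `threshold`.

    A simple isotropic Gaussian kernel of the perpendicular distance to the
    LOR axis stands in for the asymmetric/elliptical kernels of the paper.
    """
    p1, p2 = np.asarray(lor_p1, float), np.asarray(lor_p2, float)
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)

    # Voxel centre coordinates for the whole field of view (brute force here;
    # the paper restricts the search to a dynamically computed contour).
    grid = np.stack(np.meshgrid(*[np.arange(n) for n in volume_shape],
                                indexing="ij"), axis=-1).reshape(-1, 3)

    rel = grid - p1
    # Perpendicular distance of each voxel centre to the LOR axis.
    dist = np.linalg.norm(rel - np.outer(rel @ axis, axis), axis=1)
    prob = np.exp(-0.5 * (dist / sigma) ** 2)

    keep = prob > threshold          # the cut-off defining the ROR contour
    return grid[keep], prob[keep]    # voxels in the ROR and their weights

# Example: ROR for a LOR crossing a 32x32x32 volume along the x axis.
voxels, weights = region_of_response((32, 32, 32), (0, 16, 16), (31, 16, 16),
                                     sigma=1.5, threshold=0.05)
```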
Abstract:
The Amazon forest plays an important environmental, social, and economic role for the region, for the country, and for the world. Logging techniques that aim to reduce the impacts caused to the forest are therefore essential. The objective of this thesis is to compare Reduced-Impact Logging with Conventional Logging in the Brazilian Amazon through empirical individual-tree growth and yield models. The experiment was installed on the Agrossete farm, located in Paragominas - PA. In 1993, three areas of this farm were selected for harvesting. In the first area, 105 hectares were harvested through Reduced-Impact Logging. In the second area, 75 hectares were subjected to Conventional Logging. Finally, the third area was kept as a control area. Diameter at breast height was measured and species were identified within a 24.5-hectare plot randomly installed in each area in 1993 (before harvest), 1994 (six months after harvest), 1995, 1996, 1998, 2000, 2003, 2006, and 2009. The three areas were compared by fitting a diameter increment model (considering that the stochastic effect could follow four distributions other than the normal), a mortality probability model, and a recruitment probability model. The behavior of the diameter increment indicated that the harvested areas behave alike for almost all species groups, except for the group of intermediate species. Individuals subjected to harvesting show greater diameter growth than those in the unharvested area. In addition, assuming a Weibull distribution for the stochastic effect improved the fit of the models. Regarding the probability of mortality, the harvested areas again behaved similarly to each other but differently from the unharvested area, with individuals located in the harvested areas having a higher probability of death than those in the unharvested area. The recruitment probability models indicated differences only between the harvested areas and the control area, with the harvested areas showing a higher recruitment rate than the unharvested area. Therefore, the individual behavior of trees after harvesting is the same under Conventional Logging and Reduced-Impact Logging.
Abstract:
The study developed statistical techniques to evaluate visual field progression for use with the Humphrey Field Analyzer (HFA). The long-term fluctuation (LF) was evaluated in stable glaucoma. The magnitude of both LF components showed little relationship with MD, CPSD and SF. An algorithm was proposed for determining the clinical necessity of a confirmatory follow-up examination. The between-examination variability was determined for the HFA Standard and FASTPAC algorithms in glaucoma. FASTPAC exhibited greater between-examination variability than the Standard algorithm across the range of sensitivities and with increasing eccentricity. The difference in variability between the algorithms had minimal clinical significance. The effect of repositioning the baseline in the Glaucoma Change Probability Analysis (GCPA) was evaluated. The global baseline of the GCPA limited the detection of progressive change at a single stimulus location. A new technique, pointwise univariate linear regression (ULR) of absolute sensitivity, and of pattern deviation, against time to follow-up, was developed. In each case, pointwise ULR was more sensitive to localised progressive changes in sensitivity than ULR of MD alone. Small changes in sensitivity were more readily determined by pointwise ULR than by the GCPA. A comparison between the outcomes of pointwise ULR for all fields and for the last six fields revealed linear and curvilinear declines in absolute sensitivity and pattern deviation. A method for delineating progressive loss in glaucoma, based upon the error in the forecast sensitivity of a multivariate model, was developed. Multivariate forecasting exhibited little agreement with GCPA in glaucoma but showed promise for monitoring visual field progression in OHT patients. The recovery of sensitivity in optic neuritis over time was modelled with a cumulative Gaussian function. The rate and level of recovery were greater in the peripheral than in the central field. Probability models to forecast the field of recovery were proposed.
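A minimal sketch of the pointwise ULR idea (assumed Python, using an ordinary least-squares fit per stimulus location; the array layout, the significance rule, and all names are illustrative assumptions, not the HFA analysis itself):

```python
import numpy as np
from scipy.stats import linregress

def pointwise_ulr(times, sensitivities, alpha=0.05):
    """Illustrative pointwise univariate linear regression (ULR).

    times:         1-D array of follow-up times (years), one per examination
    sensitivities: 2-D array, shape (n_exams, n_locations), of sensitivity (dB)
    Returns, per stimulus location, the slope (dB/year) and whether the
    decline is flagged as significant (negative slope with p < alpha).
    """
    times = np.asarray(times, float)
    sens = np.asarray(sensitivities, float)
    slopes, progressing = [], []
    for loc in range(sens.shape[1]):
        fit = linregress(times, sens[:, loc])  # regression at one location
        slopes.append(fit.slope)
        progressing.append(fit.slope < 0 and fit.pvalue < alpha)
    return np.array(slopes), np.array(progressing)
```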
Abstract:
Добри Данков, Владимир Русинов, Мария Велинова, Жасмина Петрова - A chemical reaction is studied by means of two approaches to modeling the probability of a chemical reaction, using the Direct Simulation Monte Carlo (DSMC) method. The order of magnitude of the differences in temperatures and concentrations obtained with these approaches is examined. As the activity of the chemical reaction decreases, the differences between the concentrations and temperatures obtained by the two approaches also decrease. Keywords: Fluid mechanics, Kinetic theory, Rarefied gas, DSMC
Abstract:
The real purpose of collecting big data is to identify causality, in the hope that this will facilitate credible predictivity. But the search for causality can trap one in infinite regress, and so one takes refuge in seeking associations between variables in data sets. Regrettably, the mere knowledge of associations does not enable predictivity. Associations need to be embedded within the framework of probability calculus to make coherent predictions. This is so because associations are a feature of probability models, and hence they do not exist outside the framework of a model. Measures of association, like correlation, regression, and mutual information, merely refute a preconceived model. Estimated measures of association do not lead to a probability model; a model is the product of pure thought. This paper discusses these and other fundamentals that are germane to seeking associations in particular, and machine learning in general. ACM Computing Classification System (1998): H.1.2, H.2.4., G.3.
Abstract:
Introduction: Predictive scoring systems have been developed to measure disease severity and patient prognosis in the intensive care unit. These measures are useful for clinical decision-making, for standardizing research, and for comparing the quality of care of critically ill patients. Materials and methods: An observational, analytic cohort study in which the clinical records of 283 oncology patients admitted to the intensive care unit (ICU) between January 2014 and January 2016 were reviewed; their probability of mortality was estimated with the APACHE IV and MPM II prognostic scores. Logistic regression was performed with the predictor variables from which each of the models was derived in its original study, calibration and discrimination were assessed, and the Akaike (AIC) and Bayesian (BIC) information criteria were calculated. Results: In the performance evaluation of the prognostic scores, APACHE IV showed greater predictive ability (AUC = 0.95) than MPM II (AUC = 0.78); both models showed adequate calibration, with Hosmer-Lemeshow statistics of p = 0.39 for APACHE IV and p = 0.99 for MPM II. The ΔBIC of 2.9 provides positive evidence against APACHE IV. The AIC statistic was lower for APACHE IV, indicating that it is the model with the best fit to the data. Conclusions: APACHE IV performs well in predicting mortality in critically ill patients, including oncology patients. It is therefore a useful tool for clinicians in daily practice, allowing them to distinguish patients with a high probability of mortality.
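The evaluation pipeline described above (logistic fit, discrimination by AUC, calibration by Hosmer-Lemeshow, and the AIC/BIC criteria) can be sketched as follows; this is a hedged illustration with placeholder variable names, not the study's actual analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score
from scipy.stats import chi2

def evaluate_prognostic_model(X, y, n_groups=10):
    """Fit a logistic model and report discrimination (AUC), calibration
    (Hosmer-Lemeshow p-value) and the AIC/BIC information criteria."""
    y_arr = np.asarray(y)
    model = sm.Logit(y_arr, sm.add_constant(X)).fit(disp=0)
    p = np.asarray(model.predict(sm.add_constant(X)))

    auc = roc_auc_score(y_arr, p)  # discrimination

    # Hosmer-Lemeshow: compare observed vs expected events in risk deciles.
    data = pd.DataFrame({"y": y_arr, "p": p,
                         "g": pd.qcut(p, n_groups, duplicates="drop")})
    grouped = data.groupby("g", observed=True)
    obs, exp, n = grouped["y"].sum(), grouped["p"].sum(), grouped.size()
    hl = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    p_hl = chi2.sf(hl, len(n) - 2)  # calibration p-value

    return {"AUC": auc, "HL_p": p_hl, "AIC": model.aic, "BIC": model.bic}
```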