952 results for r-functions


Relevance: 70.00%

Abstract:

We construct $x^0\in\mathbb{R}^{\mathbb{N}}$ and a row-finite matrix $T=\{T_{i,j}(t)\}_{i,j\in\mathbb{N}}$ of polynomials in one real variable $t$ such that the Cauchy problem $\dot x(t)=T_t x(t)$, $x(0)=x^0$ in the Fréchet space $\mathbb{R}^{\mathbb{N}}$ has no solutions. We also construct a row-finite matrix $A=\{A_{i,j}(t)\}_{i,j\in\mathbb{N}}$ of $C^\infty(\mathbb{R})$ functions such that the Cauchy problem $\dot x(t)=A_t x(t)$, $x(0)=x^0$ in $\mathbb{R}^{\mathbb{N}}$ has no solutions for any $x^0\in\mathbb{R}^{\mathbb{N}}\setminus\{0\}$. We also give sufficient conditions for solvability and for unique solvability of linear ordinary differential equations $\dot x(t)=T_t x(t)$ whose matrix elements $T_{i,j}(t)$ depend analytically on $t$.

Relevance: 70.00%

Abstract:

MSC 2010: 30C10, 32A30, 30G35

Relevance: 60.00%

Abstract:

Many common diseases, such as the flu and cardiovascular disease, increase markedly in winter and dip in summer. These seasonal patterns have been part of life for millennia and were first noted in ancient Greece by both Hippocrates and Herodotus. Recent interest has focused on climate change and the concern that seasons will become more extreme, with harsher winter and summer weather. We describe a set of R functions designed to model seasonal patterns in disease. We illustrate some simple descriptive and graphical methods, a more complex method able to model non-stationary patterns, and the case-crossover method for controlling seasonal confounding.
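
The simplest descriptive approach of this kind is the cosinor model, which captures a smooth annual cycle with paired sine and cosine terms in a generalized linear model. A minimal sketch in base R (not the authors' own functions; the monthly counts and column names below are invented for illustration):

    # Cosinor-style seasonal model: a smooth annual cycle is represented
    # by paired sine/cosine terms in a Poisson regression of monthly counts.
    # 'cases' and 'month' are hypothetical stand-ins for real data.
    set.seed(1)
    df <- data.frame(month = 1:120,
                     cases = rpois(120, lambda = 50 + 15 * cos(2 * pi * (1:120) / 12)))
    fit <- glm(cases ~ cos(2 * pi * month / 12) + sin(2 * pi * month / 12),
               family = poisson, data = df)
    summary(fit)  # the cosine/sine coefficients encode amplitude and phase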

Relevance: 60.00%

Abstract:

Background: Heatwaves can cause excess deaths ranging from tens to thousands within a couple of weeks in a local area. The excess mortality due to a special event (e.g., a heatwave or an epidemic outbreak) is estimated by subtracting the mortality expected under 'normal' conditions from the historical daily mortality records. Calculating excess mortality is a scientific challenge because of the stochastic temporal pattern of daily mortality data, which is characterised by (a) long-term changes in mean level (i.e., non-stationarity) and (b) a non-linear temperature-mortality association. The Hilbert-Huang Transform (HHT) algorithm is a novel method originally developed in signal processing for analysing non-linear and non-stationary time series data; however, it had not previously been applied in public health research. This paper aims to demonstrate the applicability and strength of the HHT algorithm in analysing health data.

Methods: Special R functions were developed to implement the HHT algorithm and decompose the daily mortality time series into trend and non-trend components in terms of the underlying physical mechanism. The excess mortality is calculated directly from the resulting non-trend component series.

Results: Daily mortality time series from Brisbane (Queensland, Australia) and Chicago (United States) were used to calculate the excess mortality associated with heatwaves. The HHT algorithm estimated 62 excess deaths related to the February 2004 Brisbane heatwave. To calculate the excess mortality associated with the July 1995 Chicago heatwave, the HHT algorithm had to handle the mode-mixing issue; it estimated 510 excess deaths for that event. To exemplify potential applications, the HHT decomposition results for the Brisbane data were used as input to a subsequent regression analysis investigating the association between excess mortality and different risk factors.

Conclusions: The HHT algorithm is a novel and powerful analytical tool for time series data. It has real potential for a wide range of applications in public health research because of its ability to decompose a non-linear and non-stationary time series into trend and non-trend components consistently and efficiently.
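
Whatever algorithm produces the trend component, the excess-mortality step itself is plain arithmetic: subtract the trend (the 'normal' level) from the observed deaths over the event window and sum the remainder. A hedged base-R sketch, with a synthetic series and a moving average standing in for the HHT-decomposed mortality data:

    # Excess mortality = observed deaths minus the slowly varying trend,
    # summed over the event window. 'deaths' is synthetic; the moving
    # average is a crude stand-in for the HHT trend component.
    set.seed(1)
    deaths <- 20 + rpois(365, 2)
    deaths[40:47] <- deaths[40:47] + 8                          # simulated heatwave spike
    trend <- stats::filter(deaths, rep(1 / 29, 29), sides = 2)  # crude smooth trend
    window <- 40:47                                             # heatwave days
    excess <- sum(deaths[window] - trend[window], na.rm = TRUE)
    round(excess)                                               # estimated excess deaths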

Relevance: 60.00%

Abstract:

The microscopic properties of a two-dimensional model dense fluid of Lennard-Jones disks have been studied using the so-called "molecular dynamics" method. Analyses of the computer-generated simulation data in terms of "conventional" thermodynamic and distribution functions verify the physical validity of the model and the simulation technique.

The radial distribution functions g(r) computed from the simulation data exhibit several subsidiary features rather similar to those appearing in some of the g(r) functions obtained by X-ray and thermal neutron diffraction measurements on real simple liquids. In the case of the model fluid, these "anomalous" features are thought to reflect the existence of two or more alternative configurations for local ordering.
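
For readers unfamiliar with the quantity, g(r) is essentially a histogram of pairwise particle distances normalized by the expectation for an ideal (uncorrelated) gas at the same number density. A rough R sketch for a two-dimensional box (coordinates are synthetic, and edge effects are ignored; a real MD analysis would apply the minimum-image convention):

    # Radial distribution function g(r): histogram of pairwise distances,
    # normalized by the ideal-gas expectation (ring area x pair density).
    set.seed(1)
    n <- 200; L <- 20                          # particles, box side
    xy <- matrix(runif(2 * n, 0, L), ncol = 2)
    d  <- as.matrix(dist(xy))
    dd <- d[upper.tri(d)]                      # unique pairwise distances
    dr <- 0.1; rmax <- L / 2
    dd <- dd[dd < rmax]
    edges <- seq(0, rmax, by = dr)
    counts <- hist(dd, breaks = edges, plot = FALSE)$counts
    r <- edges[-1] - dr / 2                    # bin midpoints
    ideal <- (n * (n - 1) / 2) * (2 * pi * r * dr) / L^2   # expected pair counts
    plot(r, counts / ideal, type = "l", xlab = "r", ylab = "g(r)")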

Graphical display techniques have been used extensively to provide some intuitive insight into the various microscopic phenomena occurring in the model. For example, "snapshots" of the instantaneous system configurations for different times show that the "excess" area allotted to the fluid is collected into relatively large, irregular, and surprisingly persistent "holes". Plots of the particle trajectories over intervals of 2.0 to 6.0 × 10^-12 sec indicate that the mechanism for diffusion in the dense model fluid is "cooperative" in nature, and that extensive diffusive migration is generally restricted to groups of particles in the vicinity of a hole.

A quantitative analysis of diffusion in the model fluid shows that the cooperative mechanism is not inconsistent with the statistical predictions of existing theories of singlet, or self-diffusion in liquids. The relative diffusion of proximate particles is, however, found to be retarded by short-range dynamic correlations associated with the cooperative mechanism--a result of some importance from the standpoint of bimolecular reaction kinetics in solution.

A new, semi-empirical treatment for relative diffusion in liquids is developed, and is shown to reproduce the relative diffusion phenomena observed in the model fluid quite accurately. When incorporated into the standard Smoluchowski theory of diffusion-controlled reaction kinetics, the more exact treatment of relative diffusion is found to lower the predicted rate of reaction appreciably.

Finally, an entirely new approach to an understanding of the liquid state is suggested. Our experience in dealing with the simulation data--and especially, graphical displays of the simulation data--has led us to conclude that many of the more frustrating scientific problems involving the liquid state would be simplified considerably, were it possible to describe the microscopic structures characteristic of liquids in a concise and precise manner. To this end, we propose that the development of a formal language of partially-ordered structures be investigated.

Relevance: 60.00%

Abstract:

The effects of moisture, cation concentration, density, temperature, and grain size on the electrical resistivity of soils are examined using laboratory-prepared soils. An inexpensive method for preparing soils of different compositions was developed by mixing various size fractions in the laboratory. Moisture and cation concentration are related to soil resistivity by power functions, whereas soil resistivity and temperature, density, % gravel, % sand, % silt, and % clay are related by exponential functions. A total of 1066 cases (8528 data) from all the experiments were used in a step-wise multiple linear regression to determine the effect of each variable on soil resistivity. Six of the eight variables studied account for 92.57% of the total variance in soil resistivity, with a correlation coefficient of 0.96; the other two variables (silt and gravel) did not increase the explained variance. Moisture content was found to be the most important variable affecting soil resistivity, followed by % clay. These two variables account for 90.81% of the total variance in soil resistivity, with a correlation coefficient of 0.95. Based on these results, an equation to predict soil resistivity from moisture and % clay is developed. To test the predicted equation, resistivity measurements were made on natural soils both in situ and in the laboratory. The data show that field and laboratory measurements are comparable. The predicted regression line closely coincides with resistivity data from area A and area B soils (clayey and silty-clayey sands). Resistivity data and the predicted regression line in the case of clayey soils (clays > 40%) do not coincide, especially at less than 15% moisture. The regression equation overestimates the resistivity of soils from area C and underestimates it for area D soils. Laboratory-prepared high-clay soils give similar trends. The deviations are probably caused by the heterogeneous distribution of moisture and differences in the type of clays present in these soils.
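
A power-law relation such as resistivity = a * moisture^b becomes linear after taking logarithms, so it can be fitted by ordinary least squares; the exponential relations linearize the same way with a logarithm on the response only. A sketch in R with invented measurements (variable names and coefficients are illustrative, not the thesis data):

    # Power law in moisture, exponential in clay: both linearize under log,
    # so lm() recovers the exponents. All values below are synthetic.
    set.seed(1)
    moisture <- runif(50, 2, 30)                   # percent moisture
    clay     <- runif(50, 5, 45)                   # percent clay
    resistivity <- 800 * moisture^-1.2 * exp(-0.02 * clay) * exp(rnorm(50, 0, 0.1))
    fit <- lm(log(resistivity) ~ log(moisture) + clay)
    summary(fit)$r.squared   # compare with the 0.95-0.96 correlations reported above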

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modelling tool in every field where multivariate dependence is of interest, yet their use in clustering had not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted.

The second part contains the original contribution. A simulation study evaluates the ability of the K-means and hierarchical bottom-up clustering methods to identify clusters according to the dependence structure of the data-generating process. Different simulations are performed by varying conditions such as the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter, and the results are evaluated by means of different performance measures. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' for short) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, with their output, are given. The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and compared with model-based clustering using different performance measures, such as the percentage of well-identified numbers of clusters and the percentage of non-rejections of H0 on the dependence parameter. It is shown that CoClust overcomes all the observed limits of the other investigated clustering techniques and identifies clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. CoClust uses a criterion based on the maximized log-likelihood of the copula and can virtually account for any possible dependence relationship between observations. Several distinctive characteristics of CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
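
CoClust itself is the dissertation's own algorithm, so the snippet below only illustrates its basic building block: fitting a copula to a putative cluster and estimating the dependence parameter on which H0 is tested. It uses the CRAN copula package rather than the author's code, and the data and the Clayton family are illustrative assumptions:

    # Fit a copula to one candidate cluster: transform the margins to
    # pseudo-observations, then maximize the copula pseudo-likelihood.
    library(copula)
    set.seed(1)
    cl <- claytonCopula(param = 2, dim = 2)
    u  <- rCopula(500, cl)                    # synthetic "cluster" with known dependence
    fit <- fitCopula(claytonCopula(dim = 2), pobs(u), method = "mpl")
    coef(fit)   # estimated dependence parameter (about 2 here)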

Relevance: 60.00%

Abstract:

BACKGROUND: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g. housekeeping genes), from which a baseline reference level is constructed. The choice of control genes is thus of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones.

RESULTS: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one which is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list which had not previously been used as control genes are identified and validated by RT-QPCR. Open-source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/

CONCLUSION: We proposed a new method for identifying candidate control genes for RT-QPCR that is able to rank thousands of genes according to predefined suitability criteria, and we applied it to the case of breast cancer. We also showed empirically that translating the results from the microarray to the PCR platform is achievable.
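
The core idea, ranking genes by expression stability across many arrays, can be sketched in a few lines of base R. This is a simplified stand-in for the paper's scoring procedure (the expression matrix and the coefficient-of-variation criterion below are assumptions, not the authors' exact score):

    # A good control gene is expressed at a stable level across samples:
    # score each gene by its coefficient of variation and rank ascending.
    # 'expr' is a synthetic genes-by-samples matrix standing in for microarray data.
    set.seed(1)
    expr <- matrix(2^rnorm(10000 * 20, mean = 8), nrow = 10000,
                   dimnames = list(paste0("gene", 1:10000), NULL))
    cv <- apply(expr, 1, sd) / rowMeans(expr)
    head(names(sort(cv)), 10)   # top-10 most stable candidate control genes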

Relevance: 60.00%

Abstract:

Second-order functions are increasingly used in the analysis of ecological processes. In this work we present two recently developed second-order functions that make it possible to analyse the spatio-temporal interaction between two species or functional types of individuals. These functions were developed for the study of interactions between species in forest stands, starting from the current diameter distribution of the trees. The first is the bivariate function for marked point processes, Krsmm, which analyses the spatial correlation of a variable between individuals belonging to two species as a function of distance. The second is the replacement function r, which analyses the association between individuals belonging to two species as a function of the difference between their diameters or another variable associated with those individuals. To show the behaviour of both functions in the analysis of forest systems in which different ecological processes operate, three case studies are presented: a mixed stand of Pinus pinea L. and Pinus pinaster Ait. on the Meseta Norte, a cloud forest in the tropical Andean region, and the ecotone between Quercus pyrenaica Willd. and Pinus sylvestris L. stands in the Sistema Central. In all three, both the Krsmm function and the r function are used to analyse forest dynamics from experimental plots with all trees mapped and from inventory plots.
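
The Krsmm and replacement functions themselves are the authors' own, but the same family of second-order analyses is available in the R package spatstat. As a rough analogue only, the sketch below computes a bivariate (cross-type) K function between two species on invented coordinates:

    # Cross-type second-order analysis: Kcross() measures how individuals
    # of species j are distributed around individuals of species i.
    library(spatstat)
    set.seed(1)
    pts <- ppp(x = runif(200), y = runif(200), window = owin(c(0, 1), c(0, 1)),
               marks = factor(sample(c("pinea", "pinaster"), 200, replace = TRUE)))
    K12 <- Kcross(pts, i = "pinea", j = "pinaster")
    plot(K12)   # empirical curve versus the Poisson (no-interaction) benchmark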

Relevance: 60.00%

Abstract:

In the engineering design of structural shapes, flat-plate analysis results can be generalized to predict the behavior of complete structural shapes. Accordingly, the purpose of this project is to analyze a thin flat plate under conductive heat transfer and to simulate the temperature distribution, thermal stresses, total displacements, and buckling deformations. The current approach in such cases has been the Finite Element Method (FEM), whose basis is the construction of a conforming mesh. In contrast, this project uses the mesh-free Scan Solve Method, which eliminates the meshing limitation by using a non-conforming mesh. I implemented this modeling process by developing numerical algorithms and software tools to model thermally induced buckling. In addition, a convergence analysis was performed, and the results were compared with FEM. In conclusion, the results demonstrate that the method gives solutions similar in quality to FEM while being computationally less time-consuming.

Relevance: 60.00%

Abstract:

Starting from an analysis of the problems encountered in the conceptual design phase, the various three-dimensional modelling techniques are presented, with particular attention to the subdivision method and the algorithms that govern it (Chaikin, Doo-Sabin). Some application examples of free-form and skeleton modelling are then proposed, followed by a comparison, on the same models, of the sequences and operations required with traditional parametric modelling techniques. An example of the use of the software IronCAD, the first software to unite parametric and direct modelling, is reported. The limitations of parametric and history-free modelling in the conceptual phase of a project are described, leading to a definition of the characteristics of hybrid modeling, a new approach to modelling. The prototype under development, which attempts to apply the concepts of hybrid modeling concretely and is intended as the starting point for a new generation of CAD software, is briefly presented. Finally, the possibility of obtaining real-time simulations on models undergoing topological modifications is presented. Real-time simulation is made possible by reformulating the linear elastic problem in parametric form, which is then solved by the joint application of R-Functions and the PGD method. Examples of real-time simulation follow.
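
For readers unfamiliar with R-functions (Rvachev functions): they encode Boolean set operations on implicit region descriptions $f \ge 0$ as real-valued functions, which is what lets the elastic problem track topological modifications parametrically. In the most common (R0) system, the intersection and union of regions described by $f_A$ and $f_B$ are

$$f_{A\cap B} = f_A + f_B - \sqrt{f_A^2 + f_B^2}, \qquad f_{A\cup B} = f_A + f_B + \sqrt{f_A^2 + f_B^2},$$

which are non-negative exactly where the intersection and the union are, so a single smooth expression can describe a geometry whose topology changes with its parameters.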

Relevance: 40.00%

Abstract:

It is proved that the Riesz means $S_R^\delta f$, $\delta > 0$, for the Hermite expansions on $\mathbb{R}^n$, $n \ge 2$, satisfy the uniform estimates $\|S_R^\delta f\|_p \le C\,\|f\|_p$ for all radial functions if and only if $p$ lies in the interval $\frac{2n}{n+1+2\delta} < p < \frac{2n}{n-1-2\delta}$.