Abstract:
Genetic anticipation is defined as a decrease in age of onset, or an increase in severity, as a disorder is transmitted through successive generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy and Fragile X Syndrome, was shown to be caused by the expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia) and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. We have therefore developed family-based likelihood modeling approaches that model the underlying transmission of the disease gene and the penetrance function, and hence detect anticipation. These methods can be applied in extended families, improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model and models anticipation by a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies on data simulated under different conditions. Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible with the current formulation of the software. Application of the first method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
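The first of these two methods lends itself to a compact illustration. Below is a minimal sketch, not the authors' family-based likelihood: a discrete-time logistic hazard fitted to person-period data with a generational covariate, where a positive generation coefficient (a higher onset hazard, hence earlier onset, in later generations) is read as evidence of anticipation. The variable names and the data layout are assumptions for illustration.

```python
# Minimal sketch: discrete-time logistic hazard with a generational covariate.
# Illustrative only; the paper's method is a family-based likelihood that
# models gene transmission and penetrance, not this person-period GLM.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function
from scipy.stats import chi2

def neg_log_lik(beta, age, gen, onset):
    """Person-period data: one row per subject-year at risk."""
    b0, b_age, b_gen = beta
    h = expit(b0 + b_age * age + b_gen * gen)   # hazard of onset this year
    h = np.clip(h, 1e-12, 1 - 1e-12)
    return -np.sum(onset * np.log(h) + (1 - onset) * np.log(1 - h))

def lr_test_anticipation(age, gen, onset):
    """Likelihood-ratio test of the generation effect (H0: b_gen = 0)."""
    full = minimize(neg_log_lik, np.zeros(3), args=(age, gen, onset))
    null = minimize(lambda b: neg_log_lik(np.array([b[0], b[1], 0.0]),
                                          age, gen, onset), np.zeros(2))
    stat = 2.0 * (null.fun - full.fun)
    return full.x, stat, chi2.sf(stat, df=1)   # estimates, LR stat, p-value
```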
Abstract:
This thesis project is motivated by the problem of using observational data to draw causal inferences in epidemiologic research when controlled randomization is not applicable. The instrumental variable (IV) method is one statistical tool for overcoming this problem; a Mendelian randomization study uses genetic variants as IVs in a genetic association study. In this thesis, the IV method, as well as standard logistic and linear regression models, is used to investigate the causal association between the risk of pancreatic cancer and circulating levels of soluble receptor for advanced glycation end-products (sRAGE). Higher levels of serum sRAGE were found to be associated with a lower risk of pancreatic cancer in a previous observational study (255 cases and 485 controls); however, such a novel association may be biased by unknown confounding factors. In a case-control study, we aimed to use the IV approach to confirm or refute this observation in the subset of study subjects for whom genotyping data were available (178 cases and 177 controls). A two-stage IV analysis using generalized method of moments-structural mean models (GMM-SMM) was conducted and the relative risk (RR) was calculated. In the first-stage analysis, we found that the single nucleotide polymorphism (SNP) rs2070600 of the receptor for advanced glycation end-products (AGER) gene met all three general assumptions for a genetic IV in examining the causal association between sRAGE and risk of pancreatic cancer: the variant allele was associated with lower levels of sRAGE, and it was associated neither with the risk of pancreatic cancer nor with the confounding factors. It was also a potentially strong IV (F statistic = 29.2). However, in the second-stage analysis, the GMM-SMM model failed to converge due to non-concavity, probably because of the small sample size. Therefore, the IV analysis could not support the causality of the association between serum sRAGE levels and risk of pancreatic cancer. Nevertheless, these analyses suggest that rs2070600 is a potentially good genetic IV for testing this causal relationship; a larger sample size is required to conduct a credible IV analysis.
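To make the two-stage logic concrete, here is a hedged sketch in plain NumPy of a generic two-stage IV (2SLS) estimate together with the first-stage F statistic used to judge instrument strength. It is not the GMM-SMM estimator used in the thesis, which is designed for the binary case-control outcome; in the illustration, z stands for the genotype instrument, x for the sRAGE level and y for the outcome.

```python
# Minimal two-stage IV sketch (2SLS for a continuous outcome) with the
# first-stage F statistic. Not the thesis's GMM-SMM estimator; z, x and y
# are illustrative stand-ins for genotype, exposure and outcome.
import numpy as np

def two_stage_iv(z, x, y):
    n = len(y)
    Z = np.column_stack([np.ones(n), z])
    # Stage 1: regress the exposure on the instrument.
    g = np.linalg.lstsq(Z, x, rcond=None)[0]
    x_hat = Z @ g
    # First-stage F statistic for the single instrument (1 numerator df).
    rss = np.sum((x - x_hat) ** 2)
    tss = np.sum((x - x.mean()) ** 2)
    F = (tss - rss) / (rss / (n - 2))
    # Stage 2: regress the outcome on the fitted exposure.
    X2 = np.column_stack([np.ones(n), x_hat])
    b = np.linalg.lstsq(X2, y, rcond=None)[0]
    return b[1], F   # causal-effect estimate and instrument-strength F
```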
Abstract:
This paper proposes a method for the landscape characterisation and assessment of public works associated with fluvial landscapes, validated on the middle section of the Tajo River. In this method, a set of criteria is identified that unifies various characteristics of the landscape associated with the infrastructures. A specific weight is then assigned to each criterion so as to produce a semi-quantitative value ranging from 0 to 10. Taken together, these criteria enable us to describe and assess the value of the public works selected for study, in this case helping us to evaluate the sections of the Tajo River analysed in our study area. Accordingly, the value of all the infrastructures associated with a stretch of the river covering several hundred kilometres was determined; after dividing this stretch into sections, the sections were compared under equivalent conditions to provide a hierarchical ranking.
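The scoring scheme can be made concrete with a small sketch. The criterion names and weights below are hypothetical placeholders, not the set defined in the paper; all that is carried over is the mechanism of weighting criterion scores into a single semi-quantitative value between 0 and 10.

```python
# Sketch of the semi-quantitative valuation: criterion scores are weighted
# and combined into one value per public work on a 0-10 scale. Criterion
# names and weights are hypothetical placeholders.
def landscape_value(scores: dict[str, float], weights: dict[str, float]) -> float:
    """scores: criterion -> value in [0, 10]; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical example: three criteria for one structure on the studied stretch.
weights = {"heritage": 0.40, "visual_quality": 0.35, "integration": 0.25}
scores = {"heritage": 8.0, "visual_quality": 6.0, "integration": 9.0}
print(landscape_value(scores, weights))  # -> 7.55
```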
Abstract:
Dynamic soil-structure interaction has long been one of the most fascinating areas of the engineering profession. The building of large alternating machines, and their effects on surrounding structures as well as on their own functional behaviour, provided the initial impetus; a large amount of experimental research was done, and the results of the Russian and German groups were especially worthwhile. Analytical results by Reissner and Shekhter were re-examined by Quinlan, Sung, et al., and finally Veletsos presented the first set of reliable results. Since then, modeling the homogeneous, elastic halfspace as an equivalent set of springs and dashpots has become an everyday tool in soil engineering practice, especially after the appearance of the fast Fourier transform algorithm, which makes it possible to treat the frequency-dependent characteristics of the equivalent elements in a unified fashion with the general method of analysis of the structure. Extensions to the viscoelastic case, as well as to embedded foundations and complicated geometries, have been presented by various authors. In general, they used the finite element method, with the well-known problems of geometric truncation and the consequent need for absorbing boundaries. The properties of boundary integral equation methods are, in our opinion, especially well suited to this problem, and several previous results have confirmed our view. In what follows we present the general features of steady-state elastodynamics and a series of examples showing the excellent results that the BIEM provides. Especially interesting are the outputs obtained through the use of so-called singular elements, whose description is incorporated at the end of the paper. The reduction in computer time and the small number of elements needed to simulate realistically the global properties of the halfspace make this procedure one of the most interesting applications of the BIEM.
Abstract:
This article considers the evaluation of the "viability" of innovation projects. Hidden Markov Models are used as the evaluation method. The problem solved is that of determining the model parameters which reproduce the test data with the highest accuracy. The model is trained on statistical data from the implementation of innovation projects, with the Baum-Welch algorithm used as the training algorithm.
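As a concrete reference for the training step, here is a minimal NumPy sketch of the Baum-Welch algorithm for a discrete-observation HMM (scaled forward-backward passes followed by re-estimation). The state and observation semantics for project viability are the authors' own and are not reproduced here.

```python
# Minimal Baum-Welch sketch for a discrete-observation HMM. A is the NxN
# transition matrix, B the NxM emission matrix, pi the initial distribution,
# obs an integer observation sequence; all are NumPy float arrays.
import numpy as np

def baum_welch(obs, A, B, pi, n_iter=50):
    obs = np.asarray(obs)
    A, B = A.copy(), B.copy()          # avoid mutating the caller's arrays
    N, T = A.shape[0], len(obs)
    for _ in range(n_iter):
        # Forward pass, scaled to avoid underflow.
        alpha, c = np.zeros((T, N)), np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # Backward pass with the same scaling.
        beta = np.zeros((T, N)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        # State and transition posteriors.
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        xi = np.zeros((T - 1, N, N))
        for t in range(T - 1):
            xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi[t] /= xi[t].sum()
        # Re-estimation of the model parameters.
        pi = gamma[0]
        A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        for k in range(B.shape[1]):
            B[:, k] = gamma[obs == k].sum(axis=0)
        B /= gamma.sum(axis=0)[:, None]
    return A, B, pi
```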
Abstract:
Objective: Expectancies about the outcomes of alcohol consumption are widely accepted as important determinants of drinking. This construct is increasingly recognized as a significant element of psychological interventions for alcohol-related problems. Much effort has been invested in producing reliable and valid instruments to measure this construct for research and clinical purposes, but very few have had their factor structure subjected to adequate validation. Among them, the Drinking Expectancies Questionnaire (DEQ) was developed to address some theoretical and design issues with earlier expectancy scales. Exploratory factor analyses, in addition to validity and reliability analyses, were performed when the original questionnaire was developed. The object of this study was to undertake a confirmatory analysis of the factor structure of the DEQ. Method: Confirmatory factor analysis through LISREL 8 was performed using a randomly split sample of 679 drinkers. Results: Results suggested that a new 5-factor model, which differs slightly from the original 6-factor version, was a more robust measure of expectancies. A new method of scoring the DEQ consistent with this factor structure is presented. Conclusions: The present study shows more robust psychometric properties of the DEQ using the new factor structure.
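The confirmatory step was run in LISREL 8. For orientation only, an equivalent confirmatory factor analysis might look like the following sketch in Python using the open-source semopy package; this is an assumed substitute tool, and the factor names, item names and data file are placeholders rather than the DEQ's actual five-factor structure.

```python
# Hedged sketch of a confirmatory factor analysis in Python with semopy,
# standing in for the LISREL 8 analysis reported in the study. Factor and
# item names are placeholders, not the actual DEQ items or factors.
import pandas as pd
from semopy import Model

model_desc = """
Tension_Reduction =~ item1 + item2 + item3
Assertiveness =~ item4 + item5 + item6
"""

data = pd.read_csv("deq_responses.csv")  # hypothetical item-level data
model = Model(model_desc)
model.fit(data)
print(model.inspect())  # factor loadings and parameter estimates
```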
Abstract:
Pseudo-ternary phase diagrams of the polar lipids Quil A, cholesterol (Chol) and phosphatidylcholine (PC) in aqueous mixtures prepared by the lipid film hydration method (where a dried lipid film of phospholipid and cholesterol is hydrated by an aqueous solution of Quil A) were investigated in terms of the types of particulate structures formed therein. Negative staining transmission electron microscopy and polarized light microscopy were used to characterize the colloidal and coarse dispersed particles present in the systems. Pseudo-ternary phase diagrams were established for lipid mixtures hydrated in water and in Tris buffer (pH 7.4). The effect of equilibration time was also studied for systems hydrated in water, where the samples were stored for 2 months at 4 °C. Depending on the mass ratio of Quil A, Chol and PC in the systems, various colloidal particles including ISCOM matrices, liposomes, ring-like micelles and worm-like micelles were observed. Other colloidal particles were also observed as minor structures in the presence of these predominant colloids, including helices, layered structures and lamellae (hexagonal pattern of ring-like micelles). In terms of the conditions which appeared to promote the formation of ISCOM matrices, the area of the phase diagrams associated with systems containing these structures increased in the order: hydrated in water/short equilibration period < hydrated in buffer/short equilibration period < hydrated in water/prolonged equilibration period. ISCOM matrices appeared to form over time from samples which initially contained a high concentration of ring-like micelles, suggesting that these colloidal structures may be precursors to ISCOM matrix formation. Helices were also frequently found as a minor colloidal structure in samples containing ISCOM matrices. Equilibration time and the presence of buffer salts also promoted the formation of liposomes in systems not containing Quil A. These parameters, however, did not appear to significantly affect the occurrence and predominance of other structures present in the pseudo-binary systems containing Quil A. Pseudo-ternary phase diagrams of PC, Chol and Quil A are important for identifying combinations that will produce different colloidal structures, particularly ISCOM matrices, by the method of lipid film hydration. Colloidal structures comprising these three components are readily prepared by hydration of dried lipid films and may have application in vaccine delivery, where the functionality of ISCOMs has clearly been demonstrated. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
We present a new method of modeling imaging of laser beams in the presence of diffraction. Our method is based on the concept of first orthogonally expanding the resultant diffraction field (that would have otherwise been obtained by the laborious application of the Huygens diffraction principle) and then representing it by an effective multimodal laser beam with different beam parameters. We show not only that the process of obtaining the new beam parameters is straightforward but also that it permits a different interpretation of the diffraction-caused focal shift in laser beams. All of the criteria that we have used to determine the minimum number of higher-order modes needed to accurately represent the diffraction field show that the mode-expansion method is numerically efficient. Finally, the characteristics of the mode-expansion method are such that it allows modeling of a vast array of diffraction problems, regardless of the characteristics of the incident laser beam, the diffracting element, or the observation plane. (C) 2005 Optical Society of America.
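The central step, orthogonally expanding the diffraction field and reading off the coefficients of an effective multimodal beam, can be illustrated in one transverse dimension. The sketch below projects a scalar field onto normalised Hermite-Gaussian modes by numerical overlap integrals; the hard aperture, waist and mode count are assumptions for illustration, and the paper's formulation (including the re-fitted beam parameters) is more general.

```python
# 1-D illustration of the mode-expansion idea: project a diffracted scalar
# field onto Hermite-Gaussian modes via overlap integrals. The aperture and
# parameter values are hypothetical.
import numpy as np
from scipy.special import eval_hermite, factorial

def hg_mode(n, x, w):
    """Normalised 1-D Hermite-Gaussian mode with waist w (unit L2 norm)."""
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * w * np.sqrt(np.pi / 2))
    return norm * eval_hermite(n, np.sqrt(2) * x / w) * np.exp(-(x / w) ** 2)

def expansion_coeffs(field, x, w, n_max):
    """c_n = <u_n, field>, computed on a uniform grid."""
    dx = x[1] - x[0]
    return np.array([np.sum(hg_mode(n, x, w) * field) * dx
                     for n in range(n_max + 1)])

# Hypothetical case: Gaussian beam truncated by a hard-edged aperture.
x = np.linspace(-5.0, 5.0, 4001)
w0 = 1.0
field = np.exp(-(x / w0) ** 2) * (np.abs(x) <= 2.0)
c = expansion_coeffs(field, x, w0, n_max=20)
# Fraction of field power captured: one possible criterion for choosing the
# minimum number of higher-order modes.
dx = x[1] - x[0]
captured = np.sum(c**2) / (np.sum(field**2) * dx)
```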
Abstract:
The country-product-dummy (CPD) method, originally proposed in Summers (1973), has recently been revisited in its weighted formulation to handle a variety of data-related situations (Rao and Timmer, 2000, 2003; Heravi et al., 2001; Rao, 2001; Aten and Menezes, 2002; Heston and Aten, 2002; Deaton et al., 2004). The CPD method is also increasingly being used in the context of hedonic modelling, rather than for its original purpose in Summers (1973) of filling holes in incomplete price data. However, the CPD method is seen among practitioners as a black box, owing to its regression formulation. The main objective of this paper is to establish the equivalence of the purchasing power parities and international prices derived from the weighted-CPD method with those arising out of the Rao-system for multilateral comparisons. A major implication of this result is that the weighted-CPD method would then be a natural method of aggregation at all levels of aggregation within the context of international comparisons.
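For practitioners who see the method as a black box, the regression itself is short enough to write out. The sketch below implements the (weighted) CPD regression ln p_ij = pi_i + eta_j + e_ij with country and product dummies, taking the first country as numeraire so that exp(pi_i) gives the purchasing power parities and exp(eta_j) the international prices. The data layout and weighting are illustrative; this is not the Rao-system computation the paper compares against.

```python
# Sketch of the weighted CPD regression: ln p_ij = pi_i + eta_j + e_ij.
# exp(pi_i) are PPPs (first country = numeraire), exp(eta_j) international
# prices. Weights (e.g. expenditure shares) give the weighted-CPD variant.
import numpy as np

def cpd(countries, products, log_prices, weights=None):
    cs, ps = sorted(set(countries)), sorted(set(products))
    n = len(log_prices)
    X = np.zeros((n, len(cs) - 1 + len(ps)))
    for r, (ci, pj) in enumerate(zip(countries, products)):
        if cs.index(ci) > 0:                       # country dummies, base omitted
            X[r, cs.index(ci) - 1] = 1.0
        X[r, len(cs) - 1 + ps.index(pj)] = 1.0     # product dummies
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * X, sw * np.asarray(log_prices),
                           rcond=None)[0]
    ppp = np.exp(np.concatenate([[0.0], beta[:len(cs) - 1]]))
    intl = np.exp(beta[len(cs) - 1:])
    return dict(zip(cs, ppp)), dict(zip(ps, intl))
```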
Abstract:
Background/Aims: Positron emission tomography has been applied to study cortical activation during human swallowing, but it employs radio-isotopes, precluding repeated experiments, and must be performed supine, making the task of swallowing difficult. Here we describe Synthetic Aperture Magnetometry (SAM) as a novel method of localising and imaging the brain's neuronal activity from magnetoencephalographic (MEG) signals, used to study the cortical processing of human volitional swallowing in the more physiological seated position. Methods: In 3 healthy male volunteers (age 28–36), 151-channel whole-cortex MEG (Omega-151, CTF Systems Inc.) was recorded whilst seated during the conditions of repeated volitional wet swallowing (5 ml boluses at 0.2 Hz) or rest. SAM analysis was then performed using varying spatial filters (5–60 Hz) before co-registration with individual MRI brain images. Activation areas were then identified using standard stereotactic-space neuro-anatomical maps. In one subject, repeat studies were performed to confirm the initial findings. Results: In all subjects, cortical activation maps for swallowing could be generated using SAM, the strongest activations being seen with 10–20 Hz filter settings. The main cortical activations associated with swallowing were in the sensorimotor cortex (BA 3,4), insular cortex and lateral premotor cortex (BA 6,8). Of relevance, each cortical region displayed consistent inter-hemispheric asymmetry to one or other hemisphere, this being different for each region and each subject. Intra-subject comparisons of activation localisation and asymmetry showed impressive reproducibility. Conclusion: SAM analysis using MEG is an accurate, repeatable and reproducible method for studying the brain processing of human swallowing in a more physiological manner, and it provides novel opportunities for future studies of the brain-gut axis in health and disease.
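For orientation, the beamformer core underlying SAM-type source imaging can be sketched compactly: minimum-variance weights are formed from the sensor covariance and a source forward field, and swallow and rest source power are contrasted voxel by voxel. The snippet is illustrative only; SAM proper also optimises source orientation at each voxel, and this is not the CTF implementation used in the study.

```python
# Minimal beamformer sketch (illustrative, not SAM/CTF's implementation):
# weights w = C^-1 l / (l' C^-1 l) from sensor covariance C and forward
# field l, then a normalised task-vs-rest source-power contrast.
import numpy as np

def beamformer_contrast(C_task, C_rest, leadfield, reg=1e-8):
    """C_task, C_rest: sensor covariances; leadfield: (n_sensors,) vector."""
    C = 0.5 * (C_task + C_rest)
    Ci = np.linalg.inv(C + reg * np.trace(C) / len(C) * np.eye(len(C)))
    w = Ci @ leadfield / (leadfield @ Ci @ leadfield)
    p_task = w @ C_task @ w        # source power during swallowing
    p_rest = w @ C_rest @ w        # source power at rest
    return (p_task - p_rest) / (p_task + p_rest)
```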
Abstract:
A method of determining the spatial pattern of any histological feature in sections of brain tissue which can be measured quantitatively is described and compared with a previously described method. A measurement of a histological feature such as density, area, amount or load is obtained for a series of contiguous sample fields. The regression coefficient (β) is calculated from the measurements taken in pairs, first in pairs of adjacent samples and then in pairs of samples taken at increasing degrees of separation between them, i.e. separated by 2, 3, 4, ..., n units. A plot of β versus the degree of separation between the pairs of sample fields reveals whether the histological feature is distributed randomly, uniformly or in clusters. If the feature is clustered, the analysis determines whether the clusters are randomly or regularly distributed, the mean size of the clusters and the spacing of the clusters. The method is simple to apply and interpret and is illustrated using simulated data and studies of the spatial patterns of blood vessels in the cerebral cortex of normal brain, the degree of vacuolation of the cortex in patients with Creutzfeldt-Jakob disease (CJD) and the characteristic lesions present in Alzheimer's disease (AD). Copyright (C) 2000 Elsevier Science B.V.
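The analysis is simple enough to sketch directly. Given measurements from a run of contiguous sample fields, the snippet below computes the regression coefficient β for pairs at each degree of separation; plotting the result against separation gives the diagnostic curve described above. The implementation details are ours, not the authors'.

```python
# Sketch of the described analysis: regress the second member of each pair
# of sample-field measurements on the first, at separations d = 1, 2, 3, ...
# The shape of beta versus d distinguishes random, uniform and clustered
# spatial patterns and indicates cluster size and spacing.
import numpy as np

def beta_vs_separation(y, max_sep):
    y = np.asarray(y, dtype=float)
    betas = []
    for d in range(1, max_sep + 1):
        a, b = y[:-d], y[d:]                 # pairs separated by d fields
        betas.append(np.cov(a, b)[0, 1] / np.var(a, ddof=1))
    return np.array(betas)                   # plot against d = 1..max_sep
```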
Abstract:
Numerical techniques have found increasing use in all aspects of fracture mechanics, and often provide the only means of analysing fracture problems. The work presented here is concerned with the application of the finite element method to cracked structures. The present work was directed towards the establishment of a comprehensive two-dimensional, linear elastic, finite element fracture analysis package. Significant progress has been made to this end, and features which can now be studied include multi-crack-tip mixed-mode problems involving partial crack closure. The crack tip core element was refined, and special local crack tip elements were employed to reduce the element density in the neighbourhood of the core region. The work builds upon experience gained by previous research workers and, as part of the general development, the program was modified to incorporate the eight-node isoparametric quadrilateral element. Also, a more flexible solving routine was developed, which provided a very compact method of solving large sets of simultaneous equations stored in segmented form. To complement the finite element analysis programs, an automatic mesh generation program has been developed, which enables complex problems involving fine element detail to be investigated with a minimum of input data. The scheme has proven to be versatile and reasonably easy to implement. Numerous examples are given to demonstrate the accuracy and flexibility of the finite element technique.
Abstract:
We propose and investigate a method for the stable determination of a harmonic function from knowledge of its value and its normal derivative on a part of the boundary of the (bounded) solution domain (Cauchy problem). We reformulate the Cauchy problem as an operator equation on the boundary using the Dirichlet-to-Neumann map. To discretize the obtained operator, we modify and employ a method denoted as Classic II given in [J. Helsing, Faster convergence and higher accuracy for the Dirichlet–Neumann map, J. Comput. Phys. 228 (2009), pp. 2578–2586, Section 3], which is based on Fredholm integral equations and Nyström discretization schemes. Then, for stability reasons, to solve the discretized integral equation we use the method of smoothing projection introduced in [J. Helsing and B.T. Johansson, Fast reconstruction of harmonic functions from Cauchy data using integral equation techniques, Inverse Probl. Sci. Eng. 18 (2010), pp. 381–399, Section 7], which makes it possible to solve the discretized operator equation in a stable way with minor computational cost and high accuracy. With this approach, for sufficiently smooth Cauchy data, the normal derivative can also be accurately computed on the part of the boundary where no data is initially given.
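For clarity, the Cauchy problem being solved can be stated in symbols; the notation below is ours, chosen to match the description above.

```latex
% Omega is the bounded solution domain, Gamma_0 a part of its boundary
% carrying the Cauchy data, and \nu the outward unit normal.
\[
  \Delta u = 0 \quad \text{in } \Omega, \qquad
  u = f \quad \text{on } \Gamma_0, \qquad
  \partial_\nu u = g \quad \text{on } \Gamma_0,
\]
% and the task is to recover u (and \partial_\nu u) on the remaining part
% \partial\Omega \setminus \Gamma_0. The reformulation uses the
% Dirichlet-to-Neumann map, which sends Dirichlet boundary values of a
% harmonic function to its normal derivative on the boundary.
```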
Abstract:
Microfluidics has recently emerged as a new method of manufacturing liposomes, allowing reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters of a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiments and multivariate data analysis were used to increase process understanding and to develop predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes, with a strong correlation between FRR and vesicle size, demonstrating the ability to control liposome size in-process; liposomes of 50 nm were reproducibly manufactured. Furthermore, we demonstrate the potential for high-throughput manufacturing of liposomes using microfluidics, with a four-fold increase in the volumetric flow rate while maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and was modelled using predictive modelling. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows the generation of a design space for controlled particle characteristics.
Abstract:
For the development of communication systems such as the Internet of Things, integrating communication with the power supply is an attractive way to reduce cost. This paper presents a novel method of power/signal dual modulation (PSDM), by which signal transmission is integrated with power conversion. The method uses the intrinsic ripple generated in switch-mode power supplies as the signal carrier, so that cost-effective communication can be realized. The principles of PSDM are discussed, and two basic dual modulation methods (specifically PWM/FSK and PWM/PSK) are derived. The key points in designing a PWM/FSK system, including topology selection, carrier shape and carrier frequency, are discussed to provide theoretical guidelines. A practical signal modulation-demodulation method is given, and a prototype system provides experimental results that verify the effectiveness of the proposed solution.
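The PWM/FSK variant, keying the switching frequency between two values per data bit while the duty cycle continues to regulate the power stage, can be sketched in a few lines. All parameter values below (switching frequencies, duty cycle, bit time, sample rate) are hypothetical, not those of the prototype.

```python
# Sketch of PWM/FSK: each data bit keys the switching frequency between f0
# and f1 while the duty cycle stays fixed, so the output ripple frequency
# carries the bit. All parameter values are hypothetical.
import numpy as np

def pwm_fsk_gate(bits, f0=100e3, f1=120e3, duty=0.4, bit_time=1e-3, fs=10e6):
    """Boolean gate-drive samples: frequency-keyed PWM at constant duty."""
    n = int(bit_time * fs)                      # samples per bit
    freq = np.where(np.repeat(np.asarray(bits), n), f1, f0)
    phase = np.cumsum(freq) / fs                # phase in cycles
    return (phase % 1.0) < duty                 # high within each duty window

# A receiver can demodulate by estimating the dominant ripple frequency in
# each bit window (e.g. zero-crossing counting or an FFT) and deciding f0/f1.
gate = pwm_fsk_gate(np.array([1, 0, 1, 1, 0]))
```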