924 results for elliptic curve
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis.

Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains in order to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference.

Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or have arisen just by chance. The similarity of two sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
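The p-value defined above can be made concrete with a small Monte Carlo sketch, shown purely for illustration: it scores two sequences with a basic Smith–Waterman routine and estimates how often randomly shuffled (null) sequences reach the observed score. This is the naive sampling approach that the thesis's analytical upper-bound framework is designed to avoid; the scoring parameters and sequences below are hypothetical.

```python
import random

def smith_waterman(a, b, match=1, mismatch=-1, gap=-2):
    """Best local alignment score of sequences a and b (linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, rows):
        curr = [0] * cols
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

def empirical_p_value(a, b, n_samples=200, seed=0):
    """Estimate P(null score >= observed score) by shuffling both sequences."""
    rng = random.Random(seed)
    observed = smith_waterman(a, b)
    hits = 0
    for _ in range(n_samples):
        a_null = "".join(rng.sample(a, len(a)))  # shuffling preserves composition
        b_null = "".join(rng.sample(b, len(b)))
        if smith_waterman(a_null, b_null) >= observed:
            hits += 1
    return (hits + 1) / (n_samples + 1)          # add-one to avoid reporting p = 0

print(empirical_p_value("ACGTACGTGACG", "TTACGTACGGAT"))
```

Sampling of this kind becomes expensive for small p-values, which is one motivation for the analytical upper bound sketched in the thesis.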
Abstract:
The dispersion equation for surface magnetoplasmons in the Faraday configuration is analyzed for the degenerate case in which the decay constants are equal, with a view to understanding the non-existence of the “degenerate modes”. The analysis also shows that there exist well-defined “degenerate points” on the dispersion curve at which the electromagnetic fields vary linearly over small distances away from the interface.
Abstract:
This study investigates the relationship between per capita carbon dioxide (CO2) emissions and per capita GDP in Australia, while controlling for the technological state as measured by multifactor productivity and exports of black coal. Although technological progress seems to play a critical role in achieving the long-term goals of CO2 reduction and economic growth, empirical studies have often used a time trend as a proxy for technological change. However, as discoveries and the diffusion of new technologies may not progress smoothly with time, the assumption of deterministic technological progress may be incorrect in the long run. The use of multifactor productivity as a measure of the technological state therefore overcomes these limitations and provides practical policy directions. This study uses the recently developed bounds-testing approach, complemented by the Johansen–Juselius maximum likelihood approach and a reasonably large sample size, to investigate the cointegration relationship. Both techniques suggest that a cointegration relationship exists among the variables. The long-run and short-run coefficients of the CO2 emissions function are estimated using the ARDL approach. The empirical findings of the study show evidence of an Environmental Kuznets Curve-type relationship for per capita CO2 emissions in the Australian context. Technology as measured by multifactor productivity, however, is not found to be an influencing variable in the emissions-income trajectory.
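The Environmental Kuznets Curve hypothesis mentioned above amounts to an inverted-U relationship between (log) per capita emissions and (log) per capita income. A minimal sketch of how such a shape can be checked with ordinary least squares on a quadratic in log income, using made-up data rather than the study's ARDL/bounds-testing estimation, is:

```python
import numpy as np

# Hypothetical data: log per capita GDP (x) and log per capita CO2 emissions (y).
rng = np.random.default_rng(1)
x = np.linspace(8.0, 11.0, 60)
y = -0.5 * x**2 + 10.5 * x - 50.0 + rng.normal(0.0, 0.1, x.size)

# Fit y = b0 + b1*x + b2*x^2 by OLS; an EKC shape requires b1 > 0 and b2 < 0.
X = np.column_stack([np.ones_like(x), x, x**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = b

print(f"b1 = {b1:.3f}, b2 = {b2:.3f}")
if b1 > 0 and b2 < 0:
    turning_point = np.exp(-b1 / (2.0 * b2))   # income level where emissions peak
    print(f"inverted-U shape; turning point at income ≈ {turning_point:.0f}")
```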
Abstract:
A computer code is developed for the numerical prediction of natural convection in rectangular two-dimensional cavities at high Rayleigh numbers. The governing equations are retained in primitive-variable form. The numerical method is based on finite differences and an ADI scheme. Convective terms may be approximated with either central or hybrid differencing for greater stability. A non-uniform grid distribution is possible for greater efficiency. The pressure is dealt with via a SIMPLE-type algorithm, and the use of a fast elliptic solver for the solenoidal velocity correction field significantly reduces computing times. Preliminary results indicate that the code is reasonably accurate, robust, and fast compared with existing benchmarks and finite-difference-based codes, particularly at high Rayleigh numbers. Extension to three-dimensional problems and to turbulence studies in similar geometries is readily possible and is indicated.
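For the convective terms mentioned above, the hybrid scheme switches between central differencing and upwinding according to the cell Peclet number. The sketch below shows the standard finite-volume form of that switch (the Spalding/Patankar coefficient formulas), offered as a generic illustration rather than code from the program described in the abstract:

```python
def hybrid_coefficients(F_w, F_e, D_w, D_e):
    """Neighbor coefficients of the hybrid scheme for a 1-D control volume.

    F = rho*u*A is the convective flux through a face and D = Gamma*A/dx the
    diffusive conductance.  For |F/D| < 2 the scheme reduces to central
    differencing; otherwise it switches to upwinding with diffusion dropped.
    """
    a_W = max(F_w, D_w + 0.5 * F_w, 0.0)    # west neighbour
    a_E = max(-F_e, D_e - 0.5 * F_e, 0.0)   # east neighbour
    a_P = a_W + a_E + (F_e - F_w)           # central coefficient (from continuity)
    return a_W, a_E, a_P

# Example: moderate cell Peclet number, so central differencing is retained.
print(hybrid_coefficients(F_w=1.0, F_e=1.0, D_w=1.0, D_e=1.0))   # (1.5, 0.5, 2.0)
```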
Abstract:
An attempt is made to draw a line of demarcation between small orifices and large orifices. It is proposed that an orifice can be considered 'small' if the discharge through it, calculated on the small-orifice assumption, differs from the exact discharge by less than half of one per cent. Using this criterion, it is shown that a circular or elliptic orifice can be deemed 'small' as long as the ratio of the depth of the orifice to the head causing the flow (measured from the center of the orifice to the liquid surface) is less than 0.8; a rectangular orifice can be deemed 'small' if the ratio is less than 0.7. A correction factor is suggested for the coefficient of discharge to account for the deviation from the exact discharge.
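For a rectangular orifice the criterion can be written out explicitly. With width b, depth d, head H measured to the orifice centre, and discharge coefficient C_d, the standard textbook expressions for the small-orifice approximation and the exact integration are given below (quoted for illustration; the paper's own derivation may differ in detail), and the half-per-cent criterion compares the two:

```latex
% Small-orifice approximation vs. exact discharge for a rectangular orifice
\begin{align}
  Q_{\text{small}} &= C_d\, b\, d\, \sqrt{2 g H}, \\
  Q_{\text{exact}} &= \tfrac{2}{3}\, C_d\, b\, \sqrt{2g}
      \left[ \left(H + \tfrac{d}{2}\right)^{3/2} - \left(H - \tfrac{d}{2}\right)^{3/2} \right], \\
  \text{'small' orifice:}\quad
  \frac{\lvert Q_{\text{small}} - Q_{\text{exact}} \rvert}{Q_{\text{exact}}}
      &< 0.005 \quad\Longleftrightarrow\quad \frac{d}{H} \lesssim 0.7 .
\end{align}
```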
Abstract:
Using the same current-time (I-t) curves, electrochemical kinetic parameters are determined for the reduction of the Cu(II)–CyDTA complex by two methods: (a) the ratio of the current at a given potential to the diffusion-controlled limiting current, and (b) curve fitting. Analysis by method (a) shows that the rate-determining step involves only one electron, although the overall reduction of the complex involves two electrons, thereby suggesting a stepwise reduction of the complex. The nature of the I-t curves suggests adsorption of intermediate species at the electrode surface. Under these circumstances, more reliable kinetic parameters are obtained by method (a) than by method (b). Similar observations are made for the reduction of the Cu(II)–EDTA complex.
Abstract:
Background: Skin temperature assessment is a promising modality for early detection of diabetic foot problems, but its diagnostic value has not been studied. Our aims were to investigate the diagnostic value of different cutoff skin temperature values for detecting diabetes-related foot complications such as ulceration, infection, and Charcot foot, and to determine the urgency of treatment in case of diagnosed infection or a red-hot swollen foot.
Materials and Methods: The plantar foot surfaces of 54 patients with diabetes visiting the outpatient foot clinic were imaged with an infrared camera. Nine patients had complications requiring immediate treatment, 25 patients had complications requiring non-immediate treatment, and 20 patients had no complications requiring treatment. The average pixel temperature was calculated for six predefined spots and for the whole foot. We calculated the area under the receiver operating characteristic curve for different cutoff skin temperature values, using clinical assessment as reference, and determined the sensitivity and specificity for the optimal cutoff temperature value. The mean temperature difference between feet was analyzed using the Kruskal–Wallis test.
Results: The optimal cutoff skin temperature value for detection of diabetes-related foot complications was a 2.2 °C difference between contralateral spots (sensitivity, 76%; specificity, 40%). The optimal cutoff skin temperature value for determining urgency of treatment was a 1.35 °C difference between the mean temperatures of the left and right foot (sensitivity, 89%; specificity, 78%).
Conclusions: Detection of diabetes-related foot complications based on local skin temperature assessment is hindered by low diagnostic values. The mean temperature difference between the two feet may be an adequate marker for determining urgency of treatment.
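The cutoff selection described above is essentially a receiver operating characteristic (ROC) analysis. A minimal sketch that computes sensitivity and specificity for every candidate cutoff and picks the one maximizing Youden's index, using made-up temperature differences and labels rather than the study's data, is:

```python
import numpy as np

# Hypothetical data: absolute temperature difference (°C) between contralateral
# spots, and whether a clinician judged a complication present (1) or absent (0).
delta_t = np.array([0.3, 0.6, 0.9, 1.1, 1.4, 1.8, 2.0, 2.3, 2.6, 3.1, 3.5, 4.2])
label   = np.array([0,   0,   0,   0,   1,   0,   1,   1,   0,   1,   1,   1  ])

def roc_points(scores, labels):
    """Sensitivity and specificity for every candidate cutoff."""
    cutoffs = np.unique(scores)
    sens, spec = [], []
    for c in cutoffs:
        pred = scores >= c
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return cutoffs, np.array(sens), np.array(spec)

cutoffs, sens, spec = roc_points(delta_t, label)
youden = sens + spec - 1.0                 # Youden's index J = sens + spec - 1
best = np.argmax(youden)
print(f"best cutoff ≈ {cutoffs[best]:.2f} °C "
      f"(sensitivity {sens[best]:.0%}, specificity {spec[best]:.0%})")
```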
Abstract:
Objective: To develop the DCDDaily, an instrument for objective and standardized clinical assessment of capacity in activities of daily living (ADL) in children with developmental coordination disorder (DCD), and to investigate its usability, reliability, and validity.
Subjects: Five- to eight-year-old children with and without DCD.
Main measures: The DCDDaily was developed based on a thorough review of the literature and extensive expert involvement. To investigate the usability (assessment time and feasibility), reliability (internal consistency and repeatability), and validity (concurrent and discriminant validity) of the DCDDaily, children were assessed with the DCDDaily and the Movement Assessment Battery for Children-2 Test, and their parents filled in the Movement Assessment Battery for Children-2 Checklist and the Developmental Coordination Disorder Questionnaire.
Results: 459 children were assessed (DCD group, n = 55; normative reference group, n = 404). Assessment was possible within 30 minutes and in any clinical setting. For internal consistency, Cronbach’s α = 0.83. The intraclass correlation was 0.87 for test–retest reliability and 0.89 for inter-rater reliability. Concurrent correlations with the Movement Assessment Battery for Children-2 Test and the questionnaires were ρ = −0.494, 0.239, and −0.284, p < 0.001. Discriminant validity measures showed significantly worse performance in the DCD group than in the control group (mean (SD) score 33 (5.6) versus 26 (4.3), p < 0.001). The area under the receiver operating characteristic curve was 0.872; sensitivity and specificity were both 80%.
Conclusions: The DCDDaily is a valid and reliable instrument for clinical assessment of capacity in ADL that is feasible for use in clinical practice.
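The Cronbach’s α reported above summarizes internal consistency and can be computed directly from an item-score matrix with its standard formula. A small sketch with hypothetical scores (not the DCDDaily data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical scores of 6 children on 4 items (higher = poorer performance).
scores = np.array([
    [1, 2, 1, 2],
    [2, 2, 2, 3],
    [1, 1, 1, 1],
    [3, 3, 2, 3],
    [2, 3, 2, 2],
    [3, 3, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```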
Abstract:
In 1956, Whitham gave a nonlinear theory for computing the intensity of an acoustic pulse of arbitrary shape. The theory has been used very successfully in computing the intensity of the sonic bang produced by a supersonic plane. Gubkin [4] derived an approximate quasi-linear equation for the propagation of a short wave in a compressible medium. These two methods are essentially nonlinear approximations of the perturbation equations of the system of gas-dynamic equations in the neighborhood of a bicharacteristic curve (or ray) for weak unsteady disturbances superimposed on a given steady solution. In this paper we derive an approximate quasi-linear equation which approximates the perturbation equations in the neighborhood of a bicharacteristic curve for a weak pulse governed by a general system of first-order quasi-linear partial differential equations in m + 1 independent variables (t, x1, …, xm), and we obtain Gubkin's result as a particular case when the system consists of the equations of unsteady motion of a compressible gas. We also discuss the form of the approximate equation describing waves propagating upstream in an arbitrary multidimensional transonic flow.
Abstract:
An application that translates raw thermal melt curve data into more easily assimilated knowledge is described. This program, called ‘Meltdown’, performs a number of data remediation steps before classifying melt curves and estimating melting temperatures. The final output is a report that summarizes the results of a differential scanning fluorimetry (DSF) experiment. Meltdown uses a Bayesian classification scheme, enabling reproducible identification of various trends commonly found in DSF datasets. The goal of Meltdown is not to replace human analysis of the raw data, but to provide a sensible interpretation of the data that makes this useful experimental technique accessible to naïve users, as well as to provide a starting point for detailed analyses by more experienced users.
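Melting temperatures in differential scanning fluorimetry are commonly estimated from the inflection point of the fluorescence-versus-temperature curve, i.e. the peak of its first derivative. The sketch below illustrates that generic approach on synthetic data; it is not Meltdown's Bayesian classification pipeline:

```python
import numpy as np

# Synthetic melt curve: sigmoidal fluorescence rise with a little noise.
rng = np.random.default_rng(0)
temperature = np.linspace(25.0, 95.0, 141)                # °C
true_tm = 55.0
fluorescence = 1.0 / (1.0 + np.exp(-(temperature - true_tm) / 1.5))
fluorescence += rng.normal(0.0, 0.01, temperature.size)

# Smooth with a short moving average, then take dF/dT; Tm is where dF/dT peaks.
kernel = np.ones(5) / 5.0
smoothed = np.convolve(fluorescence, kernel, mode="same")
dF_dT = np.gradient(smoothed, temperature)
tm_estimate = temperature[np.argmax(dF_dT)]
print(f"estimated Tm ≈ {tm_estimate:.1f} °C")
```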
Abstract:
The potential energy curve of the He₂²⁺ system dissociating into two He⁺ ions is examined in terms of the electronic force exerted on each nucleus as a function of the internuclear separation. The results are compared with the process of bond formation in H₂ from the separated atoms.
Abstract:
The relationship between age and turnout is curvilinear: electoral participation first increases with age, remains relatively stable throughout middle age, and then gradually declines as certain physical infirmities set in (see e.g. Milbrath 1965). Alongside this life-cycle effect in voting, recent pooled cross-sectional analyses (see e.g. Blais et al. 2004; Lyons and Alexander 2000) have shown that there is also a generational effect, referring to lasting differences in turnout between various age groups. This study first examines the extent to which the generational effect applies in the Finnish context. Second, it investigates the factors accounting for that effect. The first article, based on individual-level register data from the parliamentary elections of 1999, shows that turnout differences between the age groups would be even larger if there were no differences in social class and education. The second article examines simultaneously the effects of age, generation and period in the Finnish parliamentary elections of 1975-2003, based on pooled data from Finnish voter barometers (N = 8,634). The results show that there are clear life-cycle, generational and period effects. The third article examines the role of political socialisation in accounting for generational differences in electoral participation. Political socialisation is defined as the learning process in which an individual adopts various values, political attitudes, and patterns of action from his or her environment. The multivariate analysis, based on the Finnish national election study 2003 (N = 1,270), indicated that if there were no differences in socialisation between the youngest and the older generations, the difference in turnout would be much larger than when only sex and socioeconomic factors are controlled for. The fourth article examines other possible factors related to the generational effect in voting. The results mainly apply to the Finnish parliamentary elections of 2003, for which data are available. They show that the sense of duty is by far the strongest factor accounting for the generational effect in voting. Political interest, political knowledge and non-parliamentary participation also narrowed the differences in electoral participation between the youngest and the second-youngest generations. The implication of the findings is that the lower turnout among the current youth is not a passing phenomenon that will diminish with age. Considering voting a civic duty and understanding the meaning of collective action are both associated with the process of political socialisation, which therefore has an important role in the generational effect in turnout.
Abstract:
The present study examines empirically the inflation dynamics of the euro area. The focus of the analysis is on the role of expectations in the inflation process. In six articles we relax the rationality assumption and proxy expectations directly using OECD forecasts or Consensus Economics survey data. In the first four articles we estimate alternative Phillips curve specifications and find evidence that inflation cannot instantaneously adjust to changes in expectations. A possible departure of expectations from rationality seems not to be powerful enough to totally explain the persistence of euro area inflation in the New Keynesian framework. When expectations are measured directly, the purely forward-looking New Keynesian Phillips curve is outperformed by the hybrid Phillips curve with an additional lagged inflation term and by the New Classical Phillips curve with a lagged expectations term. The results suggest that the euro area inflation process has become more forward-looking in the recent years of low and stable inflation. Moreover, in low-inflation countries, inflation dynamics have been more forward-looking already since the late 1970s. We find evidence of substantial heterogeneity of inflation dynamics across the euro area countries. Real-time data analysis suggests that in the euro area real-time information matters most in the expectations term in the Phillips curve and that the balance of expectations formation is more forward- than backward-looking. Vector autoregressive (VAR) models of actual inflation, inflation expectations and the output gap are estimated in the last two articles. The VAR analysis indicates that inflation expectations, which are relatively persistent, have a significant effect on output. However, expectations seem to react to changes in both output and actual inflation, especially in the medium term. Overall, this study suggests that expectations play a central role in inflation dynamics, which should be taken into account in conducting monetary policy.
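The Phillips curve variants compared above differ in how expected and lagged inflation enter. A standard textbook form of the hybrid New Keynesian Phillips curve, quoted here for orientation rather than as the articles' exact specification, nests the purely forward-looking case when the backward-looking weight is zero; the New Classical variant instead uses expectations formed in the previous period:

```latex
% Hybrid New Keynesian Phillips curve (standard textbook form)
\begin{equation}
  \pi_t \;=\; \gamma_f\, E_t[\pi_{t+1}] \;+\; \gamma_b\, \pi_{t-1}
        \;+\; \lambda\, x_t \;+\; \varepsilon_t ,
\end{equation}
% pi_t: inflation; E_t[pi_{t+1}]: expected inflation (here proxied by survey or
% OECD forecast data); x_t: real marginal cost or the output gap;
% gamma_f, gamma_b, lambda: coefficients; epsilon_t: error term.
```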
Abstract:
We model the shape and density profile of the dark matter halo of the low surface brightness, superthin galaxy UGC 7321, using the observed rotation curve and the H i scale height data as simultaneous constraints. We treat the galaxy as a gravitationally coupled system of stars and gas, responding to the gravitational potential of the dark matter halo. An isothermal halo of spherical shape, with a core density in the range of … and a core radius between 2.5 and 2.9 kpc, gives the best fit to the observations for the range of realistic gas parameters assumed. We find that the best-fit core radius is only slightly larger than the stellar disc scale length (2.1 kpc), unlike the case of high surface brightness galaxies, where the halo core radius is typically 3–4 times the disc scale length of the stars. Thus our model shows that the dark matter halo dominates the dynamics of the low surface brightness, superthin galaxy UGC 7321 at all radii, including the inner parts of the galaxy.
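For reference, the density profile and circular velocity of the pseudo-isothermal halo commonly used in such rotation-curve fits, with core density ρ₀ and core radius R_c, take the standard form below (quoted as the generic model, not from the paper itself):

```latex
% Pseudo-isothermal halo: density profile and circular (rotation) velocity
\begin{align}
  \rho(r) &= \frac{\rho_0}{1 + (r/R_c)^2}, \\
  v_c^2(r) &= 4\pi G \rho_0 R_c^2
      \left[\, 1 - \frac{R_c}{r}\arctan\!\left(\frac{r}{R_c}\right) \right].
\end{align}
```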
Abstract:
We have observed exchange-spring behavior in the soft (Fe3O4)–hard (BaCa2Fe16O27) ferrite composite by tailoring the particle size of the individual phases and by suitable thermal treatment of the composite. The magnetization curve for the nanocomposite heated at 800 °C shows a single hysteresis loop, indicating the existence of the exchange-spring phenomenon in the composite and an enhancement of 13% in (BH)max compared to the parent hard ferrite (BaCa2Fe16O27). The Henkel plot provides proof of the presence of exchange interaction between the soft and hard grains, as well as of its dominance over the dipolar interaction in the nanocomposite.