Abstract:
The Hadwiger number η(G) of a graph G is the largest integer n for which the complete graph K_n on n vertices is a minor of G. Hadwiger conjectured that for every graph G, η(G) ≥ χ(G), where χ(G) is the chromatic number of G. In this paper, we study the Hadwiger number of the Cartesian product G □ H of graphs. As the main result of this paper, we prove that η(G₁ □ G₂) ≥ h√l (1 − o(1)) for any two graphs G₁ and G₂ with η(G₁) = h and η(G₂) = l. We show that the above lower bound is asymptotically best possible when h ≥ l. This asymptotically settles a question of Z. Miller (1978). As consequences of our main result, we show the following: 1. Let G be a connected graph, and let G = G₁ □ G₂ □ ... □ G_k be the (unique) prime factorization of G. Then G satisfies Hadwiger's conjecture if k ≥ 2 log log χ(G) + c′, where c′ is a constant. This improves the 2 log χ(G) + 3 bound in [2]. 2. Let G₁ and G₂ be two graphs such that χ(G₁) ≥ χ(G₂) ≥ c log^{1.5}(χ(G₁)), where c is a constant. Then G₁ □ G₂ satisfies Hadwiger's conjecture. 3. Hadwiger's conjecture is true for G^d (the Cartesian product of G taken d times) for every graph G and every d ≥ 2. This settles a question by Chandran and Sivadasan [2]. (They had shown that Hadwiger's conjecture is true for G^d if d ≥ 3.)
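The Cartesian product construction at the heart of the result is easy to sketch. The helper below is illustrative only (the function name and the dict-of-adjacency-sets graph representation are not from the paper): vertices of G₁ □ G₂ are pairs (u, v), adjacent exactly when one coordinate is equal and the other is adjacent.

```python
def cartesian_product(g1, g2):
    """Cartesian product of two graphs given as dicts mapping
    vertex -> set of neighbours: (u, v) ~ (u', v') iff
    u == u' and v ~ v' in g2, or v == v' and u ~ u' in g1."""
    product = {(u, v): set() for u in g1 for v in g2}
    for u in g1:
        for v in g2:
            for v2 in g2[v]:          # copy of g2 at fixed u
                product[(u, v)].add((u, v2))
            for u2 in g1[u]:          # copy of g1 at fixed v
                product[(u, v)].add((u2, v))
    return product

# K2 [] K2 is the 4-cycle C4: four vertices, every vertex of degree 2
k2 = {0: {1}, 1: {0}}
c4 = cartesian_product(k2, k2)
print(sorted(len(nbrs) for nbrs in c4.values()))  # [2, 2, 2, 2]
```

Note how the product of two complete graphs is far from complete, which is why lower-bounding its Hadwiger number is nontrivial.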
Abstract:
Mitochondrial diseases are caused by disturbances of energy metabolism. The disorders range from severe childhood neurological diseases to muscle diseases of adults. Recently, mitochondrial dysfunction has also been found in Parkinson's disease, diabetes, certain types of cancer and premature aging. Mitochondria are the power plants of the cell, but they also participate in the regulation of cell growth, signaling and cell death. Mitochondria have their own genetic material, mtDNA, which contains the genetic instructions for cellular respiration. A single cell may host thousands of mitochondria, and several mtDNA molecules may reside inside a single mitochondrion. All proteins needed for mtDNA maintenance are, however, encoded by the nuclear genome, and therefore mutations of the corresponding genes can also cause mitochondrial disease. We have here studied the function of the mitochondrial helicase Twinkle. Our research group has previously identified nuclear Twinkle gene mutations underlying an inherited adult-onset disorder, progressive external ophthalmoplegia (PEO). Characteristic of the PEO disease is the accumulation of multiple mtDNA deletions in tissues such as muscle and brain. In this study, we have shown that the Twinkle helicase is essential for mtDNA maintenance and that it is capable of regulating mtDNA copy number. Our results support the role of Twinkle as the mtDNA replication helicase. No cure is available for mitochondrial disease. Good disease models are needed for studies of the cause of disease and its progression, and for treatment trials. Such a disease model, which replicates the key features of the PEO disease, has been generated in this study. The model allows for careful inspection of how Twinkle mutations lead to mtDNA deletions and further cause the PEO disease. This model will be utilized in a range of studies addressing the delay of disease onset and progression, and in subsequent treatment trials.
In conclusion, this thesis yielded fundamental knowledge of the function of the mitochondrial helicase Twinkle. In addition, the first model for adult-onset mitochondrial disease was generated.
Abstract:
In order to predict the current state and future development of Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing on the present-day climate are related to the direct and indirect effects of aerosols. In this work, aerosol properties were studied at Pallas and Utö in Finland, and at Mount Waliguan in Western China. Approximately two years of data from each site were analyzed. In addition, data from two intensive measurement campaigns at Pallas were used. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentrations of aerosol particles at Mount Waliguan were much higher than those measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site, rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during the two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplets. Several parameters important in cloud droplet activation were found to depend strongly on air mass history. The effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using long time-series data sets.
The ratio was found to vary more than in earlier studies, but less than either aerosol particle number concentration or volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for partitioning between cloud droplets and cloud interstitial particles was developed. The parameterization uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, though its accuracy is slightly lower. The new parameterization was also compared to directly observed cloud droplet number concentration data, and good agreement was found.
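The number-to-volume concentration ratio that feeds the new parameterization can be computed directly from a binned size distribution. A minimal sketch, assuming spherical particles; the bin diameters, counts, and unit choices below are hypothetical, not values from the study:

```python
import math

def number_and_volume(diameters_nm, counts_per_cm3):
    """Total number concentration N (cm^-3) and volume concentration V
    (um^3 cm^-3) from a binned size distribution, assuming spherical
    particles."""
    n_total = sum(counts_per_cm3)
    # sphere volume (pi/6) d^3, with d converted from nm to um
    v_total = sum(c * (math.pi / 6.0) * (d * 1e-3) ** 3
                  for d, c in zip(diameters_nm, counts_per_cm3))
    return n_total, v_total

# hypothetical two-mode distribution: Aitken (50 nm) and accumulation (200 nm)
N, V = number_and_volume([50.0, 200.0], [1000.0, 200.0])
ratio = N / V    # the number-to-volume concentration ratio
```

Because volume grows with the cube of diameter, the few large particles dominate V while the many small particles dominate N, which is what makes this ratio a compact summary of the size distribution.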
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has been lacking a robust method until now. The methods are based on the solid foundation of statistical orbital inversion properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduce the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up.
Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages typically spanning several apparitions have so far been found among designated observation sets each spanning less than 48 hours.
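The role of dimensionality reduction and ordered data structures in reaching O(n log n) complexity can be illustrated with a one-dimensional toy version of the candidate search. The function and the notion of a single "reduced coordinate" per observation set are illustrative simplifications, not the actual method of the thesis:

```python
import bisect

def find_candidate_linkages(values, tolerance):
    """Find pairs of observation sets whose reduced one-dimensional
    coordinate (e.g. a projected orbital-element value) agrees within
    `tolerance`, using O(n log n) comparisons instead of the O(n^2)
    of all-against-all pairing."""
    indexed = sorted(enumerate(values), key=lambda t: t[1])  # O(n log n)
    keys = [v for _, v in indexed]
    pairs = []
    for pos, (i, v) in enumerate(indexed):
        # only neighbours within the tolerance window can match;
        # binary-search for the end of that window
        hi = bisect.bisect_right(keys, v + tolerance)
        for j_pos in range(pos + 1, hi):
            pairs.append((i, indexed[j_pos][0]))
    return pairs

# three hypothetical reduced coordinates; only the first two are close
print(find_candidate_linkages([0.10, 0.11, 0.90], 0.05))  # [(0, 1)]
```

Sorting replaces the quadratic all-pairs comparison with a local window scan, which is the generic trick behind log-linear linking; the surviving candidate pairs would then be vetted by full orbital inversion.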
Abstract:
Diffusion in a composite slab consisting of a large number of layers provides an ideal prototype problem for developing and analysing two-scale modelling approaches for heterogeneous media. Numerous analytical techniques have been proposed for solving the transient diffusion equation in a one-dimensional composite slab consisting of an arbitrary number of layers. Most of these approaches, however, require the solution of a complex transcendental equation arising from a matrix determinant for the eigenvalues that is difficult to solve numerically for a large number of layers. To overcome this issue, in this paper, we present a semi-analytical method based on the Laplace transform and an orthogonal eigenfunction expansion. The proposed approach uses eigenvalues local to each layer that can be obtained either explicitly, or by solving simple transcendental equations. The semi-analytical solution is applicable to both perfect and imperfect contact at the interfaces between adjacent layers and either Dirichlet, Neumann or Robin boundary conditions at the ends of the slab. The solution approach is verified for several test cases and is shown to work well for a large number of layers. The work is concluded with an application to macroscopic modelling where the solution of a fine-scale multilayered medium consisting of two hundred layers is compared against an “up-scaled” variant of the same problem involving only ten layers.
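The advantage of eigenvalues local to each layer is that they can be written down explicitly instead of solved from one large transcendental determinant. A minimal sketch, assuming the simplified case where a layer of width w_i carries sine eigenfunctions with Dirichlet-type local eigenvalues nπ/w_i (the helper is illustrative, not the full semi-analytical method of the paper):

```python
import math

def local_eigenvalues(layer_widths, n_modes):
    """Explicit local eigenvalues lambda_{i,n} = n*pi/w_i for each layer,
    as arise when the local eigenfunctions are sin(lambda * x) on a
    layer of width w_i with Dirichlet-type local conditions."""
    return [[n * math.pi / w for n in range(1, n_modes + 1)]
            for w in layer_widths]

# two hypothetical layers of widths 1.0 and 0.5: each eigenvalue of the
# thinner layer is exactly twice the corresponding one of the thicker layer
eigs = local_eigenvalues([1.0, 0.5], 3)
```

Each layer's eigenvalue problem depends only on that layer's width, so a slab of hundreds of layers needs no global eigenvalue search; the interface and boundary conditions are then handled by the Laplace-transform step of the method.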
Abstract:
Defects in mitochondrial DNA (mtDNA) maintenance cause a range of human diseases, including autosomal dominant progressive external ophthalmoplegia (adPEO). This study aimed to clarify the molecular background of adPEO. We discovered that deoxynucleoside triphosphate (dNTP) metabolism plays a crucial role in mtDNA maintenance and were thus prompted to search for therapeutic strategies based on the modulation of cellular dNTP pools or mtDNA copy number. Human mtDNA is a 16.6 kb circular molecule present in hundreds to thousands of copies per cell. mtDNA is compacted into nucleoprotein clusters called nucleoids. mtDNA maintenance diseases result from defects in nuclear-encoded proteins that maintain the mtDNA. These syndromes typically afflict highly differentiated, post-mitotic tissues such as muscle and nerve, but virtually any organ can be affected. adPEO is a disease in which mtDNA molecules with large-scale deletions accumulate in patients' tissues, particularly in skeletal muscle. Mutations in five nuclear genes, encoding the proteins ANT1, Twinkle, POLG, POLG2 and OPA1, have previously been shown to cause adPEO. Here, we studied a large North American pedigree with adPEO, and identified a novel heterozygous mutation in the gene RRM2B, which encodes the p53R2 subunit of the enzyme ribonucleotide reductase (RNR). RNR is the rate-limiting enzyme in dNTP biosynthesis, and is required for both nuclear and mitochondrial DNA replication. The mutation results in the expression of a truncated form of p53R2, which is likely to compete with the wild-type allele. The change in enzyme function leads to defective mtDNA replication due to altered dNTP pools. Therefore, RRM2B is a novel adPEO disease gene. The importance of adequate dNTP pools and RNR function for mtDNA maintenance has been established in many organisms. In yeast, induction of RNR has previously been shown to increase mtDNA copy number, and to rescue the phenotype caused by mutations in the yeast mtDNA polymerase.
To further study the role of RNR in mammalian mtDNA maintenance, we used mice that broadly overexpress the RNR subunits Rrm1, Rrm2 or p53R2. Active RNR is a heterotetramer consisting of two large subunits (Rrm1) and two small subunits (either Rrm2 or p53R2). We also created bitransgenic mice that overexpress Rrm1 together with either Rrm2 or p53R2. In contrast to the previous findings in yeast, bitransgenic RNR overexpression led to mtDNA depletion in mouse skeletal muscle, without mtDNA deletions or point mutations. The mtDNA depletion was associated with imbalanced dNTP pools. Furthermore, the mRNA expression levels of Rrm1 and p53R2 were found to correlate with mtDNA copy number in two independent mouse models, suggesting nuclear-mitochondrial crosstalk with regard to mtDNA copy number. We conclude that tight regulation of RNR is needed to prevent harmful alterations in the dNTP pool balance, which can lead to disordered mtDNA maintenance. Increasing the copy number of wild-type mtDNA has been suggested as a strategy for treating PEO and other mitochondrial diseases. Only two proteins are known to cause a robust increase in mtDNA copy number when overexpressed in mice: the mitochondrial transcription factor A (TFAM) and the mitochondrial replicative helicase Twinkle. We studied the mechanisms by which Twinkle and TFAM elevate mtDNA levels, and showed that Twinkle specifically drives mtDNA synthesis. Furthermore, both Twinkle and TFAM were found to increase the mtDNA content per nucleoid. Increased mtDNA content in mouse tissues correlated with an age-related accumulation of mtDNA deletions, depletion of mitochondrial transcripts, and progressive respiratory dysfunction. Simultaneous overexpression of Twinkle and TFAM led to a further increase in the mtDNA content of nucleoids, and aggravated the respiratory deficiency. These results suggest that high mtDNA levels have detrimental long-term effects in mice.
These data have to be considered when developing and evaluating treatment strategies for elevating mtDNA copy number.
Abstract:
Protein modification via enzymatic cross-linking is an attractive way of altering food structure so as to create products with increased quality and nutritional value. These modifications are expected to affect not only the structure and physico-chemical properties of proteins but also their physiological characteristics, such as digestibility in the GI tract and allergenicity. Protein cross-linking enzymes such as transglutaminases are currently commercially available, but other types of cross-linking enzymes are also being explored intensively. In this study, enzymatic cross-linking of β-casein, the most abundant bovine milk protein, was studied. Enzymatic cross-linking reactions were performed with fungal Trichoderma reesei tyrosinase (TrTyr), and the performance of the enzyme was compared to that of transglutaminase from Streptoverticillium mobaraense (TGase). The cross-linking reactions were followed by different analytical techniques, such as size-exclusion chromatography with UV/Vis detection and multi-angle laser light scattering (SEC-UV/Vis-MALLS), phosphorus nuclear magnetic resonance spectroscopy (31P-NMR), atomic force microscopy (AFM) and matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS). The results showed that in both cases cross-linking of β-casein resulted in the formation of high-molecular-mass (MM ca. 1350 kg mol-1), disk-shaped nanoparticles when the highest enzyme dosage and longest incubation times were used. According to the SEC-UV/Vis-MALLS data, commercial β-casein was cross-linked almost completely when TrTyr and TGase were used as cross-linking enzymes. In the case of TrTyr, the high degree of cross-linking was confirmed by 31P-NMR, which showed that 91% of the tyrosine side-chains were involved in the cross-linking. The impact of enzymatic cross-linking of β-casein on in vitro digestibility by pepsin was followed by various analytical techniques.
The results demonstrated that enzymatically cross-linked β-casein was stable under the acidic conditions present in the stomach. Furthermore, cross-linked β-casein was found to be more resistant to pepsin digestion than non-modified β-casein. The effects of enzymatic cross-linking of β-casein on allergenicity were also studied with different biochemical test methods. On the basis of the results, enzymatic cross-linking decreased the allergenicity of native β-casein by 14% when cross-linked by TrTyr and by 6% after treatment with TGase. It can be concluded that, in addition to providing basic understanding of the reaction mechanism of TrTyr on a protein matrix, the results obtained in this study can have high impact on various applications in the food, cosmetic, medical, textile and packaging sectors.
Abstract:
Service researchers and practitioners have repeatedly claimed that customer service experiences are essential to all businesses. Therefore, comprehension of how service experience is characterised in research is an essential element for its further development through research. The importance of greater in-depth understanding of the phenomenon of service experience has been acknowledged by several researchers, such as Carú and Cova, and Vargo and Lusch. Furthermore, Service-Dominant (S-D) logic has integrated service experience with value by emphasising in its foundational premises that value is phenomenologically (experientially) determined. The present study analyses how the concept of service experience has been characterised in previous research. As such, it puts forward three ways to characterise it in relation to that research: 1) phenomenological service experience relates to the value discussion in S-D logic and interpretative consumer research, 2) process-based service experience relates to understanding service as a process, and 3) outcome-based service experience relates to understanding service experience as one element in models linking a number of variables or attributes to various outcomes. Focusing on the phenomenological service experience, the theoretical purpose of the study is to characterise service experience based on the phenomenological approach. In order to do so, an additional methodological purpose was formulated: to find a suitable methodology for analysing service experience based on the phenomenological approach. The study relates phenomenology to the philosophical Husserlian and social constructionist tradition of studying phenomena as they appear in our experience in a social context. The study introduces the Event-Based Narrative Inquiry Technique (EBNIT), which combines critical events with narratives and metaphors. EBNIT enabled the analysis of lived and imaginary service experiences as expressed in individual narratives.
The study presents findings of eight case studies within service innovation of Web 2.0, mobile services, location-aware services and public services in the municipal sector. Customers' and service managers' stories about their lived private and working lifeworlds were the foundation for their ideal service experiences. In general, the thesis finds that service experiences are (1) subjective, (2) context-specific, (3) cumulative, (4) partially socially constructed, (5) both lived and imaginary, (6) temporally multi-dimensional, and (7) iteratively related to perceived value. In addition to customer service experience, the thesis brings empirical evidence of the managerial service experience of front-line managers experiencing the service they manage and develop in their working lifeworld. The study contributes to S-D logic, service innovation, and service marketing and management in general by characterising service experience based on the phenomenological approach and integrating it into the value discussion. Additionally, the study offers a methodological approach for further exploration of service experiences. Managerial implications are discussed in conjunction with the case studies and in relation to service innovation.
Abstract:
The growth of high-performance applications in computer graphics, signal processing and scientific computing is a key driver for high-performance, fixed-latency, pipelined floating-point dividers. Solutions available in the literature use large lookup tables for double-precision floating-point operations. In this paper, we propose a cost-effective, fixed-latency pipelined divider using a modified Taylor-series expansion for double-precision floating-point operations. We reduce chip area by using a smaller lookup table. We show that the latency of the proposed divider is 49.4 times the latency of a full-adder. The proposed divider reduces chip area by about 81% compared with the pipelined divider in [9], which is also based on a modified Taylor series.
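The lookup-table-plus-Taylor-series idea behind such dividers can be sketched in software: a small table supplies an initial reciprocal y0 ≈ 1/b, and a truncated series in the residual e = 1 − b·y0 refines it, since 1/b = y0/(1 − e) = y0(1 + e + e² + ...). This is a toy numerical model of the general technique, not the paper's hardware design; the table size and term count are illustrative.

```python
def divide(a, b, table_bits=4, terms=3):
    """Approximate a/b for a significand b in [1, 2) using a small
    reciprocal lookup table plus a truncated Taylor (geometric) series:
    with y0 ~ 1/b and e = 1 - b*y0, 1/b = y0 * (1 + e + e^2 + ...)."""
    assert 1.0 <= b < 2.0, "assume a normalized significand"
    step = 1.0 / (1 << table_bits)            # width of each table interval
    index = int((b - 1.0) / step)             # interval that b falls into
    y0 = 1.0 / (1.0 + (index + 0.5) * step)   # table entry: 1/midpoint
    e = 1.0 - b * y0                          # small residual, |e| <~ step/2
    series = sum(e ** k for k in range(terms + 1))
    return a * y0 * series

# each extra series term shrinks the error by roughly a factor of |e|,
# so a smaller table (larger |e|) trades area for more multiply stages
approx = divide(1.0, 1.5)   # close to 0.666666...
```

The trade-off the abstract describes falls out directly: halving the table doubles the residual bound, which must be bought back with additional pipelined series terms.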
Abstract:
In many problems of decision making under uncertainty, the system has to acquire knowledge of its environment and learn the optimal decision through experience. Such problems may also involve the system having to arrive at the globally optimal decision when, at each instant, only a subset of the entire set of possible alternatives is available. These problems can be successfully modelled and analysed by learning automata. In this paper, an estimator learning algorithm, which maintains estimates of the reward characteristics of the random environment, is presented for an automaton with a changing number of actions. A learning automaton using the new scheme is shown to be ε-optimal. The simulation results demonstrate the fast convergence properties of the new algorithm. The results of this study can be extended to the design of other types of estimator algorithms with good convergence properties.
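A minimal sketch of an estimator-type scheme in the spirit described, here a simple pursuit algorithm over a fixed action set: reward estimates are maintained per action, and the action-probability vector is nudged toward the currently best-estimated action. The paper's algorithm additionally handles a changing action set, and all parameter values below are illustrative.

```python
import random

def pursuit_automaton(reward_probs, steps=5000, lr=0.01, seed=0):
    """Pursuit-estimator learning automaton: keep a running reward
    estimate per action and, at every step, move the action-probability
    vector a little toward the action with the highest estimate."""
    rng = random.Random(seed)
    k = len(reward_probs)
    p = [1.0 / k] * k            # action-selection probabilities
    est = [0.0] * k              # reward-probability estimates
    pulls = [0] * k

    def pull(a):
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        pulls[a] += 1
        est[a] += (r - est[a]) / pulls[a]     # running mean of rewards

    for a in range(k):           # seed the estimates for every action
        for _ in range(5):
            pull(a)
    for _ in range(steps):
        a = rng.choices(range(k), weights=p)[0]
        pull(a)
        best = max(range(k), key=lambda i: est[i])
        # pursue the current best-estimated action
        p = [(1 - lr) * pi + (lr if i == best else 0.0)
             for i, pi in enumerate(p)]
    return p

# in a stationary environment the probability mass concentrates on the
# action with the highest reward probability
probs = pursuit_automaton([0.2, 0.8, 0.5])
```

Because the probability update chases the estimated best action rather than the last reward, convergence is typically much faster than for classical reward-penalty schemes, which is the property estimator algorithms are valued for.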
Abstract:
A fuzzy system is developed using a linearized performance model of the gas turbine engine for performing gas turbine fault isolation from noisy measurements. By using a priori information about measurement uncertainties, and through design-variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved using a genetic algorithm in considerably little computer time. The faults modeled are module faults in five modules: fan, low-pressure compressor, high-pressure compressor, high-pressure turbine and low-pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid development of the rule base if the fault signatures and measurement uncertainties change, as happens for different engines and airlines. In addition, the genetic fuzzy system reduces the human effort needed in the trial-and-error process used to design the fuzzy system, and makes the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN shows significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to the presence of noise in the data, showing the advantage of using a soft-computing approach for gas turbine diagnostics.
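The fault-isolation step can be caricatured with Gaussian memberships that match observed measurement deviations against per-fault signatures. All signatures, uncertainties, and measurement names below are hypothetical, and in the real system the membership functions are tuned by the genetic algorithm rather than fixed by hand:

```python
import math

def gaussian_membership(x, center, sigma):
    """Degree to which a measurement deviation x matches a signature value."""
    return math.exp(-0.5 * ((x - center) / sigma) ** 2)

def isolate_fault(deviations, signatures, sigmas):
    """Score each candidate fault by the product of its per-measurement
    memberships and return the best-matching fault with all scores."""
    scores = {}
    for fault, signature in signatures.items():
        score = 1.0
        for meas, value in deviations.items():
            score *= gaussian_membership(value, signature[meas], sigmas[meas])
        scores[fault] = score
    return max(scores, key=scores.get), scores

# hypothetical signatures: deviations in EGT (deg C) and fuel flow (%)
signatures = {"fan": {"egt": 5.0, "wf": 1.0},
              "hp_turbine": {"egt": 20.0, "wf": 4.0}}
sigmas = {"egt": 5.0, "wf": 1.0}   # assumed measurement uncertainties
fault, _ = isolate_fault({"egt": 18.0, "wf": 3.5}, signatures, sigmas)
print(fault)  # hp_turbine
```

Using the measurement uncertainties as the membership widths is what lets a priori noise information enter the design, mirroring the abstract's point about exploiting known uncertainties.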