43 results for smooth


Relevance: 10.00%

Abstract:

The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters over time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since the developing voids expand the faster the lower their matter density becomes, the expansion can accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides a solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Additional future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial for accounting for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.

Relevance: 10.00%

Abstract:

This thesis concerns the dynamics of nanoparticle impacts on solid surfaces. These impacts occur, for instance, in space, where micro- and nanometeoroids hit surfaces of planets, moons, and spacecraft. On Earth, materials are bombarded with nanoparticles in cluster ion beam devices in order to clean or smooth their surfaces, or to analyse their elemental composition. In both cases, the result depends on the combined effects of countless single impacts. However, the dynamics of single impacts must be understood before the overall effects of nanoparticle radiation can be modelled. In addition to applications, nanoparticle impacts are also important to basic research in the nanoscience field, because the impacts provide an excellent case for testing the applicability of atomic-level interaction models under very dynamic conditions. In this thesis, the stopping of nanoparticles in matter is explored using classical molecular dynamics computer simulations. The materials investigated are gold, silicon, and silica. Impacts on silicon through a native oxide layer and the formation of complex craters are also simulated. Nanoparticles up to a diameter of 20 nm (315,000 atoms) were used as projectiles. The molecular dynamics method and interatomic potentials for silicon and gold are examined in this thesis. It is shown that the displacement cascade expansion mechanism and crater crown formation are very sensitive to the choice of atomic interaction model. However, the best of the current interatomic models can be utilized in nanoparticle impact simulation if caution is exercised. The stopping of monatomic ions in matter is nowadays understood very well. However, interactions become very complex when several atoms impact on a surface simultaneously and within a short distance, as happens in a nanoparticle impact. A high energy density is deposited in a relatively small volume, which induces ejection of material and formation of a crater. Very high yields of excavated material are observed experimentally. In addition, the yields scale nonlinearly with the cluster size and impact energy at small cluster sizes, whereas in macroscopic hypervelocity impacts the scaling is linear. The aim of this thesis is to explore the atomistic mechanisms behind the nonlinear scaling at small cluster sizes. It is shown here that the nonlinear scaling of the ejected material yield disappears at large impactor sizes, because the stopping mechanism of nanoparticles gradually changes to the same mechanism as in macroscopic hypervelocity impacts. The high yields at small impactor sizes are due to the early escape of energetic atoms from the hot region. In addition, the sputtering yield is shown to depend strongly on the initial spatial energy and momentum distributions that the nanoparticle induces in the material in the first phase of the impact. In the later phases, the ejection of material occurs by several mechanisms. The most important mechanism at high energies or at large cluster sizes is the ejection of atomic clusters from the transient liquid crown that surrounds the crater. The cluster impact dynamics observed in the simulations are in agreement with several recent experimental results. In addition, it is shown that relatively weak impacts can induce modifications on the surface of an amorphous target over a larger area than was previously expected. This is a probable explanation for the formation of the complex crater shapes observed on these surfaces with atomic force microscopy. Clusters that consist of hundreds of thousands of atoms are also shown to induce long-range modifications in crystalline gold.

Relevance: 10.00%

Abstract:

This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze's martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
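As a concrete illustration of the definition above (a minimal sketch, not the thesis's own code), quantile residuals are obtained by passing each observation through the model's conditional distribution function and then through the standard normal quantile function; the toy model, data and normality check below are invented for the example.

```python
# Minimal sketch (not the thesis code): quantile residuals for a fitted
# univariate model that supplies a conditional CDF F(y_t | past; theta_hat).
# Under a correctly specified model with consistently estimated parameters,
# r_t = Phi^{-1}(F(y_t | past)) should be approximately independent standard
# normal, which is what the misspecification tests probe.
import numpy as np
from scipy import stats

def quantile_residuals(y, conditional_cdf):
    """y: observed series; conditional_cdf(t, y_t) -> F(y_t | y_1..y_{t-1})."""
    u = np.array([conditional_cdf(t, y_t) for t, y_t in enumerate(y)])
    u = np.clip(u, 1e-10, 1 - 1e-10)      # guard against 0/1 before the probit
    return stats.norm.ppf(u)              # probability integral transform -> N(0,1)

# Toy usage: a (trivially correct) i.i.d. N(0,1) "model" for simulated data.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
r = quantile_residuals(y, lambda t, yt: stats.norm.cdf(yt))
print(stats.jarque_bera(r))               # crude normality check on the residuals
```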

Relevance: 10.00%

Abstract:

The endoplasmic reticulum (ER) and the Golgi apparatus are organelles that produce, modify and transport proteins and lipids and regulate the Ca2+ environment within cells. Structurally they are composed of sheets and tubules. Sheets may take various forms: intact, fenestrated, single or stacked. The ER, including the nuclear envelope, is a single continuous network, while the Golgi shows only some level of connectivity. It is often unclear how different morphologies correspond to particular functions. Previous studies indicate that the structures of the ER and Golgi are dynamic and regulated by fusion and fission events, the cytoskeleton, the rate of protein synthesis and secretion, and specific structural proteins. For example, many structural proteins shaping tubular ER have been identified, but sheet formation is much less clear. In this study, we used light and electron microscopy to study morphological changes of the ER and Golgi in mammalian cells. The proportion, type, location and dynamics of ER sheets and tubules were found to vary in a cell type or cell cycle stage dependent manner. During interphase, ER and Golgi structures were demonstrated to be regulated by p37, a cofactor of the fusion factor p97, and by microtubules, which also affected the localization of the organelles. As previously shown for the Golgi, the ER displayed a tendency for fenestration and tubulation during mitosis. However, this shape change did not result in ER fragmentation, as happens to the Golgi; instead a continuous network was retained. The activity of p97/p37 was found to be important for the reassembly of both organelles after mitosis. In EM images, ER sheet membranes appear rough, since they contain attached ribosomes, whereas tubular membranes appear smooth. Our studies revealed that structural changes of the ER towards the fenestrated and tubular direction correlate with a loss of ER-bound ribosomes, and vice versa. High and low curvature ER membranes have a low and high density of ribosomes, respectively. To conclude, both ER and Golgi architecture depend on the fusion activity of p97/p37. ER morphogenesis, particularly of the sheet shape, is intimately linked to the density of membrane-bound ribosomes.

Relevance: 10.00%

Abstract:

Aims: We develop and validate tools to estimate residual noise covariance in Planck frequency maps, quantify signal error effects, and compare different techniques to produce low-resolution maps. Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in the different resolution downgrading schemes considered in this work. Results: We find excellent agreement between the optimal residual noise covariance matrices and the Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ell > 2Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy. Conclusions: We have described how to compute low-resolution maps with a controlled sky signal level and a reliable estimate of the covariance of the residual noise. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
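The covariance propagation underlying both the downgrading and the smoothing steps can be summarized generically (a textbook statement of how a covariance transforms under a linear map, not the specific pipeline of this work): if the low-resolution or smoothed map is obtained from the full-resolution map $m$ as $m' = D\,m$, where $D$ encodes pixel averaging, the chosen window function and any beam smoothing, then the residual noise covariance transforms as
$$ N' = D\,N\,D^{\mathsf{T}}, $$
so a carefully selected window in $D$ suppresses the aliasing of power from multipoles $\ell > 2N_{\mathrm{side}}$ while keeping $N'$ a faithful description of the remaining noise correlations.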

Relevance: 10.00%

Abstract:

Modeling and forecasting of implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require an accurate volatility estimate. However, this has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options exhibit two patterns, the volatility smirk (skew) and the volatility term structure, which, when examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, which consists of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS. It extends the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put and call options on the FTSE100 index, a nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variations in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS. Of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the Hibbert et al. (2008) model. The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions; the relationship increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%), so OLS underestimates it at upper quantiles. Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are then backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index; implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Finally, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
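For orientation, one specification in the DFW tradition (quoted here only as an illustrative example, since the exact functional forms estimated in the first essay may differ) writes implied volatility as a low-order polynomial in moneyness $m$ and time to maturity $\tau$,
$$ \sigma_{\mathrm{iv}}(m,\tau) = \beta_0 + \beta_1 m + \beta_2 m^2 + \beta_3 \tau + \beta_4 \tau^2 + \beta_5 m\tau + \varepsilon, $$
which collapses to the constant-volatility (Black-Scholes) case when $\beta_1 = \dots = \beta_5 = 0$ and can be fitted across the option cross-section to produce a smooth IVS.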

Relevance: 10.00%

Abstract:

Markov random fields (MRFs) are popular in image processing applications for describing spatial dependencies between image units. Here, we take a look at the theory and models of MRFs, with an application to improving forest inventory estimates. Typically, autocorrelation between study units is a nuisance in statistical inference, but we take advantage of the dependencies to smooth noisy measurements by borrowing information from neighbouring units. We build a stochastic spatial model, which we estimate with a Markov chain Monte Carlo simulation method. The smoothed values are validated against another data set, increasing our confidence that the estimates are more accurate than the originals.
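The following toy Gibbs sampler sketches the borrowing-of-strength idea on a regular grid with a Gaussian Markov random field prior (an illustrative sketch with assumed variance parameters and a four-neighbour structure, not the actual forest-inventory model or its MCMC implementation):

```python
# Toy Gaussian-MRF smoother (illustrative only, not the thesis's inventory model).
# Observation model: y_i = x_i + noise, noise ~ N(0, sigma2).
# Intrinsic CAR prior: x_i | neighbours ~ N(mean of neighbours, tau2 / n_i).
# The Gibbs full conditional for x_i is Normal with the precision-weighted mean below.
import numpy as np

def gibbs_smooth(y, sigma2=1.0, tau2=0.5, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = y.copy()
    H, W = y.shape
    draws = []
    for it in range(n_iter):
        for i in range(H):
            for j in range(W):
                nb = [x[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= a < H and 0 <= b < W]
                prec = 1.0 / sigma2 + len(nb) / tau2           # likelihood + prior precision
                mean = (y[i, j] / sigma2 + sum(nb) / tau2) / prec
                x[i, j] = rng.normal(mean, np.sqrt(1.0 / prec))
        if it >= n_iter // 2:
            draws.append(x.copy())                             # keep post-burn-in draws
    return np.mean(draws, axis=0)                              # posterior mean = smoothed field

# Usage: smooth a noisy version of a slowly varying field.
rng = np.random.default_rng(1)
truth = np.outer(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
noisy = truth + 0.3 * rng.standard_normal(truth.shape)
smoothed = gibbs_smooth(noisy)
```

Each unit's smoothed value is thus a precision-weighted compromise between its own noisy measurement and the average of its neighbours.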

Relevance: 10.00%

Abstract:

Tumorigenesis is a consequence of inactivating mutations of tumor suppressor genes and activating mutations of proto-oncogenes. Most of the mutations compromise cell-autonomous and non-autonomous restraints on cell proliferation by modulating kinase signal transduction pathways. LKB1 is a tumor suppressor kinase whose sporadic mutations are frequently found in non-small cell lung cancer and cervical cancer. Germ-line mutations in the LKB1 gene lead to Peutz-Jeghers syndrome, with an increased risk of cancer and the development of benign gastrointestinal hamartomatous polyps consisting of hyperproliferative epithelia and a prominent stromal stalk composed of smooth muscle cell lineage cells. The tumor suppressive function of LKB1 is possibly mediated by 14 identified LKB1 substrate kinases, whose activation is dependent on the LKB1 kinase complex. The aim of my thesis was to identify cell signaling pathways crucial for tumor suppression by LKB1. Re-introduction of LKB1 expression in the melanoma cell line G361 induces cell cycle arrest. Here we demonstrated that restoring cytoplasmic LKB1 was sufficient to induce the cell cycle arrest in a manner dependent on the tumor suppressor p53. To address the role of LKB1 in gastrointestinal tumor suppression, Lkb1 was deleted specifically in the SMC lineage in vivo, which was sufficient to cause Peutz-Jeghers syndrome-type polyposis. Studies on primary myofibroblasts lacking Lkb1 suggest that the regulation of TGFβ signaling, actin stress fibers and smooth muscle cell lineage differentiation are candidate mechanisms for tumor suppression by LKB1 in the gastrointestinal stroma. Further studies with the LKB1 substrate kinase NUAK2 in HeLa cells indicate that NUAK2 is part of a positive feedback loop by which NUAK2 expression promotes actin stress fiber formation and, reciprocally, the induction of actin stress fibers promotes NUAK2 expression. The findings in this thesis suggest that the p53 and TGFβ signaling pathways are potential mediators of tumor suppression by LKB1. The implication of NUAK2 in the promotion of actin stress fibers suggests that NUAK2 is one possible mediator of LKB1-dependent TGFβ signaling and smooth muscle cell lineage differentiation.

Relevance: 10.00%

Abstract:

This thesis is composed of an introductory chapter and four applications, each of which constitutes its own chapter. The common element underlying the chapters is the econometric methodology: the applications rely mostly on leading econometric techniques for the estimation of causal effects. The first chapter introduces the econometric techniques that are employed in the remaining chapters. Chapter 2 studies the effects of shocking news on student performance. It exploits the fact that the school shooting in Kauhajoki in 2008 coincided with the matriculation examination period of that fall. It shows that the performance of men declined due to the news of the school shooting; for women, no similar pattern is observed. Chapter 3 studies the effects of the minimum wage on employment by employing the original Card and Krueger (1994; CK) and Neumark and Wascher (2000; NW) data together with the changes-in-changes (CIC) estimator. As the main result, it shows that the employment effect of an increase in the minimum wage is positive for small fast-food restaurants and negative for big fast-food restaurants. Therefore, the controversial positive employment effect reported by CK is overturned for big fast-food restaurants, and the NW data are shown, in contrast to their original results, to provide support for the positive employment effect. Chapter 4 employs the state-specific U.S. data on traffic fatalities (collected by Cohen and Einav [2003; CE]) to re-evaluate the effects of seat belt laws on traffic fatalities using the CIC estimator. It confirms the CE results that, on average, the implementation of a mandatory seat belt law results in an increase in the seat belt usage rate and a decrease in the total fatality rate. In contrast to CE, it also finds evidence for the compensating-behavior theory, observed especially in states along the U.S. border. Chapter 5 studies life cycle consumption in Finland, with special interest in the baby boomers and older households. It shows that the baby boomers smooth their consumption over the life cycle more than other generations. It also shows that, as a result of the recession in the 1990s, older households smoothed their life cycle consumption more than young households did.
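For reference, the changes-in-changes estimand of Athey and Imbens (2006), stated here in its textbook form rather than as a result of the thesis, builds the treated group's counterfactual post-period outcomes by mapping its pre-period outcomes through the control group's distribution functions:
$$ \tau^{\mathrm{CIC}} \;=\; \mathbb{E}\!\left[Y_{11}\right] \;-\; \mathbb{E}\!\left[F_{Y,01}^{-1}\big(F_{Y,00}(Y_{10})\big)\right], $$
where $Y_{gt}$ denotes outcomes for group $g$ (0 = control, 1 = treated) in period $t$ (0 = pre, 1 = post). Unlike plain difference-in-differences, the construction recovers a whole counterfactual distribution, which permits effects that vary across the outcome distribution.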

Relevance: 10.00%

Abstract:

Complications of atherosclerosis such as myocardial infarction and stroke are the primary cause of death in Western societies. The development of atherosclerotic lesions is a complex process, including endothelial cell dysfunction, inflammation, extracellular matrix alteration and vascular smooth muscle cell (VSMC) proliferation and migration. Various cell cycle regulatory proteins control VSMC proliferation. Protein kinases called cyclin dependent kinases (CDKs) play a major role in the regulation of cell cycle progression. At specific phases of the cell cycle, CDKs pair with cyclins to become catalytically active and phosphorylate numerous substrates contributing to cell cycle progression. CDKs are also regulated by cyclin dependent kinase inhibitors, activating and inhibitory phosphorylation, proteolysis and transcription factors. This tight regulation of the cell cycle is essential; its deregulation is connected to the development of cancer and other proliferative disorders such as atherosclerosis and restenosis, as well as neurodegenerative diseases. Proteins of the cell cycle therefore provide attractive potential targets for drug development. Consequently, various low molecular weight CDK inhibitors have been identified and are in clinical development. Tylophorine is a phenanthroindolizidine alkaloid which has been shown to inhibit the growth of several human cancer cell lines; it was used in Ayurvedic medicine to treat inflammatory disorders. The aim of this study was to investigate the effect of tylophorine on human umbilical vein smooth muscle cell (HUVSMC) proliferation, cell cycle progression and the expression of various cell cycle regulatory proteins, in order to confirm the findings made with tylophorine in rat cells. We used several methods to test our hypothesis, including a cell proliferation assay, western blotting and flow cytometric cell cycle distribution analysis. We demonstrated by cell proliferation assay that tylophorine inhibits HUVSMC proliferation dose-dependently with an IC50 value of 164 ± 50 nM. Western blot analysis was used to determine the effect of tylophorine on the expression of cell cycle regulatory proteins. Tylophorine downregulates cyclin D1 and p21 expression levels. The results concerning tylophorine's effect on the phosphorylation sites of p53 were not consistent; more sensitive methods are required to determine this effect completely. We used flow cytometric cell cycle analysis to investigate whether tylophorine interferes with cell cycle progression and arrests cells in a specific cell cycle phase. Tylophorine was shown to induce the accumulation of asynchronized HUVSMCs in S phase. Tylophorine has a significant effect on the cell cycle, but its role as a cell cycle regulator in the treatment of vascular proliferative diseases and cancer requires more experiments in vitro and in vivo.
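As an aside on how an IC50 of this kind is typically obtained (a generic dose-response sketch with made-up numbers, not the assay analysis used in the study), a four-parameter logistic curve can be fitted to proliferation measured as a percentage of the untreated control:

```python
# Illustrative IC50 estimation via a four-parameter logistic (Hill) fit.
# Generic dose-response sketch; the data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Proliferation (% of control) as a function of drug concentration (nM)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data: concentrations in nM, response in % of control.
conc = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)
resp = np.array([95, 85, 62, 35, 15, 8], dtype=float)

params, cov = curve_fit(four_pl, conc, resp, p0=[5, 100, 150, 1.0])
ic50, ic50_se = params[2], np.sqrt(np.diag(cov))[2]
print(f"IC50 = {ic50:.0f} +/- {ic50_se:.0f} nM")
```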

Relevance: 10.00%

Abstract:

The study presents a theory of utility models based on aspiration levels, as well as the application of this theory to the planning of timber flow economics. The first part of the study comprises a derivation of the utility-theoretic basis for the application of aspiration levels. Two basic models are dealt with: the additive and the multiplicative. Applied here solely to partial utility functions, aspiration and reservation levels are interpreted as defining piecewise linear functions. The standpoint of the decision-maker's choices is emphasized by the use of indifference curves. The second part of the study introduces a model for the management of timber flows. The model is based on the assumption that the decision-maker is willing to specify a shape of income flow which is different from that of the capital-theoretic optimum. The utility model comprises four aspiration-based compound utility functions. The theory and the flow model are tested numerically by computations covering three forest holdings. The results show that the additive model is sensitive even to slight changes in relative importances and aspiration levels. This applies particularly to nearly linear production possibility boundaries of monetary variables. The multiplicative model, on the other hand, is stable because it generates strictly convex indifference curves. Due to a higher marginal rate of substitution, the multiplicative model implies a stronger dependence on forest management than the additive function. For income trajectory optimization, a method utilizing an income trajectory index is more efficient than one based on the use of aspiration levels per management period. Smooth trajectories can be attained by penalizing the squared deviations of the feasible trajectories from the desired one.
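A minimal sketch of the two compound forms, under the assumption (made here purely for illustration; the study's exact parameterization may differ) that each partial utility rises linearly from 0 at the reservation level to 1 at the aspiration level and is clipped outside that range:

```python
# Illustrative aspiration-level utility model (assumed parameterization, not the study's).
# Partial utility: 0 at/below the reservation level, 1 at/above the aspiration level,
# linear in between -> piecewise linear partial utility functions.
import numpy as np

def partial_utility(x, reservation, aspiration):
    return float(np.clip((x - reservation) / (aspiration - reservation), 0.0, 1.0))

def additive_utility(xs, reservations, aspirations, weights):
    u = [partial_utility(x, r, a) for x, r, a in zip(xs, reservations, aspirations)]
    return sum(w * ui for w, ui in zip(weights, u))

def multiplicative_utility(xs, reservations, aspirations, weights):
    u = [partial_utility(x, r, a) for x, r, a in zip(xs, reservations, aspirations)]
    # Weighted geometric form: any attribute at its reservation level drives utility to 0.
    return float(np.prod([ui ** w for w, ui in zip(weights, u)]))

# Example with two monetary criteria (hypothetical values and weights).
xs, res, asp, w = [80.0, 150.0], [50.0, 100.0], [120.0, 200.0], [0.6, 0.4]
print(additive_utility(xs, res, asp, w), multiplicative_utility(xs, res, asp, w))
```

The multiplicative form drives the overall utility to zero whenever any attribute falls to its reservation level, consistent with the strictly convex indifference curves noted above.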

Relevance: 10.00%

Abstract:

Hamiltonian systems in stellar and planetary dynamics are typically near-integrable. For example, Solar System planets move on almost two-body orbits, and in simulations of the Galaxy, the orbits of stars appear regular. For such systems, sophisticated numerical methods can be developed through integrable approximations. Following this theme, we discuss three distinct problems. We start by considering numerical integration techniques for planetary systems. Perturbation methods (which utilize the integrability of two-body motion) are preferred over conventional "blind" integration schemes. We introduce perturbation methods formulated with Cartesian variables. In our numerical comparisons, these are superior to their conventional counterparts but, by definition, lack the energy-preserving properties of symplectic integrators. However, they are exceptionally well suited for relatively short-term integrations in which moderately high positional accuracy is required. The next exercise falls into the category of stability questions in planetary systems. Traditionally, the interest has been in the orbital stability of planets, which has been quantified, e.g., by Liapunov exponents. We offer a complementary aspect by considering the protective effect that massive gas giants, like Jupiter, can offer to Earth-like planets inside the habitable zone of a planetary system. Our method produces a single quantity, called the escape rate, which characterizes the system of giant planets. We obtain some interesting results by computing escape rates for the Solar System. Galaxy modelling is our third and final topic. Because of the sheer number of stars (about 10^11 in the Milky Way), galaxies are often modelled as smooth potentials hosting distributions of stars. Unfortunately, only a handful of suitable potentials are integrable (the harmonic oscillator, the isochrone and the Stäckel potential). This severely limits the possibilities of finding an integrable approximation for an observed galaxy. A solution to this problem is torus construction: a method for numerically creating a foliation of invariant phase-space tori corresponding to a given target Hamiltonian. Canonically, the invariant tori are constructed by deforming the tori of some existing integrable toy Hamiltonian. Our contribution is to demonstrate how this can be accomplished by using a Stäckel toy Hamiltonian in ellipsoidal coordinates.

Relevance: 10.00%

Abstract:

The majority of Internet traffic uses the Transmission Control Protocol (TCP) as the transport-level protocol. It provides a reliable, ordered byte stream for applications. However, applications such as live video streaming place an emphasis on timeliness over reliability. A smooth sending rate can also be more desirable than sharp changes in the sending rate. For these applications TCP is not necessarily suitable. Rate control attempts to address the demands of these applications. An important design feature in all rate control mechanisms is TCP friendliness: we should not negatively impact TCP performance, since it is still the dominant protocol. Rate control mechanisms are classified into two categories: window-based mechanisms and rate-based mechanisms. Window-based mechanisms increase their sending rate after a successful transfer of a window of packets, similarly to TCP, and typically decrease their sending rate sharply after a packet loss. Rate-based solutions control their sending rate in some other way. A large subset of rate-based solutions is called equation-based solutions. Equation-based solutions have a control equation which provides an allowed sending rate. Typically these rate-based solutions react more slowly to both packet losses and increases in available bandwidth, making their sending rate smoother than that of window-based solutions. This report contains a survey of rate control mechanisms and a discussion of their relative strengths and weaknesses. A section is dedicated to enhancements in wireless environments. Another topic in the report is bandwidth estimation, which is divided into capacity estimation and available bandwidth estimation. We describe techniques that enable the calculation of a fair sending rate and can be used to create novel rate control mechanisms.
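As an example of such a control equation (this sketch uses the Padhye et al. TCP throughput formula adopted by TCP-Friendly Rate Control in RFC 5348; the mechanisms covered in the report are not necessarily limited to this particular equation), the allowed sending rate is computed from the measured loss event rate and round-trip time:

```python
# Illustrative TFRC-style control equation (Padhye et al. throughput model as used
# in RFC 5348). X is the allowed sending rate in bytes/second; shown only as an
# example of an equation-based mechanism.
from math import sqrt

def tfrc_allowed_rate(s, rtt, p, t_rto=None, b=1):
    """s: packet size (bytes), rtt: round-trip time (s), p: loss event rate,
    t_rto: retransmission timeout (s, defaults to 4*rtt), b: packets per ACK."""
    if p <= 0:
        raise ValueError("loss event rate must be positive for the equation to apply")
    t_rto = 4 * rtt if t_rto is None else t_rto
    denom = rtt * sqrt(2 * b * p / 3) + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p)
    return s / denom

# Example: 1460-byte packets, 100 ms RTT, 1% loss event rate.
print(f"{tfrc_allowed_rate(1460, 0.1, 0.01) / 1e3:.0f} kB/s")
```

Because the computed rate changes only as the smoothed loss and round-trip time estimates change, the sender avoids the sharp rate halving of window-based schemes.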