956 results for Blocks bootstrap


Relevance: 10.00%

Abstract:

It is known that in an OFDM system, applying a Hadamard transform or phase alteration before the IDFT operation can reduce the Peak-to-Average Power Ratio (PAPR). Both techniques can be viewed as constellation precoding for PAPR reduction. In general, using non-diagonal transforms, such as the Hadamard transform, increases the ML decoding complexity. In this paper we propose the use of block-IDFT matrices and show that appropriate block-IDFT matrices give lower PAPR as well as lower decoding complexity compared to using the Hadamard transform. Moreover, we present a detailed study of the tradeoff between PAPR reduction and ML decoding complexity when using block-IDFT matrices with various block sizes.
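The PAPR quantity at the centre of this abstract is straightforward to compute numerically. Below is a minimal sketch (an illustration only, not the paper's block-IDFT scheme) that estimates the PAPR of a single OFDM symbol by oversampling the IDFT; the oversampling factor and QPSK mapping are assumptions made for the example:

```python
import numpy as np

def papr_db(freq_symbols: np.ndarray, oversample: int = 4) -> float:
    """Peak-to-average power ratio (dB) of one OFDM symbol.

    The frequency-domain constellation is zero-padded (oversampled)
    before the IDFT so the discrete samples approximate the
    continuous-time envelope.
    """
    n = len(freq_symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = freq_symbols[:n // 2]          # positive frequencies
    padded[-(n - n // 2):] = freq_symbols[n // 2:]   # negative frequencies
    x = np.fft.ifft(padded)                          # time-domain samples
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Example: 64 random QPSK subcarriers
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
print(f"PAPR: {papr_db(qpsk):.2f} dB")
```

A single active subcarrier has a constant envelope (0 dB PAPR); with many independently modulated subcarriers the IDFT outputs can add coherently, which is exactly the peak problem that constellation precoding tries to tame.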

Relevance: 10.00%

Abstract:

When augmented with the longest common prefix (LCP) array and some other structures, the suffix array can solve many string processing problems in optimal time and space. A compressed representation of the LCP array is also one of the main building blocks in many compressed suffix tree proposals. In this paper, we describe a new compressed LCP representation: the sampled LCP array. We show that when used with a compressed suffix array (CSA), the sampled LCP array often offers better time/space trade-offs than the existing alternatives. We also show how to construct the compressed representations of the LCP array directly from a CSA.
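For concreteness, the two structures the paper compresses can be built in a few lines of Python. This is a textbook construction (naive suffix array plus Kasai's linear-time LCP computation), not the paper's sampled or compressed representation:

```python
def suffix_array(s: str) -> list[int]:
    """Naive O(n^2 log n) suffix array; fine for illustration."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s: str, sa: list[int]) -> list[int]:
    """Kasai's algorithm: LCP of each suffix with its predecessor in SA."""
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n
    h = 0
    for i in range(n):                   # suffixes in text order
        if rank[i] > 0:
            j = sa[rank[i] - 1]          # preceding suffix in SA order
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1                   # LCP can drop by at most 1
        else:
            h = 0
    return lcp

s = "banana"
sa = suffix_array(s)        # [5, 3, 1, 0, 4, 2]
print(lcp_array(s, sa))     # [0, 1, 3, 0, 0, 2]
```

The sampled LCP array stores only a subset of these values and recovers the rest on demand through the CSA.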

Relevance: 10.00%

Abstract:

Background and aims: Low stage and curative surgery are established factors for improved survival in gastric cancer. However, not all low-stage patients have a good prognosis. Cyclooxygenase-2 (COX-2) is known to associate with reduced survival in several cancers, and has been shown to play an important role in gastric carcinogenesis. Since new and better prognostic markers are needed for gastric cancer, we studied the prognostic significance of COX-2 and of markers that associate with COX-2 expression. We also studied markers reflecting proliferation and apoptosis, and evaluated their association with COX-2. Our purpose was to construct an accurate prognostic model by combining tissue markers and clinicopathological factors. Materials and methods: Of 342 consecutive patients who underwent surgery for gastric cancer at Meilahti Hospital, Helsinki University Central Hospital, 337 were included in this study. Low stages I to II were represented by 141 (42%) patients, and high stages III to IV by 196 (58%). Curative surgery was performed on 176 (52%) patients. Survival data were obtained from the national registers. Slides from archive tissue blocks were prepared for immunohistochemistry by use of COX-2, human antigen R (HuR), cyclin A, matrix metalloproteinases 2 and 9 (MMP-2, MMP-9), and Ki-67 antibodies. Immunostainings were scored by microscopy, and scores were entered into a database. Associations of tumor markers with clinicopathological factors were calculated, as well as associations with p53, p21, and results of flow cytometry from earlier studies. Survival analysis was performed by the Kaplan-Meier method, and Cox multivariate models were constructed. Cell culture experiments were performed to explore the effect of small interfering (si)RNA of HuR on COX-2 expression in a TMK-1 gastric cancer cell line. Results: Overall 5-year survival was 35.1%. 
Study I showed that COX-2 was an independent prognostic factor, and that the prognostic impact of COX-2 was more pronounced in low-stage patients. Cytoplasmic HuR expression also associated with reduced survival in gastric cancer patients in a non-independent manner. Cell culture experiments showed that HuR can regulate COX-2 expression in TMK-1 cells in vitro, with an association also between COX-2 and HuR tissue expression in a clinical material. In Study II, cyclin A was an independent prognostic factor and was associated with HuR expression in the gastric cancer material. The results of Study III showed that epithelial MMP-2 associated with survival in univariate, but not in multivariate analysis. However, MMP-9 showed no prognostic value. MMP-2 expression was associated with COX-2 expression. In Study IV, the prognostic power of COX-2 was compared with that of all tested markers associated with survival in Studies I to III, as well as with p21, p53, and flow cytometry results. COX-2 and p53 were independent prognostic factors, and COX-2 expression was associated with that of p53 and Ki-67 and also with aneuploidy. Conclusions: COX-2 is an independent prognostic factor in gastric cancer, and its prognostic power emerges especially in low stage cancer. COX-2 is regulated by HuR, and is associated with factors reflecting invasion, proliferation, and apoptosis. In an extended multivariate model, COX-2 retained its position as an independent prognosticator. COX-2 can be considered a promising new prognostic marker in gastric cancer.
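The survival analysis underlying these results rests on the Kaplan-Meier estimator. As a reminder of the mechanics, here is a generic textbook sketch with made-up numbers, not the study's data or code:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates at each distinct event time.

    times  -- follow-up time per patient
    events -- 1 if death observed, 0 if censored
    Returns a list of (time, survival probability) pairs.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = at_this_t = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            deaths += data[i][1]
            at_this_t += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk         # multiplicative step
            out.append((t, surv))
        n_at_risk -= at_this_t                     # censored leave quietly
    return out

# Toy cohort: times in months, 1 = death, 0 = censored
print(kaplan_meier([6, 6, 7, 10, 13], [1, 0, 1, 0, 1]))
```

At each event time the survival curve is multiplied by (1 - deaths / number at risk); censored patients leave the risk set without triggering a step, which is what distinguishes the estimator from a naive survival fraction.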

Relevance: 10.00%

Abstract:

Topics in Spatial Econometrics — With Applications to House Prices

Spatial effects in data occur when geographical closeness of observations influences the relation between the observations. When two points on a map are close to each other, the observed values on a variable at those points tend to be similar. The further away the two points are from each other, the less similar the observed values tend to be. Recent technical developments, such as geographical information systems (GIS) and global positioning systems (GPS), have brought about renewed interest in spatial matters. For instance, it is possible to observe the exact location of an observation and combine it with other characteristics. Spatial econometrics integrates spatial aspects into econometric models and analysis. The thesis concentrates mainly on methodological issues, but the findings are illustrated by empirical studies on house price data. The thesis consists of an introductory chapter and four essays. The introductory chapter presents an overview of topics and problems in spatial econometrics. It discusses spatial effects, spatial weights matrices, especially k-nearest neighbours weights matrices, and various spatial econometric models, as well as estimation methods and inference. Further, the problem of omitted variables, a few computational and empirical aspects, the bootstrap procedure and the spatial J-test are presented. In addition, a discussion on hedonic house price models is included. In the first essay a comparison is made between spatial econometrics and time series analysis. By restricting attention to unilateral spatial autoregressive processes, it is shown that a unilateral spatial autoregression, which enjoys properties similar to those of an autoregression with time series, can be defined. 
By an empirical study on house price data the second essay shows that it is possible to form coordinate-based, spatially autoregressive variables, which are at least to some extent able to replace the spatial structure in a spatial econometric model. In the third essay a strategy for specifying a k-nearest neighbours weights matrix by applying the spatial J-test is suggested, studied and demonstrated. In the final fourth essay the properties of the asymptotic spatial J-test are further examined. A simulation study shows that the spatial J-test can be used for distinguishing between general spatial models with different k-nearest neighbours weights matrices. A bootstrap spatial J-test is suggested to correct the size of the asymptotic test in small samples.
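The k-nearest neighbours weights matrix that recurs through the essays can be constructed as follows. This is a generic sketch (row-standardized weights, Euclidean distances), not the thesis' own implementation:

```python
import numpy as np

def knn_weights(coords: np.ndarray, k: int) -> np.ndarray:
    """Row-standardized k-nearest-neighbours spatial weights matrix W.

    coords -- (n, 2) array of point locations
    Each row of W puts weight 1/k on the k closest other points.
    """
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d[i])[:k]] = 1.0 / k
    return W

coords = np.array([[0, 0], [0, 1], [1, 0], [5, 5]], dtype=float)
W = knn_weights(coords, k=2)
print(W.sum(axis=1))   # each row sums to 1
```

The spatial lag W @ y then enters a spatial autoregression such as y = rho * W y + X beta + eps, and the spatial J-test compares models built from rival choices of k.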

Relevance: 10.00%

Abstract:

In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small sample correction for the likelihood ratio (LR) test of cointegrating rank and the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970–2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen’s LR tests for cointegration. In all papers we work with two data sets. The first data set is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)–2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
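The idea behind the first paper's test, reading off the eigenvalues of the least squares estimate of the autoregressive matrix, can be illustrated on simulated data. The data-generating process below (two series sharing one random-walk trend) is a toy choice for the example, not the paper's design:

```python
import numpy as np

def var1_companion_eigenvalues(y: np.ndarray) -> np.ndarray:
    """Moduli of the eigenvalues of the LS estimate of A in a VAR(1)
    y_t = A y_{t-1} + e_t, sorted in decreasing order.  Eigenvalues
    close to one point to unit roots, i.e. possible non-stationarity
    and cointegration.
    """
    Y, X = y[1:], y[:-1]
    A = np.linalg.lstsq(X, Y, rcond=None)[0].T   # LS estimate of A
    return np.sort(np.abs(np.linalg.eigvals(A)))[::-1]

rng = np.random.default_rng(1)
n = 500
# Two series sharing one random-walk trend => one unit root
trend = np.cumsum(rng.standard_normal(n))
y = np.column_stack([trend + rng.standard_normal(n),
                     0.5 * trend + rng.standard_normal(n)])
print(var1_companion_eigenvalues(y))  # largest modulus near 1
```

An eigenvalue modulus near one signals a unit root; with one common trend, one estimated root sits near one while the other reflects the stationary cointegrating relation.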

Relevance: 10.00%

Abstract:

Customer value has been identified as “the reason” for customers to patronize a firm, and as one of the fundamental blocks that market exchanges build upon. Despite the importance of customer value, it is often poorly defined, or seems to refer to different phenomena. This dissertation contributes to current marketing literature by subjecting the value concept to a critical investigation, and by clarifying its conceptual foundation. Based on the literature review, it is proposed that customer value can be divided into two separate, but interrelated aspects: value creation processes, and value outcome determination. This means that on one hand, it is possible to examine those activities through which value is created, and on the other hand, investigate how customers determine the value outcomes they receive. The results further show that customers may determine value in four different ways: value as a benefit/sacrifice ratio, as experience outcomes, as means-end chains, and value as phenomenological. In value as benefit/sacrifice ratio, customers are expected to calculate the ratio between service benefits (e.g. ease of use) and sacrifices (e.g. price). In value as experience outcomes, customers are suggested to experience multiple value components, such as functional, emotional, or social value. Customer value as means-end chains in turn models value in terms of the relationships between service characteristics, use value, and desirable ends (e.g. social acceptance). Finally, value as phenomenological proposes that value emerges from lived, holistic experiences. The empirical papers investigate customer value in e-services, including online health care and mobile services, and show how value in e-service stems from the process and content quality, use context, and the service combination that a customer uses. In conclusion, marketers should understand that different value definitions generate different types of understanding of customer value. 
In addition, it is clear that studying value from several perspectives is useful, as it enables a richer understanding of value for the different actors. Finally, the interconnectedness between value creation and determination is surprisingly little researched, and this dissertation proposes initial steps towards understanding the relationship between the two.

Relevance: 10.00%

Abstract:

A focus on cooperative industrial business relationships has become increasingly important in studies of industrial relationships. Strong relationships between companies usually signal that the companies will cooperate for a longer time, which may affect their competitive and financial strength positively. As a result, the bonds between companies become more important, since bonds are the building blocks of relationships and thus affect the stability of the cooperation between companies. Bond strength affects relationship strength. In the study, a framework has been developed for how bonds develop and change in an industrial business relationship. Routine or critical episodes affect the bonds in the relationship, strengthening or weakening them or preserving the status quo. The method for analyzing bond strength, aimed at grasping the nature and change of bonds, was devised by systematically following the elements of the definitions of bonds. A system of tables was drawn up to determine whether a bond was weak, of medium strength, or strong. Bonds are important regulators of industrial business relationships: by influencing the bonds, one may strengthen or weaken the business relationship. A relationship can be strengthened to increase business and revenue, or weakened to terminate business where the revenue is low or where there are other problems in the relationship. By measuring the strength of different bonds, it becomes possible to strengthen weak bonds and thereby the relationship. 
By using bond management it is possible to strategically strengthen or weaken the bonds between the cooperating companies, either to strengthen the cooperation and tie the customer or supplier to the company, or to weaken the cooperation in order to terminate the relationship. The instrument for managing bonds is the bond audit created in the study, which shows which bonds resources should be focused on in order to increase or decrease their strength.

Relevance: 10.00%

Abstract:

Separation of printed text blocks from non-text areas containing signatures, handwritten text, logos and other such symbols is a necessary first step for an OCR system for printed text recognition. In the present work, we compare the efficacy of some feature-classifier combinations for carrying out this separation task. We have selected the length-normalized horizontal projection profile (HPP) as the starting point, on the assumption that printed text blocks contain lines of text which generate HPPs with some regularity. This assumption is demonstrated to be valid. Our features are the HPP and its two transformed versions, namely the eigen and Fisher profiles. Four well-known classifiers, namely nearest neighbor, linear discriminant function, SVMs and artificial neural networks, have been considered, and the efficiency of the combinations of these classifiers with the above features is compared. A sequential floating feature selection technique has been adopted to enhance the efficiency of the separation task. The results give an average accuracy of about 96.
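The HPP feature at the heart of the comparison can be sketched directly. The bin count and unit-maximum normalization below are assumptions for illustration, not necessarily the paper's exact recipe:

```python
import numpy as np

def horizontal_projection_profile(block: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Length-normalized horizontal projection profile of a binary image
    block (1 = ink).  Row sums are resampled to a fixed number of bins
    and scaled to unit maximum, so blocks of different heights become
    comparable feature vectors.
    """
    rows = block.sum(axis=1).astype(float)
    # resample to n_bins by linear interpolation
    src = np.linspace(0, 1, len(rows))
    dst = np.linspace(0, 1, n_bins)
    profile = np.interp(dst, src, rows)
    peak = profile.max()
    return profile / peak if peak > 0 else profile

# Printed text: regularly spaced inked rows give a periodic profile
text_like = np.zeros((40, 100), dtype=int)
text_like[::4] = 1          # every 4th row fully inked
print(horizontal_projection_profile(text_like)[:8])
```

Printed text yields a roughly periodic profile (one peak per text line); signatures and logos do not, which is what makes the feature discriminative for this task.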

Relevance: 10.00%

Abstract:

Extraction of text areas from document images with complex content and layout is a challenging task. A few texture-based techniques have already been proposed for extracting such text blocks, but most of them are computationally demanding and hence far from realizable for real-time implementation. In this work, we propose a modification to two of the existing texture-based techniques to reduce the computation. This is accomplished with Harris corner detectors. The efficiency of the two texture-based algorithms, one based on Gabor filters and the other on the log-polar wavelet signature, is compared. A combination of Gabor-feature-based texture classification performed on a smaller set of Harris-corner-detected points is observed to deliver both accuracy and efficiency.
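The speed-up rests on the Harris corner response: texture classification is run only at high-response points instead of at every pixel. A from-scratch NumPy sketch of the response map follows (window size and the constant k are conventional defaults, assumed for illustration):

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.05, win: int = 3) -> np.ndarray:
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M sums the gradient products over a win x win window.
    """
    img = img.astype(float)
    Iy, Ix = np.gradient(img)           # gradients along rows, columns

    def box_sum(a):                     # windowed sum via zero padding
        pad = np.pad(a, win // 2)
        out = np.empty_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = pad[i:i + win, j:j + win].sum()
        return out

    Sxx = box_sum(Ix * Ix)
    Syy = box_sum(Iy * Iy)
    Sxy = box_sum(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# White square on black background: the four corners respond strongest
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(np.unravel_index(R.argmax(), R.shape))
```

Thresholding R and keeping local maxima yields the sparse point set on which the Gabor features are then computed.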

Relevance: 10.00%

Abstract:

Encoding protein 3D structures into 1D strings using short structural prototypes, or structural alphabets, opens a new front for structure comparison and analysis. Using the well-documented 16 motifs of Protein Blocks (PBs) as a structural alphabet, we have developed a methodology to compare protein structures that are encoded as sequences of PBs by aligning them using dynamic programming with a substitution matrix for PBs. This methodology is implemented in the applications available in the Protein Block Expert (PBE) server. PBE addresses common issues in the field of protein structure analysis, such as comparison of protein structures and identification of protein structures in structural databanks that resemble a given structure. PBE-T provides a facility to transform any PDB file into sequences of PBs. PBE-ALIGNc performs comparison of two protein structures based on the alignment of their corresponding PB sequences. PBE-ALIGNm is a facility for mining the SCOP database for similar structures based on the alignment of PBs. In addition, PBE provides an interface to a database (PBE-SAdb) of preprocessed PB sequences from SCOP culled at 95% and of all-against-all pairwise PB alignments at family and superfamily levels. The PBE server is freely available at http://bioinformatics.univ-reunion.fr/PBE/.
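The alignment engine behind PBE-ALIGNc is classic dynamic programming. The sketch below is generic Needleman-Wunsch with a linear gap penalty; the two-value score is a made-up stand-in for the real 16x16 PB substitution matrix:

```python
def align_global(a: str, b: str, score, gap: int = -2) -> int:
    """Needleman-Wunsch global alignment score of two sequences over an
    arbitrary alphabet (here: Protein Block letters), using a
    substitution score function and a linear gap penalty.
    """
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                     # leading gaps in b
    for j in range(1, m + 1):
        F[0][j] = j * gap                     # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i-1][j-1] + score(a[i-1], b[j-1]),  # (mis)match
                          F[i-1][j] + gap,                      # gap in b
                          F[i][j-1] + gap)                      # gap in a
    return F[n][m]

# Toy substitution score standing in for a real PB substitution matrix:
# +2 for identical blocks, -1 otherwise (assumption for illustration).
toy = lambda x, y: 2 if x == y else -1
print(align_global("mmmnop", "mmmop", toy))
```

With the actual PB substitution matrix plugged in as `score`, the same recursion aligns two encoded structures.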

Relevance: 10.00%

Abstract:

This paper is concerned with using the bootstrap to obtain improved critical values for the error correction model (ECM) cointegration test in dynamic models. We investigate the effects of dynamic specification on the size and power of the ECM cointegration test with bootstrap critical values. The results from a Monte Carlo study show that the size of the bootstrap ECM cointegration test is close to the nominal significance level. We find that overspecification of the lag length results in a loss of power, while underspecification results in size distortion. The performance of the bootstrap ECM cointegration test deteriorates if the correct lag length is not used in the ECM; the test is therefore not robust to model misspecification.
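The bootstrap recipe the paper relies on is general: estimate the model under the null, resample its residuals, rebuild pseudo-data, and recompute the test statistic many times. A minimal sketch follows; the Dickey-Fuller-style t-ratio and random-walk rebuild are toy stand-ins for the actual ECM statistic:

```python
import numpy as np

def bootstrap_critical_value(resid, rebuild, statistic, level=0.05,
                             n_boot=499, rng=None):
    """Generic residual-bootstrap critical value.

    resid     -- residuals from the model estimated under the null
    rebuild   -- maps resampled residuals to a pseudo data set
    statistic -- maps a data set to a scalar test statistic
    Returns the empirical (1 - level) quantile of the bootstrap
    distribution of the statistic.
    """
    rng = rng or np.random.default_rng(0)
    stats = [statistic(rebuild(rng.choice(resid, size=len(resid))))
             for _ in range(n_boot)]
    return float(np.quantile(stats, 1 - level))

# Toy example: under the null the 'equilibrium error' is a pure random
# walk, so pseudo data are rebuilt as random walks from resampled
# residuals, and the statistic is the absolute t-ratio of the
# error-correction coefficient.
def abs_t_ratio(series):
    dy, lag = np.diff(series), series[:-1]
    b = (lag * dy).sum() / (lag * lag).sum()
    sigma2 = ((dy - b * lag) ** 2).mean()
    return abs(b) / np.sqrt(sigma2 / (lag * lag).sum())

rng = np.random.default_rng(1)
z = np.cumsum(rng.standard_normal(200))   # observed data under the null
cv = bootstrap_critical_value(np.diff(z), np.cumsum, abs_t_ratio, rng=rng)
print(f"5% bootstrap critical value for |t|: {cv:.2f}")
```

The observed statistic is then compared against `cv` instead of an asymptotic table, which is what corrects the finite-sample size of the test.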

Relevance: 10.00%

Abstract:

This study reports the details of the finite element analysis of eleven shear-critical partially prestressed concrete T-beams having steel fibers over partial or full depth. Prestressed concrete T-beams having shear span to depth ratios of 2.65 and 1.59 and failing in shear have been analyzed using 'ANSYS'. The 'ANSYS' model accounts for nonlinear phenomena such as bond-slip of longitudinal reinforcements, post-cracking tensile stiffness of the concrete, stress transfer across the cracked blocks of the concrete, and load sustenance through the bridging of steel fibers at the crack interface. The concrete is modeled using 'SOLID65', an eight-node brick element capable of simulating the cracking and crushing behavior of brittle materials. The reinforcements, such as deformed bars, prestressing wires and steel fibers, have been modeled discretely using 'LINK8', a 3D spar element. The slip between the reinforcement (rebar, fibers) and the concrete has been modeled using 'COMBIN39', a non-linear spring element connecting the nodes of the 'LINK8' elements representing the reinforcement and the nodes of the 'SOLID65' elements representing the concrete. The 'ANSYS' model correctly predicted the diagonal tension failure and shear compression failure of prestressed concrete beams observed in the experiment. The capability of the model to capture the critical crack regions, loads and deflections for various types of shear failures in prestressed concrete beams has been illustrated.

Relevance: 10.00%

Abstract:

Preparation of Rb-beta-alumina was realized by the gel-to-crystallite conversion method. Reaction of hydrated aluminum hydroxide gel with RbOH in ethanol medium gave rise to the Rb+-inserted pseudoboehmite precursor under wet chemical conditions. The thermal decomposition of the precursor yielded Rb-beta-alumina. The Rb2O:Al2O3 ratio of monophasic Rb-beta-alumina ranged from 1:10 to 1:22. The extended stability in the compositional range is due to the fact that the conduction planes containing Rb+ and O2- ions can have lower occupancy of Rb+ ions for larger sized alkali ions, permitting the steric separation of the adjoining spinel blocks. High-resolution electron microscopy revealed that the decreasing occupancy of alkali ions in the conduction plane is balanced by changing widths of spinel blocks arising from the shift of tetrahedral Al3+ ions to octahedral sites and an accompanying increase in stacking defects. (C) 2000 Elsevier Science Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

Molecular machinery on the micro-scale, believed to be the fundamental building blocks of life, involves forces of 1–100 pN and movements of nanometers to micrometers. Micromechanical single-molecule experiments seek to understand the physics of nucleic acids, molecular motors, and other biological systems through direct measurement of forces and displacements. Optical tweezers are a popular choice among several complementary techniques for sensitive force spectroscopy in the field of single-molecule biology. The main objective of this thesis was to design and construct an optical tweezers instrument capable of investigating the physics of molecular motors and mechanisms of protein/nucleic-acid interactions on the single-molecule level. A double-trap optical tweezers instrument incorporating acousto-optic trap steering, two independent detection channels, and a real-time digital controller was built. A numerical simulation and a theoretical study were performed to assess the signal-to-noise ratio in a constant-force molecular motor stepping experiment. Real-time feedback control of optical tweezers was explored in three studies. Position clamping was implemented and compared to theoretical models using both proportional and predictive control. A force clamp was implemented and tested with a DNA tether in the presence of the enzyme lambda exonuclease. The results of the study indicate that the presented models describing the signal-to-noise ratio in constant-force experiments and feedback control experiments in optical tweezers agree well with experimental data. The effective trap stiffness can be increased by an order of magnitude using the presented position-clamping method. The force clamp can be used for constant-force experiments, and the results from a proof-of-principle experiment, in which the enzyme lambda exonuclease converts double-stranded DNA to single-stranded DNA, agree with previous research. The main objective of the thesis was thus achieved. 
The developed instrument and presented results on feedback control serve as a stepping stone for future contributions to the growing field of single molecule biology.
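The position clamp studied in the thesis can be imitated with a toy overdamped Langevin simulation: at each step the trap centre is steered against the measured bead position. All parameter values below are rough order-of-magnitude guesses for a micron-sized bead in water, not the instrument's calibrated numbers:

```python
import numpy as np

def simulate_position_clamp(kp, n_steps=200_000, dt=1e-5, k_trap=1e-5,
                            gamma=1.9e-8, kT=4.1e-21, seed=0):
    """Overdamped Langevin simulation of a bead in an optical trap with
    a proportional position clamp: each step the trap centre is moved
    to -kp * x, pushing the bead back toward zero.  Returns the variance
    of the bead position (smaller variance = stiffer effective trap).
    """
    rng = np.random.default_rng(seed)
    kicks = np.sqrt(2 * kT * dt / gamma) * rng.standard_normal(n_steps)
    x, acc = 0.0, 0.0
    for kick in kicks:
        force = -k_trap * (x - (-kp * x))   # trap centre steered to -kp*x
        x += force / gamma * dt + kick      # Euler-Maruyama step
        acc += x * x
    return acc / n_steps

v_open = simulate_position_clamp(kp=0.0)    # ordinary trap
v_closed = simulate_position_clamp(kp=9.0)  # proportional clamp
print(f"effective stiffness gain: {v_open / v_closed:.1f}")  # ~1 + kp
```

By equipartition the position variance is kT/k_eff with k_eff = k_trap * (1 + kp), so the feedback gain directly sets the effective stiffness increase, consistent with the order-of-magnitude gain reported above.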

Relevance: 10.00%

Abstract:

Nanomaterials with a hexagonally ordered atomic structure, e.g., graphene, carbon and boron nitride nanotubes, and white graphene (a monolayer of hexagonal boron nitride) possess many impressive properties. For example, the mechanical stiffness and strength of these materials are unprecedented. Also, the extraordinary electronic properties of graphene and carbon nanotubes suggest that these materials may serve as building blocks of next generation electronics. However, the properties of pristine materials are not always what is needed in applications, but careful manipulation of their atomic structure, e.g., via particle irradiation can be used to tailor the properties. On the other hand, inadvertently introduced defects can deteriorate the useful properties of these materials in radiation hostile environments, such as outer space. In this thesis, defect production via energetic particle bombardment in the aforementioned materials is investigated. The effects of ion irradiation on multi-walled carbon and boron nitride nanotubes are studied experimentally by first conducting controlled irradiation treatments of the samples using an ion accelerator and subsequently characterizing the induced changes by transmission electron microscopy and Raman spectroscopy. The usefulness of the characterization methods is critically evaluated and a damage grading scale is proposed, based on transmission electron microscopy images. Theoretical predictions are made on defect production in graphene and white graphene under particle bombardment. A stochastic model based on first-principles molecular dynamics simulations is used together with electron irradiation experiments for understanding the formation of peculiar triangular defect structures in white graphene. An extensive set of classical molecular dynamics simulations is conducted, in order to study defect production under ion irradiation in graphene and white graphene. 
In the experimental studies the response of carbon and boron nitride multi-walled nanotubes to irradiation with a wide range of ion types, energies and fluences is explored. The stabilities of these structures under ion irradiation are investigated, as well as the issue of how the mechanism of energy transfer affects the irradiation-induced damage. An irradiation fluence of 5.5x10^15 ions/cm^2 with 40 keV Ar+ ions is established to be sufficient to amorphize a multi-walled nanotube. In the case of 350 keV He+ ion irradiation, where most of the energy transfer happens through inelastic collisions between the ion and the target electrons, an irradiation fluence of 1.4x10^17 ions/cm^2 heavily damages carbon nanotubes, whereas a larger irradiation fluence of 1.2x10^18 ions/cm^2 leaves a boron nitride nanotube in much better condition, indicating that carbon nanotubes might be more susceptible to damage via electronic excitations than their boron nitride counterparts. An elevated temperature was discovered to considerably reduce the accumulated damage created by energetic ions in both carbon and boron nitride nanotubes, attributed to enhanced defect mobility and efficient recombination at high temperatures. Additionally, cobalt nanorods encapsulated inside multi-walled carbon nanotubes were observed to transform into spherical nanoparticles after ion irradiation at an elevated temperature, which can be explained by the inverse Ostwald ripening effect. The simulation studies on ion irradiation of the hexagonal monolayers yielded quantitative estimates on types and abundances of defects produced within a large range of irradiation parameters. He, Ne, Ar, Kr, Xe, and Ga ions were considered in the simulations with kinetic energies ranging from 35 eV to 10 MeV, and the role of the angle of incidence of the ions was studied in detail. A stochastic model was developed for utilizing the large amount of data produced by the molecular dynamics simulations. 
It was discovered that a high degree of selectivity over the types and abundances of defects can be achieved by carefully selecting the irradiation parameters, which can be of great use when precise patterning of graphene or white graphene using focused ion beams is planned.