891 results for one sample location test


Relevance: 30.00%

Abstract:

A high-precision measurement of the electrical resistance of nickel along its critical line, the first attempt of this kind, is reported as a function of pressure up to 47.5 kbar. Our analysis yields the critical exponents α = α′ = -0.115 ± 0.005 and the amplitude ratios |A/A′| = 1.17 ± 0.07 and |D/D′| = 1.2 ± 0.1. These values are in close agreement with those predicted by renormalization-group (RG) theory. Moreover, this investigation provides an unambiguous experimental verification of one of the key consequences of RG theory: that the critical exponents and amplitude ratios are insensitive to pressure variation in nickel, a Heisenberg ferromagnet.
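
For orientation, the critical exponents and amplitude ratios quoted above are conventionally extracted from the specific-heat-like singularity in the temperature derivative of the resistivity near the Curie point. The abstract does not spell out the fitting form, so the expression below is a commonly used ansatz, given here as an assumption rather than the authors' exact parametrization.

```latex
% Singular behaviour of d\rho/dT near the Curie temperature T_c(P)
% (standard critical ansatz; amplitude A for T > T_c, A' for T < T_c):
\frac{d\rho}{dT} \;\simeq\; \frac{A}{\alpha}\,|t|^{-\alpha} + B,
\qquad t = \frac{T - T_c(P)}{T_c(P)},
% with the analogous form (A', \alpha', B') below T_c.  RG universality
% then requires \alpha = \alpha' and a pressure-independent ratio A/A',
% which is what the values quoted above test.
```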

Relevance: 30.00%

Abstract:

Modern-day economics is increasingly biased towards believing that institutions matter for growth, an argument that has been further reinforced by the recent economic crisis. There is also a wide consensus on what these growth-promoting institutions should look like, and countries are periodically ranked depending on how their institutional structure compares with the best-practice institutions, mostly in place in the developed world. In this paper, it is argued that "non-desirable" or "second-best" institutions can be beneficial for fostering investment and thus provide a starting point for sustained growth, and that what matters is the appropriateness of institutions to the economy's distance to the frontier or current phase of development. Anecdotal evidence from Japan and South Korea is used as motivation for studying the subject, and a model is presented to describe this phenomenon. In the model, the rigidity or non-rigidity of the institutions is described by entrepreneurial selection. It is assumed that entrepreneurs are the ones taking part in the imitation and innovation of technologies, and that decisions on whether or not their projects are refinanced come from capitalists. The capitalists in turn have no entrepreneurial skills and act merely as financiers of projects. The model has two periods and two kinds of entrepreneurs: those with high skills and those with low skills. The society's choice between an imitation-based and an innovation-based strategy is modeled as the trade-off between refinancing a low-skill entrepreneur and investing in the selection of entrepreneurs, which results in a larger fraction of high-skill entrepreneurs with the ability to innovate but less total investment. Finally, a real-world example from India is presented as an initial attempt to test the theory. The data from the example are not included in this paper. It is noted that the model may be lacking explanatory power due to difficulties in testing its predictions, but that this should not be seen as a reason to disregard the theory: the solution might lie in developing better tools, not just better theories. The conclusion presented is that institutions do matter. There is no one-size-fits-all solution when it comes to institutional arrangements in different countries, and developing countries should be given space to develop their own institutional structures that cater to their specific needs.

Relevance: 30.00%

Abstract:

Background: When we are viewing natural scenes, every saccade abruptly changes both the mean luminance and the contrast structure falling on any given retinal location. Thus it would be useful if the two were independently encoded by the visual system, even when they change simultaneously. Recordings from single neurons in the cat visual system have suggested that contrast information may be quite independently represented in neural responses to simultaneous changes in contrast and luminance. Here we test to what extent this is true in human perception.

Methodology/Principal Findings: Small contrast stimuli were presented together with a 7-fold upward or downward step of mean luminance (between 185 and 1295 Td, corresponding to 14 and 98 cd/m²), either simultaneously or with various delays (50–800 ms). The perceived contrast of the target under the different conditions was measured with an adaptive staircase method. Over the contrast range 0.1–0.45, mainly subtractive attenuation was found. Perceived contrast decreased by 0.052 ± 0.021 (N = 3) when target onset was simultaneous with the luminance increase. The attenuation subsided within 400 ms, and even faster after luminance decreases, where the effect was also smaller. The main results were robust against differences in target types and the size of the field over which luminance changed.

Conclusions/Significance: Perceived contrast is attenuated mainly by a subtractive term when coincident with a luminance change. The effect is of ecologically relevant magnitude and duration; in other words, strict contrast constancy must often fail during normal human visual behaviour. Still, the relative robustness of the contrast signal is remarkable in view of the limited dynamic response range of retinal cones. We propose a conceptual model for how early retinal signalling may allow this.
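
The perceived-contrast measurements above use an adaptive staircase, but the abstract does not specify which variant. The sketch below is a generic 1-up/1-down matching staircase, with the starting contrast, step size, number of reversals and toy observer all chosen purely for illustration.

```python
def staircase_match(respond, start=0.25, step=0.02, n_reversals=8):
    """Generic 1-up/1-down adaptive staircase (illustrative only).

    `respond(probe_contrast)` should return True if the observer judges the
    probe to appear HIGHER in contrast than the target, and False otherwise.
    The procedure homes in on the probe contrast that perceptually matches
    the target.
    """
    probe = start
    reversals, last_direction = [], None
    while len(reversals) < n_reversals:
        direction = -1 if respond(probe) else +1      # too high -> step down
        if last_direction is not None and direction != last_direction:
            reversals.append(probe)                   # record reversal points
        last_direction = direction
        probe = max(0.0, probe + direction * step)
    return sum(reversals) / len(reversals)            # matched-contrast estimate

# Toy observer: a target of physical contrast 0.30 whose perceived contrast is
# attenuated by a subtractive term of 0.05 (so it appears to be about 0.25).
matched = staircase_match(lambda c: c > 0.30 - 0.05)
print(round(matched, 3))   # converges near the perceptually matched contrast
```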

Relevance: 30.00%

Abstract:

Gene expression is one of the most critical factors influencing the phenotype of a cell. As a result of several technological advances, measuring gene expression levels has become one of the most common molecular biological measurements used to study the behaviour of cells. The scientific community has produced an enormous and constantly growing collection of gene expression data from various human cells, from both healthy and pathological conditions. However, while each of these studies is informative and enlightening in its own context and research setup, diverging methods and terminologies make it very challenging to integrate existing gene expression data into a more comprehensive view of human transcriptome function. On the other hand, bioinformatic science advances only through data integration and synthesis. The aim of this study was to develop biological and mathematical methods to overcome these challenges, to construct an integrated database of the human transcriptome, and to demonstrate its use. The methods developed in this study can be divided into two distinct parts. First, the biological and medical annotation of the existing gene expression measurements needed to be encoded with systematic vocabularies. As no single existing biomedical ontology or vocabulary was suitable for this purpose, new annotation terminology was developed as a part of this work. The second part was to develop mathematical methods for correcting the noise and the systematic differences and errors in the data caused by the various array generations. Additionally, there was a need to develop suitable computational methods for sample collection and archiving, unique sample identification, database structures, data retrieval and visualization. Bioinformatic methods were developed to analyze gene expression levels and putative functional associations of human genes using the integrated gene expression data. Also, a method was developed to interpret individual gene expression profiles across all the healthy and pathological tissues of the reference database. As a result of this work, 9783 human gene expression samples measured with Affymetrix microarrays were integrated to form a unique human transcriptome resource, GeneSapiens. This makes it possible to analyse the expression levels of 17330 genes across 175 types of healthy and pathological human tissues. Applying this resource to interpret individual gene expression measurements allowed identification of the tissue of origin with 92.0% accuracy among 44 healthy tissue types. A systematic analysis of the transcriptional activity levels of 459 kinase genes was performed across 44 healthy and 55 pathological tissue types, and a genome-wide analysis of kinase gene co-expression networks was carried out. This analysis revealed biologically and medically interesting data on putative kinase gene functions in health and disease. Finally, we developed a method for the alignment of gene expression profiles (AGEP) that analyses individual patient samples to pinpoint gene- and pathway-specific changes in the test sample relative to the reference transcriptome database. We also showed how large-scale gene expression data resources can be used to quantitatively characterize changes in the transcriptomic program of differentiating stem cells. Taken together, these studies demonstrate the power of systematic bioinformatic analyses to infer biological and medical insights from existing published datasets and to facilitate the interpretation of new molecular profiling data from individual patients.
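
The tissue-of-origin result above is obtained by comparing an individual expression profile against the reference database, but the abstract does not describe the classifier. The sketch below is a deliberately simple correlation-based nearest-centroid assignment, with the reference matrix, gene count and tissue labels all hypothetical; it illustrates the kind of comparison involved, not the actual GeneSapiens or AGEP algorithms.

```python
import numpy as np

def tissue_of_origin(profile, reference_means, tissue_names):
    """Assign a single expression profile (length = number of genes) to the
    reference tissue whose mean profile it correlates with best.  Purely
    illustrative; the actual GeneSapiens/AGEP methodology is not described
    in the abstract."""
    corrs = [np.corrcoef(profile, ref)[0, 1] for ref in reference_means]
    best = int(np.argmax(corrs))
    return tissue_names[best], corrs[best]

# Toy reference: 3 hypothetical tissue types x 1000 genes (log-scale values).
rng = np.random.default_rng(0)
reference = rng.normal(8.0, 2.0, size=(3, 1000))
tissues = ["tissue_A", "tissue_B", "tissue_C"]

# A new sample resembling tissue_B, plus measurement noise.
sample = reference[1] + rng.normal(0.0, 0.5, size=1000)
print(tissue_of_origin(sample, reference, tissues))   # -> ('tissue_B', ~0.97)
```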

Relevance: 30.00%

Abstract:

Vegetation maps and bioclimatic zone classifications communicate the vegetation of an area and are used to explain how the environment regulates the occurrence of plants on large scales. Many practices and methods for dividing the world's vegetation into smaller entities have been presented. Climatic parameters, floristic characteristics, or edaphic features have been relied upon as decisive factors, and plant species have been used as indicators for vegetation types or zones. Systems depicting vegetation patterns that mainly reflect climatic variation are termed 'bioclimatic' vegetation maps. Based on these, it has been judged logical to deduce that plants moved between corresponding bioclimatic areas should thrive in the target location, whereas plants moved from a different zone should languish. This principle is routinely applied in forestry and horticulture, but actual tests of the validity of bioclimatic maps in this sense seem scanty. In this study I tested the Finnish bioclimatic vegetation zone system (BZS). Relying on the Kumpula collection of the Helsinki University Botanic Garden, which according to the BZS is situated at the northern limit of the hemiboreal zone, I aimed to test how the plants' survival depends on their provenance. My expectation was that plants from the hemiboreal or southern boreal zones should do best in Kumpula, whereas plants from more southern and more northern zones should show progressively lower survival probabilities. I estimated the probability of survival using logistic regression models and collection-database information on plant accessions of known wild origin grown in Kumpula since the mid-1990s. The total number of accessions included in the analyses was 494. Because of problems with some accessions, I chose to separately analyse a subset of the complete data comprising 379 accessions. I also analysed different growth forms separately in order to identify differences in the probability of survival due to different life strategies. In most analyses, accessions of temperate and hemiarctic origin showed lower survival probability than those originating from any of the boreal subzones, which among themselves exhibited rather evenly high probabilities. Exceptionally mild and wet winters during the study period may have killed off hemiarctic plants, while some winters may have been too harsh for temperate accessions. Trees behaved differently: they showed an almost steadily increasing survival probability from temperate to northern boreal origins. Various factors that could not be controlled for may have affected the results, some of which were difficult to interpret. This was the case in particular with herbs, for which the reliability of the analysis suffered because of difficulties in managing their curatorial data. In all, the results gave some support to the BZS, and especially to its hierarchical zonation. However, I question the validity of the formulation of the hypothesis I tested, since it may not be entirely justified by the BZS, which was designed for intercontinental comparison of vegetation zones but not specifically for transcontinental provenance trials. I conclude that botanic gardens should pay due attention to information management and curatorial practices to ensure the widest possible applicability of their plant collections.
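
The survival estimates above come from logistic regression models, whose exact terms the abstract does not give. The sketch below fits a deliberately minimal model with the bioclimatic zone of wild origin as the only predictor, on invented data; the zone labels and survival rates are hypothetical and serve only to show the shape of such an analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical accession records: bioclimatic zone of wild origin and whether
# the accession is still alive in the collection (1 = survived).  The zone
# labels and true survival rates are invented for illustration only.
zones = ["temperate", "hemiboreal", "southern_boreal",
         "middle_boreal", "northern_boreal", "hemiarctic"]
true_p = {"temperate": 0.55, "hemiboreal": 0.85, "southern_boreal": 0.85,
          "middle_boreal": 0.80, "northern_boreal": 0.75, "hemiarctic": 0.50}
records = [{"zone": z, "survived": int(rng.random() < true_p[z])}
           for z in zones for _ in range(80)]
data = pd.DataFrame(records)

# Logistic regression of survival on provenance zone (categorical predictor).
fit = smf.logit("survived ~ C(zone)", data=data).fit(disp=False)

# Model-based survival probability for each zone of origin.
pred = fit.predict(pd.DataFrame({"zone": zones}))
print(dict(zip(zones, pred.round(2))))
```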

Relevance: 30.00%

Abstract:

Questions of the small size of non-industrial private forest (NIPF) holdings in Finland are considered, and factors affecting their partitioning are analyzed. This work arises out of Finnish forest policy statements in which the small average size of holdings has been seen to have a negative influence on the economics of forestry. A survey of the literature indicates that the size of holdings is an important factor determining the costs of logging and silvicultural operations, while its influence on the timber supply is slight. The empirical data are based on a sample of 314 holdings collected by interviewing forest owners in the years 1980-86. In 1990-91 the same holdings were resurveyed by means of a postal inquiry and partly by interviewing forest owners. The principal objective in compiling the data is to assist in quantifying ownership factors that influence partitioning among different kinds of NIPF holdings. Thus the mechanisms of partitioning were described, and a maximum-likelihood logistic regression model was constructed using seven independent holding and ownership variables. One out of four holdings had undergone partitioning in conjunction with a change in ownership: one fifth among family-owned holdings and nearly a half among jointly owned holdings. The results of the logistic regression model indicate, for instance, that the odds of partitioning are about three times greater for jointly owned holdings than for family-owned ones. The probabilities of partitioning were also estimated, and the impact of the independent dichotomous variables on the probability of partitioning ranged between 0.02 and 0.10. The low value of the Hosmer-Lemeshow test statistic indicates a good fit of the model, and the rate of correct classification was estimated to be 88 per cent with a cutoff point of 0.5. The average size of holdings undergoing ownership changes decreased from 29.9 ha to 28.7 ha over the approximate interval 1983-90. In addition, the transition probability matrix showed that the shift towards smaller size categories mostly involved holdings in the small size categories of less than 20 ha. The results of the study can be used in assessing the effects of the small size of holdings on forestry, and when the aim is to influence partitioning through forest or rural policy.
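
As a small worked illustration of the odds-ratio result above: the sketch below shows how a logistic-regression coefficient of ln 3 for joint ownership translates into roughly three-fold odds of partitioning, using the one-fifth partitioning rate quoted for family-owned holdings as the baseline. The coefficient value itself is assumed for illustration, not taken from the fitted model.

```python
import numpy as np

# Illustrative translation of a logit coefficient into the quantities above.
beta_joint = np.log(3.0)            # coefficient implying an odds ratio of ~3
odds_ratio = np.exp(beta_joint)     # odds of partitioning, joint vs. family

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Baseline: one fifth of family-owned holdings underwent partitioning.
p_family = 0.20
logit_family = np.log(p_family / (1.0 - p_family))
p_joint = inv_logit(logit_family + beta_joint)

print(round(odds_ratio, 1), round(p_joint, 2))   # -> 3.0, 0.43 (close to the
                                                 #    "nearly a half" quoted above)
```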

Relevance: 30.00%

Abstract:

To enhance the utilization of wood, sawmills are forced to place more emphasis on planning in order to master the whole production chain from the forest to the end product. One significant obstacle to integrating the forest-sawmill-market production chain is the lack of appropriate information about forest stands. Since the wood procurement point of view has been almost totally disregarded in forest planning systems, there has been a great need to develop an easy and efficient pre-harvest measurement method allowing separate measurement of stands prior to harvesting. The main purpose of this study was to develop a measurement method for pine stands which forest managers could use to describe the properties of the standing trees for sawing production planning. Study materials were collected from ten Scots pine (Pinus sylvestris) stands located in North Häme and South Pohjanmaa, in southern Finland. The data comprise test-sawing data on 314 pine stems, dbh and height measurements of all trees, measurements of the quality parameters of pine sawlog stems in all ten study stands, and the locations of all trees in six stands. The study was divided into four sub-studies, which deal with pine quality prediction, construction of diameter and dead branch height distributions, sampling designs, and the application of height and crown height models. The final proposal for the pre-harvest measurement method is a synthesis of the individual sub-studies. The quality analysis resulted in choosing dbh, the distance from stump height to the first dead branch (dead branch height), crown height and tree height as the most appropriate quality characteristics of Scots pine. Dbh and dead branch height are measured from each pine sample tree, while height and crown height are derived from the dbh measurements with the aid of mixed height and crown height models. The pine and spruce diameter distributions as well as the dead branch height distribution are most effectively predicted by the kernel function. Roughly 25 sample trees seem to be appropriate in pure pine stands. In mixed stands the number of sample trees needs to be increased in proportion to the intensity of pines in order to attain the same level of accuracy.
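
The abstract reports that diameter and dead-branch-height distributions are best predicted with a kernel function, but it does not give the kernel or bandwidth choice. The sketch below simply applies a Gaussian kernel density estimate to a hypothetical set of roughly 25 sampled dbh values, to illustrate how a stand-level diameter distribution can be formed from the sample trees.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical dbh measurements (cm) from ~25 sample trees in one pine stand.
rng = np.random.default_rng(42)
dbh_sample = rng.normal(loc=24.0, scale=5.0, size=25).clip(min=8.0)

# Gaussian kernel density estimate of the stand's diameter distribution
# (bandwidth chosen automatically by Scott's rule).
kde = gaussian_kde(dbh_sample)

# The estimated distribution can be evaluated on a dbh grid ...
grid = np.linspace(5.0, 45.0, 81)
density = kde(grid)
print(round(float(grid[np.argmax(density)]), 1))      # approximate modal dbh

# ... or integrated, e.g. the share of stems above a 20 cm sawlog threshold.
print(round(float(kde.integrate_box_1d(20.0, np.inf)), 2))
```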

Relevance: 30.00%

Abstract:

For resonant column tests conducted in the flexure mode of excitation, a new methodology has been proposed for finding the elastic modulus and associated axial strain of a cylindrical sample. The proposed method is an improvement over the existing one, and it does not require the assumption of either the mode shape or a zero bending-moment condition at the top of the sample. A stepwise procedure is given for performing the necessary calculations. From a number of resonant column experiments on aluminum bars and dry sand samples, it has been observed that, compared with the method available in the literature, the present method provides approximately (i) 5.9%-7.3% higher values of the elastic modulus and (ii) 6.5%-7.3% higher values of the associated axial strains.

Relevance: 30.00%

Abstract:

A local algorithm with local horizon r is a distributed algorithm that runs in r synchronous communication rounds; here r is a constant that does not depend on the size of the network. As a consequence, the output of a node in a local algorithm only depends on the input within r hops from the node. We give tight bounds on the local horizon for a class of local algorithms for combinatorial problems on unit-disk graphs (UDGs). Most of our bounds are due to a refined analysis of existing approaches, while others are obtained by suggesting new algorithms. The algorithms we consider are based on network decompositions guided by a rectangular tiling of the plane. The algorithms are applied to matching, independent set, graph colouring, vertex cover, and dominating set. We also study local algorithms on quasi-UDGs, which are a popular generalisation of UDGs, aimed at more realistic modelling of communication between the network nodes. Analysing the local algorithms on quasi-UDGs allows one to assume that the nodes know their coordinates only approximately, up to an additive error. Despite the localisation error, the quality of the solution to problems on quasi-UDGs remains the same as for the case of UDGs with perfect location awareness. We analyse the increase in the local horizon that comes along with moving from UDGs to quasi-UDGs.
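
The algorithms above rely on network decompositions guided by a rectangular tiling of the plane; the abstract does not give the construction, so the sketch below shows only the basic geometric idea under assumed parameters (unit communication radius, tile side 1, a k x k class scheme): each node maps its, possibly perturbed, coordinates to a tile, and tiles are grouped into a constant number of classes so that same-class tiles are far apart and can be handled independently within a constant local horizon.

```python
import math

TILE = 1.0   # tile side length; assumed equal to the unit-disk radius here

def tile_of(x, y):
    """Map (possibly perturbed) coordinates to an integer tile index; on
    quasi-UDGs the coordinates may carry an additive localisation error."""
    return (math.floor(x / TILE), math.floor(y / TILE))

def tile_class(tile, k=3):
    """Assign one of k*k classes so that any two distinct tiles of the same
    class are at least k tiles apart along every axis on which they differ."""
    i, j = tile
    return (i % k, j % k)

# Each node decides, from its own coordinates alone, in which of the k*k
# scheduling slots its tile runs the (sequential) local subroutine.
node_xy = (3.7, 5.2)
t = tile_of(*node_xy)
print(t, tile_class(t))    # -> (3, 5) (0, 2)
```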

Relevance: 30.00%

Abstract:

The radiative impact of aerosols is one of the largest sources of uncertainty in estimating anthropogenic climate perturbations. Here we have used independent ground-based radiometer measurements made simultaneously with comprehensive measurements of aerosol microphysical and optical properties at a highly populated urban site, Bangalore (13.02 degrees N, 77.6 degrees E) in southern India, during a dedicated campaign in the winter of 2004 and the summer and pre-monsoon season of 2005. We have also used longer-term measurements carried out at this site to present the general features of aerosols over this region. The aerosol radiative impact assessments were made from direct measurements of ground-reaching irradiance as well as by incorporating measured aerosol properties into a radiative transfer model. Large discrepancies were observed between the measured and modeled (using radiative transfer models that employed the measured aerosol properties) radiative impacts. It appears that the presence of elevated aerosol layers and/or an inappropriate description of the aerosol state of mixing is responsible for the discrepancies. On a monthly scale, the reduction of surface irradiance due to the presence of aerosols (estimated using radiative flux measurements) varies from 30 to 65 W m⁻². The lowest values of the surface radiative impact were observed during June, when there is a large reduction in aerosol loading as a consequence of monsoon rainfall. A large increase in the aerosol-induced surface radiative impact was observed from winter to summer. Our investigations reiterate the inadequacy of aerosol measurements at the surface alone and the importance of accurately representing column properties (using vertical profiles) in order to assess aerosol-induced climate changes.

Relevance: 30.00%

Abstract:

The no-hiding theorem says that if any physical process leads to bleaching of quantum information from the original system, then it must reside in the rest of the Universe with no information being hidden in the correlation between these two subsystems. Here, we report an experimental test of the no-hiding theorem with the technique of nuclear magnetic resonance. We use the quantum state randomization of a qubit as one example of the bleaching process and show that the missing information can be fully recovered up to local unitary transformations in the ancilla qubits.
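
As a numerical illustration of the bleaching process mentioned above, the sketch below averages a qubit state over the four Pauli operators, one standard way to realise state randomization (the experiment's actual randomization circuit is not specified in the abstract), and confirms that the output is the maximally mixed state whatever the input, i.e. no information about the original state remains in the system itself.

```python
import numpy as np

# Pauli operators (including identity) used for single-qubit randomization.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def randomize(rho):
    """Average the state over the Pauli group: rho -> (1/4) sum_k P_k rho P_k."""
    return sum(P @ rho @ P.conj().T for P in (I, X, Y, Z)) / 4.0

# An arbitrary pure qubit state |psi> = cos(t)|0> + e^{i phi} sin(t)|1>.
t, phi = 0.7, 1.3
psi = np.array([np.cos(t), np.exp(1j * phi) * np.sin(t)])
rho = np.outer(psi, psi.conj())

print(np.allclose(randomize(rho), I / 2))   # True: output is maximally mixed
```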

Relevance: 30.00%

Abstract:

An attempt has been made to study the effect of time and test procedure on the behaviour of partial discharge (PD) pulses causing failure of an oil-pressboard system under power-frequency voltages, using circular disc-shaped samples and uniform-field electrodes. Weibull statistics have been used to handle the large amount of PD data. The PD phenomenon has been found to be stress- and time-dependent. On the basis of stress level, three different regions are identified, and in one of them the rate of deterioration of the sample is at a maximum. The work presents some interesting features of the Weibull parameters as related to the condition of the insulation studied, in addition to its usual PD characteristics.
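
The PD data are analysed with Weibull statistics, but the abstract does not state which quantities are fitted or how. The sketch below shows a generic two-parameter Weibull fit to a hypothetical set of PD pulse magnitudes, the kind of step that would precede relating the shape and scale parameters to the condition of the insulation.

```python
from scipy.stats import weibull_min

# Hypothetical PD pulse magnitudes (pC) recorded at one stress level.
pd_magnitudes = weibull_min.rvs(c=1.8, scale=120.0, size=500, random_state=7)

# Two-parameter Weibull fit: shape (beta) and scale (alpha), location fixed at 0.
beta, loc, alpha = weibull_min.fit(pd_magnitudes, floc=0.0)
print(round(beta, 2), round(alpha, 1))        # roughly recovers 1.8 and 120

# Fitted probability that a pulse exceeds 300 pC at this stress level.
print(round(float(weibull_min.sf(300.0, beta, loc=0.0, scale=alpha)), 4))
```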

Relevance: 30.00%

Abstract:

This paper presents the design and development of a comprehensive digital protection scheme for application in 25 kV a.c. railway traction systems. The scheme provides distance protection, detection of wrong phase coupling in both the lagging and leading directions, a high-set instantaneous trip, and detection of PT fuse failure. Provision is also made to include fault location and disturbance recording. The digital relaying scheme has been tried on two types of hardware platform, one based on PC/AT hardware and the other a custom-designed standalone 16-bit microcontroller-based card. Compared to the existing scheme, the operating time is around one cycle, and the relaying algorithm has been optimised to minimise the number of computations. The prototype has been rigorously tested in the laboratory using a specially designed PC-based relay test bench, and the results are highly satisfactory.
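
Distance protection is listed among the relay functions above, but the abstract gives no algorithmic detail. The sketch below is a generic illustration of the core step in any distance relay, computing the apparent impedance from voltage and current phasors and testing it against a reach setting; the numerical settings are hypothetical and no claim is made that this mirrors the paper's optimised algorithm.

```python
import cmath

# Hypothetical relay settings for one distance zone.
Z_REACH = complex(8.0, 40.0)        # zone-1 reach impedance (ohms, R + jX)

def apparent_impedance(v_phasor, i_phasor):
    """Apparent impedance seen by the relay from fundamental-frequency phasors."""
    return v_phasor / i_phasor

def zone1_trip(z_apparent, z_reach=Z_REACH):
    """Simple mho-style check: trip if the apparent impedance lies inside the
    circle through the origin with diameter z_reach (one common distance
    characteristic; assumed here for illustration)."""
    centre = z_reach / 2.0
    return abs(z_apparent - centre) <= abs(centre)

# Example: catenary voltage sagging during a fault, with a large lagging current.
v = cmath.rect(12000.0, cmath.pi / 36)      # 12 kV at 5 degrees
i = cmath.rect(400.0, -cmath.pi / 3)        # 400 A at -60 degrees
z = apparent_impedance(v, i)
print(round(abs(z), 1), zone1_trip(z))      # -> 30.0 ohms, True (inside zone 1)
```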

Relevance: 30.00%

Abstract:

We present multifrequency Very Large Array (VLA) observations of two giant quasars, 0437-244 and 1025-229, from the Molonglo Complete Sample. These sources have well-defined FR II radio structure, possible one-sided jets, no significant depolarization between 1365 and 4935 MHz, and low rotation measure (|RM| < 20 rad m⁻²). Giant sources are defined to be those with an overall projected size greater than or equal to 1 Mpc. We have compiled a sample of about 50 known giant radio sources from the literature and have compared some of their properties with a complete sample of 3CR radio sources of smaller sizes, to investigate the evolution of giant sources and to test their consistency with the unified scheme for radio galaxies and quasars. We find an inverse correlation between the degree of core prominence and the total radio luminosity, and show that the giant radio sources have core strengths similar to those of smaller sources of similar total luminosity. Hence their large sizes are unlikely to be caused by stronger nuclear activity. The degree of collinearity of the giant sources is also similar to that of the sample of smaller sources. The luminosity-size diagram shows that the giant sources are less luminous than our sample of smaller sized 3CR sources, consistent with evolutionary scenarios in which the giants have evolved from the smaller sources, losing energy as they expand to these large dimensions. For the smaller sources, radiative losses resulting from synchrotron radiation are more significant, while for the giant sources the equipartition magnetic fields are smaller and inverse Compton losses against the microwave background radiation are the dominant process. The radio properties of the giant radio galaxies and quasars are consistent with the unified scheme.

Relevance: 30.00%

Abstract:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f(s) and f(g), and the problem reduces to one of determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. Extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
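
For the second approach, each node broadcasts one bit obtained by a threshold (two-level) quantization of its reading and a fusion centre combines the bits. The threshold, noise model and fusion rule are not given in the abstract, so the sketch below assumes Gaussian clutter, a per-node threshold test and a simple k-out-of-n counting rule at the fusion centre, purely to illustrate the structure of the scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

def sensor_bit(reading, threshold=1.0):
    """Two-level quantization at a node: broadcast 1 if the reading exceeds
    the threshold (assumed value), else 0."""
    return int(reading > threshold)

def fusion_decision(bits, k=3):
    """Counting (k-out-of-n) fusion rule: declare 'intruder' when at least k
    of the local bits are 1 (an assumed rule, not the paper's)."""
    return sum(bits) >= k

# Toy scenario: 10 nodes; an intruder raises the mean reading of the 4 nodes
# in its vicinity (sensing treated as a local phenomenon, as in the abstract).
n = 10
clutter = rng.normal(0.0, 0.5, size=n)          # clutter-only readings
intruder_effect = np.zeros(n)
intruder_effect[:4] = 1.5                        # nearby nodes see the intruder
readings = clutter + intruder_effect

bits = [sensor_bit(r) for r in readings]
print(bits, fusion_decision(bits))               # likely -> intruder detected
```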