960 results for Process mean shifts
Abstract:
The main purpose of this study was to examine the effect of intention on the sleep onset process from an electrophysiological point of view. To test this, two nap conditions, the Multiple Sleep Latency Test (MSLT) and the Repeated Test of Sustained Wakefulness (RTSW), were used to compare intentional and inadvertent sleep onset. Sixteen female participants (aged 19-25) spent two non-consecutive nights in the sleep lab; however, due to physical and technical difficulties only 8 participants produced complete sets of data for analysis. Each night participants were given six nap opportunities. For three of these naps they were instructed to fall asleep (MSLT); for the remaining three naps they were to attempt to remain awake (RTSW). These two types of nap opportunities represented the conditions of intentional (MSLT) and inadvertent (RTSW) sleep onset. Several other sleepiness, performance, arousal and questionnaire measures were obtained to evaluate and/or control for demand characteristics, subjective effort and mental activity during the nap tests. The nap opportunities were scored using a new 9-stage scoring system developed by Hori et al. (1994). Power spectral analyses (FFT) were also performed on the sleep onset data provided by the two nap conditions. Longer sleep onset latencies (approximately 1.25 minutes) were observed in the RTSW than in the MSLT. A higher incidence of structured mental activity was reported in the RTSW and may have been reflected in higher Beta power during the RTSW. The descent into sleep was more ragged in the RTSW, as evidenced by an increased number of shifts towards higher arousal as measured using the Hori 9-stage sleep scoring method. The sleep onset process also appears to be altered by the intention to remain awake, at least until the point of initial Stage 2 sleep (i.e. the first appearance of spindle activity). When examining only the final 4.3 minutes of the sleep onset process (ending with spindle activity), there were significant interactions between the type of nap and the time until sleep onset for Theta, Alpha and Beta power. That is to say, the pattern of spectral power measurements in these bands differed across time as a function of the type of nap. The effect of intention, however, was quite small (η² < .04) when compared to the variance which could be accounted for by the passage of time (η² = .10 to .59). These data indicate that intention alone cannot greatly extend voluntary wakefulness if a person is sleepy. This has serious implications for people who may be required to perform dangerous tasks while sleepy, particularly for people who are in a situation that does not allow them the opportunity to engage in behavioural strategies in order to maintain their arousal.
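The following is a minimal sketch of how band power of the kind reported above (Theta, Alpha, Beta) can be estimated from an EEG trace with an FFT-based (Welch) power spectral estimate. The sampling rate, band limits and synthetic signal are illustrative assumptions, not details taken from the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

FS = 256                                                        # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (16, 32)}   # assumed band limits (Hz)

def band_power(eeg, fs=FS, bands=BANDS):
    """Absolute power in each frequency band of a 1-D EEG signal (Welch PSD)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)              # 4-second segments
    power = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        power[name] = float(trapezoid(psd[mask], freqs[mask]))  # integrate PSD over band
    return power

# Example on 30 s of synthetic data standing in for one scoring epoch
rng = np.random.default_rng(0)
print(band_power(rng.standard_normal(30 * FS)))
```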
Abstract:
Teacher reflective practice is described as an effective method for engaging teachers in improving their own professional learning. Yet some teachers do not understand how to engage effectively in the reflective process, or prefer not to formalize the process by writing a reflective journal as taught in most teacher education programs. Developing reflective skills through the process of photography was investigated in this study as a strategy to allow enhanced teacher reflection for professional and personal growth. The process of photography is understood as the mindful act of photographing rather than a focus on the final product, the image. For this study, 3 practicing educators engaged in photographic exercises as a reflective process. Data sources included transcribed interviews, participant journal reflections, and sketchbook artifacts, as well as the researcher's personal journal notes. Findings indicated that, through the photographic process, (a) teacher participants developed new and individual strategies for professional learning; and (b) teacher participants experienced shifts in the way they conceptualized their personal worldviews.
Abstract:
Computational Biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix. Rows in the matrix represent genes and columns represent experimental conditions. Experimental conditions can be different tissue types or time points. Entries in the gene expression matrix are real values. Through the analysis of gene expression data it is possible to determine the behavioral patterns of genes, such as the similarity of their behavior, the nature of their interaction, their respective contribution to the same pathways, and so on. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research. They are used in the medical domain to aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify various patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model whereas biclustering is a local model. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards developing approaches which are capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix. The rows and columns in the submatrix need not be contiguous as in the gene expression data matrix. Biclusters are not disjoint. Computation of biclusters is costly because one has to consider all combinations of columns and rows in order to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the number of genes and conditions respectively. Usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters. The objective here is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly coregulated submatrices called seeds. These seeds are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. Constraint-based algorithms use one or more of the various constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all these algorithms, and are validated using the Gene Ontology database. All these algorithms are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than the row variance obtained by any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
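A brief illustration of the mean squared residue (MSR) score used to evaluate a candidate bicluster, assuming the standard Cheng-and-Church-style definition (row means, column means and overall mean of the submatrix); the toy matrix, index choices and the printed row variance are illustrative only, not the thesis's algorithms or datasets.

```python
import numpy as np

def mean_squared_residue(expr, rows, cols):
    """MSR of the submatrix expr[rows][:, cols]; 0 for a perfectly coherent (additive) bicluster."""
    sub = expr[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)   # a_iJ
    col_means = sub.mean(axis=0, keepdims=True)   # a_Ij
    overall = sub.mean()                          # a_IJ
    residue = sub - row_means - col_means + overall
    return float((residue ** 2).mean())

# Toy usage: an additive (coherent) submatrix has MSR == 0
expr = np.add.outer([0.0, 1.0, 2.0], [10.0, 20.0, 30.0])
print(mean_squared_residue(expr, rows=[0, 1, 2], cols=[0, 1, 2]))   # -> 0.0
# Row variance is also of interest, since high-variance biclusters are preferred
print(expr.var(axis=1).mean())
```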
Abstract:
Information is knowledge, facts or data. For the purpose of enabling users to assimilate information, it should be repackaged. Knowledge becomes information when it is externalized, i.e. put into the process of communication. The effectiveness of communication technology depends on how well it provides its clients with information rapidly, economically and authentically. A large number of ICT-enabled services, including the OPAC, e-resources, etc., are available in the university library. Studies have been done to find the impact of ICT on different sections of the CUSAT library by observing the activities of different sections, through discussions with colleagues and visitors, and by analyzing the entries in the library records. The results of the studies are presented here in the form of a paper.
Abstract:
The frequency of persistent atmospheric blocking events in the 40-yr ECMWF Re-Analysis (ERA-40) is compared with the blocking frequency produced by a simple first-order Markov model designed to predict the time evolution of a blocking index [defined by the meridional contrast of potential temperature on the 2-PVU surface (1 PVU ≡ 1 × 10⁻⁶ K m² kg⁻¹ s⁻¹)]. With the observed spatial coherence built into the model, it is able to reproduce the main regions of blocking occurrence and the frequencies of sector blocking very well. This underlines the importance of the climatological background flow in determining the locations of high blocking occurrence as being the regions where the mean midlatitude meridional potential vorticity (PV) gradient is weak. However, when only persistent blocking episodes are considered, the model is unable to simulate the observed frequencies. It is proposed that this persistence beyond that given by a red noise model is due to the self-sustaining nature of the blocking phenomenon.
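For readers unfamiliar with the reference model, the sketch below shows a first-order Markov (red noise) process of the kind used as the null model for a blocking index at a single location; the lag-1 autocorrelation, blocking threshold and 5-day persistence criterion are illustrative assumptions, not the ERA-40 configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 40 * 365          # length of synthetic daily record
r1 = 0.8                   # assumed lag-1 autocorrelation of the index
sigma = 1.0                # assumed standard deviation of the index

# AR(1): x_t = r1 * x_{t-1} + e_t, with e_t scaled to preserve the variance
x = np.zeros(n_days)
eps = rng.normal(0.0, sigma * np.sqrt(1.0 - r1 ** 2), n_days)
for t in range(1, n_days):
    x[t] = r1 * x[t - 1] + eps[t]

blocked = x > 1.0          # "blocked" when the index exceeds an assumed threshold
run, persistent = 0, 0
for b in blocked:
    run = run + 1 if b else 0
    if run == 5:           # count each run once, when it first reaches 5 days
        persistent += 1
print("sector blocking frequency:", blocked.mean())
print("persistent (>=5 day) episodes:", persistent)
```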
Abstract:
The formulation of a new process-based crop model, the general large-area model (GLAM) for annual crops, is presented. The model has been designed to operate on spatial scales commensurate with those of global and regional climate models. It aims to simulate the impact of climate on crop yield. Procedures for model parameter determination and optimisation are described, and demonstrated for the prediction of groundnut (i.e. peanut; Arachis hypogaea L.) yields across India for the period 1966-1989. Optimal parameters (e.g. extinction coefficient, transpiration efficiency, rate of change of harvest index) were stable over space and time, provided the estimate of the yield technology trend was based on the full 24-year period. The model has two location-specific parameters, the planting date and the yield gap parameter. The latter varies spatially and is determined by calibration. The optimal value varies slightly when different input data are used. The model was tested using a historical data set on a 2.5° × 2.5° grid to simulate yields. Three sites are examined in detail: grid cells from Gujarat in the west, Andhra Pradesh towards the south, and Uttar Pradesh in the north. Agreement between observed and modelled yield was variable, with correlation coefficients of 0.74, 0.42 and 0, respectively. Skill was highest where the climate signal was greatest, and correlations were comparable to or greater than correlations with seasonal mean rainfall. Yields from all 35 cells were aggregated to simulate all-India yield. The correlation coefficient between observed and simulated yields was 0.76, and the root mean square error was 8.4% of the mean yield. The model can be easily extended to any annual crop for the investigation of the impacts of climate variability (or change) on crop yield over large areas. (C) 2004 Elsevier B.V. All rights reserved.
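As a small illustration of the skill measures quoted above, the snippet below computes a correlation coefficient between observed and simulated yields and the root mean square error expressed as a percentage of the mean observed yield; the numbers are placeholders, not the Indian groundnut data.

```python
import numpy as np

observed = np.array([900.0, 950.0, 870.0, 1010.0, 980.0])    # kg/ha, illustrative values
simulated = np.array([880.0, 990.0, 850.0, 1000.0, 940.0])

r = np.corrcoef(observed, simulated)[0, 1]
rmse_pct = 100.0 * np.sqrt(np.mean((simulated - observed) ** 2)) / observed.mean()
print(f"correlation: {r:.2f}, RMSE: {rmse_pct:.1f}% of mean yield")
```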
Abstract:
The commercial process in construction projects is an expensive and highly variable overhead. Collaborative working practices carry many benefits, which are widely disseminated, but little information is available about their costs. Transaction Cost Economics is a theoretical framework that seeks explanations for why there are firms and how the boundaries of firms are defined through the “make-or-buy” decision. However, it is not a framework that offers explanations for the relative costs of procuring construction projects in different ways. The idea that different methods of procurement will have characteristically different costs is tested by way of a survey. The relevance of transaction cost economics to the study of commercial costs in procurement is doubtful. The survey shows that collaborative working methods cost neither more nor less than traditional methods. But the benefits of collaboration mean that there is a great deal of enthusiasm for collaboration rather than competition.
OFDM joint data detection and phase noise cancellation based on minimum mean square prediction error
Abstract:
This paper proposes a new iterative algorithm for orthogonal frequency division multiplexing (OFDM) joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the relatively less studied problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Specifically, we apply a hard-decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the PHN, and finally a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are also given to verify the proposed algorithm. (C) 2008 Elsevier B.V. All rights reserved.
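The paper's specific recursion is not reproduced here; the sketch below only illustrates the generic Givens rotation building block that such low-complexity, numerically robust least-squares updates rely on (the matrix values are arbitrary).

```python
import numpy as np

def givens(a, b):
    """Return c, s such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

# Zero the (1, 0) entry of a small matrix by rotating rows 0 and 1
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s],
              [-s, c]])
print(G @ A)   # the (1, 0) entry is now (numerically) zero
```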
Abstract:
This paper draws from a wider research programme in the UK undertaken for the Investment Property Forum examining liquidity in commercial property. One aspect of liquidity is the process by which transactions occur, including both how properties are selected for sale and the time taken to transact. The paper analyses data from three organisations: a property company, a major financial institution and an asset management company, formerly a major public sector pension fund. The data cover three market states and include sales completed in 1995, 2000 and 2002 in the UK. The research interviewed key individuals within the three organisations to identify any common patterns of activity within the sale process and also identified the timing of 187 actual transactions from inception of the sale to completion. The research developed a taxonomy of the transaction process. Interviews with vendors indicated that decisions to sell were a product of a combination of portfolio, specific property and market based issues. Properties were generally not kept in a "readiness for sale" state. The time from the first decision to sell the actual property to completion had a mean of 298 days and a median of 190 days. It is concluded that this study may underestimate the true length of the time to transact for two reasons. Firstly, the pre-marketing period is rarely recorded in transaction files. Secondly, and more fundamentally, studies of sold properties may contain selection bias. The research indicated that vendors tended to sell properties which it was perceived could be sold at a 'fair' price in a reasonable period of time.
Abstract:
This paper describes the novel use of cluster analysis in the field of industrial process control. The severe multivariable process problems encountered in manufacturing have often led to machine shutdowns, where the need for corrective actions arises in order to resume operation. Production faults which are caused by processes running in less efficient regions may be prevented or diagnosed using reasoning based on cluster analysis. Indeed, the internal complexity of production machinery may be depicted in clusters of multidimensional data points which characterise the manufacturing process. The application of a Mean-Tracking cluster algorithm (developed in Reading) to field data acquired from high-speed machinery will be discussed. The objective of such an application is to illustrate how machine behaviour can be studied, in particular how regions of erroneous and stable running behaviour can be identified.
Abstract:
We bridge the properties of the regular triangular, square, and hexagonal honeycomb Voronoi tessellations of the plane to the Poisson-Voronoi case, thus analyzing in a common framework symmetry-breaking processes and the approach to uniform random distributions of tessellation-generating points. We resort to ensemble simulations of tessellations generated by points whose regular positions are perturbed through a Gaussian noise, whose variance is given by the parameter α² times the square of the inverse of the average density of points. We analyze the number of sides, the area, and the perimeter of the Voronoi cells. For all values α > 0, hexagons constitute the most common class of cells, and 2-parameter gamma distributions provide an efficient description of the statistical properties of the analyzed geometrical characteristics. The introduction of noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous in α = 0. On the contrary, the honeycomb hexagonal tessellation is topologically stable and, experimentally, all Voronoi cells are hexagonal for small but finite noise with α < 0.12. For all tessellations and for small values of α, we observe a linear dependence on α of the ensemble mean of the standard deviation of the area and perimeter of the cells. Already for a moderate amount of Gaussian noise (α > 0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations are indistinguishable. When α > 2, results converge to those of Poisson-Voronoi tessellations. The geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α > 2; in this limit the Desch law for perimeters is shown to be not valid and a square root dependence on n is established. This law allows for an easy link to the Lewis law for areas and agrees with exact asymptotic results. Finally, for α > 1, the ensemble mean of the cells' area and perimeter restricted to the hexagonal cells agrees remarkably well with the full ensemble mean; this reinforces the idea that hexagons, beyond their ubiquitous numerical prominence, can be interpreted as typical polygons in 2D Voronoi tessellations.
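A simplified, single-realization sketch of the perturbation experiment described above (not the ensemble setup of the study): points of a regular triangular lattice, whose Voronoi tessellation is the hexagonal honeycomb, are displaced by Gaussian noise controlled by α, and the distribution of cell side numbers is tallied. The grid size, α value and exact noise scaling (taken here as α times the mean point spacing) are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2)
alpha = 0.5
n = 20                                     # points per row/column
dx, dy = 1.0, np.sqrt(3) / 2.0             # triangular-lattice spacing
pts = np.array([[i * dx + 0.5 * (j % 2), j * dy]
                for j in range(n) for i in range(n)])
density = len(pts) / (n * dx * n * dy)     # points per unit area
# displace each generating point; noise scale ~ alpha times the mean spacing (assumption)
pts = pts + rng.normal(0.0, alpha / np.sqrt(density), pts.shape)

vor = Voronoi(pts)
sides = [len(vor.regions[r]) for r in vor.point_region
         if vor.regions[r] and -1 not in vor.regions[r]]    # bounded cells only
values, counts = np.unique(sides, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))          # side-number distribution
```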
Abstract:
Atmospheric CO2 concentration is hypothesized to influence vegetation distribution via tree–grass competition, with higher CO2 concentrations favouring trees. The stable carbon isotope (δ13C) signature of vegetation is influenced by the relative importance of C4 plants (including most tropical grasses) and C3 plants (including nearly all trees), and the degree of stomatal closure – a response to aridity – in C3 plants. Compound-specific δ13C analyses of leaf-wax biomarkers in sediment cores of an offshore South Atlantic transect are used here as a record of vegetation changes in subequatorial Africa. These data suggest a large increase in C3 relative to C4 plant dominance after the Last Glacial Maximum. Using a process-based biogeography model that explicitly simulates 13C discrimination, it is shown that precipitation and temperature changes cannot explain the observed shift in δ13C values. The physiological effect of increasing CO2 concentration is decisive, altering the C3/C4 balance and bringing the simulated and observed δ13C values into line. It is concluded that CO2 concentration itself was a key agent of vegetation change in tropical southern Africa during the last glacial–interglacial transition. Two additional inferences follow. First, long-term variations in terrestrial δ13C values are not simply a proxy for regional rainfall, as has sometimes been assumed. Although precipitation and temperature changes have had major effects on vegetation in many regions of the world during the period between the Last Glacial Maximum and recent times, CO2 effects must also be taken into account, especially when reconstructing changes in climate between glacial and interglacial states. Second, rising CO2 concentration today is likely to be influencing tree–grass competition in a similar way, and thus contributing to the "woody thickening" observed in savannas worldwide. This second inference points to the importance of experiments to determine how vegetation composition in savannas is likely to be influenced by the continuing rise of CO2 concentration.
Abstract:
Numerical experiments are described that pertain to the climate of a coupled atmosphere–ocean–ice system in the absence of land, driven by modern-day orbital and CO2 forcing. Millennial time-scale simulations yield a mean state in which ice caps reach down to 55° of latitude and both the atmosphere and ocean comprise eastward- and westward-flowing zonal jets, whose structure is set by their respective baroclinic instabilities. Despite the zonality of the ocean, it is remarkably efficient at transporting heat meridionally through the agency of Ekman transport and eddy-driven subduction. Indeed the partition of heat transport between the atmosphere and ocean is much the same as the present climate, with the ocean dominating in the Tropics and the atmosphere in the mid–high latitudes. Variability of the system is dominated by the coupling of annular modes in the atmosphere and ocean. Stochastic variability inherent to the atmospheric jets drives variability in the ocean. Zonal flows in the ocean exhibit decadal variability, which, remarkably, feeds back to the atmosphere, coloring the spectrum of annular variability. A simple stochastic model can capture the essence of the process. Finally, it is briefly reviewed how the aquaplanet can provide information about the processes that set the partition of heat transport and the climate of Earth.
Abstract:
We evaluate the ability of process-based models to reproduce observed global mean sea-level change. When the models are forced by changes in natural and anthropogenic radiative forcing of the climate system and anthropogenic changes in land-water storage, the average of the modelled sea-level change for the periods 1900–2010, 1961–2010 and 1990–2010 is about 80%, 85% and 90% of the observed rise. The modelled rate of rise is over 1 mm yr⁻¹ prior to 1950, decreases to less than 0.5 mm yr⁻¹ in the 1960s, and increases to 3 mm yr⁻¹ by 2000. When observed regional climate changes are used to drive a glacier model and an allowance is included for an ongoing adjustment of the ice sheets, the modelled sea-level rise is about 2 mm yr⁻¹ prior to 1950, similar to the observations. The model results encompass the observed rise and the model average is within 20% of the observations, about 10% when the observed ice sheet contributions since 1993 are added, increasing confidence in future projections for the 21st century. The increased rate of rise since 1990 is not part of a natural cycle but a direct response to increased radiative forcing (both anthropogenic and natural), which will continue to grow with ongoing greenhouse gas emissions.
Abstract:
Sudden stratospheric warmings (SSWs) are the most prominent vertical coupling process in the middle atmosphere; they occur during winter and are caused by the interaction of planetary waves (PWs) with the zonal mean flow. Vertical coupling has also been identified during the equinox transitions, and is similarly associated with PWs. We argue that there is a characteristic aspect of the autumn transition in northern high latitudes, which we call the "hiccup", and which acts like a "mini SSW", i.e. like a small minor warming. We study the average characteristics of the hiccup based on a superimposed epoch analysis using a nudged version of the Canadian Middle Atmosphere Model, representing 30 years of historical data. Hiccups can be identified in about half of the years studied. The mesospheric zonal wind results are compared to radar observations over Andenes (69° N, 16° E) for the years 2000–2013. A comparison of the average characteristics of hiccups and SSWs shows both similarities and differences between the two vertical coupling processes.
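As an illustration of the compositing technique mentioned above, the following is a minimal superimposed epoch analysis sketch: records are aligned on each year's event onset day and averaged to form the mean "hiccup" signature. The data array, onset dates and window length are placeholders, not the CMAM or radar data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_days = 30, 365
wind = rng.normal(0.0, 5.0, (n_years, n_days))    # placeholder daily zonal wind (m/s)
onset_day = rng.integers(250, 300, n_years)        # placeholder autumn-transition onset dates
half_window = 30                                   # days before/after onset

# Extract a +/- 30 day window around each year's onset and average across years
epochs = np.stack([wind[y, d - half_window:d + half_window + 1]
                   for y, d in enumerate(onset_day)])
composite = epochs.mean(axis=0)                    # mean signature at lags -30..+30 days
print(composite.shape)                             # (61,)
```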