893 results for Pure points of a measure
Abstract:
The understanding of the statistical properties and of the dynamics of multistable systems is gaining more and more importance in a vast variety of scientific fields. This is especially relevant for the investigation of the tipping points of complex systems. Sometimes, in order to understand the time series of given observables exhibiting bimodal distributions, simple one-dimensional Langevin models are fitted to reproduce the observed statistical properties and used to investigate the projected dynamics of the observable. This is of great relevance for studying potential catastrophic changes in the properties of the underlying system or resonant behaviours like those related to stochastic resonance-like mechanisms. In this paper, we propose a framework for studies of this kind, using simple box models of the oceanic circulation and choosing as observable the strength of the thermohaline circulation. We study the statistical properties of the transitions between the two modes of operation of the thermohaline circulation under symmetric boundary forcings and test their agreement with simplified one-dimensional phenomenological theories. We extend our analysis to include stochastic resonance-like amplification processes. We conclude that fitted one-dimensional Langevin models, when closely scrutinised, may turn out to be more ad hoc than they seem, lacking robustness and/or well-posedness. They should be treated with care, more as an empirical descriptive tool than as a methodology with predictive power.
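As an illustration of the kind of one-dimensional Langevin model discussed above, the sketch below integrates an overdamped double-well system with the Euler-Maruyama scheme. The potential, noise amplitude and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulate_langevin(steps=200_000, dt=1e-3, noise=0.6, seed=42):
    """Euler-Maruyama integration of dx = -V'(x) dt + noise dW with the
    illustrative double-well potential V(x) = x^4/4 - x^2/2 (wells at x = +-1)."""
    rng = random.Random(seed)
    x = 1.0                      # start in the right-hand well
    xs = []
    for _ in range(steps):
        drift = -(x**3 - x)      # -V'(x) for V = x^4/4 - x^2/2
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = simulate_langevin()
```

With moderate noise the trajectory hops between the wells near x = -1 and x = +1, producing the bimodal distribution that such models are fitted to reproduce; the fitting step itself (estimating the drift and diffusion from an observed time series) is exactly where the robustness issues raised in the abstract arise.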
Abstract:
We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that by using a general functional decomposition for space-time dependent forcings, we can define elementary susceptibilities that allow one to construct the response of the system to general perturbations. Starting from the definition of the SRB measure, we then study the consequences of taking different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation allows one to obtain the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when they are taken into consideration the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results for strategies for analysing the outputs of numerical experiments, providing a critical review of a formula proposed by Reick.
Abstract:
Some points of the paper by N.K. Nichols (see ibid., vol.AC-31, p.643-5, 1986), concerning the robust pole assignment of linear multi-input systems, are clarified. It is stressed that the minimization of the condition number of the closed-loop eigenvector matrix does not necessarily lead to robustness of the pole assignment. It is shown why the computational method, which Nichols claims is robust, is in fact numerically unstable with respect to the determination of the gain matrix. In reply, Nichols presents arguments to support the choice of the conditioning of the closed-loop poles as a measure of robustness and to show that the methods of J. Kautsky, N. K. Nichols and P. Van Dooren (1985) are stable in the sense that they produce accurate solutions to well-conditioned problems.
Abstract:
We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the life course? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.
Abstract:
The UK Department for Environment, Food and Rural Affairs (Defra) identified practices to reduce the risk of animal disease outbreaks. We report on the response of sheep and pig farmers in England to promotion of these practices. A conceptual framework was established from research on factors influencing adoption of animal health practices, linking knowledge, attitudes, social influences and perceived constraints to the implementation of specific practices. Qualitative data were collected from nine sheep and six pig enterprises in 2011. Thematic analysis explored attitudes and responses to the proposed practices, and factors influencing the likelihood of implementation. Most farmers feel they are doing all they can reasonably do to minimise disease risk and that practices not being implemented are either not relevant or ineffective. There is little awareness of, or concern about, risk from unseen threats. Pig farmers place more emphasis than sheep farmers on controlling wildlife, staff and visitor management and staff training. The main factors that influence livestock farmers' decisions on whether or not to implement a specific disease risk measure are: attitudes to, and perceptions of, disease risk; attitudes towards the specific measure and its efficacy; characteristics of the enterprise which they perceive as making a measure impractical; previous experience of a disease or of the measure; and the credibility of information and advice. Great importance is placed on access to authoritative information, with most seeing vets as the prime source to interpret generic advice from national bodies in the local context.
Uptake of disease risk measures could be increased by: improved risk communication through the farming press and vets to encourage farmers to recognise hidden threats; dissemination of credible early warning information to sharpen farmers’ assessment of risk; and targeted information through training events, farming press, vets and other advisers, and farmer groups, tailored to the different categories of livestock farmer.
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
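The single-response kinetic models that this line of work started from can be illustrated with a toy two-step scheme (sugar + amino compound -> intermediate -> flavour compound). The scheme, rate constants and concentrations below are purely hypothetical and far simpler than the multi-response models the review discusses.

```python
def simulate_maillard_toy(k1=0.05, k2=0.02, dt=0.1, t_end=200.0):
    """Euler integration of a toy two-step kinetic scheme:
       sugar + amino --k1 (2nd order)--> intermediate --k2 (1st order)--> flavour
    Concentrations are in arbitrary units; rate constants are illustrative only."""
    sugar, amino, inter, flavour = 1.0, 1.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        r1 = k1 * sugar * amino   # rate of the initial condensation step
        r2 = k2 * inter           # rate of intermediate breakdown to flavour
        sugar   -= r1 * dt
        amino   -= r1 * dt
        inter   += (r1 - r2) * dt
        flavour += r2 * dt
        t += dt
    return sugar, amino, inter, flavour

sugar, amino, inter, flavour = simulate_maillard_toy()
```

Fitting the rate constants of such a scheme to measured concentrations at several temperatures is the basic "multi-response" idea: every species measured constrains the same small set of parameters, rather than each response being modelled empirically on its own.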
Abstract:
This chapter introduces the latest practices and technologies in the interactive interpretation of environmental data. With environmental data becoming ever larger, more diverse and more complex, there is a need for a new generation of tools that provides new capabilities over and above those of the standard workhorses of science. These new tools aid the scientist in discovering interesting new features (and also problems) in large datasets by allowing the data to be explored interactively using simple, intuitive graphical tools. In this way, new discoveries are made that are commonly missed by automated batch data processing. This chapter discusses the characteristics of environmental science data, common current practice in data analysis and the supporting tools and infrastructure. New approaches are introduced and illustrated from the points of view of both the end user and the underlying technology. We conclude by speculating as to future developments in the field and what must be achieved to fulfil this vision.
Abstract:
The distribution of dust in the ecliptic plane between 0.96 and 1.04 au has been inferred from impacts on the two Solar Terrestrial Relations Observatory (STEREO) spacecraft through observation of secondary particle trails and unexpected off-points in the heliospheric imager (HI) cameras. This study made use of analysis carried out by members of a distributed web-based citizen science project Solar Stormwatch. A comparison between observations of the brightest particle trails and a survey of fainter trails shows consistent distributions. While there is no obvious correlation between this distribution and the occurrence of individual meteor streams at Earth, there are some broad longitudinal features in these distributions that are also observed in sources of the sporadic meteor population. The different position of the HI instrument on the two STEREO spacecraft leads to each sampling different populations of dust particles. The asymmetry in the number of trails seen by each spacecraft and the fact that there are many more unexpected off-points in the HI-B than in HI-A indicates that the majority of impacts are coming from the apex direction. For impacts causing off-points in the HI-B camera, these dust particles are estimated to have masses in excess of 10⁻¹⁷ kg with radii exceeding 0.1 μm. For off-points observed in the HI-A images, which can only have been caused by particles travelling from the anti-apex direction, the distribution is consistent with that of secondary ‘storm’ trails observed by HI-B, providing evidence that these trails also result from impacts with primary particles from an anti-apex source. Investigating the mass distribution for the off-points of both HI-A and HI-B, it is apparent that the differential mass index of particles from the apex direction (causing off-points in HI-B) is consistently above 2. This indicates that the majority of the mass is within the smaller particles of this population. In contrast, the differential mass index of particles from the anti-apex direction (causing off-points in HI-A) is consistently below 2, indicating that the majority of the mass is to be found in larger particles of this distribution.
Abstract:
The authors identified several specific problems with the measurement of achievement goals in the current literature and illustrated these problems, focusing primarily on A. J. Elliot and H. A. McGregor's (2001) Achievement Goal Questionnaire (AGQ). They attended to these problems by creating the AGQ-Revised and conducting a study that examined the measure's structural validity and predictive utility with 229 (76 male, 150 female, 3 unspecified) undergraduates. The hypothesized factor and dimensional structures of the measure were confirmed and shown to be superior to a host of alternatives. The predictions were nearly uniformly supported with regard to both the antecedents (need for achievement and fear of failure) and consequences (intrinsic motivation and exam performance) of the 4 achievement goals. In discussing their work, the authors highlight the importance and value of additional precision in the area of achievement goal measurement.
Abstract:
This article is the guest editors' introduction to a special issue on using Social Network Research in the field of Human Resource Management. The goals of the special issue are: (1) to draw attention to the points of integration between the two fields, (2) to showcase research that applies social network perspectives and methodology to issues relevant to HRM and (3) to identify common challenges where future collaborative efforts could contribute to advancements in both fields.
Abstract:
In this article, we investigate how the choice of the attenuation factor in an extended version of Katz centrality influences the centrality of the nodes in evolving communication networks. For given snapshots of a network, observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners (receivers) in the network. Here we explore the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time and its computation in the case of large networks. We compare three different communicability measures: standard, exponential, and relaxed (where the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised, in order to maintain the convergence of the measure). Furthermore, using a vitality-based measure of both standard and relaxed communicability indices, we look at the ways of establishing the most important individuals for broadcasting and receiving of messages related to community bridging roles. We compare those measures with the scores produced by an iterative version of the PageRank algorithm and illustrate our findings with three examples of real-life evolving networks: the MIT reality mining data set, consisting of daily communications between 106 individuals over the period of one year; a UK Twitter mentions network, constructed from the direct tweets between 12.4k individuals during one week; and a subset of the Enron email data set.
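The attenuation-factor constraint mentioned above can be made concrete: a Katz-style walk series converges only when the attenuation factor is below the reciprocal of the spectral radius. The sketch below, on a hypothetical 4-node snapshot (none of the paper's data sets), estimates the spectral radius by power iteration and then sums the damped walk series.

```python
def spectral_radius(adj, iters=300):
    """Power-iteration estimate of the largest eigenvalue of a
    non-negative adjacency matrix (max-norm normalisation)."""
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(wi) for wi in w)
        v = [wi / lam for wi in w]
    return lam

def katz_scores(adj, alpha, iters=500):
    """Broadcast-style Katz scores x = sum_{k>=1} alpha^k A^k 1: geometrically
    damped counts of outgoing walks. The series converges iff
    alpha < 1/spectral_radius(A); using the transpose of A instead gives the
    receiving ('listener') version."""
    n = len(adj)
    x = [0.0] * n
    term = [1.0] * n
    for _ in range(iters):
        term = [alpha * sum(adj[i][j] * term[j] for j in range(n))
                for i in range(n)]
        x = [xi + ti for xi, ti in zip(x, term)]
    return x

# Hypothetical 4-node directed snapshot (not from the data sets in the paper).
A = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]
rho = spectral_radius(A)
alpha = 0.5 / rho          # keep alpha safely below the 1/rho convergence bound
scores = katz_scores(A, alpha)
```

The "relaxed" measure in the abstract addresses exactly the failure mode this sketch avoids: if alpha were chosen at or above 1/rho, the walk series would diverge, which is why either the bound must be respected or the adjacency matrix normalised.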
Abstract:
The link between natural ion-line enhancements in radar spectra and auroral activity has been the subject of recent studies but conclusions have been limited by the spatial and temporal resolution previously available. The next challenge is to use shorter sub-second integration times in combination with interferometric programmes to resolve spatial structure within the main radar beam, and so relate enhanced filaments to individual auroral rays. This paper presents initial studies of a technique, using optical and spectral satellite signatures, to calibrate the received phase of a signal with the position of the scattering source along the interferometric baseline of the EISCAT Svalbard Radar. It is shown that a consistent relationship can be found only if the satellite passage through the phase fringes is adjusted from the passage predicted by optical tracking. This required adjustment is interpreted as being due to the vector between the theoretical focusing points of the two antennae, i.e. the true radar baseline, differing from the baseline obtained by survey between the antenna foot points. A method to obtain a measurement of the true interferometric baseline using multiple satellite passes is outlined.
Abstract:
Given a dataset of two-dimensional points in the plane with integer coordinates, the proposed method reduces a set of n points to a set of s points, s ≤ n, such that the convex hull of the set of s points is the same as the convex hull of the original set of n points. The method runs in O(n) time and helps any convex hull algorithm run faster. Empirical analysis of a practical case shows a reduction of over 98% in the number of points, which is reflected in faster computation with a speedup factor of at least 4.
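The abstract does not give the paper's exact procedure, but the same goal is met by the well-known Akl-Toussaint interior-elimination heuristic, sketched below: discard every point strictly inside the quadrilateral spanned by the x- and y-extreme points. That test is O(n) and can never discard a hull vertex, so any hull algorithm returns the same hull on the reduced set.

```python
import math

def reduce_for_hull(points):
    """Akl-Toussaint-style point reduction (a sketch, not the paper's method):
    drop points strictly inside the quadrilateral of the extreme points."""
    if len(points) < 5:
        return list(points)
    quad = list({min(points), max(points),                    # x extremes
                 min(points, key=lambda p: (p[1], p[0])),     # y extremes
                 max(points, key=lambda p: (p[1], p[0]))})
    if len(quad) < 3:                  # degenerate quad: nothing safe to drop
        return list(points)
    cx = sum(p[0] for p in quad) / len(quad)
    cy = sum(p[1] for p in quad) / len(quad)
    quad.sort(key=lambda p: math.atan2(p[1] - cy, p[0] - cx))  # CCW order

    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    def strictly_inside(p):
        m = len(quad)
        return all(cross(quad[i], quad[(i+1) % m], p) > 0 for i in range(m))

    return [p for p in points if not strictly_inside(p)]
```

The test is conservative by construction: a point is removed only if it lies strictly inside the counter-clockwise quadrilateral, and every convex-hull vertex lies on or outside that quadrilateral, so the hull of the reduced set is unchanged.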
Abstract:
BACKGROUND: Chemical chitin extraction generates large amounts of waste and increases partial deacetylation of the product. The use of biological methods for chitin extraction is therefore an interesting alternative. The focal point of this study was a systematic investigation of the effects of process conditions on enzyme-assisted extraction of chitin from shrimp shells. RESULTS: Under demineralisation conditions of 25 °C, 20 min, and shell:lactic acid and shell:acetic acid ratios of 1:1.1 w/w and 1:1.2 w/w, the maximum demineralisation values were 98.64% and 97.57% for lactic and acetic acids, respectively. A total protein removal efficiency of 91.10% by protease from Streptomyces griseus, with an enzyme:substrate ratio of 55 U/g, pH 7.0 and an incubation time of 3 h, was obtained when the particle size range was 50-25 μm, which was identified as the most critical factor. X-ray diffraction and 13C NMR spectroscopy analyses showed that the lower percent crystallinity and higher degree of acetylation of chitin from enzyme-assisted extraction may confer better solubility properties and less depolymerisation in comparison with chitin from chemical extraction. CONCLUSION: The present work investigated the effects of individual factors on process yields and showed that, if the particle size is properly controlled, a reaction time of 3 h is more than enough for deproteination by protease. Physicochemical analysis indicated that enzyme-assisted production seems appropriate for extracting chitin, possibly retaining its native structure.
Abstract:
European beech (Fagus sylvatica L.) and Norway spruce (Picea abies Karst.) are two of the most ecologically and economically important forest tree species in Europe. These two species co-occur in many locations in Europe, leading to direct competition for canopy space. Foliage characteristics of two naturally regenerated pure stands of beech and spruce with fully closed canopies were contrasted to assess the dynamic relationship between foliage adaptability to shading, stand LAI and tree growth. We found that individual leaf size is far more conservative in spruce than in beech. Individual leaf and needle area was larger at the top than at the bottom of the canopy in both species. An inverse relationship was found for specific leaf area (SLA): the highest SLA values occurred at the lowest light availability under the canopy. There was no difference in leaf area index (LAI) between the two stands; however, LAI increased from 10.8 to 14.6 m² m⁻² between 2009 and 2011. Dominant trees of both species were more efficient at converting foliage mass or area into stem biomass, although this relationship changed with age and was species-specific. Overall, we found greater foliage plasticity in beech than in spruce in relation to light conditions, indicating a greater capacity to exploit niche openings.