973 results for Non-linear Bending


Relevance: 80.00%

Abstract:

We use sunspot group observations from the Royal Greenwich Observatory (RGO) to investigate the effects of intercalibrating data from observers with different visual acuities. The tests are made by counting the number of groups RB above a variable cut-off threshold of observed total whole-spot area (uncorrected for foreshortening) to simulate what a lower-acuity observer would have seen. The synthesised annual means of RB are then re-scaled to the full observed RGO group number RA using a variety of regression techniques. It is found that a very high correlation between RA and RB (rAB > 0.98) does not prevent large errors in the intercalibration (for example, sunspot maximum values can be over 30% too large even at such levels of rAB). In generating the backbone sunspot number (RBB), Svalgaard and Schatten (2015, this issue) force regression fits to pass through the scatter-plot origin, which generates unreliable fits (the residuals do not form a normal distribution) and causes sunspot cycle amplitudes to be exaggerated in the intercalibrated data. It is demonstrated that the use of Quantile-Quantile ("Q-Q") plots to test for a normal distribution is a useful indicator of erroneous and misleading regression fits. Ordinary least-squares linear fits, not forced to pass through the origin, are sometimes reliable (although the optimum method is shown to be different when matching peak and average sunspot group numbers). However, other fits are only reliable if non-linear regression is used. From these results it is entirely possible that the inflation of solar cycle amplitudes in the backbone group sunspot number as one goes back in time, relative to related solar-terrestrial parameters, is caused solely by the use of inappropriate and non-robust regression techniques to calibrate the sunspot data.
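
A minimal sketch of the kind of diagnostic described above, with synthetic data standing in for the RGO series: it compares a proportional fit forced through the origin against an ordinary least-squares fit with a free intercept, then tests the residuals of each for normality (the check a Q-Q plot makes visual). Variable names and numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic annual means: R_B is a noisy, slightly non-linear function of R_A
R_A = rng.uniform(5, 120, size=100)
R_B = 0.8 * R_A**0.95 + rng.normal(0, 3, size=100)

# Proportional fit forced through the origin: R_A ~ k * R_B
k = np.sum(R_A * R_B) / np.sum(R_B**2)
resid_origin = R_A - k * R_B

# Ordinary least-squares fit with a free intercept
slope, intercept, r, _, _ = stats.linregress(R_B, R_A)
resid_free = R_A - (slope * R_B + intercept)
print(f"correlation r_AB = {r:.3f}")

# Normality diagnostics for the residuals of each fit
for name, res in [("through origin", resid_origin), ("free intercept", resid_free)]:
    (osm, osr), (qq_slope, qq_int, qq_r) = stats.probplot(res, dist="norm")
    stat, p = stats.shapiro(res)
    print(f"{name}: Q-Q correlation = {qq_r:.4f}, Shapiro-Wilk p = {p:.3f}")
```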

Relevance: 80.00%

Abstract:

Model simulations of the next few decades are widely used in assessments of climate change impacts and as guidance for adaptation. Their non-linear nature reveals a level of irreducible uncertainty which is important to understand and quantify, especially for projections of near-term regional climate. Here we use large idealised initial-condition ensembles of the FAMOUS global climate model with a 1 %/year compound increase in CO2 levels to quantify the range of future temperatures in model-based projections. These simulations explore the role of both atmospheric and oceanic initial conditions and are the largest such ensembles to date. Short-term simulated trends in global temperature are diverse, and cooling periods are more likely to be followed by larger warming rates. The spatial pattern of near-term temperature change varies considerably, but the proportion of the surface showing a warming is more consistent. In addition, ensemble spread in inter-annual temperature declines as the climate warms, especially in the North Atlantic. Over Europe, atmospheric initial-condition uncertainty can, for certain ocean initial conditions, lead to 20-year trends in winter and summer in which every location can exhibit either strong cooling or rapid warming. However, the details of the distribution are highly sensitive to the ocean initial condition chosen, and particularly to the state of the Atlantic meridional overturning circulation. On longer timescales, the warming signal becomes clearer and more consistent amongst different initial-condition ensembles. An ensemble using a range of different oceanic initial conditions produces a larger spread in temperature trends than ensembles using a single ocean initial condition, for all lead times. This highlights the potential benefits of initialising climate predictions from ocean states informed by observations. These results suggest that climate projections need to be performed with many more ensemble members than at present, using a range of ocean initial conditions, if the uncertainty in near-term regional climate is to be adequately quantified.
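
A minimal sketch of the kind of quantification described: compute a 20-year temperature trend for every member of a hypothetical initial-condition ensemble and summarise the spread. The array shapes, forcing rate, and noise level are illustrative assumptions, not FAMOUS output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 100 members, 50 years of annual-mean temperature,
# a common forced warming plus member-specific internal variability
n_members, n_years, window = 100, 50, 20
forced = 0.02 * np.arange(n_years)                       # degC per year
internal = np.cumsum(rng.normal(0, 0.08, (n_members, n_years)), axis=1)
temp = forced + internal

# Least-squares 20-year trend for every member and every start year
years = np.arange(window)
trends = np.array([
    [np.polyfit(years, member[s:s + window], 1)[0]
     for s in range(n_years - window)]
    for member in temp
])  # shape (members, start_years), in degC / year

# Ensemble spread of near-term (first-window) trends
lo, hi = np.percentile(10 * trends[:, 0], [5, 95])
print(f"5-95% spread of 20-yr trends: {lo:.2f} to {hi:.2f} degC/decade")
```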

Relevance: 80.00%

Abstract:

Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods have been applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational threshold [S_S], defined such that the observer is assumed to miss all of the groups with an area smaller than S_S and to report all of the groups larger than S_S. Next, using a Monte-Carlo method, we construct from the reference data set a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot-group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
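
A minimal sketch of the threshold-plus-correction-matrix idea, under stated assumptions: synthetic daily group areas stand in for the RGO reference data, the threshold S_S is invented for illustration, and the matrix is built by simple counting rather than the paper's full Monte-Carlo procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reference data: daily sunspot-group areas (arbitrary units);
# real work would use the RGO 1900-1976 record
n_days, max_groups = 20000, 15
S_S = 40.0  # assumed observational threshold for one observer

true_counts, seen_counts = [], []
for _ in range(n_days):
    n = rng.integers(0, max_groups)
    areas = rng.lognormal(mean=3.0, sigma=1.2, size=n)
    true_counts.append(n)
    seen_counts.append(int(np.sum(areas >= S_S)))  # groups the observer reports

# Correction matrix: P(true count | observed count), built by counting days
M = np.zeros((max_groups + 1, max_groups + 1))
for t, s in zip(true_counts, seen_counts):
    M[s, t] += 1
row_sums = M.sum(axis=1, keepdims=True)
M = np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)

# Expected true count given an observed count of 3 -- note the non-linearity:
# this is not simply a constant multiple of the observed count
print("E[true | seen=3] =", (M[3] * np.arange(max_groups + 1)).sum())
```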

Relevance: 80.00%

Abstract:

Long-term monitoring of surface water quality has shown increasing concentrations of Dissolved Organic Carbon (DOC) across a large part of the Northern Hemisphere. Several drivers have been implicated, including climate change, land-management change, nitrogen and sulphur deposition, and CO2 enrichment. Analysis of stream-water data, supported by evidence from laboratory studies, indicates that an effect of declining sulphur deposition on catchment soil chemistry is likely to be the primary mechanism, but there are relatively few long-term soil water chemistry records in the UK with which to investigate this, and other, hypotheses directly. In this paper, we assess temporal relationships between soil solution chemistry and parameters that have been argued to regulate DOC production, using a unique set of co-located measurements of weather, bulk deposition, and soil solution chemistry provided by the UK Environmental Change Network and the Intensive Forest Monitoring Level II Network. We used statistical non-linear trend analysis to investigate these relationships at 5 forested and 4 non-forested sites from 1993 to 2011. Most trends in soil solution DOC concentration were found to be non-linear. Significant increases in DOC occurred mostly prior to 2005. The magnitude and sign of the trends were associated qualitatively with changes in acid deposition, the presence/absence of a forest canopy, soil depth and soil properties. The strongest increases in DOC were seen in acidic forest soils and were most clearly linked to declining anthropogenic acid deposition, while DOC trends at some sites with westerly locations appeared to have been influenced by shorter-term hydrological variation. The results indicate that the widespread DOC increases observed in surface waters elsewhere are most likely dominated by enhanced mobilization of DOC in surficial organic horizons, rather than by changes in the soil water chemistry of deeper horizons. While trends in DOC concentrations in surface horizons have flattened out in recent years, further increases may be expected as soil chemistry continues to adjust to declining inputs of acidity.
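
The abstract does not specify which non-linear trend method was used; as an illustrative stand-in, the sketch below compares a linear against a smooth non-linear (cubic) trend on a hypothetical annual DOC series via AIC. The series shape (a rise that flattens after ~2005) and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual-mean DOC concentrations (mg/L), 1993-2011
years = np.arange(1993, 2012)
t = years - years[0]
doc = 8 + 4 / (1 + np.exp(-(t - 8))) + rng.normal(0, 0.3, t.size)

def aic(y, yhat, k):
    """AIC for a least-squares fit with k parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Compare a linear trend against a smooth non-linear (cubic) trend
lin = np.polyval(np.polyfit(t, doc, 1), t)
cub = np.polyval(np.polyfit(t, doc, 3), t)
print(f"AIC linear: {aic(doc, lin, 2):.1f}   AIC cubic: {aic(doc, cub, 4):.1f}")
# A lower AIC for the cubic fit flags the trend as non-linear
```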

Relevance: 80.00%

Abstract:

Filter degeneracy is the main obstacle to the implementation of particle filters in non-linear, high-dimensional models. A new scheme, the implicit equal-weights particle filter (IEWPF), is introduced. In this scheme samples are drawn implicitly from proposal densities with a different covariance for each particle, such that all particle weights are equal by construction. We test and explore the properties of the new scheme using a simple 1,000-dimensional linear model and the 1,000-dimensional non-linear Lorenz96 model, and compare the performance of the scheme to a Local Ensemble Kalman Filter. The experiments show that the new scheme can easily be implemented in high-dimensional systems and is never degenerate, with good convergence properties in both systems.
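
The Lorenz96 model used as the non-linear test bed has a standard definition, dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indices; below is a minimal integration sketch of it at the 1,000-dimensional size quoted above (the particle filter itself is beyond a short example).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96(t, x, F=8.0):
    """Lorenz96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

# 1,000-dimensional state, perturbed slightly to trigger chaotic behaviour
n = 1000
x0 = 8.0 * np.ones(n)
x0[0] += 0.01

sol = solve_ivp(lorenz96, (0.0, 10.0), x0, method="RK45", max_step=0.05)
print("final state, first 5 components:", np.round(sol.y[:5, -1], 3))
```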

Relevance: 80.00%

Abstract:

This paper analyzes the changes that ways of organizing memory have undergone since ancient times, turning them into today's artificial memory systems. It aims to draw a parallel between the art of memory (which associates images with specific texts) and hypertext (which also uses associations, but in a non-linear way). Our methodology consisted of a qualitative approach involving the collection of texts about the art of memory and hypertext; this enabled us to recover the historical-cultural changes which modified the form and use of the art of memory and allowed the creation of hypertext. The paper also analyzes the similarities among artificial memory systems created by different cultures to prevent the loss of knowledge produced by society.

Relevance: 80.00%

Abstract:

The aim of this study was to estimate the indoor and outdoor concentrations of fungal spores in the Metropolitan Area of Sao Paulo (MASP), collected at different sites in the winter/spring and summer seasons. The techniques adopted included cultivation (samples collected with impactors) and microscopic enumeration (samples collected with impingers). The overall results showed total concentrations of fungal spores as high as 36,000 per cubic meter, with a large proportion of non-culturable spores (around 91% of the total). Penicillium sp. and Aspergillus sp. were the dominant species both indoors and outdoors, in all seasons tested, occurring in more than 30% of homes at very high concentrations of culturable airborne fungi [colony-forming units (CFU) m⁻³]. There was no significant difference between indoor and outdoor concentrations. The total fungal spore concentration found in winter was 19% higher than that in summer. Heat and humidity were the main factors affecting fungal growth; however, a non-linear response to these factors was found. Thus, temperatures below 16 °C and above 25 °C caused a reduction in the concentration (CFU m⁻³) of airborne fungi, which fits with MASP climatology. The same pattern was observed for humidity, although not as clearly as with temperature, given the usually high relative humidity (above 70%) in the study area. These results are relevant for public health interventions that aim to reduce respiratory morbidity among susceptible populations.

Relevance: 80.00%

Abstract:

In this work, a new theoretical mechanism is presented in which equatorial Rossby and inertio-gravity wave modes may interact with each other through resonance with the diurnal cycle of tropical deep convection. We have adopted the two-layer incompressible equatorial primitive equations forced by a parametric heating that roughly represents deep-convection activity in the tropical atmosphere. The heat source was parametrized in the simplest way, according to the hypothesis that it is proportional to the lower-troposphere moisture convergence, with the background moisture state function mimicking the structure of the ITCZ. In this context, we have investigated the possibility of resonant interaction between equatorially trapped Rossby and inertio-gravity modes through the diurnal cycle of the background moisture state function. The reduced dynamics of a single resonant duo shows that, when this diurnal variation is considered, a Rossby wave mode can undergo significant amplitude modulations when interacting with an inertio-gravity wave mode, which is not possible in the context of resonant triad non-linear interaction. Therefore, the results suggest that the diurnal variation of the ITCZ can be a dynamical mechanism that leads Rossby waves to be significantly affected by high-frequency modes.
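
The abstract does not state the resonance condition explicitly. Schematically, and assuming the standard form for coupling through a time-periodic coefficient (an assumption, not a quotation from the paper), a Rossby mode and an inertio-gravity mode can exchange energy when the diurnal frequency bridges their frequency gap:

```latex
% Schematic parametric-resonance condition (assumed form)
\begin{equation*}
  \omega_{IG} \pm \omega_{R} \approx \omega_{d},
  \qquad
  \omega_{d} = \frac{2\pi}{1\ \text{day}},
\end{equation*}
```

with the zonal wavenumbers of the two modes matching that of the diurnally modulated background moisture field.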

Relevance: 80.00%

Abstract:

Numerical experiments with the Brazilian additions to the Regional Atmospheric Modeling System were performed with two nested grids (50 and 10 km horizontal resolution, respectively), with and without the effect of biomass burning, for 8 different situations with 96-h integrations. Only the direct radiative effect of aerosols is considered. The results were analyzed in large areas encompassing the BR163 road (one of the main areas of deforestation in the Amazon), mainly where most of the burning takes place. The precipitation change due to the direct radiative impact of biomass burning is generally negative (i.e., there is a decrease of precipitation). However, there are a few cases with a positive impact. Two opposite forcing mechanisms were explored: (a) the thermodynamic forcing, which is generally negative in the sense that the aerosol tends to stabilize the lower atmosphere, and (b) the dynamic impact associated with the low-level horizontal pressure gradients produced by the aerosol plumes. In order to understand the non-linear relationship between the two effects, experiments were performed with 4-fold emissions. In these cases, the dynamic effect overcomes the stabilization produced by the radiative forcing, and a precipitation increase is observed in comparison with the control experiment. This study suggests that, in general, the biomass-burning radiative forcing decreases the precipitation. However, very large concentrations of aerosols may lead to an increase of precipitation due to the dynamical forcing associated with the horizontal pressure gradients.

Relevance: 80.00%

Abstract:

In this paper, we construct a dynamic portrait of the inner asteroid belt. We use information about the distribution of test particles, initially placed on a perfectly rectangular grid of initial conditions, after 4.2 Myr of gravitational interactions with the Sun and five planets, from Mars to Neptune. Using the spectral analysis method introduced by Michtchenko et al., the asteroidal behaviour is illustrated in detail on dynamical, averaged and frequency maps. On the averaged and frequency maps, we superpose information on the proper elements and proper frequencies of real objects, extracted from the database AstDyS, constructed by Milani and Knezevic. A comparison of the maps with the distribution of real objects allows us to detect possible dynamical mechanisms acting in the domain under study; these mechanisms are related to mean-motion and secular resonances. We note that the two- and three-body mean-motion resonances and the secular resonances (strong linear and weaker non-linear) play an important role in the diffusive transport of the objects. Their long-lasting action, overlaid with the Yarkovsky effect, may explain many observed features of the density, size and taxonomic distributions of the asteroids.

Relevance: 80.00%

Abstract:

Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn to data analysis is evident and their use is increasing. However, consistency is a point of concern in these tools: the classification of the temporal organization of a data set might indicate one series as relatively less ordered than another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might produce incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control-parameter values, and random noise). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn correctly does. In order to validate the tool we performed shuffled- and surrogate-data analyses. Statistical analysis confirmed the consistency of the method.
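
A minimal sketch: a standard ApEn implementation (Pincus's definition, self-matches included) plus a vApEn-style aggregation over a grid of window lengths m and tolerance factors r. The grid and the simple averaging are illustrative assumptions; the abstract does not give the paper's exact combination rule.

```python
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        # All length-m templates, compared with the Chebyshev distance
        templates = np.lib.stride_tricks.sliding_window_view(x, m)
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = np.mean(dists <= r, axis=1)  # self-matches included, as in ApEn
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

def vapen(x, ms=(1, 2, 3), r_factors=(0.1, 0.2, 0.3)):
    # vApEn-style idea (sketch): aggregate ApEn over many (m, r) combinations
    return np.mean([apen(x, m, rf) for m in ms for rf in r_factors])

rng = np.random.default_rng(3)
sine = np.sin(np.linspace(0, 20 * np.pi, 500))
noise = rng.normal(size=500)
print(f"vApEn sine:  {vapen(sine):.3f}")
print(f"vApEn noise: {vapen(noise):.3f}")  # should exceed the sine's value
```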

Relevance: 80.00%

Abstract:

The production of PHA from plant oils by Pseudomonas species isolated from the soil of a sugarcane crop was evaluated. Out of 22 bacterial strains, three were able to use plant oils efficiently to grow and to accumulate PHA. Pseudomonas putida and Pseudomonas aeruginosa strains produced PHA with differences in monomer composition compatible with variability in the monomer specificity of their PHA biosynthesis systems. The molar fraction of 3-hydroxydodecanoate detected in the PHA was linearly correlated with the oleic acid supplied. A non-linear relationship between the molar fraction of 3-hydroxy-6-dodecenoate (3HDdΔ6) detected in PHA and the linoleic acid supplied was observed, compatible with saturation of the biosynthesis system's capability to channel intermediates of β-oxidation to PHA synthesis. Although P. putida showed a higher 3HDdΔ6 yield from linoleic acid than P. aeruginosa, in both species it was less than 10% of the maximum theoretical value. These results contribute to the knowledge about the biosynthesis of PHA with a controlled composition from plant oils, allowing the future establishment of the production of these polyesters as tailor-made polymers.

Relevance: 80.00%

Abstract:

Process scheduling techniques consider the current load situation to allocate computing resources. Those techniques make approximations, such as averaging communication, processing, and memory-access behavior, to improve process scheduling, although processes may present different behaviors during their execution. They may start with high communication requirements and later shift to pure processing. By discovering how processes behave over time, we believe it is possible to improve resource allocation. This has motivated this paper, which adopts chaos-theory concepts and non-linear prediction techniques in order to model and predict process behavior. Results confirm that the radial basis function technique presents good predictions with low processing demands, which is essential in a real distributed environment.
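
A minimal sketch of the approach as described: delay-embed a process-behavior series and fit a Gaussian radial basis function model to predict the next sample. The logistic-map series stands in for chaotic process behavior; the embedding dimension, centre count, and kernel width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic process-behavior series (e.g., CPU demand over time):
# a logistic map standing in for chaotic behavior
n = 600
x = np.empty(n)
x[0] = 0.3
for i in range(1, n):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])

# Delay embedding: predict x[t] from the previous d samples
d = 4
X = np.lib.stride_tricks.sliding_window_view(x[:-1], d)   # inputs
y = x[d:]                                                 # one-step targets
X_train, y_train, X_test, y_test = X[:400], y[:400], X[400:], y[400:]

# Gaussian RBF network with fixed randomly chosen centres
centres = X_train[rng.choice(len(X_train), 50, replace=False)]
gamma = 10.0

def design(A):
    d2 = ((A[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

w, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)
pred = design(X_test) @ w
print(f"test RMSE: {np.sqrt(np.mean((pred - y_test) ** 2)):.4f}")
```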

Relevance: 80.00%

Abstract:

The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into classes, so the knapsack has to be divided into compartments, each loaded with items from a single class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and each compartment's load is bounded below and above. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found most of the optimal solutions for the constrained compartmentalized knapsack problem. The heuristics, on the other hand, provide good solutions with low computational effort.
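
A schematic formulation under assumed notation (item values v_i, weights w_i, availabilities b_i, classes C_k, compartment fixed cost c and capacity loss s, per-compartment load bounds [L, U], knapsack capacity W), simplified to at most one compartment per class; the paper's actual model, which permits several compartments per class and is non-linear, may differ:

```latex
\begin{align*}
\max\quad & \sum_{k}\sum_{i \in C_k} v_i\, x_i \;-\; \sum_{k} c\, y_k \\
\text{s.t.}\quad
 & L\, y_k \;\le\; \sum_{i \in C_k} w_i\, x_i \;\le\; U\, y_k
   && \text{for each class } k, \\
 & \sum_{k}\Big( \sum_{i \in C_k} w_i\, x_i + s\, y_k \Big) \;\le\; W, \\
 & x_i \in \{0, 1, \dots, b_i\}, \qquad y_k \in \{0, 1\}.
\end{align*}
```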

Relevance: 80.00%

Abstract:

Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time-step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems, ranging from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
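
A minimal sketch of a Broyden-type solver for the coupling equations, under stated assumptions: the toy residual below (an interface flow rate q and pressure p with invented equations) stands in for the black-box sub-network responses, and the solver is the standard "good" Broyden update, not necessarily the specific variant the paper recommends.

```python
import numpy as np

def broyden(residual, x0, tol=1e-10, max_iter=50):
    """'Good' Broyden iteration on an approximate inverse Jacobian."""
    x = np.asarray(x0, dtype=float)
    f = residual(x)
    # One-time finite-difference Jacobian to seed the approximation;
    # afterwards only residual evaluations are needed, so each sub-network
    # can remain a black box mapping interface guesses to residuals.
    eps = 1e-6
    J = np.column_stack([(residual(x + eps * e) - f) / eps
                         for e in np.eye(x.size)])
    H = np.linalg.inv(J)
    for _ in range(max_iter):
        dx = -H @ f
        x_new = x + dx
        f_new = residual(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        df = f_new - f
        # Sherman-Morrison form of the 'good' Broyden update of H ~ J^{-1}
        Hdf = H @ df
        H += np.outer(dx - Hdf, dx @ H) / (dx @ Hdf)
        x, f = x_new, f_new
    return x

# Toy stand-in for the 2M coupling equations (M = 1 coupling point here);
# the equations and numbers are hypothetical, not the paper's models
def coupling_residual(u):
    q, p = u
    return np.array([q**3 + p - 3.0,
                     q + p**2 - 5.0])

print(np.round(broyden(coupling_residual, np.array([1.2, 1.8])), 6))
```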