881 results for Linear Analytical Systems
Abstract:
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural, and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions among the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions among cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable: the resulting topologies share properties of both regular and random networks, and display similarities with Watts-Strogatz's small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e., the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to biological reality.
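A minimal sketch of the evolutionary rewiring idea described above, assuming the classic density-classification task, a fixed local majority rule, and a mutation-only evolutionary loop; the lattice size, neighbourhood size, population size, and generation count are illustrative choices, not those of the thesis:

    import random

    N, K, STEPS, CASES, GENS, POP = 49, 6, 64, 40, 30, 10

    def ring():
        # regular starting topology: each cell sees its K nearest ring neighbours
        return [[(i + d) % N for d in range(-K // 2, K // 2 + 1) if d]
                for i in range(N)]

    def step(topo, s):
        # fixed local rule: every cell adopts the majority state of its
        # neighbourhood (itself included); only the wiring is evolved
        return [int(2 * (s[i] + sum(s[j] for j in topo[i])) > len(topo[i]) + 1)
                for i in range(N)]

    def fitness(topo):
        # density classification: relax to all-1 if initial density > 1/2, else all-0
        ok = 0
        for _ in range(CASES):
            rho = random.random()
            s = [int(random.random() < rho) for _ in range(N)]
            target = int(2 * sum(s) > N)
            for _ in range(STEPS):
                s = step(topo, s)
            ok += all(c == target for c in s)
        return ok / CASES

    def mutate(topo):
        # rewire one incoming link of one random cell (duplicate links allowed here)
        child = [nbrs[:] for nbrs in topo]
        i = random.randrange(N)
        child[i][random.randrange(len(child[i]))] = random.randrange(N)
        return child

    pop = [ring() for _ in range(POP)]
    for _ in range(GENS):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = [scored[0]] + [mutate(random.choice(scored[: POP // 2]))
                             for _ in range(POP - 1)]
    print("best density-classification accuracy:", fitness(scored[0]))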
Abstract:
The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
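As a concrete illustration of steps (i) and (ii), one can trace the achievable region of a two-class M/M/1 queue: under any non-preemptive work-conserving policy the mean waiting times lie on the segment between the two strict-priority vertices, and a linear holding cost is minimised at a vertex (the c-mu rule). A small sketch using the standard non-preemptive priority formulas; the rates and costs are hypothetical:

    # Hypothetical rates and holding costs for two customer classes.
    lam = {1: 0.3, 2: 0.4}          # arrival rates
    mu  = {1: 1.0, 2: 1.5}          # service rates (exponential service)
    c   = {1: 4.0, 2: 1.0}          # waiting cost per customer per unit time

    rho = {i: lam[i] / mu[i] for i in lam}
    W0 = sum(lam[i] * 2 / mu[i] ** 2 for i in lam) / 2   # mean residual work

    def waits(order):
        # mean waiting times when classes get strict priority in 'order'
        w, done = {}, 0.0
        for i in order:
            w[i] = W0 / ((1 - done) * (1 - done - rho[i]))
            done += rho[i]
        return w

    for order in ([1, 2], [2, 1]):
        w = waits(order)
        cost = sum(c[i] * lam[i] * w[i] for i in lam)
        conserved = sum(rho[i] * w[i] for i in lam)
        print(order, {i: round(w[i], 3) for i in w},
              "cost:", round(cost, 3), "conservation sum:", round(conserved, 4))
    # The conservation sum is identical at both vertices (the achievable region
    # is a line segment); the cheaper vertex serves the larger c*mu class first.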
Abstract:
Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from "as if" linear models. This paper illuminates the distinctions between these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of lens model research with novel methodology developed to specify the effectiveness of heuristics in different environments, and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge, and thus maps, of when and which heuristic to employ.
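A toy version of the comparison the paper formalises: in a simulated linear task environment, an "as if" linear model (ordinary least squares) is pitted against a unit-weight tallying heuristic, with performance measured, as in lens-model studies, by the correlation between prediction and criterion. The cue weights, sample sizes, and noise level are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def environment(n, weights, noise=1.0):
        # a linear task environment: criterion = weighted cues + noise
        X = rng.normal(size=(n, len(weights)))
        y = X @ weights + rng.normal(scale=noise, size=n)
        return X, y

    weights = np.array([0.9, 0.7, 0.5, 0.3])   # compensatory, direction-coded cues
    for n_train in (10, 50, 500):
        Xtr, ytr = environment(n_train, weights)
        Xte, yte = environment(2000, weights)
        beta = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]    # "as if" linear model
        r_ols = np.corrcoef(Xte @ beta, yte)[0, 1]
        r_tally = np.corrcoef(Xte.sum(axis=1), yte)[0, 1]  # unit-weight tallying
        print(f"n={n_train:3d}  linear model r={r_ols:.3f}  tallying r={r_tally:.3f}")

With few training cases the estimated weights are noisy and tallying is typically competitive; with many, the fitted linear model pulls ahead, illustrating the trade-off discussed above.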
Abstract:
The educational sphere has an internal function on which social scientists largely agree. The contribution that educational systems provide to society (i.e., their social function), however, does not enjoy the same degree of consensus. Against this theoretical background, the current article proposes an analytical schema to grasp the social function of education from a sociological perspective. Starting from the assumption that there is an intrinsic relationship between the internal and social functions of social systems, we suggest that particular stratification determinants modify the internal pedagogical function of education, and that these impact its social function by creating simultaneous conditions of equity and differentiation. Throughout the paper this social function is treated as a paradoxical mechanism. We highlight how this paradoxical dynamic is deployed at different structural levels of the educational sphere. Additionally, we discuss possible consequences of this paradoxical social function for the inclusion possibilities that educational systems offer to individuals.
Abstract:
Soil organic matter (SOM) plays a crucial role in soil quality and can act as an atmospheric C-CO2 sink under conservationist management systems. This study aimed to evaluate the long-term effects (19 years) of tillage (CT, conventional tillage, and NT, no-tillage) and crop rotations (R0, monoculture system; R1, winter crop rotation; and R2, intensive crop rotation) on total, particulate, and mineral-associated organic carbon (C) stocks of an originally degraded Red Oxisol in Cruz Alta, RS, Southern Brazil. The climate is humid subtropical Cfa 2a (Köppen classification), with a mean annual precipitation of 1,774 mm and a mean annual temperature of 19.2 ºC. The plots were divided into four segments, each of which was sampled in the layers 0-0.05, 0.05-0.10, 0.10-0.20, and 0.20-0.30 m. Sampling was performed manually by opening small trenches. The SOM pools were determined by physical fractionation. Soil C stocks had a linear relationship with annual crop C inputs, regardless of the tillage system. Thus, soil disturbance had a minor effect on SOM turnover. In the 0-0.30 m layer, soil C sequestration ranged from 0 to 0.51 Mg ha-1 yr-1, using the CT R0 treatment as baseline; crop rotation systems had more influence on soil C stocks than tillage systems. The mean C sequestration rate of the cropping systems was 0.13 Mg ha-1 yr-1 higher under NT than CT. This result was associated with the higher C input by crops due to the improvement in soil quality under long-term no-tillage. The particulate C fraction was a sensitive indicator of soil management quality, while mineral-associated organic C was the main pool of atmospheric C fixed in this clayey Oxisol. The C retained in this stable SOM fraction accounts for 81 and 89 % of total C sequestration in the treatments NT R1 and NT R2, respectively, in relation to the same cropping systems under CT. The highest C management index was observed in NT R2, confirming the capacity of this soil management practice to improve the soil C stock qualitatively in relation to CT R0. The results highlight the diversification of crop rotations with cover crops as a crucial strategy for atmospheric C-CO2 sequestration and SOM quality improvement in highly weathered subtropical Oxisols.
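For illustration only, the two calculations named above, the linear C stock vs. C input relationship and a sequestration rate against the CT R0 baseline, in a short sketch with invented plot values (not data from the Cruz Alta trial):

    import numpy as np

    # Hypothetical plot-level values (Mg C/ha): annual crop C input vs the
    # 0-0.30 m soil C stock; illustrative numbers, not trial measurements.
    c_input = np.array([2.1, 3.4, 4.8, 5.6, 6.9, 8.0])
    c_stock = np.array([38.5, 40.1, 42.0, 42.9, 45.2, 46.1])

    slope, intercept = np.polyfit(c_input, c_stock, 1)
    print(f"C stock = {intercept:.1f} + {slope:.2f} * annual C input")

    # Sequestration rate relative to a baseline treatment over 19 years
    baseline, treatment, years = 38.5, 48.2, 19
    print(f"sequestration rate = {(treatment - baseline) / years:.2f} Mg/ha/yr")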
Abstract:
This article presents an experimental study of the classification ability of several classifiers for multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, and the classification of cannabis seedlings is therefore a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug-type (illegal) and fibre-type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and to optimize the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with drug-type cannabis being made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for either classification problem. Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples both for 2-class and 12-class classification, with average classification results up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified as drug-type cannabis the independent validation set, which consists of cannabis plants coming from police seizures. For forensic casework, this study shows that the discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance, whether discriminating between cannabis chemotypes or between drug-type cannabis varieties.
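A sketch of this kind of classifier comparison using scikit-learn, with a synthetic stand-in for the eight GC-MS leaf-compound proportions (the real 861-sample dataset is not reproduced here); classifier settings are illustrative, not the tuned values of the study:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic 2-class problem with 8 features, standing in for the relative
    # proportions of the eight major leaf compounds per seedling.
    X, y = make_classification(n_samples=861, n_features=8, n_informative=6,
                               n_classes=2, class_sep=1.0, random_state=1)

    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "NNC (k=3)": KNeighborsClassifier(n_neighbors=3),
        "RBF SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)),
        "RF": RandomForestClassifier(n_estimators=200, random_state=1),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
        print(f"{name:10s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")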
Abstract:
We have studied the growth of interfaces in driven diffusive systems well below the critical temperature by means of Monte Carlo simulations. We consider the region beyond the linear regime, at large values of the external field, which had not been explored before. The simulations support the existence of interfacial traveling waves when asymmetry is introduced in the model, a result previously predicted by a linear-stability analysis. Furthermore, the generalization of the Gibbs-Thomson relation is discussed. The results provide evidence that the external field has a stabilizing effect and can be considered as effectively increasing the surface tension.
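For orientation, a minimal sketch of the type of simulation involved, assuming a KLS-style driven lattice gas with attractive nearest-neighbour coupling, Kawasaki (particle-hole exchange) Metropolis dynamics, a field E driving particles along the interface, and the column-height spread as a crude interface-width diagnostic; lattice size, temperature, and drive are illustrative, not the paper's values:

    import numpy as np

    rng = np.random.default_rng(2)
    L, T, E, SWEEPS = 32, 0.8, 2.0, 100      # lattice side, temperature, drive, sweeps
    NB = ((0, 1), (0, -1), (1, 0), (-1, 0))  # offsets; E drives along column index

    occ = np.zeros((L, L), dtype=int)        # phase-separated start: bottom half filled
    occ[: L // 2, :] = 1

    def bond_energy(s, cells):
        # -J * (occupied-occupied bonds) over distinct bonds touching 'cells' (J = 1)
        seen, e = set(), 0.0
        for (i, j) in cells:
            for di, dj in NB:
                k, m = (i + di) % L, (j + dj) % L
                b = frozenset(((i, j), (k, m)))
                if b not in seen:
                    seen.add(b)
                    e -= s[i, j] * s[k, m]
        return e

    for sweep in range(SWEEPS):
        for _ in range(L * L):
            i, j = int(rng.integers(L)), int(rng.integers(L))
            di, dj = NB[int(rng.integers(4))]
            k, m = (i + di) % L, (j + dj) % L
            if occ[i, j] == occ[k, m]:
                continue                     # Kawasaki dynamics: swap particle and hole
            dx = dj if occ[i, j] else -dj    # particle displacement along the field
            cells = ((i, j), (k, m))
            e0 = bond_energy(occ, cells)
            occ[i, j], occ[k, m] = occ[k, m], occ[i, j]
            dH = bond_energy(occ, cells) - e0
            if rng.random() >= min(1.0, np.exp(-(dH - E * dx) / T)):
                occ[i, j], occ[k, m] = occ[k, m], occ[i, j]   # reject the move
        if sweep % 20 == 0:
            width = occ.sum(axis=0).std()    # column-height spread as a width proxy
            print(f"sweep {sweep}: interface width proxy = {width:.3f}")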
Abstract:
We study the problem of front propagation in the presence of inertia. We extend the analytical approach for the overdamped problem to this case, and present numerical results to support our theoretical predictions. Specifically, we conclude that the velocity and shape selection problem can still be described in terms of the metastable, nonlinear, and linear overdamped regimes. We study the characteristic relaxation dynamics of these three regimes, and the existence of degenerate ("quenched") solutions.
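A sketch of the inertial front problem, assuming the simplest FKPP-type reaction term added to a damped wave equation, u_tt + g u_t = u_xx + u(1-u); for this toy equation the standard linear marginal-stability (pulled-front) calculation gives v* = 2/sqrt(g^2 + 4), recovering 2/g in the overdamped limit and the causal speed 1 as g -> 0. Discretization and parameters are illustrative:

    import numpy as np

    N, dx, dt, g, steps = 2200, 0.1, 0.01, 1.0, 20000
    x = np.arange(N) * dx
    u = np.where(x < 5, 1.0, 0.0)            # step front, zero initial velocity
    u_prev = u.copy()

    def laplacian(v):
        lap = np.empty_like(v)
        lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]    # crude no-flux ends
        return lap

    pos = []
    for n in range(steps):
        # leapfrog update of u_tt = u_xx + u(1-u) - g*u_t
        acc = laplacian(u) + u * (1 - u) - g * (u - u_prev) / dt
        u, u_prev = 2 * u - u_prev + dt**2 * acc, u
        if n % 2000 == 0:
            pos.append((n * dt, x[np.argmin(np.abs(u - 0.5))]))

    (t0, x0), (t1, x1) = pos[len(pos) // 2], pos[-1]   # discard the transient
    print("measured front speed:", (x1 - x0) / (t1 - t0))
    print("pulled-front prediction 2/sqrt(g^2+4):", 2 / np.sqrt(g**2 + 4))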
Abstract:
A simple model is introduced that exhibits noise-induced front propagation and in which the noise enters multiplicatively. The invasion of the unstable state is studied, both theoretically and numerically. Good agreement is obtained for the mean value of the order parameter and the mean front velocity using the analytical predictions of the linear marginal stability analysis.
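A companion sketch with multiplicative noise, assuming the toy form u_t = u_xx + u(1-u) + sigma*u*xi in the Stratonovich interpretation, integrated in Ito form with the drift correction +sigma^2*u/2; under that assumption the linear marginal-stability argument gives a noise-enhanced mean speed of 2*sqrt(1 + sigma^2/2) for this toy model. The lattice regularization of the noise and all parameters are illustrative, and a proper mean velocity requires averaging over realizations:

    import numpy as np

    rng = np.random.default_rng(3)
    N, dx, dt, sigma, steps = 2000, 0.5, 0.02, 0.5, 15000
    x = np.arange(N) * dx
    u = np.where(x < 10, 1.0, 0.0)

    def laplacian(v):
        lap = np.empty_like(v)
        lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]        # no-flux ends
        return lap

    pos = []
    for n in range(steps):
        drift = laplacian(u) + u * (1 - u) + 0.5 * sigma**2 * u  # Stratonovich shift
        u = u + dt * drift + sigma * u * np.sqrt(dt) * rng.normal(size=N)
        u = np.clip(u, 0.0, None)                # keep the field non-negative
        if n % 1500 == 0:
            pos.append((n * dt, x[np.argmin(np.abs(u - 0.5))]))

    (t0, x0), (t1, x1) = pos[len(pos) // 2], pos[-1]
    print("front speed (single run):", (x1 - x0) / (t1 - t0))
    print("linear marginal-stability estimate:", 2 * np.sqrt(1 + sigma**2 / 2))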
Abstract:
Water erosion is the major cause of soil and water losses and the main factor in the degradation of agricultural areas. The objective of this work was to quantify pluvial water erosion from an untilled soil with crop rows along the contour, in 2009 and 2010, on a Humic Dystrudept, with the following treatments: a) maize monoculture; b) soybean monoculture; c) common bean monoculture; and d) intercropped maize and bean, exposed to four simulated rainfall tests of one hour at controlled intensity (64 mm h-1). The first test was applied 18 days after sowing and the others 39, 75, and 120 days after the first test. The crop type influenced soil loss through water erosion in simulated rainfall tests 3 and 4; soybean was most effective in erosion control in test 3, whereas in test 4 maize was more effective. Water loss was influenced by the crop type in test 3 only, where maize and soybean were equally effective, with less runoff than from the other crops. The soil loss rate varied during the runoff sampling period in different ways, demonstrating a positive linear relationship between soil and water loss in the different rainfall tests.
Abstract:
The aim of the present study was to establish and compare the durations of the seminiferous epithelium cycles of the common shrew Sorex araneus, which is characterized by a high metabolic rate and multiple paternity, and the greater white-toothed shrew Crocidura russula, which is characterized by a low metabolic rate and a monogamous mating system. Twelve S. araneus males and fifteen C. russula males were injected intraperitoneally with 5-bromodeoxyuridine, and the testes were collected. For cycle length determinations, we applied the classical method of estimation as well as linear regression as a new method. With regard to variance, and even with a relatively small sample size, the new method appears to be more precise. In addition, the regression method allows information to be inferred for every animal tested, enabling comparisons of different factors with cycle lengths. Our results show that increased testis size not only leads to increased sperm production but also reduces the duration of spermatogenesis. The calculated cycle lengths were 8.35 days for S. araneus and 12.12 days for C. russula. The data obtained in the present study provide the basis for future investigations into the effects of metabolic rate and mating systems on the speed of spermatogenesis.
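The regression idea can be illustrated with a toy calculation: after a BrdU pulse, the most advanced labelled cells move through the stages of the cycle, so regressing the fraction of the cycle traversed on the time since injection gives cycles per day, and its reciprocal the cycle length. The numbers below are invented for illustration (chosen to land near the 8.35 d reported for S. araneus), not data from the study:

    import numpy as np

    # Hypothetical pulse-chase data for one male: hours after BrdU injection vs
    # the fraction of one seminiferous cycle traversed by the labelled front.
    hours = np.array([4, 24, 48, 72, 96])
    frac = np.array([0.02, 0.12, 0.24, 0.35, 0.48])

    slope, intercept = np.polyfit(hours / 24.0, frac, 1)   # cycles per day
    print(f"estimated cycle length = {1 / slope:.2f} days")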
Abstract:
We study the effects of the magnetic field on the relaxation of the magnetization of small monodomain noninteracting particles with random orientations and a distribution of anisotropy constants. Starting from a master equation, we build up an expression for the time dependence of the magnetization which takes into account thermal activation only over barriers separating energy minima, which, in our model, can be computed exactly from analytical expressions. Numerical calculations of the relaxation curves for different distribution widths, and under different magnetic fields H and temperatures T, have been performed. We show how a T ln(t/t0) scaling of the curves, at different T and for a given H, can be carried out after proper normalization of the data to the equilibrium magnetization. The resulting master curves are shown to be closely related to what we call effective energy barrier distributions, which, in our model, can be computed exactly from analytical expressions. The concept of effective distribution serves us as a basis for finding a scaling variable to scale relaxation curves at different H and a given T, thus showing that the field dependence of energy barriers can also be extracted from relaxation measurements.
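A small sketch of the T ln(t/t0) scaling itself, assuming relaxation by thermal activation over a static barrier distribution f(E), each barrier decaying at its Arrhenius rate; the lognormal f(E) and all parameters are illustrative, not the model's exact analytical distributions. Curves computed at different T give nearly the same magnetization when read off at equal T ln(t/t0), which is the collapse onto a master curve described above:

    import numpy as np

    tau0 = 1e-12                               # attempt time (s)
    E = np.linspace(0.01, 3.0, 3000)           # barrier heights (arb. units, k_B = 1)
    dE = E[1] - E[0]
    f = np.exp(-np.log(E / 0.8) ** 2 / 0.5)    # lognormal-ish barrier density
    f /= f.sum() * dE                          # normalize so M(0) = 1

    def magnetization(t, T):
        rate = np.exp(-E / T) / tau0           # Arrhenius escape rate over each barrier
        surv = np.exp(-np.outer(t, rate))      # fraction of each barrier's moments left
        return (surv * f).sum(axis=1) * dE

    times = np.logspace(-10, 8, 400)           # seconds
    for T in (0.05, 0.10, 0.20):
        M = magnetization(times, T)
        sv = T * np.log(times / tau0)          # the scaling variable
        print(f"T={T}: M at T*ln(t/tau0)=2.0 ->",
              round(float(np.interp(2.0, sv, M)), 3))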
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and the implementation of "nano" legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to the risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn. Due to the high batch variability of the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use), or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore, source material will be first in scope for testing. For risk assessment, however, it is much more relevant to have toxicity data from material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally, these methods are only able to determine one single characteristic, and some of them can be rather expensive. In practice, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g., reported sizes of ENMs). It was recommended that at least two complementary techniques should be employed to determine a metric of ENMs. The first great challenge is to prioritise metrics which are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to describe ENMs fully. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there currently is no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonisation should be initiated and that an exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g., biological or in silico systems) for the characterisation of ENMs are simply not possible with the current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure-function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications.
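A sketch of the moments-based link from structure to FC described above: linearize noise-driven node dynamics around a stable fixed point, x' = A x + noise with A = -I + g*C built from a structural connectivity matrix C, and obtain the stationary covariance from the Lyapunov equation A P + P A^T + Q = 0. The random sparse C, coupling g, and noise level are illustrative stand-ins for the diffusion-imaging connectome and fitted parameters of the paper:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    rng = np.random.default_rng(4)
    n, g, D = 66, 0.5, 1.0                      # nodes, coupling, noise variance

    C = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # sparse toy "connectome"
    np.fill_diagonal(C, 0.0)
    C /= np.abs(np.linalg.eigvals(C)).max()    # normalize spectral radius to 1
    A = -np.eye(n) + g * C                      # linearized dynamics, stable for g < 1

    Q = D * np.eye(n)                           # white-noise input covariance
    P = solve_continuous_lyapunov(A, -Q)        # solves A P + P A^T + Q = 0
    sd = np.sqrt(np.diag(P))
    FC = P / np.outer(sd, sd)                   # model functional connectivity
    print("mean off-diagonal FC:", (FC.sum() - n) / (n * (n - 1)))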
Abstract:
In addition to the importance of sample preparation and extract separation, MS detection is a key factor in the sensitive quantification of large undigested peptides. In this article, a linear ion trap MS (LIT-MS) and a triple quadrupole MS (TQ-MS) were compared for the detection of large peptides at subnanomolar concentrations. Natural brain natriuretic peptide, C-peptide, substance P, and D-JNK-inhibitor peptide, a full D-amino acid therapeutic peptide, were chosen. They were detected by ESI and simultaneous MS(1) and MS(2) acquisitions. With direct peptide infusion, MS(2) spectra revealed that fragmentation was peptide dependent, milder on the LIT-MS, and required high collision energies on the TQ-MS to obtain high-intensity product ions. Peptide adsorption on surfaces was overcome, and peptide dilutions ranging from 0.1 to 25 nM were injected onto an ultra-high-pressure LC system with a 1 mm id analytical column coupled with the MS instruments. No difference was observed between the two instruments when recording in LC-MS(1) acquisitions. However, in LC-MS(2) acquisitions, better sensitivity in the detection of large peptides was observed with the LIT-MS. Indeed, with the three longer peptides, the typical fragmentation in the TQ-MS resulted in a dramatic loss of sensitivity (≥ 10-fold).