961 results for Ecosystem Function Analysis
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of spatial, temporal and space-time clusters in environmental point data. The clustering methods developed were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Environmental phenomena can typically be modelled as stochastic point processes in which each event, e.g. a forest fire ignition point, is characterised by its spatial location and its occurrence in time. Additional information such as burned area, ignition causes, land use, and topographic, climatic and meteorological features can also be used to characterise the studied phenomenon. The characterisation of space-time patterns therefore represents a powerful tool for understanding the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita Index, the Box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. Scan Statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are global in character and do not consider complex spatial constraints or the high variability and multivariate nature of the events. Therefore, we propose a statistical framework that takes into account the complexities of the geographical space in which phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data under differently constrained geographical spaces, thereby assessing the relative degree of clustering of the real distribution. Moreover, specifically for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and one for predicting fire ignition susceptibility. In this regard, the main objective of this Thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena, and to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this Thesis responds to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
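As a rough illustration of one of the global measures named above, the sketch below estimates Ripley's K-function for a planar point pattern. It is a minimal toy that ignores edge effects; the point set and study area are hypothetical, and the thesis's own implementations and validity-domain corrections are not reproduced here.

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive estimator of Ripley's K-function for a 2-D point pattern.

    points : (n, 2) array of event coordinates
    radii  : 1-D array of distances r at which to evaluate K(r)
    area   : area of the study region (edge effects ignored here)
    """
    n = len(points)
    # Pairwise distances between all events
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)           # exclude self-pairs
    lam = n / area                        # estimated intensity
    # K(r) = (1 / (lambda * n)) * number of ordered pairs within distance r
    return np.array([(d <= r).sum() / (lam * n) for r in radii])

# Under complete spatial randomness K(r) ~ pi * r^2, so values above that
# reference curve indicate clustering at the corresponding scale.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))    # hypothetical ignition points
r = np.linspace(0.01, 0.25, 10)
print(ripley_k(pts, r, area=1.0) - np.pi * r**2)
```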
Abstract:
Objective: The present study aimed to contribute to identifying the most appropriate OSEM parameters for generating myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods: The study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was used for image reconstruction with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists, who took their diagnostic value into consideration and indicated the most appropriate images in terms of diagnostic quality. Results: An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all body mass index classes; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion: The use of optimized parameters seems to play a relevant role in generating images with better diagnostic quality, supporting the diagnosis and, consequently, appropriate and effective treatment for the patient.
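For context, OSEM splits the MLEM update over ordered subsets of the projection data, so each "iterations × subsets" combination above corresponds to a different number of multiplicative updates. Below is a minimal sketch of the generic textbook update rule, not the clinical reconstruction software used in the study; the system matrix and counts are toy values.

```python
import numpy as np

def osem(A, y, n_iter, n_subsets):
    """Minimal ordered-subsets EM (OSEM) reconstruction sketch.

    A : (m, n) system matrix, y : (m,) measured counts.
    Each iteration sweeps all subsets once (MLEM split into subsets).
    """
    x = np.ones(A.shape[1])                   # flat initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            proj = As @ x                     # forward projection
            ratio = ys / np.maximum(proj, 1e-12)
            # Multiplicative update normalized by subset column sums
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    return x

# Toy example: random system matrix and phantom
rng = np.random.default_rng(1)
A = rng.random((64, 16))
truth = rng.random(16)
y = A @ truth
print(osem(A, y, n_iter=4, n_subsets=4)[:4])  # 4 iterations x 4 subsets
```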
Abstract:
The POU4F2/Brn-3b transcription factor has been identified as a potentially novel regulator of key metabolic processes. Loss of this protein in Brn-3b knockout (KO) mice causes profound hyperglycemia and insulin resistance (IR), normally associated with type 2 diabetes (T2D), whereas Brn-3b is reduced in tissues taken from obese mice fed on high-fat diets (HFD), which also develop hyperglycemia and IR. Furthermore, studies in C2C12 myocytes show that Brn-3b mRNA and proteins are induced by glucose but inhibited by insulin, suggesting that this protein is itself highly regulated in responsive cells. Analysis of differential gene expression in skeletal muscle from Brn-3b KO mice showed changes in genes that are implicated in T2D such as increased glycogen synthase kinase-3β and reduced GLUT4 glucose transporter. The GLUT4 gene promoter contains multiple Brn-3b binding sites and is directly transactivated by this transcription factor in cotransfection assays, whereas chromatin immunoprecipitation assays confirm that Brn-3b binds to this promoter in vivo. In addition, correlation between GLUT4 and Brn-3b in KO tissues or in C2C12 cells strongly supports a close association between Brn-3b levels and GLUT4 expression. Since Brn-3b is regulated by metabolites and insulin, this may provide a mechanism for controlling key genes that are required for normal metabolic processes in insulin-responsive tissues and its loss may contribute to abnormal glucose uptake.
Abstract:
Dreaming is a pure form of phenomenality, created by the brain untouched by external stimulation or behavioral activity, yet including a full range of phenomenal contents. Thus, it has been suggested that the dreaming brain could be used as a model system in a biological research program on consciousness (Revonsuo, 2006). In the present thesis, the philosophical view of biological realism is accepted, and thus dreaming is considered a natural biological phenomenon, explainable in naturalistic terms. The major theoretical contribution of the present thesis is that it explores dreaming from a multidisciplinary perspective, integrating information from various fields of science, such as dream research, consciousness research, evolutionary psychology, and cognitive neuroscience. Further, it places dreaming into a multilevel framework and investigates the constitutive, etiological, and contextual explanations for dreaming. Currently, the only theory offering a full multilevel explanation for dreaming, that is, a theory including constitutive, etiological, and contextual level explanations, is the Threat Simulation Theory (TST) (Revonsuo, 2000a; 2000b). The empirical significance of the present thesis lies in the tests conducted to test this specific theory, put forth to explain the form, content, and biological function of dreaming. The first step in the empirical testing of the TST was to define exact criteria for what counts as a 'threatening event' in dreams, and then to develop a detailed and reliable content analysis scale with which threatening events in dreams can be empirically explored and quantified. The second step was to seek answers to the following questions derived from the TST: How frequent are threatening events in dreams? What qualities do these events have? How do threatening events in dreams relate to the most recently encoded or the most salient memory traces of threatening events experienced in waking life? What are the effects of exposure to severe waking life threat on dreams? The results reveal that threatening events are relatively frequent in dreams and that the simulated threats are realistic. The most common threats involve aggression, are targeted mainly against the dream self, and include simulations of relevant and appropriate defensive actions. Further, real threat experiences activate the threat simulation system in a unique manner, and dream content is modulated by the activation of long-term episodic memory traces with the highest negative saliency. To sum up, most of the predictions of the TST tested in this thesis received considerable support. The TST presents a strong argument that explains the specific design of dreams as threat simulations. The TST also offers a plausible explanation for why dreaming would have been selected for: because dreaming interacted with the environment in such a way that it enhanced the fitness of ancestral humans. By referring to a single threat simulation mechanism, it furthermore manages to explain a wide variety of dream content data that already exists in the literature and to predict the overall statistical patterns of threat content in different samples of dreams. The TST and the empirical tests conducted to test it are a prime example of what a multidisciplinary approach to mental phenomena can accomplish. Thus far, dreaming seems to have always resided in the periphery of science, never regarded as worth studying by the mainstream. Nevertheless, when brought to the spotlight, the study of dreaming can greatly benefit from ideas in diverse branches of science. Vice versa, knowledge gained from the study of dreaming can be applied in various disciplines. The main contribution of the present thesis lies in putting dreaming back where it belongs, that is, into the spotlight at the crossroads of various disciplines.
Abstract:
The main objective of this master's thesis was to quantitatively study the reliability of market and sales forecasts of a certain company by measuring the bias, precision and accuracy of these forecasts against actual values. Secondly, the differences in bias, precision and accuracy between markets were explained by various macroeconomic variables and market characteristics. The accuracy and precision of the forecasts seem to vary significantly depending on the market being forecasted, the variable being forecasted, the estimation period, the length of the estimated period, the forecast horizon and the granularity of the data. High inflation, a low income level and high year-on-year market volatility seem to be related to higher annual market forecast uncertainty, and high year-on-year sales volatility to higher sales forecast uncertainty. When quarterly market size is forecasted, the correlation between macroeconomic variables and forecast errors weakens. The uncertainty of the sales forecasts cannot be explained by macroeconomic variables. Longer forecasts are more uncertain, a shorter estimated period leads to higher uncertainty, and more recent market forecasts are usually less uncertain. Sales forecasts seem to be more uncertain than market forecasts because they incorporate both market size and market share risks. When the lead time is more than one year, forecast risk seems to grow as a function of the square root of the forecast horizon. When the lead time is less than a year, sequential error terms are typically correlated, and therefore forecast errors are trending or mean-reverting. The bias of the forecasts seems to change in cycles, and therefore future forecasts cannot be systematically adjusted for it. The MASE cannot be used to measure whether a forecast anticipates year-on-year volatility; instead, we constructed a new relative accuracy measure to cope with this particular situation.
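A minimal sketch of the standard definitions behind these comparisons is given below; the series are hypothetical, the thesis's own relative accuracy measure is not reproduced, and MASE here follows the usual naive-forecast scaling.

```python
import numpy as np

def forecast_metrics(actual, forecast):
    """Bias, precision, and accuracy measures for a forecast series."""
    e = np.asarray(forecast) - np.asarray(actual)
    bias = e.mean()                       # systematic over/under-forecast
    precision = e.std(ddof=1)             # spread of the errors
    mae = np.abs(e).mean()                # accuracy
    # MASE: MAE scaled by the in-sample MAE of the naive (random-walk) forecast
    naive_mae = np.abs(np.diff(actual)).mean()
    return {"bias": bias, "precision": precision,
            "MAE": mae, "MASE": mae / naive_mae}

actual = np.array([100, 104, 109, 108, 115, 121])    # hypothetical market sizes
forecast = np.array([98, 103, 111, 110, 113, 124])
print(forecast_metrics(actual, forecast))
```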
Abstract:
We examine the scale invariants in the preparation of highly concentrated w/o emulsions at different scales and under varying conditions. The emulsions are characterized using rheological parameters, owing to their highly elastic behavior. We first construct and validate empirical models to describe the rheological properties. These models yield a reasonable prediction of the experimental data. We then build an empirical scale-up model to predict the preparation and composition conditions that have to be kept constant at each scale to prepare the same emulsion. For this purpose, three preparation scales with geometric similarity are used. The parameter N·D^α, as a function of the stirring rate N, the scale (D, impeller diameter) and the exponent α (calculated empirically from the regression of all the experiments at the three scales), is defined as the scale invariant that needs to be optimized once the dispersed phase of the emulsion, the surfactant concentration, and the dispersed phase addition time are set. As far as we know, no other study has obtained a scale-invariant factor N·D^α for the preparation of highly concentrated emulsions prepared at three different scales, covering all three scales, different addition times and surfactant concentrations. The power-law exponent obtained seems to indicate that the scale-up criterion for this system is the power input per unit volume (P/V).
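Assuming the invariance of N·D^α, the exponent can be obtained by log-log regression across scales, as in the following sketch with hypothetical (N, D) pairs; the actual regression in the paper pools all experiments and conditions.

```python
import numpy as np

# Hypothetical (N, D) pairs that produced equivalent emulsions at three scales
N = np.array([700.0, 520.0, 390.0])   # stirring rate, rpm
D = np.array([0.05, 0.075, 0.10])     # impeller diameter, m

# If N * D**alpha is invariant, then log N = c - alpha * log D,
# so -alpha is the slope of log N against log D.
slope, intercept = np.polyfit(np.log(D), np.log(N), 1)
alpha = -slope
print(f"alpha ~ {alpha:.2f}; invariant N*D^alpha ~ {np.exp(intercept):.1f}")
# alpha ~ 2/3 would point to constant power per unit volume,
# since P/V ~ N^3 D^2 implies N * D^(2/3) = const.
```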
Abstract:
A flow system coupled to a tungsten coil atomizer in an atomic absorption spectrometer (TCA-AAS) was developed for As(III) determination in waters, by extraction with sodium diethyldithiocarbamate (NaDDTC) as complexing agent, sorption of the As(III)-DDTC complex in a micro-column filled with 5 mg of C18 reversed phase (10 µL dry sorbent), and elution with ethanol. A complete pre-concentration/elution cycle took 208 s, with a 30 s sample load time (1.7 mL) and a 4 s elution time (71 µL). The interface and software for the synchronous control of two peristaltic pumps (RUN/STOP), an autosampler arm, seven solenoid valves, one injection valve, the electrothermal atomizer and the spectrometer Read function were constructed. The system was characterized and validated by analytical recovery studies performed both in synthetic solutions and in natural waters. Using a 30 s pre-concentration period, the working curve was linear between 0.25 and 6.0 µg L-1 (r = 0.9976), the retention efficiency was 94±1% (6.0 µg L-1), and the pre-concentration coefficient was 28.9. The characteristic mass was 58 pg, the mean repeatability (expressed as the variation coefficient) was 3.4% (n=5), the detection limit was 0.058 µg L-1 (4.1 pg in 71 µL of eluate injected into the coil), and the mean analytical recovery in natural waters was 92.6 ± 9.5% (n=15). The procedure is simple, economical, and less prone to sample loss and contamination, and the useful lifetime of the micro-column was between 200 and 300 pre-concentration cycles.
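As an illustration of how figures of merit like the working curve and detection limit are typically derived, the sketch below fits a hypothetical calibration line and applies the common 3σ/slope criterion; all numbers are invented, and the paper's exact protocol may differ.

```python
import numpy as np

# Hypothetical calibration data for As(III): concentration (ug/L) vs absorbance
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
absorb = np.array([0.008, 0.016, 0.031, 0.063, 0.125, 0.186])
blank_sd = 0.0006                  # hypothetical std dev of blank readings

slope, intercept = np.polyfit(conc, absorb, 1)
r = np.corrcoef(conc, absorb)[0, 1]
lod = 3 * blank_sd / slope         # common 3-sigma detection limit criterion
print(f"slope={slope:.4f}, r={r:.4f}, LOD ~ {lod:.3f} ug/L")
```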
Abstract:
In this paper, we obtain sharp asymptotic formulas with error estimates for the Mellin convolution of functions defined on (0, ∞), and use these formulas to characterize the asymptotic behavior of marginal distribution densities of stock price processes in mixed stochastic models. Special examples of mixed models are jump-diffusion models and stochastic volatility models with jumps. We apply our general results to the Heston model with double exponential jumps and make a detailed analysis of the asymptotic behavior of the stock price density, the call option pricing function, and the implied volatility in this model. We also obtain similar results for the Heston model with jumps distributed according to the NIG law.
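For reference, the Mellin convolution of two functions f and g on (0, ∞) is

$$(f \star g)(x) = \int_0^{\infty} f\!\left(\frac{x}{y}\right) g(y)\,\frac{dy}{y},$$

which is one standard way such convolutions arise in this setting: the density of a product of independent positive random variables (e.g., a continuous stock price component and an independent jump factor) is the Mellin convolution of their densities.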
Abstract:
Fleurbaey and Maniquet have proposed the criteria of conditional equality and of egalitarian equivalence to assess equity among individuals in an ordinal setting. Empirical applications are rare and only partially consistent with their framework. We propose a new empirical approach that relies on individual preferences, is consistent with the ordinal criteria and makes it possible to compare them with the cardinal criteria. We estimate a utility function that incorporates heterogeneous individual preferences, obtain ordinal measures of well-being, and apply conditional equality and egalitarian equivalence. We then propose two cardinal measures of well-being, comparable with the ordinal model, to compute Roemer's and Van de gaer's criteria. Finally, we compare the characteristics of the worst-off identified by each criterion. We apply this model to a sample of US micro data and find that about 18% of the worst-off are not common to all criteria.
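The final comparison step can be pictured with a small sketch: given well-being scores under each criterion, take the bottom share as the worst-off and measure how much the sets disagree. The scores below are synthetic and correlated by construction; the paper's utility estimation and the precise definitions of the criteria are not reproduced.

```python
import numpy as np

def worst_off(wellbeing, q=0.1):
    """Indices of the bottom q-share under a given well-being measure."""
    cutoff = np.quantile(wellbeing, q)
    return set(np.flatnonzero(wellbeing <= cutoff))

rng = np.random.default_rng(2)
n = 1000
base = rng.normal(size=n)               # shared component of well-being
# Hypothetical scores under three criteria: they mostly agree but not fully
scores = {name: base + 0.3 * rng.normal(size=n)
          for name in ("conditional_equality", "egalitarian_equivalence",
                       "roemer")}
sets = [worst_off(s) for s in scores.values()]
common = set.intersection(*sets)
union = set.union(*sets)
print(f"worst-off not common to all criteria: {1 - len(common)/len(union):.0%}")
```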
Abstract:
The uncertainty of any analytical determination depends on both analysis and sampling. Uncertainty arising from sampling is usually not controlled, and methods for its evaluation are still little known. Pierre Gy's sampling theory is currently the most complete theory about sampling, and it also takes the design of the sampling equipment into account. Guides dealing with the practical issues of sampling also exist, published by international organizations such as EURACHEM, IUPAC (International Union of Pure and Applied Chemistry) and ISO (International Organization for Standardization). In this work, Gy's sampling theory was applied to several cases, including the analysis of chromite concentration estimated on SEM (Scanning Electron Microscope) images and the estimation of the total uncertainty of a drug dissolution procedure. The results clearly show that Gy's sampling theory can be utilized in both of the above-mentioned cases and that the uncertainties achieved are reliable. Variographic experiments, introduced in Gy's sampling theory, are usefully applied in analyzing the uncertainty of auto-correlated data sets such as industrial process data and environmental discharges. The periodic behaviour of these kinds of processes can be observed by variographic analysis, as well as with fast Fourier transformation and auto-correlation functions. With variographic analysis, the uncertainties are estimated as a function of the sampling interval. This is advantageous when environmental or process data are analyzed, as it allows easy estimation of how the sampling interval affects the overall uncertainty. If the sampling frequency is too high, unnecessary resources will be used; on the other hand, if the frequency is too low, the uncertainty of the determination may be unacceptably high. Variographic methods can also be utilized to estimate the uncertainty of spectral data produced by modern instruments. Since spectral data are multivariate, methods such as Principal Component Analysis (PCA) are needed when the data are analyzed. Optimization of a sampling plan increases the reliability of the analytical process, which may in the end have beneficial effects on the economics of chemical analysis.
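For a one-dimensional process series, a variographic experiment of the kind described reduces to estimating the variogram as a function of the lag (sampling interval). A minimal sketch on synthetic periodic data, not taken from the cases studied here:

```python
import numpy as np

def variogram(x, max_lag):
    """Empirical variogram of an auto-correlated process series.

    V(j) = mean of (x[i+j] - x[i])^2 / 2 over all pairs at lag j,
    read as uncertainty as a function of the sampling interval.
    """
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[j:] - x[:-j]) ** 2) / 2.0
                     for j in range(1, max_lag + 1)])

# Hypothetical process data: a daily cycle plus measurement noise
rng = np.random.default_rng(3)
t = np.arange(500)
x = 10 + np.sin(2 * np.pi * t / 24) + 0.3 * rng.normal(size=len(t))
v = variogram(x, max_lag=48)
print(v[:6].round(3))   # periodicity shows up as oscillation in V(j)
```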
Abstract:
Analyzing the state of the art in a given field in order to tackle a new problem is always a mandatory task. The literature provides surveys based on summaries of previous studies, which are often based on theoretical descriptions of the methods. An engineer, however, requires some evidence from experimental evaluations in order to make the appropriate decision when selecting a technique for a problem. This is what we have done in this paper: experimentally analyzed a set of representative state-of-the-art techniques for the problem we are dealing with, namely, the road passenger transportation problem. This is an optimization problem in which drivers should be assigned to transport services, fulfilling some constraints and minimizing some cost function. The experimental results have provided us with good knowledge of the properties of several methods, such as modeling expressiveness, anytime behavior, computational time, memory requirements, parameters, and freely downloadable tools. Based on our experience, we are able to choose a technique to solve our problem. We hope that this analysis is also helpful for other engineers facing a similar problem.
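In its simplest one-driver-per-service form, this is the classic assignment problem; a toy sketch using SciPy's Hungarian-algorithm solver is shown below. The cost matrix is hypothetical, and the real problem involves richer constraints (shifts, regulations, vehicle compatibility) than this reduction captures.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i][j] = cost of assigning driver i
# to transport service j (infeasible pairs get a prohibitive cost).
cost = np.array([
    [4.0, 9.0, 3.0],
    [2.0, 6.0, 8.0],
    [5.0, 1e6, 7.0],   # driver 2 cannot take service 1
])
drivers, services = linear_sum_assignment(cost)   # minimizes total cost
print(list(zip(drivers, services)), cost[drivers, services].sum())
```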
Abstract:
Saponins are natural soap-like foam-forming compounds widely used in foods, cosmetics and pharmaceutical preparations. In this work, the foamability and foam lifetime of foams obtained from Ilex paraguariensis unripe fruits were analyzed. Polysorbate 80 and sodium dodecyl sulfate were used as reference surfactants. Aiming at a better understanding of the data, a linearized four-parameter Weibull function was proposed. The mate hydroethanolic extract (ME) and a mate saponin enriched fraction (MSF) afforded foamability and foam lifetime comparable to those of the synthetic surfactants. The linearization of the Weibull equation allowed the statistical comparison of foam decay curves, improving on former mathematical approaches.
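One common four-parameter Weibull form for foam decay, with initial volume $V_0$, residual volume $V_\infty$, time scale $\tau$ and shape $\beta$ (the paper's exact parameterization may differ), is

$$V(t) = V_\infty + (V_0 - V_\infty)\exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],$$

which linearizes to

$$\ln\!\left[-\ln\frac{V(t) - V_\infty}{V_0 - V_\infty}\right] = \beta \ln t - \beta \ln \tau,$$

so that decay curves can be compared by ordinary linear regression on $\ln t$.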
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure throughout a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, and even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of making comparisons between observations concerning different musical parameters and of combining it with statistical and perhaps other music analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures for tonal stability, rhythmic similarity and set-class similarity were proposed. The most advanced results were attained by employing automated function generation (comparable to so-called genetic programming) to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly regardless of the type of similarity function employed in the analysis.
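The core idea can be pictured with a toy sketch: slide a window through a piece, form the pitch-class set of each segment, and score its similarity to a chosen target structure. Jaccard similarity is an assumed stand-in here for the thesis's similarity measures, and the melody is invented.

```python
def pc_set(notes):
    """Pitch-class set (mod 12) of a list of MIDI note numbers."""
    return {n % 12 for n in notes}

def jaccard(a, b):
    """Set similarity in [0, 1]; assumed stand-in for the thesis's measures."""
    return len(a & b) / len(a | b) if a | b else 1.0

def csa_profile(piece, target, window):
    """Prevalence of `target` across consecutive windows of the piece."""
    return [jaccard(pc_set(piece[i:i + window]), target)
            for i in range(0, len(piece) - window + 1)]

melody = [60, 64, 67, 72, 61, 66, 70, 63, 60, 64, 67, 71]   # hypothetical
# Track the prevalence of the C major triad {0, 4, 7} through the melody
print([round(s, 2) for s in csa_profile(melody, target={0, 4, 7}, window=4)])
```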
Abstract:
Measurements of parameters expressed in terms of carbonic species, such as the Alkalinity and Acidity of saline waters, usually do not account for the influence on the titration of external factors such as the total concentration of free and associated carbonic species, activity coefficients, ion-pair formation and the residual liquid junction potential in pH measurements. This paper presents the development of the F5BC titration function, based on the titration functions developed by Gran (1952) for the carbonate system of natural waters. For practical use, samples of saline waters from the Pocinhos reservoir in Paraiba were submitted to titration and linear regression analysis. The results showed that F5BC involves the determination of Gran's F1x and F2x functions, for Alkalinity and Acidity calculations respectively, without knowing a priori the endpoint of the titration. F5BC also allows the determination of the first and second apparent dissociation constants of the carbonate system of saline, high ionic strength waters.
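For orientation, the classic Gran (1952) approach that F5BC builds on locates the endpoint with a function that is linear in titrant volume beyond the equivalence point; for an alkalinity titration with strong acid this is F1 = (V0 + V)·10^(-pH). The sketch below applies it to hypothetical readings; the paper's F5BC function and its corrections for ionic strength effects are not reproduced.

```python
import numpy as np

# Hypothetical titration of a water sample with strong acid:
# V = added titrant volume (mL), pH = measured pH.
V0 = 50.0                                   # initial sample volume, mL
V = np.array([8.0, 9.0, 10.0, 11.0, 12.0])  # points past the endpoint
pH = np.array([4.30, 3.95, 3.75, 3.61, 3.50])

# Gran's F1 function: linear in V beyond the equivalence point,
# so extrapolating the fitted line back to F1 = 0 gives the endpoint.
F1 = (V0 + V) * 10.0 ** (-pH)
slope, intercept = np.polyfit(V, F1, 1)
Ve = -intercept / slope                     # extrapolated endpoint volume
print(f"equivalence volume ~ {Ve:.2f} mL")
```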
Abstract:
B lymphocytes constitute a key branch of adaptive immunity by providing the specificity to recognize a vast variety of antigens through B cell antigen receptors (BCR) and secreted antibodies. Antigen recognition activates the cells and can produce antibody-secreting plasma cells via the germinal center reaction, which leads to the maturation of antigen recognition affinity and the switching of the antibody effector class. The specificity of antigen recognition is achieved through a multistep developmental pathway that is organized by the interplay of transcription factors and signals through the BCR. Lymphoid malignancies arise from different stages of development when transcriptional regulation functions abnormally. To understand B cell development and the function of B cells, a thorough understanding of the regulation of gene expression is important. The transcription factors of the Ikaros family and Bcl6 are frequently associated with lymphoma generation. The aim of this study was to reveal the targets of Ikaros-, Helios- and Bcl6-mediated gene regulation and to determine the function of Ikaros and Helios in B cells. This study uses gene-targeted DT40 B cell lines and establishes a role for the Ikaros family factors Ikaros and Helios in the regulation of BCR signaling, which is important at developmental checkpoints, for cell survival and in activation. Ikaros and Helios had opposing roles in the regulation of BCR signals. Ikaros was found to directly repress the SHIP gene, which encodes a signaling lipid-metabolizing enzyme, whereas Helios had an activating effect on SHIP expression. The findings demonstrate a balancing function for these two Ikaros family transcription factors in the regulation of BCR signaling as well as in the regulation of gene expression. Bcl6 was found to repress the plasma cell gene expression program while maintaining the gene expression profile of B cells. Analysis of direct Bcl6 target genes suggested novel mechanisms for Bcl6-mediated suppression of plasma cell differentiation and promotion of the germinal center phenotype.