989 results for a-stable processes


Relevance:

60.00%

Publisher:

Abstract:

The brewing industry produces large amounts of by-products and wastes such as brewers' spent grain (BSG). In Germany, approximately 2.1 million tonnes of BSG are generated each year. In recent years the conventional routes of BSG utilization have undergone a remarkable change, such as the decline in demand as animal feed. Due to its high content of organic matter, energetic utilization may create additional economic value for breweries. Furthermore, breweries have recently tended to shift their energy supply towards more sustainable concepts. Although a considerable number of research projects have already been carried out, no mature strategy is available yet. One possible solution is the mechanical pre-treatment of BSG. This step allows optimized energy utilization through the fractionation of BSG: because digestible components such as protein are transferred to the liquid phase, the solid phase consists largely of combustible components. This represents an opportunity to produce a solid biofuel with a lower fuel-nitrogen content than BSG that is only thermally dried. Two main purposes were therefore defined for the mechanical pre-treatment: (1) to reduce the moisture content to at least 60% (w/w) and (2) to diminish the protein content of the solid phase by 30%. Moreover, the combustion trials should demonstrate whether stable processes and flue gas emissions within the legal limits in Germany are feasible. The mechanical pre-treatment trials showed that a decrease of both the moisture and the protein content was achieved. The combustion trials gave inconsistent outcomes: on the one hand, stable combustion was realized; on the other hand, the legal emission limits for NOx (500 mg m-3) and dust (50 mg m-3) could not be kept in all trials. Further research will focus on optimizing the air/fuel ratio by reducing the primary and secondary air supply. Copyright © 2014, AIDIC Servizi S.r.l.

Relevance:

60.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
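As a concrete illustration of the latent structure (PARAFAC-type) factorization referred to above (a sketch of the standard latent class form, not necessarily the exact parameterization used in Chapter 2), the joint probability mass function of p categorical variables y_1, ..., y_p, with y_j taking values in {1, ..., d_j}, can be written as a nonnegative rank-k tensor decomposition:

\[
P(y_1 = c_1, \dots, y_p = c_p) \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad \nu_h \ge 0, \quad \sum_{h=1}^{k} \nu_h = 1, \quad \sum_{c=1}^{d_j} \lambda^{(j)}_{h c} = 1.
\]

Here k, the number of latent classes, bounds the nonnegative rank of the probability tensor, whereas a log-linear model for the same table achieves parsimony by setting interaction terms to zero; the results of Chapter 2 relate these two notions of dimensionality reduction.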

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
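For orientation, one standard notion of an "optimal Gaussian approximation" is the following (stated as an assumed illustration, since the abstract does not specify which optimality criterion Chapter 4 uses): among all Gaussian distributions q, the minimizer of the Kullback-Leibler divergence from a posterior π with finite second moments is the Gaussian that matches its first two moments,

\[
q^{*} \;=\; \arg\min_{q = N(\mu,\Sigma)} \mathrm{KL}\!\left(\pi \,\|\, q\right)
\;=\; N\!\big(\mathbb{E}_{\pi}[\theta],\ \mathrm{Cov}_{\pi}(\theta)\big),
\]

so finite-sample bounds on the divergence between the exact posterior and q* quantify how close the log-linear posterior is to a Gaussian centred at the posterior mean with the posterior covariance.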

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
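For reference, a commonly used form of de Haan's spectral representation mentioned above (background only; the velocity and lifetime construction of Chapter 5 extends it but is not reproduced here): a simple max-stable process with unit Fréchet margins can be written as

\[
Z(s) \;=\; \max_{i \ge 1} \; \zeta_i \, W_i(s),
\]

where the ζ_i are the points of a Poisson process on (0, ∞) with intensity ζ^{-2} dζ and the W_i are independent copies of a nonnegative process with E[W(s)] = 1 for every s. In Smith's model the W_i are Gaussian density "storm" profiles centred at the support points, which is the construction the abstract endows with velocities and lifetimes.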

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
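To make the idea of an approximating transition kernel concrete, the following is a minimal, hypothetical Python sketch (not code from Chapter 6) of a random-walk Metropolis step in which the full-data log-likelihood is replaced by a rescaled log-likelihood over a random subset of the observations; the subset size m controls the trade-off between per-iteration cost and the accuracy of the approximating kernel. The toy model, prior, and tuning values are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: n observations from N(theta_true, 1); the target is the posterior of theta.
n, theta_true = 100_000, 0.3
y = rng.normal(theta_true, 1.0, size=n)

def log_post(theta, idx=None):
    """N(0, 10^2) prior plus an exact or subsampled-and-rescaled log-likelihood."""
    if idx is None:                               # exact kernel: all n observations
        ll = -0.5 * np.sum((y - theta) ** 2)
    else:                                         # approximate kernel: subset of size m, rescaled by n/m
        ll = -0.5 * (n / idx.size) * np.sum((y[idx] - theta) ** 2)
    return -0.5 * theta ** 2 / 100.0 + ll

def rw_metropolis(n_iter=2_000, step=0.01, m=None):
    """Random-walk Metropolis; m=None uses the exact kernel, otherwise a random subset of size m."""
    theta, chain = 0.0, np.empty(n_iter)
    for t in range(n_iter):
        idx = None if m is None else rng.choice(n, size=m, replace=False)
        prop = theta + step * rng.normal()
        # Both current and proposed states are scored on the same subset, so each
        # iteration applies an approximation of the exact transition kernel.
        if np.log(rng.random()) < log_post(prop, idx) - log_post(theta, idx):
            theta = prop
        chain[t] = theta
    return chain

exact = rw_metropolis()          # O(n) likelihood cost per iteration
approx = rw_metropolis(m=1_000)  # O(m) cost per iteration, perturbed transition kernel
print(exact[500:].mean(), approx[500:].mean())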

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
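For concreteness, here is a minimal Python sketch of the truncated normal (Albert-Chib) data augmentation Gibbs sampler for probit regression referred to above, with an assumed N(0, 100 I) prior on the coefficients; this is the standard textbook construction rather than code from Chapter 7, and the imbalanced toy data (large n, few successes) merely mimics the rare-event regime in which such samplers are shown to mix slowly.

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Imbalanced toy data: large n, few successes (the rare-event regime discussed above).
n, p = 5_000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-3.0, 0.5])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

B0_inv = np.eye(p) / 100.0              # prior precision: beta ~ N(0, 100 I)
V = np.linalg.inv(X.T @ X + B0_inv)     # conditional posterior covariance of beta given z
L = np.linalg.cholesky(V)

def gibbs_probit(n_iter=2_000):
    beta, draws = np.zeros(p), np.empty((n_iter, p))
    for t in range(n_iter):
        mu = X @ beta
        # z_i | beta, y_i ~ N(mu_i, 1), truncated to (0, inf) if y_i = 1 and to (-inf, 0) if y_i = 0.
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # beta | z ~ N(V X'z, V)
        beta = V @ (X.T @ z) + L @ rng.normal(size=p)
        draws[t] = beta
    return draws

draws = gibbs_probit()
# With few successes, the intercept chain shows high autocorrelation and a small effective sample size.
print(draws[500:].mean(axis=0))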

Relevance:

40.00%

Publisher:

Abstract:

This study was part of an integrated project developed in response to concerns regarding current and future land practices affecting water quality within coastal catchments and adjacent marine environments. Two forested coastal catchments on the Fraser Coast, Australia, were chosen as examples of low-modification areas with similar geomorphological and land-use characteristics to many other coastal zones in southeast Queensland. For this component of the overall project, organic, physico-chemical (Eh, pH and DO), ionic (Fe2+, Fe3+), and isotopic (δ13C-DIC, δ15N-DIN, δ34S-SO4) data were used to characterise waters and to identify sources and processes contributing to the concentrations and form of dissolved Fe, C, N and S within the ground and surface waters of these coastal catchments. Three sites with elevated Fe concentrations are discussed in detail: a shallow pool with intermittent interaction with the surface water drainage system, a monitoring well within a semi-confined alluvial aquifer, and a monitoring well within the fresh/saline water mixing zone adjacent to an estuary. Conceptual models of the processes occurring in these environments are presented. The primary factors influencing Fe transport were: microbial reduction of Fe3+ oxyhydroxides in groundwaters and in the hyporheic zone of surface drainage systems, the organic input available for microbial reduction and Fe3+ complexation, bacterial activity for reduction and oxidation, iron curtain effects where saline/fresh water mixing occurs, and variation in redox conditions with depth in ground and surface water columns. The data indicated that groundwater seepage, via tidal flux, appears to be the more likely source of Fe to coastal waters during periods of low rainfall; the drainage system is ephemeral and contributes little discharge to marine waters. However, data collected during a high rainfall event indicated that considerable Fe loads can be transported from the catchment to the estuary mouth.

Relevance:

40.00%

Publisher:

Abstract:

Proxy data are essential for the investigation of climate variability on time scales longer than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter to the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied to their analysis, focusing on two specific types of proxies: lake sediment data and stable water isotopes.

In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied to the calibration of various climate proxies. A particular focus is laid on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented. Maxima in the quartz grain concentration from a lake sediment core are compared to recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index that combines local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with the help of a Monte Carlo method. This correlation is fundamental for employing lake sediment data as a new proxy to reconstruct windstorm records of the geological past.

The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales. In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperature. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel), provided by the Weizmann Institute of Science, are used. First, a Lagrangian moisture source diagnostic is employed in order to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation between relative humidity in the source regions and measured deuterium excess is found. By contrast, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although it requires confirmation by isotope data from different regions and longer time scales, this weak correlation might be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints for uncertain model parameters can be obtained. A comparison of the simulated deuterium excess with the measurements reveals that a much better agreement can be achieved using a wind-speed-independent formulation of the non-equilibrium fractionation factor instead of the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs. Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared with measurements in an event-based manner by using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run at high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
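As background for the deuterium excess discussed above (a standard definition, stated here for orientation rather than taken from the thesis): with the isotope ratios expressed in the usual per mil delta notation relative to VSMOW, the deuterium excess is

\[
d \;=\; \delta \mathrm{D} - 8\,\delta^{18}\mathrm{O},
\]

so that, for vapor evaporating from the ocean, d primarily records non-equilibrium (kinetic) fractionation and is therefore sensitive to conditions such as relative humidity at the moisture source.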

Relevance:

40.00%

Publisher:

Abstract:

Copper and Zn are essential micronutrients for plants, animals, and humans; however, they may also be pollutants if they occur at high concentrations in soil. Therefore, knowledge of Cu and Zn cycling in soils is required both to guarantee proper nutrition and to control possible risks arising from pollution.

The overall objective of my study was to test whether Cu and Zn stable isotope ratios can be used to investigate the biogeochemistry, sources and transport of these metals in soils. Stable isotope ratios might be especially suitable for tracing long-term processes occurring during soil genesis and the transport of pollutants through the soil. In detail, I aimed to answer the questions of whether (1) Cu stable isotopes are fractionated during complexation with humic acid, (2) δ65Cu values can be a tracer for soil genetic processes in redoximorphic soils, (3) δ65Cu values can help to understand soil genetic processes under oxic weathering conditions, and (4) δ65Cu and δ66Zn values can act as tracers of the sources and transport of Cu and Zn in polluted soils.

To answer these questions, I ran adsorption experiments at different pH values in the laboratory and modelled Cu adsorption to humic acid. Furthermore, eight soils were sampled representing different redox and weathering regimes, of which two were influenced by stagnic water, two by groundwater, two by oxic weathering (Cambisols), and two by podzolation. In all horizons of these soils, I determined selected basic soil properties, partitioned Cu into seven operationally defined fractions, and determined Cu concentrations and Cu isotope ratios (δ65Cu values). Finally, three additional soils were sampled along a deposition gradient at different distances from a Cu smelter in Slovakia and analyzed, together with bedrock and waste material from the smelter, for selected basic soil properties, Cu and Zn concentrations, and δ65Cu and δ66Zn values.

My results demonstrated that (1) copper was fractionated during adsorption on humic acid, resulting in an isotope fractionation between the immobilized humic acid and the solution (Δ65Cu(IHA-solution)) of 0.26 ± 0.11‰ (2SD), and that the extent of fractionation was independent of pH and of the involved functional groups of the humic acid. (2) Soil genesis and plant cycling cause measurable Cu isotope fractionation in hydromorphic soils; the results suggested that an increasing number of redox cycles depleted 63Cu with increasing depth, resulting in heavier δ65Cu values. (3) Organic horizons usually had isotopically lighter Cu than mineral soils, presumably because of the preferred uptake and recycling of 63Cu by plants. (4) In a strongly developed Podzol, eluviation zones had lighter and illuviation zones heavier δ65Cu values because of the higher stability of organo-65Cu complexes compared to organo-63Cu complexes. In the Cambisols and a little-developed Podzol, oxic weathering caused increasingly lighter δ65Cu values with increasing depth, the opposite depth trend to that in redoximorphic soils, because of the preferential vertical transport of 63Cu. (5) The δ66Zn values were fractionated during the smelting process and isotopically light Zn was emitted, allowing source identification of the Zn pollution, whereas δ65Cu values were unaffected by the smelting and the Cu emissions were isotopically indistinguishable from the soil. The δ65Cu values in polluted soils became lighter down to a depth of 0.4 m, indicating isotope fractionation during transport and a transport depth of 0.4 m in 60 years. The δ66Zn values showed the opposite depth trend, becoming heavier with depth, because of fractionation by plant cycling, speciation changes, and mixing of native and smelter-derived Zn.

Copper showed measurable isotope fractionation of approximately 1‰ in unpolluted soils, allowing conclusions to be drawn on plant cycling, transport, and redox processes occurring during soil genesis, and δ65Cu and δ66Zn values in contaminated soils allow conclusions on sources (in my study only possible for Zn), biogeochemical behavior, and the depth of dislocation of Cu and Zn pollution in soil. I conclude that stable Cu and Zn isotope ratios are a suitable novel tool for tracing long-term processes in soils that are difficult to assess otherwise.
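For readers unfamiliar with the notation, the fractionation reported in result (1) follows the standard per mil convention (given here as background; NIST SRM 976 is assumed as the usual Cu reference standard):

\[
\delta^{65}\mathrm{Cu} \;=\; \left(\frac{(^{65}\mathrm{Cu}/^{63}\mathrm{Cu})_{\text{sample}}}{(^{65}\mathrm{Cu}/^{63}\mathrm{Cu})_{\text{standard}}} - 1\right)\times 1000\ \text{‰},
\qquad
\Delta^{65}\mathrm{Cu}_{\mathrm{IHA-solution}} \;=\; \delta^{65}\mathrm{Cu}_{\mathrm{IHA}} - \delta^{65}\mathrm{Cu}_{\mathrm{solution}},
\]

so the reported value of 0.26 ± 0.11‰ means that the Cu bound to the immobilized humic acid is enriched in the heavy isotope 65Cu relative to the coexisting solution.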

Relevance:

40.00%

Publisher:

Abstract:

Foliar samples were harvested from two oaks, a beech, and a yew at the same site in order to trace the development of the leaves over an entire vegetation season. Cellulose yield and stable isotopic compositions (δ13C, δ18O, and δD) were analyzed on leaf cellulose. All parameters unequivocally define a juvenile and a mature period in the foliar expansion of each species. The accompanying shifts of the δ13C values are in agreement with the transition from remobilized carbohydrates (juvenile period) to current photosynthates (mature phase). While the opposing seasonal trends of δ18O in blade and vein cellulose are in perfect agreement with the state-of-the-art mechanistic understanding, the lack of this discrepancy for δD, documented here for the first time, is unexpected. For example, the offset range of 18 permil (oak veins) to 57 permil (oak blades) in δD may represent a process-driven shift from autotrophic to heterotrophic processes. The shared pattern between blade and vein found for both oak and beech suggests an overwhelming metabolic isotope effect on δD that might be accompanied by proton transfer linked to the Calvin cycle. These results provide strong evidence that hydrogen and oxygen are under different biochemical controls even at the leaf level.

Relevance:

40.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Team games conceptualized as dynamical systems engender a view of emergent decision-making behaviour under constraints, although specific effects of instructional and body-scaling constraints have yet to be verified empirically. For this purpose, we studied the effects of task and individual constraints on decision-making processes in basketball. Eleven experienced female players performed 350 trials in 1 vs. 1 sub-phases of basketball in which an attacker tried to perturb the stable state of a dyad formed with a defender (i.e. break the symmetry). In Experiment 1, specific instructions (neutral, risk taking or conservative) were manipulated to observe effects on emergent behaviour of the dyadic system. When attacking players were given conservative instructions, time to cross court mid-line and variability of the attacker's trajectory were significantly greater. In Experiment 2, body-scaling of participants was manipulated by creating dyads with different height relations. When attackers were considerably taller than defenders, there were fewer occurrences of symmetry-breaking. When attackers were considerably shorter than defenders, time to cross court mid-line was significantly shorter than when dyads were composed of athletes of similar height or when attackers were considerably taller than defenders. The data exemplify how interacting task and individual constraints can influence emergent decision-making processes in team ball games.

Relevance:

30.00%

Publisher:

Abstract:

The emergence of Enterprise Resource Planning systems and Business Process Management has led to improvements in the design, implementation, and overall management of business processes. However, the typical focus of these initiatives has been on internal business operations, assuming a defined and stable context in which the processes are designed to operate. Yet a lack of awareness of external context change leads to processes and supporting information systems that are unable to react appropriately and in a timely manner to change. To increase the alignment of processes with environmental change, we propose a conceptual framework that facilitates the identification of context change. Based on a secondary data analysis of published case studies about process adaptation, we exemplify the framework and identify four general archetypes of context-awareness. The framework, in combination with the learning from the case analysis, provides a first understanding of what, where, how, and when processes are subjected to change.

Relevance:

30.00%

Publisher:

Abstract:

The use of the stable isotope ratios δ18O and δ2H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and their relation to various meteoric water lines (MWLs), and on plots of either ratio against parameters such as Cl- or EC. An extension of interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. A further enhancement of presentation and interpretation is the production of "isoscapes", usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation of the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enables enhanced presentation by displaying the spatial relationships and allowing interpolation between "data points", i.e. borehole screened zones where groundwater enters. The relative variations in the δ18O and δ2H values are similar in these ambient-temperature systems. However, δ2H better reflects hydrological processes, whereas δ18O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, relation to stream recharge, and near-surface and return-irrigation-water evaporation. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a GAB signature). Variations in the source of recharging water at a catchment scale can be displayed. Interpolation between bores is not always possible, depending on bore numbers and spacing and on the elongate configuration of the alluvium. In these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for δ18O and δ2H, with colour coding for the isotope values.
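As a reminder of the conventional x/y presentation that the 3D approach extends, here is a minimal, hypothetical Python plotting sketch; the sample values are invented for illustration, and the Global Meteoric Water Line δ2H = 8·δ18O + 10 is the standard Craig reference line rather than a result of this study.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical groundwater samples (per mil, VSMOW); not data from the Lockyer Valley study.
d18O = np.array([-5.8, -5.2, -4.9, -4.4, -3.7, -3.1])
d2H = np.array([-36.0, -33.5, -30.2, -27.8, -22.5, -18.0])

x = np.linspace(-7.0, -2.0, 50)
plt.plot(x, 8 * x + 10, "k-", label="GMWL: δ²H = 8·δ¹⁸O + 10")
plt.scatter(d18O, d2H, color="tab:blue", label="groundwater samples")
plt.xlabel("δ¹⁸O (‰, VSMOW)")
plt.ylabel("δ²H (‰, VSMOW)")
plt.legend()
plt.title("Conventional δ²H-δ¹⁸O cross-plot relative to the meteoric water line")
plt.show()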

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, Workflow Management Systems (WfMSs) and, more generally, Process Management Systems (PMSs), which are process-aware Information Systems (PAISs), are widely used to support many human organizational activities, ranging from well-understood, relatively stable and structured processes (supply chain management, postal delivery tracking, etc.) to processes that are more complicated, less structured and may exhibit a high degree of variation (health-care, emergency management, etc.). Every aspect of a business process involves a certain amount of knowledge, which may be complex depending on the domain of interest. The adequate representation of this knowledge is determined by the modeling language used. Some processes behave in a way that is well understood, predictable and repeatable: the tasks are clearly delineated and the control flow is straightforward. Recent discussions, however, illustrate the increasing demand for solutions for knowledge-intensive processes, where these characteristics are less applicable. The actors involved in the conduct of a knowledge-intensive process have to deal with a high degree of uncertainty. Tasks may be hard to perform and the order in which they need to be performed may be highly variable. Modeling knowledge-intensive processes can be complex, as it may be hard to capture at design-time what knowledge is available at run-time. In realistic environments, for example, actors lack important knowledge at execution time, or this knowledge can become obsolete as the process progresses. Even if each actor (at some point) has perfect knowledge of the world, it may not be certain of its beliefs at later points in time, since tasks by other actors may change the world without those changes being perceived. Typically, a knowledge-intensive process cannot be adequately modeled by classical, state-of-the-art process/workflow modeling approaches. In some respects there is a lack of maturity when it comes to capturing the semantic aspects involved, both in terms of representing them and of reasoning about them.

The main focus of the 1st International Workshop on Knowledge-intensive Business Processes (KiBP 2012) was investigating how techniques from different fields, such as Artificial Intelligence (AI), Knowledge Representation (KR), Business Process Management (BPM), Service Oriented Computing (SOC), etc., can be combined with the aim of improving the modeling and enactment phases of a knowledge-intensive process. The workshop was held as part of the program of the 2012 Knowledge Representation & Reasoning International Conference (KR 2012) in Rome, Italy, in June 2012. It was hosted by the Dipartimento di Ingegneria Informatica, Automatica e Gestionale Antonio Ruberti of Sapienza Università di Roma, with financial support of the University, through grant 2010-C26A107CN9 TESTMED, and of the EU Commission through the projects FP7-25888 Greener Buildings and FP7-257899 Smart Vortex. This volume contains the 5 papers accepted and presented at the workshop. Each paper was reviewed by three members of the internationally renowned Program Committee. In addition, a further paper was invited for inclusion in the workshop proceedings and for presentation at the workshop. Two keynote talks completed the scientific program: one by Marlon Dumas (Institute of Computer Science, University of Tartu, Estonia) on "Integrated Data and Process Management: Finally?" and the other by Yves Lesperance (Department of Computer Science and Engineering, York University, Canada) on "A Logic-Based Approach to Business Processes Customization". We would like to thank all the Program Committee members for their valuable work in selecting the papers, Andrea Marrella for his valuable work as publication and publicity chair of the workshop, and Carola Aiello and the consulting agency Consulta Umbria for the organization of this successful event.