960 results for Separating of variables
Abstract:
The study developed by the candidate in her PhD project forms part of the complex path toward solving the energy problem, which necessarily involves several variables: economic, technical, political and social. The objective is to assess the concrete "convenience" (cost-effectiveness) of exploiting renewable resources. The chosen approach was to analyse several energy plants, study their environmental impact, and finally compare them. This made it possible to identify objective elements for evaluation. In particular, the candidate investigated the exploitation of biomass resources by analysing in detail several plants operating in the Emilia-Romagna Region: micro-chain, short-chain and long-chain plants. In collaboration with Arpa Emilia-Romagna, the CISA Centre and the Prof. Ciancabilla Association, the plants to be analysed were selected: micro chain: the wood-chip plant of Castel d'Aiano; short chain: the "Mengoli" agricultural-biomass biogas plant of Castenaso; long chain: the "Tampieri Energie" solid-biomass plant of Faenza. As for the methodology, a Life Cycle Assessment (LCA) study was carried out considering the life cycle of the plants. Using the software "SimaPro 6.0", results for the plants' impact categories were obtained with the "Eco Indicator 99" and "Edip Umip 96" methods. The comparison of the results for the different plants did not lead to general conclusions, but to in-depth plant-specific evaluations, given the multiplicity of variables of each case, both in terms of size/scale (micro chain, short chain and long chain) and of the biomass used.
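Although the thesis's SimaPro results cannot be reproduced here, the characterization step at the core of any LCA impact assessment can be sketched. The inventory amounts and characterization factors below are purely illustrative, not data from the analysed plants:

```python
# Toy life-cycle inventory: flow -> amount per functional unit (hypothetical numbers)
inventory = {"CO2": 1200.0, "CH4": 3.5, "SO2": 0.8}

# Characterization factors per impact category (illustrative, not Eco Indicator 99 values)
factors = {
    "global_warming": {"CO2": 1.0, "CH4": 28.0},   # kg CO2-eq per kg of flow
    "acidification": {"SO2": 1.0},                 # kg SO2-eq per kg of flow
}

def characterize(inventory, factors):
    """Sum amount * factor over the flows relevant to each impact category."""
    return {
        category: sum(inventory.get(flow, 0.0) * cf for flow, cf in cfs.items())
        for category, cfs in factors.items()
    }

print(characterize(inventory, factors))
# {'global_warming': 1298.0, 'acidification': 0.8}
```

Comparing plants then reduces to comparing these category scores per functional unit, which is what makes the plant-by-plant evaluations of the thesis possible.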
Abstract:
This thesis is devoted to the quality of seafood in three of its possible meanings. After explaining the consumer's complicated relationship with seafood products and how the European Union has tried to clarify the matter, the topics discussed are: Origin authentication. The flesh of 160 specimens of sea bass (Dicentrarchus labrax), divided among wild, intensively farmed and extensively farmed, from Italy and abroad for a total of 18 investigated sources, was analysed individually to characterise its lipid, isotopic and mineral components and to verify the potential of this information for origin authentication in a broad sense. Freshness Quality estimation. Numerous batches of cuttlefish (Sepia officinalis), hake (Merluccius merluccius) and red mullet (Mullus barbatus) were subjected to two possible storage methods under melting ice, to investigate how important chemical (ATP catabolites and their ratios), physical (dielectric properties of the tissues) and sensory (species-specific Quality Index Methods) attributes evolved over their commercial life. Nutritional profile study. The lipid component of numerous batches of caramote prawn (Penaeus kerathurus), mantis shrimp (Squilla mantis) and cuttlefish (Sepia officinalis) was characterised raw and after cooking with "dedicated" techniques, in order to establish the contribution of these matrices as a source of omega-3 polyunsaturated fatty acids and to determine their true retention coefficients.
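True retention coefficients of the kind mentioned above are conventionally computed from nutrient concentration and food weight before and after cooking. A minimal sketch with hypothetical numbers (not the thesis's measurements):

```python
def true_retention(nutrient_cooked_per_g, weight_cooked_g,
                   nutrient_raw_per_g, weight_raw_g):
    """True retention (%): fraction of the raw nutrient still present after
    cooking, accounting for the weight change of the food."""
    return (nutrient_cooked_per_g * weight_cooked_g) / (nutrient_raw_per_g * weight_raw_g) * 100

# Hypothetical example: 100 g of raw cuttlefish at 8 mg/g omega-3 PUFA,
# cooked down to 80 g at 9 mg/g (concentration rises as water is lost).
print(round(true_retention(9.0, 80.0, 8.0, 100.0), 1))  # 90.0
```

A concentration increase after cooking does not imply a nutrient gain; weighting by the cooked and raw masses, as above, is what separates true retention from apparent retention.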
Abstract:
This work studies the effect of the social representations of music held by university students who will become preschool teachers, and in particular the changes occurring during university training, both in Italy and in Venezuela. The fundamental objective is therefore a comparative analysis of the following themes: the musical child, teacher competences, and the aims of music education. The work is part of the project "Musical knowledge as social representation" (Addessi-Carugati 2010). The guiding hypothesis is that implicit conceptions of music function as social representations that influence the practices of teaching and music education. The first chapter addresses children, teachers and music education in preschools in Italy and Venezuela. The second presents studies on musical knowledge, the theory of social representations (Moscovici 1981), and the pilot project carried out at the University of Bologna, "Musical knowledge as Social Representation". The next chapter presents the analysis and interpretation of the empirical survey carried out on a group of students in teacher-training courses at the University of Mérida (Venezuela). The fourth chapter develops reflections and discussion of the results of the comparative study, of the university study plans and programmes, and of the teacher's professional musical profile. The final conclusions show that the initial hypothesis is indeed confirmed: from the analysis and interpretation of the data, the implicit conceptions of musical knowledge held by the students appear to influence their professional practice as future teachers.
It was also observed that the differences encountered seem to be due to the different contextual variables surrounding the music-education teacher, and above all to the meanings expressed by the study programmes, the different teaching contents, the social and cultural contexts, and the university curriculum.
Abstract:
This thesis investigates the connection between scales in soft-matter systems, which plays an important role in multiscale simulations. To this end, a method was developed that assesses the approximation of separability of variables for molecular dynamics and similar applications. The second and larger part of this work deals with the conceptual and technical extension of the Adaptive Resolution Scheme (AdResS), a method for the simultaneous simulation of systems at several levels of resolution. This method was extended to systems in which classical and quantum-mechanical effects play a role.

The first method mentioned above requires only the analytical form of the potentials, as provided by most molecular-dynamics programs. If applying the method to a specific problem succeeds, it gives a numerical indication of the validity of the separation of variables. If it fails, it guarantees that no separation of variables is possible. The method is applied, as examples, to a diatomic molecule on a surface and to the two-dimensional version of the Rotational Isomer State (RIS) model of a polymer chain.

The second part of the thesis covers the development of an algorithm for the adaptive simulation of systems in which quantum effects are taken into account. In the path-integral method, the quantum nature of atoms is represented by a classical ring polymer. The adaptive path-integral method is first tested on monatomic liquids and tetrahedral molecules under normal thermodynamic conditions. Finally, the stability of the method is verified by applying it to liquid para-hydrogen at low temperatures.
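The thesis's separability criterion is not reproduced here, but one common notion behind such tests can be illustrated: a potential V(x, y) is additively separable, V(x, y) = f(x) + g(y), exactly when its mixed second derivative vanishes, and this can be checked numerically on sample points. The potentials below are hypothetical, not those of the thesis:

```python
import math

def mixed_partial(V, x, y, h=1e-4):
    """Central-difference estimate of d2V/dxdy at (x, y)."""
    return (V(x + h, y + h) - V(x + h, y - h)
            - V(x - h, y + h) + V(x - h, y - h)) / (4 * h * h)

def is_additively_separable(V, points, tol=1e-6):
    """Numerically test V(x, y) ~ f(x) + g(y): the mixed partial must vanish."""
    return all(abs(mixed_partial(V, x, y)) < tol for x, y in points)

pts = [(0.5 * i, 0.5 * j) for i in range(-2, 3) for j in range(-2, 3)]

separable = lambda x, y: x ** 2 + math.cos(y)   # of the form f(x) + g(y)
coupled = lambda x, y: x ** 2 * math.cos(y)     # genuinely coupled in x and y

print(is_additively_separable(separable, pts))  # True
print(is_additively_separable(coupled, pts))    # False
```

As in the thesis's method, a failed check is conclusive (the variables do not separate at the sampled points), while a passed check is only numerical evidence on the sampled region.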
Abstract:
The topic of this work concerns nonparametric permutation-based methods aiming to find a ranking (stochastic ordering) of a given set of groups (populations), gathering together information from multiple variables under more than one experimental design. The problem of ranking populations arises in several fields of science from the need to compare G > 2 given groups or treatments when the main goal is to find an order while taking into account several aspects. As can be imagined, this problem is not only of theoretical interest but also has a recognised relevance in several fields, such as industrial experiments or behavioural sciences, and this is reflected by the vast literature on the topic, although the problem is sometimes associated with different keywords such as "stochastic ordering", "ranking", "construction of composite indices", or even "ranking probabilities" outside of the strictly statistical literature. The properties of the proposed method are empirically evaluated by means of an extensive simulation study, in which several aspects of interest are allowed to vary within a reasonable practical range. These aspects comprise: sample size, number of variables, number of groups, and distribution of noise/error. The flexibility of the approach lies mainly in the several available choices for the test statistic and in the different types of experimental design that can be analysed. This renders the method able to be tailored to the specific problem and to the nature of the data at hand. To perform the analyses, an R package called SOUP (Stochastic Ordering Using Permutations) has been written; it is available on CRAN.
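The permutation machinery underlying such rankings can be illustrated with a minimal sketch (not the SOUP implementation): a one-sided two-sample permutation test using the difference in means as test statistic, the kind of pairwise comparison from which a ranking of G groups can be assembled.

```python
import random

def perm_test_greater(a, b, n_perm=5000, seed=0):
    """One-sided permutation p-value for the hypothesis mean(b) > mean(a)."""
    rng = random.Random(seed)
    observed = sum(b) / len(b) - sum(a) / len(a)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)          # re-assign group labels at random
        perm_b = pooled[:len(b)]
        perm_a = pooled[len(b):]
        stat = sum(perm_b) / len(perm_b) - sum(perm_a) / len(perm_a)
        if stat >= observed:
            count += 1
    return count / n_perm

# Made-up data for two groups; b's values are clearly larger.
a = [1.1, 0.9, 1.3, 1.0, 0.8]
b = [2.0, 2.2, 1.9, 2.4, 2.1]
print(perm_test_greater(a, b))  # small p-value: b plausibly ranked above a
```

Repeating such tests over all group pairs and combining the resulting p-values is one route to a stochastic ordering; the choice of test statistic is exactly the flexibility the abstract refers to.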
Abstract:
Forest models are tools for explaining and predicting the dynamics of forest ecosystems. They simulate forest behavior by integrating information on the underlying processes in trees, soil and atmosphere. Bayesian calibration is the application of probability theory to parameter estimation. It is a method, applicable to all models, that quantifies output uncertainty and identifies key parameters and variables. This study aims at testing the Bayesian calibration procedure on different types of forest models, to evaluate their performances and the uncertainties associated with them. In particular, we aimed at 1) applying a Bayesian framework to calibrate forest models and test their performances in different biomes and different environmental conditions, 2) identifying and solving structure-related issues in simple models, and 3) identifying the advantages of additional information made available when calibrating forest models with a Bayesian approach. We applied the Bayesian framework to calibrate the Prelued model on eight Italian eddy-covariance sites in Chapter 2. The ability of Prelued to reproduce the estimated Gross Primary Productivity (GPP) was tested over contrasting natural vegetation types that represented a wide range of climatic and environmental conditions. The issues related to Prelued's multiplicative structure were the main topic of Chapter 3: several different MCMC-based procedures were applied within a Bayesian framework to calibrate the model, and their performances were compared. A more complex model was applied in Chapter 4, focusing on the application of the physiology-based model HYDRALL to the forest ecosystem of Lavarone (IT) to evaluate the importance of additional information in the calibration procedure and its impact on model performances, model uncertainties, and parameter estimation.
Overall, the Bayesian technique proved to be an excellent and versatile tool to successfully calibrate forest models of different structure and complexity, on different kinds and numbers of variables, and with different numbers of parameters involved.
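A minimal sketch of the Bayesian calibration idea: a random-walk Metropolis sampler (one of the MCMC procedures in the family used above) fitting a single parameter of a toy linear model, with made-up observations rather than any forest-model likelihood.

```python
import math
import random

def metropolis(log_post, start, n_iter=20000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    rng = random.Random(seed)
    x, lp = start, log_post(start)
    chain = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy calibration target: observations of y = k * t with Gaussian noise,
# flat prior on k; the true k used to generate the data was 2.0.
t_obs = [1.0, 2.0, 3.0, 4.0]
y_obs = [2.1, 3.9, 6.2, 7.8]
sigma = 0.3

def log_post(k):
    return -sum((y - k * t) ** 2 for t, y in zip(t_obs, y_obs)) / (2 * sigma ** 2)

chain = metropolis(log_post, start=1.0)
burn = chain[5000:]                      # discard burn-in
print(sum(burn) / len(burn))             # posterior mean, close to 2.0
```

The same machinery scales to forest models: the toy `log_post` is replaced by the model-versus-data likelihood, and the chain then quantifies both the parameter estimates and their uncertainty.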
Abstract:
Phononic crystals, capable of blocking or directing the propagation of elastic/acoustic waves, have attracted increasing interdisciplinary interest across condensed matter physics and materials science. To date, no generalized full description of elastic wave propagation in phononic structures is available, mainly due to the large number of variables determining the band diagram. This thesis therefore aims at a deeper understanding of the fundamental concepts governing wave propagation in mesoscopic structures by investigating appropriate model systems. The phononic dispersion relation at hypersonic frequencies is directly investigated by the non-destructive technique of high-resolution spontaneous Brillouin light scattering (BLS) combined with computational methods. Due to the vector nature of elastic wave propagation, we first studied the hypersonic band structure of hybrid superlattices. These 1D phononic crystals, composed of alternating layers of hard and soft materials, feature large Bragg gaps. BLS spectra are sensitive probes of the moduli, photo-elastic constants and structural parameters of the constituent components. Engineering of the band structure can be realized by the introduction of defects. Here, cavity layers are employed to launch additional modes that modify the dispersion of the undisturbed superlattice, with significant implications for the band gap region. Density-of-states calculations, in conjunction with the associated deformation, allow for unambiguous identification of surface and cavity modes, as well as their interaction with adjacent defects. Next, the role of local resonances in phononic systems is explored in 3D structures based on colloidal particles. In turbid media, BLS records the particle vibration spectrum comprising resonant modes due to the spatial confinement of elastic energy.
Here, the frequency and lineshapes of the particle eigenmodes are discussed as a function of increased interaction and departure from spherical symmetry. The latter is realized by uniaxial stretching of polystyrene spheres, which can be aligned in an alternating electric field. The resulting spheroidal crystals clearly exhibit anisotropic phononic properties. Establishing reliable predictions of acoustic wave propagation, necessary to advance, e.g., optomechanics and phononic devices, is the ultimate aim of this thesis.
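The Bragg gaps of such 1D superlattices can be located with the standard Rytov dispersion relation for acoustic waves at normal incidence; frequencies where the right-hand side exceeds 1 in magnitude admit no real Bloch wavevector and lie inside a gap. The material parameters below are illustrative hard/soft values, not those of the samples studied in the thesis:

```python
import math

# Rytov relation for a bilayer superlattice of period L = d1 + d2:
#   cos(q L) = cos(k1 d1) cos(k2 d2)
#              - (Z1^2 + Z2^2) / (2 Z1 Z2) * sin(k1 d1) sin(k2 d2)
# with ki = omega / ci and acoustic impedances Zi = rho_i * ci.

def rytov_rhs(f, d1, c1, z1, d2, c2, z2):
    """Right-hand side of the Rytov relation at frequency f (Hz)."""
    w = 2 * math.pi * f
    k1, k2 = w / c1, w / c2
    chi = (z1 ** 2 + z2 ** 2) / (2 * z1 * z2)   # impedance-mismatch factor
    return (math.cos(k1 * d1) * math.cos(k2 * d2)
            - chi * math.sin(k1 * d1) * math.sin(k2 * d2))

# Illustrative "hard" (silica-like) and "soft" (polymer-like) layers, 80 nm each.
d1, c1, z1 = 80e-9, 5900.0, 2200.0 * 5900.0   # thickness (m), speed (m/s), rho*c
d2, c2, z2 = 80e-9, 2350.0, 1050.0 * 2350.0

in_gap = [f for f in range(1, 40)
          if abs(rytov_rhs(f * 1e9, d1, c1, z1, d2, c2, z2)) > 1]
print(in_gap)  # GHz frequencies falling inside Bragg gaps
```

The large impedance mismatch between the layers is what widens the gap, consistent with the "hard and soft materials feature large Bragg gaps" observation above.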
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual-tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the "naïve" approach), and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that are clustered. Results show a systematic bias even when all the assumptions made by the authors are met.
I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and for the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
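Regression calibration itself can be sketched in a few lines (a toy simulation, not the thesis's FVS-based models): the error-prone competition variable W is replaced by its conditional expectation E[X | W] before fitting, which undoes the attenuation of the naive slope.

```python
import math
import random

rng = random.Random(42)

# Toy setup: the true competition variable X drives mortality, but only
# W = X + U (X measured with error U) is observed.
n, true_slope = 2000, 1.0
sigma_x, sigma_u = 1.0, 0.8

X = [rng.gauss(0, sigma_x) for _ in range(n)]
W = [x + rng.gauss(0, sigma_u) for x in X]
y = [1 if rng.random() < 1 / (1 + math.exp(-(-2.0 + true_slope * x))) else 0
     for x in X]

# Regression calibration: replace W by E[X | W]. With zero means this is
# lam * W, where lam is the reliability ratio.
lam = sigma_x ** 2 / (sigma_x ** 2 + sigma_u ** 2)
W_rc = [lam * w for w in W]

def fit_logistic_slope(x, y, lr=0.5, iters=1500):
    """Gradient ascent on the logistic log-likelihood, one predictor."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b1

b1_naive = fit_logistic_slope(W, y)    # attenuated towards zero
b1_rc = fit_logistic_slope(W_rc, y)    # closer to the true slope of 1.0
print(b1_naive, b1_rc)
```

This is the mechanism behind the discrimination gains reported above: the naive slope is biased towards zero by the measurement error, and RC scales it back up by the reliability ratio.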
Abstract:
The objective of this thesis is to outline a Performance-Based Engineering (PBE) framework to address the multiple hazards of Earthquake (EQ) and subsequent Fire Following Earthquake (FFE). Currently, fire codes in the United States are largely empirical and prescriptive in nature. The reliance on prescriptive requirements makes quantifying sustained damage due to fire difficult. Additionally, the empirical standards have resulted from furnace testing of individual members or individual assemblies, which has been shown to differ greatly from full structural system behavior. The very nature of fire behavior (ignition, growth, suppression, and spread) is fundamentally difficult to quantify due to the inherent randomness present in each stage of fire development. The study of interactions between earthquake damage and fire behavior is also in its infancy, with essentially no empirical testing results available. This thesis presents a literature review, a discussion and critique of the state of the art, and a summary of software currently being used to estimate loss due to EQ and FFE. A generalized PBE framework for EQ and subsequent FFE is presented, along with a matrix mapping combined hazard probability to performance objectives and a table of the variables necessary to fully implement the proposed framework. Future research requirements and a summary are also provided, with discussion of the difficulties inherent in adequately describing the multiple hazards of EQ and FFE.
Abstract:
Road Ecology is a relatively new sub-discipline of ecology that focuses on understanding the interactions between road systems and the natural environment. Wildlife crossings that allow animals to safely cross human-made barriers such as roads are intended not only to reduce animal-vehicle collisions, but ideally to provide connectivity of habitat areas, combating habitat fragmentation. Wildlife mitigation strategies to improve the permeability of our infrastructure can include a combination of structures (overpasses/underpasses), at-grade crossings, fencing, animal-detection systems, and signage. One size does not fit all, and solutions must be considered on a case-by-case basis. Often, the feasibility of the preferred mitigation solution depends on a combination of variables including road geometrics, topography, traffic patterns, funding allocations, adjacent land use and landowner cooperation, the target wildlife species, their movement patterns, and habitat distribution. Joe and Deb will speak to the current road ecology practices in Montana and some real-world applications from the Department of Transportation.
Abstract:
Adult honey bees are maintained in vitro in laboratory cages for a variety of purposes. For example, researchers may wish to perform experiments on honey bees caged individually or in groups to study aspects of parasitology, toxicology, or physiology under highly controlled conditions, or they may cage whole frames to obtain newly emerged workers of known age cohorts. Regardless of purpose, researchers must manage a number of variables, ranging from the selection of study subjects (e.g. honey bee subspecies) to the experimental environment (e.g. temperature and relative humidity). Although decisions made by researchers may not necessarily jeopardize the scientific rigour of an experiment, they may profoundly affect results and may make comparisons with similar, but independent, studies difficult. Focusing primarily on workers, we provide recommendations for maintaining adults under in vitro laboratory conditions, whilst acknowledging gaps in our understanding that require further attention. We specifically describe how to properly obtain honey bees, and how to choose appropriate cages, incubator conditions, and food to obtain biologically relevant and comparable experimental results. Additionally, we provide broad recommendations for the experimental design and statistical analysis of data that arise from experiments using caged honey bees. The ultimate goal of this, and of all COLOSS BEEBOOK papers, is not to stifle science with restrictions, but rather to provide researchers with the appropriate tools to generate comparable data that will build upon our current understanding of honey bees.
Abstract:
A search for supersymmetric particles in final states with zero, one, and two leptons, with and without jets identified as originating from b-quarks, in 4.7 fb⁻¹ of √s = 7 TeV pp collisions produced by the Large Hadron Collider and recorded by the ATLAS detector is presented. The search uses a set of variables carrying information on the event kinematics transverse and parallel to the beam line that are sensitive to several topologies expected in supersymmetry. Mutually exclusive final states are defined, allowing a combination of all channels to increase the search sensitivity. No deviation from the Standard Model expectation is observed. Upper limits at 95% confidence level on visible cross-sections for the production of new particles are extracted. Results are interpreted in the context of the constrained minimal supersymmetric extension of the Standard Model and in supersymmetry-inspired models with diverse, high-multiplicity final states.
Abstract:
Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously, to improve the quality of type inference for a single system. This paper presents an approach to augment existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard library, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easy to identify. Also, the evaluation data shows a substantial increase in the number of correctly inferred types when compared to the non-augmented type inference.
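The ecosystem-mining idea can be sketched minimally (hypothetical observations, not the paper's prototype): record, across many projects, which type a variable resolved to given the messages sent to it, then rank candidate types for a new variable by how often they received the same messages.

```python
from collections import Counter, defaultdict

# Hypothetical "ecosystem" observations: (messages sent to a variable, resolved type).
# Message selectors are written Smalltalk-style, as in the paper's setting.
ecosystem = [
    (frozenset({"add:", "removeFirst"}), "OrderedCollection"),
    (frozenset({"add:", "removeFirst"}), "OrderedCollection"),
    (frozenset({"add:", "removeFirst"}), "LinkedList"),
    (frozenset({"at:put:", "keys"}), "Dictionary"),
    (frozenset({"at:put:", "keys"}), "Dictionary"),
]

# For each message, count which types it was observed on.
votes = defaultdict(Counter)
for messages, typ in ecosystem:
    for msg in messages:
        votes[msg][typ] += 1

def predict_type(messages):
    """Rank candidate types by how often they received the same messages."""
    total = Counter()
    for msg in messages:
        total.update(votes[msg])
    ranked = total.most_common()
    return ranked[0][0] if ranked else None

print(predict_type({"add:", "removeFirst"}))  # 'OrderedCollection'
print(predict_type({"at:put:", "keys"}))      # 'Dictionary'
```

The frequency weighting is what makes popular standard-library types win, and it is also why rare domain-specific types tend to be outvoted, matching the false-positive pattern reported in the evaluation.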
Abstract:
BACKGROUND Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, like a social security number, is not available, or non-unique person-identifiable information, like names, is privacy protected and cannot be accessed. A solution to protect privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, encrypted hash codes of two names differ completely even if the plain names differ only by a single character. Therefore, standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS In this Privacy Preserving Probabilistic Record Linkage method we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. data structure) needed for the creation of templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with encrypted person-identifiable information and plain non-sensitive variables.
RESULTS In this paper we describe step by step how to link existing health-related data using encryption methods to preserve the privacy of persons in the study. CONCLUSION Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but for any setting with similar challenges.
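The Bloom-filter encoding that enables similarity computation on encrypted names can be sketched as follows (a toy illustration with assumed parameters, not the P3RL implementation): each name is split into bigrams, each bigram is hashed several times into a bit set, and the Dice coefficient of two bit sets approximates the similarity of the underlying names.

```python
import hashlib

BITS = 128    # size of the Bloom filter (illustrative)
HASHES = 4    # hash functions per bigram (illustrative)

def bigrams(name):
    """Split a name into its overlapping two-character substrings."""
    name = name.lower()
    return {name[i:i + 2] for i in range(len(name) - 1)}

def bloom_encode(name, key="shared-secret"):
    """Encode a name's bigrams into a Bloom filter via keyed SHA-256 hashes."""
    bits = set()
    for gram in bigrams(name):
        for seed in range(HASHES):
            digest = hashlib.sha256(f"{key}|{seed}|{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % BITS)
    return bits

def dice(a, b):
    """Dice coefficient of two bit sets: shared bigrams keep the score high."""
    return 2 * len(a & b) / (len(a) + len(b))

smith, smyth, jones = (bloom_encode(n) for n in ["Smith", "Smyth", "Jones"])
print(round(dice(smith, smyth), 2))  # high: names differ by one character
print(round(dice(smith, jones), 2))  # low: unrelated names
```

Because "Smith" and "Smyth" share most bigrams, their filters share most set bits, so the linkage center can score candidate pairs probabilistically without ever seeing the plain names.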
Abstract:
BACKGROUND There are no specific recommendations for the design and reporting of studies of children with fever and neutropenia (FN). As a result, there is marked heterogeneity in the variables and outcomes that are reported, and new definitions continue to emerge. These inconsistencies hinder the ability of researchers and clinicians to compare, contrast and combine results. The objective was to achieve expert consensus on a core set of variables and outcomes that should be measured and reported, as a minimum, in pediatric FN studies. PROCEDURE The Delphi method was used to achieve consensus among an international group of clinicians, pharmacists, researchers, and patient representatives. Four surveys, focusing on (i) the identification of a core set of variables and outcomes and (ii) definitions of these variables and outcomes, were administered electronically. Consensus was predefined as more than 80% agreement on any statement. RESULTS There were forty-five survey participants and the response rate ranged between 84% and 96%. There was consensus on eight core variables and ten core outcomes that should be collected and reported in all studies of children with FN. Consensus definitions were identified for all of the core outcomes. CONCLUSION Using the Delphi method, expert consensus on a set of core variables and outcomes, and their corresponding definitions, was achieved. These core sets represent the minimum that should be collected and reported in all studies of children with FN. This will promote collaboration and ensure consistency and comparability between studies.