962 results for Will scale
Abstract:
Climate change and its consequences seem to be increasingly evident in our daily lives. However, is it possible for students to identify a relationship between these large-scale events and the chemistry taught in the classroom? The aim of the present work is to show that chemistry can help elucidate important environmental issues. Simple experiments are used to demonstrate the mechanism of cloud formation, as well as the influence of anthropogenic and natural emissions on the precipitation process. The experiments show how particles of soluble salts commonly found in the environment can absorb water in the atmosphere and influence cloud formation.
Abstract:
Assuming that neutrinos are Majorana particles, in a three-generation framework, current and future neutrino oscillation experiments can determine six of the nine parameters which fully describe the structure of the neutrino mass matrix. We try to clarify the interplay among the remaining parameters, the absolute neutrino mass scale and two CP-violating Majorana phases, and how they can be accessed by future neutrinoless double beta (0νββ) decay experiments, for the normal as well as for the inverted ordering of the neutrino mass spectrum. Assuming the oscillation parameters to be in the range presently allowed by atmospheric, solar, reactor, and accelerator neutrino experiments, we quantitatively estimate the bounds on m_0, the lightest neutrino mass, that can be inferred if the next generation of 0νββ decay experiments can probe the effective Majorana mass (m_ee) down to ∼1 meV. In this context we conclude that, in the case that neutrinos are Majorana particles, (a) if m_0 ≳ 300 meV, i.e., within the range directly attainable by future laboratory experiments as well as astrophysical observations, then m_ee ≳ 30 meV must be observed; (b) if m_0 ≤ 300 meV, results from future 0νββ decay experiments combined with stringent bounds on the neutrino oscillation parameters, especially the solar ones, will place much stronger limits on the allowed values of m_0 than these direct experiments. For instance, if a positive signal is observed around m_ee = 10 meV, we estimate 3 ≲ m_0/meV ≲ 65 at 95% C.L.; on the other hand, if no signal is observed down to m_ee = 10 meV, then m_0 ≲ 55 meV at 95% C.L.
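For reference, the effective Majorana mass probed in 0νββ decay is conventionally written in terms of the mixing matrix elements and the mass eigenvalues; a standard form (the paper's notation may differ slightly) is

\[
m_{ee} \;=\; \Bigl|\sum_{i=1}^{3} U_{ei}^{2}\,m_i\Bigr|
       \;=\; \bigl|\,c_{12}^{2}c_{13}^{2}\,m_1 \;+\; s_{12}^{2}c_{13}^{2}\,m_2\,e^{i\alpha}
             \;+\; s_{13}^{2}\,m_3\,e^{i\beta}\bigr|,
\]

where α and β are the two Majorana phases inaccessible to oscillation experiments, and the mass eigenvalues are tied to m_0 through the measured splittings, e.g. for normal ordering $m_1=m_0$, $m_2=\sqrt{m_0^{2}+\Delta m^{2}_{\rm sol}}$, $m_3=\sqrt{m_0^{2}+\Delta m^{2}_{\rm atm}}$.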
Abstract:
Many recent survival studies propose modeling data with a cure fraction, i.e., data in which part of the population is not susceptible to the event of interest. This event may also occur more than once for the same individual (a recurrent event). We then have a scenario of recurrent event data in the presence of a cure fraction, which may appear in areas such as oncology, finance, and industry, among others. This paper proposes a multiple time scale survival model to analyze recurrent events with a cure fraction. The objective is to analyze, in terms of covariates and censoring, the efficiency of interventions intended to keep the studied event from happening again. All estimates were obtained using a sampling-based approach, which allows prior information to be incorporated with low computational effort. Simulations based on a clinical scenario were carried out in order to examine some frequentist properties of the estimation procedure for small and moderate sample sizes. An application to a well-known set of real mammary tumor data is provided.
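For orientation (this is the classical mixture cure formulation, not necessarily the exact multiple time scale model of the paper), a cure fraction enters the population survival function as

\[
S_{\mathrm{pop}}(t) \;=\; \pi \;+\; (1-\pi)\,S_u(t),
\]

where π is the probability of being cured (never experiencing the event) and $S_u(t)$ is the survival function of the susceptible individuals; covariates typically enter π through, e.g., a logistic link $\pi(\mathbf{x}) = \exp(\mathbf{x}^{\top}\boldsymbol{\beta})/\{1+\exp(\mathbf{x}^{\top}\boldsymbol{\beta})\}$.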
Abstract:
Background Current recommendations for antithrombotic therapy after drug-eluting stent (DES) implantation include prolonged dual antiplatelet therapy (DAPT) with aspirin and clopidogrel for ≥ 12 months. However, the impact of such a regimen for all patients receiving any DES system remains unclear based on the scientific evidence available to date. Several other shortcomings have also been identified with prolonged DAPT, including bleeding complications, compliance, and cost. The second-generation Endeavor zotarolimus-eluting stent (E-ZES) has demonstrated efficacy and safety despite short-duration DAPT (3 months) in the majority of studies. Still, the safety and clinical impact of short-term DAPT with the E-ZES in the real world are yet to be determined. Methods The OPTIMIZE trial is a large, prospective, multicenter, randomized (1:1) non-inferiority clinical evaluation of short-term (3 months) vs long-term (12 months) DAPT in patients undergoing E-ZES implantation in daily clinical practice. Overall, 3,120 patients were enrolled at 33 clinical sites in Brazil. The primary composite endpoint is death (any cause), myocardial infarction, cerebrovascular accident, and major bleeding at 12-month clinical follow-up post-index procedure. Conclusions The OPTIMIZE clinical trial will determine the clinical implications of DAPT duration with the second-generation E-ZES in real-world patients undergoing percutaneous coronary intervention. (Am Heart J 2012;164:810-816.e3.)
Abstract:
A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10^18 eV at the Pierre Auger Observatory is presented. This search is performed as a function of both declination and right ascension in several energy ranges above 10^18 eV, and reported in terms of dipolar and quadrupolar coefficients. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Assuming that any cosmic-ray anisotropy is dominated by dipole and quadrupole moments in this energy range, upper limits on their amplitudes are derived. These upper limits allow us to test the origin of cosmic rays above 10^18 eV from stationary Galactic sources densely distributed in the Galactic disk and predominantly emitting light particles in all directions.
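In such analyses the cosmic-ray flux over the sphere is commonly expanded in low-order multipoles; a generic parameterization (conventions vary between publications) is

\[
\Phi(\mathbf{n}) \;=\; \frac{\Phi_0}{4\pi}\Bigl(1 \;+\; \mathbf{d}\cdot\mathbf{n}
      \;+\; \tfrac{1}{2}\sum_{i,j} Q_{ij}\,n_i n_j \;+\; \dots\Bigr),
\]

where n is a unit vector on the sky, d is the dipole vector and Q_{ij} the traceless quadrupole tensor; the reported upper limits constrain the dipole amplitude |d| and the quadrupole amplitudes built from Q_{ij}.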
Abstract:
Abstract Background Several mathematical and statistical methods have been proposed in the last few years to analyze microarray data. Most of these methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use these methods in their research. Here we present a user-friendly toolbox which allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source, and freely available tool that we believe will be useful to a wide range of laboratories and to researchers with no background in Mathematics and Computer Science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion GEDI is an integrated user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms, previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in Molecular Biology.
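GEDI wraps the authors' own SVR, DVAR and SVAR implementations; purely as an illustration of the kind of regression-based, gene-by-gene analysis such a toolbox automates (this is not GEDI code, and the data and gene labels below are invented), a minimal scikit-learn sketch might look like this:

# Minimal sketch of regression-based gene-network screening from expression data.
# NOT the GEDI/SVR/DVAR/SVAR implementation; it only illustrates predicting each
# gene's expression from the remaining genes and ranking candidate links.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
expr = rng.normal(size=(40, 10))            # 40 samples x 10 genes (simulated)
genes = [f"g{i}" for i in range(10)]        # hypothetical gene labels

links = []
for target in range(expr.shape[1]):
    others = np.delete(np.arange(expr.shape[1]), target)
    model = SVR(kernel="linear", C=1.0).fit(expr[:, others], expr[:, target])
    weights = np.abs(model.coef_).ravel()   # linear-kernel SVR exposes coefficients
    links += [(genes[j], genes[target], float(w)) for j, w in zip(others, weights)]

links.sort(key=lambda t: -t[2])             # rank candidate regulator -> target links
print(links[:5])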
Abstract:
Too Big to Ignore (TBTI; www.toobigtoignore.net) is a research network and knowledge mobilization partnership established to elevate the profile of small-scale fisheries (SSF), to argue against their marginalization in national and international policies, and to develop research and governance capacity to address global fisheries challenges. Network participants and partners are conducting global and comparative analyses, as well as in-depth studies of SSF in the context of local complexity and dynamics, along with a thorough examination of governance challenges, to encourage careful consideration of this sector in local, regional and global policy arenas. Comprising 15 partners and 62 researchers from 27 countries, TBTI conducts activities in five regions of the world. In the Latin America and the Caribbean (LAC) region, we are taking a participatory approach to investigate and promote stewardship and self-governance in SSF, seeking best practices and success stories that could be replicated elsewhere. The region will also focus on promoting sustainable livelihoods in coastal communities. Key activities include workshops and stakeholder meetings, facilitation of policy dialogue and networking, as well as assessment of local capacity needs and training. Currently, LAC members are putting together publications that examine key issues and best practices concerning SSF in the region, with a first focus on ecosystem stewardship. Other planned deliverables include a comparative analysis, a regional profile of the top research issues on SSF, and a synthesis of SSF knowledge in LAC.
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, sequencing a genome only yields raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is carried out at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis alone, which is extremely expensive and time consuming when applied at such a large scale. Thus, in silico methods are needed to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method is called BaCelLo and was developed in 2006. Its main peculiarity is independence from biases present in the training dataset, which cause over-prediction of the most represented examples in all other predictors developed so far. This result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo outperformed all the state-of-the-art methods available for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts, from the raw amino acid sequence, both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method is called GPIPE and was shown to greatly improve the prediction of GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis of the composition of the regions surrounding the ω-site allowed specific amino acid abundances to be defined for the different regions considered.
Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello, eSLDB http://gpcr.biocomp.unibo.it/esldb, GPIPE http://gpcr.biocomp.unibo.it/gpipe
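The Balanced SVM above modifies the SVM training itself to avoid over-predicting the best-represented localizations; as a loose analogue of that idea (not the BaCelLo implementation, and with purely synthetic data), class weighting in scikit-learn penalizes errors on rare classes more heavily:

# Sketch of class-weighted SVM training on an unbalanced dataset.
# Not the Balanced SVM of BaCelLo; it only illustrates the same principle:
# errors on the rare class are penalized more than errors on the abundant one.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))              # hypothetical 20-dim feature vectors
y = (rng.random(500) < 0.1).astype(int)     # 90% class 0, 10% class 1 (unbalanced)

plain    = SVC(kernel="rbf").fit(X, y)
balanced = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

# The class-weighted model is less biased toward always predicting the majority class.
print("fraction predicted as majority (plain):   ", np.mean(plain.predict(X) == 0))
print("fraction predicted as majority (balanced):", np.mean(balanced.predict(X) == 0))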
Abstract:
The production, segregation and migration of melt and aqueous fluids (henceforth called liquid) play an important role in the transport of mass and energy within the mantle and the crust of the Earth. Many properties of large-scale liquid migration processes, such as the permeability of a rock matrix or the initial segregation of newly formed liquid from the host rock, depend on the grain-scale distribution and behaviour of liquid. Although the general mechanisms of liquid distribution at the grain scale are well understood, the influence of possibly important modifying processes such as static recrystallization, deformation, and chemical disequilibrium on the liquid distribution is not well constrained. For this thesis, analogue experiments were used that allowed the interplay of these different mechanisms to be investigated in situ. In high-temperature environments where melts are produced, the grain-scale distribution in “equilibrium” is fully determined by the liquid fraction and the ratio between the solid-solid and the solid-liquid surface energy. The latter is commonly expressed as the dihedral or wetting angle between two grains and the liquid phase (Chapter 2). The interplay of this “equilibrium” liquid distribution with ongoing surface-energy-driven recrystallization is investigated in Chapters 4 and 5 with experiments using norcamphor plus ethanol as the liquid. Ethanol in contact with norcamphor forms a wetting angle of about 25°, which is similar to reported angles of rock-forming minerals in contact with silicate melt. The experiments in Chapter 4 show that previously reported disequilibrium features such as trapped liquid lenses, fully wetted grain boundaries, and large liquid pockets can be explained by the interplay of the liquid with ongoing recrystallization. Closer inspection of dihedral angles in Chapter 5 reveals that the wetting angles are themselves modified by grain coarsening. Ongoing recrystallization constantly moves liquid-filled triple junctions, thereby altering the wetting angles dynamically as a function of the triple junction velocity. A polycrystalline aggregate at elevated temperature will therefore always display a range of equilibrium and dynamic wetting angles, rather than a single wetting angle as previously thought. For the deformation experiments, partially molten KNO3–LiNO3 samples were used in addition to norcamphor–ethanol samples (Chapter 6). Three deformation regimes were observed. At a high bulk liquid fraction (>10 vol.%), the aggregate deformed by compaction and granular flow. At a “moderate” liquid fraction, the aggregate deformed mainly by grain boundary sliding (GBS) that was localized into conjugate shear zones. At a low liquid fraction, the grains of the aggregate formed a supporting framework that deformed internally by crystal-plastic deformation or diffusion creep. Liquid segregation was most efficient during framework deformation, while GBS led to slow liquid segregation or even liquid dispersion in the deforming areas.
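For reference, the equilibrium wetting (dihedral) angle mentioned above follows from the surface-energy balance at a liquid-filled triple junction (generic notation, not taken from the thesis):

\[
\cos\!\left(\frac{\theta}{2}\right) \;=\; \frac{\gamma_{ss}}{2\,\gamma_{sl}},
\]

where $\gamma_{ss}$ is the solid-solid (grain boundary) surface energy and $\gamma_{sl}$ the solid-liquid surface energy; a reported angle of θ ≈ 25° thus corresponds to $\gamma_{ss} \approx 1.95\,\gamma_{sl}$.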
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large scale combinatorial optimization problems is a topic which has aroused the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is more and more pressing. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which allows any type of problem to be modeled easily and solved with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint programming specific features are used to ease the search process, while maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and which incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism. In particular we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
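As a hedged illustration of the local-branching idea in a CP setting (using Google OR-Tools CP-SAT rather than the solver employed in the thesis, and with a toy objective), the k-neighborhood of an incumbent 0-1 solution can be imposed with a single constraint on the Hamming distance:

# Sketch: local-branching-style neighborhood constraint in a CP model.
# Not the thesis implementation; it only shows how a Hamming-distance
# neighborhood around an incumbent 0-1 solution is stated as a constraint.
from ortools.sat.python import cp_model

n, k = 8, 2                                # number of 0-1 variables, radius
incumbent = [1, 0, 1, 1, 0, 0, 1, 0]       # hypothetical incumbent solution

model = cp_model.CpModel()
x = [model.NewBoolVar(f"x{i}") for i in range(n)]

# Hamming distance to the incumbent: 1->0 flips plus 0->1 flips.
distance = sum(1 - x[i] if incumbent[i] == 1 else x[i] for i in range(n))
model.Add(distance <= k)                   # local-branching constraint

model.Maximize(sum((i + 1) * x[i] for i in range(n)))  # toy objective

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(v) for v in x], solver.ObjectiveValue())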
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the Dynamic eddy-viscosity model (DEVM), developed by \cite{germano1991dynamic}, while the other is the Explicit Algebraic SGS model (EASSM), by \cite{marstorp2009explicit}. In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code \cite{chevalier2007simson} are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow, with friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, at relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved with the EASSM compared to the DEVM. The EASSM therefore combines accuracy and computational efficiency, implying that it has clear potential for industrial CFD usage.
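For context, the eddy-viscosity closure underlying the DEVM relates the deviatoric SGS stress to the resolved strain rate (standard form; the thesis' exact notation may differ):

\[
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\,\tau_{kk} \;=\; -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t \;=\; (C\Delta)^{2}\,|\bar{S}|,
\qquad
|\bar{S}| \;=\; \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\]

where $\bar{S}_{ij}$ is the resolved strain-rate tensor and $\Delta$ the filter width; in the dynamic procedure of \cite{germano1991dynamic} the coefficient $C$ is computed from the resolved field rather than prescribed, while the EASSM instead provides an explicit algebraic, anisotropic expression for $\tau_{ij}$.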
Abstract:
The Regional Park Corno alle Scale, while often criticized for a lack of effort in attracting tourism to the area, still has vast potential for satisfying visitors' curiosity about areas of natural beauty, outdoor activities, handcrafted artefacts, and local cuisine. With the intent of promoting the area of Corno alle Scale in a more comprehensive and appealing fashion, this paper has two main parts. First, four brochures have been translated, detailing the history of the villages dotted around the regional park, their main features, the local flora and fauna, the full range of outdoor activities available in the area, and the main seasonal attraction, the ski resort. Second, the translation strategies are discussed.
Abstract:
PhEDEx, the CMS transfer management system, moved about 150 PB during the first LHC Run and currently moves about 2.5 PB of data per week over the Worldwide LHC Computing Grid (WLCG). It was designed to complete each transfer requested by users, at the expense of the waiting time necessary for its completion. For this reason, after several years of operations, data regarding transfer latencies have been collected and stored in log files containing useful, analyzable information. Starting from the analysis of several typical CMS transfer workflows, these latencies have been categorized, with a focus on the different factors that contribute to the transfer completion time. The analysis presented in this thesis provides the information necessary to equip PhEDEx in the future with a set of new tools for proactively identifying and fixing latency issues.
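Purely as an illustrative sketch (the field names below are invented and do not reflect the actual PhEDEx log schema), a latency breakdown of this kind can be computed from parsed transfer records as follows:

# Hypothetical sketch of a transfer-latency breakdown; column names are invented
# for illustration and are not the real PhEDEx log format.
import pandas as pd

transfers = pd.DataFrame({
    "request_time":  pd.to_datetime(["2012-05-01 00:00", "2012-05-01 02:00"]),
    "first_attempt": pd.to_datetime(["2012-05-01 00:20", "2012-05-01 05:00"]),
    "complete_time": pd.to_datetime(["2012-05-01 03:00", "2012-05-02 01:00"]),
    "bytes":         [2.0e12, 8.0e12],
})

# Split the total latency into a queueing part (request to first attempt)
# and a transfer part (first attempt to completion).
transfers["queue_h"]    = (transfers["first_attempt"] - transfers["request_time"]).dt.total_seconds() / 3600
transfers["transfer_h"] = (transfers["complete_time"] - transfers["first_attempt"]).dt.total_seconds() / 3600
transfers["rate_MB_s"]  = transfers["bytes"] / 1e6 / (transfers["transfer_h"] * 3600)

print(transfers[["queue_h", "transfer_h", "rate_MB_s"]])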
Abstract:
The intestinal ecosystem is formed by a complex, yet highly characteristic, microbial community. The parameters defining whether this community permits invasion of a new bacterial species are unclear. In particular, inhibition of enteropathogen infection by the gut microbiota (= colonization resistance) is poorly understood. To analyze the mechanisms of microbiota-mediated protection from Salmonella enterica induced enterocolitis, we used a mouse infection model and large-scale high-throughput pyrosequencing. In contrast to conventional mice (CON), mice with a gut microbiota of low complexity (LCM) were highly susceptible to S. enterica induced colonization and enterocolitis. Colonization resistance was partially restored in LCM animals by co-housing with conventional mice for 21 days (LCM(con21)). 16S rRNA sequence analysis comparing the LCM, LCM(con21) and CON gut microbiota revealed that gut microbiota complexity increased upon conventionalization and correlated with increased resistance to S. enterica infection. Comparative microbiota analysis of mice with varying degrees of colonization resistance allowed us to identify intestinal ecosystem characteristics associated with susceptibility to S. enterica infection. Moreover, this system enabled us to gain further insights into the general principles of gut ecosystem invasion by non-pathogenic, commensal bacteria. Mice harboring high commensal E. coli densities were more susceptible to S. enterica induced gut inflammation. Similarly, mice with high titers of Lactobacilli were more efficiently colonized by a commensal Lactobacillus reuteri (RR) strain after oral inoculation. Upon examination of 16S rRNA sequence data from 9 CON mice, we found that closely related phylotypes generally display significantly correlated abundances (co-occurrence), more so than distantly related phylotypes. Thus, in essence, the presence of closely related species can increase the chance of invasion of newly incoming species into the gut ecosystem. We provide evidence that this principle might be of general validity for invasion of bacteria into preformed gut ecosystems. This might be relevant for human enteropathogen infections as well as for the therapeutic use of probiotic commensal bacteria.
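The co-occurrence statement above can be illustrated schematically (with simulated abundances, not the study's 16S data) by correlating phylotype abundance profiles across samples:

# Schematic co-occurrence check: correlate phylotype abundance profiles across mice.
# The abundance matrix is simulated; it is not the study's 16S rRNA data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_mice, n_phylotypes = 9, 30
abundance = rng.lognormal(mean=0.0, sigma=1.0, size=(n_mice, n_phylotypes))

# Pairwise Spearman correlations of abundance across the 9 mice
# (an n_phylotypes x n_phylotypes matrix).
rho, _ = spearmanr(abundance)

# In the study, closely related phylotype pairs showed systematically higher
# correlations than distantly related pairs; here one would compare rho[i, j]
# against a measure of 16S sequence similarity for each pair (i, j).
i, j = 0, 1
print(f"Spearman rho between phylotypes {i} and {j}: {rho[i, j]:.2f}")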
Abstract:
Since September 2000, when world leaders agreed on time-bound, measurable goals to reduce extreme poverty, hunger, illiteracy, and disease while fostering gender equality and ensuring environmental sustainability, the Millennium Development Goals (MDGs) have increasingly come to dominate the policy objectives of many states and development agencies. The concern has been raised that the tight timeframe and financial restrictions might force governments to invest in the more productive sectors, thus compromising the quality and sustainability of development efforts. In the long term, this may lead to even greater inequality, especially between geographical regions and social strata. Hence people living in marginal areas, for example in remote mountain regions, and minority peoples risk being disadvantaged by this internationally agreed agenda. Strategies to overcome hunger and poverty in their different dimensions in mountain areas need to focus on strengthening the economy of small-scale farmers, while also fostering the sustainable use of natural resources, taking into consideration their multifunctionality.