7 results for Scientific community

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 60.00%

Abstract:

Canned tuna is one of the most widespread and recognizable fish commodities in the world. Across all oceans, 80% of total tuna catches are taken by the purse seine fishery, whose target species in tropical waters are yellowfin (Thunnus albacares), bigeye (Thunnus obesus) and skipjack (Katsuwonus pelamis). Although this fishing gear is claimed to be highly selective, by-catch levels are high, especially when operating under Fish Aggregating Devices (FADs). The main problem is the underestimation of by-catch data. To address this problem the scientific community has developed several dedicated programs (e.g. the Observe Program) that place observers on board to collect data on both target species and by-catch. The purposes of this study are to estimate the quantity and composition of target species and by-catch taken by the tuna purse seine fishery operating in tropical waters and to highlight a possible seasonal variability in the by-catch ratio (tunas versus by-catch). Data were collected within the French scientific program "Observe" on board the French tuna purse seiner "Via Avenir" during a fishing trip in the Gulf of Guinea (Central-Eastern Atlantic) from August to September 2012. In addition, some by-catch specimens were sampled to obtain more information on size class composition. To achieve these purposes we combined our data with those of the French Research Institute for Development (IRD), collected by observers on board in the same study area. Yellowfin tuna turns out to be the main species caught in all the trips considered (around 71% of total catches), especially in sets on free swimming schools (FSC), whereas skipjack tuna is the main species caught under FADs. Different by-catch percentages are observed for the two fishing modes: by-catch incidence is higher in FAD sets (96.5% of total by-catch) than in FSC sets (3.5%), and the main by-catch category is little tuna (73%). Pooling the data for both set types used in the purse seine fishery, the overall by-catch/catch ratio is 5%, a lower level than in other fishing gears such as long-lining and trawling.
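The by-catch/catch ratio reported above is a simple proportion computed per fishing mode from observer records. The following minimal sketch illustrates that calculation; the record structure, field names and weights are illustrative assumptions, not the thesis data.

```python
# Minimal sketch (not the thesis code): catch composition and by-catch/catch
# ratio by fishing mode from hypothetical observer records.

from collections import defaultdict

# Each record: (fishing_mode, species, category, weight_in_tonnes) -- assumed
observer_records = [
    ("FSC", "yellowfin", "target", 120.0),
    ("FSC", "little tuna", "by-catch", 0.8),
    ("FAD", "skipjack", "target", 95.0),
    ("FAD", "little tuna", "by-catch", 6.5),
    ("FAD", "triggerfish", "by-catch", 1.2),
]

totals = defaultdict(lambda: {"target": 0.0, "by-catch": 0.0})
for mode, _species, category, weight in observer_records:
    totals[mode][category] += weight

for mode, t in totals.items():
    catch = t["target"] + t["by-catch"]
    ratio = t["by-catch"] / catch if catch else 0.0
    print(f"{mode}: by-catch/catch = {ratio:.1%}")
```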

Relevance: 60.00%

Abstract:

The interest of the scientific community in organic pollutants in freshwater streams is fairly recent. During the past 50 years, thousands of chemicals have been synthesized and released into the environment. Nowadays their occurrence and their effects on several organisms, including invertebrates, fish, birds, reptiles and also humans, are well documented. Because of their action, some of these chemicals have been defined as Endocrine Disrupting Compounds (EDCs), and the public health implications of these EDCs have been the subject of scientific debate. Among the compounds found to influence the endocrine system are estrone, 17β-estradiol, 17α-estradiol, estriol, 17α-ethinylestradiol, testosterone and progesterone. This project focused on 17β-estradiol. Estradiol, or more precisely 17β-estradiol (also commonly referred to as E2), is a human sex hormone belonging to the class of steroid hormones. In spite of the effort to remove these substances from effluents, current wastewater treatment plants are not able to degrade or inactivate these organic compounds, which are continuously discharged into the ecosystem. In this work a new system for wastewater treatment was tested to assess the decrease of estradiol in the water. It relied on the action of Chlorella vulgaris, a freshwater green microalga belonging to the family Chlorellaceae, selected for its adaptability and photosynthetic efficiency. To detect the decrease of the target compound in the water, a CALUX bioassay analysis was chosen. Three experiments were carried out to pursue the aim of the project, and several aspects emerged from their results. The presence of EDCs in the water used to prepare the culture media was assessed. Under controlled conditions, C. vulgaris could be effective for this purpose, although further research is essential to deepen the knowledge of this complex phenomenon. Finally, by assessing the toxicity of the effluent against C. vulgaris, it became clear that at certain concentrations the effluent can affect the normal growth rate of this microorganism.
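The "decrease of estradiol in the water" reduces, in its simplest form, to a removal efficiency computed from concentrations measured before and after treatment. The sketch below shows that calculation only; the concentration values are assumed placeholders, not experimental CALUX results.

```python
# Minimal sketch (illustrative, not the thesis protocol): apparent removal of
# 17beta-estradiol (E2) from estradiol-equivalent concentrations measured
# before and after algal treatment. Values are assumed placeholders.

def removal_efficiency(c_initial_ng_l: float, c_final_ng_l: float) -> float:
    """Return the fractional decrease of E2-equivalent concentration."""
    if c_initial_ng_l <= 0:
        raise ValueError("initial concentration must be positive")
    return (c_initial_ng_l - c_final_ng_l) / c_initial_ng_l

# Hypothetical estradiol equivalents in ng/L
print(f"removal: {removal_efficiency(25.0, 6.0):.0%}")
```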

Relevance: 60.00%

Abstract:

The search for exact solutions of mixed integer problems is an active topic in the scientific community. State-of-the-art MIP solvers rely on a floating-point numerical representation, thereby introducing small approximations. Although such MIP solvers yield reliable results for the majority of problems, there are cases in which higher accuracy is required. Indeed, it is known that for some applications floating-point solvers provide falsely feasible solutions, i.e. solutions marked as feasible only because of approximations: they would not pass a check with exact arithmetic and cannot be practically implemented. The framework of this dissertation is SCIP, a mixed integer programming solver mainly developed at the Zuse Institute Berlin, where a new approach for exactly solving MIPs has been considered. Specifically, we developed a constraint handler to plug into SCIP, with the aim of analyzing the accuracy of the floating-point solutions it provides and of computing exact primal solutions starting from floating-point ones. We conducted computational experiments to test the exact primal constraint handler using two main settings. The analysis mode collects statistics about the reliability of current SCIP solutions. Our results confirm that floating-point solutions are accurate enough for many instances; however, the analysis highlighted the presence of numerical errors of varying magnitude. In the enforce mode, the constraint handler is able to suggest exact solutions starting from the integer part of a floating-point solution. With this setting, results show a general improvement in the quality of the final solutions provided, without a significant loss of performance.
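The core idea, checking a floating-point solution against the constraints in exact arithmetic after trusting its integer part, can be illustrated outside SCIP. The sketch below is not the SCIP constraint handler API; it is a standalone rational-arithmetic check with made-up problem data.

```python
# Minimal sketch of the underlying idea (not SCIP's C API): verify whether a
# floating-point MIP solution is truly feasible by re-evaluating Ax <= b in
# exact rational arithmetic, after fixing the integer part of the solution.

from fractions import Fraction

def exact_check(A, b, x_float, integer_vars, feas_tol=Fraction(0)):
    """A and b are exact rationals; x_float is the solver's solution.
    Integer variables are rounded to the nearest integer; continuous ones
    are converted exactly from their binary floating-point values."""
    x = []
    for j, xj in enumerate(x_float):
        if j in integer_vars:
            x.append(Fraction(round(xj)))   # trust the integer part
        else:
            x.append(Fraction(xj))          # exact float-to-rational conversion
    violations = []
    for i, row in enumerate(A):
        lhs = sum(a * xj for a, xj in zip(row, x))
        if lhs - b[i] > feas_tol:
            violations.append((i, lhs - b[i]))
    return violations

# Hypothetical constraint x0 + 3*x1 <= 10, with x0 integer
A = [[Fraction(1), Fraction(3)]]
b = [Fraction(10)]
print(exact_check(A, b, [1.0000000003, 2.9999999999], integer_vars={0}))
```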

Relevance: 60.00%

Abstract:

In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". This dissertation tries to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. In practice, these answers are based on an extensive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the broad umbrella of deep learning, and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well accepted by the scientific community and are already deployed in large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging, mainly unsupervised paradigm that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with less data, HTM can outperform the CNN.
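As a reference point for the CNN side of the comparison, the sketch below defines a minimal convolutional classifier. It is purely illustrative: it is not the architecture used in the thesis, the HTM counterpart is omitted (it relies on a separate framework), and PyTorch plus 32x32 RGB inputs are assumptions.

```python
# Minimal sketch of a supervised CNN for object recognition (illustrative only).

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)                 # halves spatial resolution
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))        # 32x32 -> 16x16
        x = self.pool(F.relu(self.conv2(x)))        # 16x16 -> 8x8
        return self.fc(x.flatten(1))                # class logits

# Hypothetical usage on a random batch of four images
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```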

Relevance: 60.00%

Abstract:

Groundwater represents one of the most important resources in the world, and it is essential to prevent its pollution and to consider remediation in case of contamination. According to the scientific community, the characterization and management of contaminated sites should be performed in terms of contaminant fluxes, considering their spatial and temporal evolution. One of the most suitable approaches for determining the spatial distribution of pollutants and quantifying contaminant fluxes in groundwater is the use of control panels. The determination of contaminant mass flux requires measuring the contaminant concentration in the moving phase (water) and the velocity/flux of the groundwater. In this Master's thesis a new solute mass flux measurement approach is proposed, based on an integrated control panel methodology combined with the Finite Volume Point Dilution Method (FVPDM) for the monitoring of transient groundwater fluxes. Moreover, a new adsorption passive sampler, able to capture the variation of solute concentration over time, is designed. The present work contributes to the development of this approach on three key points. First, the ability of the FVPDM to monitor transient groundwater fluxes was verified during a step-drawdown test at the experimental site of Hermalle-sous-Argenteau (Belgium). The results showed that the method can be used, with optimal results, to follow transient groundwater fluxes; moreover, performing the FVPDM in several piezometers during a pumping test makes it possible to determine the different flow rates and flow regimes that can occur in the various parts of an aquifer. The second field test, aimed at determining the representativity of a control panel for measuring mass fluxes in groundwater, underlined that incorrect evaluations of Darcy fluxes and discharge surfaces can lead to an incorrect estimation of mass fluxes, and that this technique must therefore be used with precaution: a detailed geological and hydrogeological characterization must be conducted before applying it. Finally, the third outcome of this work concerns laboratory experiments. The tests conducted on several types of adsorption material (Oasis HLB cartridges, TDS-ORGANOSORB 10 and TDS-ORGANOSORB 10-AA), in order to determine the optimal medium for dimensioning the passive sampler, highlighted the necessity of finding a material with reversible adsorption behaviour to fully satisfy the requirements of the new passive sampling technique.
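The control panel approach estimates mass discharge by combining, over the panel cells, the measured concentration with the Darcy flux and the cell area. The sketch below shows that summation; the cell values and units are illustrative assumptions, not field data.

```python
# Minimal sketch (illustrative assumptions, not the thesis code): contaminant
# mass discharge across a control panel as the sum over panel cells of
# concentration x Darcy flux x cell area.

# Each cell: (concentration in g/m^3, Darcy flux in m/day, cell area in m^2)
panel_cells = [
    (0.05, 0.10, 2.0),
    (0.12, 0.08, 2.0),
    (0.02, 0.15, 2.0),
]

# Mass discharge in g/day
mass_discharge = sum(c * q * a for c, q, a in panel_cells)
print(f"mass discharge across panel: {mass_discharge:.3f} g/day")
```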

Relevance: 60.00%

Abstract:

Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of "replication by other scientists" applied to computations is more commonly known as "reproducible research". In this context the journal "EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems" had the exciting and original idea of letting scientists submit, together with the article, the computational materials (software, data, etc.) used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the chosen OS, to confirm or invalidate it, and especially to allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support: raw data sets and software are difficult to exploit without the logic that guided their use or their production. This led us to conclude that, in addition to the data sets and the software, one more element must be provided: the workflow that ties all of them together.
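One possible way to make that workflow explicit is to declare the chain of steps, each with its inputs, outputs and command, so a reviewer can replay it end to end. The sketch below is an illustrative assumption of such a description, not the journal's submission format; the script and file names are hypothetical.

```python
# Minimal sketch: an explicit, ordered workflow tying data and software
# together. File names and scripts are hypothetical placeholders.

import subprocess

workflow = [
    {"name": "preprocess",
     "inputs": ["raw_data.csv"], "outputs": ["clean_data.csv"],
     "cmd": ["python", "preprocess.py", "raw_data.csv", "clean_data.csv"]},
    {"name": "analyze",
     "inputs": ["clean_data.csv"], "outputs": ["results.json"],
     "cmd": ["python", "analyze.py", "clean_data.csv", "results.json"]},
    {"name": "figures",
     "inputs": ["results.json"], "outputs": ["figure1.pdf"],
     "cmd": ["python", "plot.py", "results.json", "figure1.pdf"]},
]

def run(steps, dry_run=True):
    """Print (and optionally execute) each step in order, failing fast."""
    for step in steps:
        print(f"{step['name']}: {' '.join(step['cmd'])}")
        if not dry_run:
            subprocess.run(step["cmd"], check=True)

if __name__ == "__main__":
    run(workflow)  # dry run: lists the steps without executing them
```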

Relevance: 60.00%

Abstract:

Although the Standard Cosmological Model is generally accepted by the scientific community, a number of issues remain unresolved. From the observable characteristics of the structures in the Universe, it should be possible to impose constraints on the cosmological parameters. Cosmic voids are a major component of the large-scale structure (LSS) and have been shown to possess great potential for constraining dark energy (DE) and testing theories of gravity, but a gap between void observations and theory still persists. A theoretical model for the statistical distribution of voids as a function of size exists (SvdW); however, the SvdW model has been unsuccessful in reproducing the results obtained from cosmological simulations, which undermines the possibility of using voids as cosmological probes. The goal of this thesis work is to close the gap between theoretical predictions and measured distributions of cosmic voids. We develop an algorithm to identify voids in simulations consistently with theory, inspecting the possibilities offered by a recently proposed refinement of the SvdW model (the Vdn model, Jennings et al., 2013). Comparing void catalogues to theory, we validate the Vdn model, finding that it is reliable over a large range of radii, at all the redshifts considered and for all the cosmological models inspected. We then search for a size function model for voids identified in a distribution of biased tracers. We find that naively applying the same procedure used for unbiased tracers to a mock halo distribution does not give successful results, suggesting that the Vdn model has to be reconsidered when dealing with biased samples. We therefore test two alternative extensions of the model and find that two scaling relations exist: both the Dark Matter void radii and the underlying Dark Matter density contrast scale with the halo-defined void radii. We use these findings to develop a semi-analytical model which gives promising results.
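The quantity compared with SvdW/Vdn-type predictions is the void size function, i.e. the comoving number density of voids per logarithmic bin of radius, measured from a void catalogue. The sketch below shows that measurement only; the radii, box size and binning are illustrative assumptions, not the thesis catalogues.

```python
# Minimal sketch (illustrative assumptions, not the thesis pipeline): measuring
# the void size function dn/dlnR from a catalogue of void radii in a cubic box.

import numpy as np

def size_function(radii_mpc_h, box_size_mpc_h, bins):
    """Return bin centres and dn/dlnR in (h/Mpc)^3 for a cubic simulation box."""
    counts, edges = np.histogram(radii_mpc_h, bins=bins)
    dlnR = np.diff(np.log(edges))
    volume = box_size_mpc_h ** 3
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    return centres, counts / (volume * dlnR)

# Hypothetical void catalogue: radii in Mpc/h drawn at random for a 500 Mpc/h box
rng = np.random.default_rng(0)
radii = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=2000)
R, dndlnR = size_function(radii, box_size_mpc_h=500.0, bins=np.geomspace(2, 40, 15))
print(np.column_stack([R, dndlnR]))
```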