Abstract:
Conservation biology is commonly associated with the protection of small, endangered populations. Nevertheless, large or potentially large populations may also need human management to prevent the negative effects of overpopulation. As there are both qualitative and quantitative differences between protecting small populations and controlling large ones, distinct methods and models are needed. The aim of this work was to develop theoretical models to predict large population dynamics, as well as computer tools to assess the parameters of these models and to test management scenarios. The Alpine ibex (Capra ibex ibex) - which has experienced a spectacular increase since its reintroduction in Switzerland at the beginning of the 20th century - was used as the paradigm species.
This task was achieved in three steps. First, a local population dynamics model was developed specifically for the ibex: the underlying age- and sex-structured model is based on a Leslie matrix approach with the addition of density dependence, environmental stochasticity and culling. This model was implemented in a management-support software package - named SIM-Ibex - allowing census data maintenance, automated parameter estimation, and the tuning and simulation of culling strategies. However, population dynamics is driven not only by demographic factors but also by dispersal and the colonisation of new areas. Modelling habitat suitability and dispersal obstacles therefore had to be addressed, so a software package - named Biomapper - was developed. Its central module is based on Ecological Niche Factor Analysis (ENFA), whose principle is to compute niche marginality and specialisation factors from a set of environmental predictors and species presence data. All Biomapper modules are linked to Geographic Information Systems (GIS); they cover all operations of data importation, predictor preparation, ENFA and habitat suitability map computation, results validation and further processing; a module also allows the mapping of dispersal barriers and corridors. The application domain of ENFA was then explored by means of a simulated species distribution. It was compared to a commonly used habitat suitability method, the Generalised Linear Model (GLM), and proved better suited for spreading or cryptic species. Demographic and landscape information was finally merged into a global model. To cope with landscape realism and the technical constraints of large population modelling, a cellular automaton approach was chosen: the study area is modelled by a lattice of hexagonal cells, each characterised by a few fixed properties - a carrying capacity and six impermeability rates quantifying exchanges between adjacent cells - and one variable, population density.
The latter varies according to local reproduction/survival and dispersal dynamics, modified by density dependence and stochasticity. A software package - named HexaSpace - was developed, which performs two functions: 1° calibrating the automaton on the basis of local population dynamics models (e.g. computed by SIM-Ibex) and a habitat suitability map (e.g. computed by Biomapper); 2° running simulations. It allows studying the spread of an invading species across a complex landscape made of variously suitable areas and dispersal barriers. This model was applied to the history of the ibex reintroduction in the Bernese Alps (Switzerland). SIM-Ibex is now used by governmental wildlife managers to prepare and verify culling plans. Biomapper has been applied to several species (both plants and animals) around the world. Similarly, although HexaSpace was originally designed for terrestrial animal species, it could easily be extended to model plant propagation or the dispersal of flying animals. As these software tools were designed to build a complex, realistic model from raw data, and as they benefit from an intuitive user interface, they may have many applications in conservation biology. Moreover, theoretical questions in the fields of population and landscape ecology might also be addressed by these approaches.
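The local dynamics model summarised above rests on a standard construction; as a rough sketch (hypothetical vital rates and carrying capacity, stochasticity omitted - this is not the SIM-Ibex code), a density-dependent Leslie projection with proportional culling might look like:

```python
import numpy as np

# Hypothetical vital rates for a 3-age-class model (illustration only,
# not actual ibex parameters).
fecundity = np.array([0.0, 0.8, 1.2])  # offspring per individual per class
survival = np.array([0.7, 0.8])        # survival from class i to class i+1

def leslie_matrix(fecundity, survival):
    """Build a Leslie matrix: fecundities on the top row, survival rates
    on the sub-diagonal."""
    k = len(fecundity)
    L = np.zeros((k, k))
    L[0, :] = fecundity
    L[np.arange(1, k), np.arange(k - 1)] = survival
    return L

def step(n, L, K=500.0, cull=0.05):
    """One annual step: Leslie projection, logistic density dependence on
    recruitment, then proportional culling of every age class."""
    n = L @ n
    n[0] *= max(0.0, 1.0 - n.sum() / K)  # density-dependent recruitment
    return n * (1.0 - cull)              # regulation harvest

L = leslie_matrix(fecundity, survival)
n = np.array([100.0, 50.0, 25.0])
for _ in range(10):
    n = step(n, L)
print(round(n.sum(), 1))
```

Density dependence is applied here only to recruitment, which is one of several reasonable choices; the abstract does not specify where SIM-Ibex applies it.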
Abstract:
Small centrifugal compressors are increasingly widely used in many industrial systems because of their higher efficiency and better off-design performance compared with piston and scroll compressors, as well as their higher work coefficient per stage than axial compressors. Higher efficiency is always the aim of the compressor designer. In the present work, the influence of four parts of a small centrifugal compressor that compresses a heavy-molecular-weight real gas has been investigated in order to achieve higher efficiency. Two parts concern the impeller: the tip clearance and the circumferential position of the splitter blade. The other two concern the diffuser: the pinch shape and the vane shape. Computational fluid dynamics is applied in this study. The Reynolds-averaged Navier-Stokes flow solver Finflo is used with a quasi-steady approach, and Chien's k-ε turbulence model is used to model the turbulence. A new practical real gas model is presented in this study; it is easily generated, its accuracy is controllable, and it is fairly fast. The numerical results and measurements show good agreement. The influence of tip clearance on the performance of a small compressor is obvious. The pressure ratio and efficiency decrease as the tip clearance is increased, while the total enthalpy rise remains almost constant. The decrease in pressure ratio and efficiency is larger at higher mass flow rates and smaller at lower mass flow rates. The flow angles at the inlet and outlet of the impeller increase as the tip clearance is increased. The detailed flow field shows that leaking flow is the main reason for the performance drop. The secondary flow region becomes larger as the tip clearance is increased, the area of the main flow is compressed, and the flow uniformity is consequently decreased. A detailed study shows that the leakage flow rate is higher near the exit of the impeller than near its inlet.
Based on this phenomenon, a new partially shrouded impeller is used, shrouded near its exit. The results show that the flow field near the exit of the impeller is greatly changed by the partially shrouded impeller, and better performance is achieved than with the unshrouded impeller. The loading distribution on the impeller blade and the flow fields in the impeller are changed by moving the splitter of the impeller in the circumferential direction. Moving the splitter slightly towards the suction side of the long blade can improve the performance of the compressor. The total enthalpy rise is reduced if only the leading edge of the splitter is moved to the suction side of the long blade. The performance of the compressor is decreased if the blade is bent away from the radial direction at the leading edge of the splitter. The total pressure rise and the enthalpy rise of the compressor are increased if a pinch is used at the diffuser inlet. Among the five different pinch shape configurations, at the design and lower mass flow rates the efficiency of a straight-line pinch is the highest, while at higher mass flow rates the efficiency of a concave pinch is the highest. The sharp corner of the pinch is the main reason for the decrease in efficiency and should be avoided. The spanwise variation of the flow angles entering the diffuser is decreased if a pinch is applied. A three-dimensional low-solidity twisted vaned diffuser is designed to match the flow angles entering the diffuser. The numerical results show that the pressure recovery in the twisted diffuser is higher than in a conventional low-solidity vaned diffuser, which also leads to higher efficiency of the twisted diffuser. Investigation of the detailed flow fields shows that separation at lower mass flow rates occurs later in the twisted diffuser than in the conventional low-solidity vaned diffuser, which suggests a possibly wider flow range for the twisted diffuser.
Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis, gas-liquid mass transfer was evaluated in three different gas-liquid reactors in the traditional way, by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial-scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone, and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which depends on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solving population balance equations. This approach was used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter, because bubble coalescence depends on liquid composition in a way that is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free gas-liquid interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution.
Coalescence was never found to be instantaneous: the bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to uncertainties in the estimation of energy dissipation rates, collision rates and mechanisms, and bubble contact times.
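The volumetric mass transfer coefficient discussed above is conventionally fitted from a dissolved-gas time series using the first-order model dc/dt = kLa (c* - c); a minimal sketch on synthetic data (not the thesis measurements):

```python
import numpy as np

def estimate_kla(t, c, c_star, c0):
    """Fit kLa from ln((c* - c)/(c* - c0)) = -kLa * t by least squares
    (slope through the origin)."""
    y = np.log((c_star - c) / (c_star - c0))
    return -np.sum(t * y) / np.sum(t * t)

# Synthetic check: generate data with a known kLa and recover it.
true_kla = 0.02          # 1/s, hypothetical
c_star, c0 = 8.0, 1.0    # mg/L saturation and initial concentration
t = np.linspace(0.0, 120.0, 50)
c = c_star - (c_star - c0) * np.exp(-true_kla * t)
print(round(estimate_kla(t, c, c_star, c0), 4))  # → 0.02
```

Real measurements would of course carry noise and probe dynamics; this only illustrates the fitting idea behind a kLa correlation.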
Abstract:
In 2008 the regional government of Catalonia (Spain) reduced the maximum speed limit on several stretches of congested urban motorway in the Barcelona metropolitan area to 80 km/h, while in 2009 it introduced a variable speed system on other stretches of its metropolitan motorways. We use the differences-in-differences method, which enables a policy impact to be measured under specific conditions, to assess the impact of these policies on emissions of NOx and PM10. Empirical estimates indicate that reducing the speed limit to 80 km/h caused a 1.7 to 3.2% increase in NOx and a 5.3 to 5.9% increase in PM10. By contrast, the variable speed policy reduced NOx and PM10 pollution by 7.7 to 17.1% and 14.5 to 17.3%, respectively. As such, a variable speed policy appears to be a more effective environmental policy than reducing the speed limit to a maximum of 80 km/h.
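In its simplest two-period form, the differences-in-differences logic used here reduces to subtracting the control group's change from the treated group's change; a toy illustration with invented numbers (not the paper's data or estimates):

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the policy effect is the change in the
    treated group minus the change in the control group, which nets out
    common time trends."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Illustrative (invented) mean emission levels on treated and control
# motorway stretches, before and after the policy:
effect = diff_in_diff(treated_pre=100.0, treated_post=104.0,
                      control_pre=100.0, control_post=101.0)
print(effect)  # → 3.0
```

The published estimates come from a regression version of this idea, which additionally controls for covariates; this snippet shows only the core identification logic.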
Abstract:
This paper analyzes the cost implications of privatization and cooperation in the provision of solid waste services for a sample of small municipalities. In conducting this empirical analysis, a survey is first designed and administered to municipalities in the Spanish region of Aragon, and then an estimation of the determinants of service costs is undertaken, considering the possible endogeneity of delivery choices. Our findings indicate that cooperation is more effective than privatization in saving costs. Both production forms can enable small municipalities to cut costs by exploiting scale economies. However, the fact that inter-municipal cooperation involves lower transaction costs and is less likely to be affected by competition problems would seem to account for the fact that it is a more effective way of reducing costs.
Abstract:
Top predator loss is a major global problem, with a current trend in biodiversity loss towards high trophic levels that modifies most ecosystems worldwide. Most research in this area focuses on large-bodied predators, despite the high extinction risk of small-bodied freshwater fish that often act as apex consumers. Consequently, it remains unknown whether intermittent streams are affected by the consequences of top predator extirpations. The aim of our research was to determine how this global problem affects intermittent streams and, in particular, whether the loss of a small-bodied top predator (1) leads to a 'mesopredator release', affects primary consumers and changes whole community structure, and (2) triggers a cascade effect modifying ecosystem function. To address these questions, we studied the top-down effects of a small endangered fish species, Barbus meridionalis (the Mediterranean barbel), conducting an enclosure/exclosure mesocosm experiment in an intermittent stream where B. meridionalis became locally extinct following a wildfire. We found that top predator absence led to 'mesopredator release', and also to 'prey release' despite intraguild predation, which contrasts with traditional food web theory. In addition, B. meridionalis extirpation changed whole macroinvertebrate community composition and increased total macroinvertebrate density. Regarding ecosystem function, periphyton primary production decreased in the apex consumer's absence. In this study, the apex consumer was functionally irreplaceable; its local extinction led to the loss of an important functional role that resulted in major changes to the ecosystem's structure and function. This study shows that intermittent streams can be affected by the consequences of apex consumer extinctions, and that the loss of small-bodied top predators can lead to large ecosystem changes.
We recommend the reintroduction of small-bodied apex consumers to systems where they have been extirpated, to restore ecosystem structure and function.
Abstract:
This paper focuses on a feasibility study and market review of small-scale bioenergy heating plants in the Russian North-West region. The main focus is the effective and competitive use of low-grade wood for heating purposes in the region. As an example of economic feasibility estimation, the project chosen was the reconstruction of a small-scale boiler plant in the Leningrad region that Brofta Oy is planning to implement in the near future. The study includes calculating the payback time with and without interest, estimating the probable investment, evaluating the possible risks, and researching the potential of small-scale heating plant projects. Calculations show that the profitability of this kind of project is high, but the payback time is not very short because of the high level of initial investment. Nevertheless, the development of small-scale bioenergy heating plants in the region is considered the best way to solve the heat supply problems of small settlements using local biomass resources.
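The payback-time calculation mentioned above, with and without interest, can be sketched as follows (the investment and saving figures are invented for illustration, not the Brofta Oy project's numbers):

```python
def payback_years(investment, annual_saving, rate=0.0):
    """Years until cumulative (optionally discounted) savings cover the
    initial investment; returns None if not reached within 50 years.
    rate=0.0 gives the simple payback time."""
    cumulative = 0.0
    for year in range(1, 51):
        cumulative += annual_saving / (1.0 + rate) ** year
        if cumulative >= investment:
            return year
    return None

# Hypothetical figures (EUR): a 500 000 investment saving 80 000 per year.
print(payback_years(500_000, 80_000))             # → 7  (no interest)
print(payback_years(500_000, 80_000, rate=0.08))  # → 10 (8% interest)
```

The gap between the two results illustrates the abstract's point: with a high initial investment, discounting noticeably lengthens the payback time.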
Abstract:
Variations in water volume in small depressions in Mediterranean salt marshes in Girona (Spain) are described, and the potential causes of these variations are analysed. Although the basins appear to be endorheic, groundwater circulation is intense, as estimated from the difference between the water volume observed and that expected from the precipitation/evaporation balance. The rate of variation in volume (VR = ΔV / (V·Δt)) may be used to estimate groundwater supply ('circulation'), since direct measurements of this parameter are impossible. Volume-conductivity figures can also be used to estimate the quantity of circulation and to investigate the origin of the water supplied to the system. The relationships between variations in the volume of water in the basins and the main causes of flooding are also analysed. Sea storms, rainfall levels and strong, dry northerly winds are suggested as the main causes of variations in basin volumes. The relative importance assigned to these factors has changed following the recent regulation of freshwater flows entering the system.
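The volume rate of variation defined above is a simple ratio; a small illustration with hypothetical basin readings (not the Girona field data):

```python
def volume_rate(v_start, v_end, dt_days):
    """Rate of variation in volume, VR = dV / (V * dt), expressed relative
    to the starting volume (per day)."""
    return (v_end - v_start) / (v_start * dt_days)

# Hypothetical basin readings (m^3) ten days apart:
vr = volume_rate(v_start=120.0, v_end=90.0, dt_days=10.0)
print(round(vr, 3))  # → -0.025
```

A net loss larger than the expected evaporative loss over the same period would, in the paper's reasoning, point to groundwater outflow.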
Abstract:
It is a well-known phenomenon that the constant-amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit, defined as the maximum stress at the notch, is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of empirical formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of a sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated, based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to estimate the largest expected crack for any sample size. An estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. Consider two specimens of similar shape but different size: the stress gradient in the smaller specimen is steeper, so if there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher.
The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for calculating the geometric size effect is also based on linear elastic fracture mechanics. An accurate value of the stress intensity factor in a non-linear stress field can be calculated using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress, and the notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral theses. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both can be used successfully to predict the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient must be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches (stress concentration factor of about 5 or higher), the method does not give satisfactory results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches, so the use of linear elastic fracture mechanics becomes questionable.
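The statistical size effect described above follows from extreme-value reasoning: if initiated crack sizes follow a parent distribution F, the largest of n cracks has distribution F(x)^n, so a larger stressed area (a larger sample n) implies a larger expected worst crack and hence a lower fatigue limit. A sketch with an assumed Weibull parent (the shape and scale values are invented, not fitted to the thesis data):

```python
import math

def max_cdf(x, n, shape=2.0, scale=0.1):
    """CDF of the largest of n i.i.d. Weibull-distributed crack sizes:
    F_max(x) = F(x)**n."""
    F = 1.0 - math.exp(-(x / scale) ** shape)
    return F ** n

def median_max_crack(n, shape=2.0, scale=0.1):
    """Median of the maximum crack size, from inverting F(x)**n = 0.5."""
    F = 0.5 ** (1.0 / n)
    return scale * (-math.log(1.0 - F)) ** (1.0 / shape)

# A larger stressed area means a larger sample of initiated cracks,
# hence a larger typical worst crack:
print(round(median_max_crack(10), 4))
print(round(median_max_crack(1000), 4))
```

The thesis then feeds the expected largest crack into linear elastic fracture mechanics to obtain the fatigue limit; that step is not reproduced here.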
Abstract:
In the last few years, the Ukrainian investment market has constantly shown strong performance and significant growth, primarily due to the investment attractiveness of Ukraine. From the perspective of investments in the energy sector, Ukraine can be described as a country providing a significant number of opportunities to multiply invested funds, but there are a number of risks which hamper large investments. The objective of this work was to discover opportunities in the small-scale hydropower and wind power sectors of Ukraine and, more importantly, to demonstrate the economic expediency of such investments. The thesis covers the major issues concerning entry into the Ukrainian power market as a foreign investor. It provides basic information about the structure of the power market, the state of the renewables sector in Ukraine, the development of the power sector in the regions, the functioning of the Wholesale Electricity Market, the formation of electricity prices, and the possibilities for implementing the Joint Implementation mechanism, while most attention is nevertheless concentrated on opportunities in the small-scale hydro and wind power sectors. The theoretical part of the study disclosed that the Crimean peninsula has excellent wind conditions and could be a prospective area for wind project development. The investment analysis revealed that project profits would be excellent if a green tariff for renewable energy were adopted. At the moment, uncertainty about the adoption of the green tariff law brings additional risk to the projects and complicates any investment decision.
Abstract:
Background: Assessing the costs of treating disease is necessary to demonstrate cost-effectiveness and to estimate the budget impact of new interventions and therapeutic innovations. However, there are few comprehensive studies on the resource use and costs associated with lung cancer patients in clinical practice in Spain or internationally. The aim of this paper was to assess the hospital cost associated with lung cancer diagnosis and treatment by histology, type of cost and stage at diagnosis in the Spanish National Health Service. Methods: A retrospective, descriptive analysis of resource use and a direct medical cost analysis were performed. Resource utilisation data were collected from patient files at nine teaching hospitals. From a hospital budget impact perspective, the aggregate and mean costs per patient were calculated over the first three years following diagnosis or up to death. Both aggregate and mean costs per patient were analysed by histology, stage at diagnosis and cost type. Results: A total of 232 cases of lung cancer were analysed, of which 74.1% corresponded to non-small cell lung cancer (NSCLC) and 11.2% to small cell lung cancer (SCLC); 14.7% had no cytohistological confirmation. The mean cost per patient in NSCLC ranged from 13,218 Euros in Stage III to 16,120 Euros in Stage II. The main cost components were chemotherapy (29.5%) and surgery (22.8%). Advanced disease stages were associated with a decrease in the relative weight of surgical and inpatient care costs but an increase in chemotherapy costs. In SCLC patients, the mean cost per patient was 15,418 Euros for limited disease and 12,482 Euros for extensive disease. The main cost components were chemotherapy (36.1%) and other inpatient costs (28.7%). In both groups, the Kruskal-Wallis test did not show statistically significant differences in mean cost per patient between stages.
Conclusions: This study provides the costs of lung cancer treatment based on patient file reviews, with chemotherapy and surgery accounting for the major components of costs. This cost analysis is a baseline study that will provide a useful source of information for future studies on cost-effectiveness and on the budget impact of different therapeutic innovations in Spain.
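The Kruskal-Wallis test used above to compare cost per patient between stages can be computed from pooled ranks; a minimal sketch with invented cost data (ignoring the tie correction, and not the study's actual figures):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic: rank all observations together, then
    H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1).
    No tie correction; assumes all values are distinct."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    r = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * r - 3 * (n + 1)

# Invented per-patient costs (Euros) for three stages, not the study data:
stage_i = [12000, 13500, 11800, 14200]
stage_ii = [15900, 16400, 15700, 16500]
stage_iii = [13000, 13400, 12900, 13600]

h = kruskal_wallis_h(stage_i, stage_ii, stage_iii)
# H is compared with the chi-square critical value for k-1 = 2 degrees
# of freedom (5.99 at the 5% level) to decide significance.
print(round(h, 2))  # → 7.42
```

With these invented numbers the test would reject equality; in the study itself the test found no significant difference between stages.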
Abstract:
This thesis is a preliminary study targeting South-Eastern Finland. The objective was to determine the financial and functional readiness and willingness of the small and medium-sized enterprises of the region to manufacture and sell distributed bioenergy solutions collaboratively as a business network. In this case, these solutions mean small-scale (0.5 - 3 MW) woodchip-fired combined heat and power (CHP) plants. South-Eastern Finland has suffered a decline in recent years, mostly due to the problems of the region's traditionally strong industrial sector, the paper industry. Local small and medium-sized companies will have to find new ways to survive the toughening competition. A group of 40 companies from suitable industries was selected, and a financial and comparative analysis was performed on them. Additionally, 19 managing directors of the companies were selected for interviews to find out their views on networking, its requirements and advantages, and their general interest in it. The studied companies were found to be generally in fairly good financial condition and, in that sense, fit for networking activities. The interviews revealed that the companies were capable of producing all the needed elements for the plants in question, and the managers appeared to be very interested in and have a positive attitude towards such business networks. Thus it can be said that the small and medium-sized companies of the region are capable of and interested in manufacturing small bio-CHP plants as a production network.
Abstract:
The overall purpose of this thesis was to increase knowledge of the biogeochemistry of rural acid sulphate (AS) soil environments and urban forest ecosystems near small towns in Western Finland. In addition, the potential causal relationship between the distribution of AS soils and the geographical occurrence of multiple sclerosis (MS) was assessed based on a review of existing literature and data. Acid sulphate soils, which occupy an area of approximately 17–24 million hectares worldwide, are regarded as the nastiest soils in the world. Regardless of their geographical location, these soils pose a great threat to the surrounding environment if disturbed. The abundant metal-rich acid drainage from Finnish AS soils, which results from sulphide oxidation due to artificial farmland drainage, has significant but spatially and temporally variable ecotoxicological impacts on the biodiversity and community structure of fish, benthic invertebrates and macrophytes. This has resulted in mass fish kills and even the eradication of sensitive fish species in affected waters. Moreover, previous investigations demonstrated significantly enriched concentrations of Co, Ni, Mn and Al - metals which are abundantly mobilised in AS soils - in agricultural crops (timothy grass and oats), and approximately 50 times higher concentrations of Al in cow milk originating from AS soils in Western Finland. Nevertheless, the results presented here demonstrate, in general, relatively moderate metal concentrations in oats and cabbage grown on AS soils in Western Finland, although some of the studied fields showed anomalous values of metals (e.g. Co and Ni) in both the soil and the target plants (especially oats), similar to the previous investigations.
The results indicated that the concentrations of Co, Ni, Mn and Zn in oats and of Co and Zn in cabbage were governed by soil geochemistry, as these metals were correlated with the corresponding concentrations extracted from the soil by NH4Ac-EDTA and NH4Ac, respectively. The concentrations of Cu and Fe in oats and cabbage were uncorrelated with the easily soluble concentrations in the soils, suggesting that biological processes (e.g. plant-root processes) overshadow geochemical variation. The concentrations of K and Mg in cabbage, which showed a low spread and were strongly correlated with the NH4Ac-extractable contents in the soil, were governed by both the bioavailable fractions in the topsoil and plant-uptake mechanisms. The plant's ability to regulate its uptake of Ca and P (e.g. through root exudates) seemed to be more important than the influence of soil geochemistry. The distribution of P, K, Ca, Mg, Mn and S within humus, moss and needles in and around small towns was to a high degree controlled by biological cycling, as indicated by the low correlation coefficients for P, K, Ca, Mg and S between humus and moss, and by the low spread of these nutrients in moss and needles. The concentration variations of elements in till are mainly due to natural processes (e.g. intrusions, weathering, mineralogical variations in the bedrock). There was a strong spatial pattern for B in humus, moss and needles, which was suggested to be associated with anthropogenic emissions from nearby town centres. Geogenic dust affected the spatial distribution of Fe and Cr in moss, while natural processes governed the Fe anomaly found in the needles. The spatial accumulation patterns of Zn, Cd, Cu, Ni and Pb in humus and moss were strong and diverse, and related to current industry, the former steel industry, coal combustion, and natural geochemical processes. An intriguing Cu anomaly was found in moss.
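The soil-to-plant relationships above rest on simple correlations between extractable soil concentrations and plant-tissue concentrations. A minimal sketch of such a check, assuming hypothetical plot data (the numbers below are illustrative, not the thesis measurements):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical field data: NH4Ac-EDTA-extractable Co in soil (mg/kg)
# and Co measured in oats grown on the same plots.
soil_co = [0.8, 1.2, 2.5, 3.1, 4.0, 5.6]
oat_co = [0.05, 0.07, 0.12, 0.15, 0.21, 0.27]

r = pearson_r(soil_co, oat_co)
# A strong positive r suggests soil geochemistry governs plant uptake;
# an r near zero suggests plant-level regulation dominates.
```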
Since the anomaly was located close to a main railway line, and because the railway line's electric cables are made of Cu, it was suggested that the anomaly results from corrosion of these cables. In Western Finland, where AS soils are particularly abundant and enrich the metal concentrations of stream waters, cow milk and to some extent crops, an environmental risk assessment would be warranted to elucidate whether the metal dispersion affects human health. Within this context, a topic of concern is the distribution of multiple sclerosis, as high MS prevalence rates are found in the main area of AS soils. Regionally, the AS soil type in the Seinäjoki area has been demonstrated to be very severe in terms of metal leaching; this area also shows one of the highest MS rates reported worldwide. On a local scale, these severe AS soil types coincide well with the corresponding MS clustering along the Kyrönjoki River in Seinäjoki. There are reasons to suspect that these spatial correlations are causal, as multiple sclerosis has been suggested to result from a combination of genetic and environmental factors.
Resumo:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages these methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows.
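The equivalence noted above between bipartite ranking and AUC can be made concrete: AUC equals the fraction of positive-negative example pairs that the scoring function orders correctly, counting ties as half correct. A minimal sketch with illustrative data:

```python
import numpy as np

def auc_pairwise(scores, labels):
    """AUC computed as the fraction of positive-negative pairs ranked
    correctly, with ties counted as half a correct pair."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every positive score against every negative score.
    diff = pos[:, None] - neg[None, :]
    correct = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return correct / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.6, 0.2]
labels = [1, 1, 0, 1, 0]
print(auc_pairwise(scores, labels))  # → 1.0: every positive outranks every negative
```

Maximizing AUC is thus exactly the bipartite instance of minimizing the pairwise misranking probability.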
First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the best-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
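As an illustration of the pairwise least-squares idea, the linear case admits a closed-form solution: the sum of squared pairwise ranking errors can be rewritten with a graph Laplacian, after which the regularized minimizer follows from one linear solve. This is a generic sketch of the technique with synthetic data, not the thesis's RankRLS implementation:

```python
import numpy as np

def rankrls_linear(X, y, lam=1.0):
    """Closed-form linear pairwise least-squares ranker (sketch).

    Minimizes  sum_{i,j} ((y_i - y_j) - (w.x_i - w.x_j))**2 + lam*||w||**2.
    The pairwise loss equals 2 * (Xw - y)^T L (Xw - y) with the Laplacian
    L = n*I - 1*1^T, giving the normal equations solved below.
    """
    n, d = X.shape
    L = n * np.eye(n) - np.ones((n, n))
    return np.linalg.solve(X.T @ L @ X + lam * np.eye(d), X.T @ L @ y)

# Synthetic ranking data: scores generated by a known linear criterion.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5])  # hypothetical "goodness" values
w = rankrls_linear(X, y, lam=0.1)
s = X @ w  # predicted scores should order the examples almost exactly as y does
```

The matrix-algebra shortcut here (a single Laplacian-weighted solve instead of looping over all O(n²) pairs) is the kind of structure the thesis exploits for efficient training and cross-validation.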
Resumo:
Most studies measuring plant transpiration, especially in woody fruit species, rely on methods that supply heat to the trunk. This study aimed to calibrate the Thermal Dissipation Probe (TDP) method for estimating transpiration, to study the effects of natural thermal gradients, and to determine the relation between outside stem diameter and xylem area in young 'Valencia' orange plants. TDPs were installed in 40 fifteen-month-old orange plants, grown in 500 L boxes in a greenhouse. A correction for the natural thermal differences (DTN), estimated from two unheated probes, was tested. The area of the conductive section was related to the outside stem diameter by polynomial regression. The sap-flow estimation equation was calibrated using lysimeter measurements of a representative plant as the standard: the slope coefficient of the equation was adjusted by minimizing the absolute deviation between estimated sap flow and the daily transpiration measured by the lysimeter. Based on these results, it was concluded that the TDP method, with the original calibration adjusted and the DTN corrected, was effective for assessing transpiration.
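The calibration step above can be sketched numerically. Assuming the widely used Granier form of the TDP relation (sap flux density u = a·K^b, with K derived from the probe temperature difference), the abstract describes rescaling the coefficient so that daily sap flow matches the lysimeter reading; a one-dimensional search minimizing the absolute deviation mimics that. All names and numbers below are illustrative assumptions, not the thesis data:

```python
import numpy as np

B = 1.231  # exponent of Granier's original relation u = a * K**B

def daily_sap_flow(a, K, xylem_area_cm2, dt_seconds):
    """Integrate sap flux density (cm3 cm-2 s-1) over the day -> mL/day."""
    return float(np.sum(a * K**B) * xylem_area_cm2 * dt_seconds)

def calibrate_a(K, xylem_area_cm2, dt_seconds, lysimeter_mL):
    """Grid search for the coefficient 'a' minimizing the absolute
    deviation between estimated daily sap flow and the lysimeter value."""
    candidates = np.linspace(0.001, 0.05, 4901)
    devs = [abs(daily_sap_flow(a, K, xylem_area_cm2, dt_seconds) - lysimeter_mL)
            for a in candidates]
    return float(candidates[int(np.argmin(devs))])

# Hypothetical half-hourly K values over one day and a lysimeter reading.
K = np.sin(np.linspace(0, np.pi, 48))**2 * 0.6
a_cal = calibrate_a(K, xylem_area_cm2=4.0, dt_seconds=1800, lysimeter_mL=1500.0)
```

Because the daily flow is linear in the coefficient, the deviation has a single minimum and the grid search converges to the lysimeter-matched value.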