974 results for Mismatched uncertainties


Relevance: 10.00%

Abstract:

This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is a set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage of the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 (GMT) is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes, and previously disregarded usable observations were found in the press. The improved collection of IDPs for the 1888 earthquake shows that this event was a rare occurrence in the area: in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 (GMT) were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, namely Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. Although its magnitude of about 4.6 was minor on the global scale, this event posed the indirect hazard of fire. The distribution of slightly damaging intensities was larger than previously outlined, which may have resulted from the amplification of ground shaking in the soft soils of the coast and river valleys, where most of the population lived. The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and to assess methodologies for dealing with macroseismic intensity. The intensity data were evaluated using correspondence analysis, and different approaches, such as gridding, were tested to estimate the macroseismic field from intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration; a more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and the instrumental era was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate the seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
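
As an illustration of gridding an ordinal quantity, the following is a minimal sketch that aggregates irregularly spaced intensity data points onto a regular grid using cell medians, which respect the ordinal nature of intensity; function names and the example data are illustrative, not the thesis's implementation:

```python
# Minimal sketch: aggregating irregularly spaced intensity data points (IDPs)
# onto a regular grid. Because macroseismic intensity is an ordinal quantity,
# each cell is summarised by its median rather than its mean. Function names
# and the example data are illustrative, not the thesis's implementation.
import numpy as np

def grid_median_intensity(lon, lat, intensity, cell_deg=0.5):
    """Return cell centres and the median IDP intensity in each cell."""
    ix = np.floor(np.asarray(lon) / cell_deg).astype(int)
    iy = np.floor(np.asarray(lat) / cell_deg).astype(int)
    cells = {}
    for cx, cy, i in zip(ix, iy, intensity):
        cells.setdefault((cx, cy), []).append(i)
    centres, medians = [], []
    for (cx, cy), values in cells.items():
        centres.append(((cx + 0.5) * cell_deg, (cy + 0.5) * cell_deg))
        medians.append(np.median(values))   # median respects the ordinal scale
    return np.array(centres), np.array(medians)

# Illustrative IDPs: longitude, latitude, intensity class (integer)
lon = [24.1, 24.3, 24.2, 25.8]
lat = [65.0, 65.1, 65.4, 64.9]
I = [4, 5, 4, 3]
print(grid_median_intensity(lon, lat, I))
```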

Relevance: 10.00%

Abstract:

Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumour eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, so it is crucial that the actual dose delivered within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From a hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, stays within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC programme that defines the measurement interval and action levels. Dose calculation algorithms must be configured for the accelerators using measured beam data, and the uncertainty of such data sets a limit on the best achievable calculation accuracy. All these dosimetric measurements require experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity-modulated radiation therapy (IMRT) than in conventional RT, because steep dose gradients are produced within, or only a few millimetres from, healthy tissues close to the targeted volume. This thesis concentrated on the quality of dosimetric measurements, the efficacy of dosimetric QC programmes, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head-and-neck IMRT. A method was developed for estimating how the choice of dosimetric QC programme affects the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC programme. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and recommendations were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head-and-neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling the lowering of action levels and the prolongation of the measurement interval from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, recommendations and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
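
To make the QC trade-off concrete, here is a minimal sketch, under an assumed linear-drift model with illustrative numbers (not the thesis's model or data), of how the measurement interval and action level could feed into a combined 1 SD output uncertainty:

```python
# Minimal sketch: comparing dosimetric QC programmes by combining, in
# quadrature, the reproducibility of the output measurement with the
# undetected drift allowed by the programme. The linear-drift model and
# all numbers are illustrative assumptions, not the thesis's model or data.
import math

def combined_output_uncertainty(sigma_meas, action_level, drift_per_month, interval_months):
    """1 SD estimate (in % of dose) of accelerator output uncertainty."""
    # Drift accumulated between QC checks, capped at the action level
    # (larger deviations would trigger a correction), treated as uniform.
    d = min(drift_per_month * interval_months, action_level)
    sigma_drift = d / math.sqrt(3.0)        # SD of a uniform [-d, d] distribution
    return math.hypot(sigma_meas, sigma_drift)

for months in (1, 6):                       # compare measurement intervals
    u = combined_output_uncertainty(sigma_meas=0.5, action_level=2.0,
                                    drift_per_month=0.2, interval_months=months)
    print(f"{months}-month interval: ~{u:.2f} % (1 SD)")
```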

Relevance: 10.00%

Abstract:

An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery and for the long-term identification of scarcely observed asteroids over apparitions, a task that has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. The aim of identification is therefore to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces within the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region of the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
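
As a sketch of how log-linear candidate linking can work, the following assumes each observation set has been reduced to a low-dimensional point (here simply a mean sky position, a placeholder for the actual dimensionality reduction) and uses a kd-tree to find neighbouring sets; all data are synthetic:

```python
# Minimal sketch: log-linear candidate linking. Each observation set is
# reduced to a low-dimensional point (here simply a mean sky position,
# a placeholder for the actual dimensionality reduction) and a kd-tree
# returns neighbouring sets in O(n log n). All data are synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 10_000                                   # number of observation sets
# Illustrative "addresses": (RA, Dec) in degrees for each set.
addresses = rng.uniform([0.0, -10.0], [30.0, 10.0], size=(n, 2))

tree = cKDTree(addresses)                    # build: O(n log n)
pairs = tree.query_pairs(r=0.1)              # candidate linkages within 0.1 deg
print(f"{len(pairs)} candidate pairs to pass on to full orbital inversion")
```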

Relevance: 10.00%

Abstract:

Atmospheric aerosol particles have a significant impact on air quality, human health and the global climate. The climatic effects of secondary aerosol are currently among the largest uncertainties limiting the scientific understanding of past and future climate change. To better estimate the climatic importance of secondary aerosol particles, detailed information is required on atmospheric particle formation mechanisms and on the vapours forming the aerosol. In this thesis we studied these issues by applying novel instrumentation in a boreal forest to obtain direct information on the very first steps of atmospheric nucleation and particle growth. Additionally, we used detailed laboratory experiments and process modelling to determine condensational growth properties, such as saturation vapour pressures, of dicarboxylic acids, organic acids often found in atmospheric samples. Based on our studies, we came to four main conclusions: 1) in the boreal forest region, both sulphurous compounds and organics are needed for secondary particle formation, the former contributing mainly to particle formation and the latter to growth; 2) a persistent pool of molecular clusters, both neutral and charged, is present and participates in atmospheric nucleation processes in boreal forests; 3) neutral particle formation seems to dominate over ion-mediated mechanisms, at least in the boreal forest boundary layer; 4) the subcooled-liquid saturation vapour pressures of C3-C9 dicarboxylic acids are of the order of 10^-5 to 10^-3 Pa at atmospheric temperatures, indicating that a mixed pre-existing particulate phase is required for their condensation under atmospheric conditions. The work presented in this thesis provides tools to better quantify the aerosol source provided by secondary aerosol formation. The results are particularly useful when estimating, for instance, anthropogenic versus biogenic influences and the fractions of secondary aerosol formation explained by neutral or ion-mediated nucleation mechanisms, at least in environments where the average particle formation rates are of the order of some tens of particles per cubic centimetre or lower. However, as the factors driving secondary particle formation are likely to vary between environments, measurements of atmospheric nucleation and particle growth are needed from around the world to better describe secondary particle formation and to assess its climatic effects on a global scale.
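
For scale, a saturation vapour pressure can be converted to an equilibrium gas-phase mass concentration with the ideal gas law; the sketch below uses an illustrative C6 diacid molar mass and temperature, not data from the thesis:

```python
# Minimal sketch: converting a subcooled-liquid saturation vapour pressure
# into an equilibrium gas-phase mass concentration via the ideal gas law,
# C_sat = p_sat * M / (R * T). The molar mass (adipic acid, a C6 diacid)
# and temperature are illustrative, not data from the thesis.
R = 8.314   # gas constant, J mol^-1 K^-1
M = 0.146   # molar mass, kg mol^-1
T = 278.0   # temperature, K

for p_sat in (1e-5, 1e-3):                # Pa, the range quoted above
    c_sat = p_sat * M / (R * T)           # kg m^-3
    print(f"p_sat = {p_sat:.0e} Pa  ->  C_sat ~ {c_sat * 1e9:.2f} ug m^-3")
```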

Relevance: 10.00%

Abstract:

This thesis describes methods for the reliable identification of hadronically decaying tau leptons in the search for the heavy Higgs bosons of the minimal supersymmetric standard model of particle physics (MSSM). The identification of hadronic tau lepton decays, i.e. tau jets, is applied to the gg->bbH, H->tautau and gg->tbH+, H+->taunu processes to be searched for in the CMS experiment at the CERN Large Hadron Collider. Of all the event selections applied in these final states, tau-jet identification is the single most important criterion for separating the tiny Higgs boson signal from the large number of background events. Tau-jet identification is studied with methods based on a signature of low charged-track multiplicity, the containment of the decay products within a narrow cone, an isolated electromagnetic energy deposition, a non-zero tau lepton flight path, the absence of electrons, muons and neutral hadrons in the decay signature, and a relatively small tau lepton mass compared to the mass of most hadrons. Furthermore, in the H+->taunu channel, helicity correlations are exploited to separate the signal tau jets from those originating from W->taunu decays. Since many of these identification methods rely on the reconstruction of charged-particle tracks, the systematic uncertainties resulting from the mechanical tolerances of the tracking sensor positions are estimated with care. The tau-jet identification and other standard selection methods are applied to the search for the heavy neutral and charged Higgs bosons in the H->tautau and H+->taunu decay channels. For the H+->taunu channel, the tau-jet identification is redone and optimized with a more recent and more detailed event simulation than previously used in the CMS experiment. Both decay channels are found to be very promising for the discovery of the heavy MSSM Higgs bosons. The Higgs boson(s), whose existence has not yet been experimentally verified, are a part of the standard model and its most popular extensions; they are a manifestation of a mechanism that breaks the electroweak symmetry and generates masses for particles. Since the H->tautau and H+->taunu decay channels are important for the discovery of the Higgs bosons in a large region of the permitted parameter space, the analysis described in this thesis serves as a probe of the properties of the microcosm of particles and their interactions at energy scales beyond the standard model of particle physics.
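
A minimal sketch of a cut-based tau-jet selection of the kind listed above; the record fields and every threshold are illustrative placeholders, not the CMS selection used in the thesis:

```python
# Minimal sketch: a cut-based hadronic tau-jet selection of the kind listed
# above. The record fields and every threshold are illustrative placeholders,
# not the CMS selection actually used in the thesis.
from dataclasses import dataclass

@dataclass
class Jet:
    n_signal_tracks: int    # charged tracks in the narrow signal cone
    isolation_et: float     # GeV, electromagnetic ET in the isolation annulus
    flight_path_sig: float  # significance of the reconstructed flight path
    mass: float             # GeV, invariant mass of the visible decay products
    has_lepton: bool        # matched to a reconstructed electron or muon

def is_tau_jet(jet: Jet) -> bool:
    return (jet.n_signal_tracks in (1, 3)   # low charged-track multiplicity
            and jet.isolation_et < 1.0      # isolated electromagnetic deposition
            and jet.flight_path_sig > 2.0   # non-zero tau flight path
            and jet.mass < 1.8              # small mass compared to most hadrons
            and not jet.has_lepton)         # no electron or muon in the signature

print(is_tau_jet(Jet(1, 0.4, 3.1, 0.9, False)))   # -> True for this example
```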

Relevance: 10.00%

Abstract:

Since the 1990s, European policy strategies have stressed the mutual responsibility and joint action of all societal branches in preventing social problems. Network policy is an integral part of the new governance, generating a new kind of dependency between the state and civil society in formulating and adhering to policy goals. Using empirical group-interview data collected in Helsinki, the capital of Finland, this case study explores local multi-agency groups and their efforts to prevent the exclusion of children and young people. These groups consist mainly of professionals from the social office, youth clubs and schools. The study shows that these multi-agency groups serve as forums for professional negotiation where the intervention dilemma of liberal society can be addressed: the question of when it is justified and necessary for an authority or network to intervene in the life of children and their families, and how this is to be done. An element of tension is introduced into multi-agency prevention by the fact that its objectives and means are anchored both in the old tradition of the welfare state and in communitarian rhetoric. Thus multi-agency groups mend deficiencies in wellbeing and normalcy while at the same time trying to co-ordinate the creation of the new community, which will hopefully reduce the burden on the public sector. Some of the professionals interviewed were keen to see new and even forceful interventions to guide the youth or to compel parents to assume their responsibilities; in group discussions, this approach often met resistance. The deeper the social problems the professionals worked with, the more solidarity they showed for the families or young people in need. Nothing seems to reassure professionals and legitimise their professional position better than advocating for the under-privileged against the uncertainties of life and the structural inequalities of society. The groups that grappled with the clear, specific needs of certain children and families were the most capable of co-operation. This requires the approval of different powers and the expertise of distinct professions, as well as a forum for negotiating case-specific actions in professional confidentiality. The ideals of primary prevention for everyone and value discussions alone fail to inspire sufficient multi-agency co-operation. The ideal of a network seems to give word and shape to societal goals that are difficult or even impossible to reach but are nevertheless yearned for: a mutual understanding of the good life, close social relationships, mutual trust and active agency for all citizens. Individualisation, the multiplicity of lifestyles and the possibility to choose have come true in such a way that the very idea of a mutual and binding network can be attained only momentarily and between restricted participants. In conclusion, uniting professional networks that negotiate intervention dilemmas with citizen networks based on changing compassions and feelings of moral superiority seems impossible. Rather, one should encourage openness to scrutiny among tangential or contradicting groups, networks and communities. Key words: network policy, prevention of exclusion, multi-agency groups, young people

Relevance: 10.00%

Abstract:

Objective: To identify key stakeholder preferences and priorities when considering a national healthcare-associated infection (HAI) surveillance programme, through the use of a discrete choice experiment (DCE). Setting: Australia does not have a national HAI surveillance programme. An online web-based DCE was developed and made available to participants in Australia. Participants: A sample of 184 purposively selected healthcare workers, based on their senior leadership role in infection prevention in Australia. Primary and secondary outcomes: A DCE requiring respondents to select one HAI surveillance programme over another, based on five different characteristics (attributes), in repeated hypothetical scenarios. Data were analysed using a mixed logit model to evaluate preferences and identify the relative importance of each attribute. Results: A total of 122 participants completed the survey (response rate 66%) over a 5-week period. Excluding 22 who mismatched a duplicate choice scenario, analysis was conducted on 100 responses. The key findings were that 72% of stakeholders exhibited a preference for a surveillance programme with continuous mandatory core components (mean coefficient 0.640, p<0.01), 65% for a standard surveillance protocol in which patient-level data are collected on infected and non-infected patients (mean coefficient 0.641, p<0.01), and 92% for hospital-level data that are publicly reported on a website and not associated with financial penalties (mean coefficient 1.663, p<0.01). Conclusions: The use of the DCE has provided a unique insight into key stakeholder priorities when considering a national HAI surveillance programme. The application of a DCE offers a meaningful method to explore and quantify preferences in this setting.
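
For readers unfamiliar with the method, the sketch below estimates a conditional logit, the fixed-coefficient core of the mixed logit used here, on synthetic two-alternative choice data; all names and numbers are illustrative:

```python
# Minimal sketch: a conditional (binary) logit fit, the fixed-coefficient
# core of the mixed logit used in the study, on synthetic two-alternative
# choice data. Attribute values, coefficients and names are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_obs, n_attr = 500, 3
x_diff = rng.normal(size=(n_obs, n_attr))       # attribute differences, A - B
beta_true = np.array([0.64, 0.64, 1.66])        # scale echoes the coefficients above
p_choose_a = 1.0 / (1.0 + np.exp(-(x_diff @ beta_true)))
y = (rng.uniform(size=n_obs) < p_choose_a).astype(float)   # 1 = chose A

def neg_log_lik(beta):
    u = x_diff @ beta                           # utility difference of A over B
    # Numerically stable -log P(observed choice) for a binary logit.
    return np.sum(y * np.logaddexp(0.0, -u) + (1.0 - y) * np.logaddexp(0.0, u))

fit = minimize(neg_log_lik, np.zeros(n_attr), method="BFGS")
print("estimated coefficients:", fit.x.round(2))
```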

Relevance: 10.00%

Abstract:

Evaluation practices have pervaded Finnish society and the welfare state, and at the same time the term 'effectiveness' has become a powerful organising concept in welfare state activities. The aim of the study is to analyse how the outcome-oriented society came into being through historical processes, answering the question of how social policy and welfare state practices were brought under the governance of the concept of 'effectiveness'. Discussions of social imagination, Michel Foucault's conceptions of the history of the present and of governmentality, genealogy and archaeology, along with Ian Hacking's notions of dynamic nominalism and styles of reasoning, are used as the conceptual and methodological starting points of the study. In addition, Luc Boltanski's and Laurent Thévenot's ideas of 'orders of worth', regimes of evaluation in everyday life, are employed. Usually evaluation is conceptualised as an autonomous epistemic culture and practice (evaluation as epistemic practice), but here evaluation is understood as knowledge-creation processes elementary to different epistemic practices (evaluation in epistemic practices). The emergence of epistemic cultures and styles of reasoning about the effectiveness or impacts of welfare state activities is analysed through Finnish social policy and social work research. The study uses case studies that represent debates and empirical research dealing with the effectiveness and quality of social services and social work. While uncertainty and doubts over the effects and consequences of welfare policies have always been present in discourses about social policy, the theme has not been much acknowledged in social policy research. To resolve these uncertainties, eight styles of reasoning about such effects have emerged over time: the statistical, goal-based, needs-based, experimental, interaction-based, performance measurement, auditing and evidence-based styles of reasoning. Social policy research has contributed in various ways to the creation of these epistemic practices. The transformation of the welfare state, starting at the end of the 1980s, increased market orientation and trimmed public welfare responsibilities, and led to the adoption of the New Public Management (NPM) style of leadership. Due to these developments the concept of effectiveness made a breakthrough, and new accountabilities, with their knowledge tools for performance measurement, auditing and evidence-based styles of reasoning, became more dominant in the ruling of the welfare state. The social sciences and evaluation have developed a heteronomous relation with each other, although divergent tendencies remain between them. Key words: evaluation, effectiveness, social policy, welfare state, public services, sociology of knowledge

Relevance: 10.00%

Abstract:

At the Tevatron, the total p̅p cross-section has been measured by CDF at 546 GeV and 1.8 TeV, and by E710/E811 at 1.8 TeV. The two results at 1.8 TeV disagree by 2.6 standard deviations, introducing large uncertainties into extrapolations to higher energies. At the LHC, the TOTEM collaboration is preparing to resolve the ambiguity by measuring the total pp cross-section with a precision of about 1%. As at the Tevatron experiments, the luminosity-independent method based on the optical theorem will be used. The Tevatron experiments have also performed a vast range of studies of soft and hard diffractive events, partly with antiproton tagging by Roman Pots, partly with rapidity-gap tagging. At the LHC, the combined CMS/TOTEM experiments will carry out their diffractive programme with an unprecedented rapidity coverage and Roman Pot spectrometers on both sides of the interaction point. The physics menu comprises detailed studies of soft diffractive differential cross-sections, diffractive structure functions, rapidity-gap survival and exclusive central production by double Pomeron exchange.
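
For reference, the luminosity-independent method combines the optical theorem with the observed elastic and inelastic rates; a standard form is:

```latex
% Luminosity-independent total cross-section: rho is the ratio of the
% real to the imaginary part of the forward elastic amplitude, N_el and
% N_inel the elastic and inelastic event rates.
\sigma_{\mathrm{tot}} = \frac{16\pi}{1+\rho^{2}}\,
  \frac{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}{N_{\mathrm{el}}+N_{\mathrm{inel}}}
```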

Relevance: 10.00%

Abstract:

At present the operating environment of sawmills in Europe is changing, and in many countries there are uncertainties related to raw material supply. The changes in the operating environment of roundwood markets, and the effects that follow from them, have raised several interesting research questions. Lately, new factors have been influencing the roundwood markets, such as the increasing interest in wood-based energy and the implementation of new energy policies, as well as changes in wood trade flows that affect the domestic markets of many countries. This Master’s thesis studies the ability of the Finnish roundwood markets to adapt to a changing operating environment, aiming to produce an up-to-date analysis of new development trends. The study concentrates on the roundwood markets from the viewpoint of the sawmill industry, since the industry depends on the functioning of the markets and sawmills are strongly affected by changes in them. To facilitate international comparison, the study compares the Finnish and Austrian roundwood markets and analyses the changes taking place in the two countries. Finland and Austria share rather similar roundwood market structures, forest resources, forest ownership and production of roundwood and sawnwood, and both are major exporters of forest industry products. In this study, changes in the operating environment of the sawmill industry in Finland and in Austria are compared with each other in order to recognise the main similarities and differences between the countries; both the development possibilities and the challenges that follow from the changes are discussed. The aim of the study is to define the main challenges and possibilities faced by the actors on the markets and to find new perspectives from which to approach them. The study was implemented as a qualitative study. The theoretical framework describes the operating environment of wood markets from the viewpoint of the sawmill industry and represents the effects of supply and demand on the wood markets. The primary research material was gathered by interviewing high-level experts in forestry and the sawmill industry in both Finland and Austria. The aim was to obtain as extensive a country-specific view of the markets as possible; hence the interviewees represented different parties on the markets. After creating country-specific profiles based on the theoretical framework, a cross-country comparison was carried out. As a result, the main similarities and differences in the operating environments and roundwood markets of Finland and Austria were recognised, and the main challenges and possibilities were identified. The results offer a broad analysis of the main similarities and differences between the wood markets of Finland and Austria and their operating environments, as well as of the challenges and possibilities faced on the markets.

Relevance: 10.00%

Abstract:

This article presents the first measurement of the ratio of branching fractions B(Λb0→Λc+μ-ν̅μ)/B(Λb0→Λc+π-). Measurements in two control samples using the same technique, B(B̅0→D+μ-ν̅μ)/B(B̅0→D+π-) and B(B̅0→D*(2010)+μ-ν̅μ)/B(B̅0→D*(2010)+π-), are also reported. The analysis uses data from an integrated luminosity of approximately 172 pb-1 of pp̅ collisions at √s = 1.96 TeV, collected with the CDF II detector at the Fermilab Tevatron. The relative branching fractions are measured to be B(Λb0→Λc+μ-ν̅μ)/B(Λb0→Λc+π-) = 16.6 ± 3.0(stat) ± 1.0(syst) +2.6/-3.4(PDG) ± 0.3(EBR), B(B̅0→D+μ-ν̅μ)/B(B̅0→D+π-) = 9.9 ± 1.0(stat) ± 0.6(syst) ± 0.4(PDG) ± 0.5(EBR), and B(B̅0→D*(2010)+μ-ν̅μ)/B(B̅0→D*(2010)+π-) = 16.5 ± 2.3(stat) ± 0.6(syst) ± 0.5(PDG) ± 0.8(EBR). The uncertainties are from statistics (stat), internal systematics (syst), world averages of measurements published by the Particle Data Group or subsidiary measurements in this analysis (PDG), and unmeasured branching fractions estimated from theory (EBR), respectively. This article also presents measurements of the branching fractions of four new Λb0 semileptonic decays: Λb0→Λc(2595)+μ-ν̅μ, Λb0→Λc(2625)+μ-ν̅μ, Λb0→Σc(2455)0π+μ-ν̅μ, and Λb0→Σc(2455)++π-μ-ν̅μ, relative to the branching fraction of the Λb0→Λc+μ-ν̅μ decay. Finally, the transverse-momentum distribution of Λb0 baryons produced in pp̅ collisions is measured and found to be significantly different from that of B̅0 mesons, which results in a modification of the production cross-section ratio σΛb0/σB̅0 with respect to the CDF I measurement.

Relevance: 10.00%

Abstract:

We explore the application of pseudo-time marching schemes, involving either deterministic integration or stochastic filtering, to solve the inverse problem of parameter identification of large-dimensional structural systems from partial and noisy measurements of strictly static response. Solutions of such non-linear inverse problems could provide useful local stiffness variations and do not have to confront modeling uncertainties in damping, an important, yet inadequately understood, aspect of dynamic system identification problems. The usual least-squares solution is through a regularized Gauss-Newton method (GNM), whose results are known to depend sensitively on the regularization parameter and the data noise intensity. Finite-time, recursive integration of the pseudo-dynamical GNM (PD-GNM) update equation addresses the major numerical difficulty associated with the near-zero singular values of the linearized operator and gives results that are not sensitive to the time step of integration. We therefore also propose a pseudo-dynamic stochastic filtering approach for the same problem using a parsimonious representation of states, and specifically solve the linearized filtering equations through a pseudo-dynamic ensemble Kalman filter (PD-EnKF). For multiple sets of measurements involving various load cases, we expedite the PD-EnKF by proposing an inner iteration within every time step. Results obtained through the PD-EnKF and recursive integration are compared with those from the conventional GNM, showing that the PD-EnKF is the best performer, with little sensitivity to the process noise covariance and reconstructions with fewer artifacts even when the ensemble size is small.
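
A minimal sketch of the ensemble Kalman update at the heart of such a filter, applied to a linear stand-in for the static response operator; all sizes and data are synthetic, and this is not the paper's PD-EnKF implementation:

```python
# Minimal sketch: pseudo-time marching with an ensemble Kalman filter (EnKF)
# update for parameter identification from static response data. The linear
# operator H is a stand-in for the linearized structural response; all sizes
# and data are synthetic, and this is not the paper's PD-EnKF implementation.
import numpy as np

rng = np.random.default_rng(2)
n_param, n_meas, n_ens = 8, 12, 50

H = rng.normal(size=(n_meas, n_param))         # linearized response operator
theta_true = rng.normal(size=n_param)          # "true" stiffness parameters
R = 0.01 * np.eye(n_meas)                      # measurement noise covariance
d = H @ theta_true + rng.multivariate_normal(np.zeros(n_meas), R)

ensemble = rng.normal(size=(n_param, n_ens))   # prior parameter ensemble

def enkf_update(ens):
    pred = H @ ens                                       # predicted data
    A = ens - ens.mean(axis=1, keepdims=True)            # parameter anomalies
    Y = pred - pred.mean(axis=1, keepdims=True)          # data anomalies
    C_py = A @ Y.T / (n_ens - 1)                         # cross-covariance
    C_yy = Y @ Y.T / (n_ens - 1) + R                     # innovation covariance
    K = C_py @ np.linalg.inv(C_yy)                       # Kalman gain
    d_pert = d[:, None] + rng.multivariate_normal(
        np.zeros(n_meas), R, size=n_ens).T               # perturbed observations
    return ens + K @ (d_pert - pred)

# Reusing the measurements over pseudo-time drives the ensemble mean
# toward the least-squares solution, mirroring the pseudo-dynamic idea.
for _ in range(20):
    ensemble = enkf_update(ensemble)
print("recovered parameters:", ensemble.mean(axis=1).round(2))
print("true parameters:     ", theta_true.round(2))
```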

Relevance: 10.00%

Abstract:

Impacts of climate change on hydrology are assessed by downscaling large-scale general circulation model (GCM) outputs of climate variables to local-scale hydrologic variables. This modelling approach is characterized by uncertainties resulting from the use of different models, different scenarios, etc. Modelling uncertainty in climate change impact assessment includes assigning weights to GCMs and scenarios based on their performance, and providing a weighted mean projection for the future. This projection is further used for water resources planning and adaptation to combat the adverse impacts of climate change. The present article summarizes the authors' recently published work on uncertainty modelling and on the development of adaptation strategies to climate change for the Mahanadi river in India.
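
A minimal sketch of the weighting step, with illustrative skill scores and projections rather than the article's data:

```python
# Minimal sketch: turning GCM/scenario skill scores into weights and forming
# a weighted mean projection. Scores and projections are illustrative numbers,
# not the article's data.
import numpy as np

skill = np.array([0.8, 0.5, 0.3])            # performance of three GCM runs
projection = np.array([-12.0, -5.0, 2.0])    # projected % change in streamflow

weights = skill / skill.sum()                # normalise so the weights sum to 1
weighted_mean = weights @ projection
print(f"weights: {weights.round(2)}, weighted mean change: {weighted_mean:+.1f} %")
```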

Relevance: 10.00%

Abstract:

We report a measurement of the production cross section for b hadrons in pp̅ collisions at √s = 1.96 TeV. Using a data sample derived from an integrated luminosity of 83 pb-1 collected with the upgraded Collider Detector (CDF II) at the Fermilab Tevatron, we analyze b hadrons, Hb, partially reconstructed in the semileptonic decay mode Hb→μ-D0X. Our measurement of the inclusive production cross section for b hadrons with transverse momentum pT > 9 GeV/c and rapidity |y| < 0.6 is σ = 1.30 μb ± 0.05 μb(stat) ± 0.14 μb(syst) ± 0.07 μb(B), where the uncertainties are statistical, systematic, and from branching fractions, respectively. The differential cross sections dσ/dpT are found to be in good agreement with recent measurements of the Hb cross section and are well described by fixed-order next-to-leading-logarithm predictions.
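
Treating the three quoted components as independent, they combine in quadrature; this is our arithmetic, not a number quoted in the article:

```latex
% Total uncertainty, combining the quoted components in quadrature
% (assumes the statistical, systematic and branching-fraction
% uncertainties are independent):
\delta\sigma = \sqrt{0.05^2 + 0.14^2 + 0.07^2}\ \mu\mathrm{b} \approx 0.16\ \mu\mathrm{b}
```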