982 results for Statistics of extremes
Abstract:
During the last few years, the discussion on the marginal social costs of transportation has been active. Applying externalities as a tool to control transport would fulfil the polluter-pays principle and simultaneously create a fair control method across transport modes. This report presents the results of two calculation algorithms developed to estimate marginal social costs based on the externalities of air pollution. The first algorithm calculates future scenarios of sea transport traffic externalities in the Gulf of Finland until 2015. The second algorithm calculates the externalities of Russian passenger-car transit traffic via Finland, taking into account both sea and road transport. The algorithm estimates the ship-originated emissions of carbon dioxide (CO2), nitrogen oxides (NOx), sulphur oxides (SOx) and particulates (PM), and the externalities for each year from 2007 to 2015. The total NOx emissions in the Gulf of Finland from the six ship types were almost 75.7 kilotons (Table 5.2) in 2007. The ship types are: passenger (including cruisers and ROPAX vessels), tanker, general cargo, Ro-Ro, container and bulk vessels. Due to the increase in traffic, the NOx emission estimate for 2015 is 112 kilotons. The NOx emission estimate for shipping in the whole Baltic Sea was 370 kilotons in 2006 (Stipa et al., 2007). The total marginal social costs due to ship-originated CO2, NOx, SOx and PM emissions in the GOF were calculated at almost 175 million Euros in 2007. The costs will increase to nearly 214 million Euros in 2015 due to the traffic growth. The major part of the externalities is due to CO2 emissions. If CO2 emissions are excluded by subtracting the CO2 externalities from the results, the total externalities are 57 million Euros in 2007. Eight years later (2015), the externalities would be 28 % lower, at 41 million Euros (Table 8.1). This is a result of the regulation reducing the sulphur content of marine fuels.
The majority of new-car transit goes through Finland to Russia due to the lack of port capacity in Russia. The number of cars was 339 620 vehicles in 2005 (Statistics of Finnish Customs 2008). The externalities are calculated for the transportation of passenger vehicles as follows: by ship to a Finnish port and then by truck to the Russian border checkpoint. The externalities are between 2 and 3 million Euros (year 2000 cost level) for each route. The ports included in the calculations are Hamina, Hanko, Kotka and Turku. With Euro-3 standard trucks, the port of Hanko would be the best choice for transporting the vehicles, because of the lower emissions of new trucks and the shorter sea transport distance. If the trucks are more-polluting Euro-1 trucks, the port of Kotka would be the best choice. This indicates that truck emissions have a considerable effect on the externalities, and that the transportation of light cargo, such as passenger cars, by ship produces considerably high emission externalities. The emission externalities approach offers new insight for valuing the multiple traffic modes. However, the calculation of marginal social costs based on air emission externalities should not be regarded as a ready-made calculation system. The system clearly needs improvement, but it can already be considered a potential tool for political decision making.
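The core arithmetic of such an externality algorithm is a sum of pollutant tonnages weighted by unit costs, which also makes the "exclude CO2" comparison above a simple subtraction. A minimal sketch, with invented tonnages and unit costs (not the report's figures):

```python
# Externality aggregation sketch: annual emissions per pollutant multiplied by
# a unit cost (EUR/ton). All numbers below are illustrative placeholders.

EMISSIONS_TONS = {"CO2": 3_000_000, "NOx": 75_700, "SOx": 20_000, "PM": 2_000}
UNIT_COST_EUR_PER_TON = {"CO2": 30, "NOx": 1000, "SOx": 1500, "PM": 5000}

def total_externalities(emissions, unit_costs):
    """Total marginal social cost in EUR for one year."""
    return sum(tons * unit_costs[p] for p, tons in emissions.items())

def externalities_excluding(emissions, unit_costs, pollutant):
    """Total cost with one pollutant's externalities subtracted."""
    rest = {p: t for p, t in emissions.items() if p != pollutant}
    return total_externalities(rest, unit_costs)

total = total_externalities(EMISSIONS_TONS, UNIT_COST_EUR_PER_TON)
non_co2 = externalities_excluding(EMISSIONS_TONS, UNIT_COST_EUR_PER_TON, "CO2")
```

The same routine run on per-year emission scenarios reproduces the report's year-by-year cost series.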
Abstract:
The reduction of quantum scattering leads to the suppression of shot noise. In this Letter, we analyze the crossover from the quantum transport regime with universal shot noise to the classical regime where noise vanishes. By making use of the stochastic path integral approach, we find the statistics of transport and the transmission properties of a chaotic cavity as a function of a system parameter controlling the crossover. We identify three different scenarios of the crossover.
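The quantum endpoint of this crossover can be checked numerically: in the fully quantum regime the transmission eigenvalues of a chaotic cavity follow the bimodal (arcsine) distribution P(T) = 1/(π√(T(1−T))), which gives the universal Fano factor F = ⟨T(1−T)⟩/⟨T⟩ = 1/4. A Monte Carlo sketch (not the Letter's stochastic path integral method):

```python
import numpy as np

# Sample arcsine-distributed transmission eigenvalues: if u ~ U(0,1) then
# T = sin^2(pi*u/2) has density 1/(pi*sqrt(T(1-T))) on (0,1).
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=1_000_000)
T = np.sin(0.5 * np.pi * u) ** 2

# Fano factor F = <T(1-T)> / <T>; universal value 1/4 for a chaotic cavity.
fano = np.mean(T * (1.0 - T)) / np.mean(T)
```

In the classical limit the eigenvalue distribution collapses towards T ∈ {0, 1}, T(1−T) vanishes, and with it the shot noise, which is the crossover the Letter analyzes.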
Abstract:
The objective of the pilotage effectiveness study was to produce a process description of the pilotage procedure, to design performance indicators based on this process description for use by Finnpilot, and to work out a preliminary plan for the implementation of the indicators within the Finnpilot organisation. The theoretical aspects of pilotage, as well as the guidelines and standards used, were determined through a literature review. Based on the literature review, a process flow model with the following phases was created: the planning of pilotage, the start of pilotage, the act of pilotage, the end of pilotage and the closing of pilotage. The model based on the literature review was tested through interviews and observation of pilotage. At the same time, an e-mail survey directed at foreign pilotage organisations was conducted, which included a questionnaire concerning their standards and management systems, operational procedures, measurement tools and their attitude to passage planning. The main issues in the observations and interviews were the passage plan and bridge team co-operation. The phases of the pilotage process model emerged in both the pilotage activities and the interviews, whereas bridge team co-operation was relatively marginal. Most of the pilotage organisations that responded to the survey also use some standard-based management system. All organisations that answered the survey use some sort of pilotage process model. According to the survey, the main measuring tools for pilotage are statistical information concerning pilotage and the organisations, customer feedback surveys, and financial results. Attitudes towards passage planning were mostly positive among the organisations. A workshop with pilotage experts was arranged in which the process model constructed on the basis of the literature review was tuned to match practical pilotage.
In the workshop it was determined that certain phases and corresponding tasks, through which pilotage can be described as a process, were identifiable in all pilotage. The result of the workshop was a complemented process model, which separates incoming and outgoing traffic, as well as fairway pilotage and harbour pilotage, from each other. Additionally, indicators divided according to the data-gathering method were defined. Data concerning safety and traffic flow is gathered in the form of customer feedback. The pilots' own perceptions of the pilotage process are gathered through self-assessment. The measurement data connected to the phases of the pilotage process is generated, for example, by gathering statistics on the success of pilot dispatches, the accuracy of the pilotage, and the incidents that occurred during the pilotage: near misses, deviations and accidents. The measurement data is collected via PilotWeb at the closing of the pilotage. A separate project, with a project group in which pilots also participate, will be established for the deployment of the performance indicators. The phases of the project are: the definition phase, the implementation phase and the deployment phase. The purpose of the definition phase is to prepare questions for ship commanders for the customer feedback questionnaire and to work out the self-assessment queries and the queries concerning the process indicators.
Abstract:
This research aims at studying the spatial autocorrelation of Landsat/TM-based normalized difference vegetation index (NDVI) and green vegetation index (GVI) values for soybean in the western region of the State of Paraná. The images were collected during the 2004/2005 crop season. The data were grouped into five vegetation index classes of equal amplitude to create a temporal map of soybean within the crop cycle. Moran's I and Local Indicators of Spatial Autocorrelation (LISA) were applied to study the spatial correlation at the global and local levels, respectively. According to these indices, it was possible to understand the municipality-based profiles of tillage as well as to identify different sowing periods, providing important information to producers who use soybean yield data in their planning.
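The global statistic used here is compact enough to state in a few lines. A generic sketch of Moran's I (not the study's code) for values on a spatial lattice with a binary contiguity weights matrix:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I. x: 1-D array of values; w: (n, n) symmetric
    binary spatial weights (w[i, j] = 1 if i and j are neighbours)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                     # deviations from the mean
    num = z @ w @ z                      # sum_ij w_ij * z_i * z_j
    den = z @ z                          # sum_i z_i^2
    return (len(x) / w.sum()) * num / den

# Four cells in a line, each neighbouring the next (rook contiguity in 1-D):
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
i = morans_i([1.0, 2.0, 3.0, 4.0], w)    # smoothly increasing -> positive I
```

Positive values indicate clustering of similar vegetation-index classes (as found for the soybean maps); values near the expectation −1/(n−1) indicate spatial randomness. LISA decomposes the same quantity into per-location contributions.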
Abstract:
The purpose of this research was to provide deeper insight into the consequences of electronic human resource management (e-HRM) for line managers. The consequences are viewed as used information system (IS) potentials pertaining to the moderate voluntaristic category of consequences. Due to the need to contextualize the research and draw on line managers' personal experiences, a qualitative approach in a case-study setting was selected. The empirical part of the research is loosely based on the literature on HRM and e-HRM, and it was conducted in an industrial private-sector company. In this thesis, method triangulation was utilized: nine semi-structured interviews, conducted in a European setting, formed the main method for data collection and analysis. Other complementary data, such as HRM documentation and statistics of e-HRM system usage, were utilized as background information to help put the results into context. E-HRM has been partly taken into use in the case-study company. Line managers tend to use e-HRM when a particular task requires it, but they are not familiar with all the features and possibilities which e-HRM has to offer. The advantages of e-HRM are in line with the company's goals. The advantages include transparency of data, process consistency, and having an efficient and easy-to-use tool at one's disposal. However, several unintended, even contradictory, and mainly negative outcomes can also be identified, such as over-complicated processes, insecurity in the use of the tool, and a lack of co-operation with HR professionals. The use of e-HRM and managers' perceptions regarding e-HRM affect the way in which managers perceive the consequences of e-HRM for their work. Overall, the consequences of e-HRM are divergent, even contradictory. The managers who considered e-HRM mostly beneficial to their work found that it affects their work by providing information and increasing efficiency.
Those managers who mostly perceived challenges in e-HRM did not think that e-HRM had affected their role or their work. Even though the perceptions regarding e-HRM and its consequences might reflect the strategies, the distribution of work, and the ways of working in HRM in general, and cannot be generalized as such, this research contributes to the field of e-HRM and provides new perspectives on e-HRM both in the case-study organization and in the academic field in general.
Abstract:
Surficial sediments east of Dunnville, Ontario, representing a limited deltaic/lacustrine/aeolian system, are investigated with the aim of defining and interpreting their geological history by means of examining their sedimentology and interrelationships. The Folk and Ward grain size statistics of samples from the area were calculated. These sample parameters were then plotted on maps to determine regional patterns. The strongest pattern observed was one of distinct fining to the east, away from the sand source. Aeolian deposits were found to be better sorted than the surrounding sediments. The grain size parameter values were also plotted on bivariate graphs in an attempt to separate the samples according to depositional environment. This exercise met with little success, as most of the sediments sampled in the area have similar grain size parameters. This is believed to be because the sediment sources for the different environments (delta, distal delta, aeolian dune) are intimately related, to the point that most dunes appear to have been sourced from immediately local sediments. It is postulated that in such a small sedimentological sub-system, sediments were not involved in active transport for a length of time sufficient for the material to come to equilibrium with its transporting medium. Thus, few distinctive patterns of parameters were developed that would enable one to differentiate between various environments of deposition. The immaturity of many dune forms and the immaturity of mineralogical composition of all deposits support the above hypothesis of limited transport time. Another hypothesis proposed is that each geologically or geographically distinct area or "sub-system" may have its own "signature" of grain size relationships as plotted on bivariate graphs.
Thus the emphasis, concerning graphs of this type, should not be placed on attempting to differentiate between various environments of deposition, but rather on investigating the interrelationships between samples and environments within that "sub-system". Through the course of this investigation, the existence of delta plain distributary channels in the thesis area is suggested, and the discovery of significantly different sub-units within the Dunnville dune sediments is documented. It is inferred, by reference to other authors' interpretations of the glacial history of the area, that the time of effective aeolian activity in the Dunnville area was between 12,300 and 12,100 years B.P.
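The Folk and Ward (1957) statistics used above are simple combinations of phi-percentiles read off each sample's cumulative grain-size curve. A generic sketch of the standard formulas:

```python
# Folk and Ward (1957) graphic grain-size statistics from phi-percentiles.
# Input: dict mapping percentile (5, 16, 25, 50, 75, 84, 95) -> phi value.

def folk_ward(phi):
    mean = (phi[16] + phi[50] + phi[84]) / 3.0                 # graphic mean
    sorting = ((phi[84] - phi[16]) / 4.0                       # inclusive graphic
               + (phi[95] - phi[5]) / 6.6)                     # standard deviation
    skewness = ((phi[16] + phi[84] - 2 * phi[50]) / (2 * (phi[84] - phi[16]))
                + (phi[5] + phi[95] - 2 * phi[50]) / (2 * (phi[95] - phi[5])))
    kurtosis = (phi[95] - phi[5]) / (2.44 * (phi[75] - phi[25]))
    return {"mean": mean, "sorting": sorting,
            "skewness": skewness, "kurtosis": kurtosis}

# A perfectly symmetric cumulative curve gives zero graphic skewness:
stats = folk_ward({5: 0.0, 16: 1.0, 25: 1.5, 50: 2.0,
                   75: 2.5, 84: 3.0, 95: 4.0})
```

Lower sorting values indicate better-sorted sediment, which is why the aeolian deposits stand out on the maps; the bivariate plots discussed above are simply these parameters plotted against one another.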
Abstract:
Emerging markets have received wide attention from investors around the globe because of their return potential and risk diversification. This research examines the selection and timing performance of Canadian mutual funds which invest in fixed-income and equity securities in emerging markets. We use (un)conditional two- and five-factor benchmark models that accommodate the dynamics of returns in emerging markets. We also adopt the cross-sectional bootstrap methodology to distinguish between 'skill' and 'luck' for individual funds. All the tests are conducted using a comprehensive data set of bond and equity emerging funds over the period 1989-2011. The risk-adjusted measures of performance are estimated using the least squares method with the Newey-West adjustment for standard errors, which is robust to conditional heteroskedasticity and autocorrelation. The performance statistics of the emerging funds before (after) management-related costs are insignificantly positive (significantly negative). They are sensitive to the chosen benchmark model, and conditional information improves selection performance. The timing statistics are largely insignificant throughout the sample period and are not sensitive to the benchmark model. Evidence of timing and selecting abilities is obtained in a small number of funds, and this evidence is not sensitive to the fee structure. We also find evidence that a majority of individual funds provide zero (very few provide positive) abnormal returns before fees and significantly negative returns after fees. At the negative end of the tail of the performance distribution, our resampling tests fail to reject the role of bad luck in the poor performance of funds, and we conclude that most of them are merely 'unlucky'.
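The skill-versus-luck bootstrap works by refitting each fund's alpha in artificial zero-alpha worlds built from its own resampled residuals, then asking how often luck alone matches the actual alpha. A hedged single-fund, single-factor sketch (the study uses multi-factor models across the full cross-section):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(r_fund, r_bench):
    """OLS of fund returns on benchmark returns: (alpha, beta, residuals)."""
    X = np.column_stack([np.ones_like(r_bench), r_bench])
    coef, *_ = np.linalg.lstsq(X, r_fund, rcond=None)
    return coef[0], coef[1], r_fund - X @ coef

def bootstrap_pvalue(r_fund, r_bench, n_boot=500):
    """Fraction of zero-alpha bootstrap worlds with alpha >= the actual one."""
    alpha, beta, resid = fit(r_fund, r_bench)
    null = beta * r_bench                 # counterfactual returns with alpha = 0
    hits = 0
    for _ in range(n_boot):
        r_star = null + rng.choice(resid, size=resid.size, replace=True)
        a_star, _, _ = fit(r_star, r_bench)
        hits += a_star >= alpha
    return alpha, hits / n_boot

# Synthetic fund with genuine positive alpha (monthly excess returns):
r_bench = rng.normal(0.0, 0.04, size=120)
r_fund = 0.02 + 1.0 * r_bench + rng.normal(0.0, 0.01, size=120)
alpha, p = bootstrap_pvalue(r_fund, r_bench)
```

A small p suggests skill; a large p says the observed alpha is consistent with luck, which is the logic behind labelling the worst funds 'unlucky' rather than unskilled.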
Abstract:
This study describes a combined empirical/modeling approach to assess the possible impact of climate variability on rice production in the Philippines. We collated climate data of the last two decades (1985-2002) as well as yield statistics of six provinces of the Philippines, selected along a North-South gradient. Data from the climate information system of NASA were used as input parameters of the model ORYZA2000 to determine potential yields and, in the next steps, the yield gaps defined as the difference between potential and actual yields. Both simulated and actual yields of irrigated rice varied strongly between years. However, no climate-driven trends were apparent and the variability in actual yields showed no correlation with climatic parameters. The observed variation in simulated yields was attributable to seasonal variations in climate (dry/wet season) and to climatic differences between provinces and agro-ecological zones. The actual yield variation between provinces was not related to differences in the climatic yield potential but rather to soil and management factors. The resulting yield gap was largest in remote and infrastructurally disfavored provinces (low external input use) with a high production potential (high solar radiation and day-night temperature differences). In turn, the yield gap was lowest in central provinces with good market access but with a relatively low climatic yield potential. We conclude that neither long-term trends nor the variability of the climate can explain current rice yield trends and that agroecological, seasonal, and management effects are overriding any possible climatic variations. On the other hand, the lack of a climate-driven trend in the present situation may be superseded by ongoing climate change in the future.
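The yield gap defined above is simply the simulated potential yield minus the observed yield, computed province by province. A toy illustration with invented figures (not the study's data; the province names are placeholders):

```python
# Yield gap = potential (model) yield - actual (statistics) yield, in t/ha.
# All values are invented for illustration only.

potential_t_ha = {"Isabela": 9.8, "Iloilo": 8.9, "Davao": 9.2}
actual_t_ha    = {"Isabela": 3.4, "Iloilo": 4.8, "Davao": 5.1}

yield_gap = {p: potential_t_ha[p] - actual_t_ha[p] for p in potential_t_ha}
largest = max(yield_gap, key=yield_gap.get)   # province with the widest gap
```

In the study this comparison, repeated per season and per year, is what shows the gap widening in remote, high-potential provinces and narrowing near markets.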
Abstract:
Few studies have evaluated the treatment of displaced femoral neck fractures in patients under 65 years of age, and the risk factors for avascular necrosis or non-union within this age range have not been clearly defined. To determine the factors associated with avascular necrosis of the femoral head (AVN) and non-union in patients under 65 years of age with displaced femoral neck fractures treated with reduction and internal fixation, a retrospective study was conducted of 29 displaced femoral neck fractures in 29 consecutive patients treated at a single institution. The influence of age, energy of the trauma, type of reduction, and time between fracture and treatment on the development of AVN and non-union was evaluated. Patients who developed AVN were significantly older and had suffered lower-energy trauma than those without AVN. No variable was associated with non-union. Logistic regression determined that only age was independently associated with AVN. Age is a good predictor of the development of AVN, with a C-statistic of 0.861 and a best cut-off at 53.5 years. Conclusion: patients between 53.5 and 65 years of age are at higher risk of AVN. Primary arthroplasty should be considered in this subgroup.
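The C-statistic quoted above is the area under the ROC curve; for a single continuous predictor such as age it equals the probability that a randomly chosen AVN case is older than a randomly chosen non-case (the Mann-Whitney form). A sketch with hypothetical ages, not the study's patient data:

```python
# C-statistic (AUC) for a continuous predictor, Mann-Whitney form:
# P(case value > control value), with ties counted as 1/2.

def c_statistic(cases, controls):
    pairs = [(c, k) for c in cases for k in controls]
    score = sum(1.0 if c > k else 0.5 if c == k else 0.0 for c, k in pairs)
    return score / len(pairs)

avn_ages = [58, 61, 55, 63, 57]      # hypothetical patients with AVN
no_avn_ages = [41, 48, 52, 44, 56]   # hypothetical patients without AVN
auc = c_statistic(avn_ages, no_avn_ages)
```

A value of 0.5 means age carries no information; the study's 0.861 indicates age discriminates well, and the 53.5-year cut-off is the threshold chosen on that curve.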
Abstract:
This thesis proposes a methodology for the probabilistic simulation of matrix failure in carbon-fibre-reinforced composite materials, based on the analysis of the random distribution of the fibres. The first chapters review the state of the art on the mathematical modelling of random materials, the calculation of effective properties, and transverse failure criteria for composite materials. The first step in the proposed methodology is the determination of the minimum size of a Statistical Representative Volume Element (SRVE). This determination is carried out by analysing the fibre volume fraction, the effective elastic properties, the Hill condition, the statistics of the stress and strain components, the probability density function, and the statistical inter-fibre distance functions of microstructure models of different sizes. Once this minimum size has been determined, a periodic model and a random model are compared to establish the magnitude of the differences observed between them. A methodology is also defined for the statistical analysis of the fibre distribution in the composite, based on digital images of the cross-section. This analysis is applied to four different materials. Finally, a two-scale computational method is proposed to simulate the transverse failure of unidirectional plies, which yields probability density functions for the mechanical variables. Some applications and possibilities of this method are described, and the simulation results are compared with experimental values.
Abstract:
This study examined oral education components that could be successfully implemented with culturally and linguistically diverse deaf and hard of hearing (DHH) children and their families. A literature review was conducted of oral program strategies used with culturally diverse families and their children with special needs, and of federal guidelines related to programs serving DHH children. Recent statistics on children from racially and linguistically diverse backgrounds in programs for DHH students were discussed. Additional data sources included classroom observations and multidisciplinary interviews. The data obtained were used to design a framework for oral programs to support culturally and linguistically diverse DHH children and their families.
Abstract:
A new technique is described for the analysis of cloud-resolving model simulations, which allows one to investigate the statistics of the lifecycles of cumulus clouds. Clouds are tracked from timestep to timestep within the model run. This allows for a very simple method of tracking, but one which is both comprehensive and robust. An approach for handling cloud splits and mergers is described which allows clouds with simple and complicated time histories to be compared within a single framework. This is found to be important for the analysis of an idealized simulation of radiative-convective equilibrium, in which the moist, buoyant updrafts (i.e., the convective cores) were tracked. Around half of all such cores were subject to splits and mergers during their lifecycles. For cores without any such events the average lifetime is 30 min, but such events can lengthen the typical lifetime considerably.
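Timestep-to-timestep tracking of this kind reduces to linking labelled cloud objects in consecutive fields wherever their pixels overlap: a cloud linked to two successors is a split, and two clouds sharing one successor are a merger. A generic sketch of the overlap step (not the paper's implementation):

```python
import numpy as np

def links(labels_t0, labels_t1):
    """Set of (old_label, new_label) pairs sharing at least one pixel.
    Zero marks cloud-free grid points in both labelled fields."""
    mask = (labels_t0 > 0) & (labels_t1 > 0)
    return set(zip(labels_t0[mask].tolist(), labels_t1[mask].tolist()))

t0 = np.array([[1, 1, 1, 0],
               [1, 1, 1, 0],
               [0, 0, 2, 2]])
t1 = np.array([[0, 1, 1, 0],      # cloud 1 has drifted right...
               [0, 1, 3, 0],      # ...and shed a new core, label 3 (a split)
               [0, 0, 2, 2]])
pairs = links(t0, t1)
successors_of_1 = {new for old, new in pairs if old == 1}
```

Chaining these pairs over all timesteps yields each core's lifecycle graph, from which lifetime statistics (with or without split/merger events) can be compiled.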
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared-difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. 
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well-suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent with a non-spatial validation.
Abstract:
The North Pacific and Bering Sea regions represent loci of cyclogenesis and storm track activity. In this paper climatological properties of extratropical storms in the North Pacific/Bering Sea are presented based upon aggregate statistics of individual storm tracks calculated by means of a feature-tracking algorithm run using NCEP–NCAR reanalysis data from 1948/49 to 2008, provided by the NOAA/Earth System Research Laboratory and the Cooperative Institute for Research in Environmental Sciences, Climate Diagnostics Center. Storm identification is based on the 850-hPa relative vorticity field (ζ) instead of the often-used mean sea level pressure; ζ is a prognostic field, a good indicator of synoptic-scale dynamics, and is directly related to the wind speed. Emphasis extends beyond winter to provide detailed consideration of all seasons. Results show that the interseasonal variability is not as large during the spring and autumn seasons. Most of the storm variables—genesis, intensity, track density—exhibited a maxima pattern that was oriented along a zonal axis. From season to season this axis underwent a north–south shift and, in some cases, a rotation to the northeast. This was determined to be a result of zonal heating variations and midtropospheric moisture patterns. Barotropic processes have an influence in shaping the downstream end of storm tracks and, together with the blocking influence of the coastal orography of northwest North America, result in high lysis concentrations, effectively making the Gulf of Alaska the “graveyard” of Pacific storms. Summer storms tended to be longest in duration. Temporal trends tended to be weak over the study area. SST did not emerge as a major cyclogenesis control in the Gulf of Alaska.
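The tracking field used above, 850-hPa relative vorticity, is ζ = ∂v/∂x − ∂u/∂y. A quick finite-difference check on a solid-body rotation, for which ζ = 2Ω everywhere (a generic Cartesian sketch; the paper's algorithm works on reanalysis winds on the sphere):

```python
import numpy as np

omega = 1.0e-4                        # rotation rate, s^-1
x = np.linspace(-1e6, 1e6, 201)       # metres
y = np.linspace(-1e6, 1e6, 201)
X, Y = np.meshgrid(x, y)              # Y varies along axis 0, X along axis 1

u = -omega * Y                        # solid-body rotation wind field
v = omega * X

dv_dx = np.gradient(v, x, axis=1)
du_dy = np.gradient(u, y, axis=0)
zeta = dv_dx - du_dy                  # relative vorticity, s^-1
```

Cyclone centres then appear as local maxima of ζ (Northern Hemisphere), which is why ζ is a sharper synoptic-scale tracking variable than mean sea level pressure.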
Abstract:
An extensive statistical ‘downscaling’ study is done to relate large-scale climate information from a general circulation model (GCM) to local-scale river flows in SW France for 51 gauging stations ranging from nival (snow-dominated) to pluvial (rainfall-dominated) river-systems. This study helps to select the appropriate statistical method at a given spatial and temporal scale to downscale hydrology for future climate change impact assessment of hydrological resources. The four proposed statistical downscaling models use large-scale predictors (derived from climate model outputs or reanalysis data) that characterize precipitation and evaporation processes in the hydrological cycle to estimate summary flow statistics. The four statistical models used are generalized linear (GLM) and additive (GAM) models, aggregated boosted trees (ABT) and multi-layer perceptron neural networks (ANN). These four models were each applied at two different spatial scales, namely at that of a single flow-gauging station (local downscaling) and that of a group of flow-gauging stations having the same hydrological behaviour (regional downscaling). For each statistical model and each spatial resolution, three temporal resolutions were considered, namely the daily mean flows, the summary statistics of fortnightly flows and a daily ‘integrated approach’. The results show that flow sensitivity to atmospheric factors is significantly different between nival and pluvial hydrological systems, which are mainly influenced, respectively, by shortwave solar radiation and atmospheric temperature. The non-linear models (i.e. GAM, ABT and ANN) performed better than the linear GLM when simulating fortnightly flow percentiles. The aggregated boosted trees method showed higher and less variable R2 values when downscaling the hydrological variability in both nival and pluvial regimes.
Based on the CNRM-CM3 GCM and scenarios A2 and A1B, future relative changes of fortnightly median flows were projected based on the regional downscaling approach. The results suggest an overall decrease of flow in both pluvial and nival regimes, especially in spring, summer and autumn, whatever the considered scenario. The discussion considers the performance of each statistical method for downscaling flow at different spatial and temporal scales as well as the relationship between atmospheric processes and flow variability.
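The simplest of the four models, the GLM, amounts to regressing a summary flow statistic on the large-scale predictors and scoring the fit with R2. A toy sketch with synthetic precipitation- and temperature-like predictors (illustrative only, not the study's data or its full GLM specification):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
precip = rng.gamma(2.0, 10.0, size=n)        # large-scale predictor 1
temp = rng.normal(10.0, 5.0, size=n)         # large-scale predictor 2
flow = 3.0 * precip - 1.5 * temp + rng.normal(0.0, 5.0, size=n)  # fortnightly flow statistic

# Ordinary least squares: flow ~ intercept + precip + temp
X = np.column_stack([np.ones(n), precip, temp])
coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
pred = X @ coef

# Coefficient of determination, the score compared across GLM/GAM/ABT/ANN.
r2 = 1.0 - np.sum((flow - pred) ** 2) / np.sum((flow - flow.mean()) ** 2)
```

The non-linear models (GAM, ABT, ANN) replace the linear predictor with smooth or tree-based functions of the same inputs, which is why they capture the fortnightly flow percentiles better where the flow response is non-linear.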