Abstract:
Native bees are important providers of pollination services, but there is accumulating evidence of their decline. Global changes such as habitat loss, invasion of exotic species and climate change have been suggested as the main causes of pollinator decline. In this study, the influence of climate change on the distribution of 10 species of Brazilian bees was estimated with species distribution modelling. We used the Maxent (maximum entropy) algorithm and two different scenarios, one optimistic and one pessimistic, for the years 2050 and 2080. We also evaluated the percentage reduction of species habitat under the future climate change scenarios using a Geographic Information System (GIS). Results showed that the total area of suitable habitats decreased for all species but one under the different future scenarios. The greatest reductions in habitat area were found for Melipona bicolor bicolor and Melipona scutellaris, which occur predominantly in areas originally associated with the Atlantic Moist Forest. The species analysed have been reported to be pollinators of some regional crops, and the consequences of their decline for these crops need further clarification. (C) 2012 Elsevier B.V. All rights reserved.
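A minimal sketch of the GIS habitat-reduction step described above, assuming binary suitability rasters (1 = suitable) with equal-area cells; the array names and the habitat_reduction helper are illustrative, not taken from the paper:

    # Hypothetical illustration (not the authors' code): percentage loss of
    # suitable habitat between a current and a projected binary raster.
    import numpy as np

    def habitat_reduction(current, future, cell_area_km2=1.0):
        """Percent reduction in suitable area between two binary rasters."""
        area_now = current.sum() * cell_area_km2
        area_future = future.sum() * cell_area_km2
        return 100.0 * (area_now - area_future) / area_now

    # Toy example: roughly 30% of currently suitable cells become unsuitable.
    rng = np.random.default_rng(0)
    current = (rng.random((100, 100)) < 0.4).astype(int)
    future = current * (rng.random((100, 100)) > 0.3)
    print(f"habitat reduction: {habitat_reduction(current, future):.1f}%")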
Abstract:
This paper considers likelihood-based inference for the family of power distributions. Widely applicable results are presented which can be used to conduct inference for all three parameters of the general location-scale extension of the family. More specific results are given for the special case of the power normal model. The analysis of a large data set, formed from density measurements for a certain type of pollen, illustrates the application of the family and the results for likelihood-based inference. Throughout, comparisons are made with analogous results for the direct parametrisation of the skew-normal distribution.
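For orientation, in a common parametrisation of this family the power-normal density with location $\mu$, scale $\sigma$ and shape $\alpha$ is

\[
f(x;\mu,\sigma,\alpha)=\frac{\alpha}{\sigma}\,\phi\!\left(\frac{x-\mu}{\sigma}\right)\left[\Phi\!\left(\frac{x-\mu}{\sigma}\right)\right]^{\alpha-1},\qquad \alpha>0,\ \sigma>0,
\]

where $\phi$ and $\Phi$ are the standard normal density and distribution function; $\alpha=1$ recovers the normal model. This is standard notation offered only as background, and the paper's own parametrisation may differ in detail.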
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. Also, as a case study, the methodology is illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in terms of predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
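A minimal sketch, assuming scikit-learn and simulated data, of the "naive" logistic regression fit and the resulting default-probability estimates; the state-dependent sample-selection correction of Cramer (2004) is not implemented here, and all parameter values are invented:

    # Illustrative only: fit a plain ("naive") logistic regression on
    # simulated credit data and inspect the estimated default probabilities.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 5000
    X = rng.normal(size=(n, 3))                    # client covariates
    logit = -2.0 + X @ np.array([0.8, -0.5, 0.3])  # true model, ~12% defaults
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # default indicator

    model = LogisticRegression().fit(X, y)
    p_hat = model.predict_proba(X)[:, 1]
    print(f"observed default rate:      {y.mean():.3f}")
    print(f"mean estimated probability: {p_hat.mean():.3f}")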
Abstract:
In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic model of radiation carcinogenesis - latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that would result in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Finally, some discussion of model selection and an illustration with the cutaneous melanoma data set analysed by Rodrigues et al. are presented.
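For reference, the promotion time cure model that the proposed model generalises has population survival function

\[
S_{\mathrm{pop}}(t)=\exp\{-\theta F(t)\},\qquad p_0=\lim_{t\to\infty}S_{\mathrm{pop}}(t)=e^{-\theta},
\]

where the number of initiated cells is Poisson with mean $\theta$ and $F$ is the distribution function of their promotion times; $p_0$ is the cured fraction. The compound weighted Poisson formulation above relaxes the Poisson assumption and hence the model's dispersion. This is generic background notation, not the article's exact formulation.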
Abstract:
Objective. To test the hypothesis that the difference in the coefficient of thermal contraction of the veneering porcelain above (α_liquid) and below (α_solid) its Tg plays an important role in stress development during a fast cooling protocol of Y-TZP crowns. Methods. Three-dimensional finite element models of veneered Y-TZP crowns were developed. Heat transfer analyses were conducted with two cooling protocols: slow (group A) and fast (groups B–F). Calculated temperatures as a function of time were used to determine the thermal stresses. Porcelain α_solid was kept constant while its α_liquid was varied, creating different α_liquid/α_solid conditions: 0, 1, 1.5, 2 and 3 (groups B–F, respectively). Maximum (σ1) and minimum (σ3) residual principal stress distributions in the porcelain layer were compared. Results. For the slowly cooled crown, positive σ1 were observed in the porcelain, orientated perpendicular to the core–veneer interface ("radial" orientation). Simultaneously, negative σ3 were observed within the porcelain, mostly in a hoop orientation ("hoop–arch"). For rapidly cooled crowns, stress patterns varied depending on α_liquid/α_solid ratios. For groups B and C, the patterns were similar to those found in group A for σ1 ("radial") and σ3 ("hoop–arch"). For groups D–F, the stress distribution changed significantly, with σ1 forming a "hoop–arch" pattern while σ3 developed a "radial" pattern. Significance. Hoop tensile stresses generated in the veneering layer during fast cooling protocols due to a high porcelain α_liquid/α_solid ratio will facilitate flaw propagation from the surface toward the core, which negatively affects the potential clinical longevity of a crown.
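As a back-of-the-envelope indication of why the α mismatch matters (this first-order relation is not the 3D finite element analysis itself), the biaxial thermal mismatch stress in a thin veneer layer scales as

\[
\sigma \;\approx\; \frac{E\,\Delta\alpha\,\Delta T}{1-\nu},
\]

where Δα is the contraction-coefficient mismatch accumulated over the cooling interval ΔT, and E and ν are the veneer's Young's modulus and Poisson's ratio, so a larger mismatch over the temperature range swept during cooling translates directly into larger residual stress.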
Abstract:
The objective of this thesis is to improve the understanding of which processes and mechanisms affect the distribution of polychlorinated biphenyls (PCBs) and organic carbon in coastal sediments. Because of the strong association of hydrophobic organic contaminants (HOCs) such as PCBs with organic matter in the aquatic environment, these two entities are naturally linked. The coastal environment is the most complex and dynamic part of the ocean when it comes to both the cycling of organic matter and HOCs. This environment is characterised by the largest fluxes and most diverse sources of both entities. A wide array of methods was used to study these processes throughout this thesis. At the field sites in the Stockholm archipelago of the Baltic proper, bottom sediments and settling particulate matter were retrieved using sediment coring devices and sediment traps at morphometrically and seismically well-characterized locations. In the laboratory, the samples were analysed for PCBs, stable carbon isotope ratios, carbon-nitrogen atom ratios as well as standard sediment properties. From the fieldwork in the Stockholm Archipelago and the subsequent laboratory work it was concluded that the inner Stockholm archipelago has a low (≈ 4%) trapping efficiency for freshwater-derived organic carbon. The corollary is a large potential for long-range waterborne transport of OC and OC-associated nutrients and hydrophobic organic pollutants from urban Stockholm to more pristine offshore Baltic Sea ecosystems. Theoretical work was carried out using Geographical Information Systems (GIS) and statistical methods on a database of 4214 individual sediment samples, each with reported individual PCB congener concentrations. From this work it was concluded that continental shelf sediments are key global inventories and ultimate sinks of PCBs. Depending on the congener, 10-80% of the cumulative historical emissions to the environment are accounted for in continental shelf sediments. Further, it was concluded that the many infamous and highly contaminated surface sediments of urban harbours and estuaries of contaminated rivers cannot be of importance as a secondary source sustaining the concentrations observed in remote sediments. Of the global shelf PCB inventory, < 1% is in sediments near population centres while ≥ 90% is in remote areas (> 10 km from any dwellings). The remote sub-basin of the North Atlantic Ocean contains approximately half of the global shelf sediment inventory for most of the PCBs studied.
Abstract:
[EN] Labile Fe(II) distributions were investigated in the sub-tropical South Atlantic and the Southern Ocean during the BONUS-GoodHope cruise from 34 to 57° S (February–March 2008). Concentrations ranged from below the detection limit (0.009 nM) to values as high as 0.125 nM. In the surface mixed layer, labile Fe(II) concentrations were always higher than the detection limit, with values higher than 0.060 nM south of 47° S, representing between 39% and 63% of dissolved Fe (DFe). There was evidence of biological production. At intermediate depths, local maxima were observed, with the highest values in the sub-tropical domain at around 200 m, representing more than 70% of DFe. Remineralization processes were likely responsible for these sub-surface maxima. Below 1500 m, concentrations were close to or below the detection limit, except at two stations (in the vicinity of the Agulhas Ridge and in the north of the Weddell Sea Gyre) where values remained as high as ≈0.030–0.050 nM. Hydrothermal or sediment inputs may provide Fe(II) to these deep waters. Fe(II) half-life times (t1/2) at 4 °C were measured in the upper and deep waters and ranged from 2.9 to 11.3 min and from 10.0 to 72.3 min, respectively. In the upper waters, measured values compared quite well with theoretical values from two published models, but not in the deep waters. This may be due to the lack of knowledge of some parameters in the models and/or to organic complexation of Fe(II) that impacts its oxidation rates. This study helped to considerably increase the Fe(II) data set in the ocean and to better understand the Fe redox cycle.
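For context, the half-life times quoted above follow from the standard pseudo-first-order description of Fe(II) oxidation (generic kinetics, not specific to this study):

\[
-\frac{d[\mathrm{Fe(II)}]}{dt}=k'[\mathrm{Fe(II)}]
\;\Rightarrow\;
[\mathrm{Fe(II)}](t)=[\mathrm{Fe(II)}]_0\,e^{-k't},
\qquad t_{1/2}=\frac{\ln 2}{k'},
\]

where the apparent rate constant $k'$ depends on temperature, pH and oxidant concentrations, which is why measured half-lives can deviate from model predictions when those parameters, or organic complexation, are poorly constrained.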
Abstract:
[EN] An optimum multiparameter analysis was applied to a data set for the eastern boundary of the North Atlantic subtropical gyre, gathered during November of two consecutive years and spanning from 16 to 36° N. This data set covers over 20° of latitude with good meridional and zonal resolution over the whole coastal transition zone. The contribution from six water types in the depth range between 100 and 2000 m is solved. In the 100 to 700 m depth range the central waters of southern and northern origin meet abruptly at the Cape Verde Frontal Zone. This front has traditionally been reported to stretch from Cape Blanc, at about 21.5° N, to the Cape Verde Islands, but in our case it penetrates as far as 24° N over the continental slope. South of 21° N we actually find a less saline and more oxygenated variety of South Atlantic Central Water, which we ascribe to less diluted equatorial waters. In the 700 to 1500 m depth range the dominant water type is a diluted form of Antarctic Intermediate Water (AAIW), whose influence smoothly disappears north of the Canary Islands as it is replaced by Mediterranean Water (MW); at latitudes where both water masses coexist, we observe MW offshore while AAIW is found near-shore. North Atlantic Deep Water is the dominant water type below about 1300/1700 m depth south/north of the Canary Islands; this abrupt change in depth suggests the existence of different paths for the deep waters reaching the two sides of the archipelago.
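In outline, optimum multiparameter analysis solves, at each sample, a weighted constrained least-squares problem (a sketch of the generic method, not this paper's exact setup):

\[
\min_{x\ge 0}\;\bigl\|W\,(A\,x-d)\bigr\|^{2}
\quad\text{subject to}\quad \sum_i x_i = 1,
\]

where $d$ collects the observed tracer values (e.g. temperature, salinity, oxygen, nutrients), the columns of $A$ hold the predefined properties of the six water types, $W$ weights the tracers by their reliability, and $x$ gives the non-negative water-type contributions that must sum to one (mass conservation).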
Abstract:
[EN] In this paper, we have used Geographical Information Systems (GIS) to solve the planar Huff problem considering different demand distributions and forbidden regions. Most papers on competitive location problems assume that the demand is aggregated at a finite set of points. In a few other cases, the models suppose that the demand is distributed over the feasible region according to a functional form, mainly a uniform distribution. Here, in addition to the discrete and uniform demand distributions, we consider that the demand is represented by a population surface model, that is, a raster map where each pixel has an associated value corresponding to the population living in the area it covers...
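For reference, the Huff model assigns a customer at location $i$ a probability of patronising facility $j$ of the familiar gravity form (shown in its classical version; the paper's variant may differ):

\[
P_{ij}=\frac{A_j\, d_{ij}^{-\lambda}}{\sum_k A_k\, d_{ik}^{-\lambda}},
\]

where $A_j$ is the attractiveness of facility $j$, $d_{ij}$ the distance from $i$ to $j$, and $\lambda$ a distance-decay exponent. With a raster demand surface, the market captured by facility $j$ is obtained by summing $P_{ij}$ weighted by the population value of each pixel $i$.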
Abstract:
High spectral resolution radiative transfer (RT) codes are essential tools in the study of radiative energy transfer in the Earth's atmosphere and a support for the development of parameterizations for the fast RT codes used in climate and weather prediction models. Cirrus clouds permanently cover 30% of the Earth's surface, representing an important contribution to the Earth-atmosphere radiation balance. The work focussed on the development of the RT model LBLMS. The model, widely tested in the infra-red spectral range, has been extended to the short-wave spectrum, and it has been used in comparison with airborne and satellite measurements to study the optical properties of cirrus clouds. A new database of single scattering properties has been developed for mid-latitude cirrus clouds. Ice clouds are treated as a mixture of ice crystals with various habits. The optical properties of the mixture are tested against radiometric measurements in selected case studies. Finally, a parameterization of the mixture for application to weather prediction and global circulation models has been developed. The bulk optical properties of ice crystals are parameterized as functions of the effective dimension of measured particle size distributions that are representative of mid-latitude cirrus clouds. Tests with the limited-area weather prediction model COSMO have shown the impact of the new parameterization relative to cirrus cloud optical properties based on ice spheres.
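Such bulk parameterizations typically fit each optical property as a simple function of the effective dimension; a typical functional form from the literature, which may differ in detail from the one developed here, expresses the extinction coefficient per unit ice water content as

\[
\frac{\beta_{\mathrm{ext}}}{\mathrm{IWC}}\;\approx\;a_0+\frac{a_1}{D_{\mathrm{eff}}},
\]

with the single-scattering albedo and asymmetry factor fitted as low-order polynomials in $D_{\mathrm{eff}}$, and the coefficients determined per spectral band from the measured size distributions.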
Abstract:
In the thesis we present the implementation of the quadratic maximum likelihood (QML) method, ideal to estimate the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic analyses present in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo analysis. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. On the contrary, we neglect the noise in temperature since WMAP is already cosmic variance dominated on the large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: the first one with a constant bias $b$ and the second one with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model by different likelihood prescriptions. When using just the cross-correlation between WMAP7 and NVSS maps with 1.8° resolution, we show that $\Omega_\Lambda$ is about 70% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ confidence level.
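In outline, the QML estimator (following Tegmark 1997) builds the spectrum estimates from quadratic forms in the data vector $x$:

\[
\hat{C}_{\ell}=\sum_{\ell'}\left(F^{-1}\right)_{\ell\ell'}\left[x^{T}E^{\ell'}x-\mathrm{tr}\!\left(N\,E^{\ell'}\right)\right],
\qquad
E^{\ell}=\frac{1}{2}\,C^{-1}\frac{\partial C}{\partial C_{\ell}}\,C^{-1},
\]

where $C$ is the total (signal plus noise $N$) pixel covariance and $F_{\ell\ell'}=\frac{1}{2}\mathrm{tr}\!\left(C^{-1}\partial_{\ell}C\,C^{-1}\partial_{\ell'}C\right)$ is the Fisher matrix; unbiasedness and minimum variance in pixel space follow from this construction. This is standard notation for the method, not the BolISW implementation itself.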
Abstract:
The proton-nucleus elastic scattering at intermediate energies is a well-established method for the investigation of the nuclear matter distribution in stable nuclei and was recently applied also for the investigation of radioactive nuclei using the method of inverse kinematics. In the current experiment, the differential cross sections for proton elastic scattering on the isotopes $^{7,9,10,11,12,14}$Be and $^8$B were measured. The experiment was performed using the fragment separator at GSI, Darmstadt, to produce the radioactive beams. The main part of the experimental setup was the time projection ionization chamber IKAR, which was simultaneously used as hydrogen target and as detector for the recoil protons. Auxiliary detectors for projectile tracking and isotope identification were also installed. As results from the experiment, the absolute differential cross sections d$\sigma$/d$t$ as a function of the four-momentum transfer $t$ were obtained. In this work the differential cross sections for elastic p-$^{12}$Be, p-$^{14}$Be and p-$^{8}$B scattering at low $t$ ($t \leq 0.05$ (GeV/c)$^2$) are presented. The measured cross sections were analyzed within the Glauber multiple-scattering theory using different density parameterizations, and the nuclear matter density distributions and radii of the investigated isotopes were determined. The analysis of the differential cross section for the isotope $^{14}$Be shows that a good description of the experimental data is obtained when density distributions consisting of separate core and halo components are used. The determined rms matter radius is $3.11 \pm 0.04 \pm 0.13$ fm. In the case of the $^{12}$Be nucleus the results showed an extended matter distribution as well. For this nucleus a matter radius of $2.82 \pm 0.03 \pm 0.12$ fm was determined. An interesting result is that the free $^{12}$Be nucleus behaves differently from the core of $^{14}$Be and is much more extended. The data were also compared with theoretical densities calculated within the FMD and the few-body models. In the case of $^{14}$Be, the calculated cross sections describe the experimental data well, while in the case of $^{12}$Be there are discrepancies in the region of high momentum transfer. Preliminary experimental results for the isotope $^8$B are also presented. An extended matter distribution was obtained (though much more compact compared to the neutron halos). A proton halo structure was observed for the first time with the proton elastic scattering method. The deduced matter radius is $2.60 \pm 0.02 \pm 0.26$ fm. The data were compared with microscopic calculations in the frame of the FMD model and reasonable agreement was observed. The results obtained in the present analysis are in most cases consistent with the previous experimental studies of the same isotopes with different experimental methods (total interaction and reaction cross section measurements, momentum distribution measurements). For future investigation of the structure of exotic nuclei a universal detector system EXL is being developed. It will be installed at the NESR at the future FAIR facility, where higher intensity beams of radioactive ions are expected. The usage of storage ring techniques provides high luminosity and low background experimental conditions. Results from the feasibility studies of the EXL detector setup, performed at the present ESR storage ring, are presented.
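Schematically, the sensitivity of the small-$|t|$ cross section to the matter radius comes from the leading form-factor expansion (neglecting the proton size and the slope of the NN amplitude):

\[
F(q^{2})\simeq 1-\frac{q^{2}\langle r_{m}^{2}\rangle}{6}
\quad\Rightarrow\quad
\frac{d\sigma}{dt}\propto |F|^{2}\simeq \exp\!\left(-\frac{\langle r_{m}^{2}\rangle}{3}\,|t|\right),
\]

so the diffractive slope at low $|t|$ is governed by the rms matter radius, which is why forward-angle elastic scattering constrains halo structure. The actual analysis uses the full Glauber multiple-scattering formalism rather than this first-order relation.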
Abstract:
We have used kinematic models in two Italian regions to reproduce surface interseismic velocities obtained from InSAR and GPS measurements. We have adopted a block modeling (BM) approach to evaluate which fault system is actively accommodating the ongoing deformation in both areas. For the Umbria-Marche Apennines, we find that the tectonic extension observed by GPS measurements is explained by the active contribution of at least two fault systems, one of which is the Alto Tiberina fault (ATF). We have also estimated the interseismic coupling distribution of the ATF using a 3D surface, and the result shows an interesting correlation between the microseismicity and the uncoupled fault portions. The second area analysed is the Gargano promontory, for which we have used the available InSAR and GPS velocities jointly. First we aligned the two datasets to the same terrestrial reference frame; then, using a simple dislocation approach, we estimated the fault parameters that best reproduce the available data, obtaining a solution corresponding to the Mattinata fault. Subsequently we considered both GPS and InSAR datasets within a BM analysis in order to evaluate whether the Mattinata fault may accommodate the deformation occurring in the central Adriatic due to the relative motion between the North-Adriatic and South-Adriatic plates. We find that the deformation occurring in that region should be accommodated by more than one fault system, which is, however, difficult to detect given the poor coverage of geodetic measurements offshore of the Gargano promontory. Finally, we also estimated the interseismic coupling distribution of the Mattinata fault, obtaining a shallow coupling pattern. Both coupling distributions found using the BM approach have been tested by means of checkerboard resolution tests, which demonstrate that the coupling patterns depend on the geodetic data positions.
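In outline, the block-modeling forward problem predicts each geodetic velocity as rigid block rotation plus an elastic correction from coupled fault patches (generic back-slip notation, not the exact discretisation used here):

\[
v(x)=\Omega_{b}\times x\;-\;\sum_{k}\phi_{k}\,G(x,\xi_{k})\,s_{k},
\]

where $\Omega_{b}$ is the Euler vector of the block containing $x$, $G$ the elastic Green's functions for fault patch $\xi_{k}$, $s_{k}$ the long-term slip rate implied by the relative block motion, and $\phi_{k}\in[0,1]$ the interseismic coupling solved for in the inversion.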
Abstract:
In many areas of industrial manufacturing, for example in the automotive industry, digital mock-ups are used to support the development of complex machines by computer systems as far as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. In recent decades, sampling-based methods have proven particularly successful: they generate a large number of random poses for the object to be installed or removed and use a collision detection mechanism to check the individual poses for validity. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. A difficulty for this class of planners are so-called "narrow passages", which arise wherever the freedom of movement of the objects being planned for is strongly restricted. In such places it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be needed to achieve good performance.

This thesis is divided into two parts. In the first part we investigate parallel collision detection algorithms. Since we target applications in sampling-based motion planners, we choose a problem setting in which the same two objects are repeatedly tested for collision in a large number of different poses. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All described methods were parallelized across multiple CPU cores. In addition, we compare several CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work across the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a series of approximate collision tests based on the described methods; when a lower test accuracy is tolerable, a further performance improvement can be achieved.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple "narrow passages". The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in Phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that resolves small collisions and thus increases efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further lowers the accuracy of the first planning phase but also yields a further performance gain.

The motion paths resulting from Phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that, restricted locally to a small neighbourhood around the existing path, plans a new, collision-free motion path. We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit multiple "narrow passages". To our knowledge, a collection of comparably complex benchmarks is not publicly available, and we found no description of comparably complex benchmarks in the motion planning literature.
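A CPU-parallel sketch of the sampling-and-collision-checking pattern described above, assuming only NumPy and the standard library; the collides test and all parameters are invented placeholders, not the thesis' BVH or grid implementations:

    # Illustrative only: test the same two objects in many sampled poses,
    # distributing the collision checks over CPU cores.
    import numpy as np
    from multiprocessing import Pool

    def collides(pose):
        # Placeholder test: pretend poses inside a small box are in collision.
        x, y, z = pose[:3]
        return abs(x) < 0.1 and abs(y) < 0.1 and abs(z) < 0.1

    def free_samples(n_samples=20_000, workers=4, seed=0):
        rng = np.random.default_rng(seed)
        # Each pose: x, y, z, roll, pitch, yaw of the movable object.
        poses = rng.uniform(-1.0, 1.0, size=(n_samples, 6))
        with Pool(workers) as pool:
            hits = pool.map(collides, poses)
        return poses[~np.array(hits)]

    if __name__ == "__main__":
        print(f"collision-free samples: {len(free_samples())}")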
Abstract:
In this thesis we investigate the phenomenology of supersymmetric particles at hadron colliders beyond next-to-leading order (NLO) in perturbation theory. We discuss the foundations of Soft-Collinear Effective Theory (SCET) and, in particular, we explicitly construct the SCET Lagrangian for QCD. As an example, we discuss factorization and resummation for the Drell-Yan process in SCET. We use techniques from SCET to improve existing calculations of the production cross sections for slepton-pair production and top-squark-pair production at hadron colliders. As a first application, we implement soft-gluon resummation at next-to-next-to-next-to-leading logarithmic order (NNNLL) for slepton-pair production in the minimal supersymmetric extension of the Standard Model (MSSM). This approach resums large logarithmic corrections arising from the dynamical enhancement of the partonic threshold region caused by steeply falling parton luminosities. We evaluate the resummed invariant-mass distribution and total cross section for slepton-pair production at the Tevatron and LHC and we match these results, in the threshold region, onto NLO fixed-order calculations. As a second application we present the most precise predictions available for top-squark-pair production total cross sections at the LHC. These results are based on approximate NNLO formulas in fixed-order perturbation theory, which completely determine the coefficients multiplying the singular plus distributions. The analysis of the threshold region is carried out in pair invariant mass (PIM) kinematics and in single-particle inclusive (1PI) kinematics. We then match our results in the threshold region onto the exact fixed-order NLO results and perform a detailed numerical analysis of the total cross section.
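Schematically, the "singular plus distributions" referred to above are the threshold logarithms in the partonic variable $z=M^{2}/\hat{s}$; at order $\alpha_s^n$ the hard-scattering kernels contain terms of the form

\[
\hat{\sigma}(z)\;\supset\;\sum_{k=0}^{2n-1}c_{k}\left[\frac{\ln^{k}(1-z)}{1-z}\right]_{+},
\]

so at NNLO the coefficients of the $k\le 3$ plus distributions are the ones fixed by the approximate formulas. This is generic threshold-expansion notation, not the full expressions derived in the thesis.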