923 results for semi-parametric model
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event, and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events or when distinct individuals have dependent event times. In most studies there are covariates, such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations, based on the marginal hazard functions employed in the literature for the analysis of multivariate survival data, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduce a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of the parameter vectors. In Chapter 3, we then introduce a bivariate proportional hazards model for gap times of recurrent events. The model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a more useful concept than the hazard function. Motivated by this, in Chapter 4 we consider a new semi-parametric model, the bivariate proportional mean residual life model, to assess the relationship between mean residual life and covariates for gap times of recurrent events. The counting process approach is used for the inference procedures for the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduce a proportional hazards model for duration times and develop inference procedures under dependent (informative) censoring. In Chapter 6, we introduce a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters are studied, and the proposed models are applied to various real-life situations.
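As a hedged illustration of this class of model (a sketch of the general form, not necessarily the exact formulation of Chapter 2), a bivariate proportional hazards model in which the covariate vector Z acts differently on the two components of the vector hazard can be written as:

```latex
% Sketch: bivariate proportional hazards with component-specific effects.
% The baseline hazards \lambda_{0k} are left unspecified, as in the Cox
% framework, and \beta_1 \neq \beta_2 lets the covariates act differently
% on the two components of the vector hazard function.
\lambda_k(t_k \mid Z) = \lambda_{0k}(t_k)\,\exp\bigl(\beta_k^{\top} Z\bigr),
\qquad k = 1, 2.
```

As in the univariate Cox model, each beta_k can then be estimated by maximising a partial likelihood in which every observed event contributes the ratio of its hazard to the sum of hazards over the corresponding risk set, leaving the baselines unspecified.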
Abstract:
In the UK, the recycling of sewage sludge to land is expected to double by 2006, but the security of this route is threatened by environmental concerns and health scares. Strategic investment is needed to ensure sustainable and secure sludge recycling outlets. At present, the security of this landbank for sludge recycling is determined by legislation relating to nutrient applications to land rather than to potentially toxic elements (PTEs) - especially the environmental risk linked to soil phosphorus (P) saturation. We believe that not all land has an equal risk of contributing nutrients, derived from applications to land, to receiving waters. We are currently investigating whether it is possible to minimise nutrient loss by applying sludge to land outside Critical Source Areas (CSAs), regardless of soil P Index status. Research is underway to develop a predictive and spatially sensitive, semi-distributed model of critical thresholds for sludge application that goes beyond traditional 'end-of-pipe' or 'edge-of-field' modelling to include hydrological flow paths and delivery mechanisms to receiving waters from non-point sources at the catchment scale.
Abstract:
A semi-distributed model, INCA, has been developed to determine the fate and distribution of nutrients in terrestrial and aquatic systems. The model simulates nitrogen and phosphorus processes in soils, groundwaters and river systems and can be applied in a semi-distributed manner at a range of scales. In this study, the model has been applied at field to sub-catchment to whole catchment scale to evaluate the behaviour of biosolid-derived losses of P in agricultural systems. It is shown that process-based models such as INCA, applied at a wide range of scales, reproduce field and catchment behaviour satisfactorily. The INCA model can also be used to generate generic information for risk assessment. By adjusting three key variables: biosolid application rates, the hydrological connectivity of the catchment and the initial P-status of the soils within the model, a matrix of P loss rates can be generated to evaluate the behaviour of the model and, hence, of the catchment system. The results, which indicate the sensitivity of the catchment to flow paths, to application rates and to initial soil conditions, have been incorporated into a Nutrient Export Risk Matrix (NERM).
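A minimal sketch of how such a matrix might be assembled, with a hypothetical loss-response function standing in for the actual INCA runs (the variable levels and the function `p_loss` below are assumptions for illustration, not INCA outputs):

```python
import numpy as np

# Illustrative sketch only: cross the three key variables named above
# (biosolid application rate, hydrological connectivity, initial soil P
# status) to build a matrix of P loss rates. `p_loss` is a hypothetical
# toy response, not the INCA model.

app_rates = np.array([0.0, 5.0, 10.0, 20.0])   # t/ha/yr, assumed levels
connectivity = np.array([0.2, 0.5, 0.8])       # connected land fraction, assumed
initial_p = np.array([10.0, 25.0, 60.0])       # mg P/kg soil, assumed levels

def p_loss(rate, conn, p0):
    """Hypothetical P loss rate (kg P/ha/yr): rises with all three drivers."""
    return conn * (0.01 * p0 + 0.05 * rate)

# Full factorial matrix of loss rates (rate x connectivity x P status).
matrix = np.array([[[p_loss(r, c, p) for p in initial_p]
                    for c in connectivity]
                   for r in app_rates])

# A risk matrix follows by thresholding the loss rates into classes.
risk = np.digitize(matrix, bins=[0.5, 1.5])    # 0 = low, 1 = medium, 2 = high
print(risk.shape)                              # (4, 3, 3)
```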
Abstract:
Voluminous rhyolitic eruptions from Toba, Indonesia, and the Taupo Volcanic Zone (TVZ), New Zealand, have dispersed volcanic ash over vast areas in the late Quaternary. The ~74 ka Youngest Toba Tuff (YTT) eruption deposited ash over the Bay of Bengal and the Indian subcontinent to the west. The ~340 ka Whakamaru eruption (TVZ) deposited the widespread Rangitawa Tephra, dominantly to the southeast (in addition to occurrences northwest of the vent), extending across the landmass of New Zealand, the South Pacific Ocean and the Tasman Sea, with distal terrestrial exposures on the Chatham Islands. These super-eruptions involved ~2500 km^3 and ~1500 km^3 of magma (dense-rock equivalent; DRE), respectively. Ultra-distal terrestrial exposures of YTT at two localities in India, the Middle Son Valley, Madhya Pradesh, and the Jurreru River Valley, Andhra Pradesh, at distances of >2000 km from the source caldera, show a basal 'primary' ashfall unit ~4 cm thick, although deposits containing reworked ash are up to ~3 m in total thickness. Exposures of Rangitawa Tephra on the Chatham Islands, >900 km from the source caldera, are ~15-30 cm thick. At more proximal localities (~200 km from source), Rangitawa Tephra is ~55-70 cm thick and characterized by a crystal-rich basal layer and normal grading. Both distal tephra deposits are characterized by very fine ash (with high PM10 fractions) and are crystal-poor. Glass chemistry, stratigraphy and grain-size data for these distal tephra deposits are presented with comparisons of their correlation, dispersal and preservation. Using field observations, ash transport and deposition were modeled for both eruptions using a semi-analytical model (HAZMAP), with assumptions concerning average wind direction and strength during eruption, column shape and vent size. Model outputs provide new insights into eruption dynamics and better estimates of eruption volumes associated with tephra fallout. Modeling based on observed YTT distal tephra thicknesses indicates a relatively low (<40 km high), very turbulent eruption column, consistent with deposition from a co-ignimbrite cloud extending over a broad region. Similarly, the Whakamaru eruption was modeled as producing a predominantly Plinian column (~45 km high), with dispersal to the southeast by strong prevailing winds. Significant ash fallout away from the main dispersal direction, to the northwest of the source, cannot be replicated in this modeling. The widespread dispersal of large volumes of fine ash from both eruptions may have had global environmental consequences, acutely affecting areas up to thousands of kilometers from the vent.
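As a back-of-the-envelope companion to the full HAZMAP modeling (which also accounts for wind advection, diffusion, column shape and vent size), a simple exponential thinning law can be fitted to the Rangitawa thicknesses quoted above; the midpoint values below are assumptions for illustration:

```python
import numpy as np

# Illustrative sketch, not HAZMAP: fit an exponential thinning law
# T(r) = T0 * exp(-r / r0) to the Rangitawa Tephra thicknesses quoted in
# the abstract (~55-70 cm at ~200 km; ~15-30 cm at >900 km), using the
# midpoints of the quoted ranges as the two calibration points.
r = np.array([200.0, 900.0])     # distance from source, km
T = np.array([62.5, 22.5])       # deposit thickness, cm (assumed midpoints)

# Linearise ln T = ln T0 - r / r0 and fit a straight line.
slope, lnT0 = np.polyfit(r, np.log(T), 1)
T0, r0 = np.exp(lnT0), -1.0 / slope
print(f"T0 ~ {T0:.0f} cm at source, e-folding distance r0 ~ {r0:.0f} km")
# Eruption-volume estimates additionally require integrating the thinning
# law over the deposit area and correcting for wind-driven asymmetry.
```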
Abstract:
We address the problem of automatically identifying and restoring damaged and contaminated images. We suggest a novel approach based on a semi-parametric model. This has two components, a parametric component describing known physical characteristics and a more flexible non-parametric component. The latter avoids the need for a detailed model for the sensor, which is often costly to produce and lacking in robustness. We assess our approach using an analysis of electroencephalographic images contaminated by eye-blink artefacts and highly damaged photographs contaminated by non-uniform lighting. These experiments show that our approach provides an effective solution to problems of this type.
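A minimal sketch of the two-component idea, assuming the parametric part is a low-order polynomial illumination field (matching the non-uniform lighting example) and treating what remains as the flexible non-parametric part; this illustrates the decomposition only, not the authors' estimator:

```python
import numpy as np

# Semi-parametric restoration sketch: a parametric component (polynomial
# illumination surface) is fitted and removed; the remaining structure is
# left to the flexible non-parametric component.

def fit_illumination(img, order=2):
    """Least-squares fit of a 2-D polynomial surface to an image."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx.ravel() / w, yy.ravel() / h
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                      # stand-in image content
lighting = np.linspace(0.5, 1.5, 64)[None, :] * np.ones((64, 1))
observed = scene * lighting                       # non-uniform lighting artefact

illum = fit_illumination(observed)                # parametric component
restored = observed / np.clip(illum, 1e-6, None)  # residual structure
```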
Abstract:
We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density field method was applied to a sample of galaxies brighter than M_r = -21 + 5 log h_100 to identify superclusters, taking into account selection and boundary effects. In order to evaluate the influence of the threshold density, we chose two thresholds: the first maximizes the number of objects (D1) and the second constrains the maximum supercluster size to ~120 h^-1 Mpc (D2). We performed a morphological analysis, using Minkowski functionals, based on a parameter which increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (and total luminosity or size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger and more luminous than pancakes in both the observed and mock catalogues. We also used the mock samples to compare supercluster morphologies identified in position and velocity space, concluding that our morphological classification is not biased by peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that these results are robust. Our analysis indicates that filaments and pancakes present different luminosity and size distributions.
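For context, a common way to build such a discriminating parameter from Minkowski functionals is via 'shapefinders' (Sahni, Sathyaprakash & Shandarin 1998); the sketch below shows that standard construction, though the parameter used in this paper may be defined differently:

```latex
% Shapefinders from the Minkowski functionals of a structure with volume V,
% surface area S and integrated mean curvature C:
T = \frac{3V}{S}, \qquad B = \frac{S}{C}, \qquad L = \frac{C}{4\pi},
% from which planarity and filamentarity follow as ratios:
P = \frac{B - T}{B + T}, \qquad F = \frac{L - B}{L + B}.
```

High F at low P indicates a filament-like structure; high P at low F a pancake.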
Abstract:
We study the stability regions and families of periodic orbits of two planets locked in a co-orbital configuration. We consider different ratios of planetary masses and orbital eccentricities; we also assume that both planets share the same orbital plane. Initially, we perform numerical simulations over a grid of osculating initial conditions to map the regions of stable/chaotic motion and identify equilibrium solutions. These results are later analysed in more detail using a semi-analytical model. Apart from the well-known quasi-satellite orbits and the classical equilibrium Lagrangian points L4 and L5, we also find a new regime of asymmetric periodic solutions. For low eccentricities these are located at (δλ, δϖ) = (±60°, ∓120°), where δλ is the difference in mean longitudes and δϖ is the difference in longitudes of pericentre. The position of these anti-Lagrangian solutions changes with the mass ratio and the orbital eccentricities, and they are found for eccentricities as high as ~0.7. Finally, we also applied a slow mass variation to one of the planets and analysed its effect on an initially asymmetric periodic orbit. We found that the resonant solution is preserved as long as the mass variation is adiabatic, with practically no change in the equilibrium values of the angles.
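A sketch of the kind of grid scan described above, under stated assumptions (two equal planet masses of ~1 Earth mass, e = 0.1, and the open-source REBOUND N-body package as the integrator; this is illustrative, not the authors' code):

```python
import numpy as np
import rebound  # open-source N-body package, assumed installed

# Coarse stability scan over the (delta lambda, delta pomega) plane for two
# co-orbital planets; a large drift in semi-major axis flags escape from
# the 1:1 configuration.

def max_drift(dlam_deg, dpom_deg, e=0.1, m=3e-6, t_end=2.0e4):
    sim = rebound.Simulation()                    # units: G = M_star = a = 1
    sim.add(m=1.0)                                # central star
    sim.add(m=m, a=1.0, e=e, l=0.0, pomega=0.0)   # planet 1
    sim.add(m=m, a=1.0, e=e,
            l=np.radians(dlam_deg), pomega=np.radians(dpom_deg))
    sim.move_to_com()
    drift = 0.0
    for t in np.linspace(0.0, t_end, 100):
        sim.integrate(t)
        drift = max(drift, abs(sim.particles[1].a - 1.0))
    return drift

# Map the grid of osculating initial conditions (coarse 30-degree steps).
grid = np.array([[max_drift(dl, dp) for dp in range(-180, 181, 30)]
                 for dl in range(0, 181, 30)])
```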
Abstract:
This work investigates the impact of schooling on income distribution in states/regions of Brazil. Using a semi-parametric model, discussed in DiNardo, Fortin & Lemieux (1996), we measure how much of the income differences between the Northeast and Southeast regions - the country's poorest and richest - and between the states of Ceará and São Paulo in those regions - can be explained by differences in the schooling levels of the resident population. Using data from the National Household Survey (PNAD), we construct counterfactual densities by reweighting the distribution of the poorest region/state by the schooling profile of the richest. We conclude that: (i) more than 50% of the income difference is explained by the difference in schooling; (ii) the highest deciles of the income distribution gain more from an increase in schooling, closely approaching the wage distribution of the richest region/state; and (iii) an increase in schooling, holding the wage structure constant, aggravates the wage disparity in the poorest regions/states.
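A minimal sketch of the reweighting step, using synthetic data in place of PNAD microdata (the schooling shares and the toy wage equation below are assumptions for illustration only):

```python
import numpy as np
from scipy.stats import gaussian_kde

# DiNardo-Fortin-Lemieux-style reweighting sketch: the poor region's wage
# density is reweighted by the ratio of schooling frequencies rich/poor to
# obtain the counterfactual density.

rng = np.random.default_rng(1)
school_poor = rng.choice(4, size=5000, p=[0.50, 0.30, 0.15, 0.05])  # levels 0-3
school_rich = rng.choice(4, size=5000, p=[0.20, 0.30, 0.30, 0.20])
log_wage_poor = 0.5 * school_poor + rng.normal(0.0, 0.6, size=5000)

# Reweighting factor psi(s) = P(s | rich) / P(s | poor), one value per worker.
p_poor = np.bincount(school_poor, minlength=4) / school_poor.size
p_rich = np.bincount(school_rich, minlength=4) / school_rich.size
psi = (p_rich / p_poor)[school_poor]

# Actual density vs counterfactual density (poor wages, rich schooling mix).
f_actual = gaussian_kde(log_wage_poor)
f_counterfactual = gaussian_kde(log_wage_poor, weights=psi)
```

Comparing the two densities then shows how much of the observed wage gap is attributable to the schooling mix alone, holding the wage structure constant.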
Abstract:
Many of the world's hydrocarbon reserves are formed by heavy oils (°API between 10 and 20). Moreover, several heavy oil fields are mature and thus offer great challenges for the oil industry. Among the thermal methods used to recover these resources, steamflooding has been the main economically viable alternative. Latent heat carried by the steam heats the reservoir, reducing oil viscosity and facilitating production. This method has many variations and has been studied both theoretically and experimentally (in pilot projects and in full field applications). In order to increase oil recovery and reduce steam injection costs, the injection of an alternative fluid has been used in three main ways: alternated with steam, co-injected with steam, and after steam injection has been interrupted. The main objective of these injection systems is to reduce the amount of heat supplied to the reservoir, using cheaper fluids while maintaining the same oil production levels. This work discusses the use of carbon dioxide, nitrogen, methane and water as alternative fluids to steam. The analyzed parameters were oil recovery and net cumulative oil production. The reservoir simulation model corresponds to an oil reservoir of 100 m x 100 m x 28 m, on a Cartesian coordinate system (x, y and z directions). It is a semi-synthetic model with some reservoir data similar to those found in the Brazilian Potiguar Basin. All studied cases were run using the STARS simulator from CMG (Computer Modelling Group, version 2009.10). It was found that waterflooding after steam injection interruption achieved the highest net cumulative oil compared with the injection of the other fluids. Moreover, it was observed that steam and alternative fluids, whether co-injected or alternated, did not increase project profitability compared with steamflooding.
Abstract:
The history matching procedure for an oil reservoir is of paramount importance in order to obtain a characterization of the reservoir parameters (static and dynamic) that leads to more accurate production forecasts. Throughout this process one seeks reservoir model parameters which are able to reproduce the behaviour of the real reservoir. This reservoir model may then be used to predict production and can aid oil field management. During the history matching procedure the reservoir model parameters are modified and, for every new set of reservoir model parameters found, a fluid flow simulation is performed so that it is possible to evaluate whether or not this new set of parameters reproduces the observations in the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir are below a certain tolerance. The determination of the model parameters via history matching requires the minimisation of an objective function (the difference between the observed and simulated production according to a chosen norm) in a parameter space populated by many local minima; in other words, more than one set of reservoir model parameters fits the observations. With respect to this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. In order to reduce this ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the regularization of the inverse problem associated with history matching was performed via the introduction of a smoothness constraint on the following parameters: permeability and porosity. This constraint embodies the geological assumption that these two properties vary smoothly in space. In this sense, it is necessary to find the right relative weight of this constraint in the objective function, one that stabilizes the inversion and yet introduces minimum bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model. This method does not require the use of derivatives when searching for the minimum of the objective function. Here, it is shown that the judicious introduction of the smoothness constraint in the objective function formulation reduces the associated ambiguity and introduces minimum bias in the estimates of the permeability and porosity of the semi-synthetic reservoir model.
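A sketch of the regularised objective described above, with a hypothetical toy forward model standing in for the flow simulator; since COMPLEX (Box's derivative-free complex method) is not available in SciPy, Nelder-Mead, another derivative-free search, is used here as a stand-in:

```python
import numpy as np
from scipy.optimize import minimize

# Regularised history-matching sketch: data misfit plus a smoothness
# (first-difference) penalty on a 1-D grid of model parameters m, e.g.
# log-permeability. `forward` is a hypothetical stand-in for the simulator.

n = 10
d_obs = np.full(n, 0.7)                    # observed production (toy data)

def forward(m):
    """Hypothetical toy response in place of the fluid flow simulation."""
    return 1.0 / (1.0 + np.exp(-m))

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # first-difference operator

def objective(m, mu=1.0):
    misfit = np.sum((d_obs - forward(m)) ** 2)   # data misfit term
    roughness = np.sum((D @ m) ** 2)             # smoothness constraint
    return misfit + mu * roughness               # mu = relative weight

res = minimize(objective, x0=np.zeros(n), method="Nelder-Mead")
print(res.x)   # raising mu stabilizes the inversion but adds bias
```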
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This paper deals with some common problems in structural analysis when calculating the experimental semi-variogram and fitting a semi-variogram model. Geochemical data were used and the following cases were studied: regular versus irregular sampling grids, the presence of outlier values, skewed distributions due to high variability of the data, and estimation using a kriging procedure. -from English summary
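A minimal sketch of the experimental semi-variogram computation for an irregular sampling grid, with synthetic values standing in for the geochemical data and simple distance-bin tolerances assumed:

```python
import numpy as np

# Experimental semi-variogram sketch for irregularly spaced data:
# gamma(h) = (1 / 2N(h)) * sum over pairs (z_i - z_j)^2, where the sum runs
# over pairs whose separation falls within tol of the lag h.

def experimental_semivariogram(coords, z, lags, tol):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (z[:, None] - z[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)   # each pair counted once
        gamma.append(sq[mask].mean() / 2.0 if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(2)
coords = rng.random((200, 2)) * 100.0    # irregular sampling locations
z = rng.normal(size=200)                 # stand-in for geochemical values
gamma = experimental_semivariogram(coords, z,
                                   lags=np.arange(5, 50, 5), tol=2.5)
```

A semi-variogram model (spherical, exponential, etc.) is then fitted to these gamma(h) estimates before kriging.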
Abstract:
In this work, we investigate the effects of the functionalization of oxidative groups on the structure of zigzag graphene nanoribbons, as well as the effects of constrictions; these effects were analyzed by means of electronic transport under a longitudinal external field. Our calculations were parameterized by the extended Hückel semi-empirical model (ETH), adopting the non-equilibrium Green's function (NEGF) method. The currents were calculated via the Landauer equation, which uses the transmission function of the scattering region for the flux of electrons with energy (E) coming from the left electrode. Through this approach, it was possible to analyze the behavior of the charge carriers in each of the proposed devices, as well as the nature of that behavior. Two transport regimes were observed in the I(V) curves, Ohmic and NDR (negative differential resistance), with current maxima and a threshold voltage (VTh1
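The Landauer relation referred to above has the standard form I(V) = (2e/h) ∫ T(E) [f_L(E) − f_R(E)] dE. A numerical sketch with a hypothetical Lorentzian transmission function (a stand-in for the NEGF transmission of the scattering region) is:

```python
import numpy as np

# Landauer current sketch: I(V) = (2e/h) * integral of T(E) [f_L - f_R] dE.
# The Lorentzian T(E) below is a hypothetical stand-in for the NEGF result.

eV = 1.602e-19                      # 1 eV in joules
e, h = 1.602e-19, 6.626e-34         # electron charge, Planck constant (SI)
kT = 0.025 * eV                     # thermal energy near room temperature

def fermi(E, mu):
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def current(V, E0=0.3 * eV, Gamma=0.05 * eV):
    E = np.linspace(-1.0, 1.0, 4001) * eV             # energy window
    T = Gamma**2 / ((E - E0) ** 2 + Gamma**2)         # toy transmission
    muL, muR = +0.5 * e * V, -0.5 * e * V             # symmetric bias drop
    integrand = T * (fermi(E, muL) - fermi(E, muR))
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))
    return 2.0 * e / h * integral                     # amperes

I_V = [(v, current(v)) for v in np.linspace(0.0, 1.0, 11)]  # toy I(V) curve
```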
Abstract:
We present a photometric catalogue of compact groups of galaxies (p2MCGs) automatically extracted from the Two-Micron All Sky Survey (2MASS) extended source catalogue. A total of 262 p2MCGs are identified, following the criteria defined by Hickson, of which 230 survive visual inspection (given occasional galaxy fragmentation and blends in the 2MASS parent catalogue). Only one quarter of these 230 groups were previously known compact groups (CGs). Among the 144 p2MCGs that have all their galaxies with known redshifts, 85 (59 per cent) have four or more accordant galaxies. This v2MCG sample of velocity-filtered p2MCGs constitutes the largest sample of CGs (with N >= 4) catalogued to date with both well-defined selection criteria and velocity filtering, and is the first CG sample selected by stellar mass. It is fairly complete up to K_group ~ 9 and radial velocity of ~6000 km s^-1. We compared the properties of the 78 v2MCGs with median velocities greater than 3000 km s^-1 with the properties of other CG samples, as well as those (mvCGs) extracted from the semi-analytical model (SAM) of Guo et al. run on the high-resolution Millennium-II simulation. This mvCG sample is similar (i.e. with 2/3 of physically dense CGs) to those we had previously extracted from three other SAMs run on the Millennium simulation with 125 times worse spatial and mass resolution. The space density of v2MCGs within 6000 km s^-1 is 8.0 x 10^-5 h^3 Mpc^-3, i.e. four times that of the Hickson sample [Hickson Compact Group (HCG)] up to the same distance and with the same criteria used in this work, but still 40 per cent less than that of the mvCGs. The v2MCG constitutes the first group catalogue to show a statistically large first-second ranked galaxy magnitude gap according to Tremaine-Richstone statistics, as expected if the first-ranked group members tend to be the products of galaxy mergers, and as confirmed in the mvCGs. The v2MCG is also the first observed sample to show that first-ranked galaxies tend to be centrally located, again consistent with the predictions obtained from the mvCGs. We found no significant correlation between group apparent elongation and velocity dispersion in the quartets among the v2MCGs, and the velocity dispersions of apparently round quartets are not significantly larger than those of chain-like ones, in contrast to what has been previously reported in HCGs. By virtue of its automatic selection with the popular Hickson criteria, its size, its selection on stellar mass, and its statistical signs of mergers and centrally located brightest galaxies, the v2MCG catalogue appears to be the laboratory of choice for studying physically dense groups of four or more galaxies of comparable luminosity.
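For reference, the Hickson criteria mentioned above combine population, compactness, and isolation cuts; the sketch below applies them schematically, with the surface-brightness limit written for the original optical definition (the paper's 2MASS K-band adaptation may use different limits):

```python
import numpy as np

# Schematic Hickson-style compact-group test: N >= 4 members within 3 mag
# of the brightest, mean group surface brightness below a limit, and no
# comparably bright neighbour within three group radii.

def is_compact_group(mags, mu_group, theta_group, theta_nearest,
                     mu_limit=26.0):
    population = np.sum(mags <= mags.min() + 3.0) >= 4   # richness cut
    compactness = mu_group <= mu_limit                   # mag / arcsec^2
    isolation = theta_nearest >= 3.0 * theta_group       # angular radii
    return bool(population and compactness and isolation)

# Example: four accordant members, mu = 24.5 mag/arcsec^2, group radius
# 2 arcmin, nearest comparable outside galaxy at 8 arcmin.
print(is_compact_group(np.array([11.2, 12.0, 13.1, 13.9]), 24.5, 2.0, 8.0))
```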
Abstract:
When compared to our Solar System, many exoplanet systems exhibit quite unusual planet configurations; some of these are hot Jupiters, which orbit their central stars with periods of a few days, others are resonant systems composed of two or more planets with commensurable orbital periods. It has been suggested that these configurations can be the result of migration processes originated by tidal interactions of the planets with disks and central stars. The process known as planet migration occurs due to dissipative forces which affect the planetary semi-major axes and cause the planets to move towards, or away from, the central star. In this talk, we present possible signatures of planet migration in the distribution of the hot Jupiters and resonant exoplanet pairs. For this task, we develop a semi-analytical model to describe the evolution of the migrating planetary pair, based on the fundamental concepts of conservative and dissipative dynamics of the three-body problem. Our approach is based on an analysis of the energy and orbital angular momentum exchange between the two-planet system and an external medium; thus no specific kind of dissipative force needs to be invoked. We show that, under the assumption that dissipation is weak and slow, the evolutionary routes of the migrating planets are traced by the stationary solutions of the conservative problem (Birkhoff, Dynamical Systems, 1966). The ultimate convergence to, and the evolution of the system along, one of these modes of motion are determined uniquely by the condition that the dissipation rate is sufficiently smaller than the proper frequencies of the system. We show that it is possible to reconstruct the starting configurations and migration history of the systems on the basis of their final states, and consequently to constrain the parameters of the physical processes involved.
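The energy and angular momentum bookkeeping underlying this approach can be made explicit with the standard two-body expressions (a sketch; the full semi-analytical model also includes the interaction terms of the three-body problem):

```latex
% Orbital energy and angular momentum of planet i (mass m_i, semi-major
% axis a_i, eccentricity e_i) around a star of mass M:
E_i = -\frac{G M m_i}{2 a_i}, \qquad
L_i = m_i \sqrt{G M a_i \left(1 - e_i^{2}\right)}.
% Dissipation changes the totals E_1 + E_2 and L_1 + L_2 slowly; when these
% rates are small compared with the proper frequencies of the system, the
% pair evolves adiabatically along the stationary solutions of the
% conservative problem.
```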