853 results for Small Area Estimation
Abstract:
The regime-switching GARCH model is the foundation of this thesis. This model offers rich dynamics for modelling financial data by combining a GARCH structure with time-varying parameters. Unfortunately, this flexibility gives rise to a path dependence problem that has prevented maximum likelihood estimation of the model since its introduction nearly 20 years ago. The first half of this thesis provides a solution to this problem by developing two methodologies for computing the maximum likelihood estimator of the regime-switching GARCH model. The first proposed estimation technique is based on the Monte Carlo EM algorithm and on importance sampling, while the second generalizes the model approximations introduced over the last two decades, known as collapsing procedures. This generalization establishes a methodological link between these approximations and the particle filter. The discovery of this relationship is important because it justifies the validity of the collapsing approach for estimating the regime-switching GARCH model. The second half of this thesis is motivated by the financial crisis of the late 2000s, during which poor risk assessment within several financial firms led to numerous institutional failures. Using a broad set of 78 econometric models, including several generalizations of the regime-switching GARCH model, it is shown that model risk plays a very important role in the assessment and management of long-term investment risk in the context of segregated funds. Although the financial literature has devoted considerable research to advancing econometric models in order to improve the pricing and hedging of financial products, approaches for measuring the effectiveness of a dynamic hedging strategy have evolved little. This thesis makes a methodological contribution in this area by proposing a regression-based statistical framework for better measuring this effectiveness.
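The path dependence problem and its particle-filter resolution can be made concrete with a short sketch. The Python example below is a minimal illustration with made-up parameter values, not the thesis's estimator: it approximates the likelihood of a two-regime GARCH(1,1) with a plain bootstrap particle filter, where the resampling step is what "collapses" the exponentially growing set of regime paths, mirroring the link the thesis draws between collapsing procedures and particle filtering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-regime GARCH(1,1) parameters (illustrative values only)
omega = np.array([0.05, 0.20])
alpha = np.array([0.05, 0.15])
beta = np.array([0.90, 0.75])
P = np.array([[0.98, 0.02],   # regime transition matrix
              [0.05, 0.95]])

def simulate(T=500):
    """Simulate returns from the regime-switching GARCH model."""
    s, h = 0, omega[0] / (1 - alpha[0] - beta[0])
    y = np.empty(T)
    for t in range(T):
        s = rng.choice(2, p=P[s])
        h = omega[s] + alpha[s] * (y[t - 1] ** 2 if t > 0 else 0.0) + beta[s] * h
        y[t] = np.sqrt(h) * rng.standard_normal()
    return y

def particle_loglik(y, n_particles=1000):
    """Approximate log-likelihood with a bootstrap particle filter.

    Each particle carries a (regime, conditional variance) pair, i.e. one
    sampled regime path; resampling collapses the path space, which is the
    methodological link with collapsing procedures."""
    s = rng.choice(2, size=n_particles)
    h = np.full(n_particles, omega.mean() / (1 - alpha.mean() - beta.mean()))
    loglik, y_prev2 = 0.0, 0.0
    for t in range(len(y)):
        # Propagate each particle's regime and conditional variance
        s = np.array([rng.choice(2, p=P[si]) for si in s])
        h = omega[s] + alpha[s] * y_prev2 + beta[s] * h
        w = np.exp(-0.5 * y[t] ** 2 / h) / np.sqrt(2 * np.pi * h)
        loglik += np.log(w.mean())
        # Multinomial resampling: keep the particle count fixed
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        s, h = s[idx], h[idx]
        y_prev2 = y[t] ** 2
    return loglik

y = simulate()
print("approximate log-likelihood:", particle_loglik(y))
```

Without the resampling step, each of the 2^T regime paths would carry its own conditional variance, which is exactly the path dependence that blocks direct likelihood evaluation.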
Abstract:
Therapeutic drug monitoring is recommended for dose adjustment of immunosuppressive agents. A growing number of studies support the relevance of using the area under the curve (AUC) as a biomarker for the therapeutic monitoring of cyclosporine (CsA) in hematopoietic stem cell transplantation. However, for reasons intrinsic to the method of calculating the AUC, its use in the clinical setting is impractical. Limited sampling strategies based on regression approaches (R-LSS) or Bayesian approaches (B-LSS) are practical alternatives for satisfactory estimation of the AUC. However, for these methodologies to be applied effectively, their design must accommodate clinical reality, notably by requiring a minimal number of concentrations spread over a short sampling period, and particular attention should be paid to their adequate development and validation. It is also important to mention that irregularity in the timing of blood sample collection can have a non-negligible impact on the predictive performance of R-LSS; yet, to date, this impact has not been studied. This doctoral thesis addresses these issues in order to enable precise and practical estimation of the AUC. The studies were carried out in the context of CsA use in pediatric patients who had undergone hematopoietic stem cell transplantation. First, multiple regression approaches as well as population pharmacokinetic (Pop-PK) analysis were used constructively to adequately develop and validate LSS. Then, several Pop-PK models were evaluated, keeping in mind their intended use in the context of AUC estimation, and the performance of B-LSS targeting different versions of the AUC was studied. Finally, the impact of discrepancies between actual blood sampling times and planned nominal times on the predictive performance of R-LSS was quantified using a simulation approach that considers diverse, realistic scenarios representing potential errors in the blood sampling schedule. This work first led to the development of R-LSS and B-LSS with satisfactory clinical performance that are also practical, since they involve 4 or fewer sampling points obtained within 4 hours post-dose. The Pop-PK analysis retained a two-compartment structural model with a lag time; however, the final model, notably the one with covariates, did not improve B-LSS performance compared with the structural models (without covariates). In addition, we demonstrated that B-LSS perform better for the AUC derived from simulated concentrations excluding residual errors, which we named the "underlying AUC", than for the observed AUC computed directly from measured concentrations. Finally, our results showed that irregularity in blood sampling times has an important impact on the predictive performance of R-LSS; this impact depends on the number of samples required, but even more on the duration of the sampling process involved.
We also showed that sampling-time errors committed at moments when the concentration changes rapidly are those that most affect the predictive power of R-LSS. More interestingly, we highlighted that even if different R-LSS can perform similarly when based on nominal times, their tolerance to sampling-time errors can differ widely; adequate consideration of the impact of these errors can therefore lead to more reliable selection and use of R-LSS. Through an in-depth investigation of different aspects underlying limited sampling strategies, this thesis provides notable methodological improvements and proposes new avenues to ensure their reliable and informed use, while fostering their suitability for clinical practice.
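To make the R-LSS idea concrete, here is a minimal Python sketch that fits a linear model predicting AUC from concentrations at a few sampling times (0.5, 1, 2 and 4 h post-dose). The one-compartment pharmacokinetics, parameter values and error magnitudes are hypothetical stand-ins, not the thesis's population model or schedule.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(1)

# Simulate a one-compartment oral-absorption profile per patient
# (illustrative PK only; the thesis used a two-compartment model with lag time)
n, dose = 60, 100.0
ka = rng.lognormal(np.log(1.5), 0.3, n)    # absorption rate (1/h)
ke = rng.lognormal(np.log(0.25), 0.3, n)   # elimination rate (1/h)
V = rng.lognormal(np.log(50.0), 0.2, n)    # volume of distribution (L)

def conc(t):
    """Concentration at time t for all simulated patients."""
    return dose / V * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

times = np.array([0.5, 1.0, 2.0, 4.0])           # limited sampling schedule
C = np.stack([conc(t) for t in times], axis=1)
C *= np.exp(rng.normal(0, 0.10, C.shape))        # proportional assay error
auc_true = dose / (V * ke)                       # AUC(0-inf) for this model

# Least-squares fit: AUC ~ b0 + sum_j b_j * C(t_j)
X = np.column_stack([np.ones(n), C])
coef, *_ = lstsq(X, auc_true, rcond=None)
pred = X @ coef
bias = np.mean((pred - auc_true) / auc_true) * 100
rmse = np.sqrt(np.mean(((pred - auc_true) / auc_true) ** 2)) * 100
print(f"coefficients: {coef.round(3)}, bias: {bias:.1f}%, %RMSE: {rmse:.1f}%")
```

The thesis's point about sampling-time errors could be probed with this same sketch by perturbing `times` when generating `C` while keeping the fitted coefficients fixed.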
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of these chemicals is assessed using consequence modelling. Pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia are modelled using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals such as chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling; the distance covered by the threat zone was found to be greatest for chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual and societal risk in the industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads, and individual and societal risks are estimated at different locations. Threat zones due to the different incident outcome cases from the MAH industries are mapped with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation, with the advantage of being both qualitative and quantitative if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate the failure probability of components precisely, owing to insufficient data or the vague characteristics of the basic events, and the availability of failure probability data pertaining to local conditions is reported to be surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and fuzzy logic. A sensitivity analysis evaluates the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
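The fuzzy fault tree computation can be sketched briefly. The Python example below propagates triangular fuzzy probabilities (l, m, u) through AND/OR gates; the basic events and numbers are hypothetical rather than elicited values, and the two-dimensional (TDFFTA) hesitation balancing is omitted.

```python
import numpy as np

def f_and(*events):
    """AND gate: component-wise product of triangular fuzzy probabilities."""
    out = np.ones(3)
    for e in events:
        out = out * np.asarray(e, dtype=float)
    return out

def f_or(*events):
    """OR gate: 1 - prod(1 - p); component-wise is valid since the
    expression is monotone increasing in each probability."""
    out = np.ones(3)
    for e in events:
        out = out * (1.0 - np.asarray(e, dtype=float))
    return 1.0 - out

# Hypothetical basic-event fuzzy probabilities (l, m, u) from expert elicitation
valve_failure = (1e-3, 2e-3, 4e-3)
gasket_leak = (5e-4, 1e-3, 2e-3)
operator_error = (2e-3, 5e-3, 1e-2)

leak_path = f_or(valve_failure, gasket_leak)
top_event = f_or(leak_path, f_and(operator_error, valve_failure))
print("fuzzy P(chlorine release) (l, m, u):", top_event)

l, m, u = top_event
print("defuzzified (centroid):", (l + m + u) / 3)   # crisp point estimate
```

A sensitivity analysis of the kind described above would repeat the top-event evaluation with each basic event removed (or set to zero) and report the relative change.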
Abstract:
Socio-economic research is required to address the socio-economic issues facing small-scale fisheries. A study of the socio-economic conditions of small-scale fishermen is a prerequisite for the good design and successful implementation of effective assistance programmes, providing an overall picture of the structure, activities and standards of living of small-scale fisherfolk. The study is confined to the coastal districts of Ernakulam, Thrissur and Malappuram, and gives a picture of the socio-economic conditions of the fisherfolk in the study area. The variables that may depict the standard of living of small-scale fisherfolk are occupational structure, family size, age structure, income, expenditure, education, housing and other social amenities. The study examines the asset creation of the fisherfolk with the help of government agencies, and the nature of their savings and expenditure patterns. It also provides a picture of the indebtedness of the fisherfolk in the study area. Finally, it analyses the schemes implemented by the government through its agencies, such as the Fisheries Department, Matsyaboard and Matsyafed; the awareness of fisherfolk regarding these schemes; their attitudes and reactions; the extent of accessibility; and the viability of the schemes.
Abstract:
Production Planning and Control (PPC) systems have grown and changed because of developments in planning tools and models as well as the use of computers and information systems in this area. Though much is available in research journals, the practice of PPC lags behind and draws little from published research. PPC practices in SMEs lag behind for many reasons, which need to be explored. This research work deals with the effect of identified variables, such as the forecasting, planning and control methods adopted, the demographics of the key person, the standardization practices followed, and the effects of training, learning and IT usage, on firm performance. A model and framework have been developed based on the literature. The model was tested empirically using data collected through a questionnaire schedule administered to selected respondents from Small and Medium Enterprises (SMEs) in India; the final data set comprised 382 responses. Hypotheses linking SME performance with the use of forecasting, planning and control were formulated and tested. Exploratory factor analysis was used for data reduction and for identifying the factor structure. High- and low-performing firms were classified using a logistic regression model. A confirmatory factor analysis was used to study the structural relationship between firm performance and the dependent variables.
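A schematic version of this analysis pipeline, factor-based data reduction followed by logistic classification of high and low performers, might look like the sketch below. The survey items are simulated and the dimensions are illustrative; only the sample size of 382 is taken from the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# 382 firms as in the study; item responses themselves are simulated here
n_firms, n_items = 382, 20
X = rng.normal(size=(n_firms, n_items))               # survey item scores
# Hypothetical performance label driven by a subset of items plus noise
y = (X[:, :5].mean(axis=1) + rng.normal(0, 0.5, n_firms) > 0).astype(int)

# Exploratory-style factor extraction for data reduction
fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(X)

# Classify high vs. low performers on the factor scores
X_tr, X_te, y_tr, y_te = train_test_split(scores, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```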
Abstract:
The Khaling Rai live in a remote area of the mountain region of Nepal. Subsistence farming is central to their livelihood strategy, the sustainability of which was examined in this study. The sustainable livelihood approach was identified as a suitable theoretical framework for analysing the assets of the Khaling Rai. A baseline study was conducted using indicators to assess the outcome of the livelihood strategies under the three pillars of sustainability: economic, social and environmental. Relationships between key factors were analysed. The outcome showed that farming fulfils their basic need for food security, with self-sufficiency in terms of seeds, organic fertilisers and tools. Agriculture is almost entirely non-monetised: crops are grown mainly for household consumption. However, the central dilemma faced by the Khaling Rai community is the need to develop high-value cash crops in order to improve their livelihoods while at the same time maintaining food security. Institutional support in this regard was found to be lacking. At the same time, soil fertility is declining and the population expanding, which results in smaller land holdings. The capacity to absorb risk is inhibited by the small size of the resource base and access only to small local markets. A two-pronged approach is recommended: firstly, the formation of agricultural cooperative associations in the area; secondly, the selection through these associations of key personnel to be trained in the adoption of improved low-cost technologies for staple crops and in the introduction of appropriate new cash crops.
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly, uniformly convergent reduced-basis approximations: Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation: relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures: stratagems that exploit affine parameter dependence to decouple the generation and projection stages of the approximation process. The operation count for the online stage, in which, given a new parameter value, we calculate the output and associated error bound, depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, based on this technique, we develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are threefold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data; well-posedness is reflected in a bounded "possibility region" that furthermore shrinks as the experimental error is decreased.
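A stripped-down version of the offline/online decomposition reads as follows. The toy affine operator A(mu) = A0 + mu*A1, load, and output functional are illustrative; the paper's a posteriori error bounds and optimal (greedy) snapshot selection are omitted.

```python
import numpy as np

# Truth model: affinely parametrized system A(mu) u = f, output s(mu) = l^T u
n = 200
main, off = 2 * np.ones(n), -np.ones(n - 1)
A0 = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # stiffness-like term
A1 = np.diag(np.linspace(0.1, 1.0, n))                    # parametric term
f = np.ones(n)
l = f / n                                                 # output functional

def truth_solve(mu):
    return np.linalg.solve(A0 + mu * A1, f)

# Offline: snapshots at selected parameter points, orthonormalized basis W
mus_train = [0.1, 1.0, 5.0, 10.0]
W, _ = np.linalg.qr(np.column_stack([truth_solve(m) for m in mus_train]))

# Offline: project each affine term once; online cost is then independent of n
A0_N, A1_N = W.T @ A0 @ W, W.T @ A1 @ W
f_N, l_N = W.T @ f, W.T @ l

def rb_output(mu):
    """Online stage: assemble and solve an N x N system only."""
    uN = np.linalg.solve(A0_N + mu * A1_N, f_N)
    return l_N @ uN

for mu in [0.5, 3.0, 8.0]:
    print(f"mu={mu}: reduced-basis {rb_output(mu):.6f}, truth {l @ truth_solve(mu):.6f}")
```

Because A(mu) is affine in mu, the projected matrices can be precomputed term by term, which is precisely what makes the online operation count depend only on N.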
Abstract:
This paper deals with the problem of navigation for an unmanned underwater vehicle (UUV) through image mosaicking. It represents a first step towards a real-time vision-based navigation system for a small-class, low-cost UUV. We propose a navigation system composed of: (i) an image mosaicking module which provides velocity estimates; and (ii) an extended Kalman filter based on the hydrodynamic equation of motion, previously identified for this particular UUV. The resulting system is able to estimate the position and velocity of the robot. Moreover, it is able to deal with the visual occlusions that usually appear when the sea bottom does not have enough visual features to solve the correspondence problem in a certain area of the trajectory.
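The fusion idea can be sketched with a generic linear Kalman filter standing in for the paper's identified hydrodynamic EKF; the constant-velocity model, noise levels and occlusion rate below are placeholders.

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])     # state: [position, velocity]
H = np.array([[0.0, 1.0]])          # mosaicking module observes velocity only
Q = np.diag([1e-4, 1e-3])           # process noise (placeholder values)
R = np.array([[0.05]])              # measurement noise (placeholder value)

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(3)
true_v = 0.5                        # simulated constant forward speed (m/s)

for step in range(100):
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    z = np.array([true_v + rng.normal(0, 0.2)])   # noisy mosaic velocity
    # Update only when a measurement is available; the filter coasts on the
    # motion model through simulated visual occlusions (10% of steps)
    if rng.random() > 0.1:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

print("estimated position, velocity:", x)
```

Coasting on the prediction step during dropouts is what lets such a filter ride through feature-poor stretches of seabed.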
Abstract:
In networks with small buffers, such as optical packet switching (OPS) based networks, the convolution approach (CA) is regarded as one of the most accurate methods for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks, for which the enhanced convolution approach (ECA) is a good solution in bufferless settings. However, both methods (CA and ECA) have a high computational cost when the number of connections is large. Two new mechanisms (UMCA and ISCA) based on the Monte Carlo method are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
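The flavour of a Monte Carlo admission check, though not the UMCA/ISCA algorithms themselves, can be shown in a few lines: estimate by direct sampling the probability that the aggregate load of heterogeneous on/off connections exceeds the link capacity. The rates and activity factors below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_overflow_prob(rates_on, p_on, capacity, n_samples=100_000):
    """P(sum of heterogeneous on/off sources > capacity) by direct sampling."""
    rates_on, p_on = np.asarray(rates_on), np.asarray(p_on)
    on = rng.random((n_samples, len(p_on))) < p_on   # each source on or off
    load = on @ rates_on                             # aggregate offered load
    return np.mean(load > capacity)

# Hypothetical heterogeneous connections: peak rates (Gb/s) and activity factors
rates = [1.0, 1.0, 2.5, 2.5, 10.0]
p_act = [0.4, 0.4, 0.3, 0.3, 0.1]
print("overflow probability:", mc_overflow_prob(rates, p_act, capacity=10.0))
```

Unlike the convolution of the full load distribution, the sampling cost here grows with the number of samples rather than combinatorially with the number of connections, at the price of the stochastic error the abstract mentions.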
Abstract:
In the Metropolitan Area of Mexico City, most urban trips are made by semi-formal public transportation: small- and medium-capacity vehicles operated by small private enterprises under a concession scheme. This kind of public transportation has been playing a major role in the Mexican capital. On one hand, it has been one of the conditions that made urbanization possible. On the other hand, despite its countless deficiencies, public transportation has long allowed the whole population to move within this huge metropolis. However, that important integrative function has now reached its limits in the most recent suburbs of the city, where a new mode of urbanization is taking place, based on the massive production of very large gated social housing settlements. Here public transportation tends to become a factor of exclusion, and households face significant difficulties in their daily mobility.
Abstract:
Large-scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost remotely operated vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable for obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, where nonlinear minimisation is substituted by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortion, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts while simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory, and the contribution of matching an image pair is investigated using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework. Innovative aspects include the use of a fast image similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches to large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and a comparison made with other state-of-the-art approaches, using a series of challenging datasets in underwater scenarios.
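The MST-based tentative-topology step can be illustrated compactly: connect the images whose similarity is highest (lowest edge cost) before attempting expensive feature matching. Here the similarity matrix is random; in practice it would come from the fast global similarity criterion the thesis describes.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(5)

# Placeholder pairwise image similarities in (0, 1); symmetric, no self-pairs
n_images = 8
sim = rng.random((n_images, n_images))
sim = (sim + sim.T) / 2

# High similarity -> low edge cost; zero entries mean "no edge" for csgraph
cost = 1.0 - sim
np.fill_diagonal(cost, 0)

mst = minimum_spanning_tree(cost)
edges = np.transpose(mst.nonzero())
print("image pairs to match first:", [tuple(int(i) for i in e) for e in edges])
```

The spanning tree guarantees every image is reachable with the fewest, most promising matching attempts; the topology is then refined by matching additional pairs with strong overlap evidence, as described above.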
Abstract:
Desertification is a major soil degradation problem in arid, semi-arid and sub-humid regions, with serious environmental, social and economic consequences resulting from the impact of human activities combined with unfavourable physical and environmental conditions (UNEP, 1994). The main objective of this thesis was to develop a simple methodology for accurately assessing the state and evolution of desertification at the local scale, through the creation of a model called the desertification indicator system (DIS). In this context, one of the two specific objectives of this research focused on studying the most important soil degradation factors at the plot scale, involving extensive fieldwork, laboratory analysis and the corresponding interpretation and discussion of the results. The second specific objective was the development and application of the DIS. The selected study area was the Serra de Rodes catchment, a typical Mediterranean environment within the Cap de Creus Natural Park, NE Spain, which was progressively abandoned by farmers during the last century. Currently, forest fires as well as land-use change, and especially land abandonment, are considered the most important environmental problems in the study area (Dunjó et al., 2003). First, the processes and causes of soil degradation in the area of interest were studied. Based on this knowledge, the most relevant desertification indicators were identified and selected. Finally, the desertification indicators selected at the catchment scale, including soil erosion and surface runoff, were integrated into a spatial process model. Since soil is considered the main indicator of erosion processes, according to FAO/UNEP/UNESCO (1979), the original landscape as well as the two land-use scenarios developed, one representing the hypothetical passage of a forest fire and the other a fully cultivated landscape, can be classified as environments under low or moderate degradation. Compared with the original scenario, the two created scenarios, and in particular the cultivated one, showed higher erosion and surface runoff values. Therefore, these two hypothetical scenarios do not appear to be a valid sustainable alternative to the degradation processes occurring in the study area. Nevertheless, a wide range of alternative scenarios can be developed with the DIS, taking into account policies of special interest for the region, so as to help determine the potential desertification consequences of those policies when applied in such a spatially complex setting. In conclusion, the developed model appears to be a fairly accurate system for identifying present and future risks, as well as for effectively planning measures to combat desertification at the catchment scale. However, this first version of the model has several limitations, and further research is needed to develop a future, improved version of the DIS.
Abstract:
Habitat area requirements of forest songbirds vary greatly among species, but the causes of this variation are not well understood. Large area requirements could result from advantages for certain species when settling their territories near those of conspecifics. This phenomenon would result in spatial aggregations much larger than single territories. Species that aggregate their territories could show reduced population viability in highly fragmented forests, since remnant patches may remain unoccupied if they are too small to accommodate several territories. The objectives of this study were twofold: (1) to seek evidence of territory clusters of forest birds at various spatial scales, lags of 250-550 m, before and after controlling for habitat spatial patterns; and (2) to measure the relationship between spatial autocorrelation and apparent landscape sensitivity for these species. In analyses that ignored spatial variation of vegetation within remnant forest patches, nine of the 17 species studied significantly aggregated their territories within patches. After controlling for forest vegetation, the locations of eight out of 17 species remained significantly clustered. The aggregative pattern that we observed may, thus, be indicative of a widespread phenomenon in songbird populations. Furthermore, there was a tendency for species associated with higher forest cover to be more spatially aggregated.
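One simple way to test for territory clustering at a given spatial lag, sketched below with hypothetical territory centroids, is to compare the number of territory pairs within the lag against a Monte Carlo null of random placement; the study's control for habitat spatial patterns is omitted here.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)

def pairs_within(points, lag):
    """Count point pairs closer than the given spatial lag (m)."""
    return int(np.sum(pdist(points) < lag))

# Hypothetical territory centroids (m) inside a 2 km x 2 km forest patch
obs = rng.random((40, 2)) * 2000
lag = 400.0
obs_stat = pairs_within(obs, lag)

# Monte Carlo null: same number of territories placed at random
null = [pairs_within(rng.random((40, 2)) * 2000, lag) for _ in range(999)]
p = (1 + np.sum(np.array(null) >= obs_stat)) / 1000
print(f"pairs within {lag} m: {obs_stat}, Monte Carlo p = {p:.3f}")
```

Repeating the test across lags of 250-550 m, as in the study design, would trace how aggregation strength varies with spatial scale.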
Abstract:
Worldwide, marine protected areas (MPAs) have been designated to protect marine resources, including top predators such as seabirds. There is no conclusive information on whether protected areas can improve population trends of seabirds when these are further exploited as tourist attractions, an activity that has increased in past decades. Humboldt Penguins (Spheniscus humboldti) and Magellanic Penguins (S. magellanicus) breed sympatrically on the Puñihuil Islets, two small coastal islands off the west coast of Chiloé Island (41° S) in southern Chile that are subject to exploitation for tourism. Our goal was to compare the population size of the mixed colony of Humboldt and Magellanic Penguins before and after protection from unregulated tourism and freely roaming goats in 1997. For this purpose, two censuses were conducted in 2004 and 2008, and the numbers compared with those obtained in 1997 by other authors. The proportion of occupied, unoccupied, and collapsed/flooded burrows changed between years: there were 68% and 34% fewer collapsed burrows in 2004 and 2008, respectively, than in 1997. For the total number of burrows of both species, we counted 48% and 63% more burrows in 2004 and 2008, respectively, than in 1997. We counted 13% more burrows of Humboldt Penguins in 2008 than in 1997, and for Magellanic Penguins, we estimated a 64% increase in burrows in 2008. Presumably, this was a result of habitat improvement attributable to the exclusion of tourists and the removal of goats from the islets. Although tourist visits to the islets are prohibited, tourism activities around the colonies are prevalent and need to be taken into account to promote appropriate management.
Abstract:
Analysis of the vertical velocity of ice crystals observed with a 1.5 µm Doppler lidar from a continuous sample of stratiform ice clouds over 17 months shows that the distribution of Doppler velocity varies strongly with temperature, with mean velocities of 0.2 m/s at -40 °C, increasing to 0.6 m/s at -10 °C due to particle growth and broadening of the size spectrum. We examine the likely influence of crystals smaller than 60 µm by forward modelling their effect on the area-weighted fall speed, and comparing the results to the lidar observations. The comparison strongly suggests that the concentration of small crystals in most clouds is much lower than measured in situ by some cloud droplet probes. We argue that the discrepancy is likely due to shattering of large crystals on the probe inlet, and that numerous small particles should not be included in numerical weather and climate model parameterizations.
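The forward-modelling step can be caricatured in a few lines: compute the area-weighted fall speed of a crystal size distribution with and without particles below 60 µm and compare. The power-law fall speed, area proxy and size spectrum below are toy choices, not the paper's microphysics.

```python
import numpy as np

# Toy crystal population: diameters D (m), number density n(D),
# fall speed v = a * D^b, and geometric area A proportional to D^2
D = np.logspace(np.log10(5e-6), np.log10(2e-3), 500)
n = D ** -1.8 * np.exp(-D / 3e-4)     # illustrative size spectrum
v = 100.0 * D ** 0.6                  # illustrative fall-speed law (m/s)
A = D ** 2                            # area proxy

def area_weighted_v(mask):
    """Area-weighted mean fall speed over the selected size range,
    mimicking what a Doppler lidar senses from the population."""
    w = n[mask] * A[mask]
    return np.sum(w * v[mask]) / np.sum(w)

all_sizes = np.ones_like(D, dtype=bool)
no_small = D >= 60e-6                 # exclude crystals smaller than 60 um
print("with small crystals:    %.3f m/s" % area_weighted_v(all_sizes))
print("without small crystals: %.3f m/s" % area_weighted_v(no_small))
```

If adding the small-crystal mode barely lowers the area-weighted speed relative to the observed lidar velocities, their concentration must be small, which is the logic behind the paper's conclusion about probe shattering artefacts.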