601 results for Estimators
Abstract:
The aim of this work was to evaluate the survival and the evolution of heights and basal areas of coppice regrowth of Populus spp. clones of different provenances planted on typical Argiudolls of the southern edge of the Pampa Ondulada, Buenos Aires, Argentina (34°55' S; 57°57' W; 15 m a.s.l.). The clones evaluated were ‘Delta Gold’, ‘Stoneville 71’, ‘Catfish 2’, ‘Harvard’, ‘Onda’ and ‘I-74/51’. For the set of clones, performance in the first and second cuttings was compared, and the clonal results at the second rotation were evaluated in terms of the mensuration values achieved. The annual values of mean individual basal area and mean total height observed from year 2 to year 8 were correlated, year by year, with those obtained at year 9 by means of a linear model. A prevalence of the clones of United States provenance was observed. The heights achieved at the first rotation were significantly greater than those of the second rotation, whereas the basal area values were greater in the second harvest than in the first. The correlation coefficients were significant from the fourth year onwards; this early relationship would allow anticipated selection on the growth parameters for the coppice regime.
Abstract:
We present a 15 kyr sea surface temperature (SST) record for a high sedimentation rate core (KNR51-29GGC) from the Feni Drift off Ireland, based on an organic geochemical technique for paleotemperature estimation, U37K'. We compare the U37K' temperature record to planktonic foraminiferal δ18O and foraminiferal assemblage SST estimates from the same sample horizons. U37K' gives SST estimates of 13°C for the early deglacial and 18°C for the Holocene and Recent, whereas assemblages give estimates of 9°C and 13°C, respectively. As in nearby core V23-81, we find Ash Zone 1, the Younger Dryas increase in Neogloboquadrina pachyderma sinistral abundance, and maximum abundance of this species during glaciation. N. pachyderma dextral oxygen isotopic analyses have a late glacial to interglacial range of 1.5 per mil. A reduction of about 1 per mil in δ18O occurred at about 12 ka, whereas U37K' and the foraminiferal fauna indicate a 2°C warming. This implies a 0.9 per mil salinity effect on δ18O, which we attribute to meltwater freshening. All three parameters indicate cooling during the Younger Dryas. U37K' SST estimates show that the major shift from deglacial to interglacial temperatures occurred after the Younger Dryas, in Termination Ib, in contrast to the assemblage data, which show this jump in SST at the end of the glaciation during Termination Ia. Differences between the two SST estimators, which may result from their different (floral versus faunal) sources, are more pronounced between Terminations Ia and Ib. This may reflect different habitats under the unusual sea surface conditions of the deglaciation.
Abstract:
Lichens, symbiotic associations of fungi (mycobionts) and green algae or cyanobacteria (photobionts), are poikilohydric organisms that are particularly well adapted to withstand adverse environmental conditions. Terrestrial ecosystems of the Antarctic are therefore largely dominated by lichens. The effects of global climate change are especially pronounced in the maritime Antarctic and it may be assumed that the lichen vegetation will profoundly change in the future. The genetic diversity of populations is closely correlated to their ability to adapt to changing environmental conditions and to their future evolutionary potential. In this study, we present evidence for low genetic diversity in Antarctic mycobiont and photobiont populations of the widespread lichen Cetraria aculeata. We compared between 110 and 219 DNA sequences from each of three gene loci for each symbiont. A total of 222 individuals from three Antarctic and nine antiboreal, temperate and Arctic populations were investigated. The mycobiont diversity is highest in Arctic populations, while the photobionts are most diverse in temperate regions. Photobiont diversity decreases significantly towards the Antarctic but less markedly towards the Arctic, indicating that ecological factors play a minor role in determining the diversity of Antarctic photobiont populations. Richness estimators calculated for the four geographical regions suggest that the low genetic diversity of Antarctic populations is not a sampling artefact. Cetraria aculeata appears to have diversified in the Arctic and subsequently expanded its range into the Southern Hemisphere. The reduced genetic diversity in the Antarctic is most likely due to founder effects during long-distance colonization.
Abstract:
The decomposition technique introduced by Blinder (1973) and Oaxaca (1973) is widely used to study outcome differences between groups. For example, the technique is commonly applied to the analysis of the gender wage gap. However, despite the procedure's frequent use, very little attention has been paid to the issue of estimating the sampling variances of the decomposition components. We therefore suggest an approach that introduces consistent variance estimators for several variants of the decomposition. The accuracy of the new estimators under ideal conditions is illustrated with the results of a Monte Carlo simulation. As a second check, the estimators are compared to bootstrap results obtained using real data. In contrast to previously proposed statistics, the new method takes into account the extra variation imposed by stochastic regressors.
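The twofold decomposition itself is simple to compute. A minimal sketch with a single hypothetical regressor and simulated data (the paper's contribution, the variance of the components, is not computed here):

```python
import random

def ols(x, y):
    # one-regressor OLS with intercept: y = a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b, mx, my

def oaxaca_blinder(xA, yA, xB, yB):
    """Twofold decomposition of mean(yA) - mean(yB), with group B's
    coefficients as the reference wage structure."""
    aA, bA, mxA, myA = ols(xA, yA)
    aB, bB, mxB, myB = ols(xB, yB)
    gap = myA - myB
    explained = bB * (mxA - mxB)                # endowments part
    unexplained = (aA - aB) + mxA * (bA - bB)   # coefficients part
    return gap, explained, unexplained

random.seed(0)
xA = [random.gauss(12, 2) for _ in range(500)]   # e.g. years of schooling, group A
xB = [random.gauss(11, 2) for _ in range(500)]   # group B
yA = [1.0 + 0.08 * x + random.gauss(0, 0.1) for x in xA]   # simulated log wages
yB = [0.9 + 0.07 * x + random.gauss(0, 0.1) for x in xB]
gap, expl, unexpl = oaxaca_blinder(xA, yA, xB, yB)
```

Because OLS with an intercept passes through the group means, the two components sum exactly to the raw gap.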
Abstract:
In the present global era, in which firms choose the location of their plants beyond national borders, location characteristics are important for attracting multinational enterprises (MNEs). Better access to countries with large markets is clearly attractive for MNEs. For example, special tariff treatments such as the Generalized System of Preferences (GSP) are beneficial for MNEs whose home country does not have such treatments. Not only such country characteristics but also region characteristics (i.e. province-level or city-level ones) matter, particularly when location characteristics differ widely between a nation's regions. The existence of industrial concentration, that is, agglomeration, is a typical regional characteristic. It is with consideration of these country-level and region-level characteristics that MNEs decide their location abroad. A large number of academic studies have investigated in what kinds of countries MNEs locate, i.e. location choice analysis. Employing the usual new economic geography model (i.e. constant elasticity of substitution (CES) utility function, Dixit-Stiglitz monopolistic competition, and iceberg trade costs), the literature derives the profit function, whose coefficients are estimated using maximum likelihood procedures. Recent studies are as follows: Head, Ries, and Swenson (1999) for Japanese MNEs in the US; Belderbos and Carree (2002) for Japanese MNEs in China; Head and Mayer (2004) for Japanese MNEs in Europe; Disdier and Mayer (2004) for French MNEs in Europe; Castellani and Zanfei (2004) for large MNEs worldwide; Mayer, Mejean, and Nefussi (2007) for French MNEs worldwide; Crozet, Mayer, and Mucchielli (2004) for MNEs in France; and Basile, Castellani, and Zanfei (2008) for MNEs in Europe. At the present time, three main topics can be found in this literature. The first introduces various location elements as independent variables.
The above-mentioned new economic geography model usually yields the profit function, which is a function of market size, productive factor prices, price of intermediate goods, and trade costs. As a proxy for the price of intermediate goods, the measure of agglomeration is often used, particularly the number of manufacturing firms. Some studies employ more disaggregated numbers of manufacturing firms, such as the number of manufacturing firms with the same nationality as the firms choosing the location (e.g., Head et al., 1999; Crozet et al., 2004) or the number of firms belonging to the same firm group (e.g., Belderbos and Carree, 2002). As part of trade costs, some investment climate measures have been examined: free trade zones in the US (Head et al., 1999), special economic zones and opening coastal cities in China (Belderbos and Carree, 2002), and Objective 1 structural funds and cohesion funds in Europe (Basile et al., 2008). Second, the validity of proxy variables for location elements is further examined. Head and Mayer (2004) examine the validity of market potential on location choice. They propose the use of two measures: the Harris market potential index (Harris, 1954) and the Krugman-type index used in Redding and Venables (2004). The Harris-type index is simply the sum of distance-weighted real GDP. They employ the Krugman-type market potential index, which is directly derived from the new economic geography model, as it takes into account the extent of competition (i.e. price index) and is constructed using estimators of importing country dummy variables in the well-known gravity equation, as in Redding and Venables (2004). They find that "theory does not pay", in the sense that the Harris market potential outperforms Krugman's market potential in both the magnitude of its coefficient and the fit of the model to be estimated. The third topic explores the substitution of location by examining inclusive values in the nested-logit model. 
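The Harris-type market potential index discussed above is simply the sum of distance-weighted GDP. A minimal sketch with three hypothetical regions (the figures are illustrative, not actual data):

```python
def harris_market_potential(gdp, dist):
    """Harris (1954) index: MP_i = sum_j GDP_j / d(i, j).
    gdp: {region: real GDP}; dist: {(i, j): distance}, with d(i, i)
    an assumed internal distance for the region's own market."""
    return {i: sum(gdp[j] / dist[(i, j)] for j in gdp) for i in gdp}

# hypothetical regions: GDP levels and a symmetric distance matrix
gdp = {"A": 100.0, "B": 50.0, "C": 20.0}
dist = {("A", "A"): 1.0, ("A", "B"): 2.0, ("A", "C"): 4.0,
        ("B", "A"): 2.0, ("B", "B"): 1.0, ("B", "C"): 2.0,
        ("C", "A"): 4.0, ("C", "B"): 2.0, ("C", "C"): 1.0}
mp = harris_market_potential(gdp, dist)
```

The Krugman-type index replaces the raw GDP terms with estimated importer fixed effects from a gravity equation, so it additionally reflects the degree of competition.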
For example, using firm-level data on French investments both in France and abroad over the 1992-2002 period, Mayer et al. (2007) investigate the determinants of location choice and assess empirically whether the domestic economy has been losing attractiveness over the recent period. The estimated coefficient for the inclusive value is strongly significant and near unity, indicating that the national economy is not different from the rest of the world in terms of substitution patterns. Similarly, Disdier and Mayer (2004) investigate whether French MNEs consider Western and Eastern Europe as two distinct groups of potential host countries by examining the coefficient for the inclusive value in nested-logit estimation. They confirm the relevance of an East-West structure in the country location decision and furthermore show that this relevance decreases over time. The purpose of this paper is to investigate the location choice of Japanese MNEs in Thailand, Cambodia, Laos, Myanmar, and Vietnam, and it is closely related to the third topic mentioned above. By examining region-level location choice with the nested-logit model, I investigate the relative importance of not only country characteristics but also region characteristics. Such an investigation is especially valuable in the case of these five countries: their industrialization remains immature and they have not yet succeeded in attracting enough MNEs, so crucial regional variations for MNEs are not yet expected within each nation, meaning that country characteristics are still relatively important for attracting MNEs. To illustrate, in the case of Cambodia and Laos, one of the crucial elements for Japanese MNEs would be that LDC preferential tariff schemes are available for exports from Cambodia and Laos.
On the other hand, in the case of Thailand and Vietnam, which have accepted a relatively large number of MNEs and thus seen regional inequality rise, regional characteristics such as the existence of agglomeration would become important elements in location choice. Our sample countries therefore seem to offer rich variation for analyzing the relative importance of country characteristics versus region characteristics. Our empirical strategy has a further advantage. As in the third topic in the location choice literature, the use of the nested-logit model enables us to examine substitution patterns between country-based and region-based location decisions by MNEs in the countries concerned. For example, it is possible to investigate empirically whether Japanese multinational firms consider Thailand/Vietnam and the other three countries as two distinct groups of potential host countries, by examining the inclusive value parameters in nested-logit estimation. In particular, our sample countries all experienced dramatic changes in, for example, economic growth or trade cost reduction during the sample period; thus, we can trace the dynamics of such substitution patterns. Our analysis of the relative importance of country characteristics and region characteristics is valuable from the viewpoint of policy implications. First, while the former should be improved mainly by the central government of each country, there is sometimes room for improvement of the latter even by local governments or smaller institutions such as private agencies. Consequently, it becomes important for these smaller institutions to know just how crucial the improvement of region characteristics is for attracting foreign companies. Second, as economies grow, country characteristics become similar among countries. For example, the LDC preferential tariff schemes are available only while a country is classified as less developed.
Therefore, it is particularly important for the least developed countries to know what kinds of regional characteristics become important as the economy grows; in other words, after their country characteristics become similar to those of the more developed countries. I also incorporate one important characteristic of MNEs, namely productivity. The well-known Helpman-Melitz-Yeaple model indicates that only firms with higher productivity can afford overseas entry (Helpman et al., 2004). Beyond this argument, there may be differences in MNEs' productivity among our sample countries and regions. Such differences are important from the viewpoint of "spillover effects" from MNEs, which are one of the most important gains for host countries in accepting their entry. The spillover effects arise because the presence of inward foreign direct investment (FDI) raises domestic firms' productivity through various channels such as imitation. Such positive effects might be larger in areas with more productive MNEs. Therefore, it becomes important for host countries to know how productive the firms investing in them are likely to be. The rest of this paper is organized as follows. Section 2 takes a brief look at the worldwide distribution of Japanese overseas affiliates. Section 3 provides an empirical model to examine their location choice, and lastly, we discuss future work to estimate our model.
Abstract:
The study of the reliability of components and systems is of great importance in several fields of engineering, and very particularly in computer science. When analysing the lifetimes of the sample elements, one must take into account the elements that do not fail during the experiment, as well as those that fail from causes other than the one under study. New sampling schemes have therefore arisen to cover these cases. The most general of them, censored sampling, is the one considered in this work. In this scheme both the time until the component fails and the censoring time are random variables. Under the hypothesis that both times are exponentially distributed, Professor Hurt studied the asymptotic behaviour of the maximum likelihood estimator of the reliability function. Bayesian methods appear attractive for reliability studies because they incorporate into the analysis the prior information usually available in real problems. We have therefore considered two Bayesian estimators of the reliability of an exponential distribution: the mean and the mode of the posterior distribution. We have computed the asymptotic expansion of the mean, variance and mean squared error of both estimators when the censoring distribution is exponential. We have also obtained the asymptotic distribution of the estimators for the more general case in which the censoring distribution is Weibull. Two types of large-sample confidence intervals have been proposed for each estimator. The results have been compared with those of the maximum likelihood estimator and with those of two nonparametric estimators, the product-limit and the Bayesian one; one of our estimators showed superior behaviour.
Finally, we have verified by simulation that our estimators are robust against the assumed censoring distribution, and that one of the proposed confidence intervals is valid for small samples. This study has also confirmed the better behaviour of one of our estimators. SETTING OUT AND SUMMARY OF THE THESIS When we study the lifetime of components it is necessary to take into account the elements that do not fail during the experiment, as well as those that fail for reasons that it is desirable to exclude from consideration. The model of random censorship is very useful for analysing these data. In this model the time to failure and the censoring time are random variables. We obtain two Bayes estimators of the reliability function of an exponential distribution based on randomly censored data. We have calculated the asymptotic expansion of the mean, variance and mean squared error of both estimators when the censoring distribution is exponential. We have also obtained the asymptotic distribution of the estimators for the more general case of a Weibull censoring distribution. Two large-sample confidence bands have been proposed for each estimator. The results have been compared with those of the maximum likelihood estimator, and with those of two nonparametric estimators: product-limit and Bayesian. One of our estimators shows the best behaviour. Finally, we have shown by simulation that our estimators are robust against the assumed censoring distribution, and that one of our intervals performs well in small-sample situations.
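For reference, the maximum likelihood benchmark against which the Bayes estimators are compared is easy to sketch: under random censorship with exponential lifetimes, the MLE of the failure rate is the number of observed failures divided by the total time on test. A minimal simulation (the rates below are illustrative):

```python
import math
import random

def reliability_mle(times, censored, t):
    """MLE of R(t) = exp(-lambda_hat * t) for exponential lifetimes under
    random censorship: lambda_hat = observed failures / total time on test."""
    failures = sum(1 for c in censored if not c)
    lam_hat = failures / sum(times)
    return math.exp(-lam_hat * t)

# simulate randomly censored data: lifetimes and censoring times both exponential
rng = random.Random(3)
lam_true, lam_cens = 0.5, 0.2            # illustrative rates
times, censored = [], []
for _ in range(5000):
    t_fail = rng.expovariate(lam_true)
    t_cens = rng.expovariate(lam_cens)
    times.append(min(t_fail, t_cens))    # we observe whichever comes first
    censored.append(t_cens < t_fail)     # True if the unit was censored

R_hat = reliability_mle(times, censored, 1.0)   # estimate of R(1) = exp(-0.5)
```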
Abstract:
The optimum quality that can be asymptotically achieved in the estimation of a probability p using inverse binomial sampling is addressed. A general definition of quality is used in terms of the risk associated with a loss function that satisfies certain assumptions. It is shown that the limit superior of the risk for p asymptotically small has a minimum over all (possibly randomized) estimators. This minimum is achieved by certain non-randomized estimators. The model includes commonly used quality criteria as particular cases. Applications to the non-asymptotic regime are discussed considering specific loss functions, for which minimax estimators are derived.
Abstract:
Real-time monitoring of multimedia Quality of Experience is a critical task for providers of multimedia delivery services, from television broadcasters to IP content delivery networks or IPTV. Such scenarios require meaningful metrics that provide useful information to service providers and overcome the limitations of pure Quality of Service monitoring probes. However, most objective multimedia quality estimators, aimed at modelling the Mean Opinion Score, are difficult to apply to massive quality monitoring. We therefore propose a lightweight and scalable monitoring architecture called Qualitative Experience Monitoring (QuEM), based on detecting identifiable impairment events such as the ones reported by the customers of those services. We also carried out a subjective assessment test to validate the approach and calibrate the metrics. Preliminary results of this test support our approach.
Abstract:
This work explores the automatic recognition of physical activity intensity patterns from multi-axial accelerometry and heart rate signals. Data collection was carried out in free-living conditions and in three controlled gymnasium circuits, for a total of 179.80 h of data divided into sedentary situations (65.5%), light-to-moderate activity (17.6%) and vigorous exercise (16.9%). The proposed machine learning pipeline comprises the following steps: time-domain feature definition, standardization and PCA projection, unsupervised clustering (by k-means and GMM) and an HMM to account for long-term temporal trends. Performance was evaluated by 30 runs of a 10-fold cross-validation. Both the k-means and the GMM-based approaches yielded high overall accuracy (86.97% and 85.03%, respectively) and, given the imbalance of the dataset, meritorious F-measures (up to 77.88%) for non-sedentary cases. Classification errors tended to concentrate around transients, which limits their practical impact. Hence, we consider our proposal suitable for 24 h monitoring of physical activity in ambulatory scenarios and a first step towards intensity-specific energy expenditure estimators.
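A toy version of the unsupervised clustering step, reduced to 1-D k-means on a synthetic acceleration-magnitude feature (the real system clusters multi-dimensional PCA projections and adds a GMM and an HMM on top; all values below are illustrative):

```python
import random

def kmeans_1d(values, k, iters=50):
    """Plain k-means on scalar features, initialized at evenly spaced quantiles."""
    srt = sorted(values)
    centres = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each sample to the nearest centre
            clusters[min(range(k), key=lambda c: (v - centres[c]) ** 2)].append(v)
        # recompute centres; keep the old one if a cluster emptied
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# synthetic feature: mean acceleration magnitude per window, three intensity levels
rng = random.Random(1)
data = ([rng.gauss(0.10, 0.03) for _ in range(300)] +   # sedentary
        [rng.gauss(0.50, 0.05) for _ in range(100)] +   # light-to-moderate
        [rng.gauss(1.20, 0.10) for _ in range(80)])     # vigorous
centres = kmeans_1d(data, 3)   # one centre per intensity level
```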
Abstract:
With the emergence of problems that cannot be solved efficiently in time polynomial in the size of the input, Natural Computing arises as an alternative to classical computing. This discipline seeks either to use nature itself as a computing substrate or to simulate its behaviour in order to obtain better solutions to problems than those found by classical computing. Within Natural Computing, as a representation at the cellular level, Membrane Computing arises. The first abstraction of the membranes found in cells results in Transition P systems. These systems, which could be implemented in biological or electronic media, are the object of study of this thesis. First, the implementations carried out so far are reviewed, in order to focus on distributed implementations, which are the ones that can exploit the intrinsic parallelism and non-determinism of these systems. After a careful study of the current state of the stages that make up the evolution of the system, it is concluded that the distributions that seek a balance between the two stages (rule application and communication) are the ones that give the best results. To define these distributions it is necessary to define the system completely, together with each of the parts that influence its transition. In addition to the work of other researchers, and building on it, variations of the proxies and distribution architectures are introduced so that the dynamic behaviour of P systems is completely defined. From the static knowledge (the initial configuration) of the P system, membranes can be distributed over the processors of a cluster so as to obtain good evolution times, with the aim that the computation of the P system be carried out in the least possible time.
To carry out these distributions, the architectures (i.e. the connection schemes) of the cluster processors must be taken into account. The existence of 4 architectures makes the distribution process dependent on the architecture to be used and, therefore, although with significant similarities, the distribution algorithms must also be implemented 4 times. Although the proponents of the architectures have studied the optimal time of each one, the lack of existing distributions for these architectures has led this thesis to test all 4, in order to determine whether what happens in practice matches the theoretical studies. For the distribution itself there is no deterministic algorithm that achieves a distribution satisfying the needs of the architecture for an arbitrary P system. Therefore, given the complexity of this problem, the use of Natural Computing metaheuristics is proposed. First, Genetic Algorithms are proposed: some distribution can always be constructed and, on the premise that individuals improve with evolution, the distributions will also improve as these algorithms evolve, yielding times close to the theoretical optimum. For the architectures that preserve the tree topology of the P system, new representations and new crossover and mutation operators have been necessary. From a more detailed study of the membranes and of the communications between processors, it has been found that the total times used for the distribution can be improved and individualized for each membrane. The same algorithms have thus been tested again, obtaining other distributions that improve the times.
Likewise, Particle Swarm Optimization and Grammatical Evolution with grammar rewriting (a variant of Grammatical Evolution presented in this thesis) have been applied to the same task, obtaining other types of distributions and allowing a comparison of the architectures. Finally, the use of estimators for the application and communication times, together with the variations in the membrane tree topology that may occur non-deterministically as the P system evolves, make it necessary to monitor the system and, where necessary, to redistribute membranes among processors so as to keep obtaining reasonable evolution times. How, when and where these modifications and redistributions should be carried out, and how this recalculation can be performed, is explained. Abstract Natural Computing is becoming a useful alternative to classical computational models, since it is able to solve hard problems efficiently in polynomial time. This discipline is based on the behaviour of living organisms, using nature as a basis of computation or simulating natural behaviour to obtain better solutions to problems than those found by classical computational models. Membrane Computing is a subdiscipline of Natural Computing in which only the cellular representation and behaviour of nature is taken into account. Transition P Systems are the first abstract representation of the membranes belonging to cells. These systems, which can be implemented in biological organisms or in electronic devices, are the main topic studied in this thesis. The implementations developed in this field so far have been surveyed, with a focus on distributed implementations. Such distributions are particularly important since they can exploit the intrinsic parallelism and non-determinism of living cells, here restricted to membranes.
After a detailed survey of the current state of the art of membrane evolution and of the proposed algorithms, this work concludes that the best results are obtained using an equal assignment of communication and rule application inside the Transition P System architecture. In order to define such an optimal distribution, it is necessary to fully define the system and each of the elements that influence its transition. Some changes have been made to the work of other authors (load distribution architectures, proxy definitions, etc.) in order to completely define the dynamic behaviour of the Transition P System. Starting from the static representation (the initial configuration) of the Transition P System, membranes are distributed algorithmically over the physical processors of a cluster in order to obtain better evolution performance, so that the computation of the Transition P System is completed in as little time as possible. To build these distributions, the cluster architecture (its connection links) must be considered. The existence of 4 architectures makes the distribution process depend on the chosen architecture and, therefore, although with significant similarities, the distribution algorithms must be implemented 4 times. The authors who proposed these architectures studied the optimal time of each one; the absence of membrane distributions for them has led us to implement a dynamic distribution for all 4. The simulations performed in this work agree with the theoretical studies. There is no deterministic algorithm that produces a distribution meeting the needs of the architecture for an arbitrary Transition P System. Therefore, due to the complexity of the problem, the use of Natural Computing metaheuristics is proposed.
First, a Genetic Algorithm heuristic is proposed, since a distribution can be built on the premise that individuals improve along with evolution; as these individuals improve, the distributions improve as well, yielding computation times close to the theoretical optimum. For the architectures that preserve the tree topology of the Transition P System, it has been necessary to devise new representations of the individuals and new crossover and mutation operators. From a more detailed study of the membranes and of the communications among processors, it has been shown that the total time used for the distribution can be improved and individualized for each membrane. The same algorithms have thus been tested again, obtaining other distributions that improve the computation time. In the same way, Particle Swarm Optimization and Grammatical Evolution by rewriting grammars (a Grammatical Evolution variant presented in this thesis) have been used to solve the same distribution task; new types of distributions have been obtained, and a comparison of the architectures has been carried out. Finally, the use of estimators for the times of rule application and communication, together with the variations in the membrane tree topology that can occur non-deterministically as the Transition P System evolves, make it necessary to monitor the system and, if necessary, to perform a membrane redistribution over the processors in order to keep the evolution time reasonable. How, when and where to make these changes and redistributions, and how this recalculation can be performed, is explained.
Abstract:
Experimental methods based on single particle tracking (SPT) are being increasingly employed in the physical and biological sciences, where nanoscale objects are visualized with high temporal and spatial resolution. SPT can probe interactions between a particle and its environment but the price to be paid is the absence of ensemble averaging and a consequent lack of statistics. Here we address the benchmark question of how to accurately extract the diffusion constant of one single Brownian trajectory. We analyze a class of estimators based on weighted functionals of the square displacement. For a certain choice of the weight function these functionals provide the true ensemble averaged diffusion coefficient, with a precision that increases with the trajectory resolution.
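The simplest member of this class of estimators, the uniformly weighted mean squared increment, can be sketched as follows (a plain time average over one simulated trajectory, not the optimally weighted functional derived in the paper; the parameters are illustrative):

```python
import random

def simulate_brownian_1d(n_steps, D, dt, seed=1):
    """1-D Brownian trajectory with diffusion constant D and time step dt."""
    rng = random.Random(seed)
    sigma = (2.0 * D * dt) ** 0.5        # increment std dev: sqrt(2 D dt)
    x = [0.0]
    for _ in range(n_steps):
        x.append(x[-1] + rng.gauss(0.0, sigma))
    return x

def estimate_D(x, dt):
    """D_hat = <(x_{i+1} - x_i)^2> / (2 dt): time-averaged squared increment."""
    incs = [(b - a) ** 2 for a, b in zip(x, x[1:])]
    return sum(incs) / (2.0 * dt * len(incs))

traj = simulate_brownian_1d(10000, D=0.5, dt=0.01)
D_hat = estimate_D(traj, dt=0.01)       # single-trajectory estimate of D
```

For a single trajectory of n steps the relative scatter of this estimator decays like sqrt(2/n), which is the statistics problem the weighted functionals are designed to improve on.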
Abstract:
This paper presents the implementation of an adaptive philosophy to plane potential problems, using the direct boundary element method. After some considerations about the state of the art and a discussion of the standard approach features, the possibility of separately treating the modelling of variables and their interpolation through hierarchical shape functions is analysed. Then the proposed indicators and estimators are given, followed by a description of a small computer program written for an IBM PC. Finally, some examples show the kind of results to be expected.
Abstract:
Sequential estimation of the success probability p in inverse binomial sampling is considered in this paper. For any estimator p̂, its quality is measured by the risk associated with normalized loss functions of linear-linear or inverse-linear form. These functions are possibly asymmetric, with arbitrary slope parameters a and b for p̂ < p and p̂ > p, respectively. Interest in these functions is motivated by their significance and potential uses, which are briefly discussed. Estimators are given for which the risk has an asymptotic value as p→0, and which guarantee that, for any p∈(0,1), the risk is lower than its asymptotic value. This allows selecting the required number of successes, r, to meet a prescribed quality irrespective of the unknown p. In addition, the proposed estimators are shown to be approximately minimax when a/b does not deviate too much from 1, and asymptotically minimax as r→∞ when a=b.
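As a concrete illustration of inverse binomial sampling: trials are run until a fixed number r of successes is observed, and p is estimated from the random trial count N. The sketch below uses the classical nearly unbiased estimator p̂ = (r-1)/(N-1) (the textbook Haldane estimator, not the specific risk-optimal estimators proposed in this paper; the figures are illustrative):

```python
import random

def trials_until_r_successes(p, r, rng):
    """Run Bernoulli(p) trials until r successes occur; return the trial count N."""
    n = successes = 0
    while successes < r:
        n += 1
        if rng.random() < p:
            successes += 1
    return n

rng = random.Random(42)
p_true, r = 0.05, 50
# Haldane estimator p_hat = (r - 1) / (N - 1), averaged over repeated experiments
estimates = [(r - 1) / (trials_until_r_successes(p_true, r, rng) - 1)
             for _ in range(200)]
p_hat = sum(estimates) / len(estimates)
```

Note how fixing r (rather than the number of trials) keeps the relative precision of p̂ roughly constant as p becomes small, which is why the risk admits an asymptotic value as p→0.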
Abstract:
Evaluating the seismic hazard requires establishing a distribution of the seismic activity rate, irrespective of the methodology used in the evaluation. In practice, how that activity rate is established tends to be the main difference between the various evaluation methods. The traditional procedure relies on a seismogenic zonation and the Gutenberg-Richter (GR) hypothesis. Competing zonations are often compared looking only at the geometry of the zones, but the resulting activity rate is affected by both geometry and the values assigned to the GR parameters. Contour plots can be used for conducting more meaningful comparisons, providing the GR parameters are suitably normalised. More recent approaches for establishing the seismic activity rate forego the use of zones and GR statistics and special attention is paid here to such procedures. The paper presents comparisons between the local activity rates that result for the complete Iberian Peninsula using kernel estimators as well as two seismogenic zonations. It is concluded that the smooth variation of the seismic activity rate produced by zoneless methods is more realistic than the stepwise changes associated with zoned approaches; moreover, the choice of zonation often has a stronger influence on the results than its fairly subjective origin would warrant. It is also observed that the activity rate derived from the kernel approach, related with the GR parameter “a”, is qualitatively consistent with the epicentres in the catalogue. Finally, when comparing alternative zonations it is not just their geometry but the distribution of activity rate that should be compared.
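A minimal sketch of the zoneless (kernel) approach: the activity rate at any grid point is the sum of Gaussian kernels centred on the catalogue epicentres. The coordinates and bandwidth below are illustrative; real applications additionally weight events by magnitude and completeness period:

```python
import math

def kernel_activity_rate(epicentres, grid_point, bandwidth):
    """Smoothed activity rate at grid_point: sum of 2-D Gaussian kernels
    (fixed bandwidth h) centred on the catalogue epicentres."""
    x0, y0 = grid_point
    h = bandwidth
    total = 0.0
    for (x, y) in epicentres:
        d2 = (x - x0) ** 2 + (y - y0) ** 2
        total += math.exp(-d2 / (2.0 * h * h)) / (2.0 * math.pi * h * h)
    return total

# toy catalogue of epicentres (degrees, illustrative only)
events = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0)]
rate_near = kernel_activity_rate(events, (0.05, 0.0), bandwidth=0.2)
rate_far = kernel_activity_rate(events, (3.0, 3.0), bandwidth=0.2)
```

Evaluated over a grid, this yields the smooth spatial variation of activity rate that the zoned (GR-based) approaches replace with stepwise changes.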