Abstract:
Most models in classical statistics rest on an assumption about the distribution of the data, or about a distribution underlying the data. The validity of this assumption is what allows inference to be carried out, confidence intervals to be constructed, or the reliability of the model to be tested. Goodness-of-fit testing aims to verify that this assumption is consistent with the available data. In this thesis, we propose goodness-of-fit tests for normality in the setting of univariate and vector time series. We restrict ourselves to a class of linear time series, namely autoregressive moving-average models (ARMA, or VARMA in the vector case). First, in the univariate case, we propose a generalization of the work of Ducharme and Lafaye de Micheaux (2004) to the case where the mean is unknown and estimated. We estimate the parameters by a method rarely used in the literature and yet asymptotically efficient. Indeed, we rigorously show that the estimator proposed by Brockwell and Davis (1991, Section 10.8) converges almost surely to the true unknown value of the parameter. Moreover, we provide a rigorous proof of the invertibility of the variance-covariance matrix of the test statistic, based on certain properties of linear algebra. The result also applies to the case where the mean is assumed known and equal to zero. Finally, we propose an AIC-type method for selecting the dimension of the family of alternatives, and we study the asymptotic properties of this method. The tool proposed here is based on a specific family of orthogonal polynomials, namely the Legendre polynomials. Second, in the vector case, we propose a goodness-of-fit test for autoregressive moving-average models with a structured parameterization.
The structured parameterization makes it possible to reduce the large number of parameters in these models, or to take particular constraints into account. This project includes the standard case of no parameterization. The proposed test applies to an arbitrary family of orthogonal functions. We illustrate this in the particular cases of the Legendre and Hermite polynomials. In the particular case of the Hermite polynomials, we show that the resulting test is invariant under affine transformations and is in fact a generalization of many existing tests in the literature. This project can be seen as a generalization of the first in three directions: from the univariate to the multivariate setting; the choice of an arbitrary family of orthogonal functions; and the possibility of specifying relations or constraints in the VARMA formulation. In each project we carried out a simulation study to assess the level and power of the proposed tests and to compare them with existing tests. Applications to real data are also provided. We applied the tests to the forecasting of the annual mean global temperature (univariate) and to Canadian labour-market data (bivariate). This work has been presented at several conferences (see, e.g., Tagne, Duchesne and Lafaye de Micheaux (2013a, 2013b, 2014) for details). An article based on the first project has also been submitted to a peer-reviewed journal (see Duchesne, Lafaye de Micheaux and Tagne (2016)).
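The univariate procedure is a Neyman-type smooth test built from Legendre polynomials. As a rough illustration of that general idea (not the authors' exact statistic, which must also account for ARMA parameter estimation and the chosen family of alternatives), the following sketch computes a smooth test statistic for normality on a plain i.i.d. sample:

```python
import numpy as np
from numpy.polynomial import legendre
from scipy import stats

def smooth_normality_statistic(x, K=4):
    """Neyman-type smooth test statistic for normality: standardize the
    data, apply the probability integral transform under the fitted
    normal, then measure departures from uniformity in the first K
    orthonormal Legendre directions. With known parameters and i.i.d.
    data the statistic is asymptotically chi-squared with K degrees of
    freedom; estimated parameters change the limiting law."""
    x = np.asarray(x, dtype=float)
    n = x.size
    u = stats.norm.cdf((x - x.mean()) / x.std(ddof=1))
    t = 2.0 * u - 1.0  # map (0, 1) onto (-1, 1), the Legendre domain
    stat = 0.0
    for k in range(1, K + 1):
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0
        # phi_k(u) = sqrt(2k + 1) * P_k(2u - 1) is orthonormal on (0, 1)
        phi = np.sqrt(2.0 * k + 1.0) * legendre.legval(t, coeffs)
        stat += phi.mean() ** 2
    return n * stat
```

Large values of the statistic relative to the chi-squared reference point to non-normality; the AIC-type rule described above selects K from the data instead of fixing it.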
Abstract:
Abundant and diverse polycystine radiolarian faunas from ODP Leg 181, Site 1123 (0-1.2 Ma at ~21 kyr resolution) and Site 1124 (0-0.6 Ma, ~5 kyr resolution, with a disconformity between 0.42-0.22 Ma) have been used to infer Pleistocene-Holocene paleoceanographic changes north of the Subtropical Front (STF), offshore eastern New Zealand, southwest Pacific. The abundance of warm-water taxa relative to cool-water taxa was used to determine a radiolarian paleotemperature index, the Subtropical (ST) Index. ST Index variations show strong covariance with benthic foraminifera oxygen isotope records from Site 1123 and exhibit similar patterns through Glacial-Interglacial (G-I) cycles of marine isotope stages (MIS) 15-1. At Site 1123, warm-water taxa peak in abundance during Interglacials (reaching ~8% of the total fauna). Within Glacials, cool-water taxa increase to ~15% of the fauna (MIS 2). Changes in radiolarian assemblages at Site 1124 indicate similar but much better-resolved trends through MIS 15-12 and 7-1. Pronounced increases in warm-water taxa occur at the onset of Interglacials (reaching ~15% of the fauna), whereas the abundance of cool-water taxa increases in Glacials, peaking in MIS 2 (~17% of the fauna). Overall warmer conditions at Site 1124 during the last 600 kyr indicate sustained influence of the subtropical, warm East Cape Current (ECC). During Interglacials, radiolarian assemblages suggest an increase in marine productivity at both sites, which might be due to predominance of micronutrient-rich Subtropical Water. At Site 1123, an increased abundance of deep-dwelling taxa in MIS 13 and 9 suggests enhanced vertical mixing. During Glacials, ECC flow weakened and cool, micronutrient-poor Subantarctic Water expanded northward. Only at Site 1123 is there evidence of a latitudinal shift of the STF, which reached as far north as 41°S.
Abstract:
Almost half of the 4822 described beeflies in the world belong to the subfamily Anthracinae, with most of the diversity found in three cosmopolitan tribes: Villini, Anthracini, and Exoprosopini. The Australian Exoprosopini previously contained three genera, Ligyra Newman, Pseudopenthes Roberts and Exoprosopa Macquart. Pseudopenthes is an Australian endemic, with two species including Ps. hesperis, sp. nov. from Western Australia. Two new species of the exoprosopine Atrichochira Hesse, Atr. commoni, sp. nov. and Atr. paramonovi, sp. nov., are also described from Australia, extending the generic distribution from Africa. Cladistic analysis clarified the phylogenetic relationships between the recognised groups of the Exoprosopini and determined generic limits on a world scale. Inclusion of 18 Australian exoprosopines placed the Australian species in the context of the world fauna. The Exoprosopini contains six large groups. The basal group I contains species previously included in Exoprosopa, to which the name Defilippia Lioy is applied. Group II contains Heteralonia Rondani, Atrichochira, Micomitra Bowden, Pseudopenthes, and Diatropomma Bowden. Colossoptera Hull is newly synonymised with Heteralonia. Group III is a paraphyletic assemblage of Pterobates Bezzi and Exoprosopa including the Australian Ex. sylvana (Fabricius). Ligyra is paraphyletic, forming two well-separated clades. The African clade is described as Euligyra Lambkin, gen. nov., which, together with Litorhina Bezzi and Hyperalonia Rondani, forms group IV. The Australian group V is true Ligyra. The remaining monophyletic lineage of exoprosopines, group VI, the Balaana-group of genera, shows evidence of an evolutionary radiation of beeflies in semi-arid Australia. Phylogenetic analysis of all 42 species of the Balaana-group of genera formed a basis for delimiting genera. Seven new genera are described by Lambkin & Yeates: Balaana, Kapua, Larrpana, Munjua, Muwarna, Palirika and Wurda.
Four non-Australian species belong to Balaana. Thirty-two new Australian species are described: Bal. abscondita, Bal. bicuspis, Bal. centrosa, Bal. gigantea, Bal. kingcascadensis, K. corusca, K. irwini, K. westralica, Lar. collessi, Lar. zwicki, Mun. erugata, Mun. lepidokingi, Mun. paralutea, Mun. trigona, Muw. vitreilinearis, Pa. anaxios, Pa. basilikos, Pa. blackdownensis, Pa. bouchardi, Pa. cyanea, Pa. danielsi, Pa. decora, Pa. viridula, Pa. whyalla, W. emu, W. impatientis, W. montebelloensis, W. norrisi, W. patrellia, W. skevingtoni, W. windorah, and W. wyperfeldensis. The following new combinations are proposed: from Colossoptera: Heteralonia latipennis (Brunetti); from Exoprosopa: Bal. grandis (Pallas), Bal. efflatounbeyi (Paramonov), Bal. latelimbata (Bigot), Bal. obliquebifasciata (Macquart), Bal. tamerlan (Portschinsky), Bal. onusta (Walker), Def. busiris (Jaennicke), Def. efflatouni (Bezzi), Def. eritreae (Greathead), Def. gentilis (Bezzi), Def. luteicosta (Bezzi), Def. minos (Meigen), Def. nigrifimbriata (Hesse), Def. rubescens (Bezzi), K. adelaidica (Macquart), Lar. dimidiatipennis (Bowden), Muw. stellifera (Walker), and Pa. marginicollis (Gray); from Ligyra: Eu. enderleini (Paramonov), Eu. mars (Bezzi), Eu. monacha (Klug), Eu. paris (Bezzi), Eu. sisyphus (Fabricius), and Eu. venus (Karsch).
Abstract:
The effect of the tumour-forming disease, fibropapillomatosis, on the somatic growth dynamics of green turtles resident in the Pala'au foraging grounds (Moloka'i, Hawai'i) was evaluated using a Bayesian generalised additive mixed modelling approach. This regression model enabled us to account for fixed effects (fibropapilloma tumour severity), nonlinear covariate functional form (carapace size, sampling year), as well as random effects due to individual heterogeneity and correlation between repeated growth measurements on some turtles. Somatic growth rates were found to be nonlinear functions of carapace size and sampling year but were not a function of low-to-moderate tumour severity. On the other hand, growth rates were significantly lower for turtles with advanced fibropapillomatosis, which suggests a limited or threshold-specific disease effect. However, tumour severity was an increasing function of carapace size: larger turtles tended to have higher tumour severity scores, presumably due to longer exposure of larger (older) turtles to the factors that cause the disease. Hence turtles with advanced fibropapillomatosis tended to be the larger turtles, which confounds size and tumour severity in this study. At the same time, somatic growth rates for the Pala'au population have also declined since the mid-1980s (the sampling year effect), while disease prevalence and severity increased from the mid-1980s before levelling off by the mid-1990s. It is unlikely that this decline was related to the increasing tumour severity, because growth rates have also declined over the last 10-20 years for other green turtle populations resident in Hawaiian waters that have low or no disease prevalence. The declining somatic growth rate trends evident in the Hawaiian stock are more likely a density-dependent effect caused by a dramatic increase in abundance of this once seriously depleted stock since the mid-1980s.
So despite increasing fibropapillomatosis risk over the last 20 years, only a limited effect on somatic growth dynamics was apparent and the Hawaiian green turtle stock continues to increase in abundance.
Abstract:
We have used the Two-Degree Field (2dF) instrument on the Anglo-Australian Telescope (AAT) to obtain redshifts of a sample of z < 3 and 18.0 < g < 21.85 quasars selected from Sloan Digital Sky Survey (SDSS) imaging. These data are part of a larger joint programme between the SDSS and 2dF communities to obtain spectra of faint quasars and luminous red galaxies, namely the 2dF-SDSS LRG and QSO (2SLAQ) Survey. We describe the quasar selection algorithm and present the resulting number counts and luminosity function of 5645 quasars in 105.7 deg². The bright-end number counts and luminosity functions agree well with determinations from the 2dF QSO Redshift Survey (2QZ) data to g ≈ 20.2. However, at the faint end, the 2SLAQ number counts and luminosity functions are steeper (i.e. require more faint quasars) than the final 2QZ results from Croom et al., but are consistent with the preliminary 2QZ results from Boyle et al. Using the functional form adopted for the 2QZ analysis (a double power law with pure luminosity evolution characterized by a second-order polynomial in redshift), we find a faint-end slope of β = -1.78 ± 0.03 if we allow all of the parameters to vary, and β = -1.45 ± 0.03 if we allow only the faint-end slope and normalization to vary (holding all other parameters equal to the final 2QZ values). Over the magnitude range covered by the 2SLAQ survey, our maximum-likelihood fit to the data yields 32 per cent more quasars than the final 2QZ parametrization, but is not inconsistent with other g > 21 deep surveys for quasars. The 2SLAQ data exhibit no well-defined 'break' in the number counts or luminosity function, but do clearly flatten with increasing magnitude. Finally, we find that the shape of the quasar luminosity function derived from 2SLAQ is in good agreement with that derived from Type I quasars found in hard X-ray surveys.
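The double-power-law luminosity function with pure luminosity evolution mentioned above can be written down directly. The parameter values below are illustrative placeholders, not the fitted 2SLAQ/2QZ values:

```python
import numpy as np

def qso_lf(M, z, phi_star=1e-6, M_star0=-22.5, alpha=-3.3, beta=-1.45,
           k1=1.36, k2=-0.27):
    """Double power law with pure luminosity evolution, the functional
    form adopted for the 2QZ analysis: the break magnitude M*(z)
    evolves as a second-order polynomial in redshift while the shape
    (bright-end slope alpha, faint-end slope beta) stays fixed.
    All default parameter values are placeholders for illustration."""
    M_star = M_star0 - 2.5 * (k1 * z + k2 * z ** 2)
    dm = M - M_star
    return phi_star / (10 ** (0.4 * (alpha + 1) * dm)
                       + 10 ** (0.4 * (beta + 1) * dm))
```

Brighter than the break the first term dominates (slope alpha); fainter than the break the second term dominates (slope beta), which is the parameter the survey constrains at the faint end.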
Abstract:
A new diffusion and flow model is presented to describe the behavior of hydrocarbon vapors in activated carbon. The micro/mesopore size distribution (PSD) is obtained according to Do's method, which consists of two sequential processes of pore layering and pore filling. The model uses the micro/meso PSD obtained from each adsorbate equilibrium isotherm, which reflects the dynamic behavior of adsorbing molecules through the solid. The initial rise in total permeability is mainly attributed to adsorbed-phase diffusion (that is, surface diffusion), whereas the decrease beyond a reduced pressure of about 0.9 is attributed to the reduction of pore space available for gas-phase diffusion and flow. A functional form of surface diffusivity is proposed and validated with experimental data. The model predicts the permeability of condensable hydrocarbon vapors in activated carbon well.
Abstract:
Purpose – The data used in this study cover the period 1980-2000. Almost midway through this period (in 1992), the Kenyan government liberalized the sugar industry: the role of the market increased, while the government's role with respect to control of prices, imports and other aspects of the sector declined. This exposed the local sugar manufacturers to external competition from other sugar producers, especially from the COMESA region. This study aims to find whether there were any changes in efficiency of production between the two periods (pre- and post-liberalization). Design/methodology/approach – The study utilized two methodologies for efficiency estimation: data envelopment analysis (DEA) and the stochastic frontier. DEA uses mathematical programming techniques and does not impose any functional form on the data. However, it attributes all deviation from the frontier to inefficiency. The stochastic frontier utilizes econometric techniques. Findings – The test for structural differences between the two periods does not show any statistically significant differences. However, both methodologies show a decline in efficiency levels from 1992, with the lowest level experienced in 1998. From then on, efficiency levels began to increase. Originality/value – To the best of the authors' knowledge, this is the first paper to use both methodologies in the sugar industry in Kenya. It is shown that in industries where the noise (error) term is minimal (such as manufacturing), the DEA and stochastic frontier approaches give similar results.
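The DEA side of such a comparison solves one linear programme per decision-making unit (DMU). The abstract does not state which DEA model was used, so the classic input-oriented, constant-returns-to-scale (CCR) formulation is sketched here as an assumption:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores via linear programming.
    X: (n, m) array of inputs, Y: (n, s) array of outputs for n DMUs.
    For each DMU o, minimize theta subject to
        sum_j lam_j * x_j <= theta * x_o   (inputs of the composite)
        sum_j lam_j * y_j >= y_o           (outputs of the composite)
        lam_j >= 0.
    A score of 1 means the DMU lies on the frontier. Minimal sketch;
    applied studies would use dedicated DEA software."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # decision vars: [theta, lam]
        A_in = np.c_[-X[o].reshape(m, 1), X.T]      # lam'x_i - theta*x_io <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]       # -lam'y_r <= -y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)
```

For example, with two single-input, single-output DMUs where the second uses twice the input for the same output, the second DMU scores 0.5: it could contract its input to half and still produce its output.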
Abstract:
This article uses a semiparametric smooth coefficient model (SPSCM) to estimate TFP growth and its components (scale and technical change). The SPSCM is derived from a nonparametric specification of the production technology represented by an input distance function (IDF), using a growth formulation. The functional coefficients of the SPSCM come naturally from the model and are fully flexible in the sense that no functional form of the underlying production technology is used to derive them. Another advantage of the SPSCM is that it can estimate bias (input and scale) in technical change in a fully flexible manner. We also used a translog IDF framework to estimate TFP growth components. A panel of U.S. electricity generating plants for the period 1986–1998 is used for this purpose. Comparing estimated TFP growth results from both parametric and semiparametric models against the Divisia TFP growth, we conclude that the SPSCM performs the best in tracking the temporal behavior of TFP growth.
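The Divisia TFP growth benchmark used above can be approximated in discrete time by a Törnqvist-type index: output growth minus cost-share-weighted input growth, all in log differences. A minimal sketch (the article's exact implementation may differ, e.g. in how shares are averaged across periods):

```python
import numpy as np

def divisia_tfp_growth(dln_y, dln_x, cost_shares):
    """Discrete approximation to Divisia TFP growth between two
    periods: output log-growth minus the cost-share-weighted sum of
    input log-growths. cost_shares should sum to one (in a Tornqvist
    index they would be the shares averaged over the two periods)."""
    dln_x = np.asarray(dln_x, dtype=float)
    s = np.asarray(cost_shares, dtype=float)
    return float(dln_y - np.sum(s * dln_x))
```

If output grows 5% while two equally weighted inputs grow 2% and 3%, TFP growth is 0.05 - 0.025 = 0.025, i.e. 2.5%.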
Abstract:
This study presents some quantitative evidence, from a number of simulation experiments, on the accuracy of the productivity growth estimates derived from growth accounting (GA) and frontier-based methods (namely data envelopment analysis-, corrected ordinary least squares-, and stochastic frontier analysis-based Malmquist indices) under various conditions. These include the presence of technical inefficiency, measurement error, misspecification of the production function (for the GA and parametric approaches), and increased input and price volatility from one period to the next. The study finds that the frontier-based methods usually outperform GA, but the overall performance varies by experiment. Parametric approaches generally perform best when there is no functional form misspecification, but their accuracy diminishes greatly otherwise. The results also show that the deterministic approaches perform adequately even under conditions of (modest) measurement error; when measurement error becomes larger, the accuracy of all approaches (including stochastic approaches) deteriorates rapidly, to the point that their estimates could be considered unreliable for policy purposes.
Abstract:
Productivity at the macro level is a complex concept but also arguably the most appropriate measure of economic welfare. Currently, there is limited research available on the various approaches that can be used to measure it, and especially on the relative accuracy of those approaches. This thesis has two main objectives: firstly, to detail some of the most common productivity measurement approaches and assess their accuracy under a number of conditions; and secondly, to present an up-to-date application of productivity measurement and provide some guidance on selecting between sometimes conflicting productivity estimates. With regard to the first objective, the thesis provides a discussion of the issues specific to macro-level productivity measurement and of the strengths and weaknesses of the three main types of approaches available, namely index-number approaches (represented by Growth Accounting), non-parametric distance functions (DEA-based Malmquist indices) and parametric production functions (COLS- and SFA-based Malmquist indices). The accuracy of these approaches is assessed through simulation analysis, which yielded some interesting findings. Probably the most important were that deterministic approaches are quite accurate even when the data are moderately noisy, that no approach was accurate when noise was more extensive, that functional form misspecification has a severe negative effect on the accuracy of the parametric approaches, and finally that increased volatility in inputs and prices from one period to the next adversely affects all of the approaches examined. The application was based on the EU KLEMS (2008) dataset and revealed that the different approaches do in fact result in different productivity change estimates, at least for some of the countries assessed.
To assist researchers in selecting between conflicting estimates, a new three-step selection framework is proposed, based on the findings of the simulation analyses and on established diagnostics/indicators. An application of this framework is also provided, based on the EU KLEMS dataset.
Abstract:
This paper proposes a constrained nonparametric method of estimating an input distance function. A regression function is estimated via kernel methods without functional form assumptions. To guarantee that the estimated input distance function satisfies its theoretical properties, monotonicity constraints are imposed on the regression surface via the constraint weighted bootstrapping method borrowed from the statistics literature. The first, second, and cross partial analytical derivatives of the estimated input distance function are derived, so the elasticities measuring input substitutability can be computed from them. The method is then applied to a cross-section of 3,249 Norwegian timber producers.
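The unconstrained kernel step of such a procedure can be sketched as a Nadaraya-Watson estimator; the monotonicity constraints are then imposed by re-weighting the observations (constraint weighted bootstrapping), which is omitted here for brevity:

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Gaussian-kernel Nadaraya-Watson regression estimate at the
    points in x_grid, given observations (x, y) and bandwidth h.
    This is only the unconstrained kernel step: the constrained method
    described in the text would further re-weight the observations
    until the fitted surface satisfies monotonicity."""
    # Kernel weights: one row per evaluation point, one column per observation
    w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)
```

The estimate at each grid point is a locally weighted average of the responses, so no functional form is imposed on the regression surface; only the bandwidth h must be chosen.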
Abstract:
Aims. The large- and small-scale (pc) structure of the Galactic interstellar medium can be investigated by utilising spectra of early-type stellar probes of known distances in the same region of the sky. This paper determines the variation in line strength of Ca ii at 3933.661 Å as a function of probe separation for a large sample of stars, including a number of sightlines in the Magellanic Clouds.
Methods. FLAMES-GIRAFFE Ca ii spectra taken with the Very Large Telescope towards early-type stars in three Galactic and four Magellanic open clusters are used to obtain the velocity, equivalent width, column density, and line width of interstellar Galactic calcium for a total of 657 stars, of which 443 are Magellanic Cloud sightlines. Between 43 and 111 stars are observed in each cluster. Additionally, FEROS and UVES Ca ii K and Na i D spectra of 21 Galactic and 154 Magellanic early-type stars are presented and combined with data from the literature to study the relationship between calcium column density and parallax.
Results. For the four Magellanic clusters studied with FLAMES, the strength of the Galactic interstellar Ca ii K equivalent width on transverse scales from ∼0.05-9 pc is found to vary by factors of ∼1.8-3.0, corresponding to column density variations of ∼0.3-0.5 dex in the optically-thin approximation. Using FLAMES, FEROS, and UVES archive spectra, the minimum and maximum reduced equivalent widths for Milky Way gas are found to lie in the range ∼35-125 mÅ and ∼30-160 mÅ for Ca ii K and Na i D, respectively. The range is consistent with a previously published simple model of the interstellar medium consisting of spherical cloudlets of filling factor ∼0.3, although other geometries are not ruled out. Finally, the derived functional form for parallax (π) and Ca ii column density (N_CaII) is found to be π(mas) = 1 / (2.39 × 10⁻¹³ × N_CaII (cm⁻²) + 0.11). Our derived parallax is ∼25 per cent lower than predicted by Megier et al. (2009, A&A, 507, 833) at a distance of ∼100 pc and ∼15 per cent lower at a distance of ∼200 pc, reflecting inhomogeneity in the Ca ii distribution in the different sightlines studied.
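The derived parallax-column density relation is simple enough to evaluate directly; the helper function names below are our own:

```python
def parallax_mas(N_caii):
    """Parallax in milliarcseconds from the Ca ii column density
    N_CaII (in cm^-2), using the relation derived in the text:
    pi(mas) = 1 / (2.39e-13 * N_CaII + 0.11)."""
    return 1.0 / (2.39e-13 * N_caii + 0.11)

def distance_pc(N_caii):
    """Corresponding distance in parsecs, d = 1000 / pi(mas)."""
    return 1000.0 / parallax_mas(N_caii)
```

As expected for a column density that accumulates along the line of sight, parallax decreases (distance increases) monotonically with N_CaII.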
Abstract:
A non-linear least-squares methodology for simultaneously estimating parameters of selectivity curves with a pre-defined functional form, across size classes and mesh sizes, using catch size frequency distributions, was developed based on the models of Kirkwood and Walker [Kirkwood, G.P., Walker, T.I., 1986. Gill net selectivities for gummy shark, Mustelus antarcticus Gunther, taken in south-eastern Australian waters. Aust. J. Mar. Freshw. Res. 37, 689-697] and Wulff [Wulff, A., 1986. Mathematical model for selectivity of gill nets. Arch. Fish Wiss. 37, 101-106]. Observed catches of fish of size class l in mesh m are modeled as a function of the estimated numbers of fish of that size class in the population and the corresponding selectivities. A comparison was made with the maximum likelihood methodology of Kirkwood and Walker (1986) and Wulff (1986), using simulated catch data with known selectivity curve parameters, and two published data sets. The estimated parameters and selectivity curves were generally consistent for both methods, with smaller standard errors for parameters estimated by non-linear least squares. The proposed methodology is a useful and accessible alternative which can be used to model selectivity in situations where the parameters of a pre-defined model can be assumed to be functions of gear size, facilitating statistical evaluation of different models and of goodness of fit.
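The least-squares idea can be sketched as follows. For brevity, the population numbers at length are taken as known here, whereas the methodology described estimates them jointly with the selectivity parameters; the normal-shaped curve with mode and spread proportional to mesh size is one common choice, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_selectivity(lengths, meshes, catches, N, k0=(2.0, 0.2)):
    """Fit a normal-shaped gillnet selectivity curve whose mode (k1*m)
    and spread (k2*m) scale with mesh size m, by non-linear least
    squares on the observed catch matrix (size class x mesh size).
    Simplifying assumption: population numbers N at length are known;
    the actual methodology estimates them jointly. Returns (k1, k2)."""
    L, M = np.meshgrid(lengths, meshes, indexing="ij")
    def resid(k):
        k1, k2 = k
        s = np.exp(-((L - k1 * M) ** 2) / (2.0 * (k2 * M) ** 2))
        return (N[:, None] * s - catches).ravel()
    return least_squares(resid, x0=k0).x
```

On noiseless simulated catches generated with known (k1, k2), the fit recovers the generating parameters, mirroring the simulation check described in the abstract.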
Abstract:
The investigation of the large-scale structure of our Universe provides us with extremely powerful tools to shed light on some of the open issues of the currently accepted Standard Cosmological Model. Until recently, constraining cosmological parameters from cosmic voids was almost infeasible, because void catalogues were too small to ensure statistically relevant samples. The increasingly wide and deep fields of present and upcoming surveys have made cosmic voids promising probes, even though no unique, generally accepted definition of a void yet exists. In this Thesis we address the two-point statistics of cosmic voids, in the first attempt to model its features for cosmological purposes. To this end, we implement an improved version of the void power spectrum presented by Chan et al. (2014). We build up an exceptionally robust method to tackle the void clustering statistics, by proposing a functional form that is entirely based on first principles. We extract our data from a suite of high-resolution N-body simulations in both the LCDM and alternative modified gravity scenarios. To compare the data to the theory accurately, we calibrate the model by accounting for a free parameter in the void radius that enters the theory of void exclusion. We then constrain the cosmological parameters by means of a Bayesian analysis. As long as the modified gravity effects are limited, our model is a reliable method to constrain the main LCDM parameters. By contrast, it cannot be used to model void clustering in the presence of stronger modifications of gravity. In future work, we will develop our analysis of the void clustering statistics further, by testing our model on large, high-resolution simulations and on real data, and by addressing void clustering in the halo distribution. Finally, we also plan to combine these constraints with those of other cosmological probes.