950 results for Second- and third-order ionospheric effects


Relevance: 100.00%

Publisher:

Abstract:

Network formation within the BRITE-EURAM program is investigated. We describe the role of the hub of the network, which is defined as the set of main contractors that account for most of the participations. We study the effects that the conflict of objectives within European research funding, between pre-competitive research and European cohesion, has on the formation of networks and on the relationship between different partners of the network. A panel data set is constructed covering the second and third frameworks of the Brite-Euram program. A model of joint production of research results is used to test for changes in the behavior of partners within the two frameworks. The main findings are that participations are very concentrated, that is, a small group of institutions account for most of the participations, but going from the second to the third framework the presence of subcontractors and single participants increases substantially. This result is reinforced by the fact that main contractors receive smaller spill-ins within networks, but spill-ins increase from the second to the third framework.

Relevance: 100.00%

Publisher:

Abstract:

The aim of this study was to verify the viability of exclusive use of elephant grass pollen, Pennisetum purpureum (Schum), to feed larvae of the lacewing Chrysoperla externa (Hagen, 1861). The insects were kept at 24°C and the duration and survival rate of each instar and of the larval and pupal phases were recorded. The diet provided complete development of the larvae. The average duration of the first and second instars was the same (6.9 days), while the third instar lasted an average of 10.0 days and the pupal phase 13.2 days. The average survival of the larvae was above 80% for the first, second and third instars, and 70.0% and 33.3% for the larval and pupal phase, respectively. These results indicate that the exclusive use of elephant grass pollen can provide complete development of the immature stages of this predator.

Relevance: 100.00%

Publisher:

Abstract:

Soil chronofunctions are an alternative for the quantification of soil-forming processes and underlie the modeling of soil genesis. To establish soil chronofunctions for a Heilu soil profile on loess in Luochuan, selected soil properties and 14C ages spanning the Holocene were studied. Linear, logarithmic, and third-order polynomial functions were fitted to the relationships between soil properties and age. The results indicated that third-order polynomial functions fit best the relationships between clay (< 0.002 mm), silt (0.002-0.02 mm), and sand (0.02-2 mm) contents and soil age, and reflected the occurrence of an Ah horizon in the profile. The logarithmic function mainly described the variation of soil organic carbon and pH with time (soil age). The variations in CaCO3 content and in the Mn/Zr, Fe/Zr, K/Zr, Mg/Zr, Ca/Zr, P/Zr, and Na/Zr ratios with soil age were best described by third-order polynomial functions, whose trend lines showed the migration of CaCO3 and some elements.
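The model comparison described above (linear vs. higher-order polynomial chronofunctions) can be sketched with a least-squares fit; the clay/age values below are made-up stand-ins for the Heilu profile measurements, which are not reproduced here.

```python
import numpy as np

# Hypothetical (age in ka BP, clay %) pairs standing in for the Heilu
# profile data; a humped depth trend like the Ah horizon noted above.
ages = np.array([0.5, 1.0, 2.5, 4.0, 6.0, 8.0, 10.0])        # ka BP
clay = np.array([18.0, 19.5, 23.0, 24.5, 23.5, 21.0, 17.5])  # %

def fit_r2(x, y, degree):
    """Least-squares polynomial fit; return coefficients and R^2."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

_, r2_linear = fit_r2(ages, clay, 1)
_, r2_cubic = fit_r2(ages, clay, 3)
print(f"linear R^2 = {r2_linear:.3f}, cubic R^2 = {r2_cubic:.3f}")
```

A non-monotonic property such as clay content cannot be captured by a straight line, which is why the third-order polynomial wins this comparison.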

Relevance: 100.00%

Publisher:

Abstract:

Following recent technological advances, digital image archives have grown qualitatively and quantitatively to an unprecedented degree. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the masses of data captured. This question is at the root of this thesis: the problems of processing digital information at very high spatial and/or spectral resolution are addressed using statistical learning approaches, namely kernel methods. The thesis studies image classification problems, that is, the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this thesis is to stay close to the concrete problems of satellite image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, the work is deliberately transdisciplinary, maintaining a strong link between the two fields in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking of the variables (the bands) that is optimized jointly with the base model: in this way, only the variables relevant to solving the problem are used by the classifier.
The lack of labeled information and the uncertainty about its relevance to the problem motivate the next two models, based respectively on active learning and semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model proposed considers the more theoretical question of structure among the outputs: integrating this source of information, never before considered in remote sensing, opens new research challenges. Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and the simplicity of the proposed approaches, to avoid overly complex models that would not be adopted by users.
The major challenge of the thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information are the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.

Relevance: 100.00%

Publisher:

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps both in the asset price and in the variance process. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps both in the mean and in the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: which jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has that ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises naturally: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional one. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators based on bi- and three-dimensional unconditional characteristic functions on the simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to limitations on the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of parameters of stochastic volatility jump-diffusion models.
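The unconditional empirical characteristic function at the core of the estimator is simple to compute. The sketch below uses a plain Gaussian return model rather than the thesis's stochastic volatility jump-diffusion (whose characteristic function is the closed-form result of the first chapter, not reproduced here); only the model characteristic function would change, and an ECF-type estimator would minimize an integrated distance between the two curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-returns from a plain N(mu, sigma^2) model; illustrative
# stand-in for a stochastic volatility jump-diffusion sample path.
mu, sigma = 0.05, 0.2
x = rng.normal(mu, sigma, size=20_000)

u = np.linspace(-5.0, 5.0, 41)  # grid of characteristic-function arguments

# Unconditional empirical characteristic function: (1/n) sum_j exp(i u X_j)
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)

# Model characteristic function of N(mu, sigma^2)
cf_model = np.exp(1j * u * mu - 0.5 * (u * sigma) ** 2)

# An ECF estimator would minimize a weighted integral of |ECF - model CF|^2
# over the true parameters; here we only report the discrepancy on the grid.
err = np.max(np.abs(ecf - cf_model))
print(f"max |ECF - model CF| on grid: {err:.4f}")
```

With 20,000 draws the sampling error of the ECF is small, which is what makes matching it to the analytical characteristic function a workable estimation criterion.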

Relevance: 100.00%

Publisher:

Abstract:

The goal of this trial was to estimate the total (TDMI) and daily pasture (PDMI) dry matter intakes of lactating crossbred Holstein-Zebu cows grazing elephant grass (Pennisetum purpureum Schum.) paddocks submitted to different rest periods. Three groups of 24 cows were used during two years. The paddocks were grazed for three days at a stocking rate of 4.5 cows/ha. Treatments consisted of a resting period of 30 days without concentrate and resting periods of 30, 37.5 and 45 days with 2 kg/cow/day of a 20.6% crude protein concentrate. From July to October, pasture was supplemented with chopped sugarcane plus 1% urea. Total daily dry matter intake was estimated using the in vitro dry matter digestibility of extrusa samples and the fecal output measured with chromium oxide. Regardless of treatment, the estimated average TDMI was 2.7, 2.9 and 2.9±0.03% and the mean PDMI 1.9, 2.1 and 2.1±0.03% of body weight on the first, second and third grazing day, respectively (P<0.05). Only during the summer was pasture quality the same regardless of the grazing day. Sugarcane effectively replaced grazed pasture, mainly on the first day, when pasture dry matter intake was lowest.
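The marker-based intake estimate described above (fecal output from chromium oxide combined with in vitro digestibility) reduces to a one-line mass balance; the numbers below are illustrative only, not values from the trial.

```python
def dry_matter_intake(fecal_output_kg, digestibility):
    """Marker-based intake estimate: the indigestible fraction of the
    intake must equal the fecal dry matter output, i.e.
        intake * (1 - digestibility) = fecal_output.
    """
    if not 0.0 <= digestibility < 1.0:
        raise ValueError("digestibility must be in [0, 1)")
    return fecal_output_kg / (1.0 - digestibility)

# Illustrative values (not from the trial): a cow excreting 4.2 kg DM/day
# of feces on a diet with 65% in vitro dry matter digestibility.
intake = dry_matter_intake(fecal_output_kg=4.2, digestibility=0.65)
print(f"estimated intake: {intake:.1f} kg DM/day")  # 12.0 kg DM/day
```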

Relevance: 100.00%

Publisher:

Abstract:

The comparison of cancer prevalence with cancer mortality can lead, under some hypotheses, to an estimate of the registration rate. A method is proposed in which the cases with cancer as a cause of death are divided into 3 categories: (1) cases already known to the registry; (2) unknown cases having occurred before the registry's creation date; (3) unknown cases occurring while the registry operates. The estimate is then the number of cases in the first category divided by the total of those in categories 1 and 3 (only these are to be registered). An application is performed on the data of the Canton de Vaud. Survival rates from the Norwegian Cancer Registry are used to compute the numbers of unknown cases to be included in the second and third categories, respectively. The discussion focuses on the possible determinants of the obtained comprehensiveness rates for various cancer sites.
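The estimator just described is a simple ratio; the sketch below implements it with hypothetical counts, not the Canton de Vaud figures.

```python
def registration_rate(known_cases, unknown_during_operation):
    """Estimated comprehensiveness: category-1 deaths (already known to
    the registry) divided by all deaths the registry should have captured
    (categories 1 and 3); category-2 cases, diagnosed before the registry
    existed, are excluded from the denominator.
    """
    total_registrable = known_cases + unknown_during_operation
    if total_registrable == 0:
        raise ValueError("no registrable cases")
    return known_cases / total_registrable

# Hypothetical counts, not taken from the Canton de Vaud data:
rate = registration_rate(known_cases=180, unknown_during_operation=20)
print(f"estimated registration rate: {rate:.2%}")  # 90.00%
```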

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this report is to describe the major research activities during the period February 1, 1985 - October 30, 1986 for the Iowa Highway Research Board under the research contract entitled "Development of a Conductometric Test for Frost Resistance of Concrete." The objective of this research, as stated in the project proposal, is to develop a test method that can be performed reasonably rapidly in the laboratory and in the field to predict, with a high degree of certainty, the behavior of concrete subjected to alternate freezing and thawing. The work plan of the proposal stated that the early part of the first year would be devoted to the construction of testing equipment and the preparation of specimens, and the remainder of the year to the testing of specimens. It also stated that the second and third years would be devoted to the performance and refinement of tests, data analysis, preparation of suggested specifications, and tests covering variables which need to be studied, such as types of aggregates, fly ash replacements, and other admixtures. The objective of this report is to describe the progress made during the first 20 months of this project and to assess the significance of the results obtained thus far and the expected significance of the results obtainable during the third year of the project.

Relevance: 100.00%

Publisher:

Abstract:

This work describes a simulation tool being developed at UPC to predict the microwave nonlinear behavior of planar superconducting structures with very few restrictions on the geometry of the planar layout. The software is intended to be applicable to most structures used in planar HTS circuits, including line, patch, and quasi-lumped microstrip resonators. The tool combines Method of Moments (MoM) algorithms for general electromagnetic simulation with Harmonic Balance algorithms to take into account the nonlinearities in the HTS material. The Method of Moments code is based on discretization of the Electric Field Integral Equation in Rao, Wilton and Glisson Basis Functions. The multilayer dyadic Green's function is used with Sommerfeld integral formulation. The Harmonic Balance algorithm has been adapted to this application where the nonlinearity is distributed and where compatibility with the MoM algorithm is required. Tests of the algorithm in TM010 disk resonators agree with closed-form equations for both the fundamental and third-order intermodulation currents. Simulations of hairpin resonators show good qualitative agreement with previously published results, but it is found that a finer meshing would be necessary to get correct quantitative results. Possible improvements are suggested.
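The third-order intermodulation products used to validate the tool can be illustrated with a minimal two-tone test through a memoryless cubic nonlinearity. The tone frequencies and the cubic coefficient below are arbitrary stand-ins for the distributed HTS nonlinearity handled by the MoM/Harmonic Balance code; the point is only where the IM3 products land in the spectrum.

```python
import numpy as np

# Two-tone excitation through y = x + a*x^3, the textbook mechanism
# behind third-order intermodulation.
fs, n = 1024.0, 4096                      # sample rate (Hz), samples (4 s)
t = np.arange(n) / fs
f1, f2 = 100.0, 110.0                     # tones on exact FFT bins
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.05 * x**3                       # mild cubic nonlinearity

spec = np.abs(np.fft.rfft(y)) / n         # peak of amplitude A shows as A/2
freqs = np.fft.rfftfreq(n, 1 / fs)

def level(f):
    """Spectral magnitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

# Third-order products appear at 2*f1 - f2 (90 Hz) and 2*f2 - f1 (120 Hz),
# with amplitude (3/4)*a for unit tones, i.e. 0.0375 -> 0.01875 in spec.
print(level(2 * f1 - f2), level(2 * f2 - f1))
```

Because the IM3 products fall close to the carriers (90 and 120 Hz here), they cannot be filtered out, which is why they are the standard figure of merit for resonator nonlinearity.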

Relevance: 100.00%

Publisher:

Abstract:

The Manival near Grenoble (French Prealps) is a very active debris-flow torrent equipped with a large sediment trap (25,000 m3) protecting an urbanized alluvial fan from debris flows. We began monitoring the sediment budget of the catchment controlled by the trap in spring 2009. A terrestrial laser scanner is used for monitoring topographic changes in a small gully, the main channel, and the sediment trap. In the main channel, 39 cross-sections are surveyed after every event. Three periods of intense geomorphic activity are documented here. The first was induced by a convective storm in August 2009 which triggered a debris flow that deposited ~1,800 m3 of sediment in the trap. The debris flow originated in the upper reach of the main channel, and our observations showed that sediment outputs were entirely supplied by channel scouring. Hillslope debris flows were initiated on talus slopes, as revealed by terrestrial LiDAR resurveys; however, they were disconnected from the main channel. The second and third periods of geomorphic activity were induced by long-duration, low-intensity rainfall events in September and October 2009 which generated small flow events with intense bedload transport. These events contributed to recharging the debris-flow channel with sediment by depositing large gravel dunes propagating from the headwaters. The total recharge in the torrent from bedload transport events was estimated at 34% of the sediment erosion induced by the August debris flow.

Relevance: 100.00%

Publisher:

Abstract:

The educational experiences in the Early Childhood and Primary Education schools of our immediate surroundings are among the essential resources for training future teachers. Throughout the second and third years of the teacher-training degree, this reality constitutes our students' field of learning. In it they must carry out activities related to the theoretical knowledge acquired at the university. Within this framework, students are asked not only to observe expert professionals but also to act as teachers themselves, designing and carrying out various experiences with pupils aged six to twelve, which they must subsequently analyze and evaluate using different strategies.

Relevance: 100.00%

Publisher:

Abstract:

Based on a selection of 145 data points from steep mountain rivers (slope ≥ 1%), five expressions have been developed to determine the Darcy-Weisbach friction factor. The first expression is based on applying the Prandtl-Kármán semilogarithmic law for rough turbulent free-surface flow, which is a function of relative submergence (the ratio of mean depth to equivalent roughness). The second and third are corrections of the first for macro-rough flow, proposed by Thompson and Campbell (1979) and by Aguirre-Pe and Fuentes (1990), respectively. The fourth equation consists of a power of the relative submergence, while the fifth corrects the previous formula by incorporating a power of the slope, as advocated by Meunier (1989) and Rickenmann (1990). The derived expressions show a significant fit, considering the hydrometric limitations in steep, coarse-bed rivers. The power-law equations achieve a notably better fit than the semilogarithmic ones. Expressions that include modifications to the original equation (equations of the second, third, and fifth types above) were also found to have slightly better predictive capacity.
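The two families of resistance laws compared above can be sketched in a few lines. The coefficients below are classical textbook values (Keulegan-type constants for the semilogarithmic law, placeholder exponents for the power law), not the coefficients actually fitted in the study.

```python
import math

def f_semilog(submergence, a=6.25, b=5.75):
    """Darcy-Weisbach friction factor from a semilogarithmic
    (Prandtl-Karman type) resistance law:
        sqrt(8/f) = a + b * log10(d/ks),
    where d/ks is the relative submergence (mean depth over equivalent
    roughness). a and b are classical Keulegan-type values, NOT the
    coefficients fitted in the study.
    """
    c = a + b * math.log10(submergence)
    return 8.0 / c**2

def f_power(submergence, k=3.0, m=-1.0):
    """Power-law alternative f = k * (d/ks)**m; k and m are placeholder
    values, the fitted ones are in the paper."""
    return k * submergence**m

# Friction decreases as the flow submerges the roughness elements:
for ds in (1.0, 2.0, 5.0):
    print(f"d/ks = {ds}: f_semilog = {f_semilog(ds):.3f}, "
          f"f_power = {f_power(ds):.3f}")
```

At the low relative submergences typical of steep coarse-bed rivers (d/ks near 1), the semilogarithmic law becomes very sensitive to its constants, which is one reason the macro-rough corrections and power-law forms tend to fit such data better.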

Relevance: 100.00%

Publisher:

Abstract:

In this Master's thesis it is assumed that the fourth-generation mobile network is a seamless combination of the existing second- and third-generation wireless networks together with short-range WLAN and Bluetooth radio technologies. These technologies are also assumed to be so interoperable that the user does not notice a change of access network. The thesis presents the architecture and basic operating principles of the most important wireless technologies related to fourth-generation mobile networks. It describes various techniques and practices for measuring and collecting data. The transaction measurements obtained can be used for offering differentiated service levels and for optimizing network and service capacity. In addition, the thesis introduces the Internet Business Information Manager, a software framework for distributed data collection. The measurement data it collects can be used for service-level monitoring and reporting as well as for billing. In the practical part of the work, an agent monitoring wireless network traffic was to be developed to track quality of service. The agent would reside in a mobile phone, measuring network traffic. The agent could not be implemented, however, because the software environment proved inadequate. In any case, the work showed that there is a real need for agents collecting data from the user's point of view.

Relevance: 100.00%

Publisher:

Abstract:

Fourth-generation mobile networks seamlessly combine telecommunication networks, the Internet, and their services. Originally the Internet was used only from stationary computers, while traditional telecommunication networks provided voice and data services. Users of fourth-generation mobile networks can use both Internet-based services and the services of traditional telecommunication networks even while on the move. This Master's thesis presents a general architecture for a fourth-generation mobile network. The basic components of the architecture are described, and the architecture is compared with second- and third-generation mobile networks. Related Internet standards are introduced and their suitability for mobile networks is discussed. Short-range, high-speed wireless access network technologies are presented, as are the terminal and user mobility management methods required in fourth-generation mobile networks. The presented architecture is based on short-range, high-speed wireless access network technologies and on Internet standards. It enables connections to other users without knowledge of their current terminal or location. Internet services can be used anywhere within the fourth-generation mobile network. A general-purpose mobility management method for a single network domain is proposed; the method can be used together with the presented architecture.

Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVE: This article presents the results of the research behind the doctoral thesis defended by the author at the Universitat de Lleida (Spain), whose objective was to identify the professional competencies of nutritionists working in the field of sports nutrition. METHODS: Fourteen experts from Australia (n=1), Brazil (n=7), Spain (n=3) and the United States (n=3) were surveyed. The methodological tool used was the Delphi technique, consisting of three rounds of questionnaires. In the first round the experts, through their answers, provided a list of identified professional competencies; in the second and third rounds this information was rated and subsequently analyzed with descriptive statistics (mean, mode, median and standard deviation). RESULTS: In this way, consensus was reached among the experts on 147 identified professional competencies. The competencies were classified into four macro-categories of professional competencies: technical (38), methodological (62), participative (24) and personal (23). CONCLUSION: The results showed that the systematic study of the professional competencies of the sports nutritionist contributes to establishing the content of a sports nutrition course to be incorporated into the curricula of Human Nutrition and Dietetics degrees.