Abstract:
This paper presents a new multi-model identification technique based on ANFIS for nonlinear systems. The technique uses a Takagi-Sugeno fuzzy structure in which the consequents are local linear models representing the system at different operating points, and the antecedents are membership functions adjusted during the learning phase of the ANFIS neuro-fuzzy technique. The models representing the system at the different operating points can be obtained with linearization techniques such as the Least Squares method, which is robust to noise and simple to apply. The fuzzy system, through its membership functions, determines the proportion in which each model should be used. The membership functions can be adjusted by ANFIS using neural network algorithms, such as error backpropagation, so that the models found for each region are correctly interpolated and a contribution of each model is defined for every possible system input. In multi-model approaches, this weighting of the models is known as the metric; since this work is based on ANFIS, it is called the ANFIS metric. The ANFIS metric is thus used to interpolate the various models that together compose the system to be identified. Unlike traditional ANFIS, the proposed technique necessarily represents the system in several well-defined regions by unaltered local models whose activation is weighted by the membership functions. The regions for applying the Least Squares method are selected manually, from graphical analysis of the system behavior or from the physical characteristics of the plant. This selection initializes the linear-model identification procedure and generates the initial configuration of the membership functions. The experiments are conducted on a multi-section teaching tank designed and built to highlight the characteristics of the technique.
The results from this tank illustrate the performance achieved by the technique in the identification task using several ANFIS configurations, comparing the developed technique with various simple-metric models and with the NNARX technique, also adapted for identification.
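The weighted interpolation of local models described in this abstract can be sketched as follows. This is a minimal illustration with invented membership functions and local models, not the paper's identified plant: Gaussian membership functions (playing the role of the ANFIS metric) weight local linear models fitted at different operating points, and the normalized weighted sum is the multi-model output.

```python
import numpy as np

# Minimal sketch of Takagi-Sugeno multi-model interpolation: Gaussian
# membership functions weight local linear models y = a*u + b identified
# at different operating points. All numbers are invented for illustration.

def gaussian_mf(u, center, sigma):
    """Membership degree of input u in the region centred at `center`."""
    return np.exp(-0.5 * ((u - center) / sigma) ** 2)

def ts_output(u, models, centers, sigmas):
    """Interpolate local linear models with normalized firing strengths."""
    weights = np.array([gaussian_mf(u, c, s) for c, s in zip(centers, sigmas)])
    weights = weights / weights.sum()            # normalized firing strengths
    outputs = np.array([a * u + b for a, b in models])  # local model outputs
    return float(weights @ outputs)

# two local models, e.g. obtained by Least Squares at operating points 0 and 10
models = [(1.0, 0.0), (0.5, 5.0)]
centers, sigmas = [0.0, 10.0], [3.0, 3.0]

print(ts_output(0.0, models, centers, sigmas))   # dominated by the first model
print(ts_output(10.0, models, centers, sigmas))  # dominated by the second
```

In ANFIS proper, the centers and widths of the membership functions would then be tuned by backpropagation so that the interpolation matches the measured plant output.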
Abstract:
Graduate Program in Genetics and Animal Breeding - FCAV
Abstract:
Lemonte and Cordeiro [Birnbaum-Saunders nonlinear regression models, Comput. Stat. Data Anal. 53 (2009), pp. 4441-4452] introduced a class of Birnbaum-Saunders (BS) nonlinear regression models potentially useful in lifetime data analysis. We give a general matrix Bartlett correction formula to improve the likelihood ratio (LR) tests in these models. The formula is simple enough to be used analytically to obtain several closed-form expressions in special cases. Our results generalize those in Lemonte et al. [Improved likelihood inference in Birnbaum-Saunders regressions, Comput. Stat. Data Anal. 54 (2010), pp. 1307-1316], which hold only for the BS linear regression models. We consider Monte Carlo simulations to show that the corrected tests work better than the usual LR tests.
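The idea behind a Bartlett correction can be illustrated in a much simpler setting than the BS regression above. This sketch (not the paper's matrix formula) rescales the LR statistic so that its mean matches that of the reference chi-square distribution, using the exponential mean test, where LR = 2n(x̄ - 1 - log x̄) with 1 degree of freedom under H0: mean = 1.

```python
import numpy as np

# Illustrative sketch of a Bartlett-type correction (toy example, not the
# Birnbaum-Saunders formula): divide the LR statistic by its Monte Carlo
# mean so that it better matches its chi-square(1) reference distribution.

rng = np.random.default_rng(0)

def lr_stat(x):
    """LR statistic for H0: mean = 1 with exponential data."""
    xbar = x.mean()
    return 2 * len(x) * (xbar - 1 - np.log(xbar))

n, reps = 5, 50_000
lrs = np.array([lr_stat(rng.exponential(1.0, n)) for _ in range(reps)])

chi2_95 = 3.841                  # 95% quantile of chi-square with 1 df
c = lrs.mean()                   # Bartlett factor: E[LR] / df, here df = 1
uncorrected = (lrs > chi2_95).mean()
corrected = (lrs / c > chi2_95).mean()
print(c, uncorrected, corrected)  # the corrected rate should sit nearer 0.05
```

In small samples the uncorrected LR test over-rejects because E[LR] exceeds the chi-square mean; dividing by the Bartlett factor brings the empirical size closer to the nominal 5 % level. The paper's contribution is an analytic, closed-form version of this factor for BS nonlinear regressions.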
Abstract:
The aim of this PhD thesis, developed in the framework of the Italian Agroscenari research project, is to compare current irrigation volumes in two study areas in Emilia-Romagna with the likely irrigation requirements under climate change conditions. The comparison was carried out between the reference period 1961-1990, as defined by the WMO, and the period 2021-2050, for which multi-model climate projections over the two study areas were available. The climate projections were analyzed in terms of their impact on irrigation demand and adaptation strategies for fruit and horticultural crops in the Faenza study area, with a detailed analysis for kiwifruit, and for horticultural crops in the Piacenza plain, focusing on the irrigation water needs of tomato. We produced downscaled climate projections (based on the IPCC A1B emission scenario) for the two study areas. The impacts of climate change over the period 2021-2050 on crop irrigation water needs and other agrometeorological indices were assessed by means of the Criteria water balance model, in its two available versions, Criteria BdP (local) and Geo (spatial), with different levels of detail. For both areas we found, in general, an increase in irrigation demand of about +10% when comparing the 2021-2050 period with the reference years 1961-1990, but no substantial differences with more recent years (1991-2008), mainly because a projected increase in spring precipitation compensates for the projected higher summer temperature and evapotranspiration. Consequently, no dramatic increase in irrigation volumes with respect to current volumes is forecast.
Abstract:
This research activity studied how uncertainties arise and interact in the multi-model approach, since this appears to be the greatest challenge in ocean and weather forecasting. Moreover, we tried to reduce model error through the superensemble approach. To this end, we created different datasets and, by means of suitable algorithms, obtained the superensemble estimate. We studied the sensitivity of this algorithm as a function of its characteristic parameters. Clearly, a reasonable estimate of the error cannot be made while neglecting the grid size of the ocean model, because of the large number of sub-grid phenomena embedded in the spatial discretization that can only be roughly parametrized rather than evaluated explicitly. For this reason we also developed a high-resolution model, in order to calculate for the first time the impact of grid resolution on model error.
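The superensemble estimate mentioned above can be sketched on synthetic data. This is a hypothetical illustration of a Krishnamurti-style superensemble, not the thesis's implementation: model forecasts are combined with least-squares weights fitted against observations over a training period and then applied to an independent verification period; all series and biases are invented.

```python
import numpy as np

# Hypothetical superensemble sketch on synthetic data: least-squares weights
# are fitted on a training period and applied to a verification period.

rng = np.random.default_rng(1)
t = np.arange(200)
obs = np.sin(0.1 * t)                          # synthetic "truth"
models = np.stack([obs + 1.0 + 0.2 * rng.standard_normal(200),   # biased high
                   0.7 * obs + 0.1 * rng.standard_normal(200),   # damped
                   obs - 0.6 + 0.3 * rng.standard_normal(200)])  # biased low

train = slice(0, 150)                          # training period
anoms = models - models[:, train].mean(axis=1, keepdims=True)
w, *_ = np.linalg.lstsq(anoms[:, train].T, obs[train] - obs[train].mean(),
                        rcond=None)
superens = obs[train].mean() + w @ anoms       # superensemble estimate

verif = slice(150, 200)                        # verification period
rmse_mean = np.sqrt(((models[:, verif].mean(axis=0) - obs[verif]) ** 2).mean())
rmse_se = np.sqrt(((superens[verif] - obs[verif]) ** 2).mean())
print(rmse_mean, rmse_se)   # the superensemble should beat the plain mean
```

Because the regression weights can undo each model's bias and amplitude error, the superensemble typically outperforms the unweighted multi-model mean on the verification period.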
Abstract:
The stratospheric cooling associated with ozone depletion in the polar regions induces a strengthening of the westerly winds in the lower stratosphere and a poleward shift and intensification of the mid-latitude tropospheric jet. These long-term changes project onto the high-index polarity of a mode of climate variability, the Southern Annular Mode, at the surface, where the mid-latitude westerlies drive the Antarctic Circumpolar Current, influencing the meridional ocean circulation and probably the sea-ice extent and the air-sea carbon fluxes in the Southern Ocean. A limited representation of stratospheric processes in the climate models used to simulate the past and to project future climate change appears to lead to errors in the representation of long-term tropospheric changes in the corresponding simulations. In this thesis, a multi-model analysis is carried out combining output data from several coupled ocean-atmosphere climate model simulations participating in the CMIP5 project, with the aim of understanding how different representations of stratospheric dynamics can lead to different representations of surface climate change. "High Top" (HT) models, which represent stratospheric dynamics well, and "Low Top" (LT) models, which do not, are used. The results are compared with the available global meteorological reanalyses (ERA-40). It is shown that the representation and intensity of the initial radiative cooling and of the dynamical cooling in the lower stratosphere in the models are the key factors controlling the subsequent tropospheric response, and that the cooling itself depends on the representation of stratospheric dynamics.
The models are also differentiated according to their representation of the radiative and dynamical cooling in the lower stratosphere and to the response of the tropospheric jet. In the models, the jet trend throughout the troposphere is found to be significantly linearly correlated with the lower-stratospheric cooling itself.
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm for many real-world applications. The individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential. Individual sensors have a limited power supply, and the wireless band can become very cluttered when multiple sensors try to transmit at the same time. Furthermore, the individual sensors have limited communication range, so the network may not have a 1-hop communication topology, and routing can be a problem in many cases. Carefully designed algorithms can alleviate the physical limitations of sensor networks and allow them to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application in sensor networks, the detection and tracking of targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, event isolation and dynamic clustering. The main strategy is to use only binary data for rough global inferences, and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings.
Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated with real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars are calibrated to detect the fall of a human body, and the two-tier tracking algorithm is used on the ultrasonic sensors to track the location of elderly residents.
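The two-tier strategy of the abstract (rough global inference from binary data, then a dynamic cluster for detailed computation) can be sketched as follows. The sensor layout, detection radius, noise level and the trilateration refinement are all invented for illustration; the thesis's actual inference is based on graphical models.

```python
import numpy as np

# Hypothetical two-tier tracking sketch: binary detections give a rough
# global estimate, then a small dynamic cluster near that estimate refines
# it by linearized least-squares trilateration on noisy range readings.

rng = np.random.default_rng(2)
sensors = rng.uniform(0, 10, size=(50, 2))   # random 2-D deployment
target = np.array([4.0, 6.0])
detect_radius = 3.0

dists = np.linalg.norm(sensors - target, axis=1)
binary = dists < detect_radius               # tier 1: binary detections only
rough = sensors[binary].mean(axis=0)         # centroid of firing sensors

# tier 2: dynamically cluster the k sensors nearest the rough estimate and
# refine with their noisy range readings (||x - p_i||^2 = r_i^2, linearized
# by subtracting the first sensor's equation)
k = 6
cluster = np.argsort(np.linalg.norm(sensors - rough, axis=1))[:k]
p = sensors[cluster]
r = dists[cluster] + 0.1 * rng.standard_normal(k)
A = 2 * (p[1:] - p[0])
b = r[0] ** 2 - r[1:] ** 2 + (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
refined, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.linalg.norm(rough - target), np.linalg.norm(refined - target))
```

The cheap binary tier keeps communication low across the whole network, while only the small cluster near the target spends energy on detailed measurements, matching the energy constraints discussed above.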
Abstract:
Questionnaire data may contain missing values because certain questions do not apply to all respondents. For instance, questions addressing particular attributes of a symptom, such as frequency, triggers or seasonality, are only applicable to those who have experienced the symptom, while for those who have not, responses to these items will be missing. This missing information does not fall into the category 'missing by design'; rather, the features of interest do not exist and cannot be measured regardless of survey design. Analysis of responses to such conditional items is therefore typically restricted to the subpopulation in which they apply. This article is concerned with joint multivariate modelling of responses to both unconditional and conditional items without restricting the analysis to this subpopulation. Such an approach is of interest when the distributions of both types of responses are thought to be determined by common parameters affecting the whole population. By integrating the conditional item structure into the model, inference can be based both on unconditional data from the entire population and on conditional data from subjects for whom they exist. This approach opens new possibilities for multivariate analysis of such data. We apply this approach to latent class modelling and provide an example using data on respiratory symptoms (wheeze and cough) in children. Conditional data structures such as that considered here are common in medical research settings and, although our focus is on latent class models, the approach can be applied to other multivariate models.
Abstract:
For the detection of climate change, not only the magnitude of a trend signal is of significance. An essential issue is the time period required for the trend to become detectable in the first place. An illustrative measure for this is the time of emergence (ToE), that is, the point in time when a signal finally emerges from the background noise of natural variability. We investigate the ToE of trend signals in different biogeochemical and physical surface variables utilizing a multi-model ensemble comprising simulations of 17 Earth system models (ESMs). We find that signals in ocean biogeochemical variables emerge on much shorter timescales than the physical variable sea surface temperature (SST). The ToE patterns of pCO2 and pH are spatially very similar to DIC (dissolved inorganic carbon), yet the trends emerge much faster – after roughly 12 yr for the majority of the global ocean area, compared to between 10 and 30 yr for DIC. Even larger ToEs of 45–90 yr are found for SST. In general, the background noise is of higher importance in determining ToE than the strength of the trend signal. In areas with high natural variability, even strong trends both in the physical climate and carbon cycle system are masked by variability over decadal timescales. In contrast to the trend, natural variability is affected by the seasonal cycle. This has important implications for observations, since it implies that intra-annual variability could question the representativeness of irregularly sampled seasonal measurements for the entire year and, thus, the interpretation of observed trends.
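The ToE concept above reduces to a simple signal-to-noise calculation. This is a minimal sketch with invented numbers, not the paper's multi-model analysis: the ToE is taken as the first year in which the accumulated linear trend signal exceeds the background noise (the standard deviation of natural variability) by a factor of 2.

```python
import numpy as np

# Minimal time-of-emergence sketch: first year the accumulated trend signal
# exceeds twice the background-noise level. All numbers are invented.

years = np.arange(100)

def time_of_emergence(trend_per_year, noise_std, threshold=2.0):
    """First year the trend signal exceeds threshold * noise, else None."""
    signal = trend_per_year * years            # accumulated trend signal
    emerged = signal > threshold * noise_std
    return int(np.argmax(emerged)) if emerged.any() else None

# strong trend / low noise (pH-like) vs weak trend / high noise (SST-like)
print(time_of_emergence(0.02, 0.05))   # -> 6
print(time_of_emergence(0.01, 0.30))   # -> 61
```

Doubling the trend merely halves the ToE, whereas the ToE scales directly with the noise level, which illustrates the paper's point that background variability matters more than trend strength.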
Abstract:
Using an international, multi-model suite of historical forecasts from the World Climate Research Programme (WCRP) Climate-system Historical Forecast Project (CHFP), we compare the seasonal prediction skill in boreal wintertime between models that resolve the stratosphere and its dynamics ('high-top') and models that do not ('low-top'). We evaluate hindcasts that are initialized in November, and examine the model biases in the stratosphere and how they relate to boreal wintertime (December-March) seasonal forecast skill. We are unable to detect more skill in the high-top ensemble-mean than the low-top ensemble-mean in forecasting the wintertime North Atlantic Oscillation, but model performance varies widely. Increasing the ensemble size clearly increases the skill for a given model. We then examine two major processes involving stratosphere-troposphere interactions (the El Niño/Southern Oscillation (ENSO) and the Quasi-Biennial Oscillation (QBO)) and how they relate to predictive skill on intraseasonal to seasonal time-scales, particularly over the North Atlantic and Eurasia regions. High-top models tend to have a more realistic stratospheric response to El Niño and the QBO compared to low-top models. Enhanced conditional wintertime skill over high latitudes and the North Atlantic region during winters with El Niño conditions suggests a possible role for a stratospheric pathway.
Abstract:
Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and to make predictions. Such simulators are often computationally demanding and are thus not amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulator, accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower-dimensional active variable set simplifies subsequent emulation analysis. For random-output, or stochastic, simulators the output dispersion, and thus variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are efficiently handled in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real-world simulators, the emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling replicate-free designs in terms of both model parameter uncertainty and predictive variance.
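The core emulation idea above can be sketched in a few lines. This is a minimal homoscedastic Gaussian-process surrogate with invented kernel settings and a stand-in "simulator", not the thesis's heteroscedastic extension: a handful of expensive runs yields both a cheap prediction and an approximation-uncertainty estimate at untried inputs.

```python
import numpy as np

# Minimal Gaussian-process emulator sketch: fit to 8 "simulator" runs and
# predict (with uncertainty) at an untried input. Kernel and length-scale
# are invented; the simulator is a cheap stand-in function.

def simulator(x):                  # stand-in for an expensive computer model
    return np.sin(3 * x) + x

def rbf(a, b, ls=0.3):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

x_train = np.linspace(0, 2, 8)                 # 8 simulator evaluations
y_train = simulator(x_train)
x_new = np.array([0.55])

K = rbf(x_train, x_train) + 1e-8 * np.eye(8)   # jitter for stability
k_star = rbf(x_new, x_train)
alpha = np.linalg.solve(K, y_train)
mean = k_star @ alpha                          # emulator prediction
var = rbf(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)

print(float(mean[0]), float(var[0, 0]))
```

The predictive variance term is what the thesis generalizes: for stochastic simulators the noise level itself becomes an input-dependent function, and the design question is where to place (and replicate) runs to learn both surfaces efficiently.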
Abstract:
Discourse comprehension, and how it evolves with aging, is a subject of great importance owing to its complexity and its role in preserving the quality of life of older adults. The objectives of this thesis were to assess the influence of aging and of education level on discourse comprehension abilities and on the associated brain activity. To this end, three groups (young adults with a university education, older adults with a university education, and older adults with a high-school education) performed a task in which they read short stories and then judged the truth of a statement about each story. Comprehension abilities corresponding to the processing at three levels of Kintsch's construction-integration model (the microstructure, the macrostructure and the situation model) were assessed. Optical imaging (NIRS) was used to estimate variations in oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) throughout the task. The results showed that older adults were as able as younger ones to recall the macrostructure (the gist of the text), but had more difficulty recalling the microstructure (details) and the situation model (inference and integration) after reading short texts. During reading, the older participants also showed greater brain activity in the left dorsolateral prefrontal cortex, which could be a compensation mechanism as described in the CRUNCH model. No significant difference was observed when comparing older participants with a university education to those with a high-school education, either in comprehension abilities or in the associated brain activity.
Both groups, however, had lifestyle habits that stimulate cognition, including good reading habits. These habits thus seem to have a greater influence than education on comprehension performance and on the underlying brain activity. Education may therefore influence cognition by promoting habits that favor cognitive activities, and it may be these habits that ultimately have a real impact on cognitive aging.
Abstract:
Observing, modelling and understanding the climate-scale variability of deep water formation (DWF) in the North-Western Mediterranean Sea remains very challenging today. In this study, we first characterize the interannual variability of this phenomenon by a thorough reanalysis of observations in order to establish reference time series. These quantitative indicators include 31 observed years of the yearly maximum mixed layer depth over the period 1980–2013 and a detailed multi-indicator description of the period 2007–2013. Then a 1980–2013 hindcast simulation is performed with a fully coupled regional climate system model including a high-resolution representation of the regional atmosphere, ocean, land surface and rivers. The simulation reproduces quantitatively well the mean behaviour and the large interannual variability of the DWF phenomenon. The model shows convection deeper than 1000 m in two-thirds of the modelled winters and a mean DWF rate of 0.35 Sv, with maximum values of 1.7 (resp. 1.6) Sv in 2013 (resp. 2005). Using the model results, the winter-integrated buoyancy loss over the Gulf of Lions is identified as the primary driving factor of the DWF interannual variability and, alone, explains around 50 % of its variance. It is itself explained by the occurrence of a few stormy days during winter. At daily scale, the Atlantic ridge weather regime is identified as favourable to strong buoyancy losses and therefore to DWF, whereas the positive phase of the North Atlantic Oscillation is unfavourable. The driving role of the vertical stratification in autumn, a measure of the water column's inhibition to mixing, has also been analyzed. Combining both driving factors explains more than 70 % of the interannual variance of the phenomenon and, in particular, the occurrence of the five strongest convective years of the model (1981, 1999, 2005, 2009, 2013).
The model simulates qualitatively well the trends in the deep waters (warming, increasing salinity, increase in the dense water volume, increase in the bottom water density) despite an underestimation of the salinity and density trends. These deep trends arise from a heat and salt accumulation during the 1980s and 1990s in the surface and intermediate layers of the Gulf of Lions, which is then transferred stepwise towards the deep layers when very convective years occur, in 1999 and later. The salinity increase in the near-Atlantic surface layers seems to be the external forcing that ultimately drives these deep trends. In the future, our results may help to better understand the behaviour of the DWF phenomenon in Mediterranean Sea simulations in hindcast, forecast, reanalysis or future climate change scenario modes. The robustness of these results must, however, be confirmed in multi-model studies.
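The variance-explained attribution described in this abstract (around 50 % from buoyancy loss alone, more than 70 % when combined with stratification) amounts to comparing nested regressions. This sketch uses entirely synthetic data with invented coefficients, not the study's simulation output.

```python
import numpy as np

# Hypothetical attribution sketch: regress a synthetic DWF index on winter
# buoyancy loss alone, then on buoyancy loss plus autumn stratification,
# and compare the explained variance (R^2). All numbers are invented.

rng = np.random.default_rng(4)
n = 34                                   # winters 1980-2013
buoyancy = rng.standard_normal(n)        # winter-integrated buoyancy loss
stratification = rng.standard_normal(n)  # autumn vertical stratification
dwf = 0.7 * buoyancy - 0.5 * stratification + 0.4 * rng.standard_normal(n)

def r2(X, y):
    """Fraction of variance explained by an ordinary least-squares fit."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - (y - X1 @ beta).var() / y.var()

r2_one = r2(buoyancy[:, None], dwf)
r2_both = r2(np.column_stack([buoyancy, stratification]), dwf)
print(r2_one, r2_both)   # adding the second driver raises explained variance
```

Since the two-predictor model nests the one-predictor model, its R² can only increase; the interesting physical claim in the study is how large that increase is.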
Abstract:
Part 13: Virtual Reality and Simulation