959 results for Polynomial distributed lag models
Abstract:
Space-for-time substitution is often used in predictive models because long-term time-series data are not available. Critics of this method suggest that factors other than the target driver may affect ecosystem response and could vary spatially, producing misleading results. Monitoring data from the Florida Everglades were used to test whether spatial data can be substituted for temporal data in forecasting models. Spatial models that predicted bluefin killifish (Lucania goodei) population response to a drying event performed comparably to, and sometimes better than, temporal models. Models worked best when results were not extrapolated beyond the range of variation encompassed by the original dataset. These results were compared with other studies to determine which ecosystem features influence the feasibility of space-for-time substitution. Taken in the context of other studies, these results suggest that space-for-time substitution may work best in ecosystems with low beta-diversity, high connectivity between sites, and a small lag in organismal response to the driver variable.
Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies are used to examine shipboard power system behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage, and the harmonic behavior of components. Different experiments were conducted to validate the developed models, and the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for fault detection in shipboard power systems, given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies. This modeling methodology can be used to evaluate and predict the future behavior of NPS components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before integrating it into the system.
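The abstract does not give implementation details of the wavelet step, but the general idea is standard. Below is a minimal, hypothetical sketch of wavelet-based feature extraction from a current signal using the PyWavelets library; the sampling rate, wavelet choice (db4) and per-level energy features are illustrative assumptions, not the thesis's actual design.

```python
# Hypothetical sketch: per-level wavelet energies as diagnostic features.
import numpy as np
import pywt

fs = 5000.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
# Synthetic stator current: 60 Hz fundamental, a 5th harmonic, and noise
current = (np.sin(2 * np.pi * 60 * t)
           + 0.2 * np.sin(2 * np.pi * 300 * t)
           + 0.05 * np.random.randn(t.size))

# Multilevel discrete wavelet decomposition of the current signal
coeffs = pywt.wavedec(current, "db4", level=5)
# Energy in each band forms a compact feature vector for harmonic/fault studies
features = [float(np.sum(c ** 2)) for c in coeffs]
print(features)
```

Such energy features can then be fed to a classifier or compared against healthy-condition baselines.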
Abstract:
Studies reveal that sleep duration has decreased in recent decades. Social commitments such as work and school are often not aligned with the "biological time" of individuals. Added to this, there is a reduced zeitgeber strength caused by less exposure to daylight and greater exposure to light in the evening. This causes a chronic sleep debt that is offset on free days; in effect, a weekly cycle of sleep restriction and extension called "social jet lag" occurs. Sleep deprivation has been associated with obesity, cancer, and cardiovascular risk. It has been suggested that the autonomic nervous system is a pathway connecting sleep problems to cardiovascular disease. However, beyond the evidence from studies using models of acute and controlled sleep deprivation, studies are needed to investigate the effects of chronic sleep deprivation as it occurs in social jet lag. The aim of this study was to investigate the influence of social jet lag on circadian rest-activity markers and heart function in medical students. It is a cross-sectional, observational study conducted in the Laboratory of Neurobiology and Biological Rhythmicity (LNRB) at the Department of Physiology, UFRN. Medical students enrolled in the 1st semester of their course at UFRN participated in the survey. Instruments for data collection: Munich Chronotype Questionnaire, Horne and Östberg Morningness-Eveningness Questionnaire, Pittsburgh Sleep Quality Index, Epworth Sleepiness Scale, actimeter, and heart rate monitor. The variables analysed were descriptive sleep variables, nonparametric rest-activity variables (IV60, IS60, L5 and M10), and cardiac indexes in the time domain, frequency domain (LF, HF, LF/HF) and nonlinear domain (SD1, SD2, SD1/SD2). Descriptive, comparative and correlational statistical analyses were performed with SPSS software version 20. Forty-one students participated in the study, 48.8% (20) female and 51.2% (21) male, aged 19.63 ± 2.07 years. Social jet lag averaged 02:39 h ± 00:55 h, 82.9% (34) of participants had social jet lag ≥ 1 h, and there was a negative correlation with the Munich chronotype score, indicating greater sleep deprivation in subjects prone to eveningness. Poor sleep quality was detected in 90.2% (37) (χ² = 26.56, p < 0.001) and excessive daytime sleepiness in 56.1% (23) (χ² = 0.61, p = 0.435). Significant differences were observed in the values of LFnu, HFnu and LF/HF between the social jet lag < 2 h and ≥ 2 h groups, and social jet lag correlated with LFnu (rs = 0.354, p = 0.023), HFnu (rs = -0.354, p = 0.023) and LF/HF (rs = 0.355, p = 0.023). There was also a negative association between IV60 and the time-domain and nonlinear indexes. It is suggested that chronic sleep deprivation may be associated with increased sympathetic activation, promoting greater cardiovascular risk.
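For readers unfamiliar with how social jet lag is quantified, the standard Munich Chronotype Questionnaire definition is the absolute difference between mid-sleep on free days and mid-sleep on workdays. A minimal sketch, with hypothetical times (and ignoring wrap-around across midnight):

```python
# Social jet lag (SJL) = |mid-sleep on free days - mid-sleep on workdays|.
from datetime import timedelta

def midsleep(onset_h, duration_h):
    """Mid-sleep point in hours after midnight (onset may exceed 24)."""
    return (onset_h + duration_h / 2) % 24

msw = midsleep(onset_h=23.5, duration_h=6.0)  # workdays: sleep 23:30, mid 02:30
msf = midsleep(onset_h=25.5, duration_h=9.0)  # free days: sleep 01:30, mid 06:00
sjl = abs(msf - msw)
print(timedelta(hours=sjl))  # 3:30:00 for these example times
```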
Abstract:
Distributed Computing frameworks belong to a class of programming models that allow developers to launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data analytics workloads have become a common case among distributed computing applications, making Data Science an entire field of Computer Science. We argue that a data scientist's concern lies in three main components: a dataset, a sequence of operations to apply to this dataset, and constraints related to the work (performance, QoS, budget, etc.). However, without domain expertise it is extremely difficult to perform data science: one needs to select the right amount and type of resources, pick a framework, and configure it. Moreover, users often run their applications in shared environments governed by schedulers that expect them to specify their resource needs precisely. Owing to the distributed and concurrent nature of these frameworks, monitoring and profiling are hard, high-dimensional problems that prevent users from making the right configuration choices and determining the right amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime, which remains unused.

In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit monitoring data to learn about workloads and to process user requests into a tailored execution context. In this work, we study techniques that have been used to move toward such system awareness, and explore a new approach: applying machine learning techniques to recommend a specific subset of system configurations for Apache Spark applications. Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
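As a concrete illustration of the recommendation idea (not the thesis's actual implementation), one could match a new workload's monitoring features to the most similar previously observed run and reuse the configuration that worked well for it. The features, values, and nearest-neighbour choice below are all assumptions for the sketch:

```python
# Hypothetical sketch: recommend a Spark executor configuration by nearest
# neighbour over monitoring features of past applications.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Monitoring features per past run: [input GB, shuffle GB, CPU-bound fraction]
# (a real system would normalize these and use many more signals)
runs = np.array([
    [ 10.0,  1.0, 0.9],
    [200.0, 80.0, 0.3],
    [ 50.0,  5.0, 0.7],
])
# Best known configuration for each past run (real Spark property names)
best_config = [
    {"spark.executor.cores": 4, "spark.executor.memory": "8g",  "spark.executor.instances": 5},
    {"spark.executor.cores": 2, "spark.executor.memory": "16g", "spark.executor.instances": 40},
    {"spark.executor.cores": 4, "spark.executor.memory": "8g",  "spark.executor.instances": 12},
]

model = NearestNeighbors(n_neighbors=1).fit(runs)
new_workload = np.array([[60.0, 10.0, 0.6]])
_, idx = model.kneighbors(new_workload)
print(best_config[idx[0][0]])  # reuse the most similar run's configuration
```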
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel to each subset using a regularized regression or Bayesian variable selection method, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show its excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
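The steps of message are concrete enough to sketch. Below is a minimal illustration with Lasso standing in for the regularized selection step; the subset count, penalty, and synthetic data are assumptions, not the thesis's setup:

```python
# Hypothetical sketch of the message recipe: partition rows, select features
# per subset, keep features selected by at least half the subsets (median
# inclusion), refit per subset, then average the estimates.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def message(X, y, n_subsets=4, alpha=0.1):
    n, p = X.shape
    subsets = np.array_split(np.random.permutation(n), n_subsets)
    # Step 1: feature selection in parallel on each subset
    inclusion = np.zeros((n_subsets, p))
    for k, rows in enumerate(subsets):
        inclusion[k] = Lasso(alpha=alpha).fit(X[rows], y[rows]).coef_ != 0
    # Step 2: 'median' feature inclusion index across subsets
    selected = np.where(np.median(inclusion, axis=0) >= 0.5)[0]
    # Step 3: estimate coefficients per subset on selected features, then average
    coefs = np.array([LinearRegression().fit(X[rows][:, selected], y[rows]).coef_
                      for rows in subsets])
    return selected, coefs.mean(axis=0)

# Usage on synthetic data: y depends on the first 3 of 50 features
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=2000)
selected, beta = message(X, y)
```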
While sample space partitioning is useful in handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
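The decorrelation step is the distinctive part of DECO. A minimal, illustrative sketch (the ridge regularization and whitening construction below are simplifying assumptions, not the exact published procedure):

```python
# Hypothetical sketch: whiten rows so that feature blocks fitted on separate
# workers are (nearly) decorrelated, then fit each block independently.
import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import Lasso

def deco(X, y, n_workers=4, alpha=0.1, ridge=1e-4):
    n, p = X.shape
    F = np.linalg.inv(sqrtm(X @ X.T / p + ridge * np.eye(n))).real
    Xt, yt = F @ X, F @ y                      # decorrelated data
    blocks = np.array_split(np.arange(p), n_workers)
    coef = np.zeros(p)
    for cols in blocks:                        # in practice, run in parallel
        coef[cols] = Lasso(alpha=alpha).fit(Xt[:, cols], yt).coef_
    return coef

# Usage: high-dimensional synthetic data (p > n)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 500))
y = X[:, 0] * 3.0 + rng.normal(size=200)
beta = deco(X, y)
```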
For datasets with both large sample sizes and high dimensionality, I propose a new divide-and-conquer framework, DEME (DECO-message), which leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
Abstract:
The study of gene × environment interactions, as well as epistatic interactions, in schizophrenia has provided important insight into the complex etiopathologic basis of the disorder. It has also increased our understanding of the role of susceptibility genes in the disorder and is an important consideration as we seek to translate genetic advances into novel antipsychotic treatment targets. This review summarises data arising from research involving the modelling of gene × environment interactions in schizophrenia using preclinical genetic models. Evidence for synergistic effects on the expression of schizophrenia-relevant endophenotypes will be discussed. It is proposed that valid, multifactorial preclinical models are important tools for identifying critical areas, as well as underlying mechanisms, of convergence of genetic and environmental risk factors and their interaction in schizophrenia.
Abstract:
BACKGROUND: The role of the microbiome has become synonymous with human health and disease. Bile acids, as essential components of the microbiome, have gained sustained credibility as potential modulators of cancer progression in several disease models. At physiological concentrations, bile acids appear to influence cancer phenotypes, although conflicting data surround their precise physiological mechanism of action. Previously, we demonstrated that bile acids destabilised the HIF-1α subunit of the Hypoxia-Inducible Factor-1 (HIF-1) transcription factor. HIF-1 overexpression is an early biomarker of tumour metastasis and is associated with tumour resistance to conventional therapies and poor prognosis in a range of different cancers. METHODS: Here we investigated the effects of bile acids on the growth and migratory potential of cancer cell lines in which HIF-1α is known to be active under hypoxic conditions. HIF-1α status was investigated in A-549 lung, DU-145 prostate and MCF-7 breast cancer cell lines exposed to the bile acids CDCA and DCA. Cell adhesion, invasion and migration were assessed in DU-145 cells, while clonogenic growth was assessed in all cell lines. RESULTS: Intracellular HIF-1α was destabilised in the presence of bile acids in all cell lines tested. Bile acids were not cytotoxic but greatly reduced clonogenic potential in two of the three cell lines. In the migratory prostate cancer cell line DU-145, bile acids impaired cell adhesion, migration and invasion. CDCA and DCA destabilised HIF-1α in all cells and significantly suppressed key cancer progression-associated phenotypes: clonogenic growth, invasion and migration in DU-145 cells. CONCLUSIONS: These findings suggest previously unobserved roles for bile acids as physiologically relevant molecules targeting hypoxic tumour progression.
Abstract:
The real-time optimization of large-scale systems is a difficult problem due to the need for complex models involving uncertain parameters and the high computational cost of solving such problems by a decentralized approach. Extremum-seeking control (ESC) is a model-free real-time optimization technique that can estimate unknown parameters and optimize nonlinear time-varying systems using only a measurement of the cost function to be minimized. In this thesis, we develop a distributed version of extremum-seeking control that allows large-scale systems to be optimized without models and with minimal computing power. First, we develop a continuous-time distributed extremum-seeking controller with three main components: consensus, parameter estimation, and optimization. The consensus component provides each local controller with an estimate of the cost to be minimized, allowing the controllers to coordinate their actions. Using this cost estimate, each controller estimates the parameters of a local input-output model and minimizes the cost by following a gradient descent based on the resulting gradient estimate. Next, a similar distributed extremum-seeking controller is developed in discrete time. Finally, we consider an interesting application of distributed ESC: formation control of high-altitude balloons for high-speed wireless internet. These balloons must be steered into a favourable formation in which they are spread out over the Earth and provide coverage to the entire planet. Distributed ESC is applied to this problem and is shown to be effective for a system of 1200 balloons subjected to realistic wind currents. The approach does not require a wind model and uses a cost function based on a Voronoi partition of the sphere. Distributed ESC is able to steer balloons from a few initial launch sites into a formation that provides coverage to the entire Earth, and can maintain a similar formation as the balloons move with the wind around the planet.
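While the thesis's distributed controller is more involved, the core single-loop ESC mechanism can be sketched briefly: perturb the input with a sinusoidal dither, correlate the measured cost with the dither to estimate the gradient, and descend. The gains, dither size, and toy cost below are illustrative assumptions:

```python
# Hypothetical sketch of classic perturbation-based extremum seeking.
import numpy as np

def esc_step(u_hat, t, cost, a=0.1, omega=5.0, k=0.5, dt=0.01):
    dither = a * np.sin(omega * t)
    J = cost(u_hat + dither)              # only a cost measurement is needed
    grad_est = J * np.sin(omega * t)      # demodulation: correlate cost, dither
    return u_hat - k * grad_est * dt      # gradient descent on the estimate

# Minimize J(u) = (u - 2)^2 with no model of J
u_hat, t, dt = 0.0, 0.0, 0.01
for _ in range(20000):
    u_hat = esc_step(u_hat, t, lambda u: (u - 2.0) ** 2)
    t += dt
print(round(u_hat, 2))  # approaches 2.0 on average
```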
Abstract:
Impactive contact between a vibrating string and a barrier is a strongly nonlinear phenomenon that presents several challenges in the design of numerical models for simulation and sound synthesis of musical string instruments. These are addressed here by applying Hamiltonian methods to incorporate distributed contact forces into a modal framework for discrete-time simulation of the dynamics of a stiff, damped string. The resulting algorithms have spectral accuracy, are unconditionally stable, and require solving a multivariate nonlinear equation that is guaranteed to have a unique solution. Illustrative results are presented and discussed in terms of accuracy, convergence, and spurious high-frequency oscillations.
Abstract:
Hybrid simulation is a technique that combines experimental and numerical testing and has been used for the past few decades in the fields of aerospace, civil and mechanical engineering. During this time, most of the research has focused on developing algorithms and the necessary technology, including, but not limited to, error minimisation techniques, phase lag compensation and faster hydraulic cylinders. However, one of the main shortcomings of hybrid simulation that has prevented its widespread use is the size of the numerical models and the effect that higher frequencies may have on the stability and accuracy of the simulation. The first chapter of this document provides an overview of the hybrid simulation method and of the hybrid simulation schemes, with their corresponding time integration algorithms, that are most commonly used in this field. The scope of this thesis is presented in more detail in chapter 2: a substructure algorithm, the Substep Force Feedback (Subfeed), is adapted to fulfil the necessary speed requirements. The effects of more complex models on the Subfeed are also studied in detail, and the improvements made are validated experimentally. Chapters 3 and 4 detail the methodologies used to accomplish these objectives, listing the different cases of study and describing the hardware and software used to validate them experimentally. The third chapter contains a brief introduction to a project, the DFG Subshake, whose data have been used as a starting point for the developments shown later in this thesis. The results obtained are presented in chapters 5 and 6; the first focuses on purely numerical simulations, while the second is oriented towards practical application, including experimental real-time hybrid simulation tests with large numerical models. Following the discussion of the developments in this thesis, chapter 7 lists the hardware and software requirements that must be met in order to apply the methods described in this document. The last chapter, chapter 8, presents the conclusions and achievements drawn from the results, namely: the adaptation of the hybrid simulation algorithm Subfeed for use with large numerical models, the study of the effect of high frequencies on the substructure algorithm, and experimental real-time hybrid simulation tests with vibrating subsystems using large numerical models and shake tables. A brief discussion of possible future research activities can be found in the concluding chapter.
Abstract:
Objective: Leadership is particularly important in complex, highly interprofessional health care contexts involving a number of staff, some from the same specialty (intraprofessional) and others from different specialties (interprofessional). The authors recently published the concept of "The Burns Suite" (TBS) as a novel simulation tool to deliver interprofessional and teamwork training. It is unclear which leadership behaviors are the most important in an interprofessional burns resuscitation scenario, and whether they can be mapped onto current leadership theory. The purpose of this study was to perform a comprehensive video analysis of leadership behaviors within TBS. Methods: A total of 3 burns resuscitation simulations within TBS were recorded. The video analysis was grounded-theory inspired. Using predefined criteria, actions/interactions deemed to be leadership behaviors were identified, and through an inductive iterative process 8 main leadership behaviors were defined. Cohen's κ coefficient was used to measure inter-rater agreement and was calculated as κ = 0.7 (substantial agreement). Each video was watched 4 times, focusing on 1 of the 4 team members per viewing (senior surgeon, senior nurse, trainee surgeon, and trainee nurse). The frequency and types of leadership behavior of each of the 4 team members were recorded. Differences were assessed using analysis of variance, with p < 0.05 taken as significant. Leadership behaviors were triangulated with verbal cues and actions from the videos. Results: All 3 scenarios were successfully completed. The mean scenario length was 22 minutes. A total of 362 leadership behaviors were recorded from the 12 participants. The most evident leadership behavior of all team members was adhering to guidelines (which effectively equates to following Advanced Trauma and Life Support/Emergency Management of Severe Burns resuscitation guidelines, and hence "maintaining standards"), followed by making decisions. Although in terms of total frequency the senior surgeon engaged in more leadership behaviors than the other team members, statistically there was no significant difference among the 4 members across the 8 leadership categories. This analysis highlights that "distributed leadership" was predominant, whereby leadership was "distributed" or "shared" among team members. The leadership behaviors within TBS also seemed to fall in line with the "direction, alignment, and commitment" ontology. Conclusions: Effective leadership is essential for the successful functioning of work teams and the accomplishment of task goals. As the resuscitation of a patient with major burns is a dynamic event, team leaders require flexibility in their leadership behaviors to adapt effectively to changing situations. Understanding the leadership behaviors of different team members within an authentic simulation can identify important behaviors required to optimize nontechnical skills in a major resuscitation. Furthermore, attempting to map these behaviors onto leadership models can further our understanding of leadership theory. Collectively, this can aid the development of refined simulation scenarios for team members and can be extrapolated into other areas of simulation-based team training and interprofessional education.
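For context on the agreement statistic mentioned above, Cohen's κ compares observed rater agreement with the agreement expected by chance. A small sketch with hypothetical codings (not the study's data):

```python
# Hypothetical inter-rater agreement check for coded leadership behaviors.
from sklearn.metrics import cohen_kappa_score

rater_a = ["decision", "standards", "decision", "support", "standards"]
rater_b = ["decision", "standards", "support",  "support", "standards"]

print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")  # ~0.71
```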
Abstract:
My thesis examines health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing growing wait times for health care, it is important to understand the structure of physicians' labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physicians' productivity is affected by their experience, through the mechanism of learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires, in order to keep the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring the variation in physicians' productivity over their careers), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided, and their prices. They cover a period during which the Quebec government changed the relative prices of health services. I develop and estimate a structural labour supply model in which physicians are multitasking: they choose the number of hours worked as well as the allocation of those hours across the different services they provide, with service prices imposed by the government. The model generates an income equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across the different services. The price index depends on the prices of the services provided and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on Quebec physicians' remuneration merged with data on how those physicians use their time. I use the model to examine two dimensions of the supply of health services. First, I analyse the use of monetary incentives to induce physicians to change their production of the various services. Although previous studies have often compared physician behaviour across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services.
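A hedged sketch of the income equation's structure as described above; the functional form and symbols are illustrative, not the thesis's exact specification:

```latex
% Income of physician i in period t: hours worked times a price index that
% reflects the optimal allocation of hours across services j at prices p_t.
R_{it} = w(p_t;\theta)\, h_{it},
\qquad
w(p_t;\theta) = \max_{s \in \Delta} \sum_{j} p_{jt}\, f_j(s_j;\theta)
```

Here h_{it} denotes hours worked, s the shares of hours allocated to each service (Δ the simplex), f_j the service production technology, and w the marginal return to an hour of work when hours are allocated optimally.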
Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services. My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyse how experience affects physicians' productivity. This has important implications for the recruitment of physicians to meet the growing demand of an ageing population, particularly as the most experienced (most productive) physicians retire. In the first essay, I estimate the income equation conditional on hours worked, using instrumental variables to control for possible endogeneity of hours worked. As instruments I use indicator variables for physicians' ages, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the direct price elasticity, allowing a test of whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians respond to incentives. A change in relative prices leads physicians to allocate more working hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analysing variation in physicians' hours worked, the volume of services provided, and physicians' income. To do so, I use the simulated method of moments estimator. The results show that the direct substitution price elasticities are large and significantly positive, reflecting physicians' tendency to increase the volume of the service whose price increased the most. The cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (mean elasticity of -0.02) as well as their clinical hours worked (mean elasticity of -0.07). They would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his productivity to establish physicians' productivity profile. To do so, I modify the model specification to account for the relationship between a physician's productivity and his experience. I estimate the income equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing, concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions.
In addition, one more year of experience increases a physician's production of services by 1,003 Canadian dollars. I use the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
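To make the replacement-ratio logic concrete, an increasing, concave productivity profile implies that one retiring experienced physician produces more than one newly recruited physician. The profile and numbers below are hypothetical, not the thesis's estimates (which yield a ratio of 1.2):

```python
# Illustrative replacement-ratio computation under an assumed concave profile.
import numpy as np

def productivity(experience_years, a=1.0, b=0.35):
    return a * np.log1p(b * experience_years)   # increasing and concave

retiring = productivity(30.0)    # an experienced physician near retirement
entering = productivity(10.0)    # a less experienced recruit
print(f"replacement ratio = {retiring / entering:.2f}")
```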