930 results for Network structure
Abstract:
In Portugal and in the Western world, the ageing of the population creates huge challenges for society. Its growing relevance is owed to considerable repercussions at the personal, family, socio-political and economic levels, which affect people of all ages and society as a whole, posing specific challenges regarding interpersonal relationships, quality of life and mental health in the elderly. The present work aims to analyse the association between quality of life, depression and the characteristics of the personal social networks of the elderly. 317 individuals participated in the study, 202 female and 115 male, aged 65 or over, with a mean age of 77 years (SD=7.57). Three instruments were used to collect data: the Geriatric Depression Scale (GDS Short Form 15) (Yesavage et al., 1983; Almeida & Almeida, 1999); the WHO Quality of Life instrument (WHOQOL) (WHO, 1998; Canavarro et al., 2006); and the Personal Social Network Analysis Tool (IARSP-Elderly) (Guadalupe, 2010; Guadalupe & Vicente, 2012). The results show that the functional characteristics of the personal social network differed in a statistically significant way between subsamples of elderly people grouped by level of perceived quality of life.
It should also be noted that, beyond the significant association between depression and quality of life, whereby elderly people with a lower perceived quality of life show higher levels of depression (p<0.001), the functional characteristics of the social networks show a clear association with quality of life (p<0.005) and, for the most part, with depression (p<0.014), which is not the case for the structural and relational-contextual characteristics. Other results indicate that individuals with different levels of perceived quality of life have an identical personal social network structure. Regarding the association between the functional variables of the personal social network, quality of life and depression, the analytic model yields precise indicators for research and intervention, which demonstrates the need to continue and deepen the present study with a wider and more heterogeneous sample.
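As a hedged illustration of the kind of group comparison and association analysis this abstract describes, the sketch below applies standard nonparametric tests; the data file and the column names (qol_group, perceived_support, gds_score, qol_total) are hypothetical placeholders, not the study's actual variables.

```python
# Sketch of the association tests described above; the dataframe columns
# are hypothetical placeholders, not the study's actual variables.
import pandas as pd
from scipy import stats

df = pd.read_csv("elderly_networks.csv")  # hypothetical data file

# Compare a functional network characteristic across perceived QoL groups
# (Kruskal-Wallis: a nonparametric one-way test across more than 2 groups).
groups = [g["perceived_support"].values for _, g in df.groupby("qol_group")]
h_stat, p_value = stats.kruskal(*groups)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.4f}")

# Association between depression (GDS) and overall quality of life
rho, p_rho = stats.spearmanr(df["gds_score"], df["qol_total"])
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f}")
```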
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which corresponds to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the expressiveness of the model is insufficient to capture all the structures present in the data. For some probabilistic models, model complexity translates into the introduction of one or more hidden variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of hidden variables of a model. This thesis focuses on Bayesian nonparametric methods for determining the number of hidden variables to use as well as their dimensionality. The popularisation of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal comes from the fact that they offer highly flexible models whose complexity adjusts in proportion to the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the hidden components of the model, which we evaluate on two concrete robotics applications. Our results show that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of hidden variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process, for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the order of the variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for jointly learning graphs and orders. Evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
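As a small illustration of the Pitman-Yor process mentioned above, the sketch below draws truncated mixture weights via the standard stick-breaking construction; it is a minimal sketch of the prior, not the thesis's inference algorithm.

```python
# Truncated stick-breaking draw from a Pitman-Yor process prior;
# a minimal sketch of the prior, not the thesis's inference algorithm.
import numpy as np

def pitman_yor_weights(d, theta, k_max, rng):
    """Mixture weights from PY(d, theta), truncated at k_max components.
    Requires 0 <= d < 1 and theta > -d."""
    # beta_k ~ Beta(1 - d, theta + k*d), k = 1..k_max
    betas = rng.beta(1.0 - d, theta + d * np.arange(1, k_max + 1))
    # pi_k = beta_k * prod_{j<k} (1 - beta_j)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

rng = np.random.default_rng(0)
weights = pitman_yor_weights(d=0.5, theta=1.0, k_max=50, rng=rng)
print(weights[:5], weights.sum())  # sums to < 1 due to truncation
```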
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Requirements engineering is a key activity in the development of a software project. Like any other development activity, it is not without risks. This work is an empirical study of requirements risks using machine learning techniques, specifically Bayesian network classifiers. We have defined several models to predict the risk level of a given requirement using three datasets that collect metrics taken from the requirement specifications of different projects. The classification accuracy of the resulting Bayesian models is evaluated and compared using several classification performance measures. The results of the experiments show that Bayesian networks yield valid predictors. Specifically, a tree augmented network structure shows competitive experimental performance on all datasets. Moreover, the relations established between the variables collected to determine the level of risk in a requirement match those identified by requirements engineers. We show that Bayesian networks are valid tools for automating risk assessment in requirements engineering.
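As a hedged sketch of the evaluation workflow described above, the example below cross-validates a Bayesian classifier on requirement metrics. It uses naive Bayes, the simplest Bayesian network classifier, as a stand-in for the paper's tree augmented network, and the CSV file and column names are hypothetical.

```python
# Minimal sketch: cross-validating a Bayesian classifier on requirement
# metrics. Naive Bayes stands in for the paper's tree augmented network;
# the dataset file and "risk_level" column are hypothetical.
import pandas as pd
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

df = pd.read_csv("requirement_metrics.csv")  # hypothetical dataset
X = df.drop(columns=["risk_level"])          # integer-coded requirement metrics
y = df["risk_level"]                         # risk label to predict

clf = CategoricalNB()
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```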
Abstract:
Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider use, and driverless transport systems are already state of the art on certain legs of transportation. This has prompted the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data has been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting and the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs this way, it was argued that the activity-based LCC model is able to facilitate learning from, and continuous improvement of, the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
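As a hedged sketch of the costing approach the abstract describes, the example below combines activity-based costing with Monte Carlo simulation: annual activity costs are drawn from assumed triangular distributions and aggregated over an assumed vessel life cycle. All activity names, cost figures and the life-cycle length are illustrative, not AAWA data.

```python
# Hedged sketch of activity-based life cycle costing with Monte Carlo
# simulation; all activities, cost ranges and the 25-year horizon are
# illustrative assumptions, not figures from the case organisation.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000          # Monte Carlo draws
years = 25           # assumed vessel life cycle

# (min, mode, max) annual cost per activity, in arbitrary currency units
activities = {
    "remote_operation_centre": (0.8e6, 1.0e6, 1.5e6),
    "maintenance":             (0.5e6, 0.7e6, 1.2e6),
    "insurance":               (0.2e6, 0.3e6, 0.5e6),
}

annual = sum(rng.triangular(lo, mode, hi, size=N)
             for lo, mode, hi in activities.values())
life_cycle_cost = annual * years  # discounting omitted for brevity

print(f"mean life cycle cost: {life_cycle_cost.mean():.3e}")
print(f"5th-95th percentile: {np.percentile(life_cycle_cost, [5, 95])}")
```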
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
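The following is a minimal sketch of the dynamic-programming step on which such models rest, under the assumption of a recursive-logit-style route choice model on a tiny illustrative network: the value function at each node is the log-sum of downstream utilities, solved by fixed-point iteration, and arc choice probabilities then take a logit form. The network and utilities are made up for illustration.

```python
# Minimal sketch of the value-function computation behind dynamic
# discrete choice route models: V(k) = log sum_a exp(u(a|k) + V(a)),
# solved by fixed-point iteration on a tiny hypothetical network.
import numpy as np

# arcs[node] = list of (next_node, instantaneous utility); node 3 = destination
arcs = {0: [(1, -1.0), (2, -1.5)], 1: [(3, -1.0)], 2: [(3, -0.5), (1, -0.2)]}

V = {n: 0.0 for n in (0, 1, 2, 3)}
for _ in range(100):
    V_new = {3: 0.0}  # destination has value 0 by convention
    for k, out in arcs.items():
        V_new[k] = np.log(sum(np.exp(u + V[j]) for j, u in out))
    converged = max(abs(V_new[k] - V[k]) for k in V_new) < 1e-10
    V = V_new
    if converged:
        break

# Logit choice probability of each outgoing arc at the origin node 0
expu = np.array([np.exp(u + V[j]) for j, u in arcs[0]])
print("V:", V, "P(arcs at 0):", expu / expu.sum())
```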
Abstract:
Dissertation (Master's)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, 2016.
Abstract:
Traditionally, microbial surveys investigating the effect of chronic anthropogenic pressure, such as polyaromatic hydrocarbon (PAH) contamination, consider just the alpha and beta diversity and ignore the interactions among the different taxa forming the microbial community. Here, we investigated the ecological relationships between the three domains of life (i.e., Bacteria, Archaea, and Eukarya) using 454 pyrosequencing of the 16S rRNA and 18S rRNA genes from chronically impacted and pristine sediments along the coasts of the Mediterranean Sea (Gulf of Lion, Vermillion coast, Corsica, Bizerte lagoon and Lebanon) and the French Atlantic Ocean (Bay of Biscay and English Channel). Our approach provided a robust ecological framework for partitioning the taxa abundance distribution into 859 core operational taxonomic units (OTUs) and 6629 satellite OTUs. OTUs forming the core microbial community showed the highest sensitivity to changes in environmental and contaminant variations, with salinity, latitude, temperature, particle size distribution, total organic carbon (TOC) and PAH concentrations as the main drivers of community assembly. The core communities were dominated by Gammaproteobacteria and Deltaproteobacteria for Bacteria, by Thaumarchaeota, Bathyarchaeota and Thermoplasmata for Archaea, and by Metazoa and Dinoflagellata for Eukarya. In order to find associations among microorganisms, we generated a co-occurrence network in which PAHs were found to significantly impact the potential predator-prey relationship in one microbial consortium composed of ciliates and Actinobacteria. Comparison of network topological properties between contaminated and non-contaminated samples showed substantial differences in network structure and indicated a higher vulnerability to environmental perturbations in the contaminated sediments.
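As a hedged sketch of the co-occurrence approach described above, the example below correlates OTU abundances, keeps strong significant pairs as edges, and reports topological properties of the resulting network. The input file, correlation cutoff and p-value threshold are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch of building an OTU co-occurrence network and summarising
# its topology; file name and thresholds are illustrative assumptions.
import pandas as pd
import networkx as nx
from scipy.stats import spearmanr

# rows = samples, columns = OTUs (hypothetical abundance table)
abund = pd.read_csv("otu_table.csv", index_col=0)

rho, p = spearmanr(abund.values)  # pairwise OTU rank correlations
otus = list(abund.columns)
G = nx.Graph()
G.add_nodes_from(otus)
for i in range(len(otus)):
    for j in range(i + 1, len(otus)):
        if abs(rho[i, j]) > 0.6 and p[i, j] < 0.01:  # assumed cutoffs
            G.add_edge(otus[i], otus[j], weight=rho[i, j])

# Topological properties of the kind compared between sediment groups
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average clustering:", nx.average_clustering(G))
```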
Abstract:
With its powerful search engines and billions of published pages, the World Wide Web has become the ultimate tool to explore the human experience. But, despite the advent of the digital revolution, e-books have at their core remained remarkably similar to their printed siblings. This has resulted in a clear dichotomy between two ways of reading: on one side, the multi-dimensional world of the Web; on the other, the linearity of books and e-books. My investigation of the literature indicates that the focus of attempts to merge these two modes of production, and hence of reading, has been the insertion of interactivity into fiction. As I show in the Literature Review, a clear thrust of research since the early 1990s, and in my opinion the most significant, has concentrated on presenting the reader with choices that affect the plot. This has resulted in interactive stories in which the structure of the narrative can be altered by the reader of experimental fiction. The interest in this area of research is not surprising, as the interaction of readers with the fabric of the narrative provides fertile ground for exploring, analysing, and discussing issues of plot consistency and continuity. I found in the literature several papers concerned with the effects of hyperlinking on literature, but none about how hyperlinked material and narrative could be integrated without compromising the narrative flow as designed by the author. This led me to think that researchers had accepted hypertextuality and the linear organisation of fiction as antithetical, thereby ignoring the possibility of exploiting the first while preserving the second. All the works I consulted were focussed on exploring the possibilities provided to authors (and readers) by hypertext, or on how hypertext literature affects literary criticism. This was true in earlier works by Landow and Harpold and remained true in later works by Bolter and Grusin. To quote another example, in his book Hypertext 3.0, Landow states: “Most who have speculated on the relation between hypertextuality and fiction concentrate [...] on the effects it will have on linear narrative”, and “hypertext opens major questions about story and plot by apparently doing away with linear organization” (Landow, 2006, pp. 220, 221). In other words, the authors have added narrative elements to Web pages, effectively placing their stories in a subordinate role. By focussing on “opening up” the plots, the researchers have missed the opportunity to maintain the integrity of their stories and use hyperlinked information to provide interactive access to backstory and factual bases. This would represent a missing link between the traditional way of reading, in which the readers have no influence on the path the author has laid out for them, and interactive narrative, in which the readers choose their way across alternatives, thereby, at least to a certain extent, creating their own path. It would be, to continue the metaphor, as if the readers could follow the main path created by the author while being able to get “sidetracked” into exploring hyperlinked material. In Hypertext 3.0, Landow refers to an “Axial structure [of hypertext] characteristic of electronic books and scholarly books with foot- and endnotes” versus a “Network structure of hypertext” (Landow, 2006, p. 70). My research aims at generalising the axial structure and extending it to fiction without losing the linearity at its core.
In creative nonfiction, the introduction of places, scenes, and settings, together with characterisation, brings the facts to life without altering them; meanwhile, much fiction draws on facts to provide a foundation, or narrative elements, for the work. But how can the reader distinguish between facts and representations? For example, to what extent do dialogues and perceptions present what was actually said and thought? Some authors of creative nonfiction use endnotes to provide comments and citations while minimising disruption to the flow of the main text, but these are limited in scope and constrained in space. Each reader should be able to enjoy the narrative as if it were a novel, but also to explore the facts at the level of detail s/he needs. For this to be possible, endnotes should provide a Web-like way of exploring in more detail what the author has already researched. My research aims to develop ways of integrating narrative prose and hyperlinked documents into a Hyperbook. Its goal is to create a new writing paradigm in which a story incorporates a gateway to detailed information. While creative nonfiction uses the techniques of fictional writing to provide reportage of actual events, and fact-based fiction illuminates the affectual dimensions of what happened (e.g., Kate Grenville’s The Secret River and Hilary Mantel’s Wolf Hall), Hyperbooks go one step further and link narrative prose to the details of the events on which the narrative is based or, more generally, to information the reader might find of interest. My dissertation introduces and utilises Hyperbooks to engage in two parallel types of investigation: to build knowledge about Italian WWII POWs held in Australia and present it as part of a novella in Hyperbook format; and to develop a new piece of technology capable of extending the writing and reading process.
Abstract:
The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particles' mass. However, some aspects of this theory remain unresolved, suggesting the presence of new physics Beyond the Standard Model (BSM), with the production of new particles at an energy scale higher than the current experimental limits. The search for additional Higgs bosons is, in fact, predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons $H^{\pm}$, two neutral scalars $h$ and $H$, and one pseudoscalar $A$. The work presented in this thesis is dedicated to the search for neutral non-Standard-Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. The search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a $\text{b}\bar{\text{b}}$ quark pair. The extensive use of machine and deep learning techniques is a fundamental element in the discrimination between simulated signal and background events. A new network structure called a parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate over the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95\% confidence level expected upper limit on the production cross section times branching ratio for a generic $\phi$ boson decaying into a muon pair in the 130 to 1000 GeV range.
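The following is a minimal sketch of the pNN idea described above: the mass hypothesis is appended to the event features as an extra input, so a single network can interpolate across the 130-1000 GeV range instead of training one network per mass point. Layer sizes, feature count and scaling are illustrative assumptions, not the thesis's actual architecture.

```python
# Minimal sketch of a parameterised Neural Network (pNN): the mass
# hypothesis is concatenated to the event features so one network covers
# the whole mass range. Sizes and scaling are illustrative assumptions.
import torch
import torch.nn as nn

class ParameterisedNN(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + 1, 64),  # +1 input for the mass parameter
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),  # signal vs. background probability
        )

    def forward(self, x, mass):
        # mass: (batch, 1) tensor holding the scaled hypothesis value
        return self.net(torch.cat([x, mass], dim=1))

model = ParameterisedNN(n_features=10)
x = torch.randn(32, 10)                     # dummy event features
mass = torch.full((32, 1), 300.0) / 1000.0  # e.g. 300 GeV hypothesis, scaled
print(model(x, mass).shape)                 # -> torch.Size([32, 1])
```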
Abstract:
This work investigates the evolution of the relationship between the legal order and territory, in light of the processes of globalisation and European integration. After the Second World War, the modern State evolved into what has been defined as the Keynesian State, in which a strong public presence in the domestic economy coexisted with a low level of internationalisation of world trade. The crisis of this model is the crisis of the territorial State, which is crossed by new economic flows that reward cities and regions, causing the national dimension to lose importance. This is reinforced by the process of European integration, which from the 1980s onwards gained new vigour and began to limit public intervention in the economy, acting both on State aid and on national budgets. These dynamics produce a crisis of the unity of the State in legal-territorial terms, owing to the growing role of European institutions, on the one hand, and of cities and regions, on the other, which become new normative dimensions challenging State sovereignty. At the same time, social unity is also weakened, with growing territorial disparities at both the inter- and intra-regional level. In this context, the thesis examines how European public intervention, both through cohesion policy and through State aid rules, tends not only to reduce disparities but also to reconfigure the European territory. Indeed, thanks to its instruments, cities and regions have the possibility of transcending their administrative boundaries in order to create new forms of territorial cooperation. Against this backdrop, the thesis proposes a reflection on the possibility of a renewed principle of subsidiarity, one that takes into account the network structure of the current European territorial context, as well as the current relationship between the market and social dimensions, in order better to describe a European legal order in the material sense.
Abstract:
Software Defined Networking, along with Network Function Virtualisation, has brought an evolution in telecommunications, laying the foundations for 5G networks and their softwarisation. The separation between the data plane and the control plane, along with the decentralisation of the latter, has allowed for better scalability and reliability while reducing latency. A lot of effort has been put into creating a distributed controller, but most of the solutions provided so far take a monolithic approach that reduces the benefits of having a software defined network. Disaggregating the controller and handling it as microservices solves the problems faced when working with a monolithic approach. Microservices enable the cloud-native approach, which is essential to benefit from the architecture of the 5G Core defined by the 3GPP standards development organisation. Applying the concept of NFV allows for a softwarised version of the entire network structure. The expectation is that the 5G Core will be deployed on an orchestrated cloud infrastructure, and in this thesis work we aim to provide an application of this concept by using Kubernetes as an implementation of the MANO standard. This means Kubernetes acts as a Network Function Virtualisation Orchestrator (NFVO), Virtualised Network Function Manager (VNFM) and Virtualised Infrastructure Manager (VIM), rather than just a Network Function Virtualisation Infrastructure. While OSM has been adopted for this purpose in various scenarios, this work proposes Kubernetes, as opposed to OSM, as the MANO standard implementation.
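As a hedged illustration of Kubernetes playing the MANO roles described above, the sketch below deploys a containerised network function programmatically through the official Kubernetes Python client; the function name, container image and namespace are hypothetical, and this is a sketch of the concept rather than the thesis's actual deployment.

```python
# Hedged sketch of Kubernetes-as-MANO: a virtualised network function
# (a hypothetical AMF image) deployed via the official Python client.
# Names, image and namespace are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

container = client.V1Container(
    name="amf",
    image="example/5gc-amf:latest",  # hypothetical network function image
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="amf", labels={"nf": "amf"}),
    spec=client.V1DeploymentSpec(
        replicas=2,  # VNFM-style scaling is a one-field change
        selector=client.V1LabelSelector(match_labels={"nf": "amf"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"nf": "amf"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="core5g", body=deployment)
```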
Abstract:
A hydrophobic cuticle is deposited at the outermost extracellular matrix of the epidermis in primary tissues of terrestrial plants. Besides forming a protective shield against the environment, the cuticle is potentially involved in several developmental processes during plant growth. A high degree of variation in cuticle composition and structure exists between different plant species and tissues. Much progress has been made recently in understanding the different steps of biosynthesis, transport, and deposition of cuticular components. However, the molecular mechanisms that underlie cuticular function remain largely elusive.
Abstract:
Models incorporating more realistic models of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. When there are products that are being considered for purchase by more than one customer segment, CDLP is difficult to solve, since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments with cuts imposing consistency (SDCP+) is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that make the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration set structure. We give a numerical study showing the performance of these cycle-based cuts.
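As a hedged illustration of the structural condition discussed above, the sketch below formalises one plausible reading of "tree structure": customer segments are linked whenever their consideration sets share a product, and the resulting overlap graph is checked for cycles, which per the abstract can open a gap between CDLP and SDCP+. The segments and products are illustrative, not from the paper.

```python
# Hedged sketch: build the overlap graph of segment consideration sets
# and test whether it is cycle-free (a "tree structure" in one plausible
# reading of the abstract). Segments and products are illustrative.
import networkx as nx
from itertools import combinations

consideration_sets = {   # segment -> products it considers
    "seg1": {"A", "B"},
    "seg2": {"B", "C"},
    "seg3": {"C", "D"},
}

G = nx.Graph()
G.add_nodes_from(consideration_sets)
for s, t in combinations(consideration_sets, 2):
    if consideration_sets[s] & consideration_sets[t]:  # shared product
        G.add_edge(s, t)

print("tree structure (no cycles):", nx.is_forest(G))
```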