857 results for Complex network analysis. Time varying graph mine (TVG). Slow-wave sleep (SWS). Fault tolerance
Abstract:
Fermentation processes, as objects of modelling and high-quality control, are characterized by interdependence and time variation of process variables, which lead to non-linear models with a very complex structure. This is why conventional optimization methods cannot provide a satisfactory solution. As an alternative, genetic algorithms, as a stochastic global optimization method, can be applied to overcome these limitations. The application of genetic algorithms provides robustness and the ability to reach a global minimum, which makes them suitable and practical for parameter identification of fermentation models. Different types of genetic algorithms, namely simple, modified and multi-population ones, have been applied and compared for estimation of nonlinear dynamic model parameters of fed-batch cultivation of S. cerevisiae.
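As a minimal illustration of this kind of GA-based parameter identification, the sketch below estimates the two parameters of a simple Monod-type growth model by minimizing the squared error against synthetic measurements. The model, search ranges and data are assumptions for demonstration only; this is not the paper's fed-batch S. cerevisiae model.

```python
# Minimal sketch: real-coded genetic algorithm identifying (mu_max, Ks) of a
# Monod-type growth model from noisy synthetic data (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

def simulate(mu_max, ks, n_steps=200, dt=0.05, x0=0.1, s0=10.0, yxs=0.5):
    """Euler integration of a basic Monod growth model (batch phase only)."""
    x, s, biomass = x0, s0, []
    for _ in range(n_steps):
        mu = mu_max * s / (ks + s)            # Monod specific growth rate
        dx = mu * x * dt                      # biomass growth
        ds = -(mu * x / yxs) * dt             # substrate consumption
        x, s = x + dx, max(s + ds, 0.0)
        biomass.append(x)
    return np.array(biomass)

# Synthetic "measurements" from known parameters plus noise (stand-in data).
true_params = (0.45, 1.2)
data = simulate(*true_params) + rng.normal(0, 0.05, 200)

def fitness(individual):
    """Negative sum of squared errors between model output and data."""
    mu_max, ks = individual
    return -np.sum((simulate(mu_max, ks) - data) ** 2)

# Simple GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0.1, 0.1], [1.0, 5.0], size=(40, 2))
for gen in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    pairs = rng.integers(0, len(pop), size=(len(pop), 2))          # tournaments
    winners = np.where(scores[pairs[:, 0]] > scores[pairs[:, 1]],
                       pairs[:, 0], pairs[:, 1])
    parents = pop[winners]
    alpha = rng.uniform(size=(len(pop), 1))                        # blend crossover
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0, 0.02, children.shape)                # mutation
    pop = np.clip(children, [0.1, 0.1], [1.0, 5.0])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("estimated (mu_max, Ks):", best, "true:", true_params)
```

Multi-population variants of this scheme evolve several such populations in parallel and periodically exchange their best individuals.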
Abstract:
This paper looks at potential distribution network stability problems under the Smart Grid scenario, which considers distributed energy resources (DERs), e.g. renewable power generation and intelligent loads with power-electronically controlled converters. The background of this topic is introduced and potential problems are defined from conventional power system stability and power electronic system stability theories. Challenges are identified, together with possible solutions based on steady-state limits and on small-signal and large-signal stability indexes and criteria. Parallel computation techniques may be needed for simulation, or simplification approaches are required, for large-scale distribution network analysis.
Abstract:
In this paper we investigate the connection between quantum walks and graph symmetries. We begin by designing an experiment that allows us to analyze the behavior of quantum walks on a graph without causing the wave function to collapse. To achieve this, we base our analysis on the recently introduced quantum Jensen-Shannon divergence. In particular, we show that the quantum Jensen-Shannon divergence between the evolution of two quantum walks with suitably defined initial states is maximal when the graph presents symmetries. Hence, we assign to each pair of nodes of the graph a value of the divergence, and we average over all pairs of nodes to characterize the degree of symmetry possessed by the graph. © 2013 American Physical Society.
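A minimal sketch of the core quantity is given below: the quantum Jensen-Shannon divergence between the states of two continuous-time quantum walks on a small path graph. The choice of a single evolution time and of basis-state initial conditions is a simplification for illustration, not the paper's exact construction.

```python
# QJSD between two continuous-time quantum walks (illustrative simplification).
import numpy as np
from scipy.linalg import expm

def ctqw_state(adj, psi0, t):
    """Evolve |psi0> under U(t) = exp(-i * A * t) for adjacency matrix A."""
    return expm(-1j * adj * t) @ psi0

def von_neumann_entropy(rho):
    """S(rho) = -sum_k lambda_k log2 lambda_k over nonzero eigenvalues."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log2(eigvals)))

def qjsd(rho, sigma):
    """QJSD(rho, sigma) = S((rho+sigma)/2) - (S(rho) + S(sigma)) / 2."""
    return von_neumann_entropy((rho + sigma) / 2) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))

# Path graph on 4 nodes; compare walks started at the two end nodes (a symmetric pair).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

t = 2.0
psi_u = np.zeros(4, dtype=complex); psi_u[0] = 1.0   # walker started at node 0
psi_v = np.zeros(4, dtype=complex); psi_v[3] = 1.0   # walker started at node 3

rho_u = np.outer(ctqw_state(A, psi_u, t), ctqw_state(A, psi_u, t).conj())
rho_v = np.outer(ctqw_state(A, psi_v, t), ctqw_state(A, psi_v, t).conj())

print("QJSD between the two walks:", qjsd(rho_u, rho_v))
```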
Abstract:
Network analysis has emerged as a key technique in communication studies, economics, geography, history and sociology, among others. A fundamental issue is how to identify key nodes in a network, for which purpose a number of centrality measures have been developed. This paper proposes a new parametric family of centrality measures called generalized degree. It is based on the idea that a relationship to a more interconnected node contributes to centrality to a greater extent than a connection to a less central one. Generalized degree improves on degree by redistributing its sum over the network with consideration of the global structure. Application of the measure is supported by a set of basic properties. A sufficient condition is given for generalized degree to be rank monotonic, excluding counter-intuitive changes in the centrality ranking after certain modifications of the network. The measure has a graph interpretation and can be calculated iteratively. Generalized degree is recommended for use alongside degree, since it preserves most of the favorable attributes of degree but better reflects the role of the nodes in the network and has an increased ability to distinguish between their importance.
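The abstract does not give the measure's formula, so the short sketch below only illustrates the underlying intuition with a well-known stand-in, eigenvector centrality, compared against plain degree on a toy graph. It is not the paper's generalized degree measure.

```python
# Intuition only: a tie to a more central node should count for more than a tie to a
# peripheral one. Eigenvector centrality (a stand-in, NOT the paper's generalized
# degree) captures this idea; compare it with plain degree on a small graph.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (4, 5), (4, 6), (4, 7)])

degree = dict(G.degree())
eigen = nx.eigenvector_centrality(G, max_iter=1000)

for node in sorted(G.nodes):
    print(f"node {node}: degree={degree[node]}, eigenvector={eigen[node]:.3f}")
```

Nodes with equal degree can receive different eigenvector scores depending on how central their neighbors are, which is the same distinction generalized degree is designed to capture while staying close to degree.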
Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation. In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods considered the influence of correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space. In other words, model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations farther away. The study area was Broward County, Florida, which lies on the Atlantic coast between Palm Beach and Miami-Dade counties. A total of 67 variables were considered as potential AADT predictors, and six variables (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models. To investigate the predictive power of the various AADT predictors over space, statistics including local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in relationships among parameters were investigated, measured, and mapped to assess the usefulness of GWR methods. The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and predictors, which cannot be modeled in ordinary linear regression.
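A minimal sketch of the GWR fitting step is shown below: at each observation location the local coefficients are obtained by weighted least squares, beta(u_i) = (X' W_i X)^(-1) X' W_i y, with a Gaussian distance kernel. The coordinates, predictors and bandwidth are synthetic assumptions, not the Broward County data.

```python
# Minimal GWR sketch: a locally weighted least-squares fit at every observation point,
# with a Gaussian distance kernel. All data below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))                 # point locations (x, y)
X = np.column_stack([np.ones(n),                          # intercept
                     rng.integers(2, 7, n),               # e.g. number of lanes
                     rng.uniform(25, 65, n)])             # e.g. posted speed
# Synthetic AADT with a spatially varying lane coefficient.
beta_lanes = 2000 + 300 * coords[:, 0]
y = 5000 + beta_lanes * X[:, 1] + 50 * X[:, 2] + rng.normal(0, 500, n)

def gwr_coefficients(coords, X, y, bandwidth=2.0):
    """Fit beta(u_i) = (X' W_i X)^-1 X' W_i y at every observation location."""
    betas = np.empty((len(y), X.shape[1]))
    for i, u in enumerate(coords):
        d = np.linalg.norm(coords - u, axis=1)
        w = np.exp(-(d / bandwidth) ** 2)                 # Gaussian kernel weights
        XtW = X.T * w                                     # == X.T @ diag(w)
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)
    return betas

betas = gwr_coefficients(coords, X, y)
print("local lane coefficients range from",
      betas[:, 1].min().round(1), "to", betas[:, 1].max().round(1))
```

Mapping the per-location coefficients and local fit statistics is what reveals the spatial non-stationarity that a single global regression cannot show.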
Abstract:
This dissertation deals with the nature of the political system in sixteenth-century colonial Spanish America through an analysis of the administration of Viceroy Fernando de Torres y Portugal, Conde del Villar, in Peru (1585–1590). The political conflicts surrounding his government and the accusations of bribery leveled against him and members of his household provide the documentation for a case study of a system in which prestige and authority were defined through a complex network of patronage and personal relationships with the Spanish monarch, the ultimate source of legitimate power. This dissertation is conceptualized using categories presented in Max Weber's theory on the nature of political order and authority in the history of human societies, and the definition of the patrimonial system as one in which the power of the king confers legitimacy and authority on the whole political structure. The documentary base for this dissertation is an exceptionally detailed and complete record related to the official administrative review (visita) ordered by Philip II in 1588 to assess the government of Viceroy Torres y Portugal. Additionally, letters as well as other primary and secondary sources are scattered in repositories on both sides of the Atlantic. The study of this particular case offers an excellent opportunity to gain an understanding of a political order in which jurisdictional boundaries between institutions and authorities were not clearly defined. The legal system operating in the viceroyalty was subordinated to the personal decisions of the king, and order and equilibrium were maintained through the interaction of patronage networks that were reproduced at all levels of colonial society. The final charges against Viceroy Conde del Villar, as well as their impact on the political careers of those involved in the accusations, reveal that situations today understood to constitute bribery had a different meaning in the context of a patrimonial order.
Abstract:
Next generation networks are characterized by ever increasing complexity, intelligence, heterogeneous technologies and increasing user expectations. Telecommunication networks in particular have become truly global, consisting of a variety of national and regional networks, both wired and wireless. Consequently, the management of telecommunication networks is becoming increasingly complex. In addition, network security and reliability requirements impose additional overheads, which increase the size of the data records. This in turn causes acute network traffic congestion. No single network management methodology can control the various requirements of today's networks while providing a good level of Quality of Service (QoS) and network security. Therefore, an integrated approach is needed in which a combination of methodologies can provide solutions and answers to network events (which cause severe congestion and compromise quality of service and security). The proposed solution takes a systematic approach to designing a network management system based upon recent advances in mobile agent technologies. This solution has provided a new traffic management system for telecommunication networks that is capable of (1) reducing the network traffic load (thus reducing traffic congestion), (2) overcoming existing network latency, (3) adapting dynamically to the traffic load of the system, (4) operating in heterogeneous environments with improved security, and (5) exhibiting robust and fault-tolerant behavior. This solution has addressed several key challenges in the development of network management for telecommunication networks using mobile agents. We have designed several types of agents whose interactions allow complex management actions to be performed and integrated. Our solution is decentralized to eliminate excessive bandwidth usage, and at the same time it extends the capabilities of the Simple Network Management Protocol (SNMP). Our solution is fully compatible with the existing standards.
Abstract:
In recent years, the internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability. Yet for any end-to-end internet connection, maintaining the connection's throughput and reliability at a certain level is very important, because they directly affect the connection's normal operation. Therefore, a challenging research task is to improve a network's connection performance by optimizing its throughput and reliability. This dissertation proposed an efficient and reliable transport layer protocol, called concurrent TCP (cTCP), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end connection fault tolerance. The proposed cTCP protocol could aggregate multiple paths' bandwidth by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer was defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (round trip time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily included an RTT-based load distribution and path management scheme, which was used to optimize connections' throughput and reliability. A congestion control and retransmission policy based on RTT was also provided. According to the experimental results, this RTT-based CDT mechanism achieved good CDT performance under different network conditions. Finally, a CWND-based CDT mechanism, which uses a path's CWND (congestion window) to optimize CDT performance, was introduced. This mechanism primarily included: a CWND-based load allocation scheme, which assigned data to paths based on their CWND to achieve aggregate bandwidth; CWND-based path management, which was used to optimize connections' fault tolerance; and a congestion control and retransmission management policy, which was similar to regular TCP in its separate path handling. According to the corresponding experimental results, this mechanism achieved near-optimal CDT performance under different network conditions.
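A simplified sketch of the two load-distribution ideas is given below: a block of data is split across paths in proportion to each path's CWND, or in proportion to the inverse of its RTT. The path parameters are illustrative assumptions; this is not the dissertation's cTCP implementation.

```python
# Toy illustration of CWND-based vs. RTT-based load allocation across two paths.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    rtt_ms: float    # smoothed round-trip time
    cwnd: int        # congestion window, in segments

def allocate(paths, total_bytes, mode="cwnd"):
    """Return bytes assigned to each path under CWND- or RTT-based weighting."""
    if mode == "cwnd":
        weights = [p.cwnd for p in paths]
    else:  # "rtt": faster paths (smaller RTT) receive proportionally more data
        weights = [1.0 / p.rtt_ms for p in paths]
    total_w = sum(weights)
    return {p.name: int(total_bytes * w / total_w) for p, w in zip(paths, weights)}

paths = [Path("wifi", rtt_ms=20.0, cwnd=40), Path("lte", rtt_ms=60.0, cwnd=10)]
print("CWND-based split:", allocate(paths, 1_000_000, mode="cwnd"))
print("RTT-based split: ", allocate(paths, 1_000_000, mode="rtt"))
```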
Abstract:
Recent research has indicated that the pupil diameter (PD) in humans varies with their affective states. However, this signal has not been fully investigated for affective sensing purposes in human-computer interaction systems. This may be due to the dominant separate effect of the pupillary light reflex (PLR), which shrinks the pupil when light intensity increases. In this dissertation, an adaptive interference canceller (AIC) system using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR on the measured pupil diameter signal. The modified pupil diameter (MPD) signal, obtained from the AIC, was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulations of the AIC output resulted in a processed MPD (PMPD) signal, from which a classification feature, PMPDmean, was extracted. This feature was used to train and test a support vector machine (SVM) for the identification of stress states in the subject from whom the pupil diameter signal was recorded, achieving an accuracy rate of 77.78%. The advantages of affective recognition through the PD signal were verified by comparatively investigating the classification of stress and relaxation states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. The discriminating potential of each individual feature extracted from GSR, BVP and PD was studied by analysis of its receiver operating characteristic (ROC) curve. The ROC curve found for the PMPDmean feature encompassed the largest area (0.8546) of all the single-feature ROCs investigated. The encouraging results seen in affective sensing based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments. Therefore, these results confirmed the benefits of using the AIC implementation with the HITV adaptive algorithm to isolate the PAR and the potential of using PD monitoring to sense the evolving affective states of a computer user.
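The sketch below illustrates the adaptive-interference-cancellation idea: the illumination signal is used as a reference input to subtract the light-driven (PLR) component from the measured pupil diameter, leaving a residual that tracks the slower affective component. A normalized LMS filter stands in for the H∞ time-varying (HITV) algorithm used in the dissertation, and all signals are synthetic.

```python
# Adaptive interference canceller sketch (NLMS stand-in for the HITV algorithm).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.arange(n) / 100.0

illumination = (np.sin(0.3 * t) > 0).astype(float)                      # reference
plr = -0.8 * np.convolve(illumination, np.ones(50) / 50, mode="same")   # light reflex
affective = 0.3 * np.sin(0.05 * t)                                      # slow component
pd_measured = 4.0 + plr + affective + rng.normal(0, 0.02, n)            # primary input

def nlms_canceller(primary, reference, order=64, mu=0.5, eps=1e-6):
    """Adaptive canceller: output = primary - adaptively filtered reference."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for k in range(order, len(primary)):
        x = reference[k - order:k][::-1]          # reference tap vector
        est = w @ x                               # estimate of the PLR component
        e = primary[k] - est                      # error = cleaned (MPD-like) signal
        w += mu * e * x / (eps + x @ x)           # NLMS weight update
        out[k] = e
    return out

mpd_like = nlms_canceller(pd_measured, illumination)
print("correlation of cleaned signal with affective component:",
      np.corrcoef(mpd_like[200:], affective[200:])[0, 1].round(3))
```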
Abstract:
The need for efficient, sustainable, and planned utilization of resources is ever more critical. In the U.S. alone, buildings consume 34.8 quadrillion (10^15) BTU of energy annually at a cost of $1.4 trillion; 58% of this energy is used for heating and air conditioning. Several building energy analysis tools have been developed to assess energy demands and lifecycle energy costs in buildings. Such analyses are also essential for an efficient HVAC design that overcomes the pitfalls of an under- or over-designed system. DOE-2 is among the most widely known full building energy analysis models. It also constitutes the simulation engine of other prominent software such as eQUEST, EnergyPro, and PowerDOE. Therefore, it is essential that DOE-2 energy simulations be characterized by high accuracy. Infiltration is an uncontrolled process through which outside air leaks into a building. Studies have estimated infiltration to account for up to 50% of a building's energy demand. Considered alongside the annual cost of building energy consumption, this reveals the cost of air infiltration and stresses the need for prominent building energy simulation engines to account accurately for its impact. In this research, the relative accuracy of current air infiltration calculation methods is evaluated against an intricate multiphysics hygrothermal CFD building envelope analysis. The full-scale CFD analysis is based on a meticulous representation of cracking in building envelopes and on real-life conditions. The research found that even the most advanced current infiltration methods, including those in DOE-2, show up to 96.13% relative error versus the CFD analysis. An Enhanced Model for Combined Heat and Air Infiltration Simulation was developed. The model resulted in a 91.6% improvement in relative accuracy over current models. It reduces error versus the CFD analysis to less than 4.5% while requiring less than 1% of the time required for such a complex hygrothermal analysis. The algorithm used in our model was demonstrated to be easy to integrate into DOE-2 and other engines as a standalone method for evaluating infiltration heat loads. This will vastly increase the accuracy of such simulation engines while maintaining the speed and ease of use that make them very widely used in building design.
Abstract:
The purpose of this mixed methods study was to understand physics Learning Assistants' (LAs) views on reflective teaching, expertise in teaching, and LA program teaching experience, and to determine whether those views predicted the level of reflection evident in their writing. Interviews were conducted in Phase One, Q methodology was used in Phase Two, and the level of reflection in participants' writing was assessed in Phase Three using a rubric based on Hatton and Smith's (1995) "Criteria for the Recognition of Evidence for Different Types of Reflective Writing." Interview analysis revealed varying perspectives on content knowledge, pedagogical knowledge, and experience in relation to expertise in teaching. Participants revealed that they engaged in reflection on their teaching, believed reflection helps teachers improve, and found peer reflection beneficial. Participants believed teaching experience in the LA program provided preparation for teaching, but that more preparation was needed to teach. Three typologies emerged in Phase Two. Type One LAs found participation in the LA program rewarding and believed expertise in teaching does not require expertise in content or pedagogy but develops over time from reflection. Type Two LAs valued reflection but not writing reflections; they felt the LA program teaching experience helped them decide on non-teaching careers and helped them confront gaps in their physics knowledge. Type Three LAs valued reflection, believed expertise in content and pedagogy are necessary for expert teaching, and felt LA program teaching experience increased their likelihood of becoming teachers but did not prepare them for teaching. Writing assignments submitted in Phase Three were categorized as 19% descriptive writing, 60% descriptive reflections, and 21% dialogic reflections. No assignments were categorized as critical reflection. Using ordinal logistic regression, the typologies that emerged in Phase Two were not found to be predictors of the level of reflection evident in the writing assignments. In conclusion, the viewpoints of physics LAs were revealed, typologies among them were discovered, and their writing gave evidence of their ability to reflect on teaching. These findings may benefit faculty and staff in the LA program by helping them better understand the views of physics LAs and how to assess their various forms of reflection.
Abstract:
In recent years, a surprising new phenomenon has emerged in which globally-distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups are comprised of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive. The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as 1) closure or connectedness within the group, 2) bridging ties which extend outside of the group, and 3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated which suggest that social network structures associated with successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality. The research setting was the SourceForge hosting organization and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume. Contrary to theory-based expectations, the surprising results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process which disrupts the team paradigm and which triggers the need for building new theories of collaborative development. These new theories could point towards the broader application of open source methods for the creation of knowledge-based products other than software.
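The short sketch below shows one generic way to operationalize the three structural measures named above (within-group closure, bridging ties, and leader centrality) on a toy conversational network using networkx. These are illustrative proxies, not the study's exact SourceForge measures or variables.

```python
# Toy operationalization of closure, bridging and leader centrality for one project.
import networkx as nx

G = nx.Graph()
members = ["m1", "m2", "m3", "m4", "m5"]                 # project members
G.add_edges_from([("m1", "m2"), ("m1", "m3"), ("m1", "m4"), ("m1", "m5"),
                  ("m2", "m3"),                          # internal conversational ties
                  ("m2", "x1"), ("m4", "x2")])           # ties reaching outside the group

internal = G.subgraph(members)

closure = nx.density(internal)                           # proxy for within-group closure
bridging = sum(1 for u, v in G.edges
               if (u in members) != (v in members))      # ties crossing the boundary
leader = max(members, key=lambda n: internal.degree(n))  # most connected member
leader_centrality = nx.degree_centrality(internal)[leader]

print(f"closure (density): {closure:.2f}")
print(f"bridging ties: {bridging}")
print(f"leader {leader} centrality: {leader_centrality:.2f}")
```

In the study's design, measures of this kind become the independent variables in regressions against output (commits, releases) and activity (downloads, page views).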
Abstract:
Plant metabolism consists of a complex network of physical and chemical events resulting in photosynthesis, respiration, and the synthesis and degradation of organic compounds. This is only possible due to the different kinds of responses to environmental variation that plants have been subject to through evolution, which also allowed them to conquer new surroundings. The glyoxylate cycle is a metabolic pathway found in plant glyoxysomes that has a unique role in seedling establishment. Considered a variation of the citric acid cycle, it uses an acetyl coenzyme A molecule, derived from lipid beta-oxidation, to synthesize compounds used in carbohydrate synthesis. The malate synthase (MLS) and isocitrate lyase (ICL) enzymes of this cycle are unique and essential in regulating the biosynthesis of carbohydrates. Because the pathway lacks decarboxylation steps as rate-limiting steps, detailed studies of the molecular phylogeny and evolution of these proteins enable the elucidation of the effects of the pathway's presence on the evolutionary processes involved in its distribution across the genomes of different plant species. Therefore, the aim of this study was to establish a relationship between the molecular evolution of the characteristics of the glyoxylate cycle enzymes (isocitrate lyase and malate synthase) and their molecular phylogeny among green plants (Viridiplantae). For this, amino acid and nucleotide sequences were used, obtained from online repositories such as UniProt and GenBank. Sequences were aligned and then subjected to an analysis of the best-fit substitution models. The phylogeny was reconstructed by distance methods (neighbor-joining) and discrete methods (maximum likelihood, maximum parsimony and Bayesian analysis). The identification of structural patterns in the evolution of the enzymes was made through homology modeling and structure prediction from protein sequences. Based on comparative analyses of the in silico models and on the results of the phylogenetic inferences, both enzymes show significant structural conservation, and their topologies agree with processes of selection and specialization of the genes, confirming the relevance of new studies to elucidate plant metabolism from an evolutionary perspective.
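As a minimal sketch of the distance-based step of such a workflow, the Biopython example below builds a neighbor-joining tree from an already-aligned set of sequences. The file name mls_aligned.fasta is a hypothetical placeholder for aligned malate synthase sequences retrieved from UniProt/GenBank; the maximum likelihood, parsimony and Bayesian analyses would be run in dedicated tools.

```python
# Neighbor-joining tree from a pre-computed alignment (input file is hypothetical).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("mls_aligned.fasta", "fasta")   # pre-aligned MLS sequences

calculator = DistanceCalculator("identity")              # simple identity distance
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)                # neighbor-joining tree

Phylo.draw_ascii(nj_tree)                                # quick text rendering
```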
Abstract:
The health paradigm consolidated in the last century directed the training of health professionals, who were educated under the aegis of the Flexnerian model: a fragmentary, hospital-centered form of training. However, this model proved insufficient to meet the demands of the Unified Health System (SUS) and of the population. In this sense, the National Curriculum Guidelines (DCN) for undergraduate health courses emerged as a normative framework proposing a new professional profile, as well as recommending strategies for the restructuring of curricula and teaching practices, one of which is teaching-service integration. Therefore, the aim of this study was to investigate the training process of Physiotherapy students at the Federal University of Paraíba (UFPB), with teaching-service integration as the guiding principle and the DCN as the reference. The chosen method was a case study with a qualitative approach. The sample was intentional, including all faculty members of the permanent staff of the Department of Physiotherapy at UFPB who were linked to curriculum components whose practice scenarios take place in the SUS network and who had more than one year of experience in that component. The data collection technique was the semi-structured interview, and data analysis was performed using the content analysis technique. The following categories were considered: professional training for the SUS; integration of students into SUS network services; the relationship between theory and practice in the training of physiotherapists; the partnership between teachers and health professionals in the teaching-learning process; and training reorientation programs and their integration with the course. The results identified strengths of the teaching-service integration: recognition of the importance of integration activities between the university and health services based on the insertion of students into the network, joint work with health service professionals, and the opportunity to work in a multidisciplinary team; the existence of a structured and organized School Network; and the participation of students and teachers in government programs that offer the experience of insertion into the labor market. The following weaknesses stood out: difficulties in the agreement, planning and evaluation of activities by the services; a gap between theoretical and practical activities; a lack of definition of the roles of teachers and health service professionals in the training process; and the fragile relationship between professional training reorientation programs and the curricular activities of the course. Teaching-service integration, as a guiding principle in the analysis of the training of physiotherapists, reveals limits and possibilities for training that meets the health needs of the population. Thus, the choices of educational institutions regarding the care model influence health practices, and the commitment of management and services, together with openness to social control bodies, contributes decisively to improving the training of future professionals. The commitment of all involved is therefore indispensable for effective change in the health training paradigm.