943 results for: Complex systems prediction
Abstract:
Agricultural and forest productive diversification depends on multiple socioeconomic drivers (such as knowledge, migration, productive capacity, and markets) that shape productive strategies and influence their ecological impacts. Our comparison of indigenous peoples and settlers allows a better understanding of how societies develop different diversification strategies in similar ecological contexts, and of how the related socioeconomic aspects of diversification are associated with land cover change. Our results suggest that although indigenous people cause less deforestation and diversify more, diversification is not a direct driver of deforestation reduction. A multidimensional approach linking sociocognitive, economic, and ecological patterns of diversification helps explain this apparent contradiction.
Abstract:
Modeling of future water systems at the regional scale is a difficult task due to the complexity of current structures (multiple competing water uses, multiple actors, formal and informal rules), both temporally and spatially. Representing this complexity in the modeling process is a challenge that can be addressed by an interdisciplinary and holistic approach. The assessment of the water system of the Crans-Montana-Sierre area (Switzerland) and its evolution until 2050 was tackled by combining glaciological, hydrogeological, and hydrological measurements and modeling with the evaluation of water use through documentary, statistical, and interview-based analyses. Four visions of future regional development were co-produced with a group of stakeholders and were then used as a basis for estimating future water demand. The comparison of the available water resource and the water demand at the monthly time scale allowed us to conclude that, for the four scenarios, socioeconomic factors will impact the future water systems more than climatic factors. An analysis of the sustainability of the current and future water systems based on the four visions of regional development allowed us to identify the scenarios that will be more sustainable and that should be adopted by decision-makers. The results were then presented to the stakeholders through five key messages. The challenges of communicating results in this way with stakeholders are discussed at the end of the article.
Abstract:
Long Term Evolution (LTE) represents the fourth generation (4G) technology, which is capable of providing high data rates as well as support for high-speed mobility. The EU FP7 Mobile Cloud Networking (MCN) project integrates cloud computing concepts into LTE mobile networks in order to increase LTE's performance. In this way a shared, distributed, virtualized LTE mobile network is built that can optimize the utilization of virtualized computing, storage, and network resources and minimize communication delays. Two important features that can be used in such a virtualized system to improve its performance are user mobility prediction and bandwidth prediction. This paper introduces the architecture and challenges associated with user mobility and bandwidth prediction approaches in virtualized LTE systems.
Abstract:
The telecommunication industry has recently benefited from infrastructure sharing, one of the most fundamental enablers of cloud computing, leading to the emergence of the Mobile Virtual Network Operator (MVNO) concept. The main aims of this approach are support for on-demand provisioning and elasticity of virtualized mobile network components, based on data traffic load. To realize this, during operation and management procedures the virtualized services need to be triggered in order to scale up/down or scale out/in an instance. In this paper we propose an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service), comprising two algorithms that predict user mobility and network link bandwidth availability, which can be implemented in a cloud-based mobile network structure and used as a support service by any other virtualized mobile network service. MOBaaS can provide prediction information to generate the triggers required for on-demand deployment, provisioning, and disposal of virtualized network components. This information can also be used for self-adaptation procedures and optimal network function configuration during run-time operation. Through preliminary experiments with a prototype implementation on the OpenStack platform, we evaluated and confirmed the feasibility and effectiveness of the prediction algorithms and the proposed architecture.
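The abstract does not give MOBaaS's mobility prediction algorithm; as a hypothetical illustration of the general idea, a minimal sketch of one common approach is a first-order Markov model that learns cell-to-cell transition frequencies from a user's movement trace and predicts the most likely next cell (the function names and the toy trace are assumptions, not the paper's design):

```python
from collections import Counter, defaultdict

def train_transitions(trace):
    """Count cell-to-cell transitions observed in a sequence of visited cells."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, current):
    """Predict the most frequent successor of the current cell (None if unseen)."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# Toy trace of visited network cells for one user
trace = ["A", "B", "C", "A", "B", "C", "A", "B"]
model = train_transitions(trace)
print(predict_next(model, "B"))  # "C": the only observed successor of "B"
```

A prediction service built this way could emit scaling triggers when many users are predicted to move toward the same cell; the real MOBaaS algorithms are more elaborate.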
Abstract:
This paper considers ocean fisheries as complex adaptive systems and addresses the question of how human institutions might be best matched to their structure and function. Ocean ecosystems operate at multiple scales, but the management of fisheries tends to be aimed at a single species considered at a single broad scale. The paper argues that this mismatch of ecological and management scale makes it difficult to address the fine-scale aspects of ocean ecosystems, and leads to fishing rights and strategies that tend to erode the underlying structure of populations and the system itself. A successful transition to ecosystem-based management will require institutions better able to economize on the acquisition of feedback about the impact of human activities. This is likely to be achieved by multiscale institutions whose organization mirrors the spatial organization of the ecosystem and whose communications occur through a polycentric network. Better feedback will allow the exploration of fine-scale science and the employment of fine-scale fishing restraints, better adapted to the behavior of fish and habitat. The scale and scope of individual fishing rights also need to be congruent with the spatial structure of the ecosystem. Place-based rights can be expected to create a longer private planning horizon as well as stronger incentives for the private and public acquisition of system-relevant knowledge.
Abstract:
It has been hypothesized that results from short-term bioassays will ultimately provide information useful for human health hazard assessment. Although toxicologic test systems have become increasingly refined, to date no investigator has been able to provide qualitative or quantitative methods that would support the use of short-term tests in this capacity. Historically, the validity of the short-term tests has been assessed within the framework of epidemiologic/medical screens. In this context, the results of the carcinogen (long-term) bioassay are generally used as the standard. However, this approach is widely recognized as biased and, because it employs qualitative data, cannot be used in the setting of priorities. In contrast, the goal of this research was to address the problem of evaluating the utility of short-term tests for hazard assessment using an alternative method of investigation. Chemical carcinogens were selected from the list of carcinogens published by the International Agency for Research on Cancer (IARC). Tumorigenicity and mutagenicity data on fifty-two chemicals were obtained from the Registry of Toxic Effects of Chemical Substances (RTECS) and were analyzed using a relative potency approach. The relative potency framework allows for the standardization of data "relative" to a reference compound. To avoid any bias associated with the choice of the reference compound, fourteen different compounds were used. The data were evaluated in a format that allowed a comparison of the ranking of the mutagenic relative potencies of the compounds (as estimated using short-term data) vs. the ranking of the tumorigenic relative potencies (as estimated from the chronic bioassays). The results were statistically significant (p < .05) for data standardized to thirteen of the fourteen reference compounds.
Although this was a preliminary investigation, it offers evidence that the short-term test systems may be of utility in ranking the hazards represented by chemicals that may be human carcinogens.
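The abstract's relative potency comparison amounts to standardizing each potency to a reference compound and then correlating the two rank orders. A minimal sketch under assumed toy data (the potency values and the choice of Spearman's rank correlation are illustrative assumptions, not the study's actual data or statistic):

```python
def ranks(values):
    """Rank values (1 = smallest); ties are not handled, for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical potencies, each standardized to a reference compound
mutagenic   = [p / 2.0 for p in [4.0, 1.0, 8.0, 2.0]]    # short-term assay, ref potency 2.0
tumorigenic = [p / 5.0 for p in [10.0, 3.0, 20.0, 6.0]]  # chronic bioassay, ref potency 5.0
print(spearman_rho(mutagenic, tumorigenic))  # 1.0: identical rank order
```

With real data, a significant positive correlation (as reported for thirteen of the fourteen references) would indicate that the short-term ranking tracks the chronic-bioassay ranking.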
Abstract:
Mechanisms that allow pathogens to colonize the host are not the product of isolated genes, but instead emerge from the concerted operation of regulatory networks. Therefore, identifying the components and systemic behavior of networks is necessary for a better understanding of gene regulation and pathogenesis. To this end, I have developed systems biology approaches to study transcriptional and post-transcriptional gene regulation in bacteria, with an emphasis on the human pathogen Mycobacterium tuberculosis (Mtb). First, I developed a network response method to identify parts of the Mtb global transcriptional regulatory network utilized by the pathogen to counteract phagosomal stresses and survive within resting macrophages. The method unveiled transcriptional regulators and associated regulons utilized by Mtb to establish a successful infection of macrophages throughout the first 14 days of infection. Additionally, this network-based analysis identified the production of Fe-S proteins coupled to lipid metabolism through the alkane hydroxylase complex as a possible strategy employed by Mtb to survive in the host. Second, I developed a network inference method to infer the small non-coding RNA (sRNA) regulatory network in Mtb. The method identifies sRNA-mRNA interactions by integrating a priori knowledge of possible binding sites with structure-driven identification of binding sites. The reconstructed network was useful for predicting functional roles for the multitude of sRNAs recently discovered in the pathogen, as several sRNAs were postulated to be involved in virulence-related processes. Finally, I applied a combined experimental and computational approach to study post-transcriptional repression mediated by small non-coding RNAs in bacteria. Specifically, a probabilistic ranking methodology termed rank-conciliation was developed to infer sRNA-mRNA interactions based on multiple types of data.
The method was shown to improve target prediction in Escherichia coli, and is therefore useful for prioritizing candidate targets for experimental validation.
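The rank-conciliation method itself is probabilistic and not specified in the abstract; as a hypothetical illustration of the underlying idea of combining rankings from multiple data types, a minimal sketch using mean-rank aggregation (the target names and evidence sources are invented for illustration):

```python
def conciliate_ranks(rankings):
    """Combine several rankings of the same candidates by mean rank position."""
    candidates = rankings[0]
    mean_rank = {c: sum(r.index(c) for r in rankings) / len(rankings)
                 for c in candidates}
    return sorted(candidates, key=lambda c: mean_rank[c])

# Hypothetical candidate mRNA targets of one sRNA, ranked by three evidence types
expr      = ["t1", "t3", "t2", "t4"]  # expression correlation
seq       = ["t3", "t1", "t2", "t4"]  # sequence complementarity
structure = ["t1", "t2", "t3", "t4"]  # structural accessibility
print(conciliate_ranks([expr, seq, structure]))  # ['t1', 't3', 't2', 't4']
```

Targets near the top of the conciliated list would be the first candidates to take to experimental validation.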
Abstract:
The selection of metrics for ecosystem restoration programs is critical for improving the quality of monitoring programs and characterizing project success. Moreover, it is often difficult to balance the importance of multiple ecological, social, and economic metrics. The metric selection process is complex and must simultaneously take into account monitoring data, environmental models, socio-economic considerations, and stakeholder interests. We propose multicriteria decision analysis (MCDA) methods, broadly defined, for the selection of optimal sets of metrics to enhance the evaluation of ecosystem restoration alternatives. Two MCDA methods, multiattribute utility analysis (MAUT) and probabilistic multicriteria acceptability analysis (ProMAA), are applied and compared for a hypothetical case study of a river restoration involving multiple stakeholders. Overall, the MCDA results in a systematic, unbiased, and transparent solution, informing the evaluation of restoration alternatives. The two methods provide comparable results in terms of selected metrics. However, because ProMAA can consider probability distributions for the weights and utility values of metrics for each criterion, it is suggested as the best option if data uncertainty is high. Despite the increased complexity of the metric selection process, MCDA improves upon the current ad hoc decision practice based on consultations with stakeholders and experts, and encourages transparent and quantitative aggregation of data and judgement, increasing the transparency of decision making in restoration projects. We believe that MCDA can enhance the overall sustainability of ecosystems by addressing both ecological and societal needs.
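The MAUT method mentioned above reduces, in its simplest additive form, to a weighted sum of per-criterion utilities. A minimal sketch with invented weights, alternatives, and utility values (the criteria names and numbers are assumptions for illustration, not the paper's case study):

```python
def maut_score(utilities, weights):
    """Additive multiattribute utility: weighted sum of per-criterion utilities."""
    return sum(w * u for w, u in zip(weights, utilities))

# Hypothetical metric sets scored on three criteria (ecological, social, cost);
# utilities are normalized to [0, 1] and the weights sum to 1.
weights = [0.5, 0.3, 0.2]
alternatives = {
    "metric_set_A": [0.8, 0.6, 0.4],
    "metric_set_B": [0.6, 0.9, 0.7],
}
best = max(alternatives, key=lambda a: maut_score(alternatives[a], weights))
print(best)  # metric_set_B (0.71 vs 0.66)
```

ProMAA, by contrast, would replace the point values of weights and utilities with probability distributions and report how often each alternative ranks first.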
Abstract:
From the water management perspective, water scarcity is an unacceptable risk of facing water shortages to serve water demands in the near future. Water scarcity may be temporary, related to drought conditions or other exceptional situations, or it may be permanent, due to deeper causes such as excessive demand growth, lack of infrastructure for water storage or transport, or constraints in water management. Diagnosing the causes of water scarcity in complex water resources systems is a precondition for adopting effective drought risk management actions. In this paper we present four indices developed to evaluate water scarcity. We propose a methodology for the interpretation of index values that can lead to conclusions about the reliability and vulnerability of systems to water scarcity, as well as diagnose their possible causes and suggest solutions. The described methodology was applied to the Ebro river basin, identifying existing and expected problems and possible solutions. System diagnostics based exclusively on the analysis of index values were compared with the known reality as perceived by system managers, validating the conclusions in all cases.
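The four indices themselves are not defined in the abstract; as a hedged sketch of the kind of reliability and vulnerability measures the abstract refers to, two standard definitions from water resources analysis are computed below on invented monthly data (the numbers and the specific formulas are assumptions, not the paper's indices):

```python
def scarcity_indices(supply, demand):
    """Reliability: fraction of periods in which demand is fully met.
    Vulnerability: mean relative deficit over the failure periods only."""
    deficits = [max(d - s, 0.0) / d for s, d in zip(supply, demand)]
    failures = [x for x in deficits if x > 0]
    reliability = 1 - len(failures) / len(demand)
    vulnerability = sum(failures) / len(failures) if failures else 0.0
    return reliability, vulnerability

# Hypothetical monthly available supply and demand (hm3)
supply = [10, 9, 8, 4, 3, 6]
demand = [8, 8, 8, 8, 8, 8]
print(scarcity_indices(supply, demand))
```

Low reliability with small deficits suggests a chronic structural problem, while high reliability with occasional deep deficits points to drought-driven scarcity; distinguishing the two is exactly the diagnostic use the abstract describes.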
Abstract:
This work deals with quality-level prediction in concrete structures through the assistance of an expert system which is able to apply reasoning to this field of structural engineering. Evidence, hypotheses, and factors related to this field of human knowledge have been codified into a knowledge base in terms of probabilities for the presence of either hypotheses or evidence, and the conditional presence of both. Human experts in structural engineering and the safety of structures contributed the invaluable knowledge and assistance necessary to construct the "computer knowledge body".
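A knowledge base holding priors for hypotheses, priors for evidence, and conditional probabilities linking the two naturally supports Bayesian updating. As a hypothetical sketch (the defect scenario and all probability values are invented; the paper's inference engine is not described in the abstract):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: probability of a hypothesis given that evidence was observed."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical knowledge-base entries: P(structure has low quality level) = 0.1,
# and the probability of observing surface cracking under each hypothesis.
prior = 0.1
p = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(p, 3))  # 0.308: the evidence roughly triples the belief
```

Chaining such updates over many pieces of evidence is the basic mechanism by which a probabilistic expert system of this era would converge on a quality-level prediction.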
Abstract:
Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed, and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as performance, cost, availability, or security, are critical for the usability of services and their compositions in concrete applications. Analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to simultaneously take into account the control and data structures, dependencies, and operations in a composition. Computation cost analysis for service composition can support predictive monitoring and proactive adaptation by automatically inferring computation cost in the form of upper- and lower-bound functions of the value or size of input messages. These cost functions can be used for adaptation by selecting the service candidates that minimize the total cost of the composition, based on the actual data passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of the input data, which can be used to predict, at the moment of invocation, potential or imminent Service Level Agreement (SLA) violations. In mission-critical compositions, effective and accurate continuous QoS prediction can be achieved by constraint modeling of composition QoS based on its structure, data known at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants and complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on functional dependencies and the information content of the composition messages, internal data, and activities, in the presence of complex control constructs such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
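The cost-based candidate selection described above can be sketched minimally: each candidate service carries lower- and upper-bound cost functions of the input message size, and the composition picks the candidate whose upper bound is smallest for the actual input. The linear bounds, service names, and coefficients below are illustrative assumptions (real inferred bounds may be polynomial or worse):

```python
def make_cost_bounds(a_low, a_high):
    """Build (lower, upper) cost-bound functions, here linear in input size."""
    return (lambda n: a_low * n, lambda n: a_high * n)

# Hypothetical service candidates with inferred cost bounds
candidates = {
    "svc_fast": make_cost_bounds(1.0, 2.0),
    "svc_slow": make_cost_bounds(1.5, 4.0),
}

def pick_candidate(candidates, msg_size):
    """Select the candidate minimizing the worst-case (upper-bound) cost
    for this concrete input, a conservative adaptation policy."""
    return min(candidates, key=lambda c: candidates[c][1](msg_size))

print(pick_candidate(candidates, 100))  # svc_fast (upper bound 200 vs 400)
```

Comparing an SLA threshold against the same upper-bound function evaluated at invocation time gives the kind of early violation warning the abstract describes.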
Abstract:
A membrane system is a massively parallel system inspired by the way living cells process information. As part of unconventional computing, membrane systems have proven effective in solving complex problems. This paper introduces a new factor, the P-factor, which can decide whether a technique is worthwhile to use. The use of this factor provides the best chance of selecting the right strategy for the rules application phase, "best" meaning the one that reduces execution time within the membrane system. A pre-analysis of the membrane system determines the P-factor, which in turn advises the optimal strategy to use. In particular, this paper compares the use of two strategies based on the P-factor and reports results from applying them. The paper concludes that the P-factor is an effective indicator for choosing the right strategy to implement the rules application phase in membrane systems.
Abstract:
Shading reduces the power output of a photovoltaic (PV) system. The engineering design of PV systems requires modeling and evaluating shading losses. Some PV systems are affected by complex shading scenes whose resulting PV energy losses are very difficult to evaluate with current modeling tools. Several specialized PV design and simulation packages include the ability to evaluate shading losses. They generally provide a Graphical User Interface (GUI) through which the user can draw a 3D shading scene and then evaluate its corresponding PV energy losses. The complexity of the objects that these tools can handle is relatively limited. We have created a software solution, 3DPV, which allows evaluating the energy losses induced by complex 3D scenes on PV generators. The 3D objects can be imported from specialized 3D modeling software or from a 3D object library. The shadows cast by this 3D scene on the PV generator are then evaluated directly on the Graphics Processing Unit (GPU). Thanks to the recent development of GPUs for the video game industry, the shadows can be evaluated at very high spatial resolution, well beyond the PV cell level, in very short calculation times. A PV simulation model then translates the geometrical shading into PV energy output losses. 3DPV has been implemented using WebGL, which allows it to run directly in a Web browser without requiring any local installation. This also allows taking full advantage of information already available on the Internet, such as 3D object libraries. This contribution describes, step by step, the method that allows 3DPV to evaluate the PV energy losses caused by complex shading. We then illustrate the results of this methodology for several application cases encountered in the world of PV system design. Keywords: 3D, modeling, simulation, GPU, shading, losses, shadow mapping, solar, photovoltaic, PV, WebGL
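The final step the abstract describes, translating geometric shading into an energy loss, can be sketched in its simplest form: given a boolean shadow map rasterized over the PV generator (as the GPU shadow-mapping pass would produce), compute the shaded fraction and scale the nominal output. This linear irradiance-to-power assumption is a deliberate simplification; 3DPV's actual PV simulation model, and real systems generally, must also account for cell- and string-level electrical mismatch:

```python
def shading_loss(shadow_map):
    """Fraction of generator pixels in shadow, from a boolean shadow map
    (True = shaded), as produced by a shadow-mapping rasterization pass."""
    shaded = sum(row.count(True) for row in shadow_map)
    total = sum(len(row) for row in shadow_map)
    return shaded / total

def shaded_power(nominal_kw, shadow_map):
    """Scale nominal output by the unshaded fraction (linear loss assumption)."""
    return nominal_kw * (1 - shading_loss(shadow_map))

# Toy 2x4 shadow map over a 10 kW generator at one time step
shadow_map = [
    [True, True, False, False],
    [True, False, False, False],
]
print(shaded_power(10.0, shadow_map))  # 6.25: 3 of 8 pixels shaded
```

Repeating this per time step over a year of sun positions and integrating yields the annual shading energy loss for the scene.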