949 results for Urban system interactions, Micro-simulation, Neighbourhood scale, Population, Activities.
Abstract:
This paper develops an integrated optimal power flow (OPF) tool for distribution networks at two spatial scales. At the local scale, the distribution network, the natural gas network, and the heat system are coordinated as a microgrid. At the urban scale, the impact of the natural gas network is treated as a set of constraints on distribution network operation. The proposed approach incorporates unbalanced three-phase electrical systems, natural gas systems, and combined cooling, heating, and power systems. The interactions among these three energy systems are described by an energy hub model combined with component capacity constraints. To efficiently handle the nonlinear constrained optimization problem, a particle swarm optimization algorithm is employed to set the control variables of the OPF problem. Numerical studies indicate that the OPF method allows the distribution network to be operated economically and the tie-line power to be managed effectively.
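Since the abstract does not detail the particle swarm optimization (PSO) variant used, here is a minimal global-best PSO sketch for setting bounded control variables; the bounds, the toy objective, and the penalty-based constraint handling mentioned in the comments are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with a basic global-best PSO."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))  # positions: control variables
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                        # respect variable limits
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()]
    return gbest, pbest_f.min()

# Hypothetical use: x = [tap ratio, DG set-points, ...]; a real OPF cost would
# embed penalty terms for violated nonlinear network constraints.
bounds = np.array([[0.9, 1.1], [0.0, 1.0], [0.0, 1.0]])
best_x, best_f = pso(lambda x: np.sum((x - 0.95) ** 2), bounds)
print(best_x, best_f)
```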
Abstract:
This paper is the second in a series of studies working towards constructing a realistic, evolving, non-potential coronal model for the solar magnetic carpet. In the present study, the interaction of two magnetic elements is considered. Our objectives are to study magnetic energy build-up, storage and dissipation as a result of emergence, cancellation, and flyby of these magnetic elements. In the future these interactions will be the basic building blocks of more complicated simulations involving hundreds of elements. Each interaction is simulated in the presence of an overlying uniform magnetic field, which lies at various orientations with respect to the evolving magnetic elements. For these three small-scale interactions, the free energy stored in the field at the end of the simulation ranges from 0.2–2.1×10^26 ergs, whilst the total energy dissipated ranges from 1.3–6.3×10^26 ergs. For all cases, a stronger overlying field results in higher energy storage and dissipation. For the cancellation and emergence simulations, motion perpendicular to the overlying field results in the highest values. For the flyby simulations, motion parallel to the overlying field gives the highest values. In all cases, the free energy built up is sufficient to explain small-scale phenomena such as X-ray bright points or nanoflares. In addition, if scaled for the correct number of magnetic elements for the volume considered, the energy continually dissipated provides a significant fraction of the quiet Sun coronal heating budget.
Abstract:
The protein folding problem has been one of the most challenging subjects in biological physics due to its complexity. Energy landscape theory based on statistical mechanics provides a thermodynamic interpretation of the protein folding process. We have been working to answer fundamental questions about protein-protein and protein-water interactions, which are very important for correctly describing the energy landscape surface of proteins. First, we present a new method for computing protein-protein interaction potentials of solvated proteins directly from SAXS data. An ensemble of proteins was modeled by Metropolis Monte Carlo and molecular dynamics simulations, and the global X-ray scattering of the whole model ensemble was computed at each snapshot of the simulation. The interaction potential model was optimized iteratively with a Levenberg-Marquardt algorithm. Second, we report that terahertz spectroscopy directly probes hydration dynamics around proteins and determines the size of the dynamical hydration shell. We also present the sequence and pH dependence of the hydration shell and the effect of hydrophobicity. In addition, kinetic terahertz absorption (KITA) spectroscopy is introduced to study the refolding kinetics of ubiquitin and its mutants. KITA results are compared with small-angle X-ray scattering, tryptophan fluorescence, and circular dichroism results. We propose that KITA monitors the rearrangement of hydrogen bonding during secondary structure formation. Finally, we present the development of the automated single molecule operating system (ASMOS) for a high-throughput single molecule detector, which levitates a single protein molecule in a 10 µm diameter droplet by laser guidance. I have also performed supporting calculations and simulations with my own program codes.
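The optimize-and-iterate step can be illustrated with SciPy's Levenberg-Marquardt least-squares driver. The sketch below fits a toy two-parameter scattering model to synthetic data; the model form, parameters, and data are hypothetical stand-ins for the ensemble-averaged SAXS computation described above.

```python
import numpy as np
from scipy.optimize import least_squares

def model_intensity(q, a, kappa):
    """Toy stand-in for the computed ensemble scattering: a simple
    structure-factor-like curve controlled by two potential parameters."""
    return 1.0 / (1.0 + a * np.exp(-kappa * q))

q = np.linspace(0.01, 0.5, 80)   # scattering vector (hypothetical units)
data = model_intensity(q, 2.0, 8.0) + 0.01 * np.random.default_rng(1).normal(size=q.size)

def residuals(params):
    a, kappa = params
    return model_intensity(q, a, kappa) - data

# method="lm" selects Levenberg-Marquardt, as in the iteration described above.
fit = least_squares(residuals, x0=[1.0, 5.0], method="lm")
print(fit.x)  # recovered (a, kappa)
```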
Abstract:
Global environmental changes (GEC) such as climate change (CC) and climate variability have serious impacts in the tropics, particularly in Africa. These are compounded by changes in land use/land cover, which in turn are driven mainly by economic and population growth, and urbanization. These factors create a feedback loop, which affects ecosystems and particularly ecosystem services, for example plant-insect interactions, and consequently agricultural productivity. We studied effects of GEC at a local level, using a traditional coffee production area in greater Nairobi, Kenya. We chose coffee, the most valuable agricultural commodity worldwide, as it generates income for 100 million people, mainly in the developing world. Using the coffee berry borer, the most serious biotic threat to global coffee production, we show how environmental changes and different production systems (shaded and sun-grown coffee) can affect the crop. We combined detailed entomological assessments with historic climate records (1929-2011), and spatial and demographic data, to assess the impact of GEC on coffee at a local scale. Additionally, we tested the utility of an adaptation strategy that is simple and easy to implement. Our results show that while interactions between CC and migration/urbanization, with their resultant landscape modifications, create a feedback loop whereby agroecosystems such as coffee are adversely affected, biodiverse shaded coffee proved far more resilient and productive than coffee grown in monoculture, and was significantly less harmed by its insect pest. Thus, a relatively simple strategy such as shading coffee can tremendously improve the resilience of agro-ecosystems, providing small-scale farmers in Africa with an easily implemented tool to safeguard their livelihoods in a changing climate.
Abstract:
The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave / boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device called micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs have the ability to alter the near-wall structure of compressible turbulent boundary layers, providing increased mixing of high-speed fluid that improves boundary layer health when subjected to flow disturbances. Due to their small size, µVGs are embedded in the boundary layer, which reduces drag compared to traditional vortex generators, while they are cost-effective, physically robust and do not require a power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat plate boundary layer with an impinging oblique shock and downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use large eddy simulation without a subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and of the primary vortices generated by the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly-resolved RANS computations. The experiments and the LES results indicate that micro-ramps, whose height is h≈0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness, indicating the significant three-dimensionality of the flow field. Compared to baseline sizes, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated, including micro-ramps and micro-vanes. The results showed that vortices generated by µVGs can partially eliminate shock-induced flow separation and continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps produced a thinner downstream displacement thickness than the micro-vanes. However, the streamwise vorticity generated by the micro-ramps decayed faster due to dissipation, especially after the shock interaction. In addition, the close spanwise distance between the vortices of the ramp geometry causes the vortex cores to move away from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects.
Two hybrid concepts, named “thick-vane” and “split-ramp”, were also studied; the former is a vane with side supports, and the latter has a uniform spacing along the centerline of the baseline ramp. These geometries behaved similarly to the micro-vanes in terms of streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the effect of Mach number on flow past the micro-ramps (h≈0.5δ) is examined in a supersonic boundary layer at M = 1.4, 2.2 and 3.0, with no shock waves present. The LES results indicate that micro-ramps have a greater impact near the device at lower Mach numbers, but their influence decays faster than in the higher Mach number cases. This may be due to additional dissipation caused by the primary vortices, whose smaller effective diameter at the lower Mach number allows their coherency to be easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall showed similar growth across cases, indicating weak correlation with Mach number; however, the spanwise distance between the two counter-rotating cores increases further at lower Mach numbers. Finally, various µVGs, including the micro-ramp, the split-ramp and a new hybrid concept, the “ramped-vane”, are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device and its streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, and a larger ramped-vane with an increased trailing edge gap yielded fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the greatest reductions in turbulent kinetic energy and pressure fluctuation downstream of the shock compared to the other devices. Additional benefits include negligible device drag, along with reductions in displacement thickness and shape factor relative to the other devices. In the baseline-resolution LES studies, the larger ramped-vane also increased wall shear stress and pressure recovery and decreased the amplitude of the pressure fluctuations downstream of the shock.
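The incompressible shape factor used above as a boundary-layer health metric is H = δ*/θ, the ratio of displacement to momentum thickness. A small sketch of its evaluation from a discretized mean velocity profile follows; the 1/7th-power-law profile is a textbook stand-in, not data from these experiments.

```python
import numpy as np

def trapezoid(f, y):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

def shape_factor(y, u, u_e):
    """Incompressible shape factor H = delta*/theta from a mean profile u(y)."""
    r = u / u_e
    delta_star = trapezoid(1.0 - r, y)   # displacement thickness
    theta = trapezoid(r * (1.0 - r), y)  # momentum thickness
    return delta_star / theta

# Hypothetical 1/7th-power-law turbulent profile with delta = 1.0
y = np.linspace(0.0, 1.0, 400)
u = y ** (1.0 / 7.0)
print(shape_factor(y, u, u_e=1.0))  # ~1.29, typical of a healthy turbulent layer
```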
Abstract:
This dissertation presents the modeling of an MAS-based tool for simulating the production and social management of an urban ecosystem: the social organization of the San Jeronimo Vegetable Garden Project (SJVG), located in San Jeronimo Park, Seville, Spain, and coordinated by the Ecologistas en Accion confederation. The social processes observed in the SJVG project are characterized by a series of social interactions and exchanges among the participants. In addition, the periodic behaviors, interactions and communications are regulated by the Internal Rules, established by the community in assembly, under the supervision and coordination of the EA confederation. The MAS was conceived as a multidimensional JaCaMo system, composed of five integrated dimensions: the agent population, the organizational artifacts (the organization), the physical artifacts (the agents' environment), the communication artifacts (the set of interactions) and the normative artifacts (the internal normative policy). The tool used in the project is the JaCaMo framework, since it offers high-level support and modularity for the development of the first three dimensions mentioned above. Despite some important problems that arose in adopting the JaCaMo framework for the development of the SJVG-MAS project, namely: (i) the impossibility of specifying periodicity in the MOISE model, (ii) the impossibility of defining norms, their basic attributes (name, periodicity, role to which they apply) and sanctions, and (iii) the lack of a modular infrastructure for defining interactions through communication, it was possible to adopt interesting modular solutions that preserve the idea of a five-dimensional MAS developed on the JaCaMo platform. The solutions presented in this work are based mainly on the CArtAgO framework, also aiming at the integration of organizational, normative, physical and communication artifacts.
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. Agent-Based Simulation (ABS), one way of using intelligent agents, carries great potential for progressing our understanding of management practices and how they link to retail performance. We have developed simulation models based on research by a multi-disciplinary team of economists, work psychologists and computer scientists. We will discuss our experiences of implementing these concepts working with a well-known retail department store. There is no doubt that management practices are linked to the performance of an organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have been developed, but when it comes down to the actual application of these guidelines considerable ambiguity remains regarding their effectiveness within particular contexts (Siebers et al., forthcoming a). Most Operational Research (OR) methods can only be used as analysis tools once management practices have been implemented. Often they are not very useful for giving answers to speculative ‘what-if’ questions, particularly when one is interested in the development of the system over time rather than just the state of the system at a certain point in time. Simulation can be used to analyse the operation of dynamic and stochastic systems. ABS is particularly useful when complex interactions between system entities exist, such as autonomous decision making or negotiation. In an ABS model the researcher explicitly describes the decision process of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents and the environment. We will show how ABS experiments can deal with testing and optimising management practices such as training, empowerment or teamwork. Hence, questions such as “will staff setting their own break times improve performance?” can be investigated.
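As a flavor of how such an ABS experiment is set up, the sketch below encodes a micro-level break-taking rule for hypothetical staff agents and compares an aggregate service measure under fixed versus self-set breaks. All rules and parameters are invented for illustration; this is not the department store model described above.

```python
import random

random.seed(42)

class StaffAgent:
    def __init__(self, self_set_breaks):
        self.self_set_breaks = self_set_breaks
        self.fatigue = 0.0

    def step(self, queue_len):
        # Micro-level decision rule: with self-set breaks, rest when it is quiet.
        if self.self_set_breaks:
            on_break = queue_len == 0 and self.fatigue > 0.5
        else:
            on_break = random.random() < 0.1  # fixed-schedule break odds
        if on_break:
            self.fatigue = max(0.0, self.fatigue - 0.3)
            return 0                           # no customer served this step
        self.fatigue = min(1.0, self.fatigue + 0.05)
        return 1 if random.random() > self.fatigue * 0.5 else 0

def simulate(self_set_breaks, steps=500, staff=5):
    agents = [StaffAgent(self_set_breaks) for _ in range(staff)]
    served, queue = 0, 0
    for _ in range(steps):
        queue += random.randint(0, staff)      # customer arrivals
        for a in agents:
            s = a.step(queue)
            served += s
            queue = max(0, queue - s)
    return served

# The macro-level outcome emerges from the agents' interactions:
print("fixed breaks:", simulate(False), "self-set breaks:", simulate(True))
```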
Abstract:
This book is a synthesizing reflection on Holocaust commemoration, in which space becomes a starting point for discussion. The author understands space primarily as an amalgam of physical and social components in which various commemorative processes may occur. The first part of the book draws attention to the material aspect of space, which determines its character and function. Material culture has long been an ignored and undervalued dimension of human culture in the humanities and social sciences, because it was perceived as passive, fully controlled by human will, and therefore insignificant in the course of social and historical processes. The example of the Nazi system illustrates how important restrictions and prohibitions on the use of mundane objects, and material culture as a whole, were to the management of macro- and micro-spaces: the state, cities, neighborhoods and houses, but also parks and swimming pools, factories and offices, shops and theaters. The importance of things and space was also clearly visible in the exploitative policies of overcrowded ghettos and of the concentration and death camps. For this very reason, when we study spatial forms of Holocaust commemoration, it should be acknowledged that the first traces, proofs and mementoes of the murdered were their things. The first "monuments" showing the enormity of the destruction are thus primarily gigantic piles of objects: shoes, glasses, toys, clothes, suitcases, toothbrushes, etc., which together with the extensive camp space try to convey the scale of a crime impossible to understand or imagine. The first chapter shows the importance of introducing the material dimension into thinking about space and commemoration, and it ends with a question about one of the book's key concepts, the monument, which can be understood both as object (singular or plural) and as architecture (sculptures, buildings, highways). However, the term monument tends to be used in its later, traditional sense: an architectural, figurative form commemorating heroic deeds, carved in stone or cast in bronze. The next chapter therefore reconstructs this narrower line of thinking, together with a discussion of what form a monument commemorating a subject as delicate and sensitive as the Holocaust should take. This leads to the idea of the counter-monument, a concept that was supposed to answer the representational dilemma mentioned above on the one hand, and to disassociate commemoration from the Nazis' traditional monuments on the other. This chapter clarifies the definition of the counter-monument and explains the misunderstandings and confusion this concept has generated, by following the dynamics of the new commemorative form and by investigating monuments erected in Germany in the ‘80s and ‘90s. In the next chapter, I examine various forms of Holocaust commemoration in Berlin, a city famous for its bold, monumental, and even controversial projects. We find among them the entire spectrum of memorials: big, monumental, abstract forms, like Peter Eisenman's Memorial to the Murdered Jews of Europe or Daniel Libeskind's Jewish Museum Berlin; flat, invisible forms employing the idea of emptiness, like Christian Boltanski's Missing House or Micha Ullman's Book Burning Memorial; and dispersed, decentralized ones, like Renata Stih and Frieder Schnock's Memory Places or Gunter Demnig's Stumbling Blocks.
I enrich the descriptions of the monuments by signaling their second, extended life, which manifests itself in alternative modes of (mis)use, consisting of various social activities or artistic performances. The formal wealth of the outlined projects creates a wide panorama of possible solutions to the problems of Holocaust commemoration. However, the discussions accompanying the building of monuments and their "future life" after realization emphasize the importance of the social component that permeates the biography of a monument and therefore significantly influences its design. The book also addresses the relationship of space, place and memory in the specific situation when commemoration is performed secretly or remains unrealized potential. Although place is the kind of space most commonly associated with memory, the nature of this relationship is changing today, as indicated by the popularity of terms such as Marc Augé's non-places or Pierre Nora's sites of memory. I include and develop these concepts about space and memory in my reflections to describe qualitatively different phenomena occurring in Central and Eastern European countries. These are unsettling places: glades or parking lots in rural areas, markets and playgrounds in urban settings. I link them to the post-war period and to modernization processes, and call them sites of non-memory and non-sites of memory. Another part of the book deals with a completely different form of commemoration called the Mystery of Memory. The Grodzka Gate – NN Theatre in Lublin initiated it in 2000, and as a form it sits closer to the art of theater than to architecture. Real spaces and places of everyday interactions, such as the "Jewish town" in Lublin or the Majdanek concentration camp, become a stage for these performances. The minimalist scenography modifies space and reveals its previously unseen dimensions, while the actors, residents and people especially related to the places, such as survivors and Righteous Among the Nations, are involved in the course of the show through various rituals and symbolic gestures. The performance should be distinguished from social actions, because it incorporates tools known from religious rituals and art, which together saturate the mystery of memory with an aura of uniqueness. The last commemoration mode discussed takes the form of exhibition space. I examine an exhibition concerning the fate of incarcerated children, presented in one of the barracks of the Majdanek State Museum in Lublin. The Primer – Children in Majdanek Camp is unique for several reasons. First, even though it is exhibited in a camp barrack, it uses a completely different filter to tell the story of the camp than the exhibitions in the rest of the barracks. One thus experiences an immersion in successive levels of space and their accompanying narratives: at first a general narrative about the camp, and later a specifically arranged space marked by children's experiences, their language and thinking, and hence formed in a way more accessible to younger visitors. Second, the exhibition forgoes didacticism and distancing descriptions, drawing instead on the testimonies of eyewitnesses and survivors. Third, the exhibition space evokes an aura of strangeness similar to a fairy tale or a dream.
This is accomplished through the arrangement of various, usually highly symbolic material objects, and by favoring olfactory and auditory sensations and movement while downplaying visual stimulation. The exhibition creates the impression of a place open to thinking and experiencing, and functions as a refuge, a form radically different from its camp surroundings, characterized by an overwhelming and austere space.
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously-integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs, due to a lack of understanding of cross-domain interactions and their impact on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improving this work in the future.
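A back-of-envelope view of why stacking stresses heat removal: in a simple series thermal-resistance model, the lower die's heat must also cross the upper die to reach an air-cooled heat sink, raising its junction temperature. All resistance and power values below are illustrative assumptions, not measured data.

```python
# Junction temperatures with a toy series thermal-resistance model.
T_ambient = 45.0   # deg C, air inside the chassis
R_sink = 0.40      # K/W, heat-sink-to-ambient resistance (assumed)
R_die = 0.15       # K/W, conduction through one die + bond layer (assumed)
P_die = 40.0       # W dissipated by each die (assumed)

# Planar chip (single 80 W die): all heat crosses one die and the sink.
T_planar = T_ambient + 2 * P_die * (R_die + R_sink)

# Two-die stack, sink on top: both dies' heat crosses the top die, and the
# bottom die's heat additionally crosses the inter-die layer.
T_top = T_ambient + 2 * P_die * (R_sink + R_die)
T_bottom = T_top + P_die * R_die

print(f"planar: {T_planar:.1f} C  top die: {T_top:.1f} C  bottom die: {T_bottom:.1f} C")
```

Even in this crude model, the bottom die runs hotter at the same total power, which is the motivation for embedded cooling such as the micro-fluidic schemes discussed above.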
Abstract:
Transportation system resilience has been the subject of several recent studies. To assess the resilience of a transportation network, however, it is essential to model its interactions with and reliance on other lifelines. In this work, a bi-level, mixed-integer, stochastic program is presented for quantifying the resilience of a coupled traffic-power network under a host of potential natural or anthropogenic hazard-impact scenarios. A two-layer network representation is employed that includes details of both systems. Interdependencies between the urban traffic and electric power distribution systems are captured through linking variables and logical constraints. The modeling approach was applied to a case study developed on a portion of the signalized traffic-power distribution system in southern Minneapolis. The results of the case study show the importance of explicitly considering interdependencies between critical infrastructures in transportation resilience estimation. The results also provide insights on lifeline performance from an alternative power perspective.
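The flavor of linking variables and logical constraints can be shown with a deliberately tiny single-level deterministic model (not the paper's bi-level stochastic program): a binary signal-operation variable is gated by a binary feeder-energization variable. The network data, costs and budget are hypothetical, and PuLP is used for brevity.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

intersections = ["A", "B", "C"]
feeder_of = {"A": "f1", "B": "f1", "C": "f2"}    # power-network dependency
repair_cost = {"f1": 3.0, "f2": 1.0}             # cost to energize each feeder
delay_if_dark = {"A": 10.0, "B": 6.0, "C": 8.0}  # traffic delay without signals

prob = LpProblem("coupled_restoration", LpMinimize)
energized = {f: LpVariable(f"y_{f}", cat=LpBinary) for f in repair_cost}
signal_on = {i: LpVariable(f"x_{i}", cat=LpBinary) for i in intersections}

# Objective: minimize total delay from dark (unpowered) signals.
prob += lpSum(delay_if_dark[i] * (1 - signal_on[i]) for i in intersections)

# Logical linking constraint: a signal operates only if its feeder is energized.
for i in intersections:
    prob += signal_on[i] <= energized[feeder_of[i]]

# A shared repair budget couples the two systems' restoration decisions.
prob += lpSum(repair_cost[f] * energized[f] for f in repair_cost) <= 3.0

prob.solve()
print({i: signal_on[i].value() for i in intersections})
```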
Abstract:
OBJECTIVES AND STUDY METHOD: There are two subjects in this thesis: “Lot production size for a parallel machine scheduling problem with auxiliary equipment” and “Bus holding for a simulated traffic network”. Although these two themes seem unrelated, the main idea is the optimization of complex systems. The “Lot production size for a parallel machine scheduling problem with auxiliary equipment” deals with a manufacturing setting where sets of pieces form finished products. The aim is to maximize the profit of the finished products. Each piece may be processed in more than one mold. Molds must be mounted on machines, with their corresponding installation setup times. The key point of our methodology is to solve the single-period lot-sizing decisions for the finished products together with the piece-mold and mold-machine assignments, relaxing the constraint that a single mold may not be used on two machines at the same time. The “Bus holding for a simulated traffic network” deals with one of the most annoying problems in urban bus operations: bus bunching, which happens when two or more buses arrive at a stop nose to tail. Bus bunching reflects an unreliable service that affects transit operations by increasing passenger waiting times. This work proposes a linear mathematical programming model that establishes bus holding times at certain stops along a transit corridor to avoid bus bunching. Our approach needs real-time input, so we simulate a transit corridor and apply our mathematical model to the data generated. Thus, the inherent variability of a transit system is captured by the simulation, while the optimization model takes into account the key variables and constraints of the bus operation. CONTRIBUTIONS AND CONCLUSIONS: For the “Lot production size for a parallel machine scheduling problem with auxiliary equipment”, the relaxation we propose is able to find solutions more efficiently; moreover, our experimental results show that most solutions keep the mold schedules non-overlapping even when molds are installed on several machines. We propose an exact integer linear programming model, a Relax&Fix heuristic, and a multistart greedy algorithm to solve this problem. Experimental results on instances based on real-world data show the efficiency of our approaches. The mathematical model and the algorithm for the lot production size problem presented in this research can be used by production planners to help in scheduling manufacturing. For the “Bus holding for a simulated traffic network”, most of the literature considers quadratic models that minimize passenger waiting times, but these are harder to solve and therefore difficult to operate in real-time systems. Our methodology, by contrast, reduces passenger waiting times efficiently with a linear programming model, applying control decisions only every 5 minutes.
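To illustrate holding (though not the thesis' LP formulation), a much-simplified deterministic rule below delays each bus at a control stop until a target headway from the previous departure is met; the arrival times and the 5-minute target are invented for the example.

```python
# Simplified holding illustration (not the thesis' LP): hold each bus at a
# control stop so departures keep a target headway from the previous departure.
target_headway = 5.0                      # minutes, cf. the 5-minute control interval
arrivals = [0.0, 3.2, 11.5, 12.1, 20.4]   # bunched arrivals at the control stop

departures = []
for t in arrivals:
    if not departures:
        departures.append(t)
    else:
        departures.append(max(t, departures[-1] + target_headway))

holds = [d - a for d, a in zip(departures, arrivals)]
print("departures:", departures)   # evenly spaced where holding is applied
print("holding times:", holds)     # nonzero only for bunched buses
```

An optimization model improves on such a rule by trading off the delay imposed on held passengers against the waiting-time savings downstream, which is what the linear program described above does with real-time simulated input.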
Abstract:
Early water resources modeling efforts were aimed mostly at representing hydrologic processes, but the need for interdisciplinary studies has led to increasing complexity and integration of environmental, social, and economic functions. The gradual shift from merely employing engineering-based simulation models to applying more holistic frameworks is an indicator of promising changes in the traditional paradigm for the application of water resources models, supporting more sustainable management decisions. This dissertation contributes to the application of a quantitative-qualitative framework for sustainable water resources management using system dynamics simulation, as well as environmental systems analysis techniques, to provide insights for water quality management in the Great Lakes basin. The traditional linear thinking paradigm lacks the mental and organizational framework for sustainable development trajectories, and may lead to quick-fix solutions that fail to address the key drivers of water resources problems. To facilitate holistic analysis of water resources systems, systems thinking seeks to understand interactions among the subsystems. System dynamics provides a suitable framework for operationalizing systems thinking and applying it to water resources problems by offering useful qualitative tools such as causal loop diagrams (CLD), stock-and-flow diagrams (SFD), and system archetypes. The approach provides a high-level quantitative-qualitative modeling framework for "big-picture" understanding of water resources systems, stakeholder participation, policy analysis, and strategic decision making. While quantitative modeling using extensive computer simulations and optimization is still very important and needed for policy screening, qualitative system dynamics models can improve understanding of general trends and the root causes of problems, and thus promote sustainable water resources decision making. Within the system dynamics framework, a growth and underinvestment (G&U) system archetype governing Lake Allegan's eutrophication problem was hypothesized to explain the system's problematic behavior and identify policy leverage points for mitigation. A system dynamics simulation model was developed to characterize the lake's recovery from its hypereutrophic state and assess a number of proposed total maximum daily load (TMDL) reduction policies, including phosphorus load reductions from point sources (PS) and non-point sources (NPS). It was shown that, for a TMDL plan to be effective, it should be considered a component of a continuous sustainability process that accounts for the dynamic feedback relationships between socio-economic growth, land use change, and environmental conditions. Furthermore, a high-level simulation-optimization framework was developed to guide watershed-scale BMP implementation in the Kalamazoo watershed. Agricultural BMPs should be given priority in the watershed in order to facilitate cost-efficient attainment of Lake Allegan's TP concentration target. However, without adequate support policies, agricultural BMP implementation may adversely affect agricultural producers. Results from a case study of the Maumee River basin show that coordinated BMP implementation across upstream and downstream watersheds can significantly improve the cost efficiency of TP load abatement.
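The quantitative side of such a system dynamics model reduces to integrating stocks and flows. A minimal sketch: a lake phosphorus stock with an inflow load (the TMDL policy lever) and a first-order loss, Euler-integrated. The parameters are hypothetical, not calibrated to Lake Allegan.

```python
import numpy as np

# Stock-and-flow sketch: dP/dt = load_in - k * P, integrated with Euler steps.
dt, years = 0.1, 30
t = np.arange(0.0, years, dt)
k = 0.5                  # 1/yr, net loss rate (outflow + settling), assumed
P = np.empty_like(t)
P[0] = 40.0              # tonnes of phosphorus in the lake, assumed

def load(year):
    # Policy lever: a TMDL-style point/non-point load reduction after year 10.
    return 20.0 if year < 10 else 12.0   # tonnes/yr, assumed

for i in range(1, t.size):
    P[i] = P[i - 1] + dt * (load(t[i - 1]) - k * P[i - 1])

# Steady states follow from load/k: 40 t before the cut, 24 t after.
print(f"P at year {years}: {P[-1]:.1f} t")
```

Feedback structure (e.g. land use change responding to growth, which in turn raises the load) enters by making the load itself a function of other stocks, which is where the causal loop diagrams guide the quantitative model.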
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically-weighted regression analysis produced the most accurate result to accommodate non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism. This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
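Geographically weighted regression fits a separate weighted least-squares model at each location, letting coefficients vary over space (the non-stationarity noted above). The sketch below shows the core computation with a Gaussian distance kernel on synthetic data; the coordinates, covariate and spatially varying slope are fabricated for illustration.

```python
import numpy as np

def gwr_coefficients(X, y, coords, target, bandwidth):
    """Local weighted least squares at `target`: observations are weighted by
    distance via a Gaussian kernel, so coefficients can vary over space."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))
    Xw = X * w[:, None]                       # X^T W folded into the design
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(200, 2))   # observation locations
X = np.column_stack([np.ones(200), rng.normal(size=200)])
true_slope = 0.5 + coords[:, 0] / 100.0       # effect grows from west to east
y = X[:, 1] * true_slope + rng.normal(0, 0.1, 200)

# The recovered slope differs between a western and an eastern target point.
print(gwr_coefficients(X, y, coords, np.array([10.0, 50.0]), bandwidth=20.0))
print(gwr_coefficients(X, y, coords, np.array([90.0, 50.0]), bandwidth=20.0))
```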
Abstract:
The International FItness Scale (IFIS) is a self-reported measure of physical fitness that can be administered quickly and easily. The scale has been validated in children, adolescents, and young adults; however, it was unknown whether the IFIS provides a valid and reliable estimate of physical fitness in a Latino-American youth population. In the present study we aimed to examine the validity and reliability of the IFIS in a population-based sample of schoolchildren in Bogota, Colombia. Participants were 1,875 Colombian youths (56.2% girls) aged 9 to 17.9 years. We measured adiposity markers (body fat, waist-to-height ratio, skinfold thicknesses and BMI), blood pressure, lipid profile, fasting glucose, and physical fitness level (self-reported and measured). A validated cardiometabolic risk index was also used. An age- and sex-matched sample of 229 schoolchildren not included in the original study sample completed the IFIS twice for reliability purposes. Our data suggest that both measured and self-reported overall fitness were inversely associated with adiposity indicators and a cardiometabolic risk score. Overall, schoolchildren who self-reported “good” and “very good” fitness had better measured fitness than those who reported “very poor” and “poor” fitness (all p<0.001). Test-retest reliability of the IFIS items was also good, with an average weighted kappa of 0.811. Our findings therefore suggest that self-reported fitness, as assessed by the IFIS, is a valid, reliable, and health-related measure, and can be a good alternative for future use in large studies of Latino schoolchildren in Colombia.
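Test-retest agreement on an ordinal instrument like the IFIS is typically quantified with a weighted Cohen's kappa, which credits near-misses between the two administrations. A minimal sketch with scikit-learn follows; the toy response vectors are invented, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical test-retest responses on a 5-point IFIS-style item
# (1 = "very poor" ... 5 = "very good"); not the study's data.
test1 = [5, 4, 4, 3, 2, 5, 3, 4, 1, 2, 4, 5]
test2 = [5, 4, 3, 3, 2, 5, 3, 4, 2, 2, 4, 4]

# Weighted kappa penalizes a 4-vs-5 disagreement less than a 1-vs-5 one.
print(cohen_kappa_score(test1, test2, weights="linear"))
```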
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, GO biological processes and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We have developed IIS by integrating diverse databases, in response to the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.