872 results for Hydrologic Modeling Catchment and Runoff Computations
Abstract:
We introduce Collocation Games as the basis of a general framework for modeling, analyzing, and facilitating the interactions between the various stakeholders in distributed systems in general, and in cloud computing environments in particular. Cloud computing enables fixed-capacity (processing, communication, and storage) resources to be offered by infrastructure providers as commodities for sale at a fixed cost in an open marketplace to independent, rational parties (players) interested in setting up their own applications over the Internet. Virtualization technologies enable the partitioning of such fixed-capacity resources so as to allow each player to dynamically acquire appropriate fractions of the resources for unencumbered use. In such a paradigm, the resource management problem reduces to that of partitioning the entire set of applications (players) into subsets, each of which is assigned to fixed-capacity cloud resources. If the infrastructure and the various applications are under a single administrative domain, this partitioning reduces to an optimization problem whose objective is to minimize the overall deployment cost. In a marketplace, in which the infrastructure provider is interested in maximizing its own profit, and in which each player is interested in minimizing its own cost, it should be evident that a global optimization is precisely the wrong framework. Rather, in this paper we use a game-theoretic framework in which the assignment of players to fixed-capacity resources is the outcome of a strategic "Collocation Game". Although we show that determining the existence of an equilibrium for collocation games in general is NP-hard, we present a number of simplified, practically-motivated variants of the collocation game for which we establish convergence to a Nash Equilibrium, and for which we derive convergence and price of anarchy bounds. 
In addition to these analytical results, we present an experimental evaluation of implementations of some of these variants for cloud infrastructures consisting of a collection of multidimensional resources of homogeneous or heterogeneous capacities. Experimental results using trace-driven simulations and synthetically generated datasets corroborate our analytical results and also illustrate how collocation games offer a feasible distributed resource management alternative for autonomic/self-organizing systems, in which the adoption of a global optimization approach (centralized or distributed) would be neither practical nor justifiable.
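The convergence to a Nash Equilibrium described above can be illustrated with best-response dynamics in a toy collocation game. The sketch below is a minimal one-dimensional illustration with proportional cost sharing; all function names, the sharing rule, and the parameters are assumptions for illustration, not the paper's multidimensional formulation:

```python
def player_cost(assignment, demands, player, resource, cost=1.0, capacity=1.0):
    """Cost share of `player` on `resource` under proportional cost sharing."""
    others = sum(demands[q] for q, r in assignment.items()
                 if r == resource and q != player)
    if others + demands[player] > capacity:
        return float("inf")                      # move would violate capacity
    return cost * demands[player] / (others + demands[player])

def best_response_dynamics(demands, max_rounds=100):
    n = len(demands)
    assignment = {p: p for p in range(n)}        # start: one player per resource
    for _ in range(max_rounds):
        moved = False
        for p in range(n):
            costs = {r: player_cost(assignment, demands, p, r) for r in range(n)}
            best = min(costs, key=costs.get)
            if costs[best] < costs[assignment[p]] - 1e-12:
                assignment[p] = best             # profitable unilateral move
                moved = True
        if not moved:
            break                                # Nash equilibrium: no player can improve
    return assignment
```

With demands that fit pairwise but not all together, players collocate to share fixed resource costs until no profitable deviation remains.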
Abstract:
Polymorphic microsatellite DNA loci were used here in three studies, one on Salmo salar and two on S. trutta. In the case of S. salar, the survival of native fish, non-natives from a nearby catchment, and their hybrids was compared in a freshwater common garden experiment and subsequently in ocean ranching, with parental assignment utilising microsatellites. Overall survival of non-natives was 35% of natives. This differential survival occurred mainly in the oceanic phase. These results imply a genetic basis and suggest that local adaptation can occur in salmonids across relatively small geographic distances, which may have important implications for the management of salmon populations. In the first case study with S. trutta, the species was investigated throughout its spread as an invasive in Newfoundland, eastern Canada. Genetic investigation confirmed historical records that the majority of introductions were from a Scottish hatchery and provided a clear example of the structure of two expanding waves of spread along coasts, probably by natural straying of anadromous individuals, to the north and south of the point of human introduction. This study showed a clearer example of the genetic anatomy of an invasion than previous studies with brown trout, and may have implications for the management of invasive species in general. Finally, the genetics of anadromous S. trutta from the Waterville catchment in south-western Ireland were studied. Two significantly different population groupings, from tributaries in geographically distinct locations entering the largest lake in the catchment, were identified. These results were then used to assign very large rod-caught sea trout individuals (so-called “specimen” sea trout) back to region of origin, in a Genetic Stock Identification exercise. This suggested that the majority of these large sea trout originated from one of the two tributary groups.
These results are relevant for the understanding of sea trout population dynamics and for the future management of this and other sea trout producing catchments. This thesis has demonstrated new insights into the population structuring of salmonids both between and within catchments. While these chapters examine the existence and scale of genetic variation from different angles, the overarching message of this thesis is to highlight the importance of maintaining genetic diversity in salmonid populations as vital for their long-term productivity and resilience.
Abstract:
Obesity, currently an epidemic, is a difficult disease to combat because it is marked by both a change in body weight and an underlying dysregulation in metabolism, making consistent weight loss challenging. We sought to elucidate this metabolic dysregulation resulting from diet-induced obesity (DIO) that persists through subsequent weight loss. We hypothesized that weight gain imparts a change in “metabolic set point” persisting through subsequent weight loss and that this modification may involve a persistent change in hepatic AMP-activated protein kinase (AMPK), a key energy-sensing enzyme in the body. To test these hypotheses, we tracked metabolic perturbations through this period, measuring changes in hepatic AMPK. To further understand the role of AMPK we used AICAR, an AMPK activator, following DIO. Our findings established a more dynamic metabolic model of DIO and subsequent weight loss. We observed hepatic AMPK elevation following weight loss, but AICAR administration without similar dieting was unsuccessful in improving metabolic dysregulation. Our findings provide an approach to modeling DIO and subsequent dieting that can be built upon in future studies and hopefully contribute to more effective long-term treatments of obesity.
Abstract:
This paper demonstrates a modeling and design approach that couples computational mechanics techniques with numerical optimisation and statistical models for virtual prototyping and testing in different application areas concerning reliability of electronic packages. The integrated software modules provide a design engineer in the electronic manufacturing sector with fast design and process solutions by optimising key parameters and taking into account the complexity of certain operational conditions. The integrated modeling framework is obtained by coupling the multi-physics finite element framework PHYSICA with the numerical optimisation tool VisualDOC into a fully automated design tool for solutions of electronic packaging problems. Response Surface Modeling Methodology and Design of Experiments statistical tools, plus numerical optimisation techniques, are demonstrated as part of the modeling framework. Two different problems are discussed and solved using the integrated numerical FEM-optimisation tool. First, an example of thermal management of an electronic package on a board is illustrated. The location of the device is optimised to ensure reduced junction temperature and stress in the die, subject to a certain cooling air profile and other heat-dissipating active components. In the second example, thermo-mechanical simulations of solder creep deformations are presented to predict flip-chip reliability and subsequently used to optimise the life-time of solder interconnects under thermal cycling.
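The Response Surface Modeling step can be sketched as follows: fit a low-order surrogate to a handful of "experiments" and locate the optimum analytically. The data below are synthetic (a stand-in for finite element junction-temperature results at candidate device locations), not the paper's; the quadratic form and sample points are assumptions:

```python
import numpy as np

# Design-of-Experiments sample: five candidate device positions on the board
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

# Stand-in for FE-computed junction temperatures (true optimum at x = 0.6)
y = 80 + 40 * (x - 0.6) ** 2

# Fit a quadratic response surface T(x) = b2*x^2 + b1*x + b0
b2, b1, b0 = np.polyfit(x, y, 2)

# Minimise the surrogate analytically at its stationary point
x_opt = -b1 / (2 * b2)
```

In practice each sample point would be one finite element run, and the surrogate replaces further expensive simulations inside the optimisation loop.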
Computational modeling techniques for reliability of electronic components on printed circuit boards
Abstract:
This paper describes modeling technology and its use in providing data governing the assembly and subsequent reliability of electronic chip components on printed circuit boards (PCBs). Products, such as mobile phones, camcorders, intelligent displays, etc., are changing at a tremendous rate, where newer technologies are being applied to satisfy the demands for smaller products with increased functionality. At ever-decreasing dimensions, and with an increasing number of input/output connections, the design of these components, in terms of dimensions and materials used, is playing a key role in determining the reliability of the final assembly. Multiphysics modeling techniques are being adopted to predict a range of interacting physics-based phenomena associated with the manufacturing process, for example heat transfer, solidification, Marangoni fluid flow, void movement, and thermal stress. The modeling techniques used are based on finite volume methods that are conservative and take advantage of being able to represent the physical domain using an unstructured mesh. These techniques are also used to provide data on thermally induced fatigue, which is then mapped into product lifetime predictions.
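The final mapping from thermal fatigue data to product lifetime is commonly done with an empirical relation such as Coffin-Manson, which relates cycles to failure to the plastic strain range per thermal cycle. The constants below are generic illustrative values for a solder joint, not the paper's calibrated model:

```python
def cycles_to_failure(strain_range, C=0.5, n=2.0):
    """Coffin-Manson estimate: N_f = C * (delta_gamma)^(-n).

    strain_range: plastic shear strain range per thermal cycle
    C, n:         empirical fatigue constants (illustrative values only)
    """
    return C * strain_range ** (-n)

# With n = 2, doubling the plastic strain range quarters the predicted life.
life_small = cycles_to_failure(0.01)   # low strain range -> long life
life_large = cycles_to_failure(0.02)   # doubled strain range -> life / 4
```

In a real flow, the strain range would come from the thermal-stress solution of the finite volume model, and C and n would be calibrated to accelerated test data.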
Abstract:
The role of rhodopsin as a structural prototype for the study of the whole superfamily of G protein-coupled receptors (GPCRs) is reviewed in a historical perspective. Discovered at the end of the nineteenth century, fully sequenced since the early 1980s, and with direct three-dimensional information available since the 1990s, rhodopsin has served as a platform to gather indirect information on the structure of the other superfamily members. Recent breakthroughs have led to the solution of the structures of additional receptors, namely the beta 1- and beta 2-adrenergic receptors and the A(2A) adenosine receptor, now providing an opportunity to gauge the accuracy of homology modeling and molecular docking techniques and to perfect the computational protocol. Notably, in coordination with the solution of the structure of the A(2A) adenosine receptor, the first "critical assessment of GPCR structural modeling and docking" was organized, the results of which highlighted that the construction of accurate models, although challenging, is certainly achievable. The docking of the ligands and the scoring of the poses clearly emerged as the most difficult components. A further goal in the field is certainly to derive the structure of receptors in their signaling state, possibly in complex with agonists. These advances, coupled with the introduction of more sophisticated modeling algorithms and the increase in computer power, raise the expectation of a substantial boost in the robustness and accuracy of computer-aided drug discovery techniques in the coming years.
Abstract:
One way to restore physiological blood flow to occluded arteries involves the deformation of plaque using an intravascular balloon and preventing elastic recoil using a stent. Angioplasty and stent implantation cause unphysiological loading of the arterial tissue, which may lead to tissue in-growth and re-blockage, termed “restenosis.” In this paper, a computational methodology for predicting the time-course of restenosis is presented. Stress-induced damage, computed using a remaining-life approach, stimulates inflammation (production of matrix-degrading factors and growth stimuli). This, in turn, induces a change in smooth muscle cell phenotype from contractile (as exists in the quiescent tissue) to synthetic (as exists in the growing tissue). In this paper, smooth muscle cell activity (migration, proliferation, and differentiation) is simulated in a lattice using a stochastic approach to model individual cell activity. The inflammation equations are examined under simplified loading cases. The mechanobiological parameters of the model were estimated by calibrating the model response to the results of a balloon angioplasty study in humans. The simulation method was then used to simulate restenosis in a two-dimensional model of a stented artery. Cell activity predictions were similar to those observed during neointimal hyperplasia, culminating in the growth of restenosis. As in experiment, the amount of neointima produced increased with the degree of expansion of the stent, and this relationship was found to be highly dependent on the prescribed inflammatory response. It was found that the duration of inflammation affected the amount of restenosis produced, and that this effect was most pronounced with large stent expansions.
In conclusion, the paper shows that the arterial tissue response to mechanical stimulation can be predicted using a stochastic cell modeling approach, and that the simulation captures features of restenosis development observed with real stents. The modeling approach is proposed for application in three-dimensional models of cardiovascular stenting procedures.
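The lattice-based stochastic simulation of individual cell activity can be sketched in miniature as follows. This is a deliberately simplified illustration of the general technique (a single proliferation rule on a 2D grid), with made-up probabilities and no inflammation coupling; it is not the paper's calibrated mechanobiological model:

```python
import random

def simulate(n=20, steps=10, p_prolif=0.3, seed=1):
    """Stochastic lattice model: each cell may divide into an empty
    neighbouring site with probability p_prolif per step.
    Returns the final number of occupied lattice sites."""
    rng = random.Random(seed)
    grid = [[False] * n for _ in range(n)]
    grid[n // 2][n // 2] = True                      # seed one synthetic cell
    for _ in range(steps):
        occupied = [(i, j) for i in range(n) for j in range(n) if grid[i][j]]
        for i, j in occupied:
            if rng.random() < p_prolif:
                # empty von Neumann neighbours of the dividing cell
                nbrs = [(i + di, j + dj)
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < n and 0 <= j + dj < n
                        and not grid[i + di][j + dj]]
                if nbrs:
                    ni, nj = rng.choice(nbrs)
                    grid[ni][nj] = True              # daughter cell placed
    return sum(row.count(True) for row in grid)
```

The paper's model additionally couples migration and differentiation to damage-driven inflammation; the point here is only the stochastic per-cell update on a lattice.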
Abstract:
Melt viscosity is a key indicator of product quality in polymer extrusion processes. However, real-time monitoring and control of viscosity is difficult to achieve. In this article, a novel “soft sensor” approach based on dynamic gray-box modeling is proposed. The soft sensor involves a nonlinear finite impulse response model with adaptable linear parameters for real-time prediction of the melt viscosity based on the process inputs; the model output is then used as an input to a simple fixed-structure model that predicts the barrel pressure, which can be measured online. Finally, the predicted pressure is compared to the measured value and the corresponding error is used as a feedback signal to correct the viscosity estimate. This feedback structure enables online adaptation of the viscosity model in response to modeling errors and disturbances, hence producing a reliable viscosity estimate. Experimental results on different material/die/extruder combinations confirm the effectiveness of the proposed “soft sensor” method based on dynamic gray-box modeling for real-time monitoring and control of polymer extrusion processes. POLYM. ENG. SCI., 2012. © 2012 Society of Plastics Engineers
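The feedback structure of the soft sensor can be sketched as below. Both model forms are drastic stand-ins (a one-term linear "viscosity model" and a proportional "pressure model") chosen only to show the error-driven online adaptation of the viscosity estimator; the gains, names, and model structures are illustrative assumptions, not the article's identified models:

```python
def soft_sensor(inputs, pressures, k_p=2.0, lr=0.1):
    """Adapt the gain of a stand-in viscosity model so that the predicted
    barrel pressure tracks the measured pressure.

    inputs:    process input sequence u[t] (e.g. screw speed, normalised)
    pressures: measured barrel pressure sequence p[t]
    k_p:       fixed-structure pressure model coefficient (assumed)
    lr:        adaptation step size
    """
    gain = 1.0                                  # adaptable model parameter
    estimates = []
    for u, p_meas in zip(inputs, pressures):
        visc = gain * u                         # stand-in for the NFIR viscosity model
        p_pred = k_p * visc                     # fixed-structure pressure model
        gain += lr * (p_meas - p_pred) * u      # feedback correction from pressure error
        estimates.append(visc)
    return estimates, gain
```

With a constant input and a measured pressure consistent with a true gain of 1.5, the adapted gain converges to 1.5, illustrating how the measurable pressure channel corrects the unmeasurable viscosity estimate.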
Abstract:
Groundwater flow in hard-rock aquifers is strongly controlled by the characteristics and distribution of structural heterogeneity. A methodology for catchment-scale characterisation is presented, based on the integration of complementary, multi-scale hydrogeological, geophysical and geological approaches. This was applied to three contrasting catchments underlain by metamorphic rocks in the northern parts of Ireland (Republic of Ireland and Northern Ireland, UK). Cross-validated surface and borehole geophysical investigations confirm the discontinuous overburden, lithological compartmentalisation of the bedrock and important spatial variations of the weathered bedrock profiles at macro-scale. Fracture analysis suggests that the recent (Alpine) tectonic fabric exerts strong control on the internal aquifer structure at meso-scale, which is likely to impact on the anisotropy of aquifer properties. The combination of the interpretation of depth-specific hydraulic-test data with the structural information provided by geophysical tests allows characterisation of the hydrodynamic properties of the identified aquifer units. Regionally, the distribution of hydraulic conductivities can be described by inverse power laws specific to the aquifer litho-type. Observed groundwater flow directions reflect this multi-scale structure. The proposed integrated approach applies widely available investigative tools to identify key dominant structures controlling groundwater flow, characterising the aquifer type for each catchment and resolving the spatial distribution of relevant aquifer units and associated hydrodynamic parameters.
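The inverse power law description of hydraulic conductivity can be illustrated by fitting K = a * z**(-b) to depth-conductivity pairs via least squares in log-log space. The data below are synthetic and the function name is an assumption; the catchment-specific exponents in the study would come from the depth-specific hydraulic tests:

```python
import math

def fit_power_law(z, K):
    """Fit K = a * z**(-b) by ordinary least squares on log-transformed data.
    Returns (a, b)."""
    lx = [math.log(v) for v in z]
    ly = [math.log(v) for v in K]
    n = len(z)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
             / sum((x - mx) ** 2 for x in lx))
    return math.exp(my - slope * mx), -slope    # intercept -> a, -slope -> b
```

A different (a, b) pair per aquifer litho-type then summarises how conductivity decays with depth in each unit.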
Abstract:
The ability of millimetre wave and terahertz systems to penetrate clothing is well known. The fact that the transmission of clothing and the reflectivity of the body vary as a function of frequency is less so. Several instruments have now been developed to exploit this capability. The choice of operating frequency, however, has often been associated with the maturity and cost of the enabling technology rather than with a sound systems engineering approach. Top-level user and systems requirements have been derived to inform the development of design concepts. Emerging micro and nano technology concepts have been reviewed, and we have demonstrated how these can be evaluated against these requirements by simulation using OpenFx, an open-source suite of 3D tools for modeling, animation and visualization which has been modified for use at millimetre waves. © 2012 SPIE.
Abstract:
The helminth parasite Fasciola hepatica secretes cysteine proteases to facilitate tissue invasion, migration, and development within the mammalian host. The major proteases cathepsin L1 (FheCL1) and cathepsin L2 (FheCL2) were recombinantly produced and biochemically characterized. By using site-directed mutagenesis, we show that residues at positions 67 and 205, which lie within the S2 pocket of the active site, are critical in determining the substrate and inhibitor specificity. FheCL1 exhibits a broader specificity and a higher substrate turnover rate compared with FheCL2. However, FheCL2 can efficiently cleave substrates with a Pro in the P2 position and degrade collagen within the triple helices at physiological pH, an activity that among cysteine proteases has only been reported for human cathepsin K. The 1.4-Å three-dimensional structure of FheCL1 was determined by X-ray crystallography, and the three-dimensional structure of FheCL2 was constructed via homology-based modeling. Analysis and comparison of these structures and our biochemical data with those of human cathepsins L and K provided an interpretation of the substrate-recognition mechanisms of these major parasite proteases. Furthermore, our studies suggest that a configuration involving residue 67 and the "gatekeeper" residues 157 and 158 situated at the entrance of the active site pocket creates a topology that endows FheCL2 with its unusual collagenolytic activity. The emergence of a specialized collagenolytic function in Fasciola likely contributes to the success of this tissue-invasive parasite.
Abstract:
Real time digital signal processing demands high-performance implementations of division and square root. This can only be achieved by the design of fast and efficient arithmetic algorithms which address practical VLSI architectural design issues. In this paper, new algorithms for division and square root are described. The new schemes are based on pre-scaling the operands and modifying the classical SRT method such that the result digits and the remainders are computed concurrently and the computations in adjacent rows are overlapped. Consequently, their performance exceeds that of the SRT methods. The hardware cost for higher radices is considerably more than that of the SRT methods, but for many applications this is not prohibitive. A system of equations is presented which enables both an analysis of the method for any radix and the parameters of implementations to be easily determined. This is illustrated for the case of radix 2 and radix 4. In addition, a highly regular array architecture combining the division and square root method is described. © 1994 Kluwer Academic Publishers.
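For reference, the classical radix-2 SRT division that the paper builds on can be sketched in software. Quotient digits are drawn from the redundant set {-1, 0, 1} by comparing the shifted residual against ±1/2, which is what makes the digit selection cheap in hardware. This is a behavioural sketch of the baseline method, not the paper's overlapped scheme, and it assumes a pre-normalised divisor 0.5 <= d < 1:

```python
def srt_divide(x, d, bits=32):
    """Radix-2 SRT division: returns an approximation of x / d.
    Assumes 0.5 <= d < 1 and |x| < d (pre-scaled operands)."""
    assert 0.5 <= d < 1 and abs(x) < d
    w, q = x, 0.0
    for j in range(1, bits + 1):
        w *= 2                           # shift the partial remainder left
        if w >= 0.5:                     # cheap comparison against +1/2
            digit = 1
        elif w <= -0.5:                  # cheap comparison against -1/2
            digit = -1
        else:
            digit = 0                    # redundant digit set allows 0
        w -= digit * d                   # subtract selected divisor multiple
        q += digit * 2.0 ** (-j)         # accumulate signed quotient digit
    return q
```

The redundancy (digit 0 available whenever |2w| < 1/2) keeps the residual bounded by |w| <= d at every step, which is the invariant that the comparison constants ±1/2 guarantee.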
Abstract:
In real time digital signal processing, high-performance modules for division and square root are essential if many powerful algorithms are to be implemented. In this paper, new radix 2 algorithms for SRT division and square root are developed. For these new schemes, the result digits and the residuals are computed concurrently and the computations in adjacent rows are overlapped. Consequently, their performance should exceed that of the radix 2 SRT methods. VLSI array architectures to implement the new division and square root schemes are also presented.
Abstract:
Earlier palynological studies of lake sediments from Easter Island suggest that the island underwent a recent and abrupt replacement of palm-dominated forests by grasslands, interpreted as deforestation by indigenous people. However, the available evidence is inconclusive due to the existence of extended hiatuses and ambiguous chronological frameworks in most of the sedimentary sequences studied. This has given rise to an ongoing debate about the timing and causes of the assumed ecological degradation and cultural breakdown. Our multiproxy study of a core recovered from Lake Raraku highlights the vegetation dynamics and environmental shifts in the catchment and its surroundings during the late Holocene. The sequence contains shorter hiatuses than previously recovered cores and provides a more continuous history of environmental changes. The results show a long, gradual and stepped landscape shift from palm-dominated forests to grasslands. This change started c. 450 BC and lasted about two thousand years. The presence in the pollen record of Verbena litoralis, a common weed associated with human activities, and the significant correlation between shifts in charcoal influx and the dominant pollen types suggest human disturbance of the vegetation. Therefore, human settlement on the island occurred c. 450 BC, some 1500 years earlier than previously assumed. Climate variability also exerted a major influence on environmental changes. Two sedimentary gaps in the record are interpreted as periods of drought that could have prevented peat growth and favoured its erosion during the Medieval Climate Anomaly and the Little Ice Age, respectively. At c. AD 1200, the water table rose and the former Raraku mire turned into a shallow lake, suggesting higher precipitation/evaporation rates coeval with the cooler and wetter Pan-Pacific AD 1300 event. Pollen and diatom records show large vegetation changes due to human activities c. AD 1200.
Other recent vegetation changes also due to human activities entail the introduction of taxa (e.g. Psidium guajava, Eucalyptus sp.) and the disappearance of indigenous plants such as Sophora toromiro during the last two centuries. Although the evidence is not conclusive, the American origin of V. litoralis re-opens the debate about the possible role of Amerindians in the human colonisation of Easter Island.
Abstract:
Due to increasing water scarcity and accelerating industrialization and urbanization, the efficiency of irrigation water use in Northern China needs urgent improvement. Based on a sample of 347 wheat growers in the Guanzhong Plain, this paper simultaneously estimates a production function and its corresponding first-order conditions for cost minimization to analyze the efficiency of irrigation water use. The main findings are that average technical, allocative, and overall economic efficiency are 0.35, 0.86 and 0.80, respectively. In a second-stage analysis, we find that farmers’ perception of water scarcity, water price and irrigation infrastructure increase irrigation water allocative efficiency, while land fragmentation decreases it. We also show that farmers’ income loss due to higher water prices can be offset by increasing irrigation water use efficiency.
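The notion of allocative efficiency used above can be made concrete with a textbook two-input example: under a Cobb-Douglas technology, the first-order conditions for cost minimization pin down the optimal input mix, and allocative efficiency is the ratio of the minimum cost of the observed output to the cost actually incurred. The functional form, parameter values, and function names below are hypothetical illustrations, not the paper's estimates:

```python
def min_cost(y, A, a, b, p1, p2):
    """Minimum cost of producing output y under y = A * x1**a * x2**b.
    From the first-order conditions, the optimal mix satisfies
    p1/p2 = (a*x2)/(b*x1)."""
    x1 = (y / A) ** (1 / (a + b)) * ((a * p2) / (b * p1)) ** (b / (a + b))
    x2 = (y / A) ** (1 / (a + b)) * ((b * p1) / (a * p2)) ** (a / (a + b))
    return p1 * x1 + p2 * x2

def allocative_efficiency(y, A, a, b, p1, p2, x1_obs, x2_obs):
    """Ratio of minimum cost to observed cost (1.0 = fully efficient)."""
    return min_cost(y, A, a, b, p1, p2) / (p1 * x1_obs + p2 * x2_obs)
```

For instance, a farmer using the input bundle (2, 0.5) to produce the same output obtainable at minimum cost from (1, 1) under symmetric prices has allocative efficiency 0.8, a figure of the same flavour as the sample averages reported above.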