983 results for Level Set Approximation


Relevance:

30.00%

Publisher:

Abstract:

We analyze how a set of 22 European countries was affected by increased Chinese export competition between 1995 and 2008. Employing product-group-level data, we observe a reduction in the export volumes of European countries due to increased Chinese export competition. This deceleration in the export sector induces changes within the manufacturing industries, especially a decline in employment. When using more aggregated, regional-level data, our analysis shows that the industry sector as a whole declines, resulting in an increased unemployment rate. The importance of Chinese export competition for Europe is attributable to Europe's high export intensity.

Relevance:

30.00%

Publisher:

Abstract:

Many countries have set challenging wind power targets to achieve by 2020. This paper presents a realistic analysis of curtailment and constraint of wind energy at a nodal level, using a unit commitment and economic dispatch model of the Irish Single Electricity Market in 2020. The key findings show that a significant reduction in curtailment can be achieved when the system non-synchronous penetration limit increases from 65% to 75%. For the period analyzed, this results in a decreased total generation cost and a reduction in the dispatch-down of wind. However, some nodes experience significant dispatch-down of wind, which can be on the order of 40%. This work illustrates the importance of carrying out analysis at a nodal level for the purpose of power system planning.
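
The interaction between a system non-synchronous penetration (SNSP) limit and wind curtailment described in this abstract can be illustrated with a single-period dispatch toy. This is only a sketch: the paper uses a full unit commitment and economic dispatch model of the Irish Single Electricity Market, and every unit name and number below is invented for the example.

```python
# Minimal single-period economic dispatch with an SNSP cap (illustrative only).
import numpy as np
from scipy.optimize import linprog

demand = 4000.0            # MW, hypothetical system demand
snsp_limit = 0.75          # max share of demand served by non-synchronous sources

# Units: [wind, gas, coal]; wind is the only non-synchronous source here.
cost = np.array([0.0, 80.0, 60.0])        # EUR/MWh, hypothetical marginal costs
p_max = np.array([3500.0, 2000.0, 1500.0])
non_sync = np.array([1.0, 0.0, 0.0])

# Minimize cost subject to: total generation == demand,
# and non-synchronous output <= snsp_limit * demand.
res = linprog(
    c=cost,
    A_eq=[np.ones(3)], b_eq=[demand],
    A_ub=[non_sync], b_ub=[snsp_limit * demand],
    bounds=list(zip(np.zeros(3), p_max)),
)
dispatch = res.x
curtailed_wind = p_max[0] - dispatch[0]   # dispatch-down of wind, MW
print(dispatch, curtailed_wind)
```

Raising `snsp_limit` in this toy directly reduces `curtailed_wind`, which is the system-level effect the paper quantifies node by node.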

Relevance:

30.00%

Publisher:

Abstract:

The water stored in and flowing through the subsurface is fundamental for sustaining human activities and needs, feeding water and its constituents to surface water bodies and supporting the functioning of their ecosystems. Quantifying the changes that affect subsurface water is crucial for our understanding of its dynamics and of changes driven by climate change and other changes in the landscape, such as changes in land use and water use. It is inherently difficult to directly measure soil moisture and groundwater levels over large spatial scales and long time periods. Models are therefore needed to capture the soil moisture and groundwater level dynamics over such large spatiotemporal scales. This thesis develops a modeling framework that allows for long-term catchment-scale screening of soil moisture and groundwater level changes. The novelty of this development resides in an explicit link drawn between catchment-scale hydroclimatic and soil hydraulic conditions, using observed runoff data as an approximation of the soil water flux and accounting for the effects of snow storage-melt dynamics on that flux. Both past and future relative changes can be assessed with this modeling framework, with future change projections based on common climate model outputs. By direct model-observation comparison, the thesis shows that the developed modeling framework can reproduce the temporal variability of large-scale changes in soil water storage, as obtained from the GRACE satellite product, for most of 25 large study catchments around the world. When further compared with locally measured soil water content and groundwater levels in 10 U.S. catchments, the modeling approach reproduces reasonably well the relative seasonal fluctuations around long-term average values. The developed modeling framework is further used to project soil moisture changes due to expected future climate change for 81 catchments around the world. The future soil moisture changes depend on the considered radiative forcing scenario (RCP) but are overall large for the occurrence frequency of dry and wet events and for the inter-annual variability of seasonal soil moisture. These changes tend to be larger for dry events and the dry season than for the corresponding wet quantities, indicating increased drought risk for some parts of the world.
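
The core idea of using observed runoff as a proxy for the soil water flux can be caricatured with a daily bucket water balance. This is a deliberately crude sketch under stated assumptions (fixed storage capacity, synthetic forcing, no snow module); the thesis framework is considerably richer.

```python
# Minimal daily bucket water-balance sketch: storage is updated with
# precipitation P, evapotranspiration E, and observed runoff Q used as
# the flux term. All inputs below are synthetic placeholders.
import numpy as np

def soil_moisture_series(precip, evap, runoff, capacity=200.0, s0=100.0):
    """Track storage S_t = clip(S_{t-1} + P - E - Q, 0, capacity), in mm."""
    s = np.empty(len(precip))
    prev = s0
    for t, (p, e, q) in enumerate(zip(precip, evap, runoff)):
        prev = np.clip(prev + p - e - q, 0.0, capacity)
        s[t] = prev
    return s

rng = np.random.default_rng(0)
days = 365
p = rng.gamma(0.5, 8.0, days)     # synthetic precipitation, mm/day
e = np.full(days, 2.0)            # synthetic evapotranspiration, mm/day
q = 0.3 * p                       # runoff standing in for an observed flux
print(soil_moisture_series(p, e, q).mean())
```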

Relevance:

30.00%

Publisher:

Abstract:

Requirements engineering is a key activity in the development of a software project, and like any other development activity it is not without risks. This work is an empirical study of requirements risks using machine learning techniques, specifically Bayesian network classifiers. We have defined several models to predict the risk level of a given requirement using three datasets that collect metrics taken from the requirement specifications of different projects. The classification accuracy of the Bayesian models obtained is evaluated and compared using several classification performance measures. The results of the experiments show that Bayesian networks yield valid predictors. In particular, a tree-augmented network (TAN) structure shows competitive experimental performance on all datasets. Moreover, the relations the models establish between the variables collected to determine a requirement's risk level match those identified by requirements engineers. We show that Bayesian networks are valid tools for automating risk assessment in requirements engineering.
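
A minimal runnable version of this setup: score a Bayesian classifier on requirement metrics with cross-validation. scikit-learn's `GaussianNB` (the simplest Bayesian network classifier) stands in for the paper's tree-augmented network, and the metrics and risk labels below are synthetic placeholders, not the study's data.

```python
# Baseline sketch: Bayesian classifier for requirement risk, cross-validated.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
# Hypothetical requirement metrics, e.g. size, ambiguity score, dependency count.
X = rng.normal(size=(n, 3))
# Hypothetical binary risk label loosely tied to the metrics.
risk = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)

scores = cross_val_score(GaussianNB(), X, risk, cv=10)  # classification accuracy
print(scores.mean())
```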

Relevance:

30.00%

Publisher:

Abstract:

Robotics is an emergent branch of engineering that involves the conception, manufacture, and control of robots. It is a multidisciplinary field that combines electronics, design, computer science, artificial intelligence, mechanics and nanotechnology. Its evolution results in machines that are able to perform tasks with some level of complexity. Multi-agent systems are a research topic within robotics, as they allow higher-complexity problems to be solved through the execution of simple routines. Robotic soccer allows the study and development of robotics and multi-agent systems, as the agents have to work together as a team while facing many of the problems found in everyday life, such as adaptation to a highly dynamic environment like that of a soccer game. CAMBADA is the robotic soccer team of the IRIS research group at IEETA, composed of faculty, researchers and students of the University of Aveiro; its main annual objective is participation in the RoboCup Middle Size League. The purpose of this work is to improve coordination in set-piece situations. This thesis introduces a new offensive behavior and adapts the existing ones, and proposes a new positioning method for defensive situations. The developed work was incorporated into the robots' competition software, which allows this dissertation to present the experimental results obtained both in simulation and with the physical robots in the laboratory.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k)-approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink.
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
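
The pruning process above compares subgraphs by density, defined in the abstract as the ratio of a graph's cost to the number of vertices it contains. A minimal helper for that one quantity (the 2-vertex-connectivity machinery itself is well beyond a sketch), using hypothetical edge costs on a small networkx graph:

```python
# Density of an induced subgraph: total edge cost / vertex count.
import networkx as nx

def density(G, nodes):
    """Cost of the subgraph induced on `nodes`, divided by its vertex count."""
    H = G.subgraph(nodes)
    cost = sum(w for _, _, w in H.edges(data="cost"))
    return cost / H.number_of_nodes()

G = nx.cycle_graph(6)                 # a cycle is 2-vertex-connected
nx.set_edge_attributes(G, 1.0, "cost")
print(density(G, range(6)))           # 6 edges * 1.0 / 6 nodes = 1.0
print(density(G, [0, 1, 2]))          # induced path: 2 edges / 3 nodes
```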

Relevance:

30.00%

Publisher:

Abstract:

This document does NOT address the issue of particle backscattering quality control (either real-time or delayed mode). As a preliminary step towards that goal, this document seeks to ensure that all countries deploying floats equipped with backscattering sensors document the data and metadata related to these floats properly. We produced this document in response to action item 9 from the first Bio-Argo Data Management meeting in Hyderabad (November 12-13, 2012). If the recommendations contained herein are followed, we will end up with a more uniform set of particle backscattering data within the Bio-Argo data system, allowing users to begin analyzing not only their own particle backscattering data, but also those of others, in the true spirit of Argo data sharing.

Relevance:

30.00%

Publisher:

Abstract:

This document does NOT address the issue of chlorophyll-a quality control (either real-time or delayed mode). As a preliminary step towards that goal, this document seeks to ensure that all countries deploying floats equipped with chlorophyll-a sensors document the data and metadata related to these floats properly. We produced this document in response to action item 3 from the first Bio-Argo Data Management meeting in Hyderabad (November 12-13, 2012). If the recommendations contained herein are followed, we will end up with a more uniform set of chlorophyll-a data within the Bio-Argo data system, allowing users to begin analyzing not only their own chlorophyll-a data, but also those of others, in the true spirit of Argo data sharing.

Relevance:

30.00%

Publisher:

Abstract:

This document does NOT address the issue of oxygen data quality control (either real-time or delayed mode). As a preliminary step towards that goal, this document seeks to ensure that all countries deploying floats equipped with oxygen sensors document the data and metadata related to these floats properly. We produced this document in response to action item 14 from the AST-10 meeting in Hangzhou (March 22-23, 2009): Denis Gilbert to work with Taiyo Kobayashi and Virginie Thierry to ensure DACs are processing oxygen data according to recommendations. If the recommendations contained herein are followed, we will end up with a more uniform set of oxygen data within the Argo data system, allowing users to begin analyzing not only their own oxygen data, but also those of others, in the true spirit of Argo data sharing. Indications provided in this document are valid as of the date of writing. It is very likely that changes in sensors, calibrations and conversion equations will occur in the future. Please contact V. Thierry (vthierry@ifremer.fr) regarding any inconsistencies or missing information. A dedicated webpage on the Argo Data Management website (www) contains all information regarding Argo oxygen data management: current and previous versions of this cookbook, oxygen sensor manuals, calibration sheet examples, examples of Matlab code to process oxygen data, test data, etc.

Relevance:

30.00%

Publisher:

Abstract:

Datacenters have emerged as the dominant form of computing infrastructure over the last two decades. The tremendous increase in the requirements of data analysis has led to a proportional increase in power consumption and datacenters are now one of the fastest growing electricity consumers in the United States. Another rising concern is the loss of throughput due to network congestion. Scheduling models that do not explicitly account for data placement may lead to a transfer of large amounts of data over the network causing unacceptable delays. In this dissertation, we study different scheduling models that are inspired by the dual objectives of minimizing energy costs and network congestion in a datacenter. As datacenters are equipped to handle peak workloads, the average server utilization in most datacenters is very low. As a result, one can achieve huge energy savings by selectively shutting down machines when demand is low. In this dissertation, we introduce the network-aware machine activation problem to find a schedule that simultaneously minimizes the number of machines necessary and the congestion incurred in the network. Our model significantly generalizes well-studied combinatorial optimization problems such as hard-capacitated hypergraph covering and is thus strongly NP-hard. As a result, we focus on finding good approximation algorithms. Data-parallel computation frameworks such as MapReduce have popularized the design of applications that require a large amount of communication between different machines. Efficient scheduling of these communication demands is essential to guarantee efficient execution of the different applications. In the second part of the thesis, we study the approximability of the co-flow scheduling problem that has been recently introduced to capture these application-level demands. Finally, we also study the question, "In what order should one process jobs?" Often, precedence constraints specify a partial order over the set of jobs and the objective is to find suitable schedules that satisfy the partial order. However, in the presence of hard deadline constraints, it may be impossible to find a schedule that satisfies all precedence constraints. In this thesis we formalize different variants of job scheduling with soft precedence constraints and conduct the first systematic study of these problems.
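
To make the machine activation idea concrete, here is a toy greedy heuristic: open the fewest machines whose combined capacity covers total demand, preferring the best capacity-per-activation-cost ratio. This is only in the spirit of the problem; the dissertation's network-aware model also charges for congestion and is attacked with approximation algorithms, not this greedy rule.

```python
# Toy greedy machine activation: cover `demand` job-slots at low opening cost.
def activate_machines(capacities, costs, demand):
    order = sorted(range(len(capacities)),
                   key=lambda i: costs[i] / capacities[i])  # cheapest capacity first
    active, covered = [], 0.0
    for i in order:
        if covered >= demand:
            break
        active.append(i)
        covered += capacities[i]
    return active

# Hypothetical cluster: per-machine capacity (job-slots) and activation cost.
caps = [16, 8, 8, 4]
cost = [10.0, 6.0, 7.0, 2.0]
print(activate_machines(caps, cost, demand=20))   # -> [3, 0], covering 20 slots
```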

Relevance:

30.00%

Publisher:

Abstract:

In this work, the relationship between diameter at breast height (d) and total height (h) of individual trees was modeled with the aim of establishing provisional height-diameter (h-d) equations for maritime pine (Pinus pinaster Ait.) stands in the Lomba ZIF, Northeast Portugal. Using data collected locally, several local and generalized h-d equations from the literature were tested, and adaptations were also considered. Model fitting was conducted using standard nonlinear least squares (nls) methods. The best local and generalized models selected were also tested as mixed models, applying a first-order conditional expectation (FOCE) approximation procedure and maximum likelihood methods to estimate fixed and random effects. For the calibration of the mixed models, and in order to be consistent with the fitting procedure, the FOCE method was also used to test different sampling designs. The results showed that the local h-d equations with two parameters performed better than the analogous models with three parameters. However, a unique set of parameter values for the local model cannot be used for all maritime pine stands in the Lomba ZIF, and thus a generalized model including covariates from the stand, in addition to d, was necessary to obtain adequate predictive performance. No evident superiority of the generalized mixed model over the generalized model with nonlinear least squares parameter estimates was observed. On the other hand, in the case of the local model, the predictive performance greatly improved when random effects were included. The results showed that the mixed model based on the selected local h-d equation is a viable alternative for estimating h if variables from the stand are not available. Moreover, it was observed that an adequate calibrated response can be obtained using only 2 to 5 additional h-d measurements in quantile (or random) trees from the distribution of d in the plot (stand). Balancing sampling effort, accuracy and straightforwardness in practical applications, the generalized model from the nls fit is recommended. Examples of applications of the selected generalized equation to forest management are presented, namely how to use it to complete missing information from forest inventories, and how such an equation can be incorporated in a stand-level decision support system that aims to optimize forest management for the maximization of wood volume production in Lomba ZIF maritime pine stands.
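
The nls fitting step reads directly as a few lines of code. A minimal sketch with `scipy.optimize.curve_fit`, assuming one common two-parameter exponential h-d form with the 1.3 m breast-height offset; the specific functional form and the d/h values below are illustrative assumptions, since the abstract does not name the selected equation.

```python
# Fit a two-parameter local h-d equation by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def hd_model(d, a, b):
    """h = 1.3 + a * exp(-b / d), with d in cm and h in m (assumed form)."""
    return 1.3 + a * np.exp(-b / d)

# Synthetic (d, h) pairs standing in for plot measurements.
d = np.array([8.0, 12.0, 16.0, 20.0, 25.0, 30.0, 35.0])
h = np.array([6.1, 9.0, 11.2, 12.8, 14.5, 15.6, 16.4])

(a_hat, b_hat), _ = curve_fit(hd_model, d, h, p0=(20.0, 10.0))
print(a_hat, b_hat)
print(hd_model(np.array([22.0]), a_hat, b_hat))  # predict h for a 22 cm tree
```

Calibrating the mixed-model counterpart would amount to predicting the random effect for a new stand from the 2 to 5 extra h-d measurements the abstract mentions.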

Relevance:

30.00%

Publisher:

Abstract:

Dataset for publication in PLOS One

Relevance:

30.00%

Publisher:

Abstract:

Organismal development, homeostasis, and pathology are rooted in inherently probabilistic events. From gene expression to cellular differentiation, rates and likelihoods shape the form and function of biology. Processes ranging from growth to cancer homeostasis to reprogramming of stem cells all require transitions between distinct phenotypic states, and these occur at defined rates. Therefore, measuring the fidelity and dynamics with which such transitions occur is central to understanding natural biological phenomena and is critical for therapeutic interventions.

While these processes may produce robust population-level behaviors, decisions are made by individual cells. In certain circumstances, these minuscule computing units effectively roll dice to determine their fate. And while the 'omics' era has provided vast amounts of data on what these populations are doing en masse, the behaviors of the underlying units of these processes get washed out in averages.

Therefore, in order to understand the behavior of a sample of cells, it is critical to reveal how its underlying components, or mixture of cells in distinct states, each contribute to the overall phenotype. As such, we must first define what states exist in the population, determine what controls the stability of these states, and measure in high dimensionality the dynamics with which these cells transition between states.

To address a specific example of this general problem, we investigate the heterogeneity and dynamics of mouse embryonic stem cells (mESCs). While a number of reports have identified particular genes in ES cells that switch between 'high' and 'low' metastable expression states in culture, it remains unclear how levels of many of these regulators combine to form states in transcriptional space. Using a method called single molecule mRNA fluorescent in situ hybridization (smFISH), we quantitatively measure and fit distributions of core pluripotency regulators in single cells, identifying a wide range of variabilities between genes, but each explained by a simple model of bursty transcription. From this data, we also observed that strongly bimodal genes appear to be co-expressed, effectively limiting the occupancy of transcriptional space to two primary states across genes studied here. However, these states also appear punctuated by the conditional expression of the most highly variable genes, potentially defining smaller substates of pluripotency.
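
A standard reduction of the "simple model of bursty transcription" is that steady-state mRNA copy numbers follow a negative binomial distribution (burst frequency and burst size map to its two parameters). A minimal maximum-likelihood fit to smFISH-like counts is sketched below; the counts are simulated, not the paper's measurements.

```python
# MLE fit of a negative binomial to smFISH-like mRNA counts.
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

rng = np.random.default_rng(2)
counts = nbinom.rvs(5, 0.2, size=500, random_state=rng)  # synthetic counts

def neg_log_lik(params):
    r, p = params
    return -nbinom.logpmf(counts, r, p).sum()

fit = minimize(neg_log_lik, x0=[1.0, 0.5],
               bounds=[(1e-3, None), (1e-3, 1 - 1e-3)])
r_hat, p_hat = fit.x
print(r_hat, p_hat)   # should recover roughly r=5, p=0.2
```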

Having defined the transcriptional states, we next asked what might control their stability or persistence. Surprisingly, we found that DNA methylation, a mark normally associated with irreversible developmental progression, was itself differentially regulated between these two primary states. Furthermore, both acute and chronic inhibition of DNA methyltransferase activity led to reduced heterogeneity in the population, suggesting that metastability can be modulated by this strong epigenetic mark.

Finally, because understanding the dynamics of state transitions is fundamental to a variety of biological problems, we sought to develop a high-throughput method for the identification of cellular trajectories without the need for cell-line engineering. We achieved this by combining cell-lineage information gathered from time-lapse microscopy with endpoint smFISH for measurements of final expression states. Applying a simple mathematical framework to these lineage-tree associated expression states enables the inference of dynamic transitions. We apply our novel approach in order to infer temporal sequences of events, quantitative switching rates, and network topology among a set of ESC states.
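
The inference step in miniature: given (current state, next state) pairs read off lineage trees, a discrete transition matrix can be estimated by normalized counts. The two-state labels and pairs below are invented for illustration; the thesis infers rates and topology over a richer set of ESC states.

```python
# Estimate a row-stochastic transition matrix from observed state pairs.
import numpy as np

pairs = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 0), (1, 1)]
n_states = 2
counts = np.zeros((n_states, n_states))
for current, nxt in pairs:
    counts[current, nxt] += 1

T = counts / counts.sum(axis=1, keepdims=True)
print(T)   # row i = P(next state | currently in state i)
```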

Taken together, we identify distinct expression states in ES cells, gain fundamental insight into how a strong epigenetic modifier enforces the stability of these states, and develop and apply a new method for the identification of cellular trajectories using scalable in situ readouts of cellular state.

Relevance:

30.00%

Publisher:

Abstract:

Buses are often considered a slow, low-comfort and low-reliability transport system, hence their negative, poor image. In the framework of the 3iBS project (2012), several examples of innovative and/or effective solutions regarding the Level of Service (LoS) were analysed with the aim of providing operators, practitioners and policy makers with a set of Good Practice Guidelines to strengthen the competitiveness of the bus in the urban environment. The identification of the key indicators regarding vehicles, infrastructure and operation was made possible through the analysis of a set of case studies, among them Barcelona (Spain), Cagliari (Italy), London (United Kingdom), and Paris and Nantes (France). A cross-comparison between the case studies was carried out to contrast the level of achievement of the different criteria considered. The information provided on Regulatory, Financial and Technical issues allows the identification of a number of specific factors influencing the implementation of a high-quality transport scheme, and sets the basis for the elaboration of a set of Guidelines for the implementation of an intelligent, innovative and integrated bus system, including the main barriers to be tackled.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to determine the optimal feeding level and feeding frequency for the culture of freshwater angelfish (Pterophyllum scalare). A randomized block design in a factorial scheme (3 × 2) with three feeding levels (30, 60 and 90 g/kg of body weight (BW)/day) and two feeding frequencies (1x and 2x/day) was set up in duplicate, representing 24 experimental units. Data were analyzed using two-way ANOVA and the Tukey test for comparison between means. After 84 days, results indicated that both factors influenced fish performance. No interaction between these factors was, however, observed. Increased feeding level and feeding frequency resulted in increased feed intake. The feed conversion ratio was negatively affected by feeding level, but not affected by feeding frequency. Final weights were higher when fish were fed twice daily, at levels of 60 or 90 g/kg BW/day. Specific growth rate was higher when fish received 60 or 90 g/kg BW/day, regardless of the feeding frequency. Survival was not affected by any treatment, with mean survival rates higher than 90%. It is recommended that juveniles be fed at a level of 60 g/kg BW/day with a minimum of two meals per day, to attain optimal survival, growth and feed efficiency.
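
The stated analysis pipeline (two-way ANOVA on the 3 × 2 factorial, then Tukey's test) translates directly into a few lines with statsmodels. The data below are simulated stand-ins for the angelfish performance measurements, so the printed tables illustrate the method only, not the study's results.

```python
# Two-way ANOVA on a 3 x 2 factorial, followed by Tukey's HSD.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
levels = np.repeat([30, 60, 90], 8)               # feeding level, g/kg BW/day
freqs = np.tile(np.repeat([1, 2], 4), 3)          # feeding frequency, meals/day
gain = 0.05 * levels + 0.8 * freqs + rng.normal(scale=0.5, size=24)

df = pd.DataFrame({"level": levels, "freq": freqs, "gain": gain})
model = ols("gain ~ C(level) * C(freq)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))            # main effects + interaction
print(pairwise_tukeyhsd(df["gain"], df["level"]))  # pairwise feeding levels
```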