916 results for modeling and prediction


Relevance:

90.00%

Publisher:

Abstract:

Dinoflagellates possess large genomes in which most genes are present in many copies. This has made studies of their genomic organization and phylogenetics challenging. Recent advances in sequencing technology have made deep sequencing of dinoflagellate transcriptomes feasible. This dissertation investigates the genomic organization of dinoflagellates to better understand the challenges of assembling dinoflagellate transcriptomic and genomic data from short-read sequencing methods, and develops new techniques that utilize deep sequencing data to identify orthologous genes across a diverse set of taxa. To better understand the genomic organization of dinoflagellates, a genomic cosmid clone of the tandemly repeated gene Alcohol Dehydrogenase (AHD) was sequenced and analyzed. The organization of this clone was found to be counter to prevailing hypotheses of genomic organization in dinoflagellates. Further, a new non-canonical splicing motif was described that could greatly improve the automated modeling and annotation of genomic data. A custom phylogenetic marker discovery pipeline, incorporating methods that leverage the statistical power of large data sets, was written. A case study on Stramenopiles was undertaken to test its utility in resolving relationships between known groups as well as the phylogenetic affinity of seven unknown taxa. The pipeline generated a set of 373 genes useful as phylogenetic markers that successfully resolved relationships among the major groups of Stramenopiles and placed all unknown taxa on the tree with strong bootstrap support. This pipeline was then used to discover 668 genes useful as phylogenetic markers in dinoflagellates. Phylogenetic analysis of 58 dinoflagellates, using this set of markers, produced a phylogeny with good support for all branches. The Suessiales were found to be sister to the Peridiniales. The Prorocentrales formed a monophyletic group with the Dinophysiales that was sister to the Gonyaulacales. The Gymnodiniales were found to be paraphyletic, forming three monophyletic groups. While this pipeline was used to find phylogenetic markers, it will likely also be useful for finding orthologs of interest for other purposes, for the discovery of horizontally transferred genes, and for the separation of sequences in metagenomic data sets.
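As an illustration of one common screening step used in marker-discovery pipelines of this kind (not the dissertation's actual pipeline), the sketch below keeps only orthologous groups that are present as single copies in nearly all sampled taxa. The group names, taxa, threshold, and the `single_copy_markers` helper are illustrative assumptions.

```python
# Minimal sketch: filter orthologous groups to single-copy, broadly shared candidates.
from collections import Counter

def single_copy_markers(ortho_groups, taxa, min_taxa_fraction=0.9):
    """ortho_groups: dict mapping group id -> list of (taxon, gene_id) members."""
    markers = []
    for group_id, members in ortho_groups.items():
        counts = Counter(taxon for taxon, _ in members)
        covered = sum(1 for t in taxa if counts.get(t, 0) == 1)
        multi_copy = any(c > 1 for c in counts.values())
        # Keep groups that are single-copy and cover most of the sampled taxa.
        if not multi_copy and covered >= min_taxa_fraction * len(taxa):
            markers.append(group_id)
    return markers

taxa = ["taxonA", "taxonB", "taxonC"]
groups = {
    "OG0001": [("taxonA", "g1"), ("taxonB", "g2"), ("taxonC", "g3")],
    "OG0002": [("taxonA", "g4"), ("taxonA", "g5"), ("taxonB", "g6")],  # paralogs present
}
print(single_copy_markers(groups, taxa))  # -> ['OG0001']
```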

Relevance:

90.00%

Publisher:

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. In addition, we investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. Unlike in 2D ICs, where shutdown gates are commonly assumed to be cheap and therefore applicable at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past. In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of the electrical and reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and low latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
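As a point of reference for the electromigration reliability discussion, the sketch below evaluates the classic Black's-equation form that is widely used for EM lifetime estimates. It is not the dissertation's co-simulation framework, and every parameter value (prefactor, current-density exponent, activation energy, operating conditions) is an illustrative assumption.

```python
# Hedged sketch: Black's equation for electromigration mean time to failure (MTTF).
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def em_mttf(current_density_a_per_cm2, temperature_k,
            a_const=1e10, n_exp=2.0, activation_energy_ev=0.8):
    """MTTF (arbitrary time units) = A * J^-n * exp(Ea / (k*T))."""
    return (a_const
            * current_density_a_per_cm2 ** (-n_exp)
            * math.exp(activation_energy_ev / (BOLTZMANN_EV * temperature_k)))

# Higher current density or temperature in a TSV shortens the estimated lifetime.
print(em_mttf(1e5, 350.0))
print(em_mttf(2e5, 380.0))
```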

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and an Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is better suited to modelling human reactive behaviour in the retail sector. In order to study the output accuracy of both models, we carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment used a large UK department store as a case study, in which we modelled an efficient implementation of management policy in the store's fitting room using DES and ABS. Overall, we found that both simulation models were a good representation of the real system when modelling human reactive behaviour.
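The sketch below shows one common form such an output-validation check can take: building a confidence interval from replicated simulation runs and checking whether the real system's observed value falls inside it. All numbers are invented placeholders, not the study's data.

```python
# Minimal validation sketch under assumed data: do simulation CIs cover the real value?
import math
import statistics

def confidence_interval(samples, z=1.96):
    """Approximate 95% confidence interval for the mean of replicated runs."""
    mean = statistics.mean(samples)
    half = z * statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - half, mean + half

des_runs = [41.2, 39.8, 40.5, 42.1, 40.9]   # e.g. mean fitting-room wait (minutes)
abs_runs = [43.0, 41.7, 42.5, 44.2, 42.9]
real_value = 41.5                            # observed in the real system (illustrative)

for name, runs in [("DES", des_runs), ("ABS", abs_runs)]:
    lo, hi = confidence_interval(runs)
    print(name, "CI:", (round(lo, 2), round(hi, 2)),
          "covers real value:", lo <= real_value <= hi)
```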

Relevance:

90.00%

Publisher:

Abstract:

Renewable energy technologies have long-term economic and environmental advantages over fossil fuels, and solar power is the most abundant renewable resource, supplying 120 PW across Earth's surface. In recent years the cost of photovoltaic modules has reached grid parity in many areas of the world, including much of the USA. A combination of economic and environmental factors has encouraged the adoption of solar technology and led to an annual growth rate in photovoltaic capacity of 76% in the US between 2010 and 2014. Despite the enormous growth of the solar energy industry, commercial unit efficiencies are still far below their theoretical limits. A push for thinner cells may reduce device cost and could potentially increase device performance. Fabricating thinner cells reduces bulk recombination, but at the cost of absorbing less light. This tradeoff generally favors thinner devices, up to a maximum efficiency beyond which the benefit of reduced recombination is overwhelmed by the suppressed absorption. Light trapping allows the solar cell to circumvent this limitation and realize further performance gains (as well as continued cost reduction) from decreasing the device thickness. This thesis presents several advances in experimental characterization, theoretical modeling, and device applications for light trapping in thin-film solar cells. We begin by introducing light trapping strategies and discussing theoretical limits of light trapping in solar cells. This is followed by an overview of the equipment developed for light trapping characterization. Next we discuss our recent work measuring internal light scattering and a new scattering model for predicting the effects of dielectric nanoparticle back scatterers on thin-film device absorption. The new model is extended and generalized to arbitrary stacks of stratified media containing scattering structures. Finally, we investigate an application of these techniques using polymer dispersed liquid crystals to produce switchable solar windows, and show that these devices have the potential to be self-powering.
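To make the thickness/absorption tradeoff concrete, the sketch below compares single-pass (Beer-Lambert) absorption against the commonly cited 4n² Lambertian light-trapping limit for a weakly absorbed wavelength. The absorption coefficient, refractive index, and thicknesses are illustrative assumptions, not data from the thesis.

```python
# Hedged sketch: single-pass absorption vs. an idealized Lambertian light-trapping limit.
import math

def single_pass_absorption(alpha_per_um, thickness_um):
    """Beer-Lambert absorption for one pass through the absorber."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

def lambertian_limit_absorption(alpha_per_um, thickness_um, n_index=3.5):
    """Tiedje/Yablonovitch-style approximation for an ideal Lambertian scatterer."""
    x = 4.0 * n_index ** 2 * alpha_per_um * thickness_um
    return x / (x + 1.0)

alpha = 0.01  # 1/um, weakly absorbed wavelength (illustrative)
for t in (1.0, 10.0, 100.0):
    print(f"{t:6.1f} um  single-pass {single_pass_absorption(alpha, t):.3f}  "
          f"light-trapped {lambertian_limit_absorption(alpha, t):.3f}")
```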

Relevance:

90.00%

Publisher:

Abstract:

The efficiency of current cargo screening processes at sea and air ports is unknown, as no benchmarks exist against which they could be measured. Some manufacturer benchmarks exist for individual sensors, but we have not found any that take a holistic view of the screening procedures, assessing a combination of sensors while also taking operator variability into account. Simply adding up the resources and manpower used is not an effective way of assessing systems in which human decision-making and operator compliance with rules play a vital role. For such systems more advanced assessment methods need to be used, taking into account that the cargo screening process is of a dynamic and stochastic nature. Our project aim is to develop a decision support tool (a cargo-screening system simulator) that will map the right technology and manpower to the right commodity-threat combination in order to maximize detection rates. In this paper we present a project outline and highlight the research challenges we have identified so far. In addition, we introduce our first case study, in which we investigate the cargo screening process at the ferry port in Calais.

Relevance:

90.00%

Publisher:

Abstract:

Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop floor and retail performance. Despite the fact that we are working within a relatively novel and complex domain, it is clear that using an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK's top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested the first version of a retail branch agent-based simulation model, in which we focus on simulating the effects of people management practices on customer satisfaction and sales. In our experiments we have looked at employee development and cashier empowerment as two examples of shop-floor management practices. In this paper we describe the underlying conceptual ideas and the features of our simulation model. We present a selection of experiments we have conducted in order to validate our simulation model and to show its potential for answering "what-if" questions in a retail context. We also introduce a novel performance measure which we have created to quantify customers' satisfaction with service, based on their individual shopping experiences.
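As a toy illustration of how a satisfaction measure can be built up from individual shopping experiences in an agent-based setting, the sketch below tracks per-customer visit outcomes under two staffing levels. The agents, wait-time model, patience values, and staffing scenarios are all invented for the example; this is not the authors' model or their performance measure.

```python
# Toy agent-based sketch under assumed parameters: satisfaction from individual visits.
import random

class Customer:
    def __init__(self, patience):
        self.patience = patience      # minutes of waiting this customer will tolerate
        self.experiences = []         # 1 = satisfied visit, 0 = dissatisfied visit

    def visit(self, staff_on_floor):
        wait = random.expovariate(staff_on_floor / 10.0)  # toy wait-time model
        self.experiences.append(1 if wait <= self.patience else 0)

def satisfaction_index(customers):
    """Average, over customers, of each customer's share of satisfied visits."""
    scores = [sum(c.experiences) / len(c.experiences) for c in customers]
    return sum(scores) / len(scores)

random.seed(1)
for staff in (2, 4):                  # a simple "what-if" on staffing level
    shoppers = [Customer(patience=random.uniform(2, 8)) for _ in range(200)]
    for c in shoppers:
        for _ in range(5):            # five visits per customer
            c.visit(staff)
    print(f"staff={staff}: satisfaction index {satisfaction_index(shoppers):.2f}")
```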

Relevance:

90.00%

Publisher:

Abstract:

Waterflooding is a technique widely applied in the oil industry. The injected water displaces oil toward the producer wells and helps avoid reservoir pressure decline. However, suspended particles in the injected water may plug pore throats, causing formation damage (permeability reduction) and injectivity decline during waterflooding. When injectivity decline occurs it is necessary to increase the injection pressure in order to maintain the water injection rate. Therefore, a reliable prediction of injectivity decline is essential in waterflooding projects. In this dissertation, a simulator based on the traditional porous medium filtration model (including deep bed filtration and external filter cake formation) was developed and applied to predict injectivity decline in perforated wells; this prediction was made from history data. Experimental modeling of injectivity decline in open-hole wells is also discussed. The injectivity modeling showed good agreement with field data and can be used to support the planning of injection well stimulation.
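The sketch below evaluates two textbook ingredients of the kind of injectivity-decline modeling described above: exponential particle capture along the core in deep bed filtration, and an injectivity index expressed through an impedance that grows with the volume injected. The filtration coefficient, impedance slope, and concentrations are illustrative assumptions, not the dissertation's calibrated values.

```python
# Hedged sketch: deep bed filtration profile and impedance-based injectivity decline.
import math

def suspended_concentration(c_inlet, filtration_coeff_per_m, distance_m):
    """Steady-state deep bed filtration: c(x) = c_in * exp(-lambda * x)."""
    return c_inlet * math.exp(-filtration_coeff_per_m * distance_m)

def injectivity_ratio(pore_volumes_injected, impedance_slope):
    """Impedance J = 1 + m * PVI; injectivity relative to time zero is 1/J."""
    return 1.0 / (1.0 + impedance_slope * pore_volumes_injected)

print(suspended_concentration(c_inlet=10.0, filtration_coeff_per_m=5.0, distance_m=0.1))
for pvi in (0, 100, 500):
    print(f"PVI={pvi:4d}  injectivity / initial injectivity = {injectivity_ratio(pvi, 0.002):.2f}")
```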

Relevance:

90.00%

Publisher:

Abstract:

Queueing systems constitute a central tool in modeling and performance analysis. These types of systems appear in our everyday activities, and the theory of queueing systems was developed to provide models for forecasting the behavior of systems subject to random demand. The practical and useful applications of discrete-time queues motivate researchers to continue analyzing this type of model. The present contribution thus relates to a discrete-time Geo/G/1 queue in which some messages may need a second service time in addition to the first, essential service. There are numerous examples of queueing situations in day-to-day life, for example in manufacturing processes, telecommunications, and home automation; the particular application considered in this paper is video surveillance with intrusion recognition, where all arriving messages require the main service and only some may require the subsidiary service, provided by the server under different types of strategies. We carry out a thorough study of the model, deriving analytical results for the stationary distribution. The generating functions of the number of messages in the queue and in the system are obtained. The generating functions of the busy period, as well as of the sojourn times of a message in the server, the queue, and the system, are also provided.
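To illustrate the model's flavor operationally, the sketch below is a slot-by-slot simulation of a discrete-time single-server queue with Bernoulli arrivals in which each message may, with some probability, require a second optional service phase. It is a rough companion to the analytical results, not a reproduction of them; the arrival probability, service-time distribution, and second-service probability are illustrative assumptions.

```python
# Minimal discrete-time simulation sketch of a Geo/G/1-type queue with optional second service.
import random

random.seed(0)

def simulate(slots, p_arrival, sample_service, p_second=0.3):
    queue = 0                  # messages waiting (not yet in service)
    remaining = 0              # service slots left for the message being served
    second_pending = False     # does the message in service still need phase 2?
    area = 0                   # accumulates the number in system, slot by slot
    for _ in range(slots):
        if random.random() < p_arrival:         # at most one arrival per slot
            queue += 1
        if remaining == 0 and second_pending:   # start the optional second phase
            remaining = sample_service()
            second_pending = False
        if remaining == 0 and queue > 0:        # start serving the next message
            queue -= 1
            remaining = sample_service()
            second_pending = random.random() < p_second
        area += queue + (1 if remaining > 0 else 0)
        if remaining > 0:
            remaining -= 1
    return area / slots                         # time-average number in system

# Illustrative "general" service time: uniform on {1, 2, 3} slots (mean 2).
mean_in_system = simulate(200_000, p_arrival=0.2,
                          sample_service=lambda: random.randint(1, 3))
print(f"time-average number in system ~ {mean_in_system:.2f}")
```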

Relevance:

90.00%

Publisher:

Abstract:

Optical waveguides have shown promising results for use within printed circuit boards. These optical waveguides have higher bandwidth than traditional copper transmission systems and are immune to electromagnetic interference. Design parameters for these optical waveguides are needed to ensure an optimal link budget, and modeling and simulation methods are used to determine them. As a result, the optical structures necessary for incorporating optical waveguides into printed circuit boards are designed and optimized. Embedded siloxane polymer waveguides are investigated for their use in optical printed circuit boards. This material was chosen because it has low absorption and high temperature stability, and can be deposited using common processing techniques. Two sizes of waveguides are investigated: 50 µm multimode and 4-9 µm single-mode waveguides. A beam propagation method is developed for simulating the multimode and single-mode waveguide parameters. The attenuation of the simulated multimode waveguides matches the attenuation of fabricated waveguides with a root-mean-square error of 0.192 dB. Using the same process as for the multimode waveguides, parameters needed to ensure a low link loss are found for single-mode waveguides, including maximum size, minimum cladding thickness, minimum waveguide separation, and minimum bend radius. To couple light out-of-plane to a transmitter or receiver, a structure such as a vertical interconnect assembly (VIA) is required. For multimode waveguides the optimal placement of a total internal reflection mirror can be found without prior knowledge of the waveguide length. The optimal placement is found to be either 60 µm or 150 µm away from the end of the waveguide, depending on which metric a designer wants to optimize: the average output power, the output power variance, or the maximum possible power loss. For single-mode waveguides a volume grating coupler is designed to couple light from a silicon waveguide to a polymer single-mode waveguide. A focusing grating coupler is compared to a perpendicular grating coupler that is focused by a micro-molded lens. The focusing grating coupler had an optical loss of more than 14 dB, while the grating coupler with a lens had an optical loss of 6.26 dB.
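The sketch below is a compact scalar split-step Fourier beam propagation method (BPM) for a step-index slab guide, showing the general class of simulation named above rather than the thesis's own simulator. The wavelength, index contrast, grid, launch field, and absorbing boundary are illustrative assumptions.

```python
# Hedged sketch: 1D scalar split-step Fourier BPM through a step-index slab waveguide.
import numpy as np

wavelength = 1.31            # um
k0 = 2 * np.pi / wavelength
n_clad, n_core = 1.52, 1.53
core_half_width = 4.0        # um (illustrative slab)

x = np.linspace(-40, 40, 1024)               # transverse grid (um)
dx = x[1] - x[0]
n = np.where(np.abs(x) < core_half_width, n_core, n_clad)

dz, steps = 0.5, 4000                        # 2 mm of propagation in 0.5 um steps
kx = 2 * np.pi * np.fft.fftfreq(x.size, dx)
diffraction = np.exp(-1j * kx**2 * dz / (2 * k0 * n_clad))   # free-space half of the step
lens = np.exp(1j * k0 * (n - n_clad) * dz)                   # index (phase) half of the step

field = np.exp(-(x / 3.0) ** 2).astype(complex)              # Gaussian launch field
power_in = np.sum(np.abs(field) ** 2)
for _ in range(steps):
    field = np.fft.ifft(diffraction * np.fft.fft(field))
    field *= lens
    field *= np.exp(-((np.abs(x) / 38.0) ** 20))             # soft absorber at grid edges

remaining = np.sum(np.abs(field) ** 2) / power_in
print(f"fraction of launched power remaining after 2 mm: {remaining:.3f}")
```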

Relevance:

90.00%

Publisher:

Abstract:

Tropospheric ozone (O3) and carbon monoxide (CO) pollution in the Northern Hemisphere is commonly thought to be of anthropogenic origin. While this is true in most cases, copious quantities of pollutants are emitted by fires in boreal regions, and the impact of these fires on CO has been shown to significantly exceed the impact of urban and industrial sources during large fire years. The impact of boreal fires on ozone is still poorly quantified, and large uncertainties exist in the estimates of fire-released nitrogen oxides (NOx), a critical factor in ozone production. As boreal fire activity is predicted to increase in the future due to its strong dependence on weather conditions, it is necessary to understand how these fires affect atmospheric composition. To determine the scale of boreal fire impacts on ozone and its precursors, this work combined statistical analysis of ground-based measurements downwind of fires, satellite data analysis, transport modeling, and the results of chemical model simulations. The first part of this work focused on determining the boreal fire impact on ozone levels downwind of fires, using analysis of observations in several-days-old fire plumes intercepted at the Pico Mountain station (Azores). The results of this study revealed that fires significantly increase the midlatitude summertime ozone background during high fire years, implying that predicted future increases in boreal wildfires may affect ozone levels over large regions of the Northern Hemisphere. To improve current estimates of NOx emissions from boreal fires, we further analyzed ΔNOy/ΔCO enhancement ratios in the observed fire plumes together with transport modeling of fire emission estimates. The results of this analysis revealed the presence of a considerable seasonal trend in the fire NOx/CO emission ratio due to late-summer changes in burning properties. This finding implies that the constant NOx/CO emission ratio currently used in atmospheric modeling is unrealistic, and is likely to introduce a significant bias in the estimated ozone production. Finally, satellite observations were used to determine the impact of fires on atmospheric burdens of nitrogen dioxide (NO2) and formaldehyde (HCHO) in the North American boreal region. This analysis demonstrated that fires dominated the HCHO burden over the fires and in plumes up to two days old. This finding provides insights into the magnitude of secondary HCHO production and further enhances scientific understanding of the atmospheric impacts of boreal fires.
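An enhancement ratio such as ΔNOy/ΔCO is conventionally estimated as the slope of a least-squares fit of one species against the other within the plume, as in the small sketch below. The measurement values here are made up solely to show the calculation; they are not the study's data.

```python
# Sketch: enhancement ratio as the regression slope of NOy against CO in a plume.
import numpy as np

co_ppbv = np.array([95, 120, 150, 180, 210, 240])        # CO in an aged plume (illustrative)
noy_ppbv = np.array([0.31, 0.40, 0.52, 0.61, 0.74, 0.85])  # corresponding NOy (illustrative)

slope, intercept = np.polyfit(co_ppbv, noy_ppbv, 1)
print(f"dNOy/dCO enhancement ratio ~ {slope:.4f} ppbv/ppbv")
```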

Relevance:

90.00%

Publisher:

Abstract:

Carbon monoxide (CO) and ozone (O3) are considered to be among the most important atmospheric pollutants in the troposphere, and both have significant effects on human health. Both are included in the U.S. EPA list of criteria pollutants. CO is primarily emitted in the source region, whereas O3 can be formed near the source, during transport of the pollution plumes containing O3 precursors, or in a receptor region as the plumes subside. The long chemical lifetimes of both CO and O3 enable them to be transported over long distances. This transport is also important on continental scales, where it is commonly referred to as inter-continental transport, and it affects the concentrations of both CO and O3 in downwind receptor regions, with significant implications for their air quality standards. Over the period 2001-2011, there have been decreases in the anthropogenic emissions of CO and NOx in North America and Europe, whereas the emissions over Asia have increased. How these emission trends have affected concentrations at remote sites located downwind of these continents is an important question. The PICO-NARE observatory, located on Pico Mountain in the Azores, Portugal, is frequently impacted by North American pollution outflow (both anthropogenic and biomass burning) and is a unique site from which to investigate long-range transport from North America. This study uses in-situ observations of CO and O3 for the period 2001-2011 at PICO-NARE, coupled with output from the full-chemistry (with normal and fixed anthropogenic emissions) and tagged-CO simulations in GEOS-Chem, a global 3-D chemical transport model of atmospheric composition driven by meteorological input from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling and Assimilation Office, to determine and interpret the trends in CO and O3 concentrations over the past decade. These trends are useful in ascertaining the impacts that emission reductions in the United States have had over Pico and, in general, over the North Atlantic. A regression model with sinusoidal functions and a linear trend term was fit to the in-situ observations and to the GEOS-Chem output for CO and O3 at Pico. The regression model yielded decreasing trends for CO and O3 in both the observations (-0.314 ppbv/year and -0.208 ppbv/year, respectively) and the full-chemistry simulation with normal emissions (-0.343 ppbv/year and -0.526 ppbv/year, respectively). Based on analysis of the results from the full-chemistry simulation with fixed anthropogenic emissions and the tagged-CO simulation, it was concluded that the decreasing trends in CO were a consequence of anthropogenic emission changes in regions such as the USA and Asia. The emission reductions in the USA are countered by Asian increases, but the former have a greater impact, resulting in decreasing trends for CO at PICO-NARE. For O3, however, it is the increase in water vapor content (which increases O3 destruction) along the pathways of transport from North America to PICO-NARE, as well as around the site, that has resulted in decreasing trends over this period. This decrease is partly offset by an increase in O3 concentrations due to anthropogenic influence, which could reflect increasing Asian emissions of O3 precursors as these emissions have decreased over the US. However, the anthropogenic influence does not change the final direction of the trend. It can thus be concluded that CO and O3 concentrations at PICO-NARE have decreased over 2001-2011.
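The kind of regression described above, a linear trend plus sinusoidal seasonal terms, can be fit by ordinary least squares as in the sketch below. The "observations" here are synthetic and the coefficients are illustrative; only the structure of the fit mirrors the study, with the coefficient on the time column giving the trend in ppbv/year.

```python
# Hedged sketch: least-squares fit of a linear trend plus an annual harmonic.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 11, 1 / 12)                         # 2001-2011 in years, monthly samples
true = 110 - 0.3 * t + 8 * np.sin(2 * np.pi * t + 1.0)
co = true + rng.normal(0, 2, t.size)                 # synthetic CO series (illustrative)

# Design matrix: intercept, linear trend, annual sine and cosine terms.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(X, co, rcond=None)
print(f"fitted trend: {coeffs[1]:+.3f} ppbv/year")
```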

Relevance:

90.00%

Publisher:

Abstract:

Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model, which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
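As a contrast to the dynamic approach the study argues for, the sketch below is a deliberately simple static split of demand between a tolled managed lane and the general-purpose lanes, using a binary logit choice on generalized cost, BPR travel times, and successive averages. Every number (demand, capacities, toll, logit scale) is an illustrative assumption, and the sketch omits exactly the time-varying behavior that motivates DTA.

```python
# Toy static sketch under assumed parameters: logit split between managed and GP lanes.
import math

def bpr(t_free, volume, capacity, alpha=0.15, beta=4):
    """BPR volume-delay function: travel time rises with the volume/capacity ratio."""
    return t_free * (1 + alpha * (volume / capacity) ** beta)

demand = 9000.0                  # veh/h approaching the facility (illustrative)
toll_in_minutes = 6.0            # toll converted to an equivalent time cost
share_managed = 0.2              # starting guess for the managed-lane share
for iteration in range(1, 101):
    v_managed = demand * share_managed
    v_general = demand * (1 - share_managed)
    cost_managed = bpr(10.0, v_managed, 1800.0) + toll_in_minutes
    cost_general = bpr(10.0, v_general, 5400.0)
    target = 1.0 / (1.0 + math.exp(0.5 * (cost_managed - cost_general)))  # binary logit share
    share_managed += (target - share_managed) / iteration                 # successive averages
print(f"equilibrium managed-lane share ~ {share_managed:.2f}")
```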

Relevance:

90.00%

Publisher:

Abstract:

Social exchange theory and notions of reciprocity have long been assumed to explain the relationship between psychological contract breach and important employee outcomes. To date, however, there has been no explicit testing of these assumptions. This research, therefore, explores the mediating role of negative, generalized, and balanced reciprocity, in the relationships between psychological contract breach and employees’ affective organizational commitment and turnover intentions. A survey of 247 Pakistani employees of a large public university was analyzed using structural equation modeling and bootstrapping techniques, and provided excellent support for our model. As predicted, psychological contract breach was positively related to negative reciprocity norms and negatively related to generalized and balanced reciprocity norms. Negative and generalized (but not balanced) reciprocity were negatively and positively (respectively) related to employees’ affective organizational commitment and fully mediated the relationship between psychological contract breach and affective organizational commitment. Moreover, affective organizational commitment fully mediated the relationship between generalized and negative reciprocity and employees’ turnover intentions. Implications for theory and practice are discussed.

Relevance:

90.00%

Publisher:

Abstract:

Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an early stage and in improving the security of software in the next versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible versus hard to reproduce) and develop a classification framework for differentiating between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, along with other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. In addition, the vulnerability prediction framework helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed based on collected software security defects of Mozilla Firefox.
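The sketch below (requiring scikit-learn) shows the general shape of such a prediction framework: training a Random Forest on file-level software metrics and ranking files by predicted vulnerability-proneness. The features, labels, and threshold are synthetic stand-ins generated for the demo, not the Mozilla Firefox data or the authors' feature set.

```python
# Hedged sketch: Random Forest ranking of files by vulnerability-proneness on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_files = 1000
# Illustrative per-file metrics: lines of code, cyclomatic complexity, past fix count.
X = np.column_stack([
    rng.integers(50, 5000, n_files),
    rng.integers(1, 80, n_files),
    rng.integers(0, 15, n_files),
])
# Synthetic ground truth loosely tied to size and complexity, for demonstration only.
risk = 0.0002 * X[:, 0] + 0.01 * X[:, 1] + 0.05 * X[:, 2]
y = (risk + rng.normal(0, 0.3, n_files) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
top10 = np.argsort(proba)[::-1][:10]          # top-ranked vulnerability-prone files
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
print("indices of the 10 most suspicious files:", top10.tolist())
```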

Relevance:

90.00%

Publisher:

Abstract:

The behavioural agency theory was developed to provide a more comprehensive explanation and prediction of managerial risk taking, in response to some shortcomings of agency theory. In general, the theory offers explanations of why decision makers prefer some strategic choices to others. The use of behavioural agency theory in family business research has, however, been very limited. Family business scholars recently adapted this theory to construct the family business variant, the ‘socioemotional wealth’ construct, which offers better explanations for the risk taking and decision making behaviours of family firms. This chapter provides an overview of behavioural agency theory and the socioemotional wealth construct, explores how they have been used in family business research, and offers suggestions for how this theory can be used in further research to contribute to both the family business and the general management literature. Keywords: family business, behavioural agency theory, socioemotional wealth, family firm heterogeneity.