3 results for Runs of homozygosity

in Digital Commons at Florida International University


Relevance:

90.00%

Publisher:

Abstract:

Since the Exxon Valdez accident in 1989, renewed interest has emerged in better understanding and predicting the fate and transport of crude oil lost to marine environments. The short-term fate of an Arabian crude oil was simulated in laboratory experiments using artificial seawater. The time-dependent changes in the rheological and chemical properties of the oil under the influence of natural weathering processes were characterized, including the dispersion behavior of the oil under simulated ocean turbulence. The methodology included monitoring changes in the chemical composition of the oil by gas chromatography/mass spectrometry (GC/MS), toxicity evaluations of the oil dispersions by Microtox analysis, and quantification of dispersed soluble aromatics by fluorescence spectrometry. Results for this oil show a sharp initial increase in viscosity, due to evaporative losses of lower-molecular-weight hydrocarbons, with stable water-in-oil emulsions forming within one week. Toxicity evaluations indicate a decreased EC-50 value (higher toxicity) after the oil has weathered for eight hours, with maximum toxicity observed after seven days of weathering. Particle charge distributions, determined by electrophoretic techniques using a Coulter DELSA 440, reveal that an unstable oil dispersion exists within the size range of 1.5 to 2.5 µm, with recombination processes observed between sequential laser runs of a single sample.
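As an aside on the EC-50 figures cited above: a lower EC-50 means a smaller dose produces the 50% effect, i.e. higher toxicity. The sketch below illustrates this relationship with a simple log-linear interpolation over hypothetical dose-response data; it is not the Microtox procedure used in the study, and all names and numbers are illustrative assumptions.

    // Illustrative only: estimates an EC-50 by log-linear interpolation between the
    // two dose-response points that bracket 50% inhibition. Hypothetical data; the
    // study itself used the Microtox assay, not this calculation.
    public class Ec50Sketch {

        // concentrations (e.g., % oil dispersion) and observed % inhibition, sorted by dose
        static double ec50(double[] conc, double[] inhibition) {
            for (int i = 1; i < conc.length; i++) {
                if (inhibition[i - 1] <= 50.0 && inhibition[i] >= 50.0) {
                    // interpolate on a log-concentration scale
                    double x0 = Math.log10(conc[i - 1]), x1 = Math.log10(conc[i]);
                    double y0 = inhibition[i - 1], y1 = inhibition[i];
                    double x = x0 + (50.0 - y0) * (x1 - x0) / (y1 - y0);
                    return Math.pow(10.0, x);
                }
            }
            throw new IllegalArgumentException("50% inhibition not bracketed by the data");
        }

        public static void main(String[] args) {
            double[] conc = {0.1, 0.5, 1.0, 5.0, 10.0};   // hypothetical doses
            double[] fresh = {5, 15, 30, 60, 85};         // fresh oil response curve
            double[] weathered = {10, 30, 55, 80, 95};    // weathered oil: shifted left
            System.out.printf("fresh EC-50     ~ %.2f%n", ec50(conc, fresh));
            System.out.printf("weathered EC-50 ~ %.2f%n", ec50(conc, weathered));
            // The weathered curve yields the lower EC-50, i.e. the higher toxicity.
        }
    }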

Relevance:

30.00%

Publisher:

Abstract:

This research is based on the premises that teams can be designed to optimize their performance and that appropriate team coordination is a significant factor in team outcome performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Therefore, organizations need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, its coordination mechanisms, and the structural characteristics of the job. The TCM can be used to determine the design characteristics that most likely lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making and explicit and implicit mechanisms to coordinate the job. The model validation included comparing the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team. The results from the ANOVA were used to recommend the combination of experimental factor levels that optimizes the completion time for a team that runs sailboat races. This research's main contribution to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models. In a stochastic job structure, the tasks required to complete the job change during the team's execution of the job. This research proposed three new types of dependencies between tasks required to model a job as a stochastic structure: conditional sequential, single-conditional sequential, and merge dependencies.
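The abstract describes the TCM as an agent-based discrete-event simulation in which team member agents take on tasks and the overall completion time emerges from their behavior. The sketch below shows only that general pattern in plain Java — a future-event list ordered by completion time and idle agents claiming pending tasks. It is not the TCM itself (which is built on the Cybele Pro agent architecture); the number of agents, the task durations, and the coordination rule are illustrative assumptions.

    import java.util.ArrayDeque;
    import java.util.PriorityQueue;
    import java.util.Queue;
    import java.util.Random;

    // Minimal discrete-event sketch of agents coordinating over a shared task queue.
    // NOT the TCM: durations, agent count, and the rule "an idle agent takes the next
    // pending task" are illustrative assumptions.
    public class TeamSimSketch {
        record Completion(double time, int agent, int task) {}

        public static void main(String[] args) {
            int numAgents = 3, numTasks = 10;
            Random rng = new Random(42);

            Queue<Integer> pending = new ArrayDeque<>();
            for (int t = 0; t < numTasks; t++) pending.add(t);

            // future-event list ordered by completion time (the discrete-event core)
            PriorityQueue<Completion> events =
                    new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));

            // start every agent on a task at time 0
            for (int a = 0; a < numAgents && !pending.isEmpty(); a++)
                events.add(new Completion(rng.nextDouble() * 5 + 1, a, pending.poll()));

            double clock = 0;
            while (!events.isEmpty()) {
                Completion done = events.poll();     // advance to the next event
                clock = done.time();
                System.out.printf("t=%.2f  agent %d finished task %d%n",
                        clock, done.agent(), done.task());
                if (!pending.isEmpty())              // freed agent claims the next pending task
                    events.add(new Completion(clock + rng.nextDouble() * 5 + 1,
                            done.agent(), pending.poll()));
            }
            System.out.printf("job completion time: %.2f%n", clock);
        }
    }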

Relevance:

30.00%

Publisher:

Abstract:

Current infrastructure as a service (IaaS) cloud systems allow users to load their own virtual machines. However, most of these systems do not provide users with an automatic mechanism to load a network topology of virtual machines. In order to specify and implement the network topology, we use software switches and routers as network elements. Before running a group of virtual machines, the user sets up the system once to specify a network topology of virtual machines. Then, given the user's request to run a specific topology, our system loads the appropriate virtual machines (VMs) and also runs separate VMs as software switches and routers. Furthermore, we have developed a manager that handles physical hardware failures. The system has been designed so that users can use it without knowing all of its internal technical details.
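To make the described workflow concrete, the sketch below shows one way a user-supplied topology of VMs, software switches, and software routers might be declared once and then turned into a launch plan. The class names, roles, and image names are hypothetical illustrations, not the actual interface of the system described in the abstract.

    import java.util.List;

    // Hypothetical sketch of a topology description a user might submit once, before
    // the system boots the user VMs and the separate switch/router VMs.
    // Class and field names are illustrative; they are not the paper's actual API.
    public class TopologySketch {
        enum Role { VM, SOFTWARE_SWITCH, SOFTWARE_ROUTER }

        record Node(String name, Role role, String image) {}
        record Link(String from, String to) {}
        record Topology(List<Node> nodes, List<Link> links) {}

        public static void main(String[] args) {
            // two user VMs behind a software switch, connected to a software router
            Topology topo = new Topology(
                    List.of(new Node("vm1", Role.VM, "ubuntu-user.img"),
                            new Node("vm2", Role.VM, "ubuntu-user.img"),
                            new Node("sw1", Role.SOFTWARE_SWITCH, "openvswitch.img"),
                            new Node("r1", Role.SOFTWARE_ROUTER, "router.img")),
                    List.of(new Link("vm1", "sw1"),
                            new Link("vm2", "sw1"),
                            new Link("sw1", "r1")));

            // a manager would boot the network-element VMs first, then the user VMs,
            // and wire up the links; here we only print the resulting plan
            topo.nodes().stream()
                    .filter(n -> n.role() != Role.VM)
                    .forEach(n -> System.out.println("boot network element: " + n.name()));
            topo.nodes().stream()
                    .filter(n -> n.role() == Role.VM)
                    .forEach(n -> System.out.println("boot user VM:         " + n.name()));
            topo.links().forEach(l -> System.out.println("attach " + l.from() + " -> " + l.to()));
        }
    }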