869 results for agent based model


Relevance: 80.00%

Abstract:

Most empirical studies support a decline in speciation rates through time, although evidence for constant speciation rates also exists. Declining rates have been explained by invoking pre-existing niches, whereas constant rates have been attributed to non-adaptive processes such as sexual selection and mutation. Trends in speciation rate and the processes underlying them remain unclear, representing a critical information gap in understanding patterns of global diversity. Here we show that the temporal trend in speciation rate can also be explained by frequency-dependent selection. We construct a frequency-dependent, DNA-sequence-based model of speciation and compare it to the empirical diversity patterns observed for cichlid fish and Darwin's finches, two classic systems for which speciation rates and richness data exist. Negative frequency-dependent selection predicts well both the declining speciation rate and the species richness found in cichlid fish. For groups like Darwin's finches, in which speciation rates are constant and diversity is lower, the speciation rate is better explained by a model without frequency-dependent selection. Our analysis shows that differences in diversity may be driven by the abundance of incipient species under frequency-dependent selection. Our results demonstrate that genetic-distance-based speciation and frequency-dependent selection are sufficient to explain the high diversity observed in natural systems and, importantly, predict the decay through time in speciation rate in the absence of pre-existing niches.
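The mechanism can be illustrated with a toy individual-based simulation: bit-string genomes, fitness that declines with the local frequency of similar genotypes, and species defined as clusters under a genetic-distance threshold. This is a minimal sketch under assumed parameter values (population size, genome length, threshold THETA), not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, THETA = 200, 64, 10   # population size, genome length, species threshold (assumed)
genomes = np.zeros((N, L), dtype=np.uint8)

def step(genomes, s=2.0, mu=0.005):
    """One generation under negative frequency-dependent selection."""
    # Pairwise Hamming distances; fitness falls with the count of near-identical genotypes.
    d = (genomes[:, None, :] != genomes[None, :, :]).sum(axis=2)
    similar = (d < THETA).sum(axis=1)
    fitness = np.exp(-s * similar / len(genomes))
    parents = rng.choice(len(genomes), size=len(genomes), p=fitness / fitness.sum())
    offspring = genomes[parents].copy()
    offspring ^= (rng.random(offspring.shape) < mu).astype(np.uint8)  # point mutations
    return offspring

def count_species(genomes):
    """Species = connected components of the 'distance < THETA' graph (single linkage)."""
    d = (genomes[:, None, :] != genomes[None, :, :]).sum(axis=2)
    adj = d < THETA
    seen, species = np.zeros(len(genomes), bool), 0
    for i in range(len(genomes)):
        if not seen[i]:
            species += 1
            frontier = {i}
            while frontier:                      # flood fill over the similarity graph
                j = frontier.pop()
                seen[j] = True
                frontier |= set(np.flatnonzero(adj[j] & ~seen))
    return species

for block in range(3):
    for _ in range(50):
        genomes = step(genomes)
    print(f"generation {(block + 1) * 50}: species = {count_species(genomes)}")
```

Tracking the species count per generation in such a sketch gives a speciation-rate curve whose shape can be compared with and without the frequency-dependent fitness term.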

Relevance: 80.00%

Abstract:

Lesni Potok stream drains a forested headwater catchment in the central Czech Republic. It was artificially acidified with hydrochloric acid (HCl) for four hours to assess the role of the stream substrate in acid-neutralisation and recovery. The pH was lowered from 4.7 to 3.2. Desorption of Ca and Mg and desorption or dissolution of Al dominated acid-neutralisation; Al mobilisation was more important later. The stream substrate released 4,542 meq Ca, 1,184 meq Mg, and 2,329 meq Al over a 45 m long and 1 m wide stream segment; smaller amounts of Be, Cd, Fe, and Mn were released. Adsorption of SO4^2- and desorption of F- occurred during the acidification phase of the experiment. The exchange reactions were rapidly reversible for Ca, Mg and SO4^2-, but not symmetric, as the substrate resorbed 1,083, 790 and 0 meq of Ca, Mg, and Al, respectively, in a 4-hour recovery period. Desorption of SO4^2- occurred during the resorption of Ca and Mg. These exchange and dissolution reactions delay acidification, diminish the pH depression and retard recovery from episodic acidification. The behaviour of the stream substrate-water interaction resembles that of soil-soil water interactions. A dynamic mass-balance model, MASS (Modelling Acidification of Stream Sediments), was developed that simulates the adsorption and desorption of base cations during the experiment; it was successfully calibrated to the experimental data.
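A minimal sketch of the dynamic mass-balance idea behind a model like MASS: one exchangeable base-cation pool on the substrate and one stream concentration, with first-order desorption during the 4-hour acid pulse and re-sorption afterwards. All rate constants and pool sizes below are invented for illustration; the published model's equations and calibration differ.

```python
# Toy dynamic mass balance: substrate pool S (meq) exchanges with stream
# concentration C, which is flushed downstream. Parameters are hypothetical.
def simulate(hours=8.0, dt=0.01, k_des=1.5, k_ads=0.4, flush=1.0):
    S, C = 5000.0, 0.0                 # substrate pool, stream concentration
    history = []
    for i in range(int(hours / dt)):
        t = i * dt
        acid = t < 4.0                               # HCl addition window (pH 4.7 -> 3.2)
        desorb = k_des * S * dt if acid else 0.0     # H+ displaces base cations
        adsorb = k_ads * C * dt if not acid else 0.0 # partial re-sorption in recovery
        S += adsorb - desorb
        C += desorb - adsorb - flush * C * dt        # advective flushing
        history.append((t, S, C))
    return history

for t, S, C in simulate()[::100]:
    print(f"t={t:4.1f} h  substrate pool={S:8.1f} meq  stream C={C:8.2f}")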

Relevance: 80.00%

Abstract:

The aim of this study was to develop a gradient structure tensor (GST)-based methodology for accurately measuring the degree of transverse isotropy in trabecular bone. Using femoral sub-regions scanned with high-resolution peripheral QCT (HR-pQCT) and clinical-resolution QCT, trabecular orientation was evaluated using the mean intercept length (MIL) on the HR-pQCT data and the GST on the QCT data. The influence of the local degree of transverse isotropy (DTI) and bone mineral density (BMD) was incorporated into the investigation. In addition, a power-based model was derived, rendering a 1:1 relationship between GST and MIL eigenvalues. A specific DTI threshold (DTI_thres) was found for each investigated size of region of interest (ROI), above which the major trabecular direction estimated by the GST deviated by no more than 30° from the gold-standard MIL in 95% of the remaining ROIs (mean error: 16°). An inverse relationship between ROI size and DTI_thres was found for discrete ranges of BMD. A novel methodology has thus been developed in which measures of transverse isotropy of trabecular bone can be obtained from clinical QCT images for a given ROI size, DTI_thres and power coefficient. Including DTI may improve future clinical QCT finite-element predictions of bone strength and diagnoses of bone disease.
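As a rough illustration of the GST step, the tensor is the averaged outer product of image gradients over an ROI; its eigen-decomposition yields a principal trabecular direction and an anisotropy measure. The DTI definition below (a ratio of the two largest eigenvalues) and the preprocessing are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

def gradient_structure_tensor(vol):
    """GST of a 3-D image block: averaged outer product of the intensity gradient."""
    gz, gy, gx = np.gradient(vol.astype(float))
    g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()])   # 3 x n gradient samples
    return g @ g.T / g.shape[1]

def transverse_isotropy(T):
    """Eigen-analysis of the GST. The eigenvector of the smallest eigenvalue
    approximates the main trabecular direction (gradients lie across, not
    along, the structures); DTI is taken here as the ratio of the two largest
    eigenvalues -- an assumed definition for illustration."""
    w, v = np.linalg.eigh(T)          # eigenvalues in ascending order
    main_direction = v[:, 0]
    dti = w[2] / w[1]                 # ~1 indicates transverse isotropy
    return dti, main_direction

roi = np.random.rand(32, 32, 32)      # stand-in for a QCT ROI
dti, direction = transverse_isotropy(gradient_structure_tensor(roi))
print(f"DTI = {dti:.2f}, main direction = {np.round(direction, 2)}")
```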

Relevance: 80.00%

Abstract:

One of the current challenges in evolutionary ecology is understanding the long-term persistence of contemporary-evolving predator–prey interactions across space and time. To address this, we developed an extension of a multi-locus, multi-trait eco-evolutionary individual-based model that incorporates several interacting species in explicit landscapes. We simulated the eco-evolutionary dynamics of multi-species food webs with different degrees of connectance across soil-moisture islands. A broad set of parameter combinations led to the local extinction of species, but some species persisted, and persistence was associated with (1) high connectance and omnivory and (2) ongoing evolution, due to the multi-trait genetic variability of the embedded species. Furthermore, persistence was highest at intermediate island distances, likely because of a balance between predation-induced extinction (strongest at short island distances) and the coupling of island diversity by top predators, which, by travelling among islands, exert global top-down control of biodiversity. In the simulations with high genetic variation, we also found widespread trait evolutionary changes indicative of eco-evolutionary dynamics. We discuss how ever-increasing computing power and the availability of high-resolution data will soon allow researchers to start bridging the in vivo–in silico gap.
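A heavily cut-down sketch of this kind of island model: individuals with one heritable trait live on discrete islands, predators remove trait-matched prey, offspring mutate, and a small fraction disperses. Every ingredient below (rates, trait matching rule, carrying cap) is invented; the study's multi-locus, multi-trait food-web model is far richer.

```python
import random

random.seed(1)

ISLANDS, STEPS, DISPERSAL, CAP = 5, 300, 0.05, 600

pop = [{"isle": random.randrange(ISLANDS), "trait": random.gauss(0, 1),
        "pred": random.random() < 0.2} for _ in range(300)]

for _ in range(STEPS):
    by_isle = {}
    for ind in pop:
        by_isle.setdefault(ind["isle"], []).append(ind)
    survivors = []
    for ind in pop:
        if not ind["pred"]:
            preds = [p for p in by_isle[ind["isle"]] if p["pred"]]
            # Predation risk grows with locally trait-matched predators.
            risk = 0.03 * sum(abs(p["trait"] - ind["trait"]) < 0.5 for p in preds)
            if random.random() < risk:
                continue
        survivors.append(ind)
    pop = survivors
    for ind in list(pop):                       # reproduction with mutation
        if random.random() < 0.1 and len(pop) < CAP:
            child = dict(ind)
            child["trait"] += random.gauss(0, 0.05)   # heritable variation
            if random.random() < DISPERSAL:           # inter-island dispersal
                child["isle"] = random.randrange(ISLANDS)
            pop.append(child)

print("individuals:", len(pop),
      "| islands occupied:", len({i["isle"] for i in pop}))
```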

Relevance: 80.00%

Abstract:

The urate transporter GLUT9 is responsible for the basolateral transport of urate in the proximal tubule of human kidneys and in the placenta, playing a central role in uric acid homeostasis. GLUT9 shares the least homology with other members of the glucose transporter family, particularly the glucose-transporting members GLUT1-4, and is the only member of the GLUT family to transport urate. The recently published high-resolution structure of XylE, a bacterial D-xylose-transporting homologue, yields new insights into the structural foundation of this family of proteins. While this represents a major milestone, it is unclear whether human GLUT9 can benefit from this advance through subsequent structure-based targeting and mutagenesis. Little progress has been made toward understanding the mechanism of GLUT9 since its discovery in 2000. Before work can begin on resolving the mechanisms of urate transport, we must establish methods to express, purify and analyze hGLUT9 using a model system adept at expressing human membrane proteins. Here, we describe the surface expression, the purification and isolation of monomeric protein, and the functional analysis of recombinant hGLUT9 using the Xenopus laevis oocyte system. In addition, we generated a new homology-based high-resolution model of hGLUT9 from the XylE crystal structure and used our purified protein to generate a low-resolution single-particle reconstruction. Interestingly, we demonstrate that the functional protein extracted from the Xenopus system fits well with the homology-based model, allowing us to generate the predicted urate-binding pocket and pave a path for subsequent mutagenesis and structure-function studies.

Relevance: 80.00%

Abstract:

While most previous research has considered public service motivation (PSM) as the only motivational factor predicting (public) job choice, the authors present a novel, rational choice-based model which includes three motivational dimensions: extrinsic, enjoyment-based intrinsic and prosocial intrinsic. Besides providing more accurate person-job fit predictions, this new approach fills a significant research gap and facilitates future theory building.
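One toy way to operationalize such a model: predicted job choice maximizes a weighted sum of extrinsic, enjoyment-based intrinsic and prosocial intrinsic payoffs. All weights and scores below are invented; the authors' specification may differ.

```python
# Toy rational-choice reading of the three-dimensional motivation model.
def utility(job_scores, motivation_weights):
    return sum(motivation_weights[k] * job_scores[k] for k in motivation_weights)

jobs = {
    "public agency": {"extrinsic": 0.5, "enjoyment": 0.6, "prosocial": 0.9},
    "private firm":  {"extrinsic": 0.9, "enjoyment": 0.7, "prosocial": 0.3},
}
# One person's motivational profile (weights sum to 1 for comparability):
person = {"extrinsic": 0.2, "enjoyment": 0.3, "prosocial": 0.5}

best = max(jobs, key=lambda name: utility(jobs[name], person))
print("predicted choice:", best)   # -> public agency for this profile
```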

Relevance: 80.00%

Abstract:

Facilitation is a major force shaping the structure and diversity of plant communities in terrestrial ecosystems. Detecting positive plant–plant interactions relies on the combination of field experimentation and the demonstration of spatial association between neighboring plants. This has often restricted the study of facilitation to particular sites, limiting the development of systematic assessments of facilitation over regional and global scales. Here we explore whether the frequency of plant spatial associations detected from high-resolution remotely sensed images can be used to infer plant facilitation at the community level in drylands around the globe. We correlated the information from remotely sensed images freely available through Google Earth with detailed field assessments, and used a simple individual-based model to generate patch-size distributions under different assumptions about the type and strength of plant–plant interactions. Most of the patterns found in the remotely sensed images were more right-skewed than the patterns from the null model simulating a random distribution, suggesting that the plants in the studied drylands show stronger spatial clustering than expected by chance. We found that positive plant co-occurrence, as measured in the field, was significantly related to the skewness of the vegetation patch-size distribution measured from Google Earth images. Our findings suggest that the relative frequency of facilitation may be inferred from spatial pattern signals measured in remotely sensed images, since facilitation often determines positive co-occurrence among neighboring plants. They pave the way for a systematic global assessment of the role of facilitation in terrestrial ecosystems. (http://www.esajournals.org/doi/10.1890/14-2358.1)
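The patch-size statistic itself is straightforward to compute from a binary cover raster: label connected patches, collect their sizes, and take the skewness. The connectivity rule, cover fractions and the comparison rasters below are assumptions for illustration, not the study's protocol.

```python
import numpy as np
from scipy import ndimage, stats

def patch_size_skewness(veg):
    """Skewness of the patch-size distribution of a binary vegetation raster
    (1 = plant cover); patches are 4-connected components."""
    labels, n = ndimage.label(veg)
    sizes = ndimage.sum(veg.astype(int), labels, index=range(1, n + 1))
    return stats.skew(sizes)

rng = np.random.default_rng(0)
random_cover = rng.random((200, 200)) < 0.3               # null model: random placement
clustered = ndimage.binary_dilation(rng.random((200, 200)) < 0.05, iterations=3)

print("random cover skewness:   ", round(patch_size_skewness(random_cover), 2))
print("clustered cover skewness:", round(patch_size_skewness(clustered), 2))
```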

Relevance: 80.00%

Abstract:

BACKGROUND AND AIMS: Hepatitis C (HCV) is a leading cause of morbidity and mortality in people living with HIV. In many countries, access to direct-acting antiviral agents to treat HCV is restricted to individuals with advanced liver disease (METAVIR stage F3 or F4). Our goal was to estimate the long-term impact of deferring HCV treatment for men who have sex with men (MSM) who are coinfected with HIV and often have multiple risk factors for liver disease progression. METHODS: We developed an individual-based model of liver disease progression in HIV/HCV-coinfected MSM. We estimated liver-related morbidity and mortality, as well as the median time spent with replicating HCV infection, when individuals were treated in liver fibrosis stage F0, F1, F2, F3 or F4 on the METAVIR scale. RESULTS: The percentage of individuals who died of liver-related complications was 2% if treatment was initiated in F0 or F1. It increased to 3% if treatment was deferred until F2, 7% if deferred until F3, and 22% if deferred until F4. The median time individuals spent with replicating HCV increased from 5 years if treatment was initiated in F2 to almost 15 years if it was deferred until F4. CONCLUSIONS: Deferring HCV therapy until advanced liver fibrosis is established could increase liver-related morbidity and mortality in HIV/HCV-coinfected individuals and substantially prolong the time individuals spend with replicating HCV infection.
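The deferral comparison can be mimicked with a tiny fibrosis-stage microsimulation: individuals progress F0 to F4 year by year, are treated once they reach a threshold stage, and carry residual liver risk if cured late. All transition probabilities below are invented for illustration; the published model is calibrated to coinfected MSM and tracks many more risks and outcomes.

```python
import random

random.seed(42)

PROGRESS_ACTIVE = 0.10   # annual P(F -> F+1) while HCV replicates (assumed)
PROGRESS_CURED = 0.02    # residual fibrosis progression after cure (assumed)
DEATH_F4_ACTIVE = 0.06   # annual liver-death risk, untreated cirrhosis (assumed)
DEATH_F4_CURED = 0.01    # residual risk after cure at F4 (assumed)

def liver_mortality(treat_at, years=40, n=20_000):
    deaths = 0
    for _ in range(n):
        stage, cured = 0, False
        for _ in range(years):
            if not cured and stage >= treat_at:
                cured = True                     # DAA therapy at threshold stage
            p = PROGRESS_CURED if cured else PROGRESS_ACTIVE
            if stage < 4 and random.random() < p:
                stage += 1
            if stage == 4:
                risk = DEATH_F4_CURED if cured else DEATH_F4_ACTIVE
                if random.random() < risk:
                    deaths += 1
                    break
    return deaths / n

for threshold in range(5):
    print(f"treat at F{threshold}: liver-related mortality ~ "
          f"{liver_mortality(threshold):.1%}")
```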

Relevance: 80.00%

Abstract:

The shift from host-centric to information-centric networking (ICN) promises seamless communication in mobile networks. However, most existing works either consider well-connected networks with high node density or introduce modifications to ICN message processing for delay-tolerant networking (DTN). In this work, we present agent-based content retrieval, which provides information-centric DTN support as an application module without modifications to ICN message processing, enabling flexible interoperability in changing environments. If no content source can be found via wireless multi-hop routing, requesters may exploit the mobility of neighbor nodes (called agents) by delegating content retrieval to them. Agents that receive a delegation and move closer to content sources can retrieve the data and return it to the requesters. We show that agent-based content retrieval can be even more efficient in scenarios where multi-hop communication is possible. Furthermore, we show that broadcast communication may not necessarily be the best option, since dynamic unicast requests have little overhead and can better exploit short contact times between nodes (no broadcast delays are required for duplicate suppression).
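The delegation idea reduces to a small carry-and-forward pattern, sketched below: a requester hands an Interest to a mobile agent, which satisfies it on contact with a source and returns the data on the next contact with the requester. Message formats, naming and timers are invented stand-ins, not the paper's protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    store: dict = field(default_factory=dict)     # content name -> data
    carried: list = field(default_factory=list)   # delegated (name, requester)

def delegate(requester, agent, content_name):
    """Requester hands an unsatisfied Interest to a mobile agent."""
    agent.carried.append((content_name, requester))

def agent_meets(agent, other):
    """On contact: fetch data for carried Interests, or return data to requesters."""
    for content_name, requester in list(agent.carried):
        if content_name in other.store:
            agent.store[content_name] = other.store[content_name]
        if content_name in agent.store and other is requester:
            requester.store[content_name] = agent.store[content_name]
            agent.carried.remove((content_name, requester))

req, agent, src = Node("req"), Node("agent"), Node("src")
src.store["/videos/a"] = b"..."
delegate(req, agent, "/videos/a")   # no multi-hop path to the source
agent_meets(agent, src)             # agent moves within range of the source
agent_meets(agent, req)             # agent returns to the requester
print("/videos/a" in req.store)     # -> True
```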

Relevance: 80.00%

Abstract:

Information-centric networking (ICN) is a new communication paradigm that has been proposed to cope with drawbacks of host-based communication protocols, namely scalability and security. In this thesis, we base our work on Named Data Networking (NDN), a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks.

In the first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement to initiate information-centric communication is knowledge of existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios for different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbor node that provides the content. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information on disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate the power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible, since broadcast transmissions result in more duplicate Data transmissions, lower data rates and higher energy consumption on mobile nodes that are not interested in overheard Data, compared to unicast communication. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source. The second approach, called RC-NDN, targets the efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions.

If requesters and content sources are not within one-hop distance of each other, requests need to be forwarded via multi-hop routing. Therefore, in the second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, performance may degrade in mobile networks if preferred forwarders regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU for multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path, even in the case of multiple concurrent requesters. To enable quick retransmissions in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication.

Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths. Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, retrieve the content and return it to the requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations. To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via the original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content when free space is required. Since our persistent caching concept can complement regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance) while real-time traffic can still be maintained and served from the content store.
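The Dynamic Unicast behaviour described above amounts to a small state machine: broadcast Interests only until a content source answers, then pin subsequent Interests to that source via unicast, and fall back to broadcast when the path breaks. The sketch below uses a stand-in Face class; real NDN forwarding and wire formats are not modeled.

```python
class Face:
    def broadcast(self, name):
        print("broadcast Interest:", name)

    def unicast(self, name, next_hop):
        print("unicast Interest:", name, "->", next_hop)

class DynamicUnicast:
    def __init__(self, face):
        self.face = face
        self.source = None                 # next hop of the last answering source

    def send_interest(self, name):
        if self.source is None:
            self.face.broadcast(name)      # discovery phase
        else:
            self.face.unicast(name, self.source)

    def on_data(self, name, sender):
        self.source = sender               # lock onto the answering source

    def on_timeout(self, name):
        self.source = None                 # path expired: rediscover
        self.face.broadcast(name)

du = DynamicUnicast(Face())
du.send_interest("/photos/1")              # broadcast (no source known yet)
du.on_data("/photos/1", sender="aa:bb:cc")
du.send_interest("/photos/2")              # unicast to the same source
du.on_timeout("/photos/3")                 # source moved away: broadcast again
```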

Relevance: 80.00%

Abstract:

The social processes that lead to destructive behavior in celebratory crowds can be studied through an agent-based computer simulation. Riots are an increasingly common outcome of sports celebrations and pose the potential for harm to participants, bystanders, property, and the reputation of the groups with whom participants are associated. Rioting cannot necessarily be attributed to the negative emotions of individuals, such as anger, rage, frustration and despair. For instance, the celebratory behavior (e.g., chanting, cheering, singing) during UConn's “Spring Weekend” and after the 2004 NCAA Championships resulted in several small fires and overturned cars. Further, not every individual in the area of a riot engages in violence, and those who do, do not do so continuously. Instead, small groups carry out the majority of violent acts in relatively short-lived episodes. Agent-based computer simulations are an ideal method for modeling complex group-level social phenomena, such as celebratory gatherings and riots, which emerge from the interaction of relatively “simple” individuals. By making simple assumptions about individuals' decision-making and behaviors and allowing actors to affect one another, behavioral patterns emerge that cannot be predicted from the characteristics of individuals. The computer simulation developed here models celebratory riot behavior by repeatedly evaluating a single algorithm for each individual, the inputs of which are affected by the characteristics of nearby actors. Specifically, the simulation assumes that (a) actors possess one of five distinct social identities (group memberships), (b) actors congregate with actors who possess the same identity, (c) the degree of social cohesion generated in the social context determines the stability of relationships within groups, and (d) actors' level of aggression is affected by the aggression of other group members. This simulation provides not only a systematic investigation of the effects of the initial distribution of aggression, social identification, and cohesiveness on riot outcomes, but also an analytic tool others may use to investigate, visualize and predict how various individual characteristics affect emergent crowd behavior.
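Assumptions (a), (b) and (d) translate directly into a per-actor update loop, sketched below: actors on a grid carry one of five identities, move toward same-identity neighbors, and let aggression drift toward the local in-group mean. Grid size, radii, rates and the small share of high-aggression "instigators" are invented; the dissertation's algorithm and cohesion terms differ.

```python
import random

random.seed(7)

SIZE, N, IDENTITIES, STEPS = 30, 150, 5, 200

actors = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE),
           "id": random.randrange(IDENTITIES),
           "agg": 0.9 if random.random() < 0.05 else random.random() * 0.3}
          for _ in range(N)]

def neighbors(a, radius=3):
    return [b for b in actors if b is not a
            and abs(b["x"] - a["x"]) <= radius and abs(b["y"] - a["y"]) <= radius]

for _ in range(STEPS):
    for a in actors:
        same = [b for b in neighbors(a) if b["id"] == a["id"]]
        if same:
            target = random.choice(same)          # congregate with the in-group
            a["x"] += (target["x"] > a["x"]) - (target["x"] < a["x"])
            a["y"] += (target["y"] > a["y"]) - (target["y"] < a["y"])
            # Aggression contagion: drift toward the local in-group mean.
            mean_agg = sum(b["agg"] for b in same) / len(same)
            a["agg"] += 0.1 * (mean_agg - a["agg"]) + random.gauss(0, 0.02)
            a["agg"] = min(max(a["agg"], 0.0), 1.0)
        else:                                     # wander when no in-group nearby
            a["x"] = (a["x"] + random.choice([-1, 0, 1])) % SIZE
            a["y"] = (a["y"] + random.choice([-1, 0, 1])) % SIZE

print("actors above a 0.8 'violence' threshold:",
      sum(a["agg"] > 0.8 for a in actors))
```

Varying the instigator share and identity count in such a loop is the kind of experiment the simulation is designed to support.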

Relevance: 80.00%

Abstract:

Microarray technology is a high-throughput method for genotyping and gene expression profiling. Limited sensitivity and specificity are among the essential problems of this technology. Most existing methods of microarray data analysis have an apparent limitation: they deal only with the numerical part of microarray data and make little use of gene sequence information. Because it is the gene sequences that precisely define the physical objects being measured by a microarray, it is natural to make the gene sequences an essential part of the data analysis. This dissertation focused on the development of free-energy models to integrate sequence information into microarray data analysis. The models were used to characterize the mechanism of hybridization on microarrays and to enhance the sensitivity and specificity of microarray measurements. Cross-hybridization is a major obstacle to the sensitivity and specificity of microarray measurements. In this dissertation, we evaluated the scope of the cross-hybridization problem on short-oligo microarrays. The results showed that cross-hybridization on arrays is mostly caused by oligo fragments with a run of 10 to 16 nucleotides complementary to the probes. Furthermore, a free-energy-based model was proposed to quantify the amount of cross-hybridization signal on each probe. This model treats cross-hybridization as an integral effect of the interactions between a probe and various off-target oligo fragments. Using public spike-in datasets, the model showed high accuracy in predicting the cross-hybridization signals on probes whose intended targets are absent from the sample. Several prospective models were also proposed to improve the Positional-Dependent Nearest-Neighbor (PDNN) model for better quantification of gene expression and cross-hybridization. The problem addressed in this dissertation is fundamental to microarray technology. We expect that this study will help us to understand the detailed mechanism that determines sensitivity and specificity on microarrays. Consequently, this research will have a wide impact on how microarrays are designed and how the data are interpreted.
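The 10-to-16-nucleotide finding suggests a simple screen, sketched below: flag off-target fragments that share a long perfectly complementary run with a probe. This is a toy substring scan, not the dissertation's free-energy model; the sequences are made up.

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMPLEMENT)[::-1]

def longest_shared_run(probe, target):
    """Length of the longest substring of the target's reverse complement
    that also occurs in the probe, i.e., a perfectly pairing run."""
    rc = revcomp(target)
    best = 0
    for i in range(len(rc)):
        for j in range(i + best + 1, len(rc) + 1):  # only try to beat current best
            if rc[i:j] in probe:
                best = j - i
            else:
                break
    return best

probe = "ATCGGCTAAGGCTCGATTACGGATCACGT"
off_target = revcomp("AAGGCTCGATTACG") + "TTTTTTTTTT"   # shares a 14-nt run
run = longest_shared_run(probe, off_target)
print(f"longest complementary run: {run} nt ->",
      "cross-hybridization risk" if run >= 10 else "ok")
```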

Relevance: 80.00%

Abstract:

Measurement of the absorbed dose from ionizing radiation in medical applications is an essential component of providing safe and reproducible patient care. There is a wide variety of tools available for measuring radiation dose; this work focuses on the characterization of two common solid-state dosimeters in medical applications: thermoluminescent dosimeters (TLD) and optically stimulated luminescent dosimeters (OSLD). There were two main objectives. The first was to evaluate the energy dependence of TLD and OSLD for non-reference measurement conditions in a radiotherapy environment. The second was to fully characterize the OSLD nanoDot in a CT environment and to provide validated calibration procedures for CT dose measurement using OSLD. Current protocols for dose measurement using TLD and OSLD generally assume a constant photon energy spectrum within a nominal beam energy, regardless of measurement location, tissue composition, or changes in beam parameters. Variations in the energy spectrum of therapeutic photon beams may affect the response of TLD and OSLD and could thereby result in an incorrect measure of dose unless these differences are accounted for. In this work, we used a Monte Carlo-based model to simulate variations in the photon energy spectra of a Varian 6 MV beam, then evaluated the impact of the perturbations in energy spectra on the response of both TLD and OSLD using Burlin cavity theory. Energy response correction factors were determined for a range of conditions and compared to measured correction factors, with good agreement. When OSLD are used for dose measurement in a diagnostic imaging environment, photon energy spectra are often referenced to a therapy-energy or orthovoltage photon beam, commonly 250 kVp, Co-60, or even 6 MV, where the spectra are substantially different. Appropriate calibration techniques specifically for the OSLD nanoDot in a CT environment have not been presented in the literature; furthermore, the dependence of the energy response on the calibration energy has not been emphasized. The results of this work include detailed calibration procedures for CT dosimetry using OSLD and a full characterization of this dosimetry system in a low-dose, low-energy setting.
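Burlin cavity theory blends the two classical limits, which can be written in a few lines: the detector-to-medium dose ratio is a size-parameter-weighted mix of the stopping-power ratio (small-cavity, Bragg-Gray limit) and the mass energy-absorption coefficient ratio (large-cavity limit). The numerical values below are placeholders; real ratios are energy-dependent and taken from tabulated data.

```python
def burlin_dose_ratio(d, stopping_power_ratio, mu_en_ratio):
    """D_detector / D_medium for cavity-size parameter d in [0, 1]
    (d -> 1: Bragg-Gray limit; d -> 0: photon/large-cavity limit)."""
    return d * stopping_power_ratio + (1.0 - d) * mu_en_ratio

# Hypothetical values for an Al2O3:C OSLD in water at one beam quality:
ratio = burlin_dose_ratio(d=0.6, stopping_power_ratio=0.85, mu_en_ratio=0.92)
correction = 1.0 / ratio     # energy response correction applied to the reading
print(f"D_det/D_med = {ratio:.3f}, correction factor = {correction:.3f}")
```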

Relevance: 80.00%

Abstract:

This study evaluated a modified home-based model of family preservation services, the long-term community case management model, as operationalized by a private child welfare agency that serves as the last resort for hard-to-serve families with children at severe risk of out-of-home placement. The evaluation used a one-group pretest-posttest design with a modified time-series design to determine whether the intervention would produce a change over time in the composite score of each family's Child Well-Being Scales (CWBS). A comparison of the mean CWBS scores of the 208 families, and of subsets of these families, at the pretest and various posttests showed a statistically significant decrease in CWBS scores, indicating reduced risk factors. The longer the duration of services, the greater the statistically significant reduction in risk. The results support the conclusion that families who participate in empowerment-oriented community case management, with the option to extend service duration to resolve or ameliorate chronic family problems, experience effective strengthening of family functioning.

Relevance: 80.00%

Abstract:

Carbon-isotope-based estimates of CO2 levels have been generated from a record of the photosynthetic fractionation of 13C (εp) in a central equatorial Pacific sediment core that spans the last ~255 ka. The 13C content of phytoplanktonic biomass was determined by analysis of C37 alkadienones. These compounds are exclusive products of prymnesiophyte algae, which at present grow most abundantly at depths of 70-90 m in the central equatorial Pacific. A record of the isotopic composition of dissolved CO2 was constructed from isotopic analyses of the planktonic foraminifera Neogloboquadrina dutertrei, which calcifies at 70-90 m in the same region. Values of εp, derived by comparison of the organic and inorganic δ values, were transformed to yield concentrations of dissolved CO2 (ce) based on a new, site-specific calibration of the relationship between εp and ce. The calibration was based on a reassessment of existing εp versus ce data, which support a physiologically based model in which εp is inversely related to ce. Values of pCO2, the partial pressure of CO2 that would be in equilibrium with the estimated concentrations of dissolved CO2, were calculated using Henry's law and the temperature determined from the alkenone unsaturation index U^K_37. Uncertainties in these values arise mainly from uncertainties about the appropriateness (particularly over time) of the site-specific relationship between εp and 1/ce. These are discussed in detail, and it is concluded that the observed record of εp most probably reflects significant variations in ΔpCO2, the ocean-atmosphere disequilibrium, which appears to have ranged from ~110 µatm during glacial intervals (ocean > atmosphere) to ~60 µatm during interglacials. Fluxes of CO2 to the atmosphere would thus have been significantly larger during glacial intervals. If this were characteristic of large areas of the equatorial Pacific, then greater glacial sinks for the equatorially evaded CO2 must have existed elsewhere. Statistical analysis of the air-sea pCO2 differences and other parameters revealed significant (p < 0.01) inverse correlations of ΔpCO2 with sea surface temperature and with the mass accumulation rate of opal. The former suggests a response to the strength of upwelling; the latter may indicate either drawdown of CO2 by siliceous phytoplankton or variation of [CO2]/[Si(OH)4] ratios in upwelling waters.
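A back-of-envelope version of the chain described above: an assumed inverse relation εp = εf − b/ce is inverted for dissolved CO2, and Henry's law converts ce to an equilibrium pCO2. All constants below are placeholders, not the site-specific calibration used in the study.

```python
import math

EPS_F = 25.0   # maximal fractionation (permil), assumed
B = 120.0      # slope of the 1/ce dependence (permil * umol/kg), assumed

def dissolved_co2(eps_p):
    """Invert eps_p = EPS_F - B/ce  ->  ce in umol/kg."""
    return B / (EPS_F - eps_p)

def pco2_uatm(ce, temp_c):
    """Henry's law pCO2 = ce/K0. K0 here is a crude exponential fit in
    mol/(kg*atm), a stand-in for the Weiss (1974) solubility."""
    k0 = 0.065 * math.exp(-0.025 * temp_c)
    return ce / k0            # umol/kg divided by mol/(kg*atm) -> uatm

for eps_p in (12.0, 14.0, 16.0):
    ce = dissolved_co2(eps_p)
    print(f"eps_p = {eps_p:4.1f} permil -> ce = {ce:5.1f} umol/kg, "
          f"pCO2 ~ {pco2_uatm(ce, 25.0):4.0f} uatm")
```

With these placeholder constants, larger εp maps to higher ce and hence higher pCO2, which is the qualitative behavior the calibration exploits.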