933 results for DIRECTED PERCOLATION
Abstract:
Thin films are developed by dispersing carbon black nanoparticles and carbon nanotubes (CNTs) in an epoxy polymer. The films show a large variation in electrical resistance when subjected to quasi-static and dynamic mechanical loading. This phenomenon is attributed to the change in the band gap of the CNTs due to the applied strain, and also to the change in the volume fraction of the constituent phases in the percolation network. Under quasi-static loading, the films show a nonlinear response, which is primarily attributed to the pre-yield softening of the epoxy polymer. The electrical resistance of the films is found to be strongly dependent on the magnitude and frequency of the applied dynamic strain, induced by a piezoelectric substrate. Interestingly, the resistance variation is found to be a linear function of frequency and dynamic strain. Samples with a CNT concentration as low as 0.57% show a sensitivity as high as 2.5% MPa⁻¹ for static mechanical loading. A mathematical model based on Bruggeman's effective medium theory is developed to better understand the experimental results. Dynamic mechanical loading experiments reveal a sensitivity as high as 0.007% Hz⁻¹ at a constant small-amplitude vibration and up to 0.13% per microstrain over 0-500 Hz vibration. Potential applications of such thin films include highly sensitive strain sensors, accelerometers, artificial neural networks, artificial skin and polymer electronics.
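The Bruggeman effective-medium relation mentioned in the abstract can be sketched numerically. The following is a minimal illustration only, not the authors' model: the conductivities and volume fraction are hypothetical, and the symmetric two-phase Bruggeman equation is solved for the effective conductivity by bisection.

```python
# Symmetric two-phase Bruggeman effective medium equation, solved by bisection.
# Illustrative sketch only; sigma1, sigma2 and f are hypothetical values, not
# parameters from the paper.

def bruggeman_effective_conductivity(sigma1, sigma2, f, tol=1e-12):
    """Solve f*(s1-se)/(s1+2se) + (1-f)*(s2-se)/(s2+2se) = 0 for se."""
    def g(se):
        return (f * (sigma1 - se) / (sigma1 + 2 * se)
                + (1 - f) * (sigma2 - se) / (sigma2 + 2 * se))
    lo, hi = min(sigma1, sigma2), max(sigma1, sigma2)
    for _ in range(200):                 # g is monotone on [lo, hi]
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Above the percolation threshold f = 1/3, with a nearly insulating matrix
# (sigma2 ~ 0), the solution approaches sigma1 * (3f - 1) / 2.
se = bruggeman_effective_conductivity(1.0, 1e-9, 0.5)
```

For f = 0.5 and an insulating second phase this gives se ≈ 0.25·sigma1, reproducing the well-known (3f − 1)/2 percolation behaviour of the symmetric Bruggeman model.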
Abstract:
RecJ exonuclease plays crucial roles in several DNA repair and recombination pathways, and its ubiquity in bacterial species points to its ancient origin and vital cellular function. RecJ exonuclease from Haemophilus influenzae is a 575-amino-acid protein that harbors the characteristic motifs conserved among RecJ homologs. The purified protein exhibits a processive 5'-3' single-stranded-DNA-specific exonuclease activity. The exonuclease activity of H. influenzae RecJ (HiRecJ) was supported by Mg2+ or Mn2+ and inhibited by Cd2+, suggesting a different mode of metal binding in HiRecJ as compared to Escherichia coli RecJ (EcoRecJ). Site-directed mutagenesis of highly conserved residues in HiRecJ abolished enzymatic activity. Interestingly, substitution of alanine for aspartate 77 resulted in a catalytically inactive enzyme that bound DNA with significantly higher affinity than the wild-type enzyme. Noticeably, steady-state kinetic studies showed that H. influenzae single-stranded DNA-binding protein (HiSSB) increased the affinity of HiRecJ for single-stranded DNA and stimulated its exonuclease activity. HiSSB whose C-terminal tail had been deleted failed to enhance RecJ exonuclease activity. More importantly, HiRecJ was found to directly associate with its cognate single-stranded DNA-binding protein (SSB), as demonstrated by various in vitro assays. Interaction studies carried out with the truncated variants of HiRecJ and HiSSB revealed that the two proteins interact via the C-terminus of the SSB protein and the core catalytic domain of RecJ. Taken together, these results emphasize a direct interaction between RecJ and SSB, which confers functional cooperativity on these two proteins. In addition, these results implicate SSB as being involved in the recruitment of RecJ to DNA and provide insights into the interplay between these proteins in repair and recombination pathways.
Abstract:
The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. This paper describes a near-optimal two-step algorithm that constructs the Reeb graph of a Morse function defined over manifolds in any dimension. The algorithm first identifies the critical points of the input function, and then connects these critical points in the second step to obtain the Reeb graph. A simplification mechanism based on topological persistence aids in the removal of noise and unimportant features. A radial layout scheme results in a feature-directed drawing of the Reeb graph. Experimental results demonstrate the efficiency of the Reeb graph construction in practice and illustrate its applications.
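The critical-point identification step described in the abstract can be illustrated with a union-find sweep over a piecewise-linear scalar field on a graph. This is only a simplified merge-tree sketch, not the paper's algorithm, and the graph and function values below are invented: vertices are visited in increasing function value, minima create components, and merge saddles join them.

```python
# Sketch: find minima and merge saddles of a scalar function on a graph by
# sweeping vertices in increasing function value with union-find. This mirrors
# only the first (critical-point) step of Reeb graph construction; the example
# graph and values are hypothetical.

def find(parent, v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]    # path halving
        v = parent[v]
    return v

def sweep_critical_points(values, edges):
    n = len(values)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    order = sorted(range(n), key=lambda v: values[v])
    parent = {}
    minima, saddles = [], []
    for v in order:
        lower = {find(parent, u) for u in adj[v] if u in parent}
        if not lower:
            minima.append(v)             # no lower neighbour: local minimum
        elif len(lower) > 1:
            saddles.append(v)            # joins several components: merge saddle
        parent[v] = v
        for r in lower:
            parent[r] = v                # union lower components into v
    return minima, saddles

# A path graph with values 1, 3, 2, 4: vertices 0 and 2 are minima, and
# vertex 1 merges their two components.
mins, sads = sweep_critical_points([1.0, 3.0, 2.0, 4.0], [(0, 1), (1, 2), (2, 3)])
```

Running the symmetric sweep downward from maxima (and pairing the two trees) is what full contour/Reeb tree algorithms add on top of this step.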
Abstract:
Eleven new human polyomaviruses have been discovered recently, yet for most of these viruses little is known of their biology and clinical impact. Rolling circle amplification (RCA) is an ideal method for the amplification of the circular polyomavirus genome due to its high-fidelity amplification of circular DNA. In this study, a modified RCA method was developed to selectively amplify a range of polyomavirus genomes. Initial evaluation showed that a multiplexed temperature-graded reaction profile gave the best yield and sensitivity in amplifying BK polyomavirus in a background of human DNA, with up to 1 × 10⁸-fold increases in viral genomes from as little as 10 genome copies per reaction. Furthermore, the method proved to be more sensitive and provided a 200-fold greater yield than standard RCA based on random hexamers. Application of the method to other novel human polyomaviruses showed successful amplification of TSPyV, HPyV6, HPyV7, and STLPyV from low-viral-load positive clinical samples, with viral genome enrichment ranging from 1 × 10⁸ up to 1 × 10¹⁰. This directed RCA method can be applied to selectively amplify other low-copy polyomaviral genomes from a background of competing non-specific DNA, and is a useful tool in further research into the rapidly expanding Polyomaviridae family.
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a Linear Program (LP). A bound on the Generalization Error of this approach is given in terms of the Leave-One-Out Error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations.
In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence integrated computational-experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
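The l1-constrained fitting step can be illustrated with a tiny lasso solved by coordinate descent (the paper instead solves an LP formulation; the data and penalty value here are invented): one "target gene" profile is regressed on candidate "regulator" profiles, and the l1 penalty drives the coefficient of the non-regulator to exactly zero, yielding a sparse inferred edge set.

```python
# Illustrative lasso by coordinate descent: regress a target gene profile y on
# candidate regulator profiles (columns of X); the l1 penalty zeroes out
# non-regulators. Data and lambda are hypothetical, not from the DREAM2 sets.

def lasso_cd(X, y, lam, n_iter=100):
    """Minimize (1/2n)*||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial-residual correlation rho_j and column scale z_j
            rho = sum(X[i][j] * (y[i]
                                 - sum(X[i][k] * w[k] for k in range(p))
                                 + X[i][j] * w[j]) for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding update
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

# The target depends only on regulator 0; lasso keeps w[0] (shrunk by lam)
# and sets w[1] exactly to zero.
X = [[1, 1], [-1, 1], [1, -1], [-1, -1]]
y = [2, -2, 2, -2]
w = lasso_cd(X, y, lam=0.5)
```

Scaled up to thousands of genes, each target's nonzero coefficients become the undirected edges of the inferred network; the exact-zero behaviour is what makes the l1 constraint produce sparse graphs.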
Abstract:
The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. 
Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election which will provide detailed new insights social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
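The tweeting-style breakdown described above (original messages, @replies, retweets) can be computed with a simple surface-level classifier. This sketch uses invented example tweets and the usual Twitter text conventions, not the custom analytics tools mentioned in the chapter.

```python
# Classify tweets into the three styles discussed above. The heuristics
# (leading "RT @" marks a retweet, a leading "@" an @reply) are the common
# surface conventions; the sample tweets are invented.
from collections import Counter

def tweet_style(text):
    if text.startswith("RT @"):
        return "retweet"
    if text.startswith("@"):
        return "reply"
    return "original"

def style_breakdown(tweets):
    counts = Counter(tweet_style(t) for t in tweets)
    total = len(tweets)
    return {style: counts[style] / total
            for style in ("original", "reply", "retweet")}

sample = [
    "Four more years.",
    "RT @Obama2012: Watch the debate live tonight.",
    "@voter Thanks for your support!",
    "We can't afford four more years like the last four.",
]
shares = style_breakdown(sample)
```

Computing these shares per account and per week is one way to trace the stylistic changes over the final campaign months that the chapter discusses.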
Abstract:
Miniaturized analytical devices, such as heated nebulizer (HN) microchips studied in this work, are of increasing interest owing to benefits like faster operation, better performance, and lower cost relative to conventional systems. HN microchips are microfabricated devices that vaporize liquid and mix it with gas. They are used with low liquid flow rates, typically a few µL/min, and have previously been utilized as ion sources for mass spectrometry (MS). Conventional ion sources are seldom feasible at such low flow rates. In this work HN chips were developed further and new applications were introduced. First, a new method for thermal and fluidic characterization of the HN microchips was developed and used to study the chips. Thermal behavior of the chips was also studied by temperature measurements and infrared imaging. An HN chip was applied to the analysis of crude oil – an extremely complex sample – by microchip atmospheric pressure photoionization (APPI) high resolution mass spectrometry. With the chip, the sample flow rate could be reduced significantly without loss of performance and with greatly reduced contamination of the MS instrument. Thanks to its suitability to high temperature, microchip APPI provided efficient vaporization of nonvolatile compounds in crude oil. The first microchip version of sonic spray ionization (SSI) was presented. Ionization was achieved by applying only high (sonic) speed nebulizer gas to an HN microchip. SSI significantly broadens the range of analytes ionizable with the HN chips, from small stable molecules to labile biomolecules. The analytical performance of the microchip SSI source was confirmed to be acceptable. The HN microchips were also used to connect gas chromatography (GC) and capillary liquid chromatography (LC) to MS, using APPI for ionization. 
Microchip APPI allows efficient ionization of both polar and nonpolar compounds whereas with the most popular electrospray ionization (ESI) only polar and ionic molecules are ionized efficiently. The combination of GC with MS showed that, with HN microchips, GCs can easily be used with MS instruments designed for LC-MS. The presented analytical methods showed good performance. The first integrated LC–HN microchip was developed and presented. In a single microdevice, there were structures for a packed LC column and a heated nebulizer. Nonpolar and polar analytes were efficiently ionized by APPI. Ionization of nonpolar and polar analytes is not possible with previously presented chips for LC–MS since they rely on ESI. Preliminary quantitative performance of the new chip was evaluated and the chip was also demonstrated with optical detection. A new ambient ionization technique for mass spectrometry, desorption atmospheric pressure photoionization (DAPPI), was presented. The DAPPI technique is based on an HN microchip providing desorption of analytes from a surface. Photons from a photoionization lamp ionize the analytes via gas-phase chemical reactions, and the ions are directed into an MS. Rapid analysis of pharmaceuticals from tablets was successfully demonstrated as an application of DAPPI.
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable which is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The algorithm also incorporates a check of whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
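The pruning idea can be sketched as a plain backtracking search over bounded integer variables, in which a partial assignment is extended only while every fully instantiated constraint holds. The graph-derived lower bounds of the actual algorithm are omitted, and the example constraint system is invented.

```python
# Sketch of a backtracking feasibility check over integer variables: extend a
# partial assignment only while every constraint whose variables are all
# assigned still holds. This omits the paper's graph-based lower bounds; the
# example constraints below are made up.

def feasible(domains, constraints):
    """domains: list of iterables of ints; constraints: list of
    (variable-index tuple, predicate) pairs."""
    n = len(domains)
    assignment = [None] * n

    def consistent(k):
        # check every constraint all of whose variables are assigned (<= k)
        return all(pred(*(assignment[v] for v in vs))
                   for vs, pred in constraints if max(vs) <= k)

    def extend(k):
        if k == n:
            return True
        for value in domains[k]:
            assignment[k] = value
            if consistent(k) and extend(k + 1):
                return True
        assignment[k] = None           # backtrack
        return False

    return extend(0)

# x0 + x1 <= 3, x0 >= x1, x0 == 2*x1 over {0..3} is feasible (e.g. x0 = x1 = 0).
sat = feasible([range(4), range(4)],
               [((0, 1), lambda a, b: a + b <= 3),
                ((0, 1), lambda a, b: a >= b),
                ((0, 1), lambda a, b: a == 2 * b)])
```

The paper's refinement replaces the naive `consistent` test with bounds propagated along the weighted constraint graph, so inconsistent branches are cut before values are even tried.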
Abstract:
This study extends understanding of consumers' decisions to adopt transformative services delivered via technology. It incorporates competitive effects into the model of goal-directed behavior which, in keeping with the majority of consumer decision making models, neglects to explicitly account for competition. A goal-level operationalization of competition, incorporating both direct and indirect competition, is proposed. A national web-based survey collected data from 431 respondents about their decisions to adopt mental health services delivered via mobile phone. The findings show that the extent to which consumers perceived using these transformative services to be more instrumental to achieving their goals than competition had the greatest impact on their adoption decisions. This finding builds on the limited empirical evidence for the inclusion of competitive effects to more fully explain consumers' decisions to adopt technology-based and other services. It also provides support for a broader operationalization of competition with respect to consumers' personal goals.
Abstract:
"Contextualizing radio discourse by prosodic means: Five great twentieth-century French philosophers as examples." This dissertation deals with the contextualization of speech by prosodic means. In other words, it examines how the prosodic features of speech (such as pitch, intensity, pauses, duration and rhythm) guide the interpretation of speech alongside the more traditionally studied word and sentence meanings. The work focuses on seven prosodically marked patterns, each consisting of salient changes in one or several parameters. The phenomena are examined both in terms of their acoustic forms and in terms of their typical contexts of occurrence and discursive functions. The material consists of radio programmes featuring five great twentieth-century French philosophers: Gaston Bachelard, Albert Camus, Michel Foucault, Maurice Merleau-Ponty and Jean-Paul Sartre. The programmes were broadcast on various radio channels in France between 1948 and 1973. The results show that prosodically marked patterns are multidimensional speech phenomena that play a central role in contextualizing what is said: they can, for example, raise or lower the information value of an utterance, express the speaker's strong or weak commitment to what is said, or signal the continuation or completion of a structural unit. The dissertation also contains contrastive sections in which the phenomena are compared with a melodic figure occurring in classical piano music and with a prosodic phenomenon of Finnish. The results suggest that a certain kind of melodic figure is used as a similar structuring device in both speech and classical music. Furthermore, the results indicate that certain melodic forms are used to create similar implications in two languages as different as Finnish and French. One part of the dissertation deals with the prosodic marking of the full stop and the comma in speech.
According to the results, the full stop and the comma each have their own spoken prototype: the full stop is typically marked by a fall in pitch followed by a pause, and the comma by a rise in pitch followed by a pause. The most significant results, however, concern cases in which a punctuation mark is given a prosodically atypical interpretation: both the full stop and the comma appear to have several different spoken counterparts, and the functions of the punctuation marks can turn out very differently depending on their prosodic interpretation.
Abstract:
The imperative for Indigenous education in Australia is influenced by national political, social and economic discourses as Australian education systems continue to grapple with an agreed aspiration of full participation for Aboriginal and Torres Strait Islander students. Innovations within, and policies guiding, our education systems are often driven by agendas of reconciliation, equity, equality in participation and social justice. In this paper, we discuss key themes that emerged from a recent Australian Office for Learning and Teaching (OLT) research project which investigated ways in which preservice teachers from one Australian university embedded Indigenous knowledges (IK) on teaching practicum. Using a phenomenological approach, the case involved 25 preservice teacher and 23 practicum supervisor participants over a 30-month investigation. Attention was directed to the nature of the subjective (lived) experiences of participants in these pedagogical negotiations, and thus preservice and supervising teacher voice was actively sought in naming and analysing these experiences. Findings revealed that change, knowledge, help and affirmation were key themes for shaping discourses around Indigenous knowledges and perspectives in the Australian curriculum, and defined the nature of the pedagogical relationships between novice and experienced teachers. We focus particularly on the need for change and affirmation by preservice teachers and their teaching practicum supervisors as they developed their pedagogical relationships whilst embedding Indigenous knowledges in learning and teaching.
Abstract:
This study sets out to provide new information about the interaction between abstract religious ideas and actual acts of violence in the early crusading movement. The sources are asked whether such a concept as religious violence can be singled out as an independent or distinguishable source of aggression at the moment of actual bloodshed. The analysis concentrates on the practitioners of sacred violence, the crusaders: their mental processing of the use of violence, their concept of the violent act, and the set of values and attitudes defining this concept. The scope of the study, the early crusade movement, covers the period from the late 1080s to the crusader conquest of Jerusalem on 15 July 1099. The research has been carried out by contextual reading of relevant sources. Eyewitness reports are compared with texts that were produced by ecclesiastics in Europe. Critical reading of the texts reveals both connecting ideas and interesting differences between them. The sources share a positive attitude towards crusading, and were principally written to propagate the crusade institution and find new recruits. The emphasis of the study is on the interpretation of images: the sources are not asked what really happened in chronological order, but what the crusader understanding of the reality was like. Fictional material can be even more crucial for the understanding of the crusading mentality. Crusader sources from around the turn of the twelfth century accept violent encounters with non-Christians on the grounds of external hostility directed towards the Christian community. The enemies of Christendom can be identified with non-Christians living outside Christian society (Muslims), non-Christians living within Christian society (Jews), or Christian heretics. Western Christians are described as both victims and avengers of the surrounding forces of diabolical evil.
Although the ideal of universal Christianity and gradual eradication of the non-Christian is present, the practical means of achieving a united Christendom are not discussed. The objective of crusader violence was thus entirely Christian: the punishment of the wicked and the restoration of Christian morals and the divine order. Meanwhile, the means used to achieve these objectives were not. Given the scarcity of written regulations concerning the use of force in bello, perceptions concerning the practical use of violence were drawn from a multitude of notions comprising an adaptable network of secular and ecclesiastical, pre-Christian and Christian traditions. Though essentially ideological and often religious in character, the early crusader concept of the practice of violence was not exclusively rooted in Christian thought. The main conclusion of the study is that a definable crusader ideology of the use of force existed by 1100. The crusader image of violence involved several levels of thought. Predominantly, violence was understood as a means of achieving higher spiritual rewards: eternal salvation and immortal glory.
Abstract:
Attention is directed at land application of piggery effluent (containing urine, faeces, water, and wasted feed) as a potential source of water resource contamination with phosphorus (P). This paper summarises P-related properties of soil from 0-0.05 m depth at 11 piggery effluent application sites, in order to explore the impact that effluent application has had on the potential for run-off transport of P. The sites investigated were situated on Alfisol, Mollisol, Vertisol, and Spodosol soils in areas that received effluent for 1.5-30 years (estimated effluent-P applications of 100-310000 kg P/ha in total). Total (PT), bicarbonate-extractable (PB), and soluble P forms were determined for the soil (0-0.05 m) at paired effluent and no-effluent sites, as well as texture, oxalate-extractable Fe and Al, organic carbon, and pH. All forms of soil P at 0-0.05 m depth increased with effluent application (PB at effluent sites was 1.7-15 times that at no-effluent sites) at 10 of the 11 sites. Increases in PB were strongly related to net P applications (regression analysis of log values for 7 sites with complete data sets: 82.6% of variance accounted for, p < 0.01). Effluent irrigation tended to increase the proportion of soil PT in dilute CaCl2-extractable forms (PTC: effluent average 2.0%; no-effluent average 0.6%). The proportion of PTC in non-molybdate-reactive forms (centrifuged supernatant) decreased (no-effluent average, 46.4%; effluent average, 13.7%). Anaerobic lagoon effluent did not reliably acidify soil, since no consistent relationship was observed for pH with effluent application. Soil organic carbon was increased in most of the effluent areas relative to the no-effluent areas. The four effluent areas where organic carbon was reduced had undergone intensive cultivation and cropping. Current effluent management at many of the piggeries failed to maximise the potential for waste P recapture.
Ten of the case-study effluent application areas have received effluent-P in excess of crop uptake. While this may not represent a significant risk of leaching where sorption retains P, it has increased the risk of transport of P by run-off. Where such sites are close to surface water, run-off P loads should be managed.
Abstract:
Dry-season weight loss in grazing cattle in northern Australia has been attenuated using a number of strategies (Hunter and Vercoe, 1987; Sillence et al., 1993; Gazzola and Hunter, 1999). Furthermore, the potential to improve efficiency of feed utilisation (and thus dry-season performance) in ruminants through conventional modulation of the insulin-like growth factor (IGF) axis (Oddy and Owens, 1997; Hill et al., 1999) and through immunomodulation of the IGF axis (Hill et al., 1998a,b) has been demonstrated. The present study investigated the use of a vaccine directed against IGFBP-1 in Brahman steers which underwent a period of nutritional restriction followed by a return to wet-season grazing.
Abstract:
Formulae for the generating functions for hypergraphs, dihypergraphs, oriented hypergraphs, self-complementary directed hypergraphs and self-complementary hypergraphs are presented here.
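As a concrete companion to such formulae, the number of hypergraphs on n unlabeled vertices (taking a hypergraph to be any set of non-empty vertex subsets) can be counted directly by Burnside's lemma: average 2 raised to the number of orbits of each vertex permutation acting on the non-empty subsets. A brute-force sketch, feasible only for small n since it enumerates all n! permutations:

```python
# Count hypergraphs on n unlabeled vertices by Burnside's lemma: a hypergraph
# is a set of non-empty vertex subsets, and two hypergraphs are identified
# when a vertex permutation maps one onto the other.
from itertools import combinations, permutations
from math import factorial

def count_hypergraphs(n):
    verts = range(n)
    subsets = [frozenset(c) for r in range(1, n + 1)
               for c in combinations(verts, r)]
    total = 0
    for perm in permutations(verts):
        # count orbits of this permutation acting on the non-empty subsets
        seen, orbits = set(), 0
        for s in subsets:
            if s in seen:
                continue
            orbits += 1
            t = s
            while True:
                t = frozenset(perm[v] for v in t)
                seen.add(t)
                if t == s:
                    break
        # a fixed hypergraph is any union of whole orbits: 2**orbits choices
        total += 2 ** orbits
    return total // factorial(n)
```

For n = 2 this gives 6 and for n = 3 it gives 40, the values a generating-function approach must reproduce in its low-order coefficients.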