Abstract:
Cloud computing allows vast computational resources to be leveraged quickly and easily in bursts, as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate, as a proof of principle, the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware. Funding source: Cancer Australia (Department of Health and Ageing) Research Grant 614217.
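The cost claim follows from per-hour billing: n machines each run for ceil(T/n) billed hours, so the billed total exceeds the serial T machine-hours unless n divides T. A minimal sketch of that arithmetic, assuming whole-hour billing and an ideal 1/n speed-up; the function name and figures are illustrative, not from the paper:

```python
import math

def relative_cost(total_hours, n_machines):
    """Relative cost of splitting a simulation of total_hours serial
    machine-hours across n_machines, assuming whole-hour billing and an
    ideal 1/n speed-up. 1.0 means no billing overhead."""
    billed = n_machines * math.ceil(total_hours / n_machines)
    return billed / total_hours

# A 12-hour serial simulation: cost is optimal (1.0) exactly when
# n divides 12, and rises when the final billed hour is partly idle.
for n in (2, 3, 4, 5, 6, 8, 12):
    print(n, relative_cost(12, n))
```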
Abstract:
Twitter is now well-established as an important platform for real-time public communication. Twitter research continues to lag behind these developments, with many studies remaining focused on individual case studies and utilizing home-grown, idiosyncratic, non-repeatable, and non-verifiable research methodologies. While the development of a full-blown “science of Twitter” may remain illusory, it is nonetheless necessary to move beyond such individual scholarship and toward the development of more comprehensive, transferable, and rigorous tools and methods for the study of Twitter on a large scale and in close to real time.
Abstract:
Our paper approaches Twitter through the lens of “platform politics” (Gillespie, 2010), focusing in particular on controversies around user data access, ownership, and control. We characterise different actors in the Twitter data ecosystem: private and institutional end users of Twitter, commercial data resellers such as Gnip and DataSift, data scientists, and finally Twitter, Inc. itself; and describe their conflicting interests. We furthermore study Twitter’s Terms of Service and application programming interface (API) as material instantiations of regulatory instruments used by the platform provider and argue for a stronger promotion of data rights and literacy to strengthen the position of end users.
Abstract:
The aim of this work is to develop software that is capable of back projecting primary fluence images obtained from EPID measurements through phantom and patient geometries in order to calculate 3D dose distributions. In the first instance, we aim to develop a tool for pre-treatment verification in IMRT. In our approach, a Geant4 application is used to back project primary fluence values from each EPID pixel towards the source. Each beam is considered to be polyenergetic, with a spectrum obtained from Monte Carlo calculations for the LINAC in question. At each step of the ray tracing process, the energy differential fluence is corrected for attenuation and beam divergence. Subsequently, the TERMA is calculated and accumulated into an energy differential 3D TERMA distribution. This distribution is then convolved with monoenergetic point spread kernels, thus generating energy differential 3D dose distributions. The resulting dose distributions are accumulated to yield the total dose distribution, which can then be used for pre-treatment verification of IMRT plans. Preliminary results were obtained for a test EPID image comprising 100 × 100 pixels of unity fluence. Back projection of this field into a 30 cm × 30 cm × 30 cm water phantom was performed, with TERMA distributions obtained in approximately 10 min (running on a single core of a 3 GHz processor). Point spread kernels for monoenergetic photons in water were calculated using a separate Geant4 application. Following convolution and summation, the resulting 3D dose distribution produced familiar build-up and penumbral features. In order to validate the dose model, we will use EPID images recorded without any attenuating material in the beam for a number of MLC-defined square fields. The dose distributions in water will be calculated and compared to TPS predictions.
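The pipeline is: back project fluence, correct for attenuation and divergence, accumulate TERMA, then convolve with energy-deposition kernels. Below is a heavily simplified monoenergetic sketch of those two stages, using a parallel-ray geometry and a spatially invariant kernel rather than the paper's Geant4 ray tracing and per-energy-bin superposition; all names, parameter values, and the stand-in kernel are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import convolve

def back_project_terma(fluence_2d, mu, energy_mev, ssd_cm, voxel_cm, n_depth):
    """Accumulate TERMA = (mu/rho) * energy fluence along each ray.
    Parallel-ray simplification: each EPID pixel feeds one voxel column,
    with the fluence corrected for attenuation and inverse-square
    divergence at every depth step."""
    rho = 1.0                                           # water, g/cm^3
    depths = (np.arange(n_depth) + 0.5) * voxel_cm      # voxel-centre depths (cm)
    attenuation = np.exp(-mu * depths)                  # attenuation correction
    divergence = (ssd_cm / (ssd_cm + depths)) ** 2      # beam divergence correction
    factor = (mu / rho) * energy_mev * attenuation * divergence
    return factor[:, None, None] * fluence_2d[None, :, :]

def dose_from_terma(terma, kernel):
    """Convolve TERMA with an energy-deposition point spread kernel.
    A spatially invariant kernel is assumed; real kernels would come
    from a separate Monte Carlo calculation."""
    return convolve(terma, kernel, mode="constant")

# Unity-fluence 100 x 100 test field into a water phantom, loosely
# following the abstract's example (0.3 cm voxels -> 30 cm cube).
fluence = np.ones((100, 100))
terma = back_project_terma(fluence, mu=0.0707, energy_mev=1.0,
                           ssd_cm=100.0, voxel_cm=0.3, n_depth=100)
kernel = np.ones((5, 5, 5)) / 125.0   # crude stand-in kernel
dose = dose_from_terma(terma, kernel)
```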
Abstract:
Dose kernels may be used to calculate dose distributions in radiotherapy (as described by Ahnesjö et al., 1999). Their calculation requires use of Monte Carlo methods, usually by forcing interactions to occur at a point. The Geant4 Monte Carlo toolkit provides a capability to force interactions to occur in a particular volume. We have modified this capability and created a Geant4 application to calculate dose kernels in Cartesian, cylindrical, and spherical scoring systems. The simulation considers monoenergetic photons incident at the origin of a 3 m × 3 m × 3 m water volume. Photons interact via Compton scattering, the photoelectric effect, pair production, and Rayleigh scattering. By default, Geant4 models photon interactions by sampling a physical interaction length (PIL) for each process. The process returning the smallest PIL is then considered to occur. In order to force the interaction to occur within a given length, L_FIL, we scale each PIL according to the formula PIL_forced = L_FIL × (1 - exp(-PIL/PIL0)), where PIL0 is a constant. This ensures that the process occurs within L_FIL, whilst correctly modelling the relative probability of each process. Dose kernels were produced for incident photon energies of 0.1, 1.0, and 10.0 MeV. In order to benchmark the code, dose kernels were also calculated using the EGSnrc Edknrc user code. Identical scoring systems were used; namely, the collapsed cone approach of the Edknrc code. Relative dose difference images were then produced. Preliminary results demonstrate the ability of the Geant4 application to reproduce the shape of the dose kernels; median relative dose differences of 12.6%, 5.75%, and 12.6% were found for incident photon energies of 0.1, 1.0, and 10.0 MeV, respectively.
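The forcing transform maps each sampled PIL monotonically into [0, L_FIL), and because the identical transform is applied to every process, the process with the smallest PIL still wins, which is why the relative interaction probabilities are preserved. A minimal sketch of that sampling step, assuming analogue exponential sampling; the mean free path values are illustrative placeholders, not reference data:

```python
import math
import random

def sample_forced_interaction(mean_free_paths_cm, l_fil, pil0):
    """Sample a physical interaction length (PIL) per process, then apply
    the forcing transform PIL_forced = L_FIL * (1 - exp(-PIL / PIL0)).
    The transform is monotone and identical for all processes, so the
    process with the smallest PIL still 'wins', preserving the relative
    probability of each interaction type while guaranteeing that the
    chosen interaction occurs within l_fil."""
    forced = {}
    for process, mfp in mean_free_paths_cm.items():
        pil = random.expovariate(1.0 / mfp)   # analogue exponential sampling
        forced[process] = l_fil * (1.0 - math.exp(-pil / pil0))
    winner = min(forced, key=forced.get)
    return winner, forced[winner]

# Illustrative mean free paths in water (cm); placeholders, not reference data.
mfp = {"compton": 14.0, "photoelectric": 1.0e4,
       "pair_production": 1.0e6, "rayleigh": 500.0}
process, distance = sample_forced_interaction(mfp, l_fil=0.5, pil0=10.0)
print(process, distance)   # distance is always < l_fil
```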
Abstract:
Background: Findings from the phase 3 FLEX study showed that the addition of cetuximab to cisplatin and vinorelbine significantly improved overall survival, compared with cisplatin and vinorelbine alone, in the first-line treatment of EGFR-expressing, advanced non-small-cell lung cancer (NSCLC). We investigated whether candidate biomarkers were predictive for the efficacy of chemotherapy plus cetuximab in this setting. Methods: Genomic DNA extracted from formalin-fixed paraffin-embedded (FFPE) tumour tissue of patients enrolled in the FLEX study was screened for KRAS codon 12 and 13 and EGFR kinase domain mutations with PCR-based assays. In FFPE tissue sections, EGFR copy number was assessed by dual-colour fluorescence in-situ hybridisation and PTEN expression by immunohistochemistry. Treatment outcome was investigated according to biomarker status in all available samples from patients in the intention-to-treat population. The primary endpoint in the FLEX study was overall survival. The FLEX study, which is ongoing but not recruiting participants, is registered with ClinicalTrials.gov, number NCT00148798. Findings: KRAS mutations were detected in 75 of 395 (19%) tumours and activating EGFR mutations in 64 of 436 (15%). EGFR copy number was scored as increased in 102 of 279 (37%) tumours and PTEN expression as negative in 107 of 303 (35%). Comparisons of treatment outcome between the two groups (chemotherapy plus cetuximab vs chemotherapy alone) according to biomarker status provided no indication that these biomarkers were of predictive value. Activating EGFR mutations were identified as indicators of good prognosis, with patients in both treatment groups whose tumours carried such mutations having improved survival compared with those whose tumours did not (chemotherapy plus cetuximab: median 17·5 months [95% CI 11·7-23·4] vs 8·5 months [7·1-10·8], hazard ratio [HR] 0·52 [0·32-0·84], p=0·0063; chemotherapy alone: 23·8 months [15·2-not reached] vs 10·0 months [8·7-11·0], HR 0·35 [0·21-0·59], p<0·0001). Expression of PTEN seemed to be a potential indicator of good prognosis, with patients whose tumours expressed PTEN having improved survival compared with those whose tumours did not, although this finding was not significant (chemotherapy plus cetuximab: median 11·4 months [8·6-13·6] vs 6·8 months [5·9-12·7], HR 0·80 [0·55-1·16], p=0·24; chemotherapy alone: 11·0 months [9·2-12·6] vs 9·3 months [7·6-11·9], HR 0·77 [0·54-1·10], p=0·16). Interpretation: The efficacy of chemotherapy plus cetuximab in the first-line treatment of advanced NSCLC seems to be independent of each of the biomarkers assessed. Funding: Merck KGaA. © 2011 Elsevier Ltd.
Abstract:
Since its launch in 2006, Twitter has turned from a niche service to a mass phenomenon. By the beginning of 2013, the platform claims to have more than 200 million active users, who “post over 400 million tweets per day” (Twitter, 2013). Its success is spreading globally; Twitter is now available in 33 different languages, and has significantly increased its support for languages that use non-Latin character sets. While Twitter, Inc. has occasionally changed the appearance of the service and added new features—often in reaction to users’ developing their own conventions, such as adding ‘#’ in front of important keywords to tag them—the basic idea behind the service has stayed the same: users may post short messages (tweets) of up to 140 characters and follow the updates posted by other users. This leads to the formation of complex follower networks with unidirectional as well as bidirectional connections between individuals, but also between media outlets, NGOs, and other organisations. While originally ‘microblogs’ were perceived as a new genre of online communication, of which Twitter was just one exemplar, the platform has become synonymous with microblogging in most countries. A notable exception is Sina Weibo, popular in China where Twitter is not available. Other similar platforms have been shut down (e.g., Jaiku), or are being used in slightly different ways (e.g., Tumblr), thus making Twitter a unique service within the social media landscape.
Abstract:
Twitter is used for a range of communicative purposes. These extend from personal tweets that address what used to be Twitter’s default question, “What’s happening?”, through one-on-one @reply conversations between close friends and attempts at getting the attention of celebrities and other public actors, to discussions in communities built around specific issues—and back again to broadcast-style statements from well-known individuals and brands to their potentially very large retinue of followers.
Abstract:
As the systematic investigation of Twitter as a communications platform continues, the question of developing reliable comparative metrics for the evaluation of public, communicative phenomena on Twitter becomes paramount. What is necessary here is the establishment of an accepted standard for the quantitative description of user activities on Twitter. Such a standard needs to be flexible enough to be applied to a wide range of communicative situations: the evaluation of individual users’ and groups of users’ Twitter communication strategies, the examination of communicative patterns within hashtags and other identifiable ad hoc publics on Twitter (Bruns & Burgess, 2011), and even the analysis of very large datasets of everyday interactions on the platform. Such a framework for the quantitative analysis of Twitter communication would enable researchers in different areas (e.g., communication studies, sociology, information systems) to adapt methodological approaches and conduct analyses of their own. Besides general findings about communication structure on Twitter, large amounts of data might be used to better understand issues or events retrospectively, to detect issues or events at an early stage, or even to predict certain real-world developments (e.g., election results; cf. Tumasjan, Sprenger, Sandner, & Welpe, 2010, for an early attempt to do so).
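To make the idea of standardised activity metrics concrete, here is a minimal sketch of the kind of per-user measures such a framework might define (shares of original tweets, @replies, and retweets); the tweet schema and the prefix-based classification are assumptions for illustration, not the framework proposed in the text:

```python
from collections import Counter

def user_activity_metrics(tweets):
    """Per-user activity shares: original tweets, @replies, and retweets.
    `tweets` is a list of {'user': ..., 'text': ...} dicts, a hypothetical
    minimal schema; the "RT @"/"@" prefixes are crude heuristics."""
    totals, replies, retweets = Counter(), Counter(), Counter()
    for t in tweets:
        user, text = t["user"], t["text"]
        totals[user] += 1
        if text.startswith("RT @"):
            retweets[user] += 1
        elif text.startswith("@"):
            replies[user] += 1
    return {
        user: {
            "tweets": n,
            "reply_share": replies[user] / n,
            "retweet_share": retweets[user] / n,
            "original_share": (n - replies[user] - retweets[user]) / n,
        }
        for user, n in totals.items()
    }

sample = [{"user": "a", "text": "@b hello"},
          {"user": "a", "text": "RT @c: news"},
          {"user": "b", "text": "What's happening?"}]
print(user_activity_metrics(sample))
```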
Abstract:
Twitter and other social media have become increasingly important tools for maintaining the relationships between fans and their idols across a range of activities, from politics and the arts to celebrity and sports culture. Twitter, Inc. itself has initiated several strategic approaches, especially to entertainment and sporting organisations; late in 2012, for example, a Twitter, Inc. delegation toured Australia in order to develop formal relationships with a number of key sporting bodies covering popular sports such as Australian Rules Football, A-League football (soccer), and V8 touring car racing, as well as to strengthen its connections with key Australian broadcasters and news organisations (Jackson & Christensen, 2012). Similarly, there has been a concerted effort between Twitter Germany and the German Bundesliga clubs and football association to coordinate the presence of German football on Twitter ahead of the 2012–2013 season: the Twitter accounts of almost all first-division teams now bear the official Twitter verification mark, and a system of ‘official’ hashtags for tweeting about individual games (combining the abbreviations of the two teams, e.g. #H96FCB) has also been instituted (Twitter auf Deutsch, 2012).
Abstract:
Over the past decade, social media have gone through a process of legitimation and official adoption, and they are now becoming embedded as part of the official communications apparatus of many commercial and public-sector organisations—in turn, providing platforms like Twitter with their own sources of legitimacy. Arguably, the demonstrated utility of social media platforms and tools in times of crisis—from civil unrest and violent crime through to natural disasters like bushfires, earthquakes, and floods—has been a crucial driver of this newfound legitimacy. In the mid-2000s, user-created content and ‘Web 2.0’ platforms were known to play a role in crisis communication; back then, the involvement of extra-institutional actors in providing and sharing information around such events involved distributed, ad hoc, or niche platforms (like Flickr), and was more likely to be framed as ‘citizen journalism’ or ‘crowdsourcing’ (see, for example, Liu, Palen, Sutton, Hughes, & Vieweg, 2008, on the then-emerging role of photo-sharing in disasters). Since then, the dramatically increased take-up of mainstream social media platforms like Facebook and Twitter means that the pool of potential participants in online crisis communication has broadened to include a much larger proportion of the general population, as well as traditional media and official emergency response organisations.
Abstract:
Each of the thirty-one contributions in this volume implicitly spells out its own answer to this question. Surprisingly perhaps even for such a highly interdisciplinary volume as this one, these answers vary considerably in their approaches, their objectives, and their underlying assumptions about the object of study. This diversity of scholarly perspectives on Twitter, barely half a decade since it first emerged as a popular platform, highlights its versatility. Beginning as a side project to a now-forgotten podcasting platform, rising to popularity as a social network service focussed around mundane communication and therefore widely lambasted as a cesspool of vanity and triviality by incredulous journalists (including technology journalists), it was later embraced by those same journalists, governments, and businesses as a crucial source of real-time information on everything from natural disasters to celebrity gossip, and from debates over sexual violence to Vatican politics.