900 results for "write"

Relevance:

10.00%

Publisher:

Abstract:

This was a peer-reviewed event that took place at the DiGRA-FDG conference in August 2016. While it has a paper component (the attached proposal), the output was a demonstration of games rather than a conference paper. As such, this entry should be considered an Event or Exhibition.

Richard Aldington’s city poems in the latter part of his 1915 collection Images are concerned with the masses who inhabit the modern city. Aldington is at pains to stress his distinction from those he perceives as an increasingly homogenized crowd. This paper examines the literary, linguistic and rhetorical strategies by which Aldington ‘others’ the masses, and sets them in the context of contemporary studies of the crowd, focusing on the work of Gustave Le Bon and C. F. G. Masterman. Aldington’s poetry is a product of the environment he sees as unsatisfactory, but he searches for solutions in a range of literary traditions which write the city.

This dissertation covers two separate topics in statistical physics. The first part of the dissertation focuses on computational methods of obtaining the free energies (or partition functions) of crystalline solids. We describe a method to compute the Helmholtz free energy of a crystalline solid by direct evaluation of the partition function. In the many-dimensional conformation space of all possible arrangements of N particles inside a periodic box, the energy landscape consists of localized islands corresponding to different solid phases. Calculating the partition function for a specific phase involves integrating over the corresponding island. Introducing a natural order parameter that quantifies the net displacement of particles from lattice sites, we write the partition function in terms of a one-dimensional integral along the order parameter, and evaluate this integral using umbrella sampling. We validate the method by computing free energies of both face-centered cubic (FCC) and hexagonal close-packed (HCP) hard sphere crystals with a precision of $10^{-5}k_BT$ per particle. In developing the numerical method, we find several scaling properties of crystalline solids in the thermodynamic limit. Using these scaling properties, we derive an explicit asymptotic formula for the free energy per particle in the thermodynamic limit. In addition, we describe several changes of coordinates that can be used to separate internal degrees of freedom from external, translational degrees of freedom. The second part of the dissertation focuses on engineering idealized physical devices that work as Maxwell's demon. We describe two autonomous mechanical devices that extract energy from a single heat bath and convert it into work, while writing information onto memory registers. Additionally, both devices can operate as Landauer's eraser, namely, they can erase information from a memory register, while energy is dissipated into the heat bath.
The phase diagrams and the efficiencies of the two models are solved and analyzed. These two models provide concrete physical illustrations of the thermodynamic consequences of information processing.
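The reduction of the partition function to a one-dimensional integral along an order parameter can be illustrated with a toy calculation. The sketch below is not the dissertation's crystal model: it uses a single harmonic degree of freedom (an assumed stand-in potential) so that direct quadrature of Z can be checked against the analytic value.

```python
import math

# Toy sketch (not the dissertation's crystal model): write the partition
# function as a one-dimensional integral along a scalar order parameter q,
# Z = integral of exp(-beta * U(q)) dq, and evaluate it by direct quadrature.

def partition_function(u, beta, q_min, q_max, n=20_000):
    """Trapezoidal quadrature of the Boltzmann weight along q."""
    h = (q_max - q_min) / n
    total = 0.5 * (math.exp(-beta * u(q_min)) + math.exp(-beta * u(q_max)))
    total += sum(math.exp(-beta * u(q_min + i * h)) for i in range(1, n))
    return total * h

beta = 1.0
u = lambda q: 0.5 * q * q              # illustrative harmonic potential
z = partition_function(u, beta, -10.0, 10.0)
f = -math.log(z) / beta                # Helmholtz free energy in k_B T units
# For this potential the analytic result is Z = sqrt(2 * pi), about 2.5066.
```

For a smooth, rapidly decaying integrand like this one the trapezoid rule is essentially exact; in the real many-particle problem the integrand is only accessible by sampling, which is where umbrella sampling enters.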

In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions. SWORD incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
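The neighborhood-centric programming style described above can be sketched in a few lines. This is a hedged illustration, not NSCALE's actual API: `k_hop_nodes`, `neighborhood_apply`, the `edge_density` task and the toy graph are all invented for the example.

```python
from collections import deque

# Hedged sketch of the neighborhood-centric idea (not NSCALE's actual API):
# extract each vertex's k-hop neighborhood as an induced subgraph and hand it
# to a user-supplied function, instead of vertex-at-a-time programming.

def k_hop_nodes(adj, source, k):
    """Breadth-first collection of all nodes within k hops of source."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

def neighborhood_apply(adj, k, func):
    """Run func on the induced k-hop subgraph around every vertex."""
    results = {}
    for v in adj:
        nodes = k_hop_nodes(adj, v, k)
        sub = {u: [w for w in adj[u] if w in nodes] for u in nodes}
        results[v] = func(sub)
    return results

def edge_density(sub):
    """Example per-neighborhood task: edge density of the ego network."""
    n = len(sub)
    m = sum(len(nbrs) for nbrs in sub.values()) // 2  # undirected graph
    return 0.0 if n < 2 else 2 * m / (n * (n - 1))

# Tiny undirected example graph (both directions listed in adjacency lists).
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
density = neighborhood_apply(adj, 1, edge_density)
```

The same per-subgraph function could compute motif counts or recommendations; the point is that the user code receives a whole neighborhood, not a single vertex's state.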

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods.
I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
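A minimal sketch of a continuous neighborhood-driven aggregate, under stated assumptions (this is not the dissertation's aggregation overlay; the `NeighborhoodSum` class and the toy graph are illustrative): each node keeps a running sum of the values produced in its 1-hop neighborhood, updated incrementally per event rather than recomputed from scratch.

```python
from collections import defaultdict

# Hedged sketch of a neighborhood-driven aggregate (not the dissertation's
# overlay structure): every node maintains a running sum of the values
# produced in its 1-hop neighborhood, updated incrementally per event.

class NeighborhoodSum:
    def __init__(self, adj):
        self.adj = adj                      # node -> list of neighbors
        self.total = defaultdict(float)     # node -> sum over its neighborhood

    def on_event(self, node, value):
        """A node emitted `value`: push it into every watching neighborhood."""
        self.total[node] += value           # a node is in its own neighborhood
        for nbr in self.adj.get(node, ()):
            self.total[nbr] += value

adj = {"u": ["v"], "v": ["u", "w"], "w": ["v"]}
agg = NeighborhoodSum(adj)
agg.on_event("u", 3.0)   # seen by u and by its neighbor v
agg.on_event("w", 2.0)   # seen by w and by its neighbor v
```

The overlay idea in the dissertation goes further, sharing partial aggregates across overlapping queries; the sketch only shows the incremental, push-based update that makes such sharing worthwhile.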

This paper is part of the project “Adaptive thinking and flexible computation: Critical issues”. In this paper we discuss different perspectives on flexibility and adaptive thinking in the literature. We also discuss the idea of proceptual thinking and why this idea is important in our perspective on adaptive thinking. The paper analyses an activity, called the day number, developed in a first-grade classroom with its teacher. It is a daily activity at the beginning of the school day: it consists of looking at the date number and thinking about different ways of writing it using the four arithmetic operations. The analysed activity took place on March 19, so the challenge was to write 19 in several ways. The data show the pupils’ enthusiasm and their efforts to find different ways of writing the number. Some used large numbers and division, which they were just starting to learn. The pupils presented symbolic expressions for 19, decomposing and recomposing it in a flexible manner.
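The activity can be mimicked in a couple of lines: check that several candidate writings of the day number, built from the four arithmetic operations, all evaluate to 19. The expressions are illustrative, not the pupils' actual answers.

```python
# The day-number activity in miniature: verify candidate writings of 19
# built from the four arithmetic operations. The expressions are invented
# examples; eval is safe here only because the inputs are fixed literals.

candidates = ["20 - 1", "10 + 9", "38 / 2", "4 * 5 - 1", "100 - 81"]
day_number = 19
writings = [expr for expr in candidates if eval(expr) == day_number]
```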

The workshop took place on 16-17 January in Utrecht, with seventy experts from eight European countries in attendance. The workshop was structured in six sessions: usage statistics, research paper metadata, exchanging information, author identification, Open Archives Initiative, and eTheses. Following the workshop, the discussion groups were asked to continue their collaboration and to produce a report for circulation to all participants. The results can be downloaded below. The recommendations contained in the reports above have been reviewed by the Knowledge Exchange partner organisations and formed the basis for new proposals and the next steps in Knowledge Exchange work with institutional repositories.

Institutional Repository Workshop - next steps
During April and May 2007 Knowledge Exchange had expert reviewers from the partner organisations go through the workshop strand reports and make their recommendations about the best way to move forward, to set priorities, and to find possibilities for furthering the institutional repository cause. The KE partner representatives reviewed the reviews and consulted with their partner organisation management to get an indication of support and funding for the latest ideas and proposals, as follows.

Pragmatic interoperability
During a review meeting at JISC offices in London on 31 May, the expert reviewers and the KE partner representatives agreed that ‘pragmatic interoperability’ is the primary area of interest. It was also agreed that the most relevant and beneficial choice for a Knowledge Exchange approach would be to aim for CRIS-OAR interoperability as a step towards integrated services. Within this context, interlinked joint projects could be undertaken by the partner organisations regarding the areas that most interested them.
Interlinked projects
The proposed Knowledge Exchange activities involve interlinked joint projects on metadata, persistent author identifiers, and eTheses, which are intended to connect to and build on projects such as ISPI, Jisc NAMES and the Digital Author Identifier (DAI) developed by SURF. It is important to stress that the projects are not intended to overlap, but rather to supplement the DRIVER 2 (EU project) approaches.

Focus on CRIS and OAR
It is believed that the focus on practical interoperability between Current Research Information Systems and Open Access Repository systems will be of genuine benefit to the research scientist, research administrator and librarian communities in the Knowledge Exchange countries, accommodating the specific needs of each group.

Timing
June 2007: draft proposal written by KE Working Group members.
July 2007: final proposal sent to the partner organisations by the KE Group.
August 2007: decision by the Knowledge Exchange partner organisations.

Spelling is an important literacy skill, and learning to spell is an important component of learning to write. Learners with strong spelling skills also exhibit greater reading, vocabulary, and orthographic knowledge than those with poor spelling skills (Ehri & Rosenthal, 2007; Ehri & Wilce, 1987; Rankin, Bruning, Timme, & Katkanant, 1993). English, being a deep orthography, has inconsistent sound-to-letter correspondences (Seymour, 2005; Ziegler & Goswami, 2005). This poses a great challenge for learners in gaining spelling fluency and accuracy. The purpose of the present study is to examine cross-linguistic transfer of English vowel spellings in Spanish-speaking adult ESL learners. The research participants were 129 Spanish-speaking adult ESL learners and 104 native English-speaking GED students enrolled in a community college located in the South Atlantic region of the United States. The adult ESL participants were in classes at three different levels of English proficiency: advanced, intermediate, and beginning. An experimental English spelling test was administered to both the native English-speaking and ESL participants. In addition, the adult ESL participants took standardized spelling tests to rank their spelling skills in both English and Spanish. The data were analyzed using robust regression and Poisson regression procedures, the Mann-Whitney test, and descriptive statistics. The study found that both Spanish spelling skills and English proficiency are strong predictors of English spelling skills. Spanish spelling is also a strong predictor of the level of L1-influenced transfer. More proficient Spanish spellers made significantly fewer L1-influenced spelling errors than less proficient Spanish spellers. L1-influenced transfer of spelling knowledge from Spanish to English likely occurred in three vowel targets (/ɑɪ/ spelled as ae, ai, or ay, /ɑʊ/ spelled as au, and /eɪ/ spelled as e).
The ESL participants and the native English-speaking participants produced highly similar error patterns of English vowel spellings when the errors did not indicate L1-influenced transfer, which implies that the two groups might follow similar trajectories of developing English spelling skills. The findings may help guide future researchers or practitioners to modify and develop instructional spelling intervention to meet the needs of adult ESL learners and help them gain English spelling competence.

The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments, and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms “merge” similar pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors, depending on the guest system workloads and execution time.
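The merging behaviour the study evaluates can be sketched abstractly. This is a hedged toy model, not VMware's or KVM's implementation: identical pages are detected by content hash, merged into one physical frame with a reference count, and un-shared on write via copy-on-write.

```python
import hashlib

# Hedged toy model of transparent page sharing (not a real hypervisor's
# implementation): identical pages are merged by content hash; a write to a
# shared page triggers copy-on-write, allocating a private frame.

class SharedMemory:
    def __init__(self):
        self.frames = {}        # frame id -> [content, refcount]
        self.by_hash = {}       # content hash -> frame id
        self.page_table = {}    # (vm, virtual page) -> frame id
        self.next_frame = 0

    def map_page(self, vm, vpage, content: bytes):
        key = hashlib.sha256(content).hexdigest()
        frame = self.by_hash.get(key)
        if frame is None:                      # first copy: allocate a frame
            frame = self.next_frame
            self.next_frame += 1
            self.frames[frame] = [content, 0]
            self.by_hash[key] = frame
        self.frames[frame][1] += 1             # share the existing frame
        self.page_table[(vm, vpage)] = frame

    def write_page(self, vm, vpage, content: bytes):
        frame = self.page_table[(vm, vpage)]
        if self.frames[frame][1] > 1:          # shared: copy-on-write
            self.frames[frame][1] -= 1
            frame = self.next_frame
            self.next_frame += 1
            self.frames[frame] = [content, 1]
            self.page_table[(vm, vpage)] = frame
        else:                                  # private: write in place
            self.frames[frame][0] = content

mem = SharedMemory()
mem.map_page("vm1", 0, b"zeros")
mem.map_page("vm2", 0, b"zeros")   # merged: one physical frame, refcount 2
mem.write_page("vm2", 0, b"data")  # un-shared by copy-on-write
physical_frames = len(mem.frames)
```

Real hypervisors add page scanning rates, hash re-registration after writes and byte-wise verification before merging; the sketch omits all of that to show only the merge/copy-on-write cycle whose overheads the study measures.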

Report prepared for the award of a master's degree in preschool education.

The life cycle of software applications is generally very short, and their requirements are extremely volatile. Under these conditions programmers need development tools and techniques that offer a very high level of productivity. We consider code reuse the most prominent approach to solving that problem. Our proposal uses the advantages provided by Aspect-Oriented Programming to build a reusable framework capable of making both the programmer and the application oblivious to data persistence, thus avoiding the need to write any code for that concern. Besides the productivity benefits, software quality increases. This paper describes the current state of the art, identifying the main challenge in building a complete and reusable framework for Orthogonal Persistence in concurrent environments with support for transactions. The present work also includes a successfully developed prototype of that framework, capable of freeing the programmer from implementing any read or write data operations. This prototype is backed by an object-oriented database and, in the future, will also use a relational database and support transactions.
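The framework's goal of an oblivious domain class can be imitated in a few lines. The paper's prototype relies on aspect-oriented interception; the sketch below uses Python attribute hooks instead, purely to illustrate the idea of transparent write persistence (the `PersistentBase` class and its dict-backed store are assumptions of the example, not the paper's design).

```python
# Hedged imitation of orthogonal persistence (the paper's prototype uses
# aspect-oriented interception, not this mechanism): attribute writes on a
# domain object are intercepted and persisted transparently, so the domain
# class itself contains no persistence code.

class PersistentBase:
    _store = {}                            # stand-in for the database

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        PersistentBase._store[(id(self), name)] = value   # intercepted write

    def __getattr__(self, name):           # reached only for missing attrs
        try:
            return PersistentBase._store[(id(self), name)]
        except KeyError:
            raise AttributeError(name) from None

class Account(PersistentBase):             # oblivious domain class: no
    pass                                   # explicit read/write operations

acct = Account()
acct.balance = 100                         # transparently persisted
persisted = PersistentBase._store[(id(acct), "balance")]
```

The design point mirrors the paper's: the persistence concern lives entirely in the interception layer, so `Account` stays free of any storage code.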

This thesis is in two parts: a creative work of fiction and a critical reflection on writing from an identity of expatriation. The creative work, a novel entitled Running on Rooftops, revolves around a fictitious community of expatriates living and working in China. As a new college graduate, Anne Henry, the novel’s protagonist and narrator, decides to spend a year teaching English in China. Twelve years later, though still unsure of how to make sense of the chain of events and encounters that left her with an X-shaped scar on her knee, she nevertheless tells the story, revealing how “just a year” can be anything but. The critical reflection, entitled Writing on Rooftops, explores the nature of expatriation as it relates to identity and writing, specifically in how West-meets-East encounters and attitudes are depicted in literature. In it, I examine the challenges and benefits of writing from an identity and mindset of expatriation as illustrated in the works of Western writers who themselves experienced and wrote from viewpoints of expatriation, particularly those Western writers who wrote of expatriation in China and Southeast Asia. The primary question addressed is how expatriation influences perception and how those perceptions among Western foreigners in China and Southeast Asia have been and can be reflected in literature. In the end, I argue that expatriation can be a valuable viewpoint to write from, offering new ways of seeing and describing our world, ourselves and the connections between the two.

The use of blogs by Ecuadorian journalists is a relatively new phenomenon; blogs are used as an alternative channel for free communication and as an escape valve from the internal and external pressures that have existed in the country's media sector since the approval of the Communication Law. This study documents 91 blogs, divided into three categories: 64 personal blogs maintained by journalists, 24 associated with the major print media in Ecuador, and 3 group blogs dedicated to news distribution. We analyse the blogs' places of origin and their relation to levels of Internet access, the topics covered, their dissemination on Facebook and Twitter, the types of blogs (personal, attached to media outlets, and group), and, finally, their abandonment and mortality rate, using the technique of digital evidence tracing (Roger, 2009). In Ecuador blogs are not monothematic; they cover several areas, with journalism, communication and politics being the preferred topics.

A self-organising model of macadamia, expressed using L-systems, was used to explore aspects of canopy management. A small set of parameters controls the basic architecture of the model, with a high degree of self-organisation determining the fate and growth of buds. Light was sensed at the leaf level, used to represent vigour, and accumulated basipetally. Buds also sensed light so as to provide demand in the subsequent redistribution of the vigour. Empirical relationships were derived from a set of 24 completely digitised trees after conversion to multiscale tree graphs (MTG) and analysis with the OpenAlea software library. The ability to write MTG files was embedded within the model so that various tree statistics could be exported for each run. To explore the parameter space, a series of runs was completed on a high-throughput computing platform; combined with MTG generation and analysis in OpenAlea, this provided a convenient way to explore thousands of simulations. We allowed the model trees to develop through self-organisation and simulated cultural practices such as hedging, topping, removal of the leader and limb removal within a small representation of an orchard. By coupling the model with a path-tracing program to simulate the light environment, the model provides insight into the impact of these practices on the potential for growth and on the light distribution within the canopy and to the orchard floor. The lessons learnt will be applied to other evergreen, tropical fruit and nut trees.
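The L-system mechanism underlying such models can be shown in miniature. The axiom and rules below are illustrative only, not the macadamia model's actual productions: each step rewrites every module of the string in parallel, the way bud-fate rules expand a simulated tree.

```python
# Minimal L-system sketch (not the macadamia model itself): a string of
# modules is rewritten in parallel each step. "A" is an apical bud that
# produces an internode "I", a bracketed lateral bud, and grows on;
# internodes persist unchanged. These rules are invented for illustration.

rules = {
    "A": "I[A]A",   # bud -> internode, lateral bud, continuing bud
    "I": "I",       # internodes persist
}

def step(axiom: str, rules: dict) -> str:
    """One parallel rewriting pass: every module is replaced simultaneously."""
    return "".join(rules.get(module, module) for module in axiom)

state = "A"
for _ in range(3):
    state = step(state, rules)
# After three steps the bud count has doubled each step: 1 -> 2 -> 4 -> 8.
```

In the full model, productions like these are parameterised and coupled to the sensed light, so that bud fate is decided by self-organisation rather than by fixed rules.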