825 results for Computer systems organization: general-emerging technologies
Abstract:
In this work, we evaluate the benefits of using Grids with multiple batch systems to improve the performance of multi-component and parameter sweep parallel applications by reducing queue waiting times. Using job traces with different loads, job distributions, and queue waiting times corresponding to three different queuing policies (FCFS, conservative backfilling, and EASY backfilling), we conducted a large number of experiments using simulators of two important classes of applications. The first simulator models the Community Climate System Model (CCSM), a prominent multi-component application, and the second models parameter sweep applications. We compare the performance of the applications when executed on multiple batch systems and on a single batch system for different system and application configurations. We show that there are many configurations for which application execution using multiple batch systems gives improved performance over execution on a single system.
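As a rough illustration of the effect described above, the following sketch (invented jobs and a simplified FCFS queue in which each job occupies the whole machine; not the paper's simulators) shows how splitting a parameter sweep across two batch systems can reduce mean queue waiting time:

```python
# Hypothetical sketch: queue waits when a parameter sweep is submitted to
# one batch queue vs. split over two queues. FCFS, with each job assumed
# to occupy the whole machine, for simplicity.

def fcfs_waits(jobs):
    """jobs: list of (arrival, runtime); returns per-job queue waits."""
    free_at = 0.0
    waits = []
    for arrival, runtime in sorted(jobs):
        start = max(arrival, free_at)
        waits.append(start - arrival)
        free_at = start + runtime
    return waits

sweep = [(0.0, 10.0) for _ in range(8)]        # 8 identical tasks at t=0

single = fcfs_waits(sweep)                               # one batch system
split = fcfs_waits(sweep[:4]) + fcfs_waits(sweep[4:])    # two batch systems

print(sum(single) / len(single))   # mean wait on one queue
print(sum(split) / len(split))     # mean wait across two queues
```

Under these made-up numbers the mean wait drops from 35 to 15 time units; the paper's point is that whether such gains materialize depends on load, job mix, and queuing policy.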
Abstract:
Digital and interactive technologies are becoming increasingly embedded in the everyday lives of people around the world. The application of technologies such as real-time, context-aware, and interactive technologies; augmented and immersive realities; social media; and location-based services has been particularly evident in urban environments, where technological and sociocultural infrastructures enable easier deployment and adoption than in non-urban areas. There has been growing consumer demand for new forms of experiences and services enabled by these emerging technologies. We call this ambient media, as the media is embedded in the natural human living environment. This workshop focuses on ambient media services, applications, and technologies that promote people's engagement in creating and recreating liveliness in urban environments, particularly through arts, culture, and gastronomic experiences. The RelCi workshop series is organized in cooperation with the Queensland University of Technology (QUT), in particular the Urban Informatics Lab, and the Tampere University of Technology (TUT), in particular the Entertainment and Media Management (EMMi) Lab. The workshop runs under the umbrella of the International Ambient Media Association (AMEA) (http://www.ambientmediaassociation.org), which hosts the international open access journal "International Journal on Information Systems and Management in Creative eMedia" and the international open access series "International Series on Information Systems and Management in Creative eMedia" (see http://www.tut.fi/emmi/Journal). The RelCi workshop took place for the first time in 2012 in conjunction with ICME 2012 in Melbourne, Australia; this year's edition took place in conjunction with INTERACT 2013 in Cape Town, South Africa.
In addition, the International Ambient Media Association (AMEA) organizes the Semantic Ambient Media (SAME) workshop series, which took place in 2008 in conjunction with ACM Multimedia 2008 in Vancouver, Canada; in 2009 in conjunction with AmI 2009 in Salzburg, Austria; in 2010 in conjunction with AmI 2010 in Malaga, Spain; in 2011 in conjunction with Communities and Technologies 2011 in Brisbane, Australia; in 2012 in conjunction with Pervasive 2012 in Newcastle, UK; and in 2013 in conjunction with C&T 2013 in Munich, Germany.
Abstract:
The possibility of applying two approximate methods for determining the salient features of the response of undamped non-linear spring-mass systems subjected to a step input is examined. The results obtained on the basis of these approximate methods are compared with the exact results that are available for some particular types of spring characteristics. The extension of the approximate methods to non-linear systems with general polynomial restoring-force characteristics is indicated.
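For concreteness, a minimal numerical sketch (an assumed cubic hardening spring, not the paper's approximate methods) of the step response of an undamped non-linear spring-mass system m*x'' + f(x) = F:

```python
# Illustrative sketch with assumed parameters: direct numerical integration
# of an undamped spring-mass system m*x'' + f(x) = F under a step input,
# for a hardening cubic spring f(x) = k*x + e*x**3.

def peak_response(f, F, m=1.0, dt=1e-4, t_end=20.0):
    """Integrate x'' = (F - f(x))/m from rest; return peak displacement."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        a = (F - f(x)) / m
        v += a * dt          # semi-implicit Euler keeps the oscillation stable
        x += v * dt
        peak = max(peak, x)
        t += dt
    return peak

k, e, F = 1.0, 0.5, 1.0
linear_peak = peak_response(lambda x: k * x, F)          # exact value is 2F/k
cubic_peak = peak_response(lambda x: k * x + e * x**3, F)
print(linear_peak, cubic_peak)   # the hardening spring lowers the peak
```

This kind of direct integration provides the "exact" baseline against which approximate methods for the peak (salient-feature) response can be checked.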
Abstract:
The growth of the information economy has been stellar in the last decade. General-purpose technologies such as the computer and the Internet have promoted productivity growth in a large number of industries. The effect on the telecommunications, media and technology industries has been particularly strong. These industries include mobile telecommunications, printing and publishing, broadcasting, software, hardware and Internet services. There have been large structural changes, which have led to new questions on business strategies, regulation and policy. This thesis focuses on four such questions and answers them by extending the theoretical literature on platforms. The questions (with short answers) are: (i) Do we need to regulate how Internet service providers discriminate between content providers? (Yes.) (ii) What are the welfare effects of allowing consumers to pay to remove advertisements from advertisement-supported products? (Ambiguous, but those watching ads are worse off.) (iii) Why are some markets characterized by open platforms, extendable by third parties, and some by closed platforms, which are not extendable? (It is a trade-off between intensified competition for consumers and benefits from third parties.) (iv) Do private platform providers allow third parties to access their platform when it is socially desirable? (No.)
Abstract:
A parallel matrix multiplication algorithm is presented, and studies of its performance and performance estimation are discussed. The algorithm is implemented on a network of transputers connected in a ring topology. An efficient scheme for partitioning the input matrices is introduced, which enables overlapping computation with communication. This makes the algorithm achieve near-ideal speed-up for reasonably large matrices. Analytical expressions for the execution time of the algorithm have been derived by analysing its computation and communication characteristics. These expressions are validated by comparing the theoretical performance results with the experimental values obtained on a four-transputer network for both square and irregular matrices. The analytical model is also used to estimate the performance of the algorithm for a varying number of transputers and varying problem sizes. Although the algorithm is implemented on transputers, the methodology and the partitioning scheme presented in this paper are quite general and can be implemented on other processors that have the capability of overlapping computation with communication. The equations for performance prediction can also be extended to other multiprocessor systems.
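The ring scheme can be sketched in plain Python (a schematic, sequential simulation of the block rotation, with assumed blocking; real transputer code would overlap each rotation with the local block product):

```python
# Schematic sketch: processor p holds row-block p of A and row-block p of B;
# the B blocks rotate one hop around the ring each step, so after P steps
# every processor has multiplied its A rows against all of B. On real
# hardware the rotation is overlapped with the local block product.

def ring_matmul(A, B, P):
    n = len(A)
    rows = n // P                      # assume P divides n, for simplicity
    C = [[0.0] * len(B[0]) for _ in range(n)]
    held = list(range(P))              # held[p] = index of B block resident at p
    for _ in range(P):
        for p in range(P):
            rb = held[p]               # local compute with the resident block
            for i in range(p * rows, (p + 1) * rows):
                for kk in range(rb * rows, (rb + 1) * rows):
                    aik = A[i][kk]
                    for j in range(len(B[0])):
                        C[i][j] += aik * B[kk][j]
        held = held[1:] + held[:1]     # rotate blocks one hop around the ring
    return C

A = [[float(4 * i + j + 1) for j in range(4)] for i in range(4)]
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
C = ring_matmul(A, identity, P=2)
print(C == A)   # multiplying by the identity returns A
```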
Abstract:
Management of large projects, especially ones in which a major component of R&D is involved and those requiring knowledge from diverse specialised and sophisticated fields, may be classified as a semi-structured problem. In such problems, there is some knowledge about the nature of the work involved, but there are also uncertainties associated with emerging technologies. In order to draw up a plan and schedule of activities for such a large and complex project, the project manager faces a host of complex decisions, such as when to start an activity and how long the activity is likely to continue. An Intelligent Decision Support System (IDSS) that aids the manager in decision making and in drawing up a feasible schedule of activities, while taking into consideration the constraints of resources and time, will have a considerable impact on the efficient management of the project. This report discusses the design of an IDSS that supports the project from the planning phase through the scheduling phase. The IDSS uses a new project scheduling tool, the Project Influence Graph (PIG).
Abstract:
Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. A sensor network operating system is a kind of embedded operating system, but unlike a typical embedded operating system, a sensor network operating system may not be real-time, and is constrained by limited memory and energy. Most sensor network operating systems are based on an event-driven approach, which is efficient in terms of time and space. This approach also does not require a separate stack for each execution context. Using this model, however, it is difficult to implement long-running tasks, such as cryptographic operations. Thread-based computation requires a separate stack for each execution context, and is less efficient in terms of time and space. In this paper, we propose a thread-based execution model that uses only a fixed number of stacks: the number of stacks at each priority level is fixed. This minimizes the stack requirement for a multi-threading environment while at the same time providing ease of programming. We give an implementation of this model in Contiki OS by completely separating the thread implementation from the protothread implementation. We have tested our OS by implementing a clock synchronization protocol using it.
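A toy model of the fixed-stack idea (hypothetical Python generators standing in for threads at a single priority level; not the Contiki implementation): a thread occupies one of a fixed pool of stacks from its first run until completion, and further threads must wait until a stack frees up.

```python
from collections import deque

def run_fixed_stacks(threads, stacks):
    """threads: generators at one priority level; stacks: pool size.
    A thread binds a stack at first run and holds it until it finishes,
    so at most `stacks` execution contexts exist at once."""
    waiting = deque(threads)   # not yet started (need a stack)
    running = deque()          # started; each holds one stack
    trace = []
    while waiting or running:
        while waiting and len(running) < stacks:
            running.append(waiting.popleft())   # bind a free stack
        t = running.popleft()                   # round-robin the runnable set
        try:
            trace.append(next(t))
            running.append(t)                   # yielded, keeps its stack
        except StopIteration:
            pass                                # finished; stack is freed
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}{i}"

trace = run_fixed_stacks([worker("a", 2), worker("b", 2), worker("c", 1)],
                         stacks=2)
print(trace)   # "c" only starts after "a" finishes and frees a stack
```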
Abstract:
The use of some new planes, such as the R-x and R2-x planes (where R represents, in the n-dimensional phase space, the radius vector from the origin to any point on the trajectory described by the system), is suggested for the analysis of nonlinear systems of any kind. The stability conditions in these planes are given. For easy understanding of the method, the transformation from the phase plane to the R-x and R2-x planes is brought out for second-order systems. In general, while these planes are as useful as the phase plane, they have proved to be simpler for quickly determining the general behavior of certain classes of second-order nonlinear systems. A chart and a simple formula are suggested to evaluate time easily from the R-x and R2-x trajectories, respectively. A means of solving higher-order nonlinear systems is also illustrated. Finally, a comparative study of the trajectories near singular points on the phase plane and on the new planes is made.
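A small numerical sketch of the R-x idea for a second-order system (an assumed example system x'' + c*x' + x = 0 with y = x', not taken from the paper): a phase-space radius R that shrinks along the trajectory signals an asymptotically stable origin.

```python
# Sketch with an assumed example system: track R = sqrt(x^2 + y^2), the
# phase-space radius vector, along a trajectory of x'' + c*x' + x = 0.

import math

def radius_trajectory(c, x0=1.0, y0=0.0, dt=1e-3, t_end=30.0):
    x, y = x0, y0
    rs = [math.hypot(x, y)]
    for _ in range(int(t_end / dt)):
        x, y = x + y * dt, y + (-c * y - x) * dt   # explicit Euler step
        rs.append(math.hypot(x, y))
    return rs

undamped = radius_trajectory(c=0.0)   # R stays (nearly) constant: a centre
damped = radius_trajectory(c=0.5)     # R decays toward 0: a stable focus
print(undamped[-1], damped[-1])
```

Plotting R against x (rather than y against x) gives the R-x plane the abstract refers to.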
Abstract:
The presence of software bloat in large flexible software systems can hurt energy efficiency. However, identifying and mitigating bloat is fairly effort intensive. To enable such efforts to be directed where there is a substantial potential for energy savings, we investigate the impact of bloat on power consumption under different situations. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECPower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and non-bloated resources have low energy-proportionality. Equi-performance power savings are highest when bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between resource pressure caused by bloat and its energy efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact and instead provides a full systems approach for reasoning about its implications.
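The equi-performance comparison can be illustrated with made-up numbers (hypothetical power curves, not the paper's measurements): interpolate the de-bloated build's power at the bloated build's throughput, and take the difference.

```python
# Illustrative arithmetic with invented data: "equi-performance power
# reduction" compares the power draw of the bloated and de-bloated builds
# at the SAME throughput, rather than at their respective peaks.

def interp_power(curve, throughput):
    """curve: sorted list of (throughput, watts); linear interpolation."""
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if t0 <= throughput <= t1:
            frac = (throughput - t0) / (t1 - t0)
            return p0 + frac * (p1 - p0)
    raise ValueError("throughput outside measured range")

bloated = [(1000, 120.0), (2000, 160.0), (3000, 200.0)]
debloated = [(1000, 110.0), (2000, 140.0), (4000, 180.0)]

target = 3000                                  # bloated build's peak load
saving = interp_power(bloated, target) - interp_power(debloated, target)
print(saving)   # watts saved at equal performance
```

With these numbers the de-bloated build saves 40 W at the bloated peak, even though its own peak power might be similar: exactly the distinction between peak and equi-performance comparisons the abstract draws.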
Abstract:
Coarse Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for Exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block Matrix Multiplication and block LU Decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on communication requirements and memory sizes for a CE. Support for high-performance custom computations common to NLA kernels is provided through custom function units (CFUs) in the CEs. We present results to justify the merits of such CFUs.
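A pedagogical sketch of the blocked computation being analyzed (plain Python tiles, not the CE specification): with b x b tiles, a compute element keeps roughly three tiles resident at a time (one each of A, B, and C), i.e. about 3*b*b words of local memory, which is the kind of bound such an analysis derives.

```python
# Sketch of block matrix multiplication with b x b tiles. The inner three
# loops touch exactly one A-tile, one B-tile, and one C-tile, so the
# working set at any moment is about 3*b*b words.

def blocked_matmul(A, B, b):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for bi in range(0, n, b):
        for bj in range(0, n, b):
            for bk in range(0, n, b):
                # resident tiles: C(bi,bj), A(bi,bk), B(bk,bj)
                for i in range(bi, min(bi + b, n)):
                    for k in range(bk, min(bk + b, n)):
                        aik = A[i][k]
                        for j in range(bj, min(bj + b, n)):
                            C[i][j] += aik * B[k][j]
    return C

A = [[float(3 * i + j) for j in range(4)] for i in range(4)]
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(blocked_matmul(A, identity, b=2) == A)   # True
```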
Abstract:
Coral reefs exist in warm, clear, and relatively shallow marine waters worldwide. These complex assemblages of marine organisms are unique in that they support highly diverse, luxuriant, and essentially self-sustaining ecosystems in otherwise nutrient-poor and unproductive waters. Coral reefs are highly valued for their great beauty and for their contribution to marine productivity. Coral reefs are favorite destinations for recreational diving and snorkeling, as well as commercial and recreational fishing activities. The Florida Keys reef tract draws an estimated 2 million tourists each year, contributing nearly $800 million to the economy. However, these reef systems represent a very delicate ecological balance, and can be easily damaged and degraded by direct or indirect human contact. Indirect impacts from human activity occur in a number of different forms, including runoff of sediments, nutrients, and other pollutants associated with forest harvesting, agricultural practices, urbanization, coastal construction, and industrial activities. Direct impacts occur through overfishing and other destructive fishing practices, mining of corals, and overuse of many reef areas, including damage from souvenir collection, boat anchoring, and diver contact. In order to protect and manage coral reefs within U.S. territorial waters, the National Oceanic and Atmospheric Administration (NOAA) of the U.S. Department of Commerce has been directed to establish and maintain a system of national marine sanctuaries and reserves, and to monitor the condition of corals and other marine organisms within these areas. To help carry out this mandate, the NOAA Coastal Services Center convened a workshop in September 1996 to identify current and emerging sensor technologies, including satellite, airborne, and underwater systems with potential application for detecting and monitoring corals.
For reef systems occurring within depths of 10 meters or less (Figure 1), mapping location and monitoring the condition of corals can be accomplished through use of aerial photography combined with diver surveys. However, corals can exist in depths greater than 90 meters (Figure 2), well below the limits of traditional optical imaging systems such as aerial or surface photography or videography. Although specialized scuba systems can allow diving to these depths, the thousands of square kilometers included within these management areas make diver surveys for deeper coral monitoring impractical. For these reasons, NOAA is investigating satellite and airborne sensor systems, as well as technologies which can facilitate the location, mapping, and monitoring of corals in deeper waters. The following systems were discussed as having potential application for detecting, mapping, and assessing the condition of corals. However, no single system is capable of accomplishing all three of these objectives under all depths and conditions within which corals exist. Systems were evaluated for their capabilities, including advantages and disadvantages, relative to their ability to detect and discriminate corals under a variety of conditions. (PDF contains 55 pages)
Abstract:
Red hind (Epinephelus guttatus) have been overfished in the Caribbean and were included with seven other regional grouper species deemed vulnerable to risk of extinction. The Puerto Rico Department of Natural and Environmental Resources desired to map spawning red hind aggregations within commonwealth waters as part of their resource management program for the species. Mobile hydroacoustic surveys were conducted over 3-day periods in 2002 and 2003, indexed to the full moon phase in February or March when red hind were known to aggregate. Four vessels concurrently sampled the southwest, south, and southeast coasts of Puerto Rico in 2002. In 2003, three vessels conducted complementary surveys of the northwest, north, and northeast coasts of the island, completing a circuit of the coastal shelf-spawning habitat. These surveys indicated that red hind spawning aggregations were prevalent along the south and west coasts, and sparse along the north coast during the survey periods. The highest spawning red hind concentrations were observed in three areas offshore of the west coast of Puerto Rico, around Mona and Desecheo islands (20,443 and 10,559 fish/km2, respectively) and in the Bajo de Cico seasonal closed area (4,544 fish/km2). Following both the 2002 and 2003 surveys, a series of controlled acoustic measurements of known local fish species in net pens were conducted to assess the mean target strength (acoustic backscatter) of each group. Ten species of fish were measured, including red hind (E. guttatus), coney (E. fulvus), white grunt (Haemulon plumieri), pluma (Calamus pennatula), blue tang (Acanthurus coeruleus), squirrel fish (Holocentrus spp.), black durgeon (Melichthys niger), ocean file fish (Canthidermis sufflamen), ocean surgeon fish (Acanthurus bahianus), and butter grouper (Mycteroperca spp.).
In general, the mean target strength results from the caged-fish experiments were in agreement with published target strength-to-length relationships, with the exception of white grunt and pluma.
Abstract:
The Alliance for Coastal Technologies (ACT) Workshop on Trace Metal Sensors for Coastal Monitoring was convened April 11-13, 2005 at the Embassy Suites in Seaside, California with partnership from Moss Landing Marine Laboratories (MLML) and the Monterey Bay Aquarium Research Institute (MBARI). Trace metals play many important roles in marine ecosystems. Due to their extreme toxicity, the effects of copper, cadmium and certain organometallic compounds (such as tributyltin and methylmercury) have received much attention. Lately, the sublethal effects of metals on phytoplankton biochemistry, and in some cases the expression of neurotoxins (domoic acid), have been shown to be important environmental forcing functions determining the composition and gene expression of some groups. More recently, the role of iron in controlling phytoplankton growth has led to an understanding of trace metal limitation in coastal systems. Although metals play an important role at many different levels, few technologies exist to provide rapid assessment of metal concentrations or metal speciation in the coastal zone, where metal-induced toxicity or potential stimulation of harmful algal blooms can have major economic impacts. This workshop focused on the state of on-site and in situ trace element detection technologies, in terms of what is currently working well and what is needed to effectively inform coastal zone managers, as well as guide adaptive scientific sampling of the coastal zone. Specifically, the goals of this workshop were to: 1) summarize current regional requirements and future targets for metal monitoring in freshwater, estuarine and coastal environments; 2) evaluate the current status of metal sensors and possibilities for leveraging emerging technologies for expanding detection limits and target elements; and 3) help identify critical steps needed for, and limits to, operational deployment of metal sensors as part of routine water quality monitoring efforts.
Following a series of breakout group discussions and overview talks on metal monitoring regulatory issues, analytical techniques and market requirements, workshop participants made several recommendations for steps needed to foster development of in situ metal monitoring capacities: 1. Increase scientific and public awareness of metals of environmental and biological concern and their impacts in aquatic environments. Inform scientific and public communities regarding actual levels of trace metals in natural and perturbed systems. 2. Identify multiple-use applications (e.g., industrial waste stream and drinking water quality monitoring) to support investments in metal sensor development. (pdf contains 27 pages)
Abstract:
ICECCS 2010